| problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens | num_tokens_diff |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| stringlengths 18-22 | stringclasses 1 value | stringclasses 1 value | stringlengths 13-58 | stringlengths 1.1k-25.4k | stringlengths 145-5.13k | stringlengths 582-39.1k | int64 271-4.1k | int64 47-1.02k |
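The rows below are long because the `prompt` and `verification_info` cells each contain an entire GitHub issue plus repository files, so they are easier to work with programmatically than to read inline. A minimal sketch using the Hugging Face `datasets` library is shown here; the dataset path (taken from the `source` column) and the `train` split name are assumptions, not something stated on this page.

```python
# Sketch: load the dataset whose rows are rendered below and inspect one record.
# "rasdani/github-patches" and split="train" are assumed from the source column.
from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")
print(ds.column_names)            # problem_id, source, task_type, in_source_id, prompt, ...

row = ds[0]
print(row["problem_id"], row["in_source_id"])
print(row["prompt"][:300])        # issue text plus repository context
print(row["golden_diff"][:300])   # reference patch in git diff format
```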
problem_id: gh_patches_debug_10128 | source: rasdani/github-patches | task_type: git_diff | in_source_id: MycroftAI__mycroft-core-2338

prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Skill Shutdown sometimes fails causing duplicate instances of Skill
## Devices:
Issue occurred on two separate Debian Linux installations
- one running Mycroft-core 19.02.14,
- the other running dev branch the date of this issue.
## Steps to reproduce:
1. Load Mycroft and ensure working as expected
2. Make intentional code error in Alarm Skill to raise exception eg `self.log.info(not_a_real_var)`
3. Skill fails on reload with the following error:
```
12:04:25.758 | INFO | 22386 | mycroft.skills.skill_loader:reload:109 | ATTEMPTING TO RELOAD SKILL: mycroft-alarm.mycroftai
12:04:25.760 | ERROR | 22386 | mycroft.skills.skill_loader:_execute_instance_shutdown:145 | An error occurred while shutting down AlarmSkill
Traceback (most recent call last):
File "/home/fs-neriahbjato/Documents/Full_Scale/Mycroft/mycroft-core/mycroft/skills/skill_loader.py", line 142, in _execute_instance_shutdown
self.instance.default_shutdown()
File "/home/fs-neriahbjato/Documents/Full_Scale/Mycroft/mycroft-core/mycroft/skills/mycroft_skill/mycroft_skill.py", line 1162, in default_shutdown
self.event_scheduler.shutdown()
File "/home/fs-neriahbjato/Documents/Full_Scale/Mycroft/mycroft-core/mycroft/skills/event_scheduler.py", line 433, in shutdown
self.events.clear()
File "/home/fs-neriahbjato/Documents/Full_Scale/Mycroft/mycroft-core/mycroft/skills/mycroft_skill/event_container.py", line 183, in clear
self.bus.remove(e, f)
File "/home/fs-neriahbjato/Documents/Full_Scale/Mycroft/mycroft-core/mycroft/messagebus/client/client.py", line 172, in remove
self.emitter.remove_listener(event_name, func)
File "/home/fs-neriahbjato/Documents/Full_Scale/Mycroft/mycroft-core/mycroft/messagebus/client/threaded_event_emitter.py", line 57, in remove_listener
return super().remove_listener(event_name, func)
File "/home/fs-neriahbjato/Documents/Full_Scale/Mycroft/mycroft-core/.venv/lib/python3.6/site-packages/pyee/__init__.py", line 205, in remove_listener
self._events[event].pop(f)
KeyError: <function create_basic_wrapper.<locals>.wrapper at 0x7f2a504781e0>
```
4. Remove bad code and save to reload Skill
```
12:04:25.803 | INFO | 22386 | mycroft.skills.skill_loader:_communicate_load_status:273 | Skill mycroft-alarm.mycroftai loaded successfully
```
5. Trigger the Skill with an utterance eg "what alarms are set"
6. Mycroft responds twice
- once from the failed instance of the Skill: "An error occurred while processing a request in Alarm Skill"
- once from the new working instance of the Skill: "No alarms have been set"
The failed instance of the Skill will remain until Mycroft services are restarted.
After restarting, the same steps cause the same error.
## Investigation:
This was only occurring for the Alarm Skill; I did not get the same behavior in the Date Time Skill.
To see what might be causing it, I added some logging to the offending function in [`mycroft/messagebus/client/threaded_event_emitter.py`](https://github.com/MycroftAI/mycroft-core/blob/dev/mycroft/messagebus/client/threaded_event_emitter.py):
```
def remove_listener(self, event_name, func):
""" Wrap the remove to translate from function to wrapped
function.
"""
for w in self.wrappers[event_name]:
if w[0] == func:
self.wrappers[event_name].remove(w)
return super().remove_listener(event_name, w[1])
# if no wrapper exists try removing the function
LOG.info("REMOVING LISTENER")
LOG.info("event_name: " + str(event_name))
LOG.info(event_name)
LOG.info("func: " + str(func))
LOG.info(func)
return super().remove_listener(event_name, func)
```
Then I reloaded Mycroft and could no longer trigger the issue.
## Hypothesis
- Presumably the `func` was not being found in `self.wrappers`, allowing it to proceed to try removing the function itself on [line 57](https://github.com/MycroftAI/mycroft-core/blob/dev/mycroft/messagebus/client/threaded_event_emitter.py#L57), which then failed.
- Possibly the function was modified or removed from `self.wrappers` causing it to not match?
- Is `self.wrappers` cached between service restarts, and would editing this file have recreated it?
As I'm no longer able to reproduce the error, it's difficult to continue debugging, so I'm documenting the situation here in case anyone else runs into it and we can pick it back up.
Unless anyone else has an idea of what might be happening?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mycroft/skills/mycroft_skill/event_container.py`
Content:
```
1 from inspect import signature
2
3 from mycroft.messagebus.message import Message
4 from mycroft.metrics import Stopwatch, report_timing
5 from mycroft.util.log import LOG
6
7 from ..skill_data import to_alnum
8
9
10 def unmunge_message(message, skill_id):
11 """Restore message keywords by removing the Letterified skill ID.
12 Arguments:
13 message (Message): Intent result message
14 skill_id (str): skill identifier
15 Returns:
16 Message without clear keywords
17 """
18 if isinstance(message, Message) and isinstance(message.data, dict):
19 skill_id = to_alnum(skill_id)
20 for key in list(message.data.keys()):
21 if key.startswith(skill_id):
22 # replace the munged key with the real one
23 new_key = key[len(skill_id):]
24 message.data[new_key] = message.data.pop(key)
25
26 return message
27
28
29 def get_handler_name(handler):
30 """Name (including class if available) of handler function.
31
32 Arguments:
33 handler (function): Function to be named
34
35 Returns:
36 string: handler name as string
37 """
38 if '__self__' in dir(handler) and 'name' in dir(handler.__self__):
39 return handler.__self__.name + '.' + handler.__name__
40 else:
41 return handler.__name__
42
43
44 def create_wrapper(handler, skill_id, on_start, on_end, on_error):
45 """Create the default skill handler wrapper.
46
47 This wrapper handles things like metrics, reporting handler start/stop
48 and errors.
49 handler (callable): method/function to call
50 skill_id: skill_id for associated skill
51 on_start (function): function to call before executing the handler
52 on_end (function): function to call after executing the handler
53 on_error (function): function to call for error reporting
54 """
55 def wrapper(message):
56 stopwatch = Stopwatch()
57 try:
58 message = unmunge_message(message, skill_id)
59 if on_start:
60 on_start(message)
61
62 with stopwatch:
63 if len(signature(handler).parameters) == 0:
64 handler()
65 else:
66 handler(message)
67
68 except Exception as e:
69 if on_error:
70 on_error(e)
71 finally:
72 if on_end:
73 on_end(message)
74
75 # Send timing metrics
76 context = message.context
77 if context and 'ident' in context:
78 report_timing(context['ident'], 'skill_handler', stopwatch,
79 {'handler': handler.__name__,
80 'skill_id': skill_id})
81 return wrapper
82
83
84 def create_basic_wrapper(handler, on_error=None):
85 """Create the default skill handler wrapper.
86
87 This wrapper handles things like metrics, reporting handler start/stop
88 and errors.
89
90 Arguments:
91 handler (callable): method/function to call
92 on_error (function): function to call to report error.
93
94 Returns:
95 Wrapped callable
96 """
97 def wrapper(message):
98 try:
99 if len(signature(handler).parameters) == 0:
100 handler()
101 else:
102 handler(message)
103 except Exception as e:
104 if on_error:
105 on_error(e)
106
107 return wrapper
108
109
110 class EventContainer:
111 """Container tracking messagbus handlers.
112
113 This container tracks events added by a skill, allowing unregistering
114 all events on shutdown.
115 """
116 def __init__(self, bus=None):
117 self.bus = bus
118 self.events = []
119
120 def set_bus(self, bus):
121 self.bus = bus
122
123 def add(self, name, handler, once=False):
124 """Create event handler for executing intent or other event.
125
126 Arguments:
127 name (string): IntentParser name
128 handler (func): Method to call
129 once (bool, optional): Event handler will be removed after it has
130 been run once.
131 """
132 def once_wrapper(message):
133 # Remove registered one-time handler before invoking,
134 # allowing them to re-schedule themselves.
135 handler(message)
136 self.remove(name)
137
138 if handler:
139 if once:
140 self.bus.once(name, once_wrapper)
141 else:
142 self.bus.on(name, handler)
143 self.events.append((name, handler))
144
145 def remove(self, name):
146 """Removes an event from bus emitter and events list.
147
148 Args:
149 name (string): Name of Intent or Scheduler Event
150 Returns:
151 bool: True if found and removed, False if not found
152 """
153 print("Removing event {}".format(name))
154 removed = False
155 for _name, _handler in list(self.events):
156 if name == _name:
157 try:
158 self.events.remove((_name, _handler))
159 except ValueError:
160 LOG.error('Failed to remove event {}'.format(name))
161 pass
162 removed = True
163
164 # Because of function wrappers, the emitter doesn't always directly
165 # hold the _handler function, it sometimes holds something like
166 # 'wrapper(_handler)'. So a call like:
167 # self.bus.remove(_name, _handler)
168 # will not find it, leaving an event handler with that name left behind
169 # waiting to fire if it is ever re-installed and triggered.
170 # Remove all handlers with the given name, regardless of handler.
171 if removed:
172 self.bus.remove_all_listeners(name)
173 return removed
174
175 def __iter__(self):
176 return iter(self.events)
177
178 def clear(self):
179 """Unregister all registered handlers and clear the list of registered
180 events.
181 """
182 for e, f in self.events:
183 self.bus.remove(e, f)
184 self.events = [] # Remove reference to wrappers
185
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:
diff --git a/mycroft/skills/mycroft_skill/event_container.py b/mycroft/skills/mycroft_skill/event_container.py
--- a/mycroft/skills/mycroft_skill/event_container.py
+++ b/mycroft/skills/mycroft_skill/event_container.py
@@ -138,9 +138,12 @@
if handler:
if once:
self.bus.once(name, once_wrapper)
+ self.events.append((name, once_wrapper))
else:
self.bus.on(name, handler)
- self.events.append((name, handler))
+ self.events.append((name, handler))
+
+ LOG.debug('Added event: {}'.format(name))
def remove(self, name):
"""Removes an event from bus emitter and events list.
verification_info:
{"golden_diff": "diff --git a/mycroft/skills/mycroft_skill/event_container.py b/mycroft/skills/mycroft_skill/event_container.py\n--- a/mycroft/skills/mycroft_skill/event_container.py\n+++ b/mycroft/skills/mycroft_skill/event_container.py\n@@ -138,9 +138,12 @@\n if handler:\n if once:\n self.bus.once(name, once_wrapper)\n+ self.events.append((name, once_wrapper))\n else:\n self.bus.on(name, handler)\n- self.events.append((name, handler))\n+ self.events.append((name, handler))\n+\n+ LOG.debug('Added event: {}'.format(name))\n \n def remove(self, name):\n \"\"\"Removes an event from bus emitter and events list.\n", "issue": "Skill Shutdown sometimes fails causing duplicate instances of Skill\n## Devices:\r\nIssue occurred on two separate Debian Linux installations\r\n- one running Mycroft-core 19.02.14, \r\n- the other running dev branch the date of this issue.\r\n\r\n## Steps to reproduce:\r\n1. Load Mycroft and ensure working as expected\r\n2. Make intentional code error in Alarm Skill to raise exception eg `self.log.info(not_a_real_var)`\r\n3. Skill fails on reload with the following error:\r\n```\r\n 12:04:25.758 | INFO | 22386 | mycroft.skills.skill_loader:reload:109 | ATTEMPTING TO RELOAD SKILL: mycroft-alarm.mycroftai\r\n 12:04:25.760 | ERROR | 22386 | mycroft.skills.skill_loader:_execute_instance_shutdown:145 | An error occurred while shutting down AlarmSkill\r\nTraceback (most recent call last):\r\n File \"/home/fs-neriahbjato/Documents/Full_Scale/Mycroft/mycroft-core/mycroft/skills/skill_loader.py\", line 142, in _execute_instance_shutdown\r\n self.instance.default_shutdown()\r\n File \"/home/fs-neriahbjato/Documents/Full_Scale/Mycroft/mycroft-core/mycroft/skills/mycroft_skill/mycroft_skill.py\", line 1162, in default_shutdown\r\n self.event_scheduler.shutdown()\r\n File \"/home/fs-neriahbjato/Documents/Full_Scale/Mycroft/mycroft-core/mycroft/skills/event_scheduler.py\", line 433, in shutdown\r\n self.events.clear()\r\n File \"/home/fs-neriahbjato/Documents/Full_Scale/Mycroft/mycroft-core/mycroft/skills/mycroft_skill/event_container.py\", line 183, in clear\r\n self.bus.remove(e, f)\r\n File \"/home/fs-neriahbjato/Documents/Full_Scale/Mycroft/mycroft-core/mycroft/messagebus/client/client.py\", line 172, in remove\r\n self.emitter.remove_listener(event_name, func)\r\n File \"/home/fs-neriahbjato/Documents/Full_Scale/Mycroft/mycroft-core/mycroft/messagebus/client/threaded_event_emitter.py\", line 57, in remove_listener\r\n return super().remove_listener(event_name, func)\r\n File \"/home/fs-neriahbjato/Documents/Full_Scale/Mycroft/mycroft-core/.venv/lib/python3.6/site-packages/pyee/__init__.py\", line 205, in remove_listener\r\n self._events[event].pop(f)\r\nKeyError: <function create_basic_wrapper.<locals>.wrapper at 0x7f2a504781e0>\r\n```\r\n4. Remove bad code and save to reload Skill\r\n```\r\n 12:04:25.803 | INFO | 22386 | mycroft.skills.skill_loader:_communicate_load_status:273 | Skill mycroft-alarm.mycroftai loaded successfully\r\n```\r\n5. Trigger the Skill with an utterance eg \"what alarms are set\"\r\n6. 
Mycroft responds twice\r\n - once from the failed instance of the Skill: \"An error occurred while processing a request in Alarm Skill\"\r\n - once from the new working instance of the Skill: \"No alarms have been set\"\r\n\r\nThe failed instance of the Skill will remain until Mycroft services are restarted.\r\nAfter restarting, the same steps cause the same error.\r\n\r\n## Investigation:\r\nThis was only occurring for the Alarm Skill, did not get the same behavior in the Date Time Skill.\r\nTo see what might be causing it, I added some logging to the offending function in [`mycroft/messagebus/client/threaded_event_emitter.py`](https://github.com/MycroftAI/mycroft-core/blob/dev/mycroft/messagebus/client/threaded_event_emitter.py):\r\n```\r\ndef remove_listener(self, event_name, func):\r\n \"\"\" Wrap the remove to translate from function to wrapped\r\n function.\r\n \"\"\"\r\n for w in self.wrappers[event_name]:\r\n if w[0] == func:\r\n self.wrappers[event_name].remove(w)\r\n return super().remove_listener(event_name, w[1])\r\n # if no wrapper exists try removing the function\r\n LOG.info(\"REMOVING LISTENER\")\r\n LOG.info(\"event_name: \" + str(event_name))\r\n LOG.info(event_name)\r\n LOG.info(\"func: \" + str(func))\r\n LOG.info(func)\r\n return super().remove_listener(event_name, func)\r\n```\r\nThen reloaded Mycroft and can no longer trigger the issue.\r\n\r\n## Hypothesis\r\n- Presumably the `func` was not being found in `self.wrappers` allowing it to proceed to try removing the function itself on [line 57](https://github.com/MycroftAI/mycroft-core/blob/dev/mycroft/messagebus/client/threaded_event_emitter.py#L57) which then failed.\r\n- Possibly the function was modified or removed from `self.wrappers` causing it to not match?\r\n- Is `self.wrappers` cached between service restarts, and would editing this file have recreated it?\r\n\r\n\r\nAs I'm no longer able to reproduce the error it's difficult to continue debugging. So documenting the situation here in case anyone else runs into it and we can pick it back up.\r\n\r\nUnless anyone else has an idea of what might be happening?\n", "before_files": [{"content": "from inspect import signature\n\nfrom mycroft.messagebus.message import Message\nfrom mycroft.metrics import Stopwatch, report_timing\nfrom mycroft.util.log import LOG\n\nfrom ..skill_data import to_alnum\n\n\ndef unmunge_message(message, skill_id):\n \"\"\"Restore message keywords by removing the Letterified skill ID.\n Arguments:\n message (Message): Intent result message\n skill_id (str): skill identifier\n Returns:\n Message without clear keywords\n \"\"\"\n if isinstance(message, Message) and isinstance(message.data, dict):\n skill_id = to_alnum(skill_id)\n for key in list(message.data.keys()):\n if key.startswith(skill_id):\n # replace the munged key with the real one\n new_key = key[len(skill_id):]\n message.data[new_key] = message.data.pop(key)\n\n return message\n\n\ndef get_handler_name(handler):\n \"\"\"Name (including class if available) of handler function.\n\n Arguments:\n handler (function): Function to be named\n\n Returns:\n string: handler name as string\n \"\"\"\n if '__self__' in dir(handler) and 'name' in dir(handler.__self__):\n return handler.__self__.name + '.' 
+ handler.__name__\n else:\n return handler.__name__\n\n\ndef create_wrapper(handler, skill_id, on_start, on_end, on_error):\n \"\"\"Create the default skill handler wrapper.\n\n This wrapper handles things like metrics, reporting handler start/stop\n and errors.\n handler (callable): method/function to call\n skill_id: skill_id for associated skill\n on_start (function): function to call before executing the handler\n on_end (function): function to call after executing the handler\n on_error (function): function to call for error reporting\n \"\"\"\n def wrapper(message):\n stopwatch = Stopwatch()\n try:\n message = unmunge_message(message, skill_id)\n if on_start:\n on_start(message)\n\n with stopwatch:\n if len(signature(handler).parameters) == 0:\n handler()\n else:\n handler(message)\n\n except Exception as e:\n if on_error:\n on_error(e)\n finally:\n if on_end:\n on_end(message)\n\n # Send timing metrics\n context = message.context\n if context and 'ident' in context:\n report_timing(context['ident'], 'skill_handler', stopwatch,\n {'handler': handler.__name__,\n 'skill_id': skill_id})\n return wrapper\n\n\ndef create_basic_wrapper(handler, on_error=None):\n \"\"\"Create the default skill handler wrapper.\n\n This wrapper handles things like metrics, reporting handler start/stop\n and errors.\n\n Arguments:\n handler (callable): method/function to call\n on_error (function): function to call to report error.\n\n Returns:\n Wrapped callable\n \"\"\"\n def wrapper(message):\n try:\n if len(signature(handler).parameters) == 0:\n handler()\n else:\n handler(message)\n except Exception as e:\n if on_error:\n on_error(e)\n\n return wrapper\n\n\nclass EventContainer:\n \"\"\"Container tracking messagbus handlers.\n\n This container tracks events added by a skill, allowing unregistering\n all events on shutdown.\n \"\"\"\n def __init__(self, bus=None):\n self.bus = bus\n self.events = []\n\n def set_bus(self, bus):\n self.bus = bus\n\n def add(self, name, handler, once=False):\n \"\"\"Create event handler for executing intent or other event.\n\n Arguments:\n name (string): IntentParser name\n handler (func): Method to call\n once (bool, optional): Event handler will be removed after it has\n been run once.\n \"\"\"\n def once_wrapper(message):\n # Remove registered one-time handler before invoking,\n # allowing them to re-schedule themselves.\n handler(message)\n self.remove(name)\n\n if handler:\n if once:\n self.bus.once(name, once_wrapper)\n else:\n self.bus.on(name, handler)\n self.events.append((name, handler))\n\n def remove(self, name):\n \"\"\"Removes an event from bus emitter and events list.\n\n Args:\n name (string): Name of Intent or Scheduler Event\n Returns:\n bool: True if found and removed, False if not found\n \"\"\"\n print(\"Removing event {}\".format(name))\n removed = False\n for _name, _handler in list(self.events):\n if name == _name:\n try:\n self.events.remove((_name, _handler))\n except ValueError:\n LOG.error('Failed to remove event {}'.format(name))\n pass\n removed = True\n\n # Because of function wrappers, the emitter doesn't always directly\n # hold the _handler function, it sometimes holds something like\n # 'wrapper(_handler)'. 
So a call like:\n # self.bus.remove(_name, _handler)\n # will not find it, leaving an event handler with that name left behind\n # waiting to fire if it is ever re-installed and triggered.\n # Remove all handlers with the given name, regardless of handler.\n if removed:\n self.bus.remove_all_listeners(name)\n return removed\n\n def __iter__(self):\n return iter(self.events)\n\n def clear(self):\n \"\"\"Unregister all registered handlers and clear the list of registered\n events.\n \"\"\"\n for e, f in self.events:\n self.bus.remove(e, f)\n self.events = [] # Remove reference to wrappers\n", "path": "mycroft/skills/mycroft_skill/event_container.py"}], "after_files": [{"content": "from inspect import signature\n\nfrom mycroft.messagebus.message import Message\nfrom mycroft.metrics import Stopwatch, report_timing\nfrom mycroft.util.log import LOG\n\nfrom ..skill_data import to_alnum\n\n\ndef unmunge_message(message, skill_id):\n \"\"\"Restore message keywords by removing the Letterified skill ID.\n Arguments:\n message (Message): Intent result message\n skill_id (str): skill identifier\n Returns:\n Message without clear keywords\n \"\"\"\n if isinstance(message, Message) and isinstance(message.data, dict):\n skill_id = to_alnum(skill_id)\n for key in list(message.data.keys()):\n if key.startswith(skill_id):\n # replace the munged key with the real one\n new_key = key[len(skill_id):]\n message.data[new_key] = message.data.pop(key)\n\n return message\n\n\ndef get_handler_name(handler):\n \"\"\"Name (including class if available) of handler function.\n\n Arguments:\n handler (function): Function to be named\n\n Returns:\n string: handler name as string\n \"\"\"\n if '__self__' in dir(handler) and 'name' in dir(handler.__self__):\n return handler.__self__.name + '.' 
+ handler.__name__\n else:\n return handler.__name__\n\n\ndef create_wrapper(handler, skill_id, on_start, on_end, on_error):\n \"\"\"Create the default skill handler wrapper.\n\n This wrapper handles things like metrics, reporting handler start/stop\n and errors.\n handler (callable): method/function to call\n skill_id: skill_id for associated skill\n on_start (function): function to call before executing the handler\n on_end (function): function to call after executing the handler\n on_error (function): function to call for error reporting\n \"\"\"\n def wrapper(message):\n stopwatch = Stopwatch()\n try:\n message = unmunge_message(message, skill_id)\n if on_start:\n on_start(message)\n\n with stopwatch:\n if len(signature(handler).parameters) == 0:\n handler()\n else:\n handler(message)\n\n except Exception as e:\n if on_error:\n on_error(e)\n finally:\n if on_end:\n on_end(message)\n\n # Send timing metrics\n context = message.context\n if context and 'ident' in context:\n report_timing(context['ident'], 'skill_handler', stopwatch,\n {'handler': handler.__name__,\n 'skill_id': skill_id})\n return wrapper\n\n\ndef create_basic_wrapper(handler, on_error=None):\n \"\"\"Create the default skill handler wrapper.\n\n This wrapper handles things like metrics, reporting handler start/stop\n and errors.\n\n Arguments:\n handler (callable): method/function to call\n on_error (function): function to call to report error.\n\n Returns:\n Wrapped callable\n \"\"\"\n def wrapper(message):\n try:\n if len(signature(handler).parameters) == 0:\n handler()\n else:\n handler(message)\n except Exception as e:\n if on_error:\n on_error(e)\n\n return wrapper\n\n\nclass EventContainer:\n \"\"\"Container tracking messagbus handlers.\n\n This container tracks events added by a skill, allowing unregistering\n all events on shutdown.\n \"\"\"\n def __init__(self, bus=None):\n self.bus = bus\n self.events = []\n\n def set_bus(self, bus):\n self.bus = bus\n\n def add(self, name, handler, once=False):\n \"\"\"Create event handler for executing intent or other event.\n\n Arguments:\n name (string): IntentParser name\n handler (func): Method to call\n once (bool, optional): Event handler will be removed after it has\n been run once.\n \"\"\"\n def once_wrapper(message):\n # Remove registered one-time handler before invoking,\n # allowing them to re-schedule themselves.\n handler(message)\n self.remove(name)\n\n if handler:\n if once:\n self.bus.once(name, once_wrapper)\n self.events.append((name, once_wrapper))\n else:\n self.bus.on(name, handler)\n self.events.append((name, handler))\n\n LOG.debug('Added event: {}'.format(name))\n\n def remove(self, name):\n \"\"\"Removes an event from bus emitter and events list.\n\n Args:\n name (string): Name of Intent or Scheduler Event\n Returns:\n bool: True if found and removed, False if not found\n \"\"\"\n print(\"Removing event {}\".format(name))\n removed = False\n for _name, _handler in list(self.events):\n if name == _name:\n try:\n self.events.remove((_name, _handler))\n except ValueError:\n LOG.error('Failed to remove event {}'.format(name))\n pass\n removed = True\n\n # Because of function wrappers, the emitter doesn't always directly\n # hold the _handler function, it sometimes holds something like\n # 'wrapper(_handler)'. 
So a call like:\n # self.bus.remove(_name, _handler)\n # will not find it, leaving an event handler with that name left behind\n # waiting to fire if it is ever re-installed and triggered.\n # Remove all handlers with the given name, regardless of handler.\n if removed:\n self.bus.remove_all_listeners(name)\n return removed\n\n def __iter__(self):\n return iter(self.events)\n\n def clear(self):\n \"\"\"Unregister all registered handlers and clear the list of registered\n events.\n \"\"\"\n for e, f in self.events:\n self.bus.remove(e, f)\n self.events = [] # Remove reference to wrappers\n", "path": "mycroft/skills/mycroft_skill/event_container.py"}]}
num_tokens: 3,054 | num_tokens_diff: 160
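Before moving to the next row, here is a compact illustration of the bookkeeping mismatch that this row's golden diff fixes in `EventContainer.add`: the bus registers `once_wrapper`, but the container records the original `handler`, so the later `clear()` asks the emitter to remove a callable it never registered. The sketch below uses a stand-in bus object (not Mycroft's real messagebus client) to keep it self-contained.

```python
# A stand-in bus that registers one callable per event name and, like pyee in
# the traceback above, raises KeyError when asked to remove a callable it
# never registered.
class FakeBus:
    def __init__(self):
        self.listeners = {}

    def once(self, name, func):
        self.listeners[name] = func      # the *wrapper* is what gets registered

    def remove(self, name, func):
        if self.listeners.get(name) is not func:
            raise KeyError(func)
        del self.listeners[name]


class EventContainer:
    def __init__(self, bus):
        self.bus = bus
        self.events = []

    def add(self, name, handler, once=False):
        def once_wrapper(message):
            handler(message)

        if once:
            self.bus.once(name, once_wrapper)
            # Before the golden diff: self.events.append((name, handler)),
            # i.e. the container remembered a callable the bus never saw.
            # After the golden diff: remember the wrapper that was registered.
            self.events.append((name, once_wrapper))

    def clear(self):
        for name, func in self.events:
            self.bus.remove(name, func)  # KeyError here with the old bookkeeping
        self.events = []


bus = FakeBus()
events = EventContainer(bus)
events.add("alarm.expired", lambda message: None, once=True)
events.clear()  # succeeds with the fixed bookkeeping
```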
problem_id: gh_patches_debug_33830 | source: rasdani/github-patches | task_type: git_diff | in_source_id: scikit-image__scikit-image-4416

prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Interpolation error in measure.profile_line
## Description
#4206, once merged, exposes interesting behavior for [measure.profile_line](https://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.profile_line). Consider
```python
image = np.arange(12).reshape(4, 3)
# reduce_func is a new parameter proposed in #4206
prof = profile_line(image, (0, 1), (3, 1), linewidth=3, reduce_func=None)
assert_array_equal(image, prof)
```
As I understand it, the returned profile should be identical to the original image, but we get this (the value at (0, 2) differs):
```
AssertionError:
Arrays are not equal
Mismatch: 8.33%
Max absolute difference: 2
Max relative difference: 0.
x: array([[ 0, 1, 2],
[ 3, 4, 5],
[ 6, 7, 8],
[ 9, 10, 11]])
y: array([[ 0, 1, 0],
[ 3, 4, 5],
[ 6, 7, 8],
[ 9, 10, 11]])
```
This might originate in [scipy.ndimage.map_coordinates](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.map_coordinates.html#scipy.ndimage.map_coordinates). Looks like a potential bug to me but might be an edge case for the underlying interpolation that can be ignored/is not reasonable to fix.
This behavior is already relevant even before merging #4206 because the faulty value will change the mean value of the profile line.
https://github.com/scikit-image/scikit-image/pull/4206#issuecomment-544920806 also highlights additional strange behavior.
## Version information
Current master, exposed once #4206 is merged.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/measure/profile.py`
Content:
```
1 from warnings import warn
2 import numpy as np
3 from scipy import ndimage as ndi
4
5
6 def profile_line(image, src, dst, linewidth=1,
7 order=None, mode='constant', cval=0.0,
8 *, reduce_func=np.mean):
9 """Return the intensity profile of an image measured along a scan line.
10
11 Parameters
12 ----------
13 image : numeric array, shape (M, N[, C])
14 The image, either grayscale (2D array) or multichannel
15 (3D array, where the final axis contains the channel
16 information).
17 src : 2-tuple of numeric scalar (float or int)
18 The start point of the scan line.
19 dst : 2-tuple of numeric scalar (float or int)
20 The end point of the scan line. The destination point is *included*
21 in the profile, in contrast to standard numpy indexing.
22 linewidth : int, optional
23 Width of the scan, perpendicular to the line
24 order : int in {0, 1, 2, 3, 4, 5}, optional
25 The order of the spline interpolation, default is 0 if
26 image.dtype is bool and 1 otherwise. The order has to be in
27 the range 0-5. See `skimage.transform.warp` for detail.
28 mode : {'constant', 'nearest', 'reflect', 'mirror', 'wrap'}, optional
29 How to compute any values falling outside of the image.
30 cval : float, optional
31 If `mode` is 'constant', what constant value to use outside the image.
32 reduce_func : callable, optional
33 Function used to calculate the aggregation of pixel values
34 perpendicular to the profile_line direction when `linewidth` > 1.
35 If set to None the unreduced array will be returned.
36
37 Returns
38 -------
39 return_value : array
40 The intensity profile along the scan line. The length of the profile
41 is the ceil of the computed length of the scan line.
42
43 Examples
44 --------
45 >>> x = np.array([[1, 1, 1, 2, 2, 2]])
46 >>> img = np.vstack([np.zeros_like(x), x, x, x, np.zeros_like(x)])
47 >>> img
48 array([[0, 0, 0, 0, 0, 0],
49 [1, 1, 1, 2, 2, 2],
50 [1, 1, 1, 2, 2, 2],
51 [1, 1, 1, 2, 2, 2],
52 [0, 0, 0, 0, 0, 0]])
53 >>> profile_line(img, (2, 1), (2, 4))
54 array([1., 1., 2., 2.])
55 >>> profile_line(img, (1, 0), (1, 6), cval=4)
56 array([1., 1., 1., 2., 2., 2., 4.])
57
58 The destination point is included in the profile, in contrast to
59 standard numpy indexing.
60 For example:
61
62 >>> profile_line(img, (1, 0), (1, 6)) # The final point is out of bounds
63 array([1., 1., 1., 2., 2., 2., 0.])
64 >>> profile_line(img, (1, 0), (1, 5)) # This accesses the full first row
65 array([1., 1., 1., 2., 2., 2.])
66
67 For different reduce_func inputs:
68
69 >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.mean)
70 array([0.66666667, 0.66666667, 0.66666667, 1.33333333])
71 >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.max)
72 array([1, 1, 1, 2])
73 >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sum)
74 array([2, 2, 2, 4])
75
76 The unreduced array will be returned when `reduce_func` is None or when
77 `reduce_func` acts on each pixel value individually.
78
79 >>> profile_line(img, (1, 2), (4, 2), linewidth=3, order=0,
80 ... reduce_func=None)
81 array([[1, 1, 2],
82 [1, 1, 2],
83 [1, 1, 2],
84 [0, 0, 0]])
85 >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sqrt)
86 array([[1. , 1. , 0. ],
87 [1. , 1. , 0. ],
88 [1. , 1. , 0. ],
89 [1.41421356, 1.41421356, 0. ]])
90 """
91 if order is None:
92 order = 0 if image.dtype == bool else 1
93
94 if image.dtype == bool and order != 0:
95 warn("Input image dtype is bool. Interpolation is not defined "
96 "with bool data type. Please set order to 0 or explicitely "
97 "cast input image to another data type. Starting from version "
98 "0.19 a ValueError will be raised instead of this warning.",
99 FutureWarning, stacklevel=2)
100
101 perp_lines = _line_profile_coordinates(src, dst, linewidth=linewidth)
102 if image.ndim == 3:
103 pixels = [ndi.map_coordinates(image[..., i], perp_lines,
104 prefilter=order > 1,
105 order=order, mode=mode,
106 cval=cval) for i in
107 range(image.shape[2])]
108 pixels = np.transpose(np.asarray(pixels), (1, 2, 0))
109 else:
110 pixels = ndi.map_coordinates(image, perp_lines, prefilter=order > 1,
111 order=order, mode=mode, cval=cval)
112 # The outputted array with reduce_func=None gives an array where the
113 # row values (axis=1) are flipped. Here, we make this consistent.
114 pixels = np.flip(pixels, axis=1)
115
116 if reduce_func is None:
117 intensities = pixels
118 else:
119 try:
120 intensities = reduce_func(pixels, axis=1)
121 except TypeError: # function doesn't allow axis kwarg
122 intensities = np.apply_along_axis(reduce_func, arr=pixels, axis=1)
123
124 return intensities
125
126
127 def _line_profile_coordinates(src, dst, linewidth=1):
128 """Return the coordinates of the profile of an image along a scan line.
129
130 Parameters
131 ----------
132 src : 2-tuple of numeric scalar (float or int)
133 The start point of the scan line.
134 dst : 2-tuple of numeric scalar (float or int)
135 The end point of the scan line.
136 linewidth : int, optional
137 Width of the scan, perpendicular to the line
138
139 Returns
140 -------
141 coords : array, shape (2, N, C), float
142 The coordinates of the profile along the scan line. The length of the
143 profile is the ceil of the computed length of the scan line.
144
145 Notes
146 -----
147 This is a utility method meant to be used internally by skimage functions.
148 The destination point is included in the profile, in contrast to
149 standard numpy indexing.
150 """
151 src_row, src_col = src = np.asarray(src, dtype=float)
152 dst_row, dst_col = dst = np.asarray(dst, dtype=float)
153 d_row, d_col = dst - src
154 theta = np.arctan2(d_row, d_col)
155
156 length = int(np.ceil(np.hypot(d_row, d_col) + 1))
157 # we add one above because we include the last point in the profile
158 # (in contrast to standard numpy indexing)
159 line_col = np.linspace(src_col, dst_col, length)
160 line_row = np.linspace(src_row, dst_row, length)
161
162 # we subtract 1 from linewidth to change from pixel-counting
163 # (make this line 3 pixels wide) to point distances (the
164 # distance between pixel centers)
165 col_width = (linewidth - 1) * np.sin(-theta) / 2
166 row_width = (linewidth - 1) * np.cos(theta) / 2
167 perp_rows = np.array([np.linspace(row_i - row_width, row_i + row_width,
168 linewidth) for row_i in line_row])
169 perp_cols = np.array([np.linspace(col_i - col_width, col_i + col_width,
170 linewidth) for col_i in line_col])
171 return np.array([perp_rows, perp_cols])
172
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:
diff --git a/skimage/measure/profile.py b/skimage/measure/profile.py
--- a/skimage/measure/profile.py
+++ b/skimage/measure/profile.py
@@ -4,21 +4,22 @@
def profile_line(image, src, dst, linewidth=1,
- order=None, mode='constant', cval=0.0,
+ order=None, mode=None, cval=0.0,
*, reduce_func=np.mean):
"""Return the intensity profile of an image measured along a scan line.
Parameters
----------
- image : numeric array, shape (M, N[, C])
+ image : ndarray, shape (M, N[, C])
The image, either grayscale (2D array) or multichannel
(3D array, where the final axis contains the channel
information).
- src : 2-tuple of numeric scalar (float or int)
- The start point of the scan line.
- dst : 2-tuple of numeric scalar (float or int)
- The end point of the scan line. The destination point is *included*
- in the profile, in contrast to standard numpy indexing.
+ src : array_like, shape (2, )
+ The coordinates of the start point of the scan line.
+ dst : array_like, shape (2, )
+ The coordinates of the end point of the scan
+ line. The destination point is *included* in the profile, in
+ contrast to standard numpy indexing.
linewidth : int, optional
Width of the scan, perpendicular to the line
order : int in {0, 1, 2, 3, 4, 5}, optional
@@ -97,6 +98,12 @@
"cast input image to another data type. Starting from version "
"0.19 a ValueError will be raised instead of this warning.",
FutureWarning, stacklevel=2)
+ if mode is None:
+ warn("Default out of bounds interpolation mode 'constant' is "
+ "deprecated. In version 0.19 it will be set to 'reflect'. "
+ "To avoid this warning, set `mode=` explicitly.",
+ FutureWarning, stacklevel=2)
+ mode = 'constant'
perp_lines = _line_profile_coordinates(src, dst, linewidth=linewidth)
if image.ndim == 3:
verification_info:
{"golden_diff": "diff --git a/skimage/measure/profile.py b/skimage/measure/profile.py\n--- a/skimage/measure/profile.py\n+++ b/skimage/measure/profile.py\n@@ -4,21 +4,22 @@\n \n \n def profile_line(image, src, dst, linewidth=1,\n- order=None, mode='constant', cval=0.0,\n+ order=None, mode=None, cval=0.0,\n *, reduce_func=np.mean):\n \"\"\"Return the intensity profile of an image measured along a scan line.\n \n Parameters\n ----------\n- image : numeric array, shape (M, N[, C])\n+ image : ndarray, shape (M, N[, C])\n The image, either grayscale (2D array) or multichannel\n (3D array, where the final axis contains the channel\n information).\n- src : 2-tuple of numeric scalar (float or int)\n- The start point of the scan line.\n- dst : 2-tuple of numeric scalar (float or int)\n- The end point of the scan line. The destination point is *included*\n- in the profile, in contrast to standard numpy indexing.\n+ src : array_like, shape (2, )\n+ The coordinates of the start point of the scan line.\n+ dst : array_like, shape (2, )\n+ The coordinates of the end point of the scan\n+ line. The destination point is *included* in the profile, in\n+ contrast to standard numpy indexing.\n linewidth : int, optional\n Width of the scan, perpendicular to the line\n order : int in {0, 1, 2, 3, 4, 5}, optional\n@@ -97,6 +98,12 @@\n \"cast input image to another data type. Starting from version \"\n \"0.19 a ValueError will be raised instead of this warning.\",\n FutureWarning, stacklevel=2)\n+ if mode is None:\n+ warn(\"Default out of bounds interpolation mode 'constant' is \"\n+ \"deprecated. In version 0.19 it will be set to 'reflect'. \"\n+ \"To avoid this warning, set `mode=` explicitly.\",\n+ FutureWarning, stacklevel=2)\n+ mode = 'constant'\n \n perp_lines = _line_profile_coordinates(src, dst, linewidth=linewidth)\n if image.ndim == 3:\n", "issue": "Interpolation error in measure.profile_line\n## Description\r\n\r\n#4206, once merged, exposes interesting behavior for [measure.profile_line](https://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.profile_line). Consider\r\n\r\n```python\r\nimage = np.arange(12).reshape(4, 3)\r\n# reduce_func is a new parameter proposed in #4206\r\nprof = profile_line(image, (0, 1), (3, 1), linewidth=3, reduce_func=None)\r\nassert_array_equal(image, prof)\r\n```\r\nAs I understand it, the returned profile should be identical with the original image but we get this (value at (0, 2) is different):\r\n```\r\nAssertionError: \r\nArrays are not equal\r\nMismatch: 8.33%\r\nMax absolute difference: 2\r\nMax relative difference: 0.\r\n x: array([[ 0, 1, 2],\r\n [ 3, 4, 5],\r\n [ 6, 7, 8],\r\n [ 9, 10, 11]])\r\n y: array([[ 0, 1, 0],\r\n [ 3, 4, 5],\r\n [ 6, 7, 8],\r\n [ 9, 10, 11]])\r\n```\r\nThis might originate in [scipy.ndimage.map_coordinates](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.map_coordinates.html#scipy.ndimage.map_coordinates). Looks like a potential bug to me but might be an edge case for the underlying interpolation that can be ignored/is not reasonable to fix. 
\r\nThis behavior is already relevant even before merging #4206 because the faulty value will change the mean value of the profile line.\r\n\r\nhttps://github.com/scikit-image/scikit-image/pull/4206#issuecomment-544920806 also highlights additional strange behavior.\r\n\r\n## Version information\r\n\r\nCurrent master, exposed once #4206 is merged.\r\n\n", "before_files": [{"content": "from warnings import warn\nimport numpy as np\nfrom scipy import ndimage as ndi\n\n\ndef profile_line(image, src, dst, linewidth=1,\n order=None, mode='constant', cval=0.0,\n *, reduce_func=np.mean):\n \"\"\"Return the intensity profile of an image measured along a scan line.\n\n Parameters\n ----------\n image : numeric array, shape (M, N[, C])\n The image, either grayscale (2D array) or multichannel\n (3D array, where the final axis contains the channel\n information).\n src : 2-tuple of numeric scalar (float or int)\n The start point of the scan line.\n dst : 2-tuple of numeric scalar (float or int)\n The end point of the scan line. The destination point is *included*\n in the profile, in contrast to standard numpy indexing.\n linewidth : int, optional\n Width of the scan, perpendicular to the line\n order : int in {0, 1, 2, 3, 4, 5}, optional\n The order of the spline interpolation, default is 0 if\n image.dtype is bool and 1 otherwise. The order has to be in\n the range 0-5. See `skimage.transform.warp` for detail.\n mode : {'constant', 'nearest', 'reflect', 'mirror', 'wrap'}, optional\n How to compute any values falling outside of the image.\n cval : float, optional\n If `mode` is 'constant', what constant value to use outside the image.\n reduce_func : callable, optional\n Function used to calculate the aggregation of pixel values\n perpendicular to the profile_line direction when `linewidth` > 1.\n If set to None the unreduced array will be returned.\n\n Returns\n -------\n return_value : array\n The intensity profile along the scan line. The length of the profile\n is the ceil of the computed length of the scan line.\n\n Examples\n --------\n >>> x = np.array([[1, 1, 1, 2, 2, 2]])\n >>> img = np.vstack([np.zeros_like(x), x, x, x, np.zeros_like(x)])\n >>> img\n array([[0, 0, 0, 0, 0, 0],\n [1, 1, 1, 2, 2, 2],\n [1, 1, 1, 2, 2, 2],\n [1, 1, 1, 2, 2, 2],\n [0, 0, 0, 0, 0, 0]])\n >>> profile_line(img, (2, 1), (2, 4))\n array([1., 1., 2., 2.])\n >>> profile_line(img, (1, 0), (1, 6), cval=4)\n array([1., 1., 1., 2., 2., 2., 4.])\n\n The destination point is included in the profile, in contrast to\n standard numpy indexing.\n For example:\n\n >>> profile_line(img, (1, 0), (1, 6)) # The final point is out of bounds\n array([1., 1., 1., 2., 2., 2., 0.])\n >>> profile_line(img, (1, 0), (1, 5)) # This accesses the full first row\n array([1., 1., 1., 2., 2., 2.])\n\n For different reduce_func inputs:\n\n >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.mean)\n array([0.66666667, 0.66666667, 0.66666667, 1.33333333])\n >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.max)\n array([1, 1, 1, 2])\n >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sum)\n array([2, 2, 2, 4])\n\n The unreduced array will be returned when `reduce_func` is None or when\n `reduce_func` acts on each pixel value individually.\n\n >>> profile_line(img, (1, 2), (4, 2), linewidth=3, order=0,\n ... reduce_func=None)\n array([[1, 1, 2],\n [1, 1, 2],\n [1, 1, 2],\n [0, 0, 0]])\n >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sqrt)\n array([[1. , 1. , 0. ],\n [1. , 1. , 0. 
],\n [1. , 1. , 0. ],\n [1.41421356, 1.41421356, 0. ]])\n \"\"\"\n if order is None:\n order = 0 if image.dtype == bool else 1\n\n if image.dtype == bool and order != 0:\n warn(\"Input image dtype is bool. Interpolation is not defined \"\n \"with bool data type. Please set order to 0 or explicitely \"\n \"cast input image to another data type. Starting from version \"\n \"0.19 a ValueError will be raised instead of this warning.\",\n FutureWarning, stacklevel=2)\n\n perp_lines = _line_profile_coordinates(src, dst, linewidth=linewidth)\n if image.ndim == 3:\n pixels = [ndi.map_coordinates(image[..., i], perp_lines,\n prefilter=order > 1,\n order=order, mode=mode,\n cval=cval) for i in\n range(image.shape[2])]\n pixels = np.transpose(np.asarray(pixels), (1, 2, 0))\n else:\n pixels = ndi.map_coordinates(image, perp_lines, prefilter=order > 1,\n order=order, mode=mode, cval=cval)\n # The outputted array with reduce_func=None gives an array where the\n # row values (axis=1) are flipped. Here, we make this consistent.\n pixels = np.flip(pixels, axis=1)\n\n if reduce_func is None:\n intensities = pixels\n else:\n try:\n intensities = reduce_func(pixels, axis=1)\n except TypeError: # function doesn't allow axis kwarg\n intensities = np.apply_along_axis(reduce_func, arr=pixels, axis=1)\n\n return intensities\n\n\ndef _line_profile_coordinates(src, dst, linewidth=1):\n \"\"\"Return the coordinates of the profile of an image along a scan line.\n\n Parameters\n ----------\n src : 2-tuple of numeric scalar (float or int)\n The start point of the scan line.\n dst : 2-tuple of numeric scalar (float or int)\n The end point of the scan line.\n linewidth : int, optional\n Width of the scan, perpendicular to the line\n\n Returns\n -------\n coords : array, shape (2, N, C), float\n The coordinates of the profile along the scan line. 
The length of the\n profile is the ceil of the computed length of the scan line.\n\n Notes\n -----\n This is a utility method meant to be used internally by skimage functions.\n The destination point is included in the profile, in contrast to\n standard numpy indexing.\n \"\"\"\n src_row, src_col = src = np.asarray(src, dtype=float)\n dst_row, dst_col = dst = np.asarray(dst, dtype=float)\n d_row, d_col = dst - src\n theta = np.arctan2(d_row, d_col)\n\n length = int(np.ceil(np.hypot(d_row, d_col) + 1))\n # we add one above because we include the last point in the profile\n # (in contrast to standard numpy indexing)\n line_col = np.linspace(src_col, dst_col, length)\n line_row = np.linspace(src_row, dst_row, length)\n\n # we subtract 1 from linewidth to change from pixel-counting\n # (make this line 3 pixels wide) to point distances (the\n # distance between pixel centers)\n col_width = (linewidth - 1) * np.sin(-theta) / 2\n row_width = (linewidth - 1) * np.cos(theta) / 2\n perp_rows = np.array([np.linspace(row_i - row_width, row_i + row_width,\n linewidth) for row_i in line_row])\n perp_cols = np.array([np.linspace(col_i - col_width, col_i + col_width,\n linewidth) for col_i in line_col])\n return np.array([perp_rows, perp_cols])\n", "path": "skimage/measure/profile.py"}], "after_files": [{"content": "from warnings import warn\nimport numpy as np\nfrom scipy import ndimage as ndi\n\n\ndef profile_line(image, src, dst, linewidth=1,\n order=None, mode=None, cval=0.0,\n *, reduce_func=np.mean):\n \"\"\"Return the intensity profile of an image measured along a scan line.\n\n Parameters\n ----------\n image : ndarray, shape (M, N[, C])\n The image, either grayscale (2D array) or multichannel\n (3D array, where the final axis contains the channel\n information).\n src : array_like, shape (2, )\n The coordinates of the start point of the scan line.\n dst : array_like, shape (2, )\n The coordinates of the end point of the scan\n line. The destination point is *included* in the profile, in\n contrast to standard numpy indexing.\n linewidth : int, optional\n Width of the scan, perpendicular to the line\n order : int in {0, 1, 2, 3, 4, 5}, optional\n The order of the spline interpolation, default is 0 if\n image.dtype is bool and 1 otherwise. The order has to be in\n the range 0-5. See `skimage.transform.warp` for detail.\n mode : {'constant', 'nearest', 'reflect', 'mirror', 'wrap'}, optional\n How to compute any values falling outside of the image.\n cval : float, optional\n If `mode` is 'constant', what constant value to use outside the image.\n reduce_func : callable, optional\n Function used to calculate the aggregation of pixel values\n perpendicular to the profile_line direction when `linewidth` > 1.\n If set to None the unreduced array will be returned.\n\n Returns\n -------\n return_value : array\n The intensity profile along the scan line. 
The length of the profile\n is the ceil of the computed length of the scan line.\n\n Examples\n --------\n >>> x = np.array([[1, 1, 1, 2, 2, 2]])\n >>> img = np.vstack([np.zeros_like(x), x, x, x, np.zeros_like(x)])\n >>> img\n array([[0, 0, 0, 0, 0, 0],\n [1, 1, 1, 2, 2, 2],\n [1, 1, 1, 2, 2, 2],\n [1, 1, 1, 2, 2, 2],\n [0, 0, 0, 0, 0, 0]])\n >>> profile_line(img, (2, 1), (2, 4))\n array([1., 1., 2., 2.])\n >>> profile_line(img, (1, 0), (1, 6), cval=4)\n array([1., 1., 1., 2., 2., 2., 4.])\n\n The destination point is included in the profile, in contrast to\n standard numpy indexing.\n For example:\n\n >>> profile_line(img, (1, 0), (1, 6)) # The final point is out of bounds\n array([1., 1., 1., 2., 2., 2., 0.])\n >>> profile_line(img, (1, 0), (1, 5)) # This accesses the full first row\n array([1., 1., 1., 2., 2., 2.])\n\n For different reduce_func inputs:\n\n >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.mean)\n array([0.66666667, 0.66666667, 0.66666667, 1.33333333])\n >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.max)\n array([1, 1, 1, 2])\n >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sum)\n array([2, 2, 2, 4])\n\n The unreduced array will be returned when `reduce_func` is None or when\n `reduce_func` acts on each pixel value individually.\n\n >>> profile_line(img, (1, 2), (4, 2), linewidth=3, order=0,\n ... reduce_func=None)\n array([[1, 1, 2],\n [1, 1, 2],\n [1, 1, 2],\n [0, 0, 0]])\n >>> profile_line(img, (1, 0), (1, 3), linewidth=3, reduce_func=np.sqrt)\n array([[1. , 1. , 0. ],\n [1. , 1. , 0. ],\n [1. , 1. , 0. ],\n [1.41421356, 1.41421356, 0. ]])\n \"\"\"\n if order is None:\n order = 0 if image.dtype == bool else 1\n\n if image.dtype == bool and order != 0:\n warn(\"Input image dtype is bool. Interpolation is not defined \"\n \"with bool data type. Please set order to 0 or explicitely \"\n \"cast input image to another data type. Starting from version \"\n \"0.19 a ValueError will be raised instead of this warning.\",\n FutureWarning, stacklevel=2)\n if mode is None:\n warn(\"Default out of bounds interpolation mode 'constant' is \"\n \"deprecated. In version 0.19 it will be set to 'reflect'. \"\n \"To avoid this warning, set `mode=` explicitly.\",\n FutureWarning, stacklevel=2)\n mode = 'constant'\n\n perp_lines = _line_profile_coordinates(src, dst, linewidth=linewidth)\n if image.ndim == 3:\n pixels = [ndi.map_coordinates(image[..., i], perp_lines,\n prefilter=order > 1,\n order=order, mode=mode,\n cval=cval) for i in\n range(image.shape[2])]\n pixels = np.transpose(np.asarray(pixels), (1, 2, 0))\n else:\n pixels = ndi.map_coordinates(image, perp_lines, prefilter=order > 1,\n order=order, mode=mode, cval=cval)\n # The outputted array with reduce_func=None gives an array where the\n # row values (axis=1) are flipped. 
Here, we make this consistent.\n pixels = np.flip(pixels, axis=1)\n\n if reduce_func is None:\n intensities = pixels\n else:\n try:\n intensities = reduce_func(pixels, axis=1)\n except TypeError: # function doesn't allow axis kwarg\n intensities = np.apply_along_axis(reduce_func, arr=pixels, axis=1)\n\n return intensities\n\n\ndef _line_profile_coordinates(src, dst, linewidth=1):\n \"\"\"Return the coordinates of the profile of an image along a scan line.\n\n Parameters\n ----------\n src : 2-tuple of numeric scalar (float or int)\n The start point of the scan line.\n dst : 2-tuple of numeric scalar (float or int)\n The end point of the scan line.\n linewidth : int, optional\n Width of the scan, perpendicular to the line\n\n Returns\n -------\n coords : array, shape (2, N, C), float\n The coordinates of the profile along the scan line. The length of the\n profile is the ceil of the computed length of the scan line.\n\n Notes\n -----\n This is a utility method meant to be used internally by skimage functions.\n The destination point is included in the profile, in contrast to\n standard numpy indexing.\n \"\"\"\n src_row, src_col = src = np.asarray(src, dtype=float)\n dst_row, dst_col = dst = np.asarray(dst, dtype=float)\n d_row, d_col = dst - src\n theta = np.arctan2(d_row, d_col)\n\n length = int(np.ceil(np.hypot(d_row, d_col) + 1))\n # we add one above because we include the last point in the profile\n # (in contrast to standard numpy indexing)\n line_col = np.linspace(src_col, dst_col, length)\n line_row = np.linspace(src_row, dst_row, length)\n\n # we subtract 1 from linewidth to change from pixel-counting\n # (make this line 3 pixels wide) to point distances (the\n # distance between pixel centers)\n col_width = (linewidth - 1) * np.sin(-theta) / 2\n row_width = (linewidth - 1) * np.cos(theta) / 2\n perp_rows = np.array([np.linspace(row_i - row_width, row_i + row_width,\n linewidth) for row_i in line_row])\n perp_cols = np.array([np.linspace(col_i - col_width, col_i + col_width,\n linewidth) for col_i in line_col])\n return np.array([perp_rows, perp_cols])\n", "path": "skimage/measure/profile.py"}]}
num_tokens: 3,125 | num_tokens_diff: 528
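As with the previous row, a short, self-contained sketch of the behaviour behind this row's issue may help: `profile_line` builds perpendicular sample coordinates with trigonometry, so a coordinate that should be exactly 2.0 can land infinitesimally outside the image, and `scipy.ndimage.map_coordinates` with `mode='constant'` then returns `cval` instead of the edge pixel. The outputs noted in the comments are what the scipy versions current at the time of the issue were reported to produce, not something guaranteed by this page.

```python
import numpy as np
from scipy import ndimage as ndi

image = np.arange(12).reshape(4, 3).astype(float)

# A sample point that should be the corner pixel (0, 2) but, through
# floating-point round-off, sits just past the last column.
rows = np.array([0.0])
cols = np.array([np.nextafter(2.0, 3.0)])

print(ndi.map_coordinates(image, [rows, cols], order=1, mode='constant', cval=0.0))
print(ndi.map_coordinates(image, [rows, cols], order=1, mode='reflect'))
# Expected: the first call yields cval (0.0) because the point is treated as
# out of bounds, while 'reflect' yields the edge value 2.0 -- which is why the
# golden diff deprecates the silent 'constant' default in favour of 'reflect'.
```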
problem_id: gh_patches_debug_25538 | source: rasdani/github-patches | task_type: git_diff | in_source_id: mathesar-foundation__mathesar-2331

prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix API error response when filtering on an invalid date
## Steps to reproduce
1. Send a request like
```http
GET http://localhost:8000/api/db/v0/tables/1/records/?limit=500&offset=0&filter={"equal":[{"column_id":[6]},{"literal":["2"]}]}
```
Here, column with id `6` is a date column, and the value `"2"` is a partially-entered date that should be considered invalid.
1. Expect an error response conforming to our [error standards](https://wiki.mathesar.org/en/engineering/standards/api#errors)
1. Instead, observe an HTTP 500 Django error.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mathesar/api/db/viewsets/records.py`
Content:
```
1 from psycopg2.errors import ForeignKeyViolation
2 from rest_access_policy import AccessViewSetMixin
3 from rest_framework import status, viewsets
4 from rest_framework.exceptions import NotFound
5 from rest_framework.renderers import BrowsableAPIRenderer
6 from rest_framework.response import Response
7 from sqlalchemy.exc import IntegrityError
8
9 from mathesar.api.db.permissions.records import RecordAccessPolicy
10 from mathesar.api.exceptions.error_codes import ErrorCodes
11 import mathesar.api.exceptions.database_exceptions.exceptions as database_api_exceptions
12 import mathesar.api.exceptions.generic_exceptions.base_exceptions as generic_api_exceptions
13 from db.functions.exceptions import (
14 BadDBFunctionFormat, ReferencedColumnsDontExist, UnknownDBFunctionID,
15 )
16 from db.records.exceptions import (
17 BadGroupFormat, GroupFieldNotFound, InvalidGroupType, UndefinedFunction,
18 BadSortFormat, SortFieldNotFound
19 )
20 from mathesar.api.pagination import TableLimitOffsetPagination
21 from mathesar.api.serializers.records import RecordListParameterSerializer, RecordSerializer
22 from mathesar.api.utils import get_table_or_404
23 from mathesar.functions.operations.convert import rewrite_db_function_spec_column_ids_to_names
24 from mathesar.models.base import Table
25 from mathesar.utils.json import MathesarJSONRenderer
26
27
28 class RecordViewSet(AccessViewSetMixin, viewsets.ViewSet):
29 access_policy = RecordAccessPolicy
30
31 # There is no 'update' method.
32 # We're not supporting PUT requests because there aren't a lot of use cases
33 # where the entire record needs to be replaced, PATCH suffices for updates.
34 def get_queryset(self):
35 return Table.objects.all().order_by('-created_at')
36
37 renderer_classes = [MathesarJSONRenderer, BrowsableAPIRenderer]
38
39 # For filter parameter formatting, see:
40 # db/functions/operations/deserialize.py::get_db_function_from_ma_function_spec function doc>
41 # For sorting parameter formatting, see:
42 # https://github.com/centerofci/sqlalchemy-filters#sort-format
43 def list(self, request, table_pk=None):
44 paginator = TableLimitOffsetPagination()
45
46 serializer = RecordListParameterSerializer(data=request.GET)
47 serializer.is_valid(raise_exception=True)
48 table = get_table_or_404(table_pk)
49
50 filter_unprocessed = serializer.validated_data['filter']
51 order_by = serializer.validated_data['order_by']
52 grouping = serializer.validated_data['grouping']
53 search_fuzzy = serializer.validated_data['search_fuzzy']
54 filter_processed = None
55 column_names_to_ids = table.get_column_name_id_bidirectional_map()
56 column_ids_to_names = column_names_to_ids.inverse
57 if filter_unprocessed:
58 filter_processed = rewrite_db_function_spec_column_ids_to_names(
59 column_ids_to_names=column_ids_to_names,
60 spec=filter_unprocessed,
61 )
62 # Replace column id value used in the `field` property with column name
63 name_converted_group_by = None
64 if grouping:
65 group_by_columns_names = [column_ids_to_names[column_id] for column_id in grouping['columns']]
66 name_converted_group_by = {**grouping, 'columns': group_by_columns_names}
67 name_converted_order_by = [{**column, 'field': column_ids_to_names[column['field']]} for column in order_by]
68 name_converted_search = [{**column, 'column': column_ids_to_names[column['field']]} for column in search_fuzzy]
69
70 try:
71
72 records = paginator.paginate_queryset(
73 self.get_queryset(), request, table, column_names_to_ids,
74 filters=filter_processed,
75 order_by=name_converted_order_by,
76 grouping=name_converted_group_by,
77 search=name_converted_search,
78 duplicate_only=serializer.validated_data['duplicate_only']
79 )
80 except (BadDBFunctionFormat, UnknownDBFunctionID, ReferencedColumnsDontExist) as e:
81 raise database_api_exceptions.BadFilterAPIException(
82 e,
83 field='filters',
84 status_code=status.HTTP_400_BAD_REQUEST
85 )
86 except (BadSortFormat, SortFieldNotFound) as e:
87 raise database_api_exceptions.BadSortAPIException(
88 e,
89 field='order_by',
90 status_code=status.HTTP_400_BAD_REQUEST
91 )
92 except (BadGroupFormat, GroupFieldNotFound, InvalidGroupType) as e:
93 raise database_api_exceptions.BadGroupAPIException(
94 e,
95 field='grouping',
96 status_code=status.HTTP_400_BAD_REQUEST
97 )
98 except UndefinedFunction as e:
99 raise database_api_exceptions.UndefinedFunctionAPIException(
100 e,
101 details=e.args[0],
102 status_code=status.HTTP_400_BAD_REQUEST
103 )
104 serializer = RecordSerializer(
105 records,
106 many=True,
107 context=self.get_serializer_context(table)
108 )
109 return paginator.get_paginated_response(serializer.data)
110
111 def retrieve(self, request, pk=None, table_pk=None):
112 table = get_table_or_404(table_pk)
113 # TODO refactor to use serializer for more DRY response logic
114 paginator = TableLimitOffsetPagination()
115 record_filters = {
116 "equal": [
117 {"column_name": [table.primary_key_column_name]},
118 {"literal": [pk]}
119 ]
120 }
121 column_names_to_ids = table.get_column_name_id_bidirectional_map()
122 records = paginator.paginate_queryset(
123 table,
124 request,
125 table,
126 column_names_to_ids,
127 filters=record_filters
128 )
129 if not records:
130 raise NotFound
131 serializer = RecordSerializer(
132 records,
133 many=True,
134 context=self.get_serializer_context(table)
135 )
136 return paginator.get_paginated_response(serializer.data)
137
138 def create(self, request, table_pk=None):
139 table = get_table_or_404(table_pk)
140 serializer = RecordSerializer(data=request.data, context=self.get_serializer_context(table))
141 serializer.is_valid(raise_exception=True)
142 serializer.save()
143 # TODO refactor to use serializer for more DRY response logic
144 column_name_id_map = table.get_column_name_id_bidirectional_map()
145 table_pk_column_id = column_name_id_map[table.primary_key_column_name]
146 pk_value = serializer.data[table_pk_column_id]
147 paginator = TableLimitOffsetPagination()
148 record_filters = {
149 "equal": [
150 {"column_name": [table.primary_key_column_name]},
151 {"literal": [pk_value]}
152 ]
153 }
154 column_names_to_ids = table.get_column_name_id_bidirectional_map()
155 records = paginator.paginate_queryset(
156 table,
157 request,
158 table,
159 column_names_to_ids,
160 filters=record_filters
161 )
162 serializer = RecordSerializer(
163 records,
164 many=True,
165 context=self.get_serializer_context(table)
166 )
167 response = paginator.get_paginated_response(serializer.data)
168 response.status_code = status.HTTP_201_CREATED
169 return response
170
171 def partial_update(self, request, pk=None, table_pk=None):
172 table = get_table_or_404(table_pk)
173 serializer = RecordSerializer(
174 {'id': pk},
175 data=request.data,
176 context=self.get_serializer_context(table),
177 partial=True
178 )
179 serializer.is_valid(raise_exception=True)
180 serializer.save()
181 # TODO refactor to use serializer for more DRY response logic
182 paginator = TableLimitOffsetPagination()
183 record_filters = {
184 "equal": [
185 {"column_name": [table.primary_key_column_name]},
186 {"literal": [pk]}
187 ]
188 }
189 column_names_to_ids = table.get_column_name_id_bidirectional_map()
190 records = paginator.paginate_queryset(
191 table,
192 request,
193 table,
194 column_names_to_ids,
195 filters=record_filters
196 )
197 serializer = RecordSerializer(
198 records,
199 many=True,
200 context=self.get_serializer_context(table)
201 )
202 return paginator.get_paginated_response(serializer.data)
203
204 def destroy(self, request, pk=None, table_pk=None):
205 table = get_table_or_404(table_pk)
206 if table.get_record(pk) is None:
207 raise generic_api_exceptions.NotFoundAPIException(
208 NotFound,
209 error_code=ErrorCodes.RecordNotFound.value,
210 message="Record doesn't exist"
211 )
212 try:
213 table.delete_record(pk)
214 except IntegrityError as e:
215 if isinstance(e.orig, ForeignKeyViolation):
216 raise database_api_exceptions.ForeignKeyViolationAPIException(
217 e,
218 status_code=status.HTTP_400_BAD_REQUEST,
219 referent_table=table,
220 )
221
222 return Response(status=status.HTTP_204_NO_CONTENT)
223
224 def get_serializer_context(self, table):
225 columns_map = table.get_column_name_id_bidirectional_map()
226 context = {'columns_map': columns_map, 'table': table}
227 return context
228
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mathesar/api/db/viewsets/records.py b/mathesar/api/db/viewsets/records.py
--- a/mathesar/api/db/viewsets/records.py
+++ b/mathesar/api/db/viewsets/records.py
@@ -1,10 +1,10 @@
-from psycopg2.errors import ForeignKeyViolation
+from psycopg2.errors import ForeignKeyViolation, InvalidDatetimeFormat
from rest_access_policy import AccessViewSetMixin
from rest_framework import status, viewsets
from rest_framework.exceptions import NotFound
from rest_framework.renderers import BrowsableAPIRenderer
from rest_framework.response import Response
-from sqlalchemy.exc import IntegrityError
+from sqlalchemy.exc import IntegrityError, DataError
from mathesar.api.db.permissions.records import RecordAccessPolicy
from mathesar.api.exceptions.error_codes import ErrorCodes
@@ -101,6 +101,13 @@
details=e.args[0],
status_code=status.HTTP_400_BAD_REQUEST
)
+ except DataError as e:
+ if isinstance(e.orig, InvalidDatetimeFormat):
+ raise database_api_exceptions.InvalidDateFormatAPIException(
+ e,
+ status_code=status.HTTP_400_BAD_REQUEST,
+ )
+
serializer = RecordSerializer(
records,
many=True,
|
{"golden_diff": "diff --git a/mathesar/api/db/viewsets/records.py b/mathesar/api/db/viewsets/records.py\n--- a/mathesar/api/db/viewsets/records.py\n+++ b/mathesar/api/db/viewsets/records.py\n@@ -1,10 +1,10 @@\n-from psycopg2.errors import ForeignKeyViolation\n+from psycopg2.errors import ForeignKeyViolation, InvalidDatetimeFormat\n from rest_access_policy import AccessViewSetMixin\n from rest_framework import status, viewsets\n from rest_framework.exceptions import NotFound\n from rest_framework.renderers import BrowsableAPIRenderer\n from rest_framework.response import Response\n-from sqlalchemy.exc import IntegrityError\n+from sqlalchemy.exc import IntegrityError, DataError\n \n from mathesar.api.db.permissions.records import RecordAccessPolicy\n from mathesar.api.exceptions.error_codes import ErrorCodes\n@@ -101,6 +101,13 @@\n details=e.args[0],\n status_code=status.HTTP_400_BAD_REQUEST\n )\n+ except DataError as e:\n+ if isinstance(e.orig, InvalidDatetimeFormat):\n+ raise database_api_exceptions.InvalidDateFormatAPIException(\n+ e,\n+ status_code=status.HTTP_400_BAD_REQUEST,\n+ )\n+\n serializer = RecordSerializer(\n records,\n many=True,\n", "issue": "Fix API error response when filtering on an invalid date\n## Steps to reproduce\n\n1. Send a request like\n\n ```http\n GET http://localhost:8000/api/db/v0/tables/1/records/?limit=500&offset=0&filter={\"equal\":[{\"column_id\":[6]},{\"literal\":[\"2\"]}]}\n ```\n\n Here, column with id `6` is a date column, and the value `\"2\"` is a partially-entered date that should be considered invalid.\n\n1. Expect an error response conforming to our [error standards](https://wiki.mathesar.org/en/engineering/standards/api#errors)\n\n1. Instead, observe an HTTP 500 Django error.\n\n", "before_files": [{"content": "from psycopg2.errors import ForeignKeyViolation\nfrom rest_access_policy import AccessViewSetMixin\nfrom rest_framework import status, viewsets\nfrom rest_framework.exceptions import NotFound\nfrom rest_framework.renderers import BrowsableAPIRenderer\nfrom rest_framework.response import Response\nfrom sqlalchemy.exc import IntegrityError\n\nfrom mathesar.api.db.permissions.records import RecordAccessPolicy\nfrom mathesar.api.exceptions.error_codes import ErrorCodes\nimport mathesar.api.exceptions.database_exceptions.exceptions as database_api_exceptions\nimport mathesar.api.exceptions.generic_exceptions.base_exceptions as generic_api_exceptions\nfrom db.functions.exceptions import (\n BadDBFunctionFormat, ReferencedColumnsDontExist, UnknownDBFunctionID,\n)\nfrom db.records.exceptions import (\n BadGroupFormat, GroupFieldNotFound, InvalidGroupType, UndefinedFunction,\n BadSortFormat, SortFieldNotFound\n)\nfrom mathesar.api.pagination import TableLimitOffsetPagination\nfrom mathesar.api.serializers.records import RecordListParameterSerializer, RecordSerializer\nfrom mathesar.api.utils import get_table_or_404\nfrom mathesar.functions.operations.convert import rewrite_db_function_spec_column_ids_to_names\nfrom mathesar.models.base import Table\nfrom mathesar.utils.json import MathesarJSONRenderer\n\n\nclass RecordViewSet(AccessViewSetMixin, viewsets.ViewSet):\n access_policy = RecordAccessPolicy\n\n # There is no 'update' method.\n # We're not supporting PUT requests because there aren't a lot of use cases\n # where the entire record needs to be replaced, PATCH suffices for updates.\n def get_queryset(self):\n return Table.objects.all().order_by('-created_at')\n\n renderer_classes = [MathesarJSONRenderer, BrowsableAPIRenderer]\n\n # For filter 
parameter formatting, see:\n # db/functions/operations/deserialize.py::get_db_function_from_ma_function_spec function doc>\n # For sorting parameter formatting, see:\n # https://github.com/centerofci/sqlalchemy-filters#sort-format\n def list(self, request, table_pk=None):\n paginator = TableLimitOffsetPagination()\n\n serializer = RecordListParameterSerializer(data=request.GET)\n serializer.is_valid(raise_exception=True)\n table = get_table_or_404(table_pk)\n\n filter_unprocessed = serializer.validated_data['filter']\n order_by = serializer.validated_data['order_by']\n grouping = serializer.validated_data['grouping']\n search_fuzzy = serializer.validated_data['search_fuzzy']\n filter_processed = None\n column_names_to_ids = table.get_column_name_id_bidirectional_map()\n column_ids_to_names = column_names_to_ids.inverse\n if filter_unprocessed:\n filter_processed = rewrite_db_function_spec_column_ids_to_names(\n column_ids_to_names=column_ids_to_names,\n spec=filter_unprocessed,\n )\n # Replace column id value used in the `field` property with column name\n name_converted_group_by = None\n if grouping:\n group_by_columns_names = [column_ids_to_names[column_id] for column_id in grouping['columns']]\n name_converted_group_by = {**grouping, 'columns': group_by_columns_names}\n name_converted_order_by = [{**column, 'field': column_ids_to_names[column['field']]} for column in order_by]\n name_converted_search = [{**column, 'column': column_ids_to_names[column['field']]} for column in search_fuzzy]\n\n try:\n\n records = paginator.paginate_queryset(\n self.get_queryset(), request, table, column_names_to_ids,\n filters=filter_processed,\n order_by=name_converted_order_by,\n grouping=name_converted_group_by,\n search=name_converted_search,\n duplicate_only=serializer.validated_data['duplicate_only']\n )\n except (BadDBFunctionFormat, UnknownDBFunctionID, ReferencedColumnsDontExist) as e:\n raise database_api_exceptions.BadFilterAPIException(\n e,\n field='filters',\n status_code=status.HTTP_400_BAD_REQUEST\n )\n except (BadSortFormat, SortFieldNotFound) as e:\n raise database_api_exceptions.BadSortAPIException(\n e,\n field='order_by',\n status_code=status.HTTP_400_BAD_REQUEST\n )\n except (BadGroupFormat, GroupFieldNotFound, InvalidGroupType) as e:\n raise database_api_exceptions.BadGroupAPIException(\n e,\n field='grouping',\n status_code=status.HTTP_400_BAD_REQUEST\n )\n except UndefinedFunction as e:\n raise database_api_exceptions.UndefinedFunctionAPIException(\n e,\n details=e.args[0],\n status_code=status.HTTP_400_BAD_REQUEST\n )\n serializer = RecordSerializer(\n records,\n many=True,\n context=self.get_serializer_context(table)\n )\n return paginator.get_paginated_response(serializer.data)\n\n def retrieve(self, request, pk=None, table_pk=None):\n table = get_table_or_404(table_pk)\n # TODO refactor to use serializer for more DRY response logic\n paginator = TableLimitOffsetPagination()\n record_filters = {\n \"equal\": [\n {\"column_name\": [table.primary_key_column_name]},\n {\"literal\": [pk]}\n ]\n }\n column_names_to_ids = table.get_column_name_id_bidirectional_map()\n records = paginator.paginate_queryset(\n table,\n request,\n table,\n column_names_to_ids,\n filters=record_filters\n )\n if not records:\n raise NotFound\n serializer = RecordSerializer(\n records,\n many=True,\n context=self.get_serializer_context(table)\n )\n return paginator.get_paginated_response(serializer.data)\n\n def create(self, request, table_pk=None):\n table = get_table_or_404(table_pk)\n serializer = 
RecordSerializer(data=request.data, context=self.get_serializer_context(table))\n serializer.is_valid(raise_exception=True)\n serializer.save()\n # TODO refactor to use serializer for more DRY response logic\n column_name_id_map = table.get_column_name_id_bidirectional_map()\n table_pk_column_id = column_name_id_map[table.primary_key_column_name]\n pk_value = serializer.data[table_pk_column_id]\n paginator = TableLimitOffsetPagination()\n record_filters = {\n \"equal\": [\n {\"column_name\": [table.primary_key_column_name]},\n {\"literal\": [pk_value]}\n ]\n }\n column_names_to_ids = table.get_column_name_id_bidirectional_map()\n records = paginator.paginate_queryset(\n table,\n request,\n table,\n column_names_to_ids,\n filters=record_filters\n )\n serializer = RecordSerializer(\n records,\n many=True,\n context=self.get_serializer_context(table)\n )\n response = paginator.get_paginated_response(serializer.data)\n response.status_code = status.HTTP_201_CREATED\n return response\n\n def partial_update(self, request, pk=None, table_pk=None):\n table = get_table_or_404(table_pk)\n serializer = RecordSerializer(\n {'id': pk},\n data=request.data,\n context=self.get_serializer_context(table),\n partial=True\n )\n serializer.is_valid(raise_exception=True)\n serializer.save()\n # TODO refactor to use serializer for more DRY response logic\n paginator = TableLimitOffsetPagination()\n record_filters = {\n \"equal\": [\n {\"column_name\": [table.primary_key_column_name]},\n {\"literal\": [pk]}\n ]\n }\n column_names_to_ids = table.get_column_name_id_bidirectional_map()\n records = paginator.paginate_queryset(\n table,\n request,\n table,\n column_names_to_ids,\n filters=record_filters\n )\n serializer = RecordSerializer(\n records,\n many=True,\n context=self.get_serializer_context(table)\n )\n return paginator.get_paginated_response(serializer.data)\n\n def destroy(self, request, pk=None, table_pk=None):\n table = get_table_or_404(table_pk)\n if table.get_record(pk) is None:\n raise generic_api_exceptions.NotFoundAPIException(\n NotFound,\n error_code=ErrorCodes.RecordNotFound.value,\n message=\"Record doesn't exist\"\n )\n try:\n table.delete_record(pk)\n except IntegrityError as e:\n if isinstance(e.orig, ForeignKeyViolation):\n raise database_api_exceptions.ForeignKeyViolationAPIException(\n e,\n status_code=status.HTTP_400_BAD_REQUEST,\n referent_table=table,\n )\n\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n def get_serializer_context(self, table):\n columns_map = table.get_column_name_id_bidirectional_map()\n context = {'columns_map': columns_map, 'table': table}\n return context\n", "path": "mathesar/api/db/viewsets/records.py"}], "after_files": [{"content": "from psycopg2.errors import ForeignKeyViolation, InvalidDatetimeFormat\nfrom rest_access_policy import AccessViewSetMixin\nfrom rest_framework import status, viewsets\nfrom rest_framework.exceptions import NotFound\nfrom rest_framework.renderers import BrowsableAPIRenderer\nfrom rest_framework.response import Response\nfrom sqlalchemy.exc import IntegrityError, DataError\n\nfrom mathesar.api.db.permissions.records import RecordAccessPolicy\nfrom mathesar.api.exceptions.error_codes import ErrorCodes\nimport mathesar.api.exceptions.database_exceptions.exceptions as database_api_exceptions\nimport mathesar.api.exceptions.generic_exceptions.base_exceptions as generic_api_exceptions\nfrom db.functions.exceptions import (\n BadDBFunctionFormat, ReferencedColumnsDontExist, UnknownDBFunctionID,\n)\nfrom db.records.exceptions import 
(\n BadGroupFormat, GroupFieldNotFound, InvalidGroupType, UndefinedFunction,\n BadSortFormat, SortFieldNotFound\n)\nfrom mathesar.api.pagination import TableLimitOffsetPagination\nfrom mathesar.api.serializers.records import RecordListParameterSerializer, RecordSerializer\nfrom mathesar.api.utils import get_table_or_404\nfrom mathesar.functions.operations.convert import rewrite_db_function_spec_column_ids_to_names\nfrom mathesar.models.base import Table\nfrom mathesar.utils.json import MathesarJSONRenderer\n\n\nclass RecordViewSet(AccessViewSetMixin, viewsets.ViewSet):\n access_policy = RecordAccessPolicy\n\n # There is no 'update' method.\n # We're not supporting PUT requests because there aren't a lot of use cases\n # where the entire record needs to be replaced, PATCH suffices for updates.\n def get_queryset(self):\n return Table.objects.all().order_by('-created_at')\n\n renderer_classes = [MathesarJSONRenderer, BrowsableAPIRenderer]\n\n # For filter parameter formatting, see:\n # db/functions/operations/deserialize.py::get_db_function_from_ma_function_spec function doc>\n # For sorting parameter formatting, see:\n # https://github.com/centerofci/sqlalchemy-filters#sort-format\n def list(self, request, table_pk=None):\n paginator = TableLimitOffsetPagination()\n\n serializer = RecordListParameterSerializer(data=request.GET)\n serializer.is_valid(raise_exception=True)\n table = get_table_or_404(table_pk)\n\n filter_unprocessed = serializer.validated_data['filter']\n order_by = serializer.validated_data['order_by']\n grouping = serializer.validated_data['grouping']\n search_fuzzy = serializer.validated_data['search_fuzzy']\n filter_processed = None\n column_names_to_ids = table.get_column_name_id_bidirectional_map()\n column_ids_to_names = column_names_to_ids.inverse\n if filter_unprocessed:\n filter_processed = rewrite_db_function_spec_column_ids_to_names(\n column_ids_to_names=column_ids_to_names,\n spec=filter_unprocessed,\n )\n # Replace column id value used in the `field` property with column name\n name_converted_group_by = None\n if grouping:\n group_by_columns_names = [column_ids_to_names[column_id] for column_id in grouping['columns']]\n name_converted_group_by = {**grouping, 'columns': group_by_columns_names}\n name_converted_order_by = [{**column, 'field': column_ids_to_names[column['field']]} for column in order_by]\n name_converted_search = [{**column, 'column': column_ids_to_names[column['field']]} for column in search_fuzzy]\n\n try:\n\n records = paginator.paginate_queryset(\n self.get_queryset(), request, table, column_names_to_ids,\n filters=filter_processed,\n order_by=name_converted_order_by,\n grouping=name_converted_group_by,\n search=name_converted_search,\n duplicate_only=serializer.validated_data['duplicate_only']\n )\n except (BadDBFunctionFormat, UnknownDBFunctionID, ReferencedColumnsDontExist) as e:\n raise database_api_exceptions.BadFilterAPIException(\n e,\n field='filters',\n status_code=status.HTTP_400_BAD_REQUEST\n )\n except (BadSortFormat, SortFieldNotFound) as e:\n raise database_api_exceptions.BadSortAPIException(\n e,\n field='order_by',\n status_code=status.HTTP_400_BAD_REQUEST\n )\n except (BadGroupFormat, GroupFieldNotFound, InvalidGroupType) as e:\n raise database_api_exceptions.BadGroupAPIException(\n e,\n field='grouping',\n status_code=status.HTTP_400_BAD_REQUEST\n )\n except UndefinedFunction as e:\n raise database_api_exceptions.UndefinedFunctionAPIException(\n e,\n details=e.args[0],\n status_code=status.HTTP_400_BAD_REQUEST\n )\n except 
DataError as e:\n if isinstance(e.orig, InvalidDatetimeFormat):\n raise database_api_exceptions.InvalidDateFormatAPIException(\n e,\n status_code=status.HTTP_400_BAD_REQUEST,\n )\n\n serializer = RecordSerializer(\n records,\n many=True,\n context=self.get_serializer_context(table)\n )\n return paginator.get_paginated_response(serializer.data)\n\n def retrieve(self, request, pk=None, table_pk=None):\n table = get_table_or_404(table_pk)\n # TODO refactor to use serializer for more DRY response logic\n paginator = TableLimitOffsetPagination()\n record_filters = {\n \"equal\": [\n {\"column_name\": [table.primary_key_column_name]},\n {\"literal\": [pk]}\n ]\n }\n column_names_to_ids = table.get_column_name_id_bidirectional_map()\n records = paginator.paginate_queryset(\n table,\n request,\n table,\n column_names_to_ids,\n filters=record_filters\n )\n if not records:\n raise NotFound\n serializer = RecordSerializer(\n records,\n many=True,\n context=self.get_serializer_context(table)\n )\n return paginator.get_paginated_response(serializer.data)\n\n def create(self, request, table_pk=None):\n table = get_table_or_404(table_pk)\n serializer = RecordSerializer(data=request.data, context=self.get_serializer_context(table))\n serializer.is_valid(raise_exception=True)\n serializer.save()\n # TODO refactor to use serializer for more DRY response logic\n column_name_id_map = table.get_column_name_id_bidirectional_map()\n table_pk_column_id = column_name_id_map[table.primary_key_column_name]\n pk_value = serializer.data[table_pk_column_id]\n paginator = TableLimitOffsetPagination()\n record_filters = {\n \"equal\": [\n {\"column_name\": [table.primary_key_column_name]},\n {\"literal\": [pk_value]}\n ]\n }\n column_names_to_ids = table.get_column_name_id_bidirectional_map()\n records = paginator.paginate_queryset(\n table,\n request,\n table,\n column_names_to_ids,\n filters=record_filters\n )\n serializer = RecordSerializer(\n records,\n many=True,\n context=self.get_serializer_context(table)\n )\n response = paginator.get_paginated_response(serializer.data)\n response.status_code = status.HTTP_201_CREATED\n return response\n\n def partial_update(self, request, pk=None, table_pk=None):\n table = get_table_or_404(table_pk)\n serializer = RecordSerializer(\n {'id': pk},\n data=request.data,\n context=self.get_serializer_context(table),\n partial=True\n )\n serializer.is_valid(raise_exception=True)\n serializer.save()\n # TODO refactor to use serializer for more DRY response logic\n paginator = TableLimitOffsetPagination()\n record_filters = {\n \"equal\": [\n {\"column_name\": [table.primary_key_column_name]},\n {\"literal\": [pk]}\n ]\n }\n column_names_to_ids = table.get_column_name_id_bidirectional_map()\n records = paginator.paginate_queryset(\n table,\n request,\n table,\n column_names_to_ids,\n filters=record_filters\n )\n serializer = RecordSerializer(\n records,\n many=True,\n context=self.get_serializer_context(table)\n )\n return paginator.get_paginated_response(serializer.data)\n\n def destroy(self, request, pk=None, table_pk=None):\n table = get_table_or_404(table_pk)\n if table.get_record(pk) is None:\n raise generic_api_exceptions.NotFoundAPIException(\n NotFound,\n error_code=ErrorCodes.RecordNotFound.value,\n message=\"Record doesn't exist\"\n )\n try:\n table.delete_record(pk)\n except IntegrityError as e:\n if isinstance(e.orig, ForeignKeyViolation):\n raise database_api_exceptions.ForeignKeyViolationAPIException(\n e,\n status_code=status.HTTP_400_BAD_REQUEST,\n 
referent_table=table,\n )\n\n return Response(status=status.HTTP_204_NO_CONTENT)\n\n def get_serializer_context(self, table):\n columns_map = table.get_column_name_id_bidirectional_map()\n context = {'columns_map': columns_map, 'table': table}\n return context\n", "path": "mathesar/api/db/viewsets/records.py"}]}
| 2,811 | 265 |
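The Mathesar record above resolves the HTTP 500 by catching SQLAlchemy's `DataError` and checking whether the underlying driver error is psycopg2's `InvalidDatetimeFormat`. The snippet below is a minimal, self-contained sketch of that pattern only; `InvalidDateFormatError` and `run_filtered_query` are hypothetical names introduced for illustration and are not part of Mathesar.

```python
# Sketch of the error-translation pattern from the golden diff above.
# SQLAlchemy wraps the driver-level psycopg2 exception and exposes it as
# `e.orig`, so the handler inspects e.orig to decide whether the failure
# came from a malformed date literal such as "2".
from psycopg2.errors import InvalidDatetimeFormat
from sqlalchemy.exc import DataError


class InvalidDateFormatError(Exception):
    """Hypothetical API-level error carrying the HTTP status to return."""

    def __init__(self, original, status_code=400):
        super().__init__(str(original))
        self.status_code = status_code


def run_filtered_query(execute):
    """Run a callable that executes the filtered query, translating errors."""
    try:
        return execute()
    except DataError as e:
        if isinstance(e.orig, InvalidDatetimeFormat):
            raise InvalidDateFormatError(e, status_code=400)
        raise
```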
gh_patches_debug_23967 | rasdani/github-patches | git_diff | pwr-Solaar__Solaar-1856 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add git information in about and console
**Information**
Installed version of solaar:: 1.1.7+git1897-9c361c4~ubuntu22.10.1
Distribution: Ubuntu 22.10 + https://launchpad.net/~solaar-unifying/+archive/ubuntu/ppa (git)
Kernel version: Linux 6.1.0-060100rc5-generic x86_64 GNU/Linux
Example:
a)

b)
$ solaar show
Solaar version 1.1.7-9c361c4
[...]
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python3
2
3 from glob import glob as _glob
4
5 try:
6 from setuptools import setup
7 except ImportError:
8 from distutils.core import setup
9
10 main_ns = {}
11 with open('lib/solaar/__init__.py') as ver_file:
12 exec(ver_file.read(), main_ns)
13 NAME = main_ns['NAME']
14 __version__ = main_ns['__version__']
15
16
17 def _data_files():
18 from os.path import dirname as _dirname
19
20 yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')
21 yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')
22 yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']
23
24 for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):
25 yield _dirname(mo), [mo]
26
27 yield 'share/applications', ['share/applications/solaar.desktop']
28 yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']
29 yield 'share/metainfo', ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml']
30
31 del _dirname
32
33
34 setup(
35 name=NAME.lower(),
36 version=__version__,
37 description='Linux device manager for Logitech receivers, keyboards, mice, and tablets.',
38 long_description='''
39 Solaar is a Linux device manager for many Logitech peripherals that connect through
40 Unifying and other receivers or via USB or Bluetooth.
41 Solaar is able to pair/unpair devices with receivers and show and modify some of the
42 modifiable features of devices.
43 For instructions on installing Solaar see https://pwr-solaar.github.io/Solaar/installation'''.strip(),
44 author='Daniel Pavel',
45 license='GPLv2',
46 url='http://pwr-solaar.github.io/Solaar/',
47 classifiers=[
48 'Development Status :: 4 - Beta',
49 'Environment :: X11 Applications :: GTK',
50 'Environment :: Console',
51 'Intended Audience :: End Users/Desktop',
52 'License :: DFSG approved',
53 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',
54 'Natural Language :: English',
55 'Programming Language :: Python :: 3 :: Only',
56 'Operating System :: POSIX :: Linux',
57 'Topic :: Utilities',
58 ],
59 platforms=['linux'],
60
61 # sudo apt install python-gi python3-gi \
62 # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1
63 # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],
64 python_requires='>=3.7',
65 install_requires=[
66 'evdev (>= 1.1.2)',
67 'pyudev (>= 0.13)',
68 'PyYAML (>= 3.12)',
69 'python-xlib (>= 0.27)',
70 'psutil (>= 5.4.3)',
71 ],
72 extras_require={
73 'report-descriptor': ['hid-parser'],
74 'desktop-notifications': ['Notify (>= 0.7)'],
75 },
76 package_dir={'': 'lib'},
77 packages=['keysyms', 'hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],
78 data_files=list(_data_files()),
79 scripts=_glob('bin/*'),
80 )
81
```
Path: `lib/solaar/__init__.py`
Content:
```
1 # -*- python-mode -*-
2
3 ## Copyright (C) 2012-2013 Daniel Pavel
4 ##
5 ## This program is free software; you can redistribute it and/or modify
6 ## it under the terms of the GNU General Public License as published by
7 ## the Free Software Foundation; either version 2 of the License, or
8 ## (at your option) any later version.
9 ##
10 ## This program is distributed in the hope that it will be useful,
11 ## but WITHOUT ANY WARRANTY; without even the implied warranty of
12 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 ## GNU General Public License for more details.
14 ##
15 ## You should have received a copy of the GNU General Public License along
16 ## with this program; if not, write to the Free Software Foundation, Inc.,
17 ## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
18
19 __version__ = '1.1.7'
20 NAME = 'Solaar'
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lib/solaar/__init__.py b/lib/solaar/__init__.py
--- a/lib/solaar/__init__.py
+++ b/lib/solaar/__init__.py
@@ -16,5 +16,11 @@
## with this program; if not, write to the Free Software Foundation, Inc.,
## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
-__version__ = '1.1.7'
+__version_short__ = '1.1.7'
NAME = 'Solaar'
+
+try:
+ import gitinfo
+ __version__ = __version_short__ + '-' + gitinfo.get_git_info()['commit'][:8]
+except Exception:
+ __version__ = __version_short__
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -11,7 +11,7 @@
with open('lib/solaar/__init__.py') as ver_file:
exec(ver_file.read(), main_ns)
NAME = main_ns['NAME']
-__version__ = main_ns['__version__']
+__version__ = main_ns['__version_short__']
def _data_files():
@@ -72,6 +72,7 @@
extras_require={
'report-descriptor': ['hid-parser'],
'desktop-notifications': ['Notify (>= 0.7)'],
+ 'git-commit': ['python-git-info'],
},
package_dir={'': 'lib'},
packages=['keysyms', 'hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],
|
{"golden_diff": "diff --git a/lib/solaar/__init__.py b/lib/solaar/__init__.py\n--- a/lib/solaar/__init__.py\n+++ b/lib/solaar/__init__.py\n@@ -16,5 +16,11 @@\n ## with this program; if not, write to the Free Software Foundation, Inc.,\n ## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n \n-__version__ = '1.1.7'\n+__version_short__ = '1.1.7'\n NAME = 'Solaar'\n+\n+try:\n+ import gitinfo\n+ __version__ = __version_short__ + '-' + gitinfo.get_git_info()['commit'][:8]\n+except Exception:\n+ __version__ = __version_short__\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -11,7 +11,7 @@\n with open('lib/solaar/__init__.py') as ver_file:\n exec(ver_file.read(), main_ns)\n NAME = main_ns['NAME']\n-__version__ = main_ns['__version__']\n+__version__ = main_ns['__version_short__']\n \n \n def _data_files():\n@@ -72,6 +72,7 @@\n extras_require={\n 'report-descriptor': ['hid-parser'],\n 'desktop-notifications': ['Notify (>= 0.7)'],\n+ 'git-commit': ['python-git-info'],\n },\n package_dir={'': 'lib'},\n packages=['keysyms', 'hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n", "issue": "Add git information in about and console\n**Information**\r\nInstalled version of solaar:: 1.1.7+git1897-9c361c4~ubuntu22.10.1\r\nDistribution: Ubuntu 22.10 + https://launchpad.net/~solaar-unifying/+archive/ubuntu/ppa (git)\r\nKernel version: Linux 6.1.0-060100rc5-generic x86_64 GNU/Linux\r\n\r\nExample:\r\na)\r\n\r\n\r\nb)\r\n$ solaar show\r\nSolaar version 1.1.7-9c361c4\r\n\r\n[...]\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nfrom glob import glob as _glob\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nmain_ns = {}\nwith open('lib/solaar/__init__.py') as ver_file:\n exec(ver_file.read(), main_ns)\nNAME = main_ns['NAME']\n__version__ = main_ns['__version__']\n\n\ndef _data_files():\n from os.path import dirname as _dirname\n\n yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')\n yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n yield _dirname(mo), [mo]\n\n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n yield 'share/metainfo', ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml']\n\n del _dirname\n\n\nsetup(\n name=NAME.lower(),\n version=__version__,\n description='Linux device manager for Logitech receivers, keyboards, mice, and tablets.',\n long_description='''\nSolaar is a Linux device manager for many Logitech peripherals that connect through\nUnifying and other receivers or via USB or Bluetooth.\nSolaar is able to pair/unpair devices with receivers and show and modify some of the\nmodifiable features of devices.\nFor instructions on installing Solaar see https://pwr-solaar.github.io/Solaar/installation'''.strip(),\n author='Daniel Pavel',\n license='GPLv2',\n url='http://pwr-solaar.github.io/Solaar/',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: GTK',\n 'Environment :: Console',\n 'Intended Audience :: End Users/Desktop',\n 'License :: DFSG approved',\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3 :: Only',\n 'Operating System :: POSIX :: Linux',\n 'Topic :: Utilities',\n ],\n 
platforms=['linux'],\n\n # sudo apt install python-gi python3-gi \\\n # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1\n # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n python_requires='>=3.7',\n install_requires=[\n 'evdev (>= 1.1.2)',\n 'pyudev (>= 0.13)',\n 'PyYAML (>= 3.12)',\n 'python-xlib (>= 0.27)',\n 'psutil (>= 5.4.3)',\n ],\n extras_require={\n 'report-descriptor': ['hid-parser'],\n 'desktop-notifications': ['Notify (>= 0.7)'],\n },\n package_dir={'': 'lib'},\n packages=['keysyms', 'hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n data_files=list(_data_files()),\n scripts=_glob('bin/*'),\n)\n", "path": "setup.py"}, {"content": "# -*- python-mode -*-\n\n## Copyright (C) 2012-2013 Daniel Pavel\n##\n## This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n##\n## This program is distributed in the hope that it will be useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n## GNU General Public License for more details.\n##\n## You should have received a copy of the GNU General Public License along\n## with this program; if not, write to the Free Software Foundation, Inc.,\n## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\n__version__ = '1.1.7'\nNAME = 'Solaar'\n", "path": "lib/solaar/__init__.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nfrom glob import glob as _glob\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nmain_ns = {}\nwith open('lib/solaar/__init__.py') as ver_file:\n exec(ver_file.read(), main_ns)\nNAME = main_ns['NAME']\n__version__ = main_ns['__version_short__']\n\n\ndef _data_files():\n from os.path import dirname as _dirname\n\n yield 'share/solaar/icons', _glob('share/solaar/icons/solaar*.svg')\n yield 'share/solaar/icons', _glob('share/solaar/icons/light_*.png')\n yield 'share/icons/hicolor/scalable/apps', ['share/solaar/icons/solaar.svg']\n\n for mo in _glob('share/locale/*/LC_MESSAGES/solaar.mo'):\n yield _dirname(mo), [mo]\n\n yield 'share/applications', ['share/applications/solaar.desktop']\n yield 'share/solaar/udev-rules.d', ['rules.d/42-logitech-unify-permissions.rules']\n yield 'share/metainfo', ['share/solaar/io.github.pwr_solaar.solaar.metainfo.xml']\n\n del _dirname\n\n\nsetup(\n name=NAME.lower(),\n version=__version__,\n description='Linux device manager for Logitech receivers, keyboards, mice, and tablets.',\n long_description='''\nSolaar is a Linux device manager for many Logitech peripherals that connect through\nUnifying and other receivers or via USB or Bluetooth.\nSolaar is able to pair/unpair devices with receivers and show and modify some of the\nmodifiable features of devices.\nFor instructions on installing Solaar see https://pwr-solaar.github.io/Solaar/installation'''.strip(),\n author='Daniel Pavel',\n license='GPLv2',\n url='http://pwr-solaar.github.io/Solaar/',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: X11 Applications :: GTK',\n 'Environment :: Console',\n 'Intended Audience :: End Users/Desktop',\n 'License :: DFSG approved',\n 'License :: OSI Approved :: GNU General Public License v2 (GPLv2)',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3 :: Only',\n 
'Operating System :: POSIX :: Linux',\n 'Topic :: Utilities',\n ],\n platforms=['linux'],\n\n # sudo apt install python-gi python3-gi \\\n # gir1.2-gtk-3.0 gir1.2-notify-0.7 gir1.2-ayatanaappindicator3-0.1\n # os_requires=['gi.repository.GObject (>= 2.0)', 'gi.repository.Gtk (>= 3.0)'],\n python_requires='>=3.7',\n install_requires=[\n 'evdev (>= 1.1.2)',\n 'pyudev (>= 0.13)',\n 'PyYAML (>= 3.12)',\n 'python-xlib (>= 0.27)',\n 'psutil (>= 5.4.3)',\n ],\n extras_require={\n 'report-descriptor': ['hid-parser'],\n 'desktop-notifications': ['Notify (>= 0.7)'],\n 'git-commit': ['python-git-info'],\n },\n package_dir={'': 'lib'},\n packages=['keysyms', 'hidapi', 'logitech_receiver', 'solaar', 'solaar.ui', 'solaar.cli'],\n data_files=list(_data_files()),\n scripts=_glob('bin/*'),\n)\n", "path": "setup.py"}, {"content": "# -*- python-mode -*-\n\n## Copyright (C) 2012-2013 Daniel Pavel\n##\n## This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n##\n## This program is distributed in the hope that it will be useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n## GNU General Public License for more details.\n##\n## You should have received a copy of the GNU General Public License along\n## with this program; if not, write to the Free Software Foundation, Inc.,\n## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\n__version_short__ = '1.1.7'\nNAME = 'Solaar'\n\ntry:\n import gitinfo\n __version__ = __version_short__ + '-' + gitinfo.get_git_info()['commit'][:8]\nexcept Exception:\n __version__ = __version_short__\n", "path": "lib/solaar/__init__.py"}]}
| 1,652 | 368 |
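The Solaar record above reports the git commit as part of the version string by trying the optional `gitinfo` module (distributed as `python-git-info`) and falling back to the plain version on any failure. A standalone sketch of that fallback, mirroring the golden diff; the printed value is illustrative.

```python
# Version string with an optional short git hash, as in the diff above.
# gitinfo.get_git_info() returns a dict with a 'commit' key inside a git
# checkout and None otherwise; subscripting None raises and triggers the
# fallback, so a plain tarball install still reports just "1.1.7".
__version_short__ = '1.1.7'

try:
    import gitinfo
    __version__ = __version_short__ + '-' + gitinfo.get_git_info()['commit'][:8]
except Exception:
    __version__ = __version_short__

print(__version__)  # "1.1.7-<first 8 hex digits of HEAD>" or "1.1.7"
```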
gh_patches_debug_37400 | rasdani/github-patches | git_diff | Project-MONAI__MONAI-866 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
sphinx documentation warnings
**Describe the bug**
building the documentation locally generates a couple of warnings:
```bash
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 16 source files that are out of date
updating environment: [new config] 16 added, 0 changed, 0 removed
/usr/local/anaconda3/envs/py36/lib/python3.6/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
/usr/local/anaconda3/envs/py36/lib/python3.6/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document
warn("Container node skipped: type={0}".format(mdnode.t))
reading sources... [100%] visualize
WARNING: autodoc: failed to import class 'handlers.CheckpointLoader' from module 'monai'; the following exception was raised:
cannot import name 'RemovableEventHandle'
WARNING: autodoc: failed to import class 'handlers.CheckpointSaver' from module 'monai'; the following exception was raised:
cannot import name 'RemovableEventHandle'
WARNING: autodoc: failed to import class 'handlers.ClassificationSaver' from module 'monai'; the following exception was raised:
cannot import name 'RemovableEventHandle'
WARNING: autodoc: failed to import class 'handlers.MeanDice' from module 'monai'; the following exception was raised:
cannot import name 'RemovableEventHandle'
WARNING: autodoc: failed to import class 'handlers.ROCAUC' from module 'monai'; the following exception was raised:
cannot import name 'RemovableEventHandle'
WARNING: autodoc: failed to import class 'handlers.MetricLogger' from module 'monai'; the following exception was raised:
cannot import name 'RemovableEventHandle'
WARNING: autodoc: failed to import class 'handlers.SegmentationSaver' from module 'monai'; the following exception was raised:
cannot import name 'RemovableEventHandle'
WARNING: autodoc: failed to import class 'handlers.StatsHandler' from module 'monai'; the following exception was raised:
cannot import name 'RemovableEventHandle'
WARNING: autodoc: failed to import class 'handlers.TensorBoardStatsHandler' from module 'monai'; the following exception was raised:
cannot import name 'RemovableEventHandle'
WARNING: autodoc: failed to import class 'handlers.LrScheduleHandler' from module 'monai'; the following exception was raised:
cannot import name 'RemovableEventHandle'
WARNING: autodoc: failed to import class 'handlers.ValidationHandler' from module 'monai'; the following exception was raised:
cannot import name 'RemovableEventHandle'
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
writing output... [100%] visualize
```
**To Reproduce**
Steps to reproduce the behavior:
1. Go to `MONAI/docs`
2. ` pip install -U -r requirements.txt` to install doc deps
3. Run `make clean; make html`
**Expected behavior**
no warnings in the building log
**Environment (please complete the following information):**
```
MONAI version: 0.2.0+77.g783f6a1
Python version: 3.6.10 |Anaconda, Inc.| (default, Mar 25 2020, 18:53:43) [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]
Numpy version: 1.19.1
Pytorch version: 1.6.0
Optional dependencies:
Pytorch Ignite version: 0.3.0
Nibabel version: 3.1.1
scikit-image version: 0.17.2
Pillow version: 7.2.0
Tensorboard version: 2.3.0
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `monai/handlers/metric_logger.py`
Content:
```
1 # Copyright 2020 MONAI Consortium
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 # http://www.apache.org/licenses/LICENSE-2.0
6 # Unless required by applicable law or agreed to in writing, software
7 # distributed under the License is distributed on an "AS IS" BASIS,
8 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
9 # See the License for the specific language governing permissions and
10 # limitations under the License.
11
12 from collections import defaultdict
13 from typing import TYPE_CHECKING, Callable, DefaultDict, List
14
15 from monai.utils import exact_version, optional_import
16
17 Events, _ = optional_import("ignite.engine", "0.3.0", exact_version, "Events")
18 if TYPE_CHECKING:
19 from ignite.engine import Engine, RemovableEventHandle
20 else:
21 Engine, _ = optional_import("ignite.engine", "0.3.0", exact_version, "Engine")
22 RemovableEventHandle, _ = optional_import("ignite.engine", "0.3.0", exact_version, "RemovableEventHandle")
23
24
25 class MetricLogger:
26 def __init__(self, loss_transform: Callable = lambda x: x, metric_transform: Callable = lambda x: x) -> None:
27 self.loss_transform = loss_transform
28 self.metric_transform = metric_transform
29 self.loss: List = []
30 self.metrics: DefaultDict = defaultdict(list)
31
32 def attach(self, engine: Engine) -> RemovableEventHandle:
33 """
34 Args:
35 engine: Ignite Engine, it can be a trainer, validator or evaluator.
36 """
37 return engine.add_event_handler(Events.ITERATION_COMPLETED, self)
38
39 def __call__(self, engine: Engine) -> None:
40 """
41 Args:
42 engine: Ignite Engine, it can be a trainer, validator or evaluator.
43 """
44 self.loss.append(self.loss_transform(engine.state.output))
45
46 for m, v in engine.state.metrics.items():
47 v = self.metric_transform(v)
48 # # metrics may not be added on the first timestep, pad the list if this is the case
49 # # so that each metric list is the same length as self.loss
50 # if len(self.metrics[m])==0:
51 # self.metrics[m].append([v[0]]*len(self.loss))
52
53 self.metrics[m].append(v)
54
55
56 metriclogger = MetricLogger
57
```
Path: `monai/handlers/checkpoint_loader.py`
Content:
```
1 # Copyright 2020 MONAI Consortium
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 # http://www.apache.org/licenses/LICENSE-2.0
6 # Unless required by applicable law or agreed to in writing, software
7 # distributed under the License is distributed on an "AS IS" BASIS,
8 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
9 # See the License for the specific language governing permissions and
10 # limitations under the License.
11
12 import logging
13 from typing import TYPE_CHECKING, Dict, Optional
14
15 import torch
16
17 from monai.utils import exact_version, optional_import
18
19 Events, _ = optional_import("ignite.engine", "0.3.0", exact_version, "Events")
20 Checkpoint, _ = optional_import("ignite.handlers", "0.3.0", exact_version, "Checkpoint")
21 if TYPE_CHECKING:
22 from ignite.engine import Engine, RemovableEventHandle
23 else:
24 Engine, _ = optional_import("ignite.engine", "0.3.0", exact_version, "Engine")
25 RemovableEventHandle, _ = optional_import("ignite.engine", "0.3.0", exact_version, "RemovableEventHandle")
26
27
28 class CheckpointLoader:
29 """
30 CheckpointLoader acts as an Ignite handler to load checkpoint data from file.
31 It can load variables for network, optimizer, lr_scheduler, etc.
32 If saving checkpoint after `torch.nn.DataParallel`, need to save `model.module` instead
33 as PyTorch recommended and then use this loader to load the model.
34
35 Args:
36 load_path: the file path of checkpoint, it should be a PyTorch `pth` file.
37 load_dict: target objects that load checkpoint to. examples::
38
39 {'network': net, 'optimizer': optimizer, 'lr_scheduler': lr_scheduler}
40
41 name: identifier of logging.logger to use, if None, defaulting to ``engine.logger``.
42
43 """
44
45 def __init__(self, load_path: str, load_dict: Dict, name: Optional[str] = None) -> None:
46 assert load_path is not None, "must provide clear path to load checkpoint."
47 self.load_path = load_path
48 assert load_dict is not None and len(load_dict) > 0, "must provide target objects to load."
49 self.logger = logging.getLogger(name)
50 for k, v in load_dict.items():
51 if hasattr(v, "module"):
52 load_dict[k] = v.module
53 self.load_dict = load_dict
54
55 self._name = name
56
57 def attach(self, engine: Engine) -> RemovableEventHandle:
58 """
59 Args:
60 engine: Ignite Engine, it can be a trainer, validator or evaluator.
61 """
62 if self._name is None:
63 self.logger = engine.logger
64 return engine.add_event_handler(Events.STARTED, self)
65
66 def __call__(self, engine: Engine) -> None:
67 """
68 Args:
69 engine: Ignite Engine, it can be a trainer, validator or evaluator.
70 """
71 checkpoint = torch.load(self.load_path)
72 if len(self.load_dict) == 1:
73 key = list(self.load_dict.keys())[0]
74 if not (key in checkpoint):
75 checkpoint = {key: checkpoint}
76
77 Checkpoint.load_objects(to_load=self.load_dict, checkpoint=checkpoint)
78 self.logger.info(f"Restored all variables from {self.load_path}")
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/monai/handlers/checkpoint_loader.py b/monai/handlers/checkpoint_loader.py
--- a/monai/handlers/checkpoint_loader.py
+++ b/monai/handlers/checkpoint_loader.py
@@ -19,10 +19,9 @@
Events, _ = optional_import("ignite.engine", "0.3.0", exact_version, "Events")
Checkpoint, _ = optional_import("ignite.handlers", "0.3.0", exact_version, "Checkpoint")
if TYPE_CHECKING:
- from ignite.engine import Engine, RemovableEventHandle
+ from ignite.engine import Engine
else:
Engine, _ = optional_import("ignite.engine", "0.3.0", exact_version, "Engine")
- RemovableEventHandle, _ = optional_import("ignite.engine", "0.3.0", exact_version, "RemovableEventHandle")
class CheckpointLoader:
@@ -54,14 +53,14 @@
self._name = name
- def attach(self, engine: Engine) -> RemovableEventHandle:
+ def attach(self, engine: Engine) -> None:
"""
Args:
engine: Ignite Engine, it can be a trainer, validator or evaluator.
"""
if self._name is None:
self.logger = engine.logger
- return engine.add_event_handler(Events.STARTED, self)
+ engine.add_event_handler(Events.STARTED, self)
def __call__(self, engine: Engine) -> None:
"""
diff --git a/monai/handlers/metric_logger.py b/monai/handlers/metric_logger.py
--- a/monai/handlers/metric_logger.py
+++ b/monai/handlers/metric_logger.py
@@ -16,10 +16,9 @@
Events, _ = optional_import("ignite.engine", "0.3.0", exact_version, "Events")
if TYPE_CHECKING:
- from ignite.engine import Engine, RemovableEventHandle
+ from ignite.engine import Engine
else:
Engine, _ = optional_import("ignite.engine", "0.3.0", exact_version, "Engine")
- RemovableEventHandle, _ = optional_import("ignite.engine", "0.3.0", exact_version, "RemovableEventHandle")
class MetricLogger:
@@ -29,12 +28,12 @@
self.loss: List = []
self.metrics: DefaultDict = defaultdict(list)
- def attach(self, engine: Engine) -> RemovableEventHandle:
+ def attach(self, engine: Engine) -> None:
"""
Args:
engine: Ignite Engine, it can be a trainer, validator or evaluator.
"""
- return engine.add_event_handler(Events.ITERATION_COMPLETED, self)
+ engine.add_event_handler(Events.ITERATION_COMPLETED, self)
def __call__(self, engine: Engine) -> None:
"""
|
{"golden_diff": "diff --git a/monai/handlers/checkpoint_loader.py b/monai/handlers/checkpoint_loader.py\n--- a/monai/handlers/checkpoint_loader.py\n+++ b/monai/handlers/checkpoint_loader.py\n@@ -19,10 +19,9 @@\n Events, _ = optional_import(\"ignite.engine\", \"0.3.0\", exact_version, \"Events\")\n Checkpoint, _ = optional_import(\"ignite.handlers\", \"0.3.0\", exact_version, \"Checkpoint\")\n if TYPE_CHECKING:\n- from ignite.engine import Engine, RemovableEventHandle\n+ from ignite.engine import Engine\n else:\n Engine, _ = optional_import(\"ignite.engine\", \"0.3.0\", exact_version, \"Engine\")\n- RemovableEventHandle, _ = optional_import(\"ignite.engine\", \"0.3.0\", exact_version, \"RemovableEventHandle\")\n \n \n class CheckpointLoader:\n@@ -54,14 +53,14 @@\n \n self._name = name\n \n- def attach(self, engine: Engine) -> RemovableEventHandle:\n+ def attach(self, engine: Engine) -> None:\n \"\"\"\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n if self._name is None:\n self.logger = engine.logger\n- return engine.add_event_handler(Events.STARTED, self)\n+ engine.add_event_handler(Events.STARTED, self)\n \n def __call__(self, engine: Engine) -> None:\n \"\"\"\ndiff --git a/monai/handlers/metric_logger.py b/monai/handlers/metric_logger.py\n--- a/monai/handlers/metric_logger.py\n+++ b/monai/handlers/metric_logger.py\n@@ -16,10 +16,9 @@\n \n Events, _ = optional_import(\"ignite.engine\", \"0.3.0\", exact_version, \"Events\")\n if TYPE_CHECKING:\n- from ignite.engine import Engine, RemovableEventHandle\n+ from ignite.engine import Engine\n else:\n Engine, _ = optional_import(\"ignite.engine\", \"0.3.0\", exact_version, \"Engine\")\n- RemovableEventHandle, _ = optional_import(\"ignite.engine\", \"0.3.0\", exact_version, \"RemovableEventHandle\")\n \n \n class MetricLogger:\n@@ -29,12 +28,12 @@\n self.loss: List = []\n self.metrics: DefaultDict = defaultdict(list)\n \n- def attach(self, engine: Engine) -> RemovableEventHandle:\n+ def attach(self, engine: Engine) -> None:\n \"\"\"\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n- return engine.add_event_handler(Events.ITERATION_COMPLETED, self)\n+ engine.add_event_handler(Events.ITERATION_COMPLETED, self)\n \n def __call__(self, engine: Engine) -> None:\n \"\"\"\n", "issue": "sphinx documentation warnings\n**Describe the bug**\r\nbuilding the documentation locally generates a couple of warnings:\r\n```bash\r\nbuilding [mo]: targets for 0 po files that are out of date\r\nbuilding [html]: targets for 16 source files that are out of date\r\nupdating environment: [new config] 16 added, 0 changed, 0 removed\r\n/usr/local/anaconda3/envs/py36/lib/python3.6/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document\r\n warn(\"Container node skipped: type={0}\".format(mdnode.t))\r\n/usr/local/anaconda3/envs/py36/lib/python3.6/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document\r\n warn(\"Container node skipped: type={0}\".format(mdnode.t))\r\nreading sources... 
[100%] visualize \r\nWARNING: autodoc: failed to import class 'handlers.CheckpointLoader' from module 'monai'; the following exception was raised:\r\ncannot import name 'RemovableEventHandle'\r\nWARNING: autodoc: failed to import class 'handlers.CheckpointSaver' from module 'monai'; the following exception was raised:\r\ncannot import name 'RemovableEventHandle'\r\nWARNING: autodoc: failed to import class 'handlers.ClassificationSaver' from module 'monai'; the following exception was raised:\r\ncannot import name 'RemovableEventHandle'\r\nWARNING: autodoc: failed to import class 'handlers.MeanDice' from module 'monai'; the following exception was raised:\r\ncannot import name 'RemovableEventHandle'\r\nWARNING: autodoc: failed to import class 'handlers.ROCAUC' from module 'monai'; the following exception was raised:\r\ncannot import name 'RemovableEventHandle'\r\nWARNING: autodoc: failed to import class 'handlers.MetricLogger' from module 'monai'; the following exception was raised:\r\ncannot import name 'RemovableEventHandle'\r\nWARNING: autodoc: failed to import class 'handlers.SegmentationSaver' from module 'monai'; the following exception was raised:\r\ncannot import name 'RemovableEventHandle'\r\nWARNING: autodoc: failed to import class 'handlers.StatsHandler' from module 'monai'; the following exception was raised:\r\ncannot import name 'RemovableEventHandle'\r\nWARNING: autodoc: failed to import class 'handlers.TensorBoardStatsHandler' from module 'monai'; the following exception was raised:\r\ncannot import name 'RemovableEventHandle'\r\nWARNING: autodoc: failed to import class 'handlers.LrScheduleHandler' from module 'monai'; the following exception was raised:\r\ncannot import name 'RemovableEventHandle'\r\nWARNING: autodoc: failed to import class 'handlers.ValidationHandler' from module 'monai'; the following exception was raised:\r\ncannot import name 'RemovableEventHandle'\r\nlooking for now-outdated files... none found\r\npickling environment... done\r\nchecking consistency... done\r\npreparing documents... done\r\nwriting output... [100%] visualize \r\n```\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Go to `MONAI/docs`\r\n2. ` pip install -U -r requirements.txt` to install doc deps\r\n3. 
Run `make clean; make html`\r\n\r\n**Expected behavior**\r\nno warnings in the building log\r\n\r\n**Environment (please complete the following information):**\r\n```\r\nMONAI version: 0.2.0+77.g783f6a1\r\nPython version: 3.6.10 |Anaconda, Inc.| (default, Mar 25 2020, 18:53:43) [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]\r\nNumpy version: 1.19.1\r\nPytorch version: 1.6.0\r\n\r\nOptional dependencies:\r\nPytorch Ignite version: 0.3.0\r\nNibabel version: 3.1.1\r\nscikit-image version: 0.17.2\r\nPillow version: 7.2.0\r\nTensorboard version: 2.3.0\r\n```\r\n\n", "before_files": [{"content": "# Copyright 2020 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom collections import defaultdict\nfrom typing import TYPE_CHECKING, Callable, DefaultDict, List\n\nfrom monai.utils import exact_version, optional_import\n\nEvents, _ = optional_import(\"ignite.engine\", \"0.3.0\", exact_version, \"Events\")\nif TYPE_CHECKING:\n from ignite.engine import Engine, RemovableEventHandle\nelse:\n Engine, _ = optional_import(\"ignite.engine\", \"0.3.0\", exact_version, \"Engine\")\n RemovableEventHandle, _ = optional_import(\"ignite.engine\", \"0.3.0\", exact_version, \"RemovableEventHandle\")\n\n\nclass MetricLogger:\n def __init__(self, loss_transform: Callable = lambda x: x, metric_transform: Callable = lambda x: x) -> None:\n self.loss_transform = loss_transform\n self.metric_transform = metric_transform\n self.loss: List = []\n self.metrics: DefaultDict = defaultdict(list)\n\n def attach(self, engine: Engine) -> RemovableEventHandle:\n \"\"\"\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n return engine.add_event_handler(Events.ITERATION_COMPLETED, self)\n\n def __call__(self, engine: Engine) -> None:\n \"\"\"\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n self.loss.append(self.loss_transform(engine.state.output))\n\n for m, v in engine.state.metrics.items():\n v = self.metric_transform(v)\n # # metrics may not be added on the first timestep, pad the list if this is the case\n # # so that each metric list is the same length as self.loss\n # if len(self.metrics[m])==0:\n # self.metrics[m].append([v[0]]*len(self.loss))\n\n self.metrics[m].append(v)\n\n\nmetriclogger = MetricLogger\n", "path": "monai/handlers/metric_logger.py"}, {"content": "# Copyright 2020 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nfrom typing import TYPE_CHECKING, Dict, Optional\n\nimport 
torch\n\nfrom monai.utils import exact_version, optional_import\n\nEvents, _ = optional_import(\"ignite.engine\", \"0.3.0\", exact_version, \"Events\")\nCheckpoint, _ = optional_import(\"ignite.handlers\", \"0.3.0\", exact_version, \"Checkpoint\")\nif TYPE_CHECKING:\n from ignite.engine import Engine, RemovableEventHandle\nelse:\n Engine, _ = optional_import(\"ignite.engine\", \"0.3.0\", exact_version, \"Engine\")\n RemovableEventHandle, _ = optional_import(\"ignite.engine\", \"0.3.0\", exact_version, \"RemovableEventHandle\")\n\n\nclass CheckpointLoader:\n \"\"\"\n CheckpointLoader acts as an Ignite handler to load checkpoint data from file.\n It can load variables for network, optimizer, lr_scheduler, etc.\n If saving checkpoint after `torch.nn.DataParallel`, need to save `model.module` instead\n as PyTorch recommended and then use this loader to load the model.\n\n Args:\n load_path: the file path of checkpoint, it should be a PyTorch `pth` file.\n load_dict: target objects that load checkpoint to. examples::\n\n {'network': net, 'optimizer': optimizer, 'lr_scheduler': lr_scheduler}\n\n name: identifier of logging.logger to use, if None, defaulting to ``engine.logger``.\n\n \"\"\"\n\n def __init__(self, load_path: str, load_dict: Dict, name: Optional[str] = None) -> None:\n assert load_path is not None, \"must provide clear path to load checkpoint.\"\n self.load_path = load_path\n assert load_dict is not None and len(load_dict) > 0, \"must provide target objects to load.\"\n self.logger = logging.getLogger(name)\n for k, v in load_dict.items():\n if hasattr(v, \"module\"):\n load_dict[k] = v.module\n self.load_dict = load_dict\n\n self._name = name\n\n def attach(self, engine: Engine) -> RemovableEventHandle:\n \"\"\"\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n if self._name is None:\n self.logger = engine.logger\n return engine.add_event_handler(Events.STARTED, self)\n\n def __call__(self, engine: Engine) -> None:\n \"\"\"\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n checkpoint = torch.load(self.load_path)\n if len(self.load_dict) == 1:\n key = list(self.load_dict.keys())[0]\n if not (key in checkpoint):\n checkpoint = {key: checkpoint}\n\n Checkpoint.load_objects(to_load=self.load_dict, checkpoint=checkpoint)\n self.logger.info(f\"Restored all variables from {self.load_path}\")\n", "path": "monai/handlers/checkpoint_loader.py"}], "after_files": [{"content": "# Copyright 2020 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom collections import defaultdict\nfrom typing import TYPE_CHECKING, Callable, DefaultDict, List\n\nfrom monai.utils import exact_version, optional_import\n\nEvents, _ = optional_import(\"ignite.engine\", \"0.3.0\", exact_version, \"Events\")\nif TYPE_CHECKING:\n from ignite.engine import Engine\nelse:\n Engine, _ = optional_import(\"ignite.engine\", \"0.3.0\", exact_version, \"Engine\")\n\n\nclass MetricLogger:\n def __init__(self, loss_transform: Callable = lambda 
x: x, metric_transform: Callable = lambda x: x) -> None:\n self.loss_transform = loss_transform\n self.metric_transform = metric_transform\n self.loss: List = []\n self.metrics: DefaultDict = defaultdict(list)\n\n def attach(self, engine: Engine) -> None:\n \"\"\"\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n engine.add_event_handler(Events.ITERATION_COMPLETED, self)\n\n def __call__(self, engine: Engine) -> None:\n \"\"\"\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n self.loss.append(self.loss_transform(engine.state.output))\n\n for m, v in engine.state.metrics.items():\n v = self.metric_transform(v)\n # # metrics may not be added on the first timestep, pad the list if this is the case\n # # so that each metric list is the same length as self.loss\n # if len(self.metrics[m])==0:\n # self.metrics[m].append([v[0]]*len(self.loss))\n\n self.metrics[m].append(v)\n\n\nmetriclogger = MetricLogger\n", "path": "monai/handlers/metric_logger.py"}, {"content": "# Copyright 2020 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nfrom typing import TYPE_CHECKING, Dict, Optional\n\nimport torch\n\nfrom monai.utils import exact_version, optional_import\n\nEvents, _ = optional_import(\"ignite.engine\", \"0.3.0\", exact_version, \"Events\")\nCheckpoint, _ = optional_import(\"ignite.handlers\", \"0.3.0\", exact_version, \"Checkpoint\")\nif TYPE_CHECKING:\n from ignite.engine import Engine\nelse:\n Engine, _ = optional_import(\"ignite.engine\", \"0.3.0\", exact_version, \"Engine\")\n\n\nclass CheckpointLoader:\n \"\"\"\n CheckpointLoader acts as an Ignite handler to load checkpoint data from file.\n It can load variables for network, optimizer, lr_scheduler, etc.\n If saving checkpoint after `torch.nn.DataParallel`, need to save `model.module` instead\n as PyTorch recommended and then use this loader to load the model.\n\n Args:\n load_path: the file path of checkpoint, it should be a PyTorch `pth` file.\n load_dict: target objects that load checkpoint to. 
examples::\n\n {'network': net, 'optimizer': optimizer, 'lr_scheduler': lr_scheduler}\n\n name: identifier of logging.logger to use, if None, defaulting to ``engine.logger``.\n\n \"\"\"\n\n def __init__(self, load_path: str, load_dict: Dict, name: Optional[str] = None) -> None:\n assert load_path is not None, \"must provide clear path to load checkpoint.\"\n self.load_path = load_path\n assert load_dict is not None and len(load_dict) > 0, \"must provide target objects to load.\"\n self.logger = logging.getLogger(name)\n for k, v in load_dict.items():\n if hasattr(v, \"module\"):\n load_dict[k] = v.module\n self.load_dict = load_dict\n\n self._name = name\n\n def attach(self, engine: Engine) -> None:\n \"\"\"\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n if self._name is None:\n self.logger = engine.logger\n engine.add_event_handler(Events.STARTED, self)\n\n def __call__(self, engine: Engine) -> None:\n \"\"\"\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n checkpoint = torch.load(self.load_path)\n if len(self.load_dict) == 1:\n key = list(self.load_dict.keys())[0]\n if not (key in checkpoint):\n checkpoint = {key: checkpoint}\n\n Checkpoint.load_objects(to_load=self.load_dict, checkpoint=checkpoint)\n self.logger.info(f\"Restored all variables from {self.load_path}\")\n", "path": "monai/handlers/checkpoint_loader.py"}]}
| 2,727 | 645 |
gh_patches_debug_9534
|
rasdani/github-patches
|
git_diff
|
boto__boto-1824
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Documentation of security_groups argument doesn't mention VPC
The documentation for the 'security_groups' parameter for a LaunchConfiguration object could be better. As stated on:
http://docs.aws.amazon.com/cli/latest/reference/autoscaling/create-launch-configuration.html
"--security-groups (list)
The names of the security groups with which to associate Amazon EC2 or Amazon VPC instances. Specify Amazon EC2 security groups using security group names, such as websrv . Specify Amazon VPC security groups using security group IDs, such as sg-12345678 ."
In the Boto documentation for creating a launch configuration (http://boto.readthedocs.org/en/latest/ref/autoscale.html#boto.ec2.autoscale.launchconfig.LaunchConfiguration), the comment for the 'security_groups' parameter says:
"Names of the security groups with which to associate the EC2 instances."
In this comment, there is no mention of VPC instances or security group IDs as seen in Amazon's own docs. I lost about a day of productivity figuring this out, so I'm hoping that the documentation can be improved.
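
For illustration, a minimal sketch of the distinction being described, using the `LaunchConfiguration` constructor quoted below; the region, AMI ID, key name and group identifiers are placeholders, and the connection calls are assumed typical boto autoscale usage rather than anything taken from the original report:

```python
import boto.ec2.autoscale
from boto.ec2.autoscale import LaunchConfiguration

# Hypothetical connection; region and credentials are placeholders.
conn = boto.ec2.autoscale.connect_to_region('us-east-1')

# EC2-Classic instances: reference security groups by *name*.
classic_lc = LaunchConfiguration(
    name='classic-lc',
    image_id='ami-12345678',
    key_name='my-key',
    security_groups=['websrv'],       # group names
    instance_type='m1.small',
)

# VPC instances: reference security groups by *ID* (sg-xxxxxxxx).
vpc_lc = LaunchConfiguration(
    name='vpc-lc',
    image_id='ami-12345678',
    key_name='my-key',
    security_groups=['sg-12345678'],  # group IDs
    instance_type='m1.small',
)

conn.create_launch_configuration(classic_lc)
conn.create_launch_configuration(vpc_lc)
```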
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `boto/ec2/autoscale/launchconfig.py`
Content:
```
1 # Copyright (c) 2009 Reza Lotun http://reza.lotun.name/
2 # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining a
5 # copy of this software and associated documentation files (the
6 # "Software"), to deal in the Software without restriction, including
7 # without limitation the rights to use, copy, modify, merge, publish, dis-
8 # tribute, sublicense, and/or sell copies of the Software, and to permit
9 # persons to whom the Software is furnished to do so, subject to the fol-
10 # lowing conditions:
11 #
12 # The above copyright notice and this permission notice shall be included
13 # in all copies or substantial portions of the Software.
14 #
15 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
16 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
17 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
18 # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
19 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
21 # IN THE SOFTWARE.
22
23 from datetime import datetime
24 from boto.resultset import ResultSet
25 from boto.ec2.elb.listelement import ListElement
26 import boto.utils
27 import base64
28
29 # this should use the corresponding object from boto.ec2
30
31
32 class Ebs(object):
33 def __init__(self, connection=None, snapshot_id=None, volume_size=None):
34 self.connection = connection
35 self.snapshot_id = snapshot_id
36 self.volume_size = volume_size
37
38 def __repr__(self):
39 return 'Ebs(%s, %s)' % (self.snapshot_id, self.volume_size)
40
41 def startElement(self, name, attrs, connection):
42 pass
43
44 def endElement(self, name, value, connection):
45 if name == 'SnapshotId':
46 self.snapshot_id = value
47 elif name == 'VolumeSize':
48 self.volume_size = value
49
50
51 class InstanceMonitoring(object):
52 def __init__(self, connection=None, enabled='false'):
53 self.connection = connection
54 self.enabled = enabled
55
56 def __repr__(self):
57 return 'InstanceMonitoring(%s)' % self.enabled
58
59 def startElement(self, name, attrs, connection):
60 pass
61
62 def endElement(self, name, value, connection):
63 if name == 'Enabled':
64 self.enabled = value
65
66
67 # this should use the BlockDeviceMapping from boto.ec2.blockdevicemapping
68 class BlockDeviceMapping(object):
69 def __init__(self, connection=None, device_name=None, virtual_name=None):
70 self.connection = connection
71 self.device_name = None
72 self.virtual_name = None
73 self.ebs = None
74
75 def __repr__(self):
76 return 'BlockDeviceMapping(%s, %s)' % (self.device_name,
77 self.virtual_name)
78
79 def startElement(self, name, attrs, connection):
80 if name == 'Ebs':
81 self.ebs = Ebs(self)
82 return self.ebs
83
84 def endElement(self, name, value, connection):
85 if name == 'DeviceName':
86 self.device_name = value
87 elif name == 'VirtualName':
88 self.virtual_name = value
89
90
91 class LaunchConfiguration(object):
92 def __init__(self, connection=None, name=None, image_id=None,
93 key_name=None, security_groups=None, user_data=None,
94 instance_type='m1.small', kernel_id=None,
95 ramdisk_id=None, block_device_mappings=None,
96 instance_monitoring=False, spot_price=None,
97 instance_profile_name=None, ebs_optimized=False,
98 associate_public_ip_address=None):
99 """
100 A launch configuration.
101
102 :type name: str
103 :param name: Name of the launch configuration to create.
104
105 :type image_id: str
106 :param image_id: Unique ID of the Amazon Machine Image (AMI) which was
107 assigned during registration.
108
109 :type key_name: str
110 :param key_name: The name of the EC2 key pair.
111
112 :type security_groups: list
113 :param security_groups: Names of the security groups with which to
114 associate the EC2 instances.
115
116 :type user_data: str
117 :param user_data: The user data available to launched EC2 instances.
118
119 :type instance_type: str
120 :param instance_type: The instance type
121
122 :type kern_id: str
123 :param kern_id: Kernel id for instance
124
125 :type ramdisk_id: str
126 :param ramdisk_id: RAM disk id for instance
127
128 :type block_device_mappings: list
129 :param block_device_mappings: Specifies how block devices are exposed
130 for instances
131
132 :type instance_monitoring: bool
133 :param instance_monitoring: Whether instances in group are launched
134 with detailed monitoring.
135
136 :type spot_price: float
137 :param spot_price: The spot price you are bidding. Only applies
138 if you are building an autoscaling group with spot instances.
139
140 :type instance_profile_name: string
141 :param instance_profile_name: The name or the Amazon Resource
142 Name (ARN) of the instance profile associated with the IAM
143 role for the instance.
144
145 :type ebs_optimized: bool
146 :param ebs_optimized: Specifies whether the instance is optimized
147 for EBS I/O (true) or not (false).
148
149 :type associate_public_ip_address: bool
150 :param associate_public_ip_address: Used for Auto Scaling groups that launch instances into an Amazon Virtual Private Cloud.
151 Specifies whether to assign a public IP address to each instance launched in a Amazon VPC.
152 """
153 self.connection = connection
154 self.name = name
155 self.instance_type = instance_type
156 self.block_device_mappings = block_device_mappings
157 self.key_name = key_name
158 sec_groups = security_groups or []
159 self.security_groups = ListElement(sec_groups)
160 self.image_id = image_id
161 self.ramdisk_id = ramdisk_id
162 self.created_time = None
163 self.kernel_id = kernel_id
164 self.user_data = user_data
165 self.created_time = None
166 self.instance_monitoring = instance_monitoring
167 self.spot_price = spot_price
168 self.instance_profile_name = instance_profile_name
169 self.launch_configuration_arn = None
170 self.ebs_optimized = ebs_optimized
171 self.associate_public_ip_address = associate_public_ip_address
172
173 def __repr__(self):
174 return 'LaunchConfiguration:%s' % self.name
175
176 def startElement(self, name, attrs, connection):
177 if name == 'SecurityGroups':
178 return self.security_groups
179 elif name == 'BlockDeviceMappings':
180 self.block_device_mappings = ResultSet([('member',
181 BlockDeviceMapping)])
182 return self.block_device_mappings
183 elif name == 'InstanceMonitoring':
184 self.instance_monitoring = InstanceMonitoring(self)
185 return self.instance_monitoring
186
187 def endElement(self, name, value, connection):
188 if name == 'InstanceType':
189 self.instance_type = value
190 elif name == 'LaunchConfigurationName':
191 self.name = value
192 elif name == 'KeyName':
193 self.key_name = value
194 elif name == 'ImageId':
195 self.image_id = value
196 elif name == 'CreatedTime':
197 self.created_time = boto.utils.parse_ts(value)
198 elif name == 'KernelId':
199 self.kernel_id = value
200 elif name == 'RamdiskId':
201 self.ramdisk_id = value
202 elif name == 'UserData':
203 try:
204 self.user_data = base64.b64decode(value)
205 except TypeError:
206 self.user_data = value
207 elif name == 'LaunchConfigurationARN':
208 self.launch_configuration_arn = value
209 elif name == 'InstanceMonitoring':
210 self.instance_monitoring = value
211 elif name == 'SpotPrice':
212 self.spot_price = float(value)
213 elif name == 'IamInstanceProfile':
214 self.instance_profile_name = value
215 elif name == 'EbsOptimized':
216 self.ebs_optimized = True if value.lower() == 'true' else False
217 else:
218 setattr(self, name, value)
219
220 def delete(self):
221 """ Delete this launch configuration. """
222 return self.connection.delete_launch_configuration(self.name)
223
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/boto/ec2/autoscale/launchconfig.py b/boto/ec2/autoscale/launchconfig.py
--- a/boto/ec2/autoscale/launchconfig.py
+++ b/boto/ec2/autoscale/launchconfig.py
@@ -110,8 +110,9 @@
:param key_name: The name of the EC2 key pair.
:type security_groups: list
- :param security_groups: Names of the security groups with which to
- associate the EC2 instances.
+ :param security_groups: Names or security group id's of the security
+ groups with which to associate the EC2 instances or VPC instances,
+ respectively.
:type user_data: str
:param user_data: The user data available to launched EC2 instances.
|
{"golden_diff": "diff --git a/boto/ec2/autoscale/launchconfig.py b/boto/ec2/autoscale/launchconfig.py\n--- a/boto/ec2/autoscale/launchconfig.py\n+++ b/boto/ec2/autoscale/launchconfig.py\n@@ -110,8 +110,9 @@\n :param key_name: The name of the EC2 key pair.\n \n :type security_groups: list\n- :param security_groups: Names of the security groups with which to\n- associate the EC2 instances.\n+ :param security_groups: Names or security group id's of the security\n+ groups with which to associate the EC2 instances or VPC instances,\n+ respectively.\n \n :type user_data: str\n :param user_data: The user data available to launched EC2 instances.\n", "issue": "Documentation of security_groups argument doesn't mention VPC\nThe documentation for the 'security_groups' parameter for a LaunchConfiguration object could be better. As stated on:\n\nhttp://docs.aws.amazon.com/cli/latest/reference/autoscaling/create-launch-configuration.html\n\n\"--security-groups (list)\n\nThe names of the security groups with which to associate Amazon EC2 or Amazon VPC instances. Specify Amazon EC2 security groups using security group names, such as websrv . Specify Amazon VPC security groups using security group IDs, such as sg-12345678 .\"\n\nIn the Boto documentation for creating a launch configuration (http://boto.readthedocs.org/en/latest/ref/autoscale.html#boto.ec2.autoscale.launchconfig.LaunchConfiguration), the comment for the 'security_groups' parameter says:\n\n\"Names of the security groups with which to associate the EC2 instances.\"\n\nIn this comment, there is no mention of VPC instances or security group id's as seen in Amazon's own docs. I lost about a day of productivity figuring this out so I'm hoping that the documentation can be improved.\n\n", "before_files": [{"content": "# Copyright (c) 2009 Reza Lotun http://reza.lotun.name/\n# Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n\nfrom datetime import datetime\nfrom boto.resultset import ResultSet\nfrom boto.ec2.elb.listelement import ListElement\nimport boto.utils\nimport base64\n\n# this should use the corresponding object from boto.ec2\n\n\nclass Ebs(object):\n def __init__(self, connection=None, snapshot_id=None, volume_size=None):\n self.connection = connection\n self.snapshot_id = snapshot_id\n self.volume_size = volume_size\n\n def __repr__(self):\n return 'Ebs(%s, %s)' % (self.snapshot_id, self.volume_size)\n\n def startElement(self, name, attrs, connection):\n pass\n\n def endElement(self, name, value, connection):\n if name == 'SnapshotId':\n self.snapshot_id = value\n elif name == 'VolumeSize':\n self.volume_size = value\n\n\nclass InstanceMonitoring(object):\n def __init__(self, connection=None, enabled='false'):\n self.connection = connection\n self.enabled = enabled\n\n def __repr__(self):\n return 'InstanceMonitoring(%s)' % self.enabled\n\n def startElement(self, name, attrs, connection):\n pass\n\n def endElement(self, name, value, connection):\n if name == 'Enabled':\n self.enabled = value\n\n\n# this should use the BlockDeviceMapping from boto.ec2.blockdevicemapping\nclass BlockDeviceMapping(object):\n def __init__(self, connection=None, device_name=None, virtual_name=None):\n self.connection = connection\n self.device_name = None\n self.virtual_name = None\n self.ebs = None\n\n def __repr__(self):\n return 'BlockDeviceMapping(%s, %s)' % (self.device_name,\n self.virtual_name)\n\n def startElement(self, name, attrs, connection):\n if name == 'Ebs':\n self.ebs = Ebs(self)\n return self.ebs\n\n def endElement(self, name, value, connection):\n if name == 'DeviceName':\n self.device_name = value\n elif name == 'VirtualName':\n self.virtual_name = value\n\n\nclass LaunchConfiguration(object):\n def __init__(self, connection=None, name=None, image_id=None,\n key_name=None, security_groups=None, user_data=None,\n instance_type='m1.small', kernel_id=None,\n ramdisk_id=None, block_device_mappings=None,\n instance_monitoring=False, spot_price=None,\n instance_profile_name=None, ebs_optimized=False,\n associate_public_ip_address=None):\n \"\"\"\n A launch configuration.\n\n :type name: str\n :param name: Name of the launch configuration to create.\n\n :type image_id: str\n :param image_id: Unique ID of the Amazon Machine Image (AMI) which was\n assigned during registration.\n\n :type key_name: str\n :param key_name: The name of the EC2 key pair.\n\n :type security_groups: list\n :param security_groups: Names of the security groups with which to\n associate the EC2 instances.\n\n :type user_data: str\n :param user_data: The user data available to launched EC2 instances.\n\n :type instance_type: str\n :param instance_type: The instance type\n\n :type kern_id: str\n :param kern_id: Kernel id for instance\n\n :type ramdisk_id: str\n :param ramdisk_id: RAM disk id for instance\n\n :type block_device_mappings: list\n :param block_device_mappings: Specifies how block devices are exposed\n for instances\n\n :type instance_monitoring: bool\n :param instance_monitoring: Whether instances in group are launched\n with detailed monitoring.\n\n :type spot_price: float\n :param spot_price: The spot price you are bidding. 
Only applies\n if you are building an autoscaling group with spot instances.\n\n :type instance_profile_name: string\n :param instance_profile_name: The name or the Amazon Resource\n Name (ARN) of the instance profile associated with the IAM\n role for the instance.\n\n :type ebs_optimized: bool\n :param ebs_optimized: Specifies whether the instance is optimized\n for EBS I/O (true) or not (false).\n\n :type associate_public_ip_address: bool\n :param associate_public_ip_address: Used for Auto Scaling groups that launch instances into an Amazon Virtual Private Cloud. \n Specifies whether to assign a public IP address to each instance launched in a Amazon VPC.\n \"\"\"\n self.connection = connection\n self.name = name\n self.instance_type = instance_type\n self.block_device_mappings = block_device_mappings\n self.key_name = key_name\n sec_groups = security_groups or []\n self.security_groups = ListElement(sec_groups)\n self.image_id = image_id\n self.ramdisk_id = ramdisk_id\n self.created_time = None\n self.kernel_id = kernel_id\n self.user_data = user_data\n self.created_time = None\n self.instance_monitoring = instance_monitoring\n self.spot_price = spot_price\n self.instance_profile_name = instance_profile_name\n self.launch_configuration_arn = None\n self.ebs_optimized = ebs_optimized\n self.associate_public_ip_address = associate_public_ip_address\n\n def __repr__(self):\n return 'LaunchConfiguration:%s' % self.name\n\n def startElement(self, name, attrs, connection):\n if name == 'SecurityGroups':\n return self.security_groups\n elif name == 'BlockDeviceMappings':\n self.block_device_mappings = ResultSet([('member',\n BlockDeviceMapping)])\n return self.block_device_mappings\n elif name == 'InstanceMonitoring':\n self.instance_monitoring = InstanceMonitoring(self)\n return self.instance_monitoring\n\n def endElement(self, name, value, connection):\n if name == 'InstanceType':\n self.instance_type = value\n elif name == 'LaunchConfigurationName':\n self.name = value\n elif name == 'KeyName':\n self.key_name = value\n elif name == 'ImageId':\n self.image_id = value\n elif name == 'CreatedTime':\n self.created_time = boto.utils.parse_ts(value)\n elif name == 'KernelId':\n self.kernel_id = value\n elif name == 'RamdiskId':\n self.ramdisk_id = value\n elif name == 'UserData':\n try:\n self.user_data = base64.b64decode(value)\n except TypeError:\n self.user_data = value\n elif name == 'LaunchConfigurationARN':\n self.launch_configuration_arn = value\n elif name == 'InstanceMonitoring':\n self.instance_monitoring = value\n elif name == 'SpotPrice':\n self.spot_price = float(value)\n elif name == 'IamInstanceProfile':\n self.instance_profile_name = value\n elif name == 'EbsOptimized':\n self.ebs_optimized = True if value.lower() == 'true' else False\n else:\n setattr(self, name, value)\n\n def delete(self):\n \"\"\" Delete this launch configuration. \"\"\"\n return self.connection.delete_launch_configuration(self.name)\n", "path": "boto/ec2/autoscale/launchconfig.py"}], "after_files": [{"content": "# Copyright (c) 2009 Reza Lotun http://reza.lotun.name/\n# Copyright (c) 2012 Amazon.com, Inc. or its affiliates. 
All Rights Reserved\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n\nfrom datetime import datetime\nfrom boto.resultset import ResultSet\nfrom boto.ec2.elb.listelement import ListElement\nimport boto.utils\nimport base64\n\n# this should use the corresponding object from boto.ec2\n\n\nclass Ebs(object):\n def __init__(self, connection=None, snapshot_id=None, volume_size=None):\n self.connection = connection\n self.snapshot_id = snapshot_id\n self.volume_size = volume_size\n\n def __repr__(self):\n return 'Ebs(%s, %s)' % (self.snapshot_id, self.volume_size)\n\n def startElement(self, name, attrs, connection):\n pass\n\n def endElement(self, name, value, connection):\n if name == 'SnapshotId':\n self.snapshot_id = value\n elif name == 'VolumeSize':\n self.volume_size = value\n\n\nclass InstanceMonitoring(object):\n def __init__(self, connection=None, enabled='false'):\n self.connection = connection\n self.enabled = enabled\n\n def __repr__(self):\n return 'InstanceMonitoring(%s)' % self.enabled\n\n def startElement(self, name, attrs, connection):\n pass\n\n def endElement(self, name, value, connection):\n if name == 'Enabled':\n self.enabled = value\n\n\n# this should use the BlockDeviceMapping from boto.ec2.blockdevicemapping\nclass BlockDeviceMapping(object):\n def __init__(self, connection=None, device_name=None, virtual_name=None):\n self.connection = connection\n self.device_name = None\n self.virtual_name = None\n self.ebs = None\n\n def __repr__(self):\n return 'BlockDeviceMapping(%s, %s)' % (self.device_name,\n self.virtual_name)\n\n def startElement(self, name, attrs, connection):\n if name == 'Ebs':\n self.ebs = Ebs(self)\n return self.ebs\n\n def endElement(self, name, value, connection):\n if name == 'DeviceName':\n self.device_name = value\n elif name == 'VirtualName':\n self.virtual_name = value\n\n\nclass LaunchConfiguration(object):\n def __init__(self, connection=None, name=None, image_id=None,\n key_name=None, security_groups=None, user_data=None,\n instance_type='m1.small', kernel_id=None,\n ramdisk_id=None, block_device_mappings=None,\n instance_monitoring=False, spot_price=None,\n instance_profile_name=None, ebs_optimized=False,\n associate_public_ip_address=None):\n \"\"\"\n A launch configuration.\n\n :type name: str\n :param name: Name of the launch configuration to create.\n\n :type image_id: str\n :param image_id: Unique ID of the Amazon Machine Image (AMI) which was\n assigned during registration.\n\n :type key_name: str\n :param key_name: The name of the EC2 key 
pair.\n\n :type security_groups: list\n :param security_groups: Names or security group id's of the security\n groups with which to associate the EC2 instances or VPC instances,\n respectively.\n\n :type user_data: str\n :param user_data: The user data available to launched EC2 instances.\n\n :type instance_type: str\n :param instance_type: The instance type\n\n :type kern_id: str\n :param kern_id: Kernel id for instance\n\n :type ramdisk_id: str\n :param ramdisk_id: RAM disk id for instance\n\n :type block_device_mappings: list\n :param block_device_mappings: Specifies how block devices are exposed\n for instances\n\n :type instance_monitoring: bool\n :param instance_monitoring: Whether instances in group are launched\n with detailed monitoring.\n\n :type spot_price: float\n :param spot_price: The spot price you are bidding. Only applies\n if you are building an autoscaling group with spot instances.\n\n :type instance_profile_name: string\n :param instance_profile_name: The name or the Amazon Resource\n Name (ARN) of the instance profile associated with the IAM\n role for the instance.\n\n :type ebs_optimized: bool\n :param ebs_optimized: Specifies whether the instance is optimized\n for EBS I/O (true) or not (false).\n\n :type associate_public_ip_address: bool\n :param associate_public_ip_address: Used for Auto Scaling groups that launch instances into an Amazon Virtual Private Cloud. \n Specifies whether to assign a public IP address to each instance launched in a Amazon VPC.\n \"\"\"\n self.connection = connection\n self.name = name\n self.instance_type = instance_type\n self.block_device_mappings = block_device_mappings\n self.key_name = key_name\n sec_groups = security_groups or []\n self.security_groups = ListElement(sec_groups)\n self.image_id = image_id\n self.ramdisk_id = ramdisk_id\n self.created_time = None\n self.kernel_id = kernel_id\n self.user_data = user_data\n self.created_time = None\n self.instance_monitoring = instance_monitoring\n self.spot_price = spot_price\n self.instance_profile_name = instance_profile_name\n self.launch_configuration_arn = None\n self.ebs_optimized = ebs_optimized\n self.associate_public_ip_address = associate_public_ip_address\n\n def __repr__(self):\n return 'LaunchConfiguration:%s' % self.name\n\n def startElement(self, name, attrs, connection):\n if name == 'SecurityGroups':\n return self.security_groups\n elif name == 'BlockDeviceMappings':\n self.block_device_mappings = ResultSet([('member',\n BlockDeviceMapping)])\n return self.block_device_mappings\n elif name == 'InstanceMonitoring':\n self.instance_monitoring = InstanceMonitoring(self)\n return self.instance_monitoring\n\n def endElement(self, name, value, connection):\n if name == 'InstanceType':\n self.instance_type = value\n elif name == 'LaunchConfigurationName':\n self.name = value\n elif name == 'KeyName':\n self.key_name = value\n elif name == 'ImageId':\n self.image_id = value\n elif name == 'CreatedTime':\n self.created_time = boto.utils.parse_ts(value)\n elif name == 'KernelId':\n self.kernel_id = value\n elif name == 'RamdiskId':\n self.ramdisk_id = value\n elif name == 'UserData':\n try:\n self.user_data = base64.b64decode(value)\n except TypeError:\n self.user_data = value\n elif name == 'LaunchConfigurationARN':\n self.launch_configuration_arn = value\n elif name == 'InstanceMonitoring':\n self.instance_monitoring = value\n elif name == 'SpotPrice':\n self.spot_price = float(value)\n elif name == 'IamInstanceProfile':\n self.instance_profile_name = value\n elif name == 
'EbsOptimized':\n self.ebs_optimized = True if value.lower() == 'true' else False\n else:\n setattr(self, name, value)\n\n def delete(self):\n \"\"\" Delete this launch configuration. \"\"\"\n return self.connection.delete_launch_configuration(self.name)\n", "path": "boto/ec2/autoscale/launchconfig.py"}]}
| 2,877 | 180 |
gh_patches_debug_2662
|
rasdani/github-patches
|
git_diff
|
scikit-image__scikit-image-6343
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
imageIO warnings due to v2 -> v3 migration
## Description
As of ImageIO 2.16.0 (Feb 2022) there are now v2 and v3 namespaces in addition to the top-level namespace. As of 2.16.2 (released Apr 2022), directly using the top-level namespace results in warnings telling you to either explicitly opt into the v3 API or opt out and import the v2 namespace.
This in turn causes warnings when using `skimage.io.imread`.
I suggest that this is a good first issue as there are no API design choices here (at least to start) and it only needs the
```python
try:
    import new_way
except ImportError:
    import old_way
```
dance.
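
Concretely, for this case the "dance" would look roughly like the sketch below; it mirrors the v2 fallback adopted in the patch further down, and assumes the `imageio.v2` module layout introduced in imageio 2.16:

```python
import numpy as np

try:
    # imageio >= 2.16: explicitly opt into the v2 API to silence the warning
    from imageio.v2 import imread as imageio_imread
except ImportError:
    # older imageio releases only expose the top-level namespace
    from imageio import imread as imageio_imread


def imread(*args, **kwargs):
    # scikit-image's plugin simply wraps the reader result in an ndarray
    return np.asarray(imageio_imread(*args, **kwargs))
```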
The warnings look like (lifted from a test suite):
```
____________________________________________________________________________ ReaderSequence.test_slice_of_slice ____________________________________________________________________________
pims/tests/test_imseq.py:256: in setUp
self.v = self.klass(self.filename, **self.kwargs)
pims/image_sequence.py:217: in __init__
with self.reader_cls(self._filepaths[0], **self.kwargs) as reader:
pims/image_reader.py:60: in __init__
self._data = Frame(imread(filename, **kwargs), frame_no=0)
../../../../.pybuild/bleeding/lib/python3.11/contextlib.py:155: in __exit__
self.gen.throw(typ, value, traceback)
../../../../.virtualenvs/bleeding/lib/python3.11/site-packages/skimage/io/util.py:43: in file_or_url_context
yield resource_name
../../../../.virtualenvs/bleeding/lib/python3.11/site-packages/skimage/io/_io.py:53: in imread
img = call_plugin('imread', fname, plugin=plugin, **plugin_args)
../../../../.virtualenvs/bleeding/lib/python3.11/site-packages/skimage/io/manage_plugins.py:207: in call_plugin
return func(*args, **kwargs)
../../../../.virtualenvs/bleeding/lib/python3.11/site-packages/skimage/io/_plugins/imageio_plugin.py:10: in imread
return np.asarray(imageio_imread(*args, **kwargs))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
uri = '/home/tcaswell/source/bnl/soft-matter/pims/pims/tests/data/image_sequence3d/file001.png', format = None, kwargs = {}
def imread(uri, format=None, **kwargs):
"""imread(uri, format=None, **kwargs)
Reads an image from the specified file. Returns a numpy array, which
comes with a dict of meta data at its 'meta' attribute.
Note that the image data is returned as-is, and may not always have
a dtype of uint8 (and thus may differ from what e.g. PIL returns).
Parameters
----------
uri : {str, pathlib.Path, bytes, file}
The resource to load the image from, e.g. a filename, pathlib.Path,
http address or file object, see the docs for more info.
format : str
The format to use to read the file. By default imageio selects
the appropriate for you based on the filename and its contents.
kwargs : ...
Further keyword arguments are passed to the reader. See :func:`.help`
to see what arguments are available for a particular format.
"""
> warnings.warn(
"Starting with ImageIO v3 the behavior of this function will switch to that of"
" iio.v3.imread. To keep the current behavior (and make this warning dissapear)"
" use `import imageio.v2 as imageio` or call `imageio.v2.imread` directly.",
DeprecationWarning,
)
E DeprecationWarning: Starting with ImageIO v3 the behavior of this function will switch to that of iio.v3.imread. To keep the current behavior (and make this warning dissapear) use `import imageio.v2 as imageio` or call `imageio.v2.imread` directly.
../../../../.virtualenvs/bleeding/lib/python3.11/site-packages/imageio/__init__.py:89: DeprecationWarning
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/io/_plugins/imageio_plugin.py`
Content:
```
1 __all__ = ['imread', 'imsave']
2
3 from functools import wraps
4 import numpy as np
5 from imageio import imread as imageio_imread, imsave
6
7
8 @wraps(imageio_imread)
9 def imread(*args, **kwargs):
10 return np.asarray(imageio_imread(*args, **kwargs))
11
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/skimage/io/_plugins/imageio_plugin.py b/skimage/io/_plugins/imageio_plugin.py
--- a/skimage/io/_plugins/imageio_plugin.py
+++ b/skimage/io/_plugins/imageio_plugin.py
@@ -2,7 +2,12 @@
from functools import wraps
import numpy as np
-from imageio import imread as imageio_imread, imsave
+
+try:
+ # Try using the v2 API directly to avoid a warning from imageio >= 2.16.2
+ from imageio.v2 import imread as imageio_imread, imsave
+except ImportError:
+ from imageio import imread as imageio_imread, imsave
@wraps(imageio_imread)
|
{"golden_diff": "diff --git a/skimage/io/_plugins/imageio_plugin.py b/skimage/io/_plugins/imageio_plugin.py\n--- a/skimage/io/_plugins/imageio_plugin.py\n+++ b/skimage/io/_plugins/imageio_plugin.py\n@@ -2,7 +2,12 @@\n \n from functools import wraps\n import numpy as np\n-from imageio import imread as imageio_imread, imsave\n+\n+try:\n+ # Try using the v2 API directly to avoid a warning from imageio >= 2.16.2\n+ from imageio.v2 import imread as imageio_imread, imsave\n+except ImportError:\n+ from imageio import imread as imageio_imread, imsave\n \n \n @wraps(imageio_imread)\n", "issue": "imageIO warnings due to v2 -> v3 migration\n## Description\r\n\r\nAs of imageIO 2.16.0 (Feb22) there are now a v2 and v3 namespaces in addition to the top-level namespace. As of 2.16.2 (released Apr22) directly using the top-level namespace results in warnings to either explicitly opt-into the v3 API or opt-out and import the v2. \r\n\r\nThis in turn causes warnings when using `skimage.io.imread`. \r\n\r\nI suggest that this is a good first issue as there is no API design choices here (at least to start) and only needs the \r\n```python\r\ntry: \r\n import newway\r\nexcept ImportError: \r\n import old way\r\n```\r\ndance.\r\n\r\nThe warnings look like (lifted from a test suite):\r\n\r\n```\r\n____________________________________________________________________________ ReaderSequence.test_slice_of_slice ____________________________________________________________________________\r\npims/tests/test_imseq.py:256: in setUp\r\n self.v = self.klass(self.filename, **self.kwargs)\r\npims/image_sequence.py:217: in __init__\r\n with self.reader_cls(self._filepaths[0], **self.kwargs) as reader:\r\npims/image_reader.py:60: in __init__\r\n self._data = Frame(imread(filename, **kwargs), frame_no=0)\r\n../../../../.pybuild/bleeding/lib/python3.11/contextlib.py:155: in __exit__\r\n self.gen.throw(typ, value, traceback)\r\n../../../../.virtualenvs/bleeding/lib/python3.11/site-packages/skimage/io/util.py:43: in file_or_url_context\r\n yield resource_name\r\n../../../../.virtualenvs/bleeding/lib/python3.11/site-packages/skimage/io/_io.py:53: in imread\r\n img = call_plugin('imread', fname, plugin=plugin, **plugin_args)\r\n../../../../.virtualenvs/bleeding/lib/python3.11/site-packages/skimage/io/manage_plugins.py:207: in call_plugin\r\n return func(*args, **kwargs)\r\n../../../../.virtualenvs/bleeding/lib/python3.11/site-packages/skimage/io/_plugins/imageio_plugin.py:10: in imread\r\n return np.asarray(imageio_imread(*args, **kwargs))\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nuri = '/home/tcaswell/source/bnl/soft-matter/pims/pims/tests/data/image_sequence3d/file001.png', format = None, kwargs = {}\r\n\r\n def imread(uri, format=None, **kwargs):\r\n \"\"\"imread(uri, format=None, **kwargs)\r\n \r\n Reads an image from the specified file. Returns a numpy array, which\r\n comes with a dict of meta data at its 'meta' attribute.\r\n \r\n Note that the image data is returned as-is, and may not always have\r\n a dtype of uint8 (and thus may differ from what e.g. PIL returns).\r\n \r\n Parameters\r\n ----------\r\n uri : {str, pathlib.Path, bytes, file}\r\n The resource to load the image from, e.g. a filename, pathlib.Path,\r\n http address or file object, see the docs for more info.\r\n format : str\r\n The format to use to read the file. 
By default imageio selects\r\n the appropriate for you based on the filename and its contents.\r\n kwargs : ...\r\n Further keyword arguments are passed to the reader. See :func:`.help`\r\n to see what arguments are available for a particular format.\r\n \"\"\"\r\n \r\n> warnings.warn(\r\n \"Starting with ImageIO v3 the behavior of this function will switch to that of\"\r\n \" iio.v3.imread. To keep the current behavior (and make this warning dissapear)\"\r\n \" use `import imageio.v2 as imageio` or call `imageio.v2.imread` directly.\",\r\n DeprecationWarning,\r\n )\r\nE DeprecationWarning: Starting with ImageIO v3 the behavior of this function will switch to that of iio.v3.imread. To keep the current behavior (and make this warning dissapear) use `import imageio.v2 as imageio` or call `imageio.v2.imread` directly.\r\n\r\n../../../../.virtualenvs/bleeding/lib/python3.11/site-packages/imageio/__init__.py:89: DeprecationWarning\r\n\r\n```\n", "before_files": [{"content": "__all__ = ['imread', 'imsave']\n\nfrom functools import wraps\nimport numpy as np\nfrom imageio import imread as imageio_imread, imsave\n\n\n@wraps(imageio_imread)\ndef imread(*args, **kwargs):\n return np.asarray(imageio_imread(*args, **kwargs))\n", "path": "skimage/io/_plugins/imageio_plugin.py"}], "after_files": [{"content": "__all__ = ['imread', 'imsave']\n\nfrom functools import wraps\nimport numpy as np\n\ntry:\n # Try using the v2 API directly to avoid a warning from imageio >= 2.16.2\n from imageio.v2 import imread as imageio_imread, imsave\nexcept ImportError:\n from imageio import imread as imageio_imread, imsave\n\n\n@wraps(imageio_imread)\ndef imread(*args, **kwargs):\n return np.asarray(imageio_imread(*args, **kwargs))\n", "path": "skimage/io/_plugins/imageio_plugin.py"}]}
| 1,356 | 165 |
gh_patches_debug_9072
|
rasdani/github-patches
|
git_diff
|
RedHatInsights__insights-core-1524
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Ensure current working directory is on the python path with insights-run
`insights-run` can't invoke arbitrary scripts because it doesn't include the current working directory in the python path like `python -m insights` does.
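
A minimal sketch of the missing behaviour: `python -m` implicitly puts an empty-string entry (meaning the current working directory) on `sys.path`, and the accepted patch shown further down adds the same thing to the console-script entry point:

```python
import sys

# `python -m insights` makes the current working directory importable;
# a console script such as insights-run does not, so add the empty-string
# entry (the cwd) before loading user-supplied plugin modules.
if "" not in sys.path:
    sys.path.insert(0, "")
```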
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `insights/__init__.py`
Content:
```
1 from __future__ import print_function
2 import logging
3 import pkgutil
4 import os
5 import yaml
6 from .core import Scannable, LogFileOutput, Parser, IniConfigFile # noqa: F401
7 from .core import FileListing, LegacyItemAccess, SysconfigOptions # noqa: F401
8 from .core import YAMLParser, JSONParser, XMLParser, CommandParser # noqa: F401
9 from .core import AttributeDict # noqa: F401
10 from .core import Syslog # noqa: F401
11 from .core.archives import COMPRESSION_TYPES, extract # noqa: F401
12 from .core import dr # noqa: F401
13 from .core.context import ClusterArchiveContext, HostContext, HostArchiveContext # noqa: F401
14 from .core.dr import SkipComponent # noqa: F401
15 from .core.hydration import create_context
16 from .core.plugins import combiner, fact, metadata, parser, rule # noqa: F401
17 from .core.plugins import datasource, condition, incident # noqa: F401
18 from .core.plugins import make_response, make_metadata, make_fingerprint # noqa: F401
19 from .core.plugins import make_pass, make_fail # noqa: F401
20 from .core.filters import add_filter, apply_filters, get_filters # noqa: F401
21 from .formats import get_formatter
22 from .parsers import get_active_lines # noqa: F401
23 from .util import defaults # noqa: F401
24
25 log = logging.getLogger(__name__)
26
27
28 package_info = dict((k, None) for k in ["RELEASE", "COMMIT", "VERSION", "NAME"])
29
30
31 for name in package_info:
32 package_info[name] = pkgutil.get_data(__name__, name).strip().decode("utf-8")
33
34
35 def get_nvr():
36 return "{0}-{1}-{2}".format(package_info["NAME"],
37 package_info["VERSION"],
38 package_info["RELEASE"])
39
40
41 RULES_STATUS = {}
42 """
43 Mapping of dictionaries containing nvr and commitid for each rule repo included
44 in this instance
45
46 {"rule_repo_1": {"version": nvr(), "commit": sha1}}
47 """
48
49
50 def add_status(name, nvr, commit):
51 """
52 Rule repositories should call this method in their package __init__ to
53 register their version information.
54 """
55 RULES_STATUS[name] = {"version": nvr, "commit": commit}
56
57
58 def process_dir(broker, root, graph, context, use_pandas=False):
59 ctx = create_context(root, context)
60 log.debug("Processing %s with %s" % (root, ctx))
61
62 if isinstance(ctx, ClusterArchiveContext):
63 from .core.cluster import process_cluster
64 archives = [f for f in ctx.all_files if f.endswith(COMPRESSION_TYPES)]
65 return process_cluster(archives, use_pandas=use_pandas, broker=broker)
66
67 broker[ctx.__class__] = ctx
68 broker = dr.run(graph, broker=broker)
69 return broker
70
71
72 def _run(broker, graph=None, root=None, context=None, use_pandas=False):
73 """
74 run is a general interface that is meant for stand alone scripts to use
75 when executing insights components.
76
77 Args:
78 root (str): None will causes a host collection in which command and
79 file specs are run. A directory or archive path will cause
80 collection from the directory or archive, and only file type specs
81 or those that depend on `insights.core.context.HostArchiveContext`
82 will execute.
83 component (function or class): The component to execute. Will only execute
84 the component and its dependency graph. If None, all components with
85 met dependencies will execute.
86
87 Returns:
88 broker: object containing the result of the evaluation.
89 """
90
91 if not root:
92 context = context or HostContext
93 broker[context] = context()
94 return dr.run(graph, broker=broker)
95
96 if os.path.isdir(root):
97 return process_dir(broker, root, graph, context, use_pandas)
98 else:
99 with extract(root) as ex:
100 return process_dir(broker, ex.tmp_dir, graph, context, use_pandas)
101
102
103 def apply_configs(configs):
104 """
105 Configures components. They can be enabled or disabled, have timeouts set
106 if applicable, and have metadata customized. Valid keys are name, enabled,
107 metadata, and timeout.
108
109 Args:
110 configs (list): a list of dictionaries with the following keys:
111 name, enabled, metadata, and timeout. All keys are optional except
112 name.
113
114 name is the prefix or exact name of any loaded component. Any
115 component starting with name will have the associated configuration
116 applied.
117
118 enabled is whether the matching components will execute even if
119 their dependencies are met. Defaults to True.
120
121 timeout sets the class level timeout attribute of any component so
122 long as the attribute already exists.
123
124 metadata is any dictionary that you want to attach to the
125 component. The dictionary can be retrieved by the component at
126 runtime.
127 """
128 delegate_keys = sorted(dr.DELEGATES, key=dr.get_name)
129 for comp_cfg in configs:
130 name = comp_cfg["name"]
131 for c in delegate_keys:
132 delegate = dr.DELEGATES[c]
133 cname = dr.get_name(c)
134 if cname.startswith(name):
135 dr.ENABLED[c] = comp_cfg.get("enabled", True)
136 delegate.metadata.update(comp_cfg.get("metadata", {}))
137 for k, v in delegate.metadata.items():
138 if hasattr(c, k):
139 setattr(c, k, v)
140 if hasattr(c, "timeout"):
141 c.timeout = comp_cfg.get("timeout", c.timeout)
142 if cname == name:
143 break
144
145
146 def _load_context(path):
147 if path is None:
148 return
149
150 if "." not in path:
151 path = ".".join(["insights.core.context", path])
152 return dr.get_component(path)
153
154
155 def run(component=None, root=None, print_summary=False,
156 context=None, use_pandas=False,
157 print_component=None):
158
159 from .core import dr
160 dr.load_components("insights.specs.default")
161 dr.load_components("insights.specs.insights_archive")
162 dr.load_components("insights.specs.sos_archive")
163 dr.load_components("insights.specs.jdr_archive")
164
165 args = None
166 formatter = None
167 if print_summary:
168 import argparse
169 import logging
170 p = argparse.ArgumentParser(add_help=False)
171 p.add_argument("archive", nargs="?", help="Archive or directory to analyze.")
172 p.add_argument("-p", "--plugins", default="", help="Comma-separated list without spaces of package(s) or module(s) containing plugins.")
173 p.add_argument("-c", "--config", help="Configure components.")
174 p.add_argument("-v", "--verbose", help="Verbose output.", action="store_true")
175 p.add_argument("-f", "--format", help="Output format.", default="insights.formats.text")
176 p.add_argument("-D", "--debug", help="Verbose debug output.", action="store_true")
177 p.add_argument("--context", help="Execution Context. Defaults to HostContext if an archive isn't passed.")
178 p.add_argument("--pandas", action="store_true", help="Use pandas dataframes with cluster rules.")
179
180 class Args(object):
181 pass
182
183 args = Args()
184 p.parse_known_args(namespace=args)
185 p = argparse.ArgumentParser(parents=[p])
186 args.format = "insights.formats._json" if args.format == "json" else args.format
187 args.format = "insights.formats._yaml" if args.format == "yaml" else args.format
188 fmt = args.format if "." in args.format else "insights.formats." + args.format
189 Formatter = dr.get_component(fmt)
190 if not Formatter:
191 dr.load_components(fmt, continue_on_error=False)
192 Formatter = get_formatter(fmt)
193 Formatter.configure(p)
194 p.parse_args(namespace=args)
195 formatter = Formatter(args)
196
197 logging.basicConfig(level=logging.DEBUG if args.debug else logging.INFO if args.verbose else logging.ERROR)
198 context = _load_context(args.context) or context
199 use_pandas = args.pandas or use_pandas
200
201 root = args.archive or root
202 if root:
203 root = os.path.realpath(root)
204
205 plugins = []
206 if args.plugins:
207 for path in args.plugins.split(","):
208 path = path.strip()
209 if path.endswith(".py"):
210 path, _ = os.path.splitext(path)
211 path = path.rstrip("/").replace("/", ".")
212 plugins.append(path)
213
214 for p in plugins:
215 dr.load_components(p, continue_on_error=False)
216
217 if args.config:
218 with open(args.config) as f:
219 apply_configs(yaml.load(f))
220
221 if component is None:
222 component = []
223 plugins = tuple(plugins)
224 for c in dr.DELEGATES:
225 if c.__module__.startswith(plugins):
226 component.append(c)
227
228 if component:
229 if not isinstance(component, (list, set)):
230 component = [component]
231 graph = {}
232 for c in component:
233 graph.update(dr.get_dependency_graph(c))
234 else:
235 graph = dr.COMPONENTS[dr.GROUPS.single]
236
237 broker = dr.Broker()
238
239 if formatter:
240 formatter.preprocess(broker)
241 broker = _run(broker, graph, root, context=context, use_pandas=use_pandas)
242 formatter.postprocess(broker)
243 elif print_component:
244 broker = _run(broker, graph, root, context=context, use_pandas=use_pandas)
245 broker.print_component(print_component)
246 else:
247 broker = _run(broker, graph, root, context=context, use_pandas=use_pandas)
248
249 return broker
250
251
252 def main():
253 run(print_summary=True)
254
255
256 if __name__ == "__main__":
257 main()
258
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/insights/__init__.py b/insights/__init__.py
--- a/insights/__init__.py
+++ b/insights/__init__.py
@@ -2,6 +2,7 @@
import logging
import pkgutil
import os
+import sys
import yaml
from .core import Scannable, LogFileOutput, Parser, IniConfigFile # noqa: F401
from .core import FileListing, LegacyItemAccess, SysconfigOptions # noqa: F401
@@ -250,6 +251,8 @@
def main():
+ if "" not in sys.path:
+ sys.path.insert(0, "")
run(print_summary=True)
|
{"golden_diff": "diff --git a/insights/__init__.py b/insights/__init__.py\n--- a/insights/__init__.py\n+++ b/insights/__init__.py\n@@ -2,6 +2,7 @@\n import logging\n import pkgutil\n import os\n+import sys\n import yaml\n from .core import Scannable, LogFileOutput, Parser, IniConfigFile # noqa: F401\n from .core import FileListing, LegacyItemAccess, SysconfigOptions # noqa: F401\n@@ -250,6 +251,8 @@\n \n \n def main():\n+ if \"\" not in sys.path:\n+ sys.path.insert(0, \"\")\n run(print_summary=True)\n", "issue": "Ensure current working directory is on the python path with insights-run\n`insights-run` can't invoke arbitrary scripts because it doesn't include the current working directory in the python path like `python -m insights` does.\n", "before_files": [{"content": "from __future__ import print_function\nimport logging\nimport pkgutil\nimport os\nimport yaml\nfrom .core import Scannable, LogFileOutput, Parser, IniConfigFile # noqa: F401\nfrom .core import FileListing, LegacyItemAccess, SysconfigOptions # noqa: F401\nfrom .core import YAMLParser, JSONParser, XMLParser, CommandParser # noqa: F401\nfrom .core import AttributeDict # noqa: F401\nfrom .core import Syslog # noqa: F401\nfrom .core.archives import COMPRESSION_TYPES, extract # noqa: F401\nfrom .core import dr # noqa: F401\nfrom .core.context import ClusterArchiveContext, HostContext, HostArchiveContext # noqa: F401\nfrom .core.dr import SkipComponent # noqa: F401\nfrom .core.hydration import create_context\nfrom .core.plugins import combiner, fact, metadata, parser, rule # noqa: F401\nfrom .core.plugins import datasource, condition, incident # noqa: F401\nfrom .core.plugins import make_response, make_metadata, make_fingerprint # noqa: F401\nfrom .core.plugins import make_pass, make_fail # noqa: F401\nfrom .core.filters import add_filter, apply_filters, get_filters # noqa: F401\nfrom .formats import get_formatter\nfrom .parsers import get_active_lines # noqa: F401\nfrom .util import defaults # noqa: F401\n\nlog = logging.getLogger(__name__)\n\n\npackage_info = dict((k, None) for k in [\"RELEASE\", \"COMMIT\", \"VERSION\", \"NAME\"])\n\n\nfor name in package_info:\n package_info[name] = pkgutil.get_data(__name__, name).strip().decode(\"utf-8\")\n\n\ndef get_nvr():\n return \"{0}-{1}-{2}\".format(package_info[\"NAME\"],\n package_info[\"VERSION\"],\n package_info[\"RELEASE\"])\n\n\nRULES_STATUS = {}\n\"\"\"\nMapping of dictionaries containing nvr and commitid for each rule repo included\nin this instance\n\n{\"rule_repo_1\": {\"version\": nvr(), \"commit\": sha1}}\n\"\"\"\n\n\ndef add_status(name, nvr, commit):\n \"\"\"\n Rule repositories should call this method in their package __init__ to\n register their version information.\n \"\"\"\n RULES_STATUS[name] = {\"version\": nvr, \"commit\": commit}\n\n\ndef process_dir(broker, root, graph, context, use_pandas=False):\n ctx = create_context(root, context)\n log.debug(\"Processing %s with %s\" % (root, ctx))\n\n if isinstance(ctx, ClusterArchiveContext):\n from .core.cluster import process_cluster\n archives = [f for f in ctx.all_files if f.endswith(COMPRESSION_TYPES)]\n return process_cluster(archives, use_pandas=use_pandas, broker=broker)\n\n broker[ctx.__class__] = ctx\n broker = dr.run(graph, broker=broker)\n return broker\n\n\ndef _run(broker, graph=None, root=None, context=None, use_pandas=False):\n \"\"\"\n run is a general interface that is meant for stand alone scripts to use\n when executing insights components.\n\n Args:\n root (str): None will causes a host collection in 
which command and\n file specs are run. A directory or archive path will cause\n collection from the directory or archive, and only file type specs\n or those that depend on `insights.core.context.HostArchiveContext`\n will execute.\n component (function or class): The component to execute. Will only execute\n the component and its dependency graph. If None, all components with\n met dependencies will execute.\n\n Returns:\n broker: object containing the result of the evaluation.\n \"\"\"\n\n if not root:\n context = context or HostContext\n broker[context] = context()\n return dr.run(graph, broker=broker)\n\n if os.path.isdir(root):\n return process_dir(broker, root, graph, context, use_pandas)\n else:\n with extract(root) as ex:\n return process_dir(broker, ex.tmp_dir, graph, context, use_pandas)\n\n\ndef apply_configs(configs):\n \"\"\"\n Configures components. They can be enabled or disabled, have timeouts set\n if applicable, and have metadata customized. Valid keys are name, enabled,\n metadata, and timeout.\n\n Args:\n configs (list): a list of dictionaries with the following keys:\n name, enabled, metadata, and timeout. All keys are optional except\n name.\n\n name is the prefix or exact name of any loaded component. Any\n component starting with name will have the associated configuration\n applied.\n\n enabled is whether the matching components will execute even if\n their dependencies are met. Defaults to True.\n\n timeout sets the class level timeout attribute of any component so\n long as the attribute already exists.\n\n metadata is any dictionary that you want to attach to the\n component. The dictionary can be retrieved by the component at\n runtime.\n \"\"\"\n delegate_keys = sorted(dr.DELEGATES, key=dr.get_name)\n for comp_cfg in configs:\n name = comp_cfg[\"name\"]\n for c in delegate_keys:\n delegate = dr.DELEGATES[c]\n cname = dr.get_name(c)\n if cname.startswith(name):\n dr.ENABLED[c] = comp_cfg.get(\"enabled\", True)\n delegate.metadata.update(comp_cfg.get(\"metadata\", {}))\n for k, v in delegate.metadata.items():\n if hasattr(c, k):\n setattr(c, k, v)\n if hasattr(c, \"timeout\"):\n c.timeout = comp_cfg.get(\"timeout\", c.timeout)\n if cname == name:\n break\n\n\ndef _load_context(path):\n if path is None:\n return\n\n if \".\" not in path:\n path = \".\".join([\"insights.core.context\", path])\n return dr.get_component(path)\n\n\ndef run(component=None, root=None, print_summary=False,\n context=None, use_pandas=False,\n print_component=None):\n\n from .core import dr\n dr.load_components(\"insights.specs.default\")\n dr.load_components(\"insights.specs.insights_archive\")\n dr.load_components(\"insights.specs.sos_archive\")\n dr.load_components(\"insights.specs.jdr_archive\")\n\n args = None\n formatter = None\n if print_summary:\n import argparse\n import logging\n p = argparse.ArgumentParser(add_help=False)\n p.add_argument(\"archive\", nargs=\"?\", help=\"Archive or directory to analyze.\")\n p.add_argument(\"-p\", \"--plugins\", default=\"\", help=\"Comma-separated list without spaces of package(s) or module(s) containing plugins.\")\n p.add_argument(\"-c\", \"--config\", help=\"Configure components.\")\n p.add_argument(\"-v\", \"--verbose\", help=\"Verbose output.\", action=\"store_true\")\n p.add_argument(\"-f\", \"--format\", help=\"Output format.\", default=\"insights.formats.text\")\n p.add_argument(\"-D\", \"--debug\", help=\"Verbose debug output.\", action=\"store_true\")\n p.add_argument(\"--context\", help=\"Execution Context. 
Defaults to HostContext if an archive isn't passed.\")\n p.add_argument(\"--pandas\", action=\"store_true\", help=\"Use pandas dataframes with cluster rules.\")\n\n class Args(object):\n pass\n\n args = Args()\n p.parse_known_args(namespace=args)\n p = argparse.ArgumentParser(parents=[p])\n args.format = \"insights.formats._json\" if args.format == \"json\" else args.format\n args.format = \"insights.formats._yaml\" if args.format == \"yaml\" else args.format\n fmt = args.format if \".\" in args.format else \"insights.formats.\" + args.format\n Formatter = dr.get_component(fmt)\n if not Formatter:\n dr.load_components(fmt, continue_on_error=False)\n Formatter = get_formatter(fmt)\n Formatter.configure(p)\n p.parse_args(namespace=args)\n formatter = Formatter(args)\n\n logging.basicConfig(level=logging.DEBUG if args.debug else logging.INFO if args.verbose else logging.ERROR)\n context = _load_context(args.context) or context\n use_pandas = args.pandas or use_pandas\n\n root = args.archive or root\n if root:\n root = os.path.realpath(root)\n\n plugins = []\n if args.plugins:\n for path in args.plugins.split(\",\"):\n path = path.strip()\n if path.endswith(\".py\"):\n path, _ = os.path.splitext(path)\n path = path.rstrip(\"/\").replace(\"/\", \".\")\n plugins.append(path)\n\n for p in plugins:\n dr.load_components(p, continue_on_error=False)\n\n if args.config:\n with open(args.config) as f:\n apply_configs(yaml.load(f))\n\n if component is None:\n component = []\n plugins = tuple(plugins)\n for c in dr.DELEGATES:\n if c.__module__.startswith(plugins):\n component.append(c)\n\n if component:\n if not isinstance(component, (list, set)):\n component = [component]\n graph = {}\n for c in component:\n graph.update(dr.get_dependency_graph(c))\n else:\n graph = dr.COMPONENTS[dr.GROUPS.single]\n\n broker = dr.Broker()\n\n if formatter:\n formatter.preprocess(broker)\n broker = _run(broker, graph, root, context=context, use_pandas=use_pandas)\n formatter.postprocess(broker)\n elif print_component:\n broker = _run(broker, graph, root, context=context, use_pandas=use_pandas)\n broker.print_component(print_component)\n else:\n broker = _run(broker, graph, root, context=context, use_pandas=use_pandas)\n\n return broker\n\n\ndef main():\n run(print_summary=True)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "insights/__init__.py"}], "after_files": [{"content": "from __future__ import print_function\nimport logging\nimport pkgutil\nimport os\nimport sys\nimport yaml\nfrom .core import Scannable, LogFileOutput, Parser, IniConfigFile # noqa: F401\nfrom .core import FileListing, LegacyItemAccess, SysconfigOptions # noqa: F401\nfrom .core import YAMLParser, JSONParser, XMLParser, CommandParser # noqa: F401\nfrom .core import AttributeDict # noqa: F401\nfrom .core import Syslog # noqa: F401\nfrom .core.archives import COMPRESSION_TYPES, extract # noqa: F401\nfrom .core import dr # noqa: F401\nfrom .core.context import ClusterArchiveContext, HostContext, HostArchiveContext # noqa: F401\nfrom .core.dr import SkipComponent # noqa: F401\nfrom .core.hydration import create_context\nfrom .core.plugins import combiner, fact, metadata, parser, rule # noqa: F401\nfrom .core.plugins import datasource, condition, incident # noqa: F401\nfrom .core.plugins import make_response, make_metadata, make_fingerprint # noqa: F401\nfrom .core.plugins import make_pass, make_fail # noqa: F401\nfrom .core.filters import add_filter, apply_filters, get_filters # noqa: F401\nfrom .formats import get_formatter\nfrom .parsers 
import get_active_lines # noqa: F401\nfrom .util import defaults # noqa: F401\n\nlog = logging.getLogger(__name__)\n\n\npackage_info = dict((k, None) for k in [\"RELEASE\", \"COMMIT\", \"VERSION\", \"NAME\"])\n\n\nfor name in package_info:\n package_info[name] = pkgutil.get_data(__name__, name).strip().decode(\"utf-8\")\n\n\ndef get_nvr():\n return \"{0}-{1}-{2}\".format(package_info[\"NAME\"],\n package_info[\"VERSION\"],\n package_info[\"RELEASE\"])\n\n\nRULES_STATUS = {}\n\"\"\"\nMapping of dictionaries containing nvr and commitid for each rule repo included\nin this instance\n\n{\"rule_repo_1\": {\"version\": nvr(), \"commit\": sha1}}\n\"\"\"\n\n\ndef add_status(name, nvr, commit):\n \"\"\"\n Rule repositories should call this method in their package __init__ to\n register their version information.\n \"\"\"\n RULES_STATUS[name] = {\"version\": nvr, \"commit\": commit}\n\n\ndef process_dir(broker, root, graph, context, use_pandas=False):\n ctx = create_context(root, context)\n log.debug(\"Processing %s with %s\" % (root, ctx))\n\n if isinstance(ctx, ClusterArchiveContext):\n from .core.cluster import process_cluster\n archives = [f for f in ctx.all_files if f.endswith(COMPRESSION_TYPES)]\n return process_cluster(archives, use_pandas=use_pandas, broker=broker)\n\n broker[ctx.__class__] = ctx\n broker = dr.run(graph, broker=broker)\n return broker\n\n\ndef _run(broker, graph=None, root=None, context=None, use_pandas=False):\n \"\"\"\n run is a general interface that is meant for stand alone scripts to use\n when executing insights components.\n\n Args:\n root (str): None will causes a host collection in which command and\n file specs are run. A directory or archive path will cause\n collection from the directory or archive, and only file type specs\n or those that depend on `insights.core.context.HostArchiveContext`\n will execute.\n component (function or class): The component to execute. Will only execute\n the component and its dependency graph. If None, all components with\n met dependencies will execute.\n\n Returns:\n broker: object containing the result of the evaluation.\n \"\"\"\n\n if not root:\n context = context or HostContext\n broker[context] = context()\n return dr.run(graph, broker=broker)\n\n if os.path.isdir(root):\n return process_dir(broker, root, graph, context, use_pandas)\n else:\n with extract(root) as ex:\n return process_dir(broker, ex.tmp_dir, graph, context, use_pandas)\n\n\ndef apply_configs(configs):\n \"\"\"\n Configures components. They can be enabled or disabled, have timeouts set\n if applicable, and have metadata customized. Valid keys are name, enabled,\n metadata, and timeout.\n\n Args:\n configs (list): a list of dictionaries with the following keys:\n name, enabled, metadata, and timeout. All keys are optional except\n name.\n\n name is the prefix or exact name of any loaded component. Any\n component starting with name will have the associated configuration\n applied.\n\n enabled is whether the matching components will execute even if\n their dependencies are met. Defaults to True.\n\n timeout sets the class level timeout attribute of any component so\n long as the attribute already exists.\n\n metadata is any dictionary that you want to attach to the\n component. 
The dictionary can be retrieved by the component at\n runtime.\n \"\"\"\n delegate_keys = sorted(dr.DELEGATES, key=dr.get_name)\n for comp_cfg in configs:\n name = comp_cfg[\"name\"]\n for c in delegate_keys:\n delegate = dr.DELEGATES[c]\n cname = dr.get_name(c)\n if cname.startswith(name):\n dr.ENABLED[c] = comp_cfg.get(\"enabled\", True)\n delegate.metadata.update(comp_cfg.get(\"metadata\", {}))\n for k, v in delegate.metadata.items():\n if hasattr(c, k):\n setattr(c, k, v)\n if hasattr(c, \"timeout\"):\n c.timeout = comp_cfg.get(\"timeout\", c.timeout)\n if cname == name:\n break\n\n\ndef _load_context(path):\n if path is None:\n return\n\n if \".\" not in path:\n path = \".\".join([\"insights.core.context\", path])\n return dr.get_component(path)\n\n\ndef run(component=None, root=None, print_summary=False,\n context=None, use_pandas=False,\n print_component=None):\n\n from .core import dr\n dr.load_components(\"insights.specs.default\")\n dr.load_components(\"insights.specs.insights_archive\")\n dr.load_components(\"insights.specs.sos_archive\")\n dr.load_components(\"insights.specs.jdr_archive\")\n\n args = None\n formatter = None\n if print_summary:\n import argparse\n import logging\n p = argparse.ArgumentParser(add_help=False)\n p.add_argument(\"archive\", nargs=\"?\", help=\"Archive or directory to analyze.\")\n p.add_argument(\"-p\", \"--plugins\", default=\"\", help=\"Comma-separated list without spaces of package(s) or module(s) containing plugins.\")\n p.add_argument(\"-c\", \"--config\", help=\"Configure components.\")\n p.add_argument(\"-v\", \"--verbose\", help=\"Verbose output.\", action=\"store_true\")\n p.add_argument(\"-f\", \"--format\", help=\"Output format.\", default=\"insights.formats.text\")\n p.add_argument(\"-D\", \"--debug\", help=\"Verbose debug output.\", action=\"store_true\")\n p.add_argument(\"--context\", help=\"Execution Context. 
Defaults to HostContext if an archive isn't passed.\")\n p.add_argument(\"--pandas\", action=\"store_true\", help=\"Use pandas dataframes with cluster rules.\")\n\n class Args(object):\n pass\n\n args = Args()\n p.parse_known_args(namespace=args)\n p = argparse.ArgumentParser(parents=[p])\n args.format = \"insights.formats._json\" if args.format == \"json\" else args.format\n args.format = \"insights.formats._yaml\" if args.format == \"yaml\" else args.format\n fmt = args.format if \".\" in args.format else \"insights.formats.\" + args.format\n Formatter = dr.get_component(fmt)\n if not Formatter:\n dr.load_components(fmt, continue_on_error=False)\n Formatter = get_formatter(fmt)\n Formatter.configure(p)\n p.parse_args(namespace=args)\n formatter = Formatter(args)\n\n logging.basicConfig(level=logging.DEBUG if args.debug else logging.INFO if args.verbose else logging.ERROR)\n context = _load_context(args.context) or context\n use_pandas = args.pandas or use_pandas\n\n root = args.archive or root\n if root:\n root = os.path.realpath(root)\n\n plugins = []\n if args.plugins:\n for path in args.plugins.split(\",\"):\n path = path.strip()\n if path.endswith(\".py\"):\n path, _ = os.path.splitext(path)\n path = path.rstrip(\"/\").replace(\"/\", \".\")\n plugins.append(path)\n\n for p in plugins:\n dr.load_components(p, continue_on_error=False)\n\n if args.config:\n with open(args.config) as f:\n apply_configs(yaml.load(f))\n\n if component is None:\n component = []\n plugins = tuple(plugins)\n for c in dr.DELEGATES:\n if c.__module__.startswith(plugins):\n component.append(c)\n\n if component:\n if not isinstance(component, (list, set)):\n component = [component]\n graph = {}\n for c in component:\n graph.update(dr.get_dependency_graph(c))\n else:\n graph = dr.COMPONENTS[dr.GROUPS.single]\n\n broker = dr.Broker()\n\n if formatter:\n formatter.preprocess(broker)\n broker = _run(broker, graph, root, context=context, use_pandas=use_pandas)\n formatter.postprocess(broker)\n elif print_component:\n broker = _run(broker, graph, root, context=context, use_pandas=use_pandas)\n broker.print_component(print_component)\n else:\n broker = _run(broker, graph, root, context=context, use_pandas=use_pandas)\n\n return broker\n\n\ndef main():\n if \"\" not in sys.path:\n sys.path.insert(0, \"\")\n run(print_summary=True)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "insights/__init__.py"}]}
| 3,136 | 158 |
| gh_patches_debug_16306 | rasdani/github-patches | git_diff | ARM-DOE__ACT-653 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Example Incorrect
### Description
This example says it's for plotting sounding data but the example uses MET data. We should update this to be what it was originally intended.
https://arm-doe.github.io/ACT/source/auto_examples/plotting/plot_sonde.html
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/plotting/plot_sonde.py`
Content:
```
1 """
2 Plot a timeseries of sounding data
3 ----------------------------------------------------
4
5 This is a simple example for how to plot a timeseries of sounding
6 data from the ARM SGP site.
7
8 Author: Robert Jackson
9 """
10
11 from matplotlib import pyplot as plt
12
13 import act
14
15 files = act.tests.sample_files.EXAMPLE_MET_WILDCARD
16 met_ds = act.io.armfiles.read_netcdf(files)
17 print(met_ds)
18 met_temp = met_ds.temp_mean
19 met_rh = met_ds.rh_mean
20 met_lcl = (20.0 + met_temp / 5.0) * (100.0 - met_rh) / 1000.0
21 met_ds['met_lcl'] = met_lcl * 1000.0
22 met_ds['met_lcl'].attrs['units'] = 'm'
23 met_ds['met_lcl'].attrs['long_name'] = 'LCL Calculated from SGP MET E13'
24
25 # Plot data
26 display = act.plotting.TimeSeriesDisplay(met_ds)
27 display.add_subplots((3,), figsize=(15, 10))
28 display.plot('wspd_vec_mean', subplot_index=(0,))
29 display.plot('temp_mean', subplot_index=(1,))
30 display.plot('rh_mean', subplot_index=(2,))
31 plt.show()
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/plotting/plot_sonde.py b/examples/plotting/plot_sonde.py
--- a/examples/plotting/plot_sonde.py
+++ b/examples/plotting/plot_sonde.py
@@ -12,20 +12,14 @@
import act
-files = act.tests.sample_files.EXAMPLE_MET_WILDCARD
-met_ds = act.io.armfiles.read_netcdf(files)
-print(met_ds)
-met_temp = met_ds.temp_mean
-met_rh = met_ds.rh_mean
-met_lcl = (20.0 + met_temp / 5.0) * (100.0 - met_rh) / 1000.0
-met_ds['met_lcl'] = met_lcl * 1000.0
-met_ds['met_lcl'].attrs['units'] = 'm'
-met_ds['met_lcl'].attrs['long_name'] = 'LCL Calculated from SGP MET E13'
+files = act.tests.sample_files.EXAMPLE_SONDE1
+sonde_ds = act.io.armfiles.read_netcdf(files)
+print(sonde_ds)
# Plot data
-display = act.plotting.TimeSeriesDisplay(met_ds)
+display = act.plotting.TimeSeriesDisplay(sonde_ds)
display.add_subplots((3,), figsize=(15, 10))
-display.plot('wspd_vec_mean', subplot_index=(0,))
-display.plot('temp_mean', subplot_index=(1,))
-display.plot('rh_mean', subplot_index=(2,))
+display.plot('wspd', subplot_index=(0,))
+display.plot('tdry', subplot_index=(1,))
+display.plot('rh', subplot_index=(2,))
plt.show()
|
{"golden_diff": "diff --git a/examples/plotting/plot_sonde.py b/examples/plotting/plot_sonde.py\n--- a/examples/plotting/plot_sonde.py\n+++ b/examples/plotting/plot_sonde.py\n@@ -12,20 +12,14 @@\n \n import act\n \n-files = act.tests.sample_files.EXAMPLE_MET_WILDCARD\n-met_ds = act.io.armfiles.read_netcdf(files)\n-print(met_ds)\n-met_temp = met_ds.temp_mean\n-met_rh = met_ds.rh_mean\n-met_lcl = (20.0 + met_temp / 5.0) * (100.0 - met_rh) / 1000.0\n-met_ds['met_lcl'] = met_lcl * 1000.0\n-met_ds['met_lcl'].attrs['units'] = 'm'\n-met_ds['met_lcl'].attrs['long_name'] = 'LCL Calculated from SGP MET E13'\n+files = act.tests.sample_files.EXAMPLE_SONDE1\n+sonde_ds = act.io.armfiles.read_netcdf(files)\n+print(sonde_ds)\n \n # Plot data\n-display = act.plotting.TimeSeriesDisplay(met_ds)\n+display = act.plotting.TimeSeriesDisplay(sonde_ds)\n display.add_subplots((3,), figsize=(15, 10))\n-display.plot('wspd_vec_mean', subplot_index=(0,))\n-display.plot('temp_mean', subplot_index=(1,))\n-display.plot('rh_mean', subplot_index=(2,))\n+display.plot('wspd', subplot_index=(0,))\n+display.plot('tdry', subplot_index=(1,))\n+display.plot('rh', subplot_index=(2,))\n plt.show()\n", "issue": "Example Incorrect\n### Description\r\n\r\nThis example says it's for plotting sounding data but the example uses MET data. We should update this to be what it was originally intended.\r\nhttps://arm-doe.github.io/ACT/source/auto_examples/plotting/plot_sonde.html\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nPlot a timeseries of sounding data\n----------------------------------------------------\n\nThis is a simple example for how to plot a timeseries of sounding\ndata from the ARM SGP site.\n\nAuthor: Robert Jackson\n\"\"\"\n\nfrom matplotlib import pyplot as plt\n\nimport act\n\nfiles = act.tests.sample_files.EXAMPLE_MET_WILDCARD\nmet_ds = act.io.armfiles.read_netcdf(files)\nprint(met_ds)\nmet_temp = met_ds.temp_mean\nmet_rh = met_ds.rh_mean\nmet_lcl = (20.0 + met_temp / 5.0) * (100.0 - met_rh) / 1000.0\nmet_ds['met_lcl'] = met_lcl * 1000.0\nmet_ds['met_lcl'].attrs['units'] = 'm'\nmet_ds['met_lcl'].attrs['long_name'] = 'LCL Calculated from SGP MET E13'\n\n# Plot data\ndisplay = act.plotting.TimeSeriesDisplay(met_ds)\ndisplay.add_subplots((3,), figsize=(15, 10))\ndisplay.plot('wspd_vec_mean', subplot_index=(0,))\ndisplay.plot('temp_mean', subplot_index=(1,))\ndisplay.plot('rh_mean', subplot_index=(2,))\nplt.show()\n", "path": "examples/plotting/plot_sonde.py"}], "after_files": [{"content": "\"\"\"\nPlot a timeseries of sounding data\n----------------------------------------------------\n\nThis is a simple example for how to plot a timeseries of sounding\ndata from the ARM SGP site.\n\nAuthor: Robert Jackson\n\"\"\"\n\nfrom matplotlib import pyplot as plt\n\nimport act\n\nfiles = act.tests.sample_files.EXAMPLE_SONDE1\nsonde_ds = act.io.armfiles.read_netcdf(files)\nprint(sonde_ds)\n\n# Plot data\ndisplay = act.plotting.TimeSeriesDisplay(sonde_ds)\ndisplay.add_subplots((3,), figsize=(15, 10))\ndisplay.plot('wspd', subplot_index=(0,))\ndisplay.plot('tdry', subplot_index=(1,))\ndisplay.plot('rh', subplot_index=(2,))\nplt.show()\n", "path": "examples/plotting/plot_sonde.py"}]}
| 652 | 379 |
| gh_patches_debug_32364 | rasdani/github-patches | git_diff | pwndbg__pwndbg-1158 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
nextproginstr doesn't stop when reaching the program instructions
The `nextproginstr` command should stop when it reaches an instruction in the binary, but it seems to reach those instructions and continue executing instead of breaking.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwndbg/proc.py`
Content:
```
1 """
2 Provides values which would be available from /proc which
3 are not fulfilled by other modules and some process/gdb flow
4 related information.
5 """
6
7 import functools
8 import sys
9 from types import ModuleType
10
11 import gdb
12
13 import pwndbg.gdblib.qemu
14 import pwndbg.lib.memoize
15
16
17 class module(ModuleType):
18 @property
19 def pid(self):
20 # QEMU usermode emualtion always returns 42000 for some reason.
21 # In any case, we can't use the info.
22 if pwndbg.gdblib.qemu.is_qemu_usermode():
23 return pwndbg.gdblib.qemu.pid()
24
25 i = gdb.selected_inferior()
26 if i is not None:
27 return i.pid
28 return 0
29
30 @property
31 def tid(self):
32 if pwndbg.gdblib.qemu.is_qemu_usermode():
33 return pwndbg.gdblib.qemu.pid()
34
35 i = gdb.selected_thread()
36 if i is not None:
37 return i.ptid[1]
38
39 return self.pid
40
41 @property
42 def alive(self):
43 return gdb.selected_thread() is not None
44
45 @property
46 def thread_is_stopped(self):
47 """
48 This detects whether selected thread is stopped.
49 It is not stopped in situations when gdb is executing commands
50 that are attached to a breakpoint by `command` command.
51
52 For more info see issue #229 ( https://github.com/pwndbg/pwndbg/issues/299 )
53 :return: Whether gdb executes commands attached to bp with `command` command.
54 """
55 return gdb.selected_thread().is_stopped()
56
57 @property
58 def exe(self):
59 """
60 Returns the debugged file name.
61
62 On remote targets, this may be prefixed with "target:" string.
63 See this by executing those in two terminals:
64 1. gdbserver 127.0.0.1:1234 /bin/ls
65 2. gdb -ex "target remote :1234" -ex "pi pwndbg.proc.exe"
66
67 If you need to process the debugged file use:
68 `pwndbg.file.get_file(pwndbg.proc.exe)`
69 """
70 return gdb.current_progspace().filename
71
72 @property
73 def mem_page(self):
74 return next(p for p in pwndbg.vmmap.get() if p.objfile == self.exe)
75
76 def OnlyWhenRunning(self, func):
77 @functools.wraps(func)
78 def wrapper(*a, **kw):
79 if self.alive:
80 return func(*a, **kw)
81
82 return wrapper
83
84
85 # To prevent garbage collection
86 tether = sys.modules[__name__]
87
88 sys.modules[__name__] = module(__name__, "")
89
```
Path: `pwndbg/commands/next.py`
Content:
```
1 """
2 Stepping until an event occurs
3 """
4
5 import argparse
6
7 import gdb
8
9 import pwndbg.commands
10 import pwndbg.gdblib.next
11
12
13 @pwndbg.commands.ArgparsedCommand("Breaks at the next jump instruction.", aliases=["nextjump"])
14 @pwndbg.commands.OnlyWhenRunning
15 def nextjmp():
16 """Breaks at the next jump instruction"""
17 if pwndbg.gdblib.next.break_next_branch():
18 pwndbg.commands.context.context()
19
20
21 parser = argparse.ArgumentParser(description="""Breaks at the next call instruction""")
22 parser.add_argument(
23 "symbol_regex",
24 type=str,
25 default=None,
26 nargs="?",
27 help="A regex matching the name of next symbol to be broken on before calling.",
28 )
29
30
31 @pwndbg.commands.ArgparsedCommand(parser)
32 @pwndbg.commands.OnlyWhenRunning
33 def nextcall(symbol_regex=None):
34 """Breaks at the next call instruction"""
35 if pwndbg.gdblib.next.break_next_call(symbol_regex):
36 pwndbg.commands.context.context()
37
38
39 @pwndbg.commands.ArgparsedCommand("""Breaks at next return-like instruction""")
40 @pwndbg.commands.OnlyWhenRunning
41 def nextret():
42 """Breaks at next return-like instruction"""
43 if pwndbg.gdblib.next.break_next_ret():
44 pwndbg.commands.context.context()
45
46
47 @pwndbg.commands.ArgparsedCommand("""Breaks at next return-like instruction by 'stepping' to it""")
48 @pwndbg.commands.OnlyWhenRunning
49 def stepret():
50 """Breaks at next return-like instruction by 'stepping' to it"""
51 while (
52 pwndbg.proc.alive
53 and not pwndbg.gdblib.next.break_next_ret()
54 and pwndbg.gdblib.next.break_next_branch()
55 ):
56 # Here we are e.g. on a CALL instruction (temporarily breakpointed by `break_next_branch`)
57 # We need to step so that we take this branch instead of ignoring it
58 gdb.execute("si")
59 continue
60
61 if pwndbg.proc.alive:
62 pwndbg.commands.context.context()
63
64
65 @pwndbg.commands.ArgparsedCommand(
66 """Breaks at the next instruction that belongs to the running program"""
67 )
68 @pwndbg.commands.OnlyWhenRunning
69 def nextproginstr():
70 """Breaks at the next instruction that belongs to the running program"""
71 if pwndbg.gdblib.next.break_on_program_code():
72 pwndbg.commands.context.context()
73
74
75 parser = argparse.ArgumentParser(
76 description="""Sets a breakpoint on the instruction after this one"""
77 )
78 parser.add_argument("addr", type=int, default=None, nargs="?", help="The address to break after.")
79
80
81 @pwndbg.commands.ArgparsedCommand(parser, aliases=["so"])
82 @pwndbg.commands.OnlyWhenRunning
83 def stepover(addr=None):
84 """Sets a breakpoint on the instruction after this one"""
85 pwndbg.gdblib.next.break_on_next(addr)
86
87
88 @pwndbg.commands.ArgparsedCommand(
89 "Breaks at the next syscall not taking branches.", aliases=["nextsc"]
90 )
91 @pwndbg.commands.OnlyWhenRunning
92 def nextsyscall():
93 """
94 Breaks at the next syscall not taking branches.
95 """
96 while (
97 pwndbg.proc.alive
98 and not pwndbg.gdblib.next.break_next_interrupt()
99 and pwndbg.gdblib.next.break_next_branch()
100 ):
101 continue
102
103 if pwndbg.proc.alive:
104 pwndbg.commands.context.context()
105
106
107 @pwndbg.commands.ArgparsedCommand(
108 "Breaks at the next syscall by taking branches.", aliases=["stepsc"]
109 )
110 @pwndbg.commands.OnlyWhenRunning
111 def stepsyscall():
112 """
113 Breaks at the next syscall by taking branches.
114 """
115 while (
116 pwndbg.proc.alive
117 and not pwndbg.gdblib.next.break_next_interrupt()
118 and pwndbg.gdblib.next.break_next_branch()
119 ):
120 # Here we are e.g. on a CALL instruction (temporarily breakpointed by `break_next_branch`)
121 # We need to step so that we take this branch instead of ignoring it
122 gdb.execute("si")
123 continue
124
125 if pwndbg.proc.alive:
126 pwndbg.commands.context.context()
127
```
Path: `pwndbg/gdblib/next.py`
Content:
```
1 """
2 Commands for setting temporary breakpoints on the next
3 instruction of some type (call, branch, etc.)
4 """
5
6 import re
7
8 import capstone
9 import gdb
10
11 import pwndbg.disasm
12 import pwndbg.gdblib.events
13 import pwndbg.gdblib.regs
14 import pwndbg.proc
15 from pwndbg.color import message
16
17 jumps = set((capstone.CS_GRP_CALL, capstone.CS_GRP_JUMP, capstone.CS_GRP_RET, capstone.CS_GRP_IRET))
18
19 interrupts = set((capstone.CS_GRP_INT,))
20
21
22 @pwndbg.gdblib.events.exit
23 def clear_temp_breaks():
24 if not pwndbg.proc.alive:
25 breakpoints = gdb.breakpoints()
26 if breakpoints:
27 for bp in breakpoints:
28 if (
29 bp.temporary and not bp.visible
30 ): # visible is used instead of internal because older gdb's don't support internal
31 bp.delete()
32
33
34 def next_int(address=None):
35 """
36 If there is a syscall in the current basic black,
37 return the instruction of the one closest to $PC.
38
39 Otherwise, return None.
40 """
41 if address is None:
42 ins = pwndbg.disasm.one(pwndbg.gdblib.regs.pc)
43 if not ins:
44 return None
45 address = ins.next
46
47 ins = pwndbg.disasm.one(address)
48 while ins:
49 if set(ins.groups) & jumps:
50 return None
51 if set(ins.groups) & interrupts:
52 return ins
53 ins = pwndbg.disasm.one(ins.next)
54
55 return None
56
57
58 def next_branch(address=None):
59 if address is None:
60 ins = pwndbg.disasm.one(pwndbg.gdblib.regs.pc)
61 if not ins:
62 return None
63 address = ins.next
64
65 ins = pwndbg.disasm.one(address)
66 while ins:
67 if set(ins.groups) & jumps:
68 return ins
69 ins = pwndbg.disasm.one(ins.next)
70
71 return None
72
73
74 def break_next_branch(address=None):
75 ins = next_branch(address)
76
77 if ins:
78 gdb.Breakpoint("*%#x" % ins.address, internal=True, temporary=True)
79 gdb.execute("continue", from_tty=False, to_string=True)
80 return ins
81
82
83 def break_next_interrupt(address=None):
84 ins = next_int(address)
85
86 if ins:
87 gdb.Breakpoint("*%#x" % ins.address, internal=True, temporary=True)
88 gdb.execute("continue", from_tty=False, to_string=True)
89 return ins
90
91
92 def break_next_call(symbol_regex=None):
93 while pwndbg.proc.alive:
94 ins = break_next_branch()
95
96 if not ins:
97 break
98
99 # continue if not a call
100 if capstone.CS_GRP_CALL not in ins.groups:
101 continue
102
103 # return call if we don't search for a symbol
104 if not symbol_regex:
105 return ins
106
107 # return call if we match target address
108 if ins.target_const and re.match("%s$" % symbol_regex, hex(ins.target)):
109 return ins
110
111 # return call if we match symbol name
112 if ins.symbol and re.match("%s$" % symbol_regex, ins.symbol):
113 return ins
114
115
116 def break_next_ret(address=None):
117 while pwndbg.proc.alive:
118 ins = break_next_branch(address)
119
120 if not ins:
121 break
122
123 if capstone.CS_GRP_RET in ins.groups:
124 return ins
125
126
127 def break_on_program_code():
128 """
129 Breaks on next instruction that belongs to process' objfile code.
130 :return: True for success, False when process ended or when pc is at the code.
131 """
132 mp = pwndbg.proc.mem_page
133 start = mp.start
134 end = mp.end
135
136 if start <= pwndbg.gdblib.regs.pc < end:
137 print(message.error("The pc is already at the binary objfile code. Not stepping."))
138 return False
139
140 while pwndbg.proc.alive:
141 gdb.execute("si", from_tty=False, to_string=False)
142
143 addr = pwndbg.gdblib.regs.pc
144 if start <= addr < end:
145 return True
146
147 return False
148
149
150 def break_on_next(address=None):
151 address = address or pwndbg.gdblib.regs.pc
152 ins = pwndbg.disasm.one(address)
153
154 gdb.Breakpoint("*%#x" % (ins.address + ins.size), temporary=True)
155 gdb.execute("continue", from_tty=False, to_string=True)
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pwndbg/commands/next.py b/pwndbg/commands/next.py
--- a/pwndbg/commands/next.py
+++ b/pwndbg/commands/next.py
@@ -67,9 +67,7 @@
)
@pwndbg.commands.OnlyWhenRunning
def nextproginstr():
- """Breaks at the next instruction that belongs to the running program"""
- if pwndbg.gdblib.next.break_on_program_code():
- pwndbg.commands.context.context()
+ pwndbg.gdblib.next.break_on_program_code()
parser = argparse.ArgumentParser(
diff --git a/pwndbg/gdblib/next.py b/pwndbg/gdblib/next.py
--- a/pwndbg/gdblib/next.py
+++ b/pwndbg/gdblib/next.py
@@ -129,20 +129,24 @@
Breaks on next instruction that belongs to process' objfile code.
:return: True for success, False when process ended or when pc is at the code.
"""
- mp = pwndbg.proc.mem_page
- start = mp.start
- end = mp.end
+ exe = pwndbg.proc.exe
+ binary_exec_page_ranges = [
+ (p.start, p.end) for p in pwndbg.vmmap.get() if p.objfile == exe and p.execute
+ ]
- if start <= pwndbg.gdblib.regs.pc < end:
- print(message.error("The pc is already at the binary objfile code. Not stepping."))
- return False
+ pc = pwndbg.gdblib.regs.pc
+ for start, end in binary_exec_page_ranges:
+ if start <= pc < end:
+ print(message.error("The pc is already at the binary objfile code. Not stepping."))
+ return False
while pwndbg.proc.alive:
gdb.execute("si", from_tty=False, to_string=False)
- addr = pwndbg.gdblib.regs.pc
- if start <= addr < end:
- return True
+ pc = pwndbg.gdblib.regs.pc
+ for start, end in binary_exec_page_ranges:
+ if start <= pc < end:
+ return True
return False
diff --git a/pwndbg/proc.py b/pwndbg/proc.py
--- a/pwndbg/proc.py
+++ b/pwndbg/proc.py
@@ -69,10 +69,6 @@
"""
return gdb.current_progspace().filename
- @property
- def mem_page(self):
- return next(p for p in pwndbg.vmmap.get() if p.objfile == self.exe)
-
def OnlyWhenRunning(self, func):
@functools.wraps(func)
def wrapper(*a, **kw):
|
{"golden_diff": "diff --git a/pwndbg/commands/next.py b/pwndbg/commands/next.py\n--- a/pwndbg/commands/next.py\n+++ b/pwndbg/commands/next.py\n@@ -67,9 +67,7 @@\n )\n @pwndbg.commands.OnlyWhenRunning\n def nextproginstr():\n- \"\"\"Breaks at the next instruction that belongs to the running program\"\"\"\n- if pwndbg.gdblib.next.break_on_program_code():\n- pwndbg.commands.context.context()\n+ pwndbg.gdblib.next.break_on_program_code()\n \n \n parser = argparse.ArgumentParser(\ndiff --git a/pwndbg/gdblib/next.py b/pwndbg/gdblib/next.py\n--- a/pwndbg/gdblib/next.py\n+++ b/pwndbg/gdblib/next.py\n@@ -129,20 +129,24 @@\n Breaks on next instruction that belongs to process' objfile code.\n :return: True for success, False when process ended or when pc is at the code.\n \"\"\"\n- mp = pwndbg.proc.mem_page\n- start = mp.start\n- end = mp.end\n+ exe = pwndbg.proc.exe\n+ binary_exec_page_ranges = [\n+ (p.start, p.end) for p in pwndbg.vmmap.get() if p.objfile == exe and p.execute\n+ ]\n \n- if start <= pwndbg.gdblib.regs.pc < end:\n- print(message.error(\"The pc is already at the binary objfile code. Not stepping.\"))\n- return False\n+ pc = pwndbg.gdblib.regs.pc\n+ for start, end in binary_exec_page_ranges:\n+ if start <= pc < end:\n+ print(message.error(\"The pc is already at the binary objfile code. Not stepping.\"))\n+ return False\n \n while pwndbg.proc.alive:\n gdb.execute(\"si\", from_tty=False, to_string=False)\n \n- addr = pwndbg.gdblib.regs.pc\n- if start <= addr < end:\n- return True\n+ pc = pwndbg.gdblib.regs.pc\n+ for start, end in binary_exec_page_ranges:\n+ if start <= pc < end:\n+ return True\n \n return False\n \ndiff --git a/pwndbg/proc.py b/pwndbg/proc.py\n--- a/pwndbg/proc.py\n+++ b/pwndbg/proc.py\n@@ -69,10 +69,6 @@\n \"\"\"\n return gdb.current_progspace().filename\n \n- @property\n- def mem_page(self):\n- return next(p for p in pwndbg.vmmap.get() if p.objfile == self.exe)\n-\n def OnlyWhenRunning(self, func):\n @functools.wraps(func)\n def wrapper(*a, **kw):\n", "issue": "nextproginstr doesn't stop when reaching the program instructions\nThe `nextproginstr` command should stop when it reaches an instruction in the binary, but it seems to reach those instructions and continue executing instead of breaking.\n", "before_files": [{"content": "\"\"\"\nProvides values which would be available from /proc which\nare not fulfilled by other modules and some process/gdb flow\nrelated information.\n\"\"\"\n\nimport functools\nimport sys\nfrom types import ModuleType\n\nimport gdb\n\nimport pwndbg.gdblib.qemu\nimport pwndbg.lib.memoize\n\n\nclass module(ModuleType):\n @property\n def pid(self):\n # QEMU usermode emualtion always returns 42000 for some reason.\n # In any case, we can't use the info.\n if pwndbg.gdblib.qemu.is_qemu_usermode():\n return pwndbg.gdblib.qemu.pid()\n\n i = gdb.selected_inferior()\n if i is not None:\n return i.pid\n return 0\n\n @property\n def tid(self):\n if pwndbg.gdblib.qemu.is_qemu_usermode():\n return pwndbg.gdblib.qemu.pid()\n\n i = gdb.selected_thread()\n if i is not None:\n return i.ptid[1]\n\n return self.pid\n\n @property\n def alive(self):\n return gdb.selected_thread() is not None\n\n @property\n def thread_is_stopped(self):\n \"\"\"\n This detects whether selected thread is stopped.\n It is not stopped in situations when gdb is executing commands\n that are attached to a breakpoint by `command` command.\n\n For more info see issue #229 ( https://github.com/pwndbg/pwndbg/issues/299 )\n :return: Whether gdb executes commands attached to bp with `command` 
command.\n \"\"\"\n return gdb.selected_thread().is_stopped()\n\n @property\n def exe(self):\n \"\"\"\n Returns the debugged file name.\n\n On remote targets, this may be prefixed with \"target:\" string.\n See this by executing those in two terminals:\n 1. gdbserver 127.0.0.1:1234 /bin/ls\n 2. gdb -ex \"target remote :1234\" -ex \"pi pwndbg.proc.exe\"\n\n If you need to process the debugged file use:\n `pwndbg.file.get_file(pwndbg.proc.exe)`\n \"\"\"\n return gdb.current_progspace().filename\n\n @property\n def mem_page(self):\n return next(p for p in pwndbg.vmmap.get() if p.objfile == self.exe)\n\n def OnlyWhenRunning(self, func):\n @functools.wraps(func)\n def wrapper(*a, **kw):\n if self.alive:\n return func(*a, **kw)\n\n return wrapper\n\n\n# To prevent garbage collection\ntether = sys.modules[__name__]\n\nsys.modules[__name__] = module(__name__, \"\")\n", "path": "pwndbg/proc.py"}, {"content": "\"\"\"\nStepping until an event occurs\n\"\"\"\n\nimport argparse\n\nimport gdb\n\nimport pwndbg.commands\nimport pwndbg.gdblib.next\n\n\[email protected](\"Breaks at the next jump instruction.\", aliases=[\"nextjump\"])\[email protected]\ndef nextjmp():\n \"\"\"Breaks at the next jump instruction\"\"\"\n if pwndbg.gdblib.next.break_next_branch():\n pwndbg.commands.context.context()\n\n\nparser = argparse.ArgumentParser(description=\"\"\"Breaks at the next call instruction\"\"\")\nparser.add_argument(\n \"symbol_regex\",\n type=str,\n default=None,\n nargs=\"?\",\n help=\"A regex matching the name of next symbol to be broken on before calling.\",\n)\n\n\[email protected](parser)\[email protected]\ndef nextcall(symbol_regex=None):\n \"\"\"Breaks at the next call instruction\"\"\"\n if pwndbg.gdblib.next.break_next_call(symbol_regex):\n pwndbg.commands.context.context()\n\n\[email protected](\"\"\"Breaks at next return-like instruction\"\"\")\[email protected]\ndef nextret():\n \"\"\"Breaks at next return-like instruction\"\"\"\n if pwndbg.gdblib.next.break_next_ret():\n pwndbg.commands.context.context()\n\n\[email protected](\"\"\"Breaks at next return-like instruction by 'stepping' to it\"\"\")\[email protected]\ndef stepret():\n \"\"\"Breaks at next return-like instruction by 'stepping' to it\"\"\"\n while (\n pwndbg.proc.alive\n and not pwndbg.gdblib.next.break_next_ret()\n and pwndbg.gdblib.next.break_next_branch()\n ):\n # Here we are e.g. 
on a CALL instruction (temporarily breakpointed by `break_next_branch`)\n # We need to step so that we take this branch instead of ignoring it\n gdb.execute(\"si\")\n continue\n\n if pwndbg.proc.alive:\n pwndbg.commands.context.context()\n\n\[email protected](\n \"\"\"Breaks at the next instruction that belongs to the running program\"\"\"\n)\[email protected]\ndef nextproginstr():\n \"\"\"Breaks at the next instruction that belongs to the running program\"\"\"\n if pwndbg.gdblib.next.break_on_program_code():\n pwndbg.commands.context.context()\n\n\nparser = argparse.ArgumentParser(\n description=\"\"\"Sets a breakpoint on the instruction after this one\"\"\"\n)\nparser.add_argument(\"addr\", type=int, default=None, nargs=\"?\", help=\"The address to break after.\")\n\n\[email protected](parser, aliases=[\"so\"])\[email protected]\ndef stepover(addr=None):\n \"\"\"Sets a breakpoint on the instruction after this one\"\"\"\n pwndbg.gdblib.next.break_on_next(addr)\n\n\[email protected](\n \"Breaks at the next syscall not taking branches.\", aliases=[\"nextsc\"]\n)\[email protected]\ndef nextsyscall():\n \"\"\"\n Breaks at the next syscall not taking branches.\n \"\"\"\n while (\n pwndbg.proc.alive\n and not pwndbg.gdblib.next.break_next_interrupt()\n and pwndbg.gdblib.next.break_next_branch()\n ):\n continue\n\n if pwndbg.proc.alive:\n pwndbg.commands.context.context()\n\n\[email protected](\n \"Breaks at the next syscall by taking branches.\", aliases=[\"stepsc\"]\n)\[email protected]\ndef stepsyscall():\n \"\"\"\n Breaks at the next syscall by taking branches.\n \"\"\"\n while (\n pwndbg.proc.alive\n and not pwndbg.gdblib.next.break_next_interrupt()\n and pwndbg.gdblib.next.break_next_branch()\n ):\n # Here we are e.g. on a CALL instruction (temporarily breakpointed by `break_next_branch`)\n # We need to step so that we take this branch instead of ignoring it\n gdb.execute(\"si\")\n continue\n\n if pwndbg.proc.alive:\n pwndbg.commands.context.context()\n", "path": "pwndbg/commands/next.py"}, {"content": "\"\"\"\nCommands for setting temporary breakpoints on the next\ninstruction of some type (call, branch, etc.)\n\"\"\"\n\nimport re\n\nimport capstone\nimport gdb\n\nimport pwndbg.disasm\nimport pwndbg.gdblib.events\nimport pwndbg.gdblib.regs\nimport pwndbg.proc\nfrom pwndbg.color import message\n\njumps = set((capstone.CS_GRP_CALL, capstone.CS_GRP_JUMP, capstone.CS_GRP_RET, capstone.CS_GRP_IRET))\n\ninterrupts = set((capstone.CS_GRP_INT,))\n\n\[email protected]\ndef clear_temp_breaks():\n if not pwndbg.proc.alive:\n breakpoints = gdb.breakpoints()\n if breakpoints:\n for bp in breakpoints:\n if (\n bp.temporary and not bp.visible\n ): # visible is used instead of internal because older gdb's don't support internal\n bp.delete()\n\n\ndef next_int(address=None):\n \"\"\"\n If there is a syscall in the current basic black,\n return the instruction of the one closest to $PC.\n\n Otherwise, return None.\n \"\"\"\n if address is None:\n ins = pwndbg.disasm.one(pwndbg.gdblib.regs.pc)\n if not ins:\n return None\n address = ins.next\n\n ins = pwndbg.disasm.one(address)\n while ins:\n if set(ins.groups) & jumps:\n return None\n if set(ins.groups) & interrupts:\n return ins\n ins = pwndbg.disasm.one(ins.next)\n\n return None\n\n\ndef next_branch(address=None):\n if address is None:\n ins = pwndbg.disasm.one(pwndbg.gdblib.regs.pc)\n if not ins:\n return None\n address = ins.next\n\n ins = pwndbg.disasm.one(address)\n while ins:\n if set(ins.groups) & jumps:\n return ins\n ins = 
pwndbg.disasm.one(ins.next)\n\n return None\n\n\ndef break_next_branch(address=None):\n ins = next_branch(address)\n\n if ins:\n gdb.Breakpoint(\"*%#x\" % ins.address, internal=True, temporary=True)\n gdb.execute(\"continue\", from_tty=False, to_string=True)\n return ins\n\n\ndef break_next_interrupt(address=None):\n ins = next_int(address)\n\n if ins:\n gdb.Breakpoint(\"*%#x\" % ins.address, internal=True, temporary=True)\n gdb.execute(\"continue\", from_tty=False, to_string=True)\n return ins\n\n\ndef break_next_call(symbol_regex=None):\n while pwndbg.proc.alive:\n ins = break_next_branch()\n\n if not ins:\n break\n\n # continue if not a call\n if capstone.CS_GRP_CALL not in ins.groups:\n continue\n\n # return call if we don't search for a symbol\n if not symbol_regex:\n return ins\n\n # return call if we match target address\n if ins.target_const and re.match(\"%s$\" % symbol_regex, hex(ins.target)):\n return ins\n\n # return call if we match symbol name\n if ins.symbol and re.match(\"%s$\" % symbol_regex, ins.symbol):\n return ins\n\n\ndef break_next_ret(address=None):\n while pwndbg.proc.alive:\n ins = break_next_branch(address)\n\n if not ins:\n break\n\n if capstone.CS_GRP_RET in ins.groups:\n return ins\n\n\ndef break_on_program_code():\n \"\"\"\n Breaks on next instruction that belongs to process' objfile code.\n :return: True for success, False when process ended or when pc is at the code.\n \"\"\"\n mp = pwndbg.proc.mem_page\n start = mp.start\n end = mp.end\n\n if start <= pwndbg.gdblib.regs.pc < end:\n print(message.error(\"The pc is already at the binary objfile code. Not stepping.\"))\n return False\n\n while pwndbg.proc.alive:\n gdb.execute(\"si\", from_tty=False, to_string=False)\n\n addr = pwndbg.gdblib.regs.pc\n if start <= addr < end:\n return True\n\n return False\n\n\ndef break_on_next(address=None):\n address = address or pwndbg.gdblib.regs.pc\n ins = pwndbg.disasm.one(address)\n\n gdb.Breakpoint(\"*%#x\" % (ins.address + ins.size), temporary=True)\n gdb.execute(\"continue\", from_tty=False, to_string=True)\n", "path": "pwndbg/gdblib/next.py"}], "after_files": [{"content": "\"\"\"\nProvides values which would be available from /proc which\nare not fulfilled by other modules and some process/gdb flow\nrelated information.\n\"\"\"\n\nimport functools\nimport sys\nfrom types import ModuleType\n\nimport gdb\n\nimport pwndbg.gdblib.qemu\nimport pwndbg.lib.memoize\n\n\nclass module(ModuleType):\n @property\n def pid(self):\n # QEMU usermode emualtion always returns 42000 for some reason.\n # In any case, we can't use the info.\n if pwndbg.gdblib.qemu.is_qemu_usermode():\n return pwndbg.gdblib.qemu.pid()\n\n i = gdb.selected_inferior()\n if i is not None:\n return i.pid\n return 0\n\n @property\n def tid(self):\n if pwndbg.gdblib.qemu.is_qemu_usermode():\n return pwndbg.gdblib.qemu.pid()\n\n i = gdb.selected_thread()\n if i is not None:\n return i.ptid[1]\n\n return self.pid\n\n @property\n def alive(self):\n return gdb.selected_thread() is not None\n\n @property\n def thread_is_stopped(self):\n \"\"\"\n This detects whether selected thread is stopped.\n It is not stopped in situations when gdb is executing commands\n that are attached to a breakpoint by `command` command.\n\n For more info see issue #229 ( https://github.com/pwndbg/pwndbg/issues/299 )\n :return: Whether gdb executes commands attached to bp with `command` command.\n \"\"\"\n return gdb.selected_thread().is_stopped()\n\n @property\n def exe(self):\n \"\"\"\n Returns the debugged file name.\n\n On remote 
targets, this may be prefixed with \"target:\" string.\n See this by executing those in two terminals:\n 1. gdbserver 127.0.0.1:1234 /bin/ls\n 2. gdb -ex \"target remote :1234\" -ex \"pi pwndbg.proc.exe\"\n\n If you need to process the debugged file use:\n `pwndbg.file.get_file(pwndbg.proc.exe)`\n \"\"\"\n return gdb.current_progspace().filename\n\n def OnlyWhenRunning(self, func):\n @functools.wraps(func)\n def wrapper(*a, **kw):\n if self.alive:\n return func(*a, **kw)\n\n return wrapper\n\n\n# To prevent garbage collection\ntether = sys.modules[__name__]\n\nsys.modules[__name__] = module(__name__, \"\")\n", "path": "pwndbg/proc.py"}, {"content": "\"\"\"\nStepping until an event occurs\n\"\"\"\n\nimport argparse\n\nimport gdb\n\nimport pwndbg.commands\nimport pwndbg.gdblib.next\n\n\[email protected](\"Breaks at the next jump instruction.\", aliases=[\"nextjump\"])\[email protected]\ndef nextjmp():\n \"\"\"Breaks at the next jump instruction\"\"\"\n if pwndbg.gdblib.next.break_next_branch():\n pwndbg.commands.context.context()\n\n\nparser = argparse.ArgumentParser(description=\"\"\"Breaks at the next call instruction\"\"\")\nparser.add_argument(\n \"symbol_regex\",\n type=str,\n default=None,\n nargs=\"?\",\n help=\"A regex matching the name of next symbol to be broken on before calling.\",\n)\n\n\[email protected](parser)\[email protected]\ndef nextcall(symbol_regex=None):\n \"\"\"Breaks at the next call instruction\"\"\"\n if pwndbg.gdblib.next.break_next_call(symbol_regex):\n pwndbg.commands.context.context()\n\n\[email protected](\"\"\"Breaks at next return-like instruction\"\"\")\[email protected]\ndef nextret():\n \"\"\"Breaks at next return-like instruction\"\"\"\n if pwndbg.gdblib.next.break_next_ret():\n pwndbg.commands.context.context()\n\n\[email protected](\"\"\"Breaks at next return-like instruction by 'stepping' to it\"\"\")\[email protected]\ndef stepret():\n \"\"\"Breaks at next return-like instruction by 'stepping' to it\"\"\"\n while (\n pwndbg.proc.alive\n and not pwndbg.gdblib.next.break_next_ret()\n and pwndbg.gdblib.next.break_next_branch()\n ):\n # Here we are e.g. 
on a CALL instruction (temporarily breakpointed by `break_next_branch`)\n # We need to step so that we take this branch instead of ignoring it\n gdb.execute(\"si\")\n continue\n\n if pwndbg.proc.alive:\n pwndbg.commands.context.context()\n\n\[email protected](\n \"\"\"Breaks at the next instruction that belongs to the running program\"\"\"\n)\[email protected]\ndef nextproginstr():\n pwndbg.gdblib.next.break_on_program_code()\n\n\nparser = argparse.ArgumentParser(\n description=\"\"\"Sets a breakpoint on the instruction after this one\"\"\"\n)\nparser.add_argument(\"addr\", type=int, default=None, nargs=\"?\", help=\"The address to break after.\")\n\n\[email protected](parser, aliases=[\"so\"])\[email protected]\ndef stepover(addr=None):\n \"\"\"Sets a breakpoint on the instruction after this one\"\"\"\n pwndbg.gdblib.next.break_on_next(addr)\n\n\[email protected](\n \"Breaks at the next syscall not taking branches.\", aliases=[\"nextsc\"]\n)\[email protected]\ndef nextsyscall():\n \"\"\"\n Breaks at the next syscall not taking branches.\n \"\"\"\n while (\n pwndbg.proc.alive\n and not pwndbg.gdblib.next.break_next_interrupt()\n and pwndbg.gdblib.next.break_next_branch()\n ):\n continue\n\n if pwndbg.proc.alive:\n pwndbg.commands.context.context()\n\n\[email protected](\n \"Breaks at the next syscall by taking branches.\", aliases=[\"stepsc\"]\n)\[email protected]\ndef stepsyscall():\n \"\"\"\n Breaks at the next syscall by taking branches.\n \"\"\"\n while (\n pwndbg.proc.alive\n and not pwndbg.gdblib.next.break_next_interrupt()\n and pwndbg.gdblib.next.break_next_branch()\n ):\n # Here we are e.g. on a CALL instruction (temporarily breakpointed by `break_next_branch`)\n # We need to step so that we take this branch instead of ignoring it\n gdb.execute(\"si\")\n continue\n\n if pwndbg.proc.alive:\n pwndbg.commands.context.context()\n", "path": "pwndbg/commands/next.py"}, {"content": "\"\"\"\nCommands for setting temporary breakpoints on the next\ninstruction of some type (call, branch, etc.)\n\"\"\"\n\nimport re\n\nimport capstone\nimport gdb\n\nimport pwndbg.disasm\nimport pwndbg.gdblib.events\nimport pwndbg.gdblib.regs\nimport pwndbg.proc\nfrom pwndbg.color import message\n\njumps = set((capstone.CS_GRP_CALL, capstone.CS_GRP_JUMP, capstone.CS_GRP_RET, capstone.CS_GRP_IRET))\n\ninterrupts = set((capstone.CS_GRP_INT,))\n\n\[email protected]\ndef clear_temp_breaks():\n if not pwndbg.proc.alive:\n breakpoints = gdb.breakpoints()\n if breakpoints:\n for bp in breakpoints:\n if (\n bp.temporary and not bp.visible\n ): # visible is used instead of internal because older gdb's don't support internal\n bp.delete()\n\n\ndef next_int(address=None):\n \"\"\"\n If there is a syscall in the current basic black,\n return the instruction of the one closest to $PC.\n\n Otherwise, return None.\n \"\"\"\n if address is None:\n ins = pwndbg.disasm.one(pwndbg.gdblib.regs.pc)\n if not ins:\n return None\n address = ins.next\n\n ins = pwndbg.disasm.one(address)\n while ins:\n if set(ins.groups) & jumps:\n return None\n if set(ins.groups) & interrupts:\n return ins\n ins = pwndbg.disasm.one(ins.next)\n\n return None\n\n\ndef next_branch(address=None):\n if address is None:\n ins = pwndbg.disasm.one(pwndbg.gdblib.regs.pc)\n if not ins:\n return None\n address = ins.next\n\n ins = pwndbg.disasm.one(address)\n while ins:\n if set(ins.groups) & jumps:\n return ins\n ins = pwndbg.disasm.one(ins.next)\n\n return None\n\n\ndef break_next_branch(address=None):\n ins = next_branch(address)\n\n if ins:\n 
gdb.Breakpoint(\"*%#x\" % ins.address, internal=True, temporary=True)\n gdb.execute(\"continue\", from_tty=False, to_string=True)\n return ins\n\n\ndef break_next_interrupt(address=None):\n ins = next_int(address)\n\n if ins:\n gdb.Breakpoint(\"*%#x\" % ins.address, internal=True, temporary=True)\n gdb.execute(\"continue\", from_tty=False, to_string=True)\n return ins\n\n\ndef break_next_call(symbol_regex=None):\n while pwndbg.proc.alive:\n ins = break_next_branch()\n\n if not ins:\n break\n\n # continue if not a call\n if capstone.CS_GRP_CALL not in ins.groups:\n continue\n\n # return call if we don't search for a symbol\n if not symbol_regex:\n return ins\n\n # return call if we match target address\n if ins.target_const and re.match(\"%s$\" % symbol_regex, hex(ins.target)):\n return ins\n\n # return call if we match symbol name\n if ins.symbol and re.match(\"%s$\" % symbol_regex, ins.symbol):\n return ins\n\n\ndef break_next_ret(address=None):\n while pwndbg.proc.alive:\n ins = break_next_branch(address)\n\n if not ins:\n break\n\n if capstone.CS_GRP_RET in ins.groups:\n return ins\n\n\ndef break_on_program_code():\n \"\"\"\n Breaks on next instruction that belongs to process' objfile code.\n :return: True for success, False when process ended or when pc is at the code.\n \"\"\"\n exe = pwndbg.proc.exe\n binary_exec_page_ranges = [\n (p.start, p.end) for p in pwndbg.vmmap.get() if p.objfile == exe and p.execute\n ]\n\n pc = pwndbg.gdblib.regs.pc\n for start, end in binary_exec_page_ranges:\n if start <= pc < end:\n print(message.error(\"The pc is already at the binary objfile code. Not stepping.\"))\n return False\n\n while pwndbg.proc.alive:\n gdb.execute(\"si\", from_tty=False, to_string=False)\n\n pc = pwndbg.gdblib.regs.pc\n for start, end in binary_exec_page_ranges:\n if start <= pc < end:\n return True\n\n return False\n\n\ndef break_on_next(address=None):\n address = address or pwndbg.gdblib.regs.pc\n ins = pwndbg.disasm.one(address)\n\n gdb.Breakpoint(\"*%#x\" % (ins.address + ins.size), temporary=True)\n gdb.execute(\"continue\", from_tty=False, to_string=True)\n", "path": "pwndbg/gdblib/next.py"}]}
| 3,669 | 635 |
gh_patches_debug_13026
|
rasdani/github-patches
|
git_diff
|
mampfes__hacs_waste_collection_schedule-561
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Specify local file path information in docs
Neither
- https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/faq.md
nor
- https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/ics.md#configuration-variables
- https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/ics.md#local-file
specify where the file has to be stored.
1. Is "calendar.ics" meant to be the share `/config/calendar.ics`?
2. What would the `file:` parameter need to look like when the file is e.g. located in `/config/customfolder/calendar.ics`?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py`
Content:
```
1 import datetime
2 import logging
3 from pathlib import Path
4
5 import requests
6 from waste_collection_schedule import Collection # type: ignore[attr-defined]
7 from waste_collection_schedule.service.ICS import ICS
8 from waste_collection_schedule.service.ICS_v1 import ICS_v1
9
10 TITLE = "ICS"
11 DESCRIPTION = "Source for ICS based schedules."
12 URL = None
13 TEST_CASES = {
14 "Dortmund, Dudenstr. 5": {
15 "url": "https://www.edg.de/ical/kalender.ics?Strasse=Dudenstr.&Hausnummer=5&Erinnerung=-1&Abfallart=1,2,3,4"
16 },
17 "Leipzig, Sandgrubenweg 27": {
18 "url": "https://stadtreinigung-leipzig.de/wir-kommen-zu-ihnen/abfallkalender/ical.ics?position_nos=38296&name=Sandgrubenweg%2027"
19 },
20 "Ludwigsburg": {
21 "url": "https://www.avl-ludwigsburg.de/fileadmin/Files/Abfallkalender/ICS/Privat/Privat_{%Y}_Ossweil.ics"
22 },
23 "Esslingen, Bahnhof": {
24 "url": "https://api.abfall.io/?kh=DaA02103019b46345f1998698563DaAd&t=ics&s=1a862df26f6943997cef90233877a4fe"
25 },
26 "Test File": {
27 # Path is used here to allow to call the Source from any location.
28 # This is not required in a yaml configuration!
29 "file": str(Path(__file__).resolve().parents[1].joinpath("test/test.ics"))
30 },
31 "Test File (recurring)": {
32 # Path is used here to allow to call the Source from any location.
33 # This is not required in a yaml configuration!
34 "file": str(Path(__file__).resolve().parents[1].joinpath("test/recurring.ics"))
35 },
36 "München, Bahnstr. 11": {
37 "url": "https://www.awm-muenchen.de/entsorgen/abfuhrkalender?tx_awmabfuhrkalender_abfuhrkalender%5Bhausnummer%5D=11&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BB%5D=1%2F2%3BU&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BP%5D=1%2F2%3BG&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BR%5D=001%3BU&tx_awmabfuhrkalender_abfuhrkalender%5Bsection%5D=ics&tx_awmabfuhrkalender_abfuhrkalender%5Bsinglestandplatz%5D=false&tx_awmabfuhrkalender_abfuhrkalender%5Bstandplatzwahl%5D=true&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Bbio%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Bpapier%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Brestmuell%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstrasse%5D=bahnstr.&tx_awmabfuhrkalender_abfuhrkalender%5Byear%5D={%Y}",
38 "version": 1,
39 },
40 "Buxtehude, Am Berg": {
41 "url": "https://abfall.landkreis-stade.de/api_v2/collection_dates/1/ort/10/strasse/90/hausnummern/1/abfallarten/R02-R04-B02-D04-D12-P04-R12-R14-W0-R22-R24-R31/kalender.ics"
42 },
43 # "Hausmüllinfo: ASR Chemnitz": {
44 # "url": "https://asc.hausmuell.info/ics/ics.php",
45 # "method": "POST",
46 # "params": {
47 # "hidden_id_egebiet": 439087,
48 # "input_ort": "Chemnitz",
49 # "input_str": "Straße der Nationen",
50 # "input_hnr": 2,
51 # "hidden_send_btn": "ics",
52 # # "hiddenYear": 2021,
53 # "hidden_id_ort": 10,
54 # "hidden_id_ortsteil": 0,
55 # "hidden_id_str": 17814,
56 # "hidden_id_hnr": 5538100,
57 # "hidden_kalenderart": "privat",
58 # "showBinsBio": "on",
59 # "showBinsRest": "on",
60 # "showBinsRest_rc": "on",
61 # "showBinsPapier": "on",
62 # "showBinsOrganic": "on",
63 # "showBinsXmas": "on",
64 # "showBinsDsd": "on",
65 # "showBinsProb": "on",
66 # },
67 # "year_field": "hiddenYear",
68 # },
69 "Abfall Zollernalbkreis, Ebingen": {
70 "url": "https://www.abfallkalender-zak.de",
71 "params": {
72 "city": "2,3,4",
73 "street": "3",
74 "types[]": [
75 "restmuell",
76 "gelbersack",
77 "papiertonne",
78 "biomuell",
79 "gruenabfall",
80 "schadstoffsammlung",
81 "altpapiersammlung",
82 "schrottsammlung",
83 "weihnachtsbaeume",
84 "elektrosammlung",
85 ],
86 "go_ics": "Download",
87 },
88 "year_field": "year",
89 },
90 "Detmold": {
91 "url": "https://abfuhrkalender.detmold.de/icsmaker.php",
92 "method": "GET",
93 "params": {"strid": 338},
94 "year_field": "year",
95 },
96 "EAW Rheingau Taunus": {
97 "url": "https://www.eaw-rheingau-taunus.de/abfallsammlung/abfuhrtermine/feed.ics?tx_vierwdeaw_garbagecalendarics%5Baction%5D=ics&tx_vierwdeaw_garbagecalendarics%5Bcontroller%5D=GarbageCalendar&tx_vierwdeaw_garbagecalendarics%5Bstreet%5D=38",
98 "split_at": ",",
99 },
100 "Recollect, Ottawa": {
101 "url": "https://recollect.a.ssl.fastly.net/api/places/BCCDF30E-578B-11E4-AD38-5839C200407A/services/208/events.en.ics",
102 "split_at": "\\, [and ]*",
103 },
104 "Frankfurt am Main, Achenbachstrasse 3": {
105 "url": "https://www.fes-frankfurt.de/abfallkalender/QWNoZW5iYWNoc3RyLnwzfDYwNTk2.ics"
106 },
107 "Erlensee, Am Haspel": {
108 "url": "https://sperrmuell.erlensee.de/?type=reminder",
109 "method": "POST",
110 "params": {
111 "street": 8,
112 "eventType[]": [27, 23, 19, 20, 21, 24, 22, 25, 26],
113 "timeframe": 23,
114 "download": "ical",
115 },
116 },
117 }
118
119
120 HEADERS = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
121 _LOGGER = logging.getLogger(__name__)
122
123
124 class Source:
125 def __init__(
126 self,
127 url=None,
128 file=None,
129 offset=None,
130 params=None,
131 year_field=None,
132 method="GET",
133 regex=None,
134 split_at=None,
135 version=2,
136 verify_ssl=True,
137 ):
138 self._url = url
139 self._file = file
140 if bool(self._url is not None) == bool(self._file is not None):
141 raise RuntimeError("Specify either url or file")
142 if version == 1:
143 self._ics = ICS_v1(offset=offset, split_at=split_at, regex=regex)
144 else:
145 self._ics = ICS(offset=offset, split_at=split_at, regex=regex)
146 self._params = params
147 self._year_field = year_field # replace this field in params with current year
148 self._method = method # The method to send the params
149 self._verify_ssl = verify_ssl
150
151 def fetch(self):
152 if self._url is not None:
153 if "{%Y}" in self._url or self._year_field is not None:
154 # url contains wildcard or params contains year field
155 now = datetime.datetime.now()
156
157 # replace year in url
158 url = self._url.replace("{%Y}", str(now.year))
159
160 # replace year in params
161 if self._year_field is not None:
162 if self._params is None:
163 raise RuntimeError("year_field specified without params")
164 self._params[self._year_field] = str(now.year)
165
166 entries = self.fetch_url(url, self._params)
167
168 if now.month == 12:
169 # also get data for next year if we are already in december
170 url = self._url.replace("{%Y}", str(now.year + 1))
171 if self._year_field is not None:
172 self._params[self._year_field] = str(now.year + 1)
173
174 try:
175 entries.extend(self.fetch_url(url, self._params))
176 except Exception:
177 # ignore if fetch for next year fails
178 pass
179 return entries
180 else:
181 return self.fetch_url(self._url, self._params)
182 elif self._file is not None:
183 return self.fetch_file(self._file)
184
185 def fetch_url(self, url, params=None):
186 # get ics file
187 if self._method == "GET":
188 r = requests.get(
189 url, params=params, headers=HEADERS, verify=self._verify_ssl
190 )
191 elif self._method == "POST":
192 r = requests.post(
193 url, data=params, headers=HEADERS, verify=self._verify_ssl
194 )
195 else:
196 raise RuntimeError(
197 "Error: unknown method to fetch URL, use GET or POST; got {self._method}"
198 )
199 r.raise_for_status()
200
201 r.encoding = "utf-8" # requests doesn't guess the encoding correctly
202 return self._convert(r.text)
203
204 def fetch_file(self, file):
205 f = open(file)
206 return self._convert(f.read())
207
208 def _convert(self, data):
209 dates = self._ics.convert(data)
210
211 entries = []
212 for d in dates:
213 entries.append(Collection(d[0], d[1]))
214 return entries
215
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py
@@ -1,5 +1,6 @@
import datetime
import logging
+from os import getcwd
from pathlib import Path
import requests
@@ -202,7 +203,11 @@
return self._convert(r.text)
def fetch_file(self, file):
- f = open(file)
+ try:
+ f = open(file)
+ except FileNotFoundError as e:
+ _LOGGER.error(f"Working directory: '{getcwd()}'")
+ raise
return self._convert(f.read())
def _convert(self, data):
|
{"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py\n@@ -1,5 +1,6 @@\n import datetime\n import logging\n+from os import getcwd\n from pathlib import Path\n \n import requests\n@@ -202,7 +203,11 @@\n return self._convert(r.text)\n \n def fetch_file(self, file):\n- f = open(file)\n+ try:\n+ f = open(file)\n+ except FileNotFoundError as e:\n+ _LOGGER.error(f\"Working directory: '{getcwd()}'\")\n+ raise\n return self._convert(f.read())\n \n def _convert(self, data):\n", "issue": "Specify local file path information in docs\nNeither\r\n- https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/faq.md\r\n\r\nnor\r\n- https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/ics.md#configuration-variables\r\n- https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/ics.md#local-file\r\n\r\nspecify, where the file has to be stored.\r\n\r\n1. Is \"calendar.ics\" meant to be the share `/config/calendar.ics`?\r\n2. What would the `file:` parameter need to look like when the file is e. g. located in `/config/customfolder/calendar.ics` ?\n", "before_files": [{"content": "import datetime\nimport logging\nfrom pathlib import Path\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\nfrom waste_collection_schedule.service.ICS_v1 import ICS_v1\n\nTITLE = \"ICS\"\nDESCRIPTION = \"Source for ICS based schedules.\"\nURL = None\nTEST_CASES = {\n \"Dortmund, Dudenstr. 5\": {\n \"url\": \"https://www.edg.de/ical/kalender.ics?Strasse=Dudenstr.&Hausnummer=5&Erinnerung=-1&Abfallart=1,2,3,4\"\n },\n \"Leipzig, Sandgrubenweg 27\": {\n \"url\": \"https://stadtreinigung-leipzig.de/wir-kommen-zu-ihnen/abfallkalender/ical.ics?position_nos=38296&name=Sandgrubenweg%2027\"\n },\n \"Ludwigsburg\": {\n \"url\": \"https://www.avl-ludwigsburg.de/fileadmin/Files/Abfallkalender/ICS/Privat/Privat_{%Y}_Ossweil.ics\"\n },\n \"Esslingen, Bahnhof\": {\n \"url\": \"https://api.abfall.io/?kh=DaA02103019b46345f1998698563DaAd&t=ics&s=1a862df26f6943997cef90233877a4fe\"\n },\n \"Test File\": {\n # Path is used here to allow to call the Source from any location.\n # This is not required in a yaml configuration!\n \"file\": str(Path(__file__).resolve().parents[1].joinpath(\"test/test.ics\"))\n },\n \"Test File (recurring)\": {\n # Path is used here to allow to call the Source from any location.\n # This is not required in a yaml configuration!\n \"file\": str(Path(__file__).resolve().parents[1].joinpath(\"test/recurring.ics\"))\n },\n \"M\u00fcnchen, Bahnstr. 
11\": {\n \"url\": \"https://www.awm-muenchen.de/entsorgen/abfuhrkalender?tx_awmabfuhrkalender_abfuhrkalender%5Bhausnummer%5D=11&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BB%5D=1%2F2%3BU&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BP%5D=1%2F2%3BG&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BR%5D=001%3BU&tx_awmabfuhrkalender_abfuhrkalender%5Bsection%5D=ics&tx_awmabfuhrkalender_abfuhrkalender%5Bsinglestandplatz%5D=false&tx_awmabfuhrkalender_abfuhrkalender%5Bstandplatzwahl%5D=true&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Bbio%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Bpapier%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Brestmuell%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstrasse%5D=bahnstr.&tx_awmabfuhrkalender_abfuhrkalender%5Byear%5D={%Y}\",\n \"version\": 1,\n },\n \"Buxtehude, Am Berg\": {\n \"url\": \"https://abfall.landkreis-stade.de/api_v2/collection_dates/1/ort/10/strasse/90/hausnummern/1/abfallarten/R02-R04-B02-D04-D12-P04-R12-R14-W0-R22-R24-R31/kalender.ics\"\n },\n # \"Hausm\u00fcllinfo: ASR Chemnitz\": {\n # \"url\": \"https://asc.hausmuell.info/ics/ics.php\",\n # \"method\": \"POST\",\n # \"params\": {\n # \"hidden_id_egebiet\": 439087,\n # \"input_ort\": \"Chemnitz\",\n # \"input_str\": \"Stra\u00dfe der Nationen\",\n # \"input_hnr\": 2,\n # \"hidden_send_btn\": \"ics\",\n # # \"hiddenYear\": 2021,\n # \"hidden_id_ort\": 10,\n # \"hidden_id_ortsteil\": 0,\n # \"hidden_id_str\": 17814,\n # \"hidden_id_hnr\": 5538100,\n # \"hidden_kalenderart\": \"privat\",\n # \"showBinsBio\": \"on\",\n # \"showBinsRest\": \"on\",\n # \"showBinsRest_rc\": \"on\",\n # \"showBinsPapier\": \"on\",\n # \"showBinsOrganic\": \"on\",\n # \"showBinsXmas\": \"on\",\n # \"showBinsDsd\": \"on\",\n # \"showBinsProb\": \"on\",\n # },\n # \"year_field\": \"hiddenYear\",\n # },\n \"Abfall Zollernalbkreis, Ebingen\": {\n \"url\": \"https://www.abfallkalender-zak.de\",\n \"params\": {\n \"city\": \"2,3,4\",\n \"street\": \"3\",\n \"types[]\": [\n \"restmuell\",\n \"gelbersack\",\n \"papiertonne\",\n \"biomuell\",\n \"gruenabfall\",\n \"schadstoffsammlung\",\n \"altpapiersammlung\",\n \"schrottsammlung\",\n \"weihnachtsbaeume\",\n \"elektrosammlung\",\n ],\n \"go_ics\": \"Download\",\n },\n \"year_field\": \"year\",\n },\n \"Detmold\": {\n \"url\": \"https://abfuhrkalender.detmold.de/icsmaker.php\",\n \"method\": \"GET\",\n \"params\": {\"strid\": 338},\n \"year_field\": \"year\",\n },\n \"EAW Rheingau Taunus\": {\n \"url\": \"https://www.eaw-rheingau-taunus.de/abfallsammlung/abfuhrtermine/feed.ics?tx_vierwdeaw_garbagecalendarics%5Baction%5D=ics&tx_vierwdeaw_garbagecalendarics%5Bcontroller%5D=GarbageCalendar&tx_vierwdeaw_garbagecalendarics%5Bstreet%5D=38\",\n \"split_at\": \",\",\n },\n \"Recollect, Ottawa\": {\n \"url\": \"https://recollect.a.ssl.fastly.net/api/places/BCCDF30E-578B-11E4-AD38-5839C200407A/services/208/events.en.ics\",\n \"split_at\": \"\\\\, [and ]*\",\n },\n \"Frankfurt am Main, Achenbachstrasse 3\": {\n \"url\": \"https://www.fes-frankfurt.de/abfallkalender/QWNoZW5iYWNoc3RyLnwzfDYwNTk2.ics\"\n },\n \"Erlensee, Am Haspel\": {\n \"url\": \"https://sperrmuell.erlensee.de/?type=reminder\",\n \"method\": \"POST\",\n \"params\": {\n \"street\": 8,\n \"eventType[]\": [27, 23, 19, 20, 21, 24, 22, 25, 26],\n \"timeframe\": 23,\n \"download\": \"ical\",\n },\n },\n}\n\n\nHEADERS = {\"user-agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64)\"}\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Source:\n 
def __init__(\n self,\n url=None,\n file=None,\n offset=None,\n params=None,\n year_field=None,\n method=\"GET\",\n regex=None,\n split_at=None,\n version=2,\n verify_ssl=True,\n ):\n self._url = url\n self._file = file\n if bool(self._url is not None) == bool(self._file is not None):\n raise RuntimeError(\"Specify either url or file\")\n if version == 1:\n self._ics = ICS_v1(offset=offset, split_at=split_at, regex=regex)\n else:\n self._ics = ICS(offset=offset, split_at=split_at, regex=regex)\n self._params = params\n self._year_field = year_field # replace this field in params with current year\n self._method = method # The method to send the params\n self._verify_ssl = verify_ssl\n\n def fetch(self):\n if self._url is not None:\n if \"{%Y}\" in self._url or self._year_field is not None:\n # url contains wildcard or params contains year field\n now = datetime.datetime.now()\n\n # replace year in url\n url = self._url.replace(\"{%Y}\", str(now.year))\n\n # replace year in params\n if self._year_field is not None:\n if self._params is None:\n raise RuntimeError(\"year_field specified without params\")\n self._params[self._year_field] = str(now.year)\n\n entries = self.fetch_url(url, self._params)\n\n if now.month == 12:\n # also get data for next year if we are already in december\n url = self._url.replace(\"{%Y}\", str(now.year + 1))\n if self._year_field is not None:\n self._params[self._year_field] = str(now.year + 1)\n\n try:\n entries.extend(self.fetch_url(url, self._params))\n except Exception:\n # ignore if fetch for next year fails\n pass\n return entries\n else:\n return self.fetch_url(self._url, self._params)\n elif self._file is not None:\n return self.fetch_file(self._file)\n\n def fetch_url(self, url, params=None):\n # get ics file\n if self._method == \"GET\":\n r = requests.get(\n url, params=params, headers=HEADERS, verify=self._verify_ssl\n )\n elif self._method == \"POST\":\n r = requests.post(\n url, data=params, headers=HEADERS, verify=self._verify_ssl\n )\n else:\n raise RuntimeError(\n \"Error: unknown method to fetch URL, use GET or POST; got {self._method}\"\n )\n r.raise_for_status()\n\n r.encoding = \"utf-8\" # requests doesn't guess the encoding correctly\n return self._convert(r.text)\n\n def fetch_file(self, file):\n f = open(file)\n return self._convert(f.read())\n\n def _convert(self, data):\n dates = self._ics.convert(data)\n\n entries = []\n for d in dates:\n entries.append(Collection(d[0], d[1]))\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py"}], "after_files": [{"content": "import datetime\nimport logging\nfrom os import getcwd\nfrom pathlib import Path\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\nfrom waste_collection_schedule.service.ICS_v1 import ICS_v1\n\nTITLE = \"ICS\"\nDESCRIPTION = \"Source for ICS based schedules.\"\nURL = None\nTEST_CASES = {\n \"Dortmund, Dudenstr. 
5\": {\n \"url\": \"https://www.edg.de/ical/kalender.ics?Strasse=Dudenstr.&Hausnummer=5&Erinnerung=-1&Abfallart=1,2,3,4\"\n },\n \"Leipzig, Sandgrubenweg 27\": {\n \"url\": \"https://stadtreinigung-leipzig.de/wir-kommen-zu-ihnen/abfallkalender/ical.ics?position_nos=38296&name=Sandgrubenweg%2027\"\n },\n \"Ludwigsburg\": {\n \"url\": \"https://www.avl-ludwigsburg.de/fileadmin/Files/Abfallkalender/ICS/Privat/Privat_{%Y}_Ossweil.ics\"\n },\n \"Esslingen, Bahnhof\": {\n \"url\": \"https://api.abfall.io/?kh=DaA02103019b46345f1998698563DaAd&t=ics&s=1a862df26f6943997cef90233877a4fe\"\n },\n \"Test File\": {\n # Path is used here to allow to call the Source from any location.\n # This is not required in a yaml configuration!\n \"file\": str(Path(__file__).resolve().parents[1].joinpath(\"test/test.ics\"))\n },\n \"Test File (recurring)\": {\n # Path is used here to allow to call the Source from any location.\n # This is not required in a yaml configuration!\n \"file\": str(Path(__file__).resolve().parents[1].joinpath(\"test/recurring.ics\"))\n },\n \"M\u00fcnchen, Bahnstr. 11\": {\n \"url\": \"https://www.awm-muenchen.de/entsorgen/abfuhrkalender?tx_awmabfuhrkalender_abfuhrkalender%5Bhausnummer%5D=11&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BB%5D=1%2F2%3BU&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BP%5D=1%2F2%3BG&tx_awmabfuhrkalender_abfuhrkalender%5Bleerungszyklus%5D%5BR%5D=001%3BU&tx_awmabfuhrkalender_abfuhrkalender%5Bsection%5D=ics&tx_awmabfuhrkalender_abfuhrkalender%5Bsinglestandplatz%5D=false&tx_awmabfuhrkalender_abfuhrkalender%5Bstandplatzwahl%5D=true&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Bbio%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Bpapier%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstellplatz%5D%5Brestmuell%5D=70024507&tx_awmabfuhrkalender_abfuhrkalender%5Bstrasse%5D=bahnstr.&tx_awmabfuhrkalender_abfuhrkalender%5Byear%5D={%Y}\",\n \"version\": 1,\n },\n \"Buxtehude, Am Berg\": {\n \"url\": \"https://abfall.landkreis-stade.de/api_v2/collection_dates/1/ort/10/strasse/90/hausnummern/1/abfallarten/R02-R04-B02-D04-D12-P04-R12-R14-W0-R22-R24-R31/kalender.ics\"\n },\n # \"Hausm\u00fcllinfo: ASR Chemnitz\": {\n # \"url\": \"https://asc.hausmuell.info/ics/ics.php\",\n # \"method\": \"POST\",\n # \"params\": {\n # \"hidden_id_egebiet\": 439087,\n # \"input_ort\": \"Chemnitz\",\n # \"input_str\": \"Stra\u00dfe der Nationen\",\n # \"input_hnr\": 2,\n # \"hidden_send_btn\": \"ics\",\n # # \"hiddenYear\": 2021,\n # \"hidden_id_ort\": 10,\n # \"hidden_id_ortsteil\": 0,\n # \"hidden_id_str\": 17814,\n # \"hidden_id_hnr\": 5538100,\n # \"hidden_kalenderart\": \"privat\",\n # \"showBinsBio\": \"on\",\n # \"showBinsRest\": \"on\",\n # \"showBinsRest_rc\": \"on\",\n # \"showBinsPapier\": \"on\",\n # \"showBinsOrganic\": \"on\",\n # \"showBinsXmas\": \"on\",\n # \"showBinsDsd\": \"on\",\n # \"showBinsProb\": \"on\",\n # },\n # \"year_field\": \"hiddenYear\",\n # },\n \"Abfall Zollernalbkreis, Ebingen\": {\n \"url\": \"https://www.abfallkalender-zak.de\",\n \"params\": {\n \"city\": \"2,3,4\",\n \"street\": \"3\",\n \"types[]\": [\n \"restmuell\",\n \"gelbersack\",\n \"papiertonne\",\n \"biomuell\",\n \"gruenabfall\",\n \"schadstoffsammlung\",\n \"altpapiersammlung\",\n \"schrottsammlung\",\n \"weihnachtsbaeume\",\n \"elektrosammlung\",\n ],\n \"go_ics\": \"Download\",\n },\n \"year_field\": \"year\",\n },\n \"Detmold\": {\n \"url\": \"https://abfuhrkalender.detmold.de/icsmaker.php\",\n \"method\": \"GET\",\n \"params\": {\"strid\": 
338},\n \"year_field\": \"year\",\n },\n \"EAW Rheingau Taunus\": {\n \"url\": \"https://www.eaw-rheingau-taunus.de/abfallsammlung/abfuhrtermine/feed.ics?tx_vierwdeaw_garbagecalendarics%5Baction%5D=ics&tx_vierwdeaw_garbagecalendarics%5Bcontroller%5D=GarbageCalendar&tx_vierwdeaw_garbagecalendarics%5Bstreet%5D=38\",\n \"split_at\": \",\",\n },\n \"Recollect, Ottawa\": {\n \"url\": \"https://recollect.a.ssl.fastly.net/api/places/BCCDF30E-578B-11E4-AD38-5839C200407A/services/208/events.en.ics\",\n \"split_at\": \"\\\\, [and ]*\",\n },\n \"Frankfurt am Main, Achenbachstrasse 3\": {\n \"url\": \"https://www.fes-frankfurt.de/abfallkalender/QWNoZW5iYWNoc3RyLnwzfDYwNTk2.ics\"\n },\n \"Erlensee, Am Haspel\": {\n \"url\": \"https://sperrmuell.erlensee.de/?type=reminder\",\n \"method\": \"POST\",\n \"params\": {\n \"street\": 8,\n \"eventType[]\": [27, 23, 19, 20, 21, 24, 22, 25, 26],\n \"timeframe\": 23,\n \"download\": \"ical\",\n },\n },\n}\n\n\nHEADERS = {\"user-agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64)\"}\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Source:\n def __init__(\n self,\n url=None,\n file=None,\n offset=None,\n params=None,\n year_field=None,\n method=\"GET\",\n regex=None,\n split_at=None,\n version=2,\n verify_ssl=True,\n ):\n self._url = url\n self._file = file\n if bool(self._url is not None) == bool(self._file is not None):\n raise RuntimeError(\"Specify either url or file\")\n if version == 1:\n self._ics = ICS_v1(offset=offset, split_at=split_at, regex=regex)\n else:\n self._ics = ICS(offset=offset, split_at=split_at, regex=regex)\n self._params = params\n self._year_field = year_field # replace this field in params with current year\n self._method = method # The method to send the params\n self._verify_ssl = verify_ssl\n\n def fetch(self):\n if self._url is not None:\n if \"{%Y}\" in self._url or self._year_field is not None:\n # url contains wildcard or params contains year field\n now = datetime.datetime.now()\n\n # replace year in url\n url = self._url.replace(\"{%Y}\", str(now.year))\n\n # replace year in params\n if self._year_field is not None:\n if self._params is None:\n raise RuntimeError(\"year_field specified without params\")\n self._params[self._year_field] = str(now.year)\n\n entries = self.fetch_url(url, self._params)\n\n if now.month == 12:\n # also get data for next year if we are already in december\n url = self._url.replace(\"{%Y}\", str(now.year + 1))\n if self._year_field is not None:\n self._params[self._year_field] = str(now.year + 1)\n\n try:\n entries.extend(self.fetch_url(url, self._params))\n except Exception:\n # ignore if fetch for next year fails\n pass\n return entries\n else:\n return self.fetch_url(self._url, self._params)\n elif self._file is not None:\n return self.fetch_file(self._file)\n\n def fetch_url(self, url, params=None):\n # get ics file\n if self._method == \"GET\":\n r = requests.get(\n url, params=params, headers=HEADERS, verify=self._verify_ssl\n )\n elif self._method == \"POST\":\n r = requests.post(\n url, data=params, headers=HEADERS, verify=self._verify_ssl\n )\n else:\n raise RuntimeError(\n \"Error: unknown method to fetch URL, use GET or POST; got {self._method}\"\n )\n r.raise_for_status()\n\n r.encoding = \"utf-8\" # requests doesn't guess the encoding correctly\n return self._convert(r.text)\n\n def fetch_file(self, file):\n try:\n f = open(file)\n except FileNotFoundError as e:\n _LOGGER.error(f\"Working directory: '{getcwd()}'\")\n raise\n return self._convert(f.read())\n\n def _convert(self, data):\n 
dates = self._ics.convert(data)\n\n entries = []\n for d in dates:\n entries.append(Collection(d[0], d[1]))\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/ics.py"}]}
| 3,633 | 193 |
gh_patches_debug_135
|
rasdani/github-patches
|
git_diff
|
encode__httpx-691
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Version 0.10.0
Let's get squared away what we need for Version 0.10.0.
The key here is we need to make sure we've sorted out any API that'd block us from reintroducing the sync API, so...
* [x] Let's drop `.read()` from `Request` - It's not documented anywhere, not *actually* required by users, and the content is available on `request.stream` if absolutely needed. #679
* [x] We need `Response.read()` and `Response.close()` to be `Response.aread()` and `Response.aclose()`. The only point where this is relevant is users performing conditional reads inside a `with httpx.stream(method, url) as response` block. #674
* [x] We ought to have `Client.close()` become `Client.aclose()` for consistency. #675
* [x] Good point to get #617 in.
* [x] We'll need to change `response.next()`. We could *either* underspecify the return type, and allow it to be either sync or async depending on the context, *or* we use `response.anext()` and `response.next()`. #676
* [ ] ~Good point to address~ #656.
* [x] *Potentially* we could introduce an `httpx.AsyncClient` synonym for `httpx.Client`, and advise our users to switch towards that usage, so that there are no breaking changes for them once we fully reintroduce a sync API all the way through. #680
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `httpx/__version__.py`
Content:
```
1 __title__ = "httpx"
2 __description__ = "A next generation HTTP client, for Python 3."
3 __version__ = "0.9.5"
4
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/httpx/__version__.py b/httpx/__version__.py
--- a/httpx/__version__.py
+++ b/httpx/__version__.py
@@ -1,3 +1,3 @@
__title__ = "httpx"
__description__ = "A next generation HTTP client, for Python 3."
-__version__ = "0.9.5"
+__version__ = "0.10.0"
|
{"golden_diff": "diff --git a/httpx/__version__.py b/httpx/__version__.py\n--- a/httpx/__version__.py\n+++ b/httpx/__version__.py\n@@ -1,3 +1,3 @@\n __title__ = \"httpx\"\n __description__ = \"A next generation HTTP client, for Python 3.\"\n-__version__ = \"0.9.5\"\n+__version__ = \"0.10.0\"\n", "issue": "Version 0.10.0\nLet's get squared away what we need for Version 0.10.0.\r\n\r\nThe key here is we need to make sure we've sorted out any API that'd block us from reintroducing the sync API, so...\r\n\r\n* [x] Let's drop `.read()` from `Request` - It's not documented anywhere, not *actually* required by users, and the content is available on `request.stream` if absolutely needed. #679\r\n* [x] We need `Response.read()` and `Response.close()` to be `Response.aread()` and `Response.aclose()`. The only point where this is relevant is users performing conditional reads inside a `with httpx.stream(method, url) as response` block. #674\r\n* [x] We ought to have `Client.close()` become `Client.aclose()` for consistency. #675\r\n* [x] Good point to get #617 in.\r\n* [x] We'll need to change `response.next()`. We could *either* underspecify the return type, and allow it to be either sync or async depending on the context, *or* we use `response.anext()` and `response.next()`. #676\r\n* [ ] ~Good point to address~ #656.\r\n* [x] *Potentially* we could introduce an `httpx.AsyncClient` synonm for `httpx.Client`, and advise our users to switch towards that usage, so that there's no breaking changes for them once we fully reintroduce a sync API all the way through. #680\n", "before_files": [{"content": "__title__ = \"httpx\"\n__description__ = \"A next generation HTTP client, for Python 3.\"\n__version__ = \"0.9.5\"\n", "path": "httpx/__version__.py"}], "after_files": [{"content": "__title__ = \"httpx\"\n__description__ = \"A next generation HTTP client, for Python 3.\"\n__version__ = \"0.10.0\"\n", "path": "httpx/__version__.py"}]}
| 644 | 95 |
gh_patches_debug_674
|
rasdani/github-patches
|
git_diff
|
pyodide__pyodide-3562
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error about `--user` and `--target` flag when installing xbuildenv
I sometimes get the following error while installing the xbuild environment:
```bash
$ pyodide build .
Downloading xbuild environment
Installing xbuild environment
stderr:
ERROR: Can not combine '--user' and '--target'
[notice] A new release of pip available: 22.3.1 -> 23.0
[notice] To update, run: /home/gitpod/.pyenv/versions/3.10.2/bin/python -m pip install --upgrade pip
```
It happens here, where the host site packages are installed:
https://github.com/pyodide/pyodide/blob/7cc1058358242a5a9012edbb8163d86a860a1a28/pyodide-build/pyodide_build/install_xbuildenv.py#L50-L57
I think we need to add the `--no-user` flag explicitly to prevent this error.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyodide-build/pyodide_build/install_xbuildenv.py`
Content:
```
1 import argparse
2 import json
3 import shutil
4 import subprocess
5 from pathlib import Path
6 from urllib.request import urlopen, urlretrieve
7
8 from .common import exit_with_stdio, get_make_flag, get_pyodide_root
9 from .create_pypa_index import create_pypa_index
10 from .logger import logger
11
12
13 def make_parser(parser: argparse.ArgumentParser) -> argparse.ArgumentParser:
14 parser.description = (
15 "Install xbuild env.\n\n"
16 "The installed environment is the same as the one that would result from\n"
17 "`PYODIDE_PACKAGES='scipy' make` except that it is much faster.\n"
18 "The goal is to enable out-of-tree builds for binary packages that depend\n"
19 "on numpy or scipy.\n"
20 "Note: this is a private endpoint that should not be used outside of the Pyodide Makefile."
21 )
22 parser.add_argument("--download", action="store_true", help="Download xbuild env")
23 parser.add_argument("xbuildenv", type=str, nargs=1)
24 return parser
25
26
27 def download_xbuildenv(version: str, xbuildenv_path: Path) -> None:
28 from shutil import rmtree, unpack_archive
29 from tempfile import NamedTemporaryFile
30
31 logger.info("Downloading xbuild environment")
32 rmtree(xbuildenv_path, ignore_errors=True)
33 with NamedTemporaryFile(suffix=".tar") as f:
34 urlretrieve(
35 f"https://github.com/pyodide/pyodide/releases/download/{version}/xbuildenv-{version}.tar.bz2",
36 f.name,
37 )
38 unpack_archive(f.name, xbuildenv_path)
39
40
41 def install_xbuildenv(version: str, xbuildenv_path: Path) -> None:
42 logger.info("Installing xbuild environment")
43 xbuildenv_path = xbuildenv_path / "xbuildenv"
44 pyodide_root = get_pyodide_root()
45 xbuildenv_root = xbuildenv_path / "pyodide-root"
46 host_site_packages = xbuildenv_root / Path(
47 get_make_flag("HOSTSITEPACKAGES")
48 ).relative_to(pyodide_root)
49 host_site_packages.mkdir(exist_ok=True, parents=True)
50 result = subprocess.run(
51 [
52 "pip",
53 "install",
54 "-t",
55 host_site_packages,
56 "-r",
57 xbuildenv_path / "requirements.txt",
58 ],
59 capture_output=True,
60 encoding="utf8",
61 )
62 if result.returncode != 0:
63 exit_with_stdio(result)
64 # Copy the site-packages-extras (coming from the cross-build-files meta.yaml
65 # key) over the site-packages directory with the newly installed packages.
66 shutil.copytree(
67 xbuildenv_path / "site-packages-extras", host_site_packages, dirs_exist_ok=True
68 )
69 cdn_base = f"https://cdn.jsdelivr.net/pyodide/v{version}/full/"
70 if (xbuildenv_root / "repodata.json").exists():
71 repodata_bytes = (xbuildenv_root / "repodata.json").read_bytes()
72 else:
73 repodata_url = cdn_base + "repodata.json"
74 with urlopen(repodata_url) as response:
75 repodata_bytes = response.read()
76 repodata = json.loads(repodata_bytes)
77 version = repodata["info"]["version"]
78 create_pypa_index(repodata["packages"], xbuildenv_root, cdn_base)
79
80
81 def main(args: argparse.Namespace) -> None:
82 from . import __version__
83
84 xbuildenv_path = Path(args.xbuildenv[0])
85 version = __version__
86 if args.download:
87 download_xbuildenv(version, xbuildenv_path)
88 install_xbuildenv(version, xbuildenv_path)
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pyodide-build/pyodide_build/install_xbuildenv.py b/pyodide-build/pyodide_build/install_xbuildenv.py
--- a/pyodide-build/pyodide_build/install_xbuildenv.py
+++ b/pyodide-build/pyodide_build/install_xbuildenv.py
@@ -51,6 +51,7 @@
[
"pip",
"install",
+ "--no-user",
"-t",
host_site_packages,
"-r",
|
{"golden_diff": "diff --git a/pyodide-build/pyodide_build/install_xbuildenv.py b/pyodide-build/pyodide_build/install_xbuildenv.py\n--- a/pyodide-build/pyodide_build/install_xbuildenv.py\n+++ b/pyodide-build/pyodide_build/install_xbuildenv.py\n@@ -51,6 +51,7 @@\n [\n \"pip\",\n \"install\",\n+ \"--no-user\",\n \"-t\",\n host_site_packages,\n \"-r\",\n", "issue": "Error about `--user` and `--target` flag when installing xbuildenv\nI sometimes get following error while installing xbuild environment:\r\n\r\n```bash\r\n$ pyodide build .\r\nDownloading xbuild environment\r\nInstalling xbuild environment\r\n stderr:\r\n ERROR: Can not combine '--user' and '--target'\r\n\r\n [notice] A new release of pip available: 22.3.1 -> 23.0\r\n [notice] To update, run: /home/gitpod/.pyenv/versions/3.10.2/bin/python -m pip install --upgrade pip\r\n```\r\n\r\nIt happens here, which installs host site packages:\r\n\r\nhttps://github.com/pyodide/pyodide/blob/7cc1058358242a5a9012edbb8163d86a860a1a28/pyodide-build/pyodide_build/install_xbuildenv.py#L50-L57\r\n\r\nI think we need to add `--no-user` flag explicitly to prevent this error.\n", "before_files": [{"content": "import argparse\nimport json\nimport shutil\nimport subprocess\nfrom pathlib import Path\nfrom urllib.request import urlopen, urlretrieve\n\nfrom .common import exit_with_stdio, get_make_flag, get_pyodide_root\nfrom .create_pypa_index import create_pypa_index\nfrom .logger import logger\n\n\ndef make_parser(parser: argparse.ArgumentParser) -> argparse.ArgumentParser:\n parser.description = (\n \"Install xbuild env.\\n\\n\"\n \"The installed environment is the same as the one that would result from\\n\"\n \"`PYODIDE_PACKAGES='scipy' make` except that it is much faster.\\n\"\n \"The goal is to enable out-of-tree builds for binary packages that depend\\n\"\n \"on numpy or scipy.\\n\"\n \"Note: this is a private endpoint that should not be used outside of the Pyodide Makefile.\"\n )\n parser.add_argument(\"--download\", action=\"store_true\", help=\"Download xbuild env\")\n parser.add_argument(\"xbuildenv\", type=str, nargs=1)\n return parser\n\n\ndef download_xbuildenv(version: str, xbuildenv_path: Path) -> None:\n from shutil import rmtree, unpack_archive\n from tempfile import NamedTemporaryFile\n\n logger.info(\"Downloading xbuild environment\")\n rmtree(xbuildenv_path, ignore_errors=True)\n with NamedTemporaryFile(suffix=\".tar\") as f:\n urlretrieve(\n f\"https://github.com/pyodide/pyodide/releases/download/{version}/xbuildenv-{version}.tar.bz2\",\n f.name,\n )\n unpack_archive(f.name, xbuildenv_path)\n\n\ndef install_xbuildenv(version: str, xbuildenv_path: Path) -> None:\n logger.info(\"Installing xbuild environment\")\n xbuildenv_path = xbuildenv_path / \"xbuildenv\"\n pyodide_root = get_pyodide_root()\n xbuildenv_root = xbuildenv_path / \"pyodide-root\"\n host_site_packages = xbuildenv_root / Path(\n get_make_flag(\"HOSTSITEPACKAGES\")\n ).relative_to(pyodide_root)\n host_site_packages.mkdir(exist_ok=True, parents=True)\n result = subprocess.run(\n [\n \"pip\",\n \"install\",\n \"-t\",\n host_site_packages,\n \"-r\",\n xbuildenv_path / \"requirements.txt\",\n ],\n capture_output=True,\n encoding=\"utf8\",\n )\n if result.returncode != 0:\n exit_with_stdio(result)\n # Copy the site-packages-extras (coming from the cross-build-files meta.yaml\n # key) over the site-packages directory with the newly installed packages.\n shutil.copytree(\n xbuildenv_path / \"site-packages-extras\", host_site_packages, dirs_exist_ok=True\n )\n cdn_base = 
f\"https://cdn.jsdelivr.net/pyodide/v{version}/full/\"\n if (xbuildenv_root / \"repodata.json\").exists():\n repodata_bytes = (xbuildenv_root / \"repodata.json\").read_bytes()\n else:\n repodata_url = cdn_base + \"repodata.json\"\n with urlopen(repodata_url) as response:\n repodata_bytes = response.read()\n repodata = json.loads(repodata_bytes)\n version = repodata[\"info\"][\"version\"]\n create_pypa_index(repodata[\"packages\"], xbuildenv_root, cdn_base)\n\n\ndef main(args: argparse.Namespace) -> None:\n from . import __version__\n\n xbuildenv_path = Path(args.xbuildenv[0])\n version = __version__\n if args.download:\n download_xbuildenv(version, xbuildenv_path)\n install_xbuildenv(version, xbuildenv_path)\n", "path": "pyodide-build/pyodide_build/install_xbuildenv.py"}], "after_files": [{"content": "import argparse\nimport json\nimport shutil\nimport subprocess\nfrom pathlib import Path\nfrom urllib.request import urlopen, urlretrieve\n\nfrom .common import exit_with_stdio, get_make_flag, get_pyodide_root\nfrom .create_pypa_index import create_pypa_index\nfrom .logger import logger\n\n\ndef make_parser(parser: argparse.ArgumentParser) -> argparse.ArgumentParser:\n parser.description = (\n \"Install xbuild env.\\n\\n\"\n \"The installed environment is the same as the one that would result from\\n\"\n \"`PYODIDE_PACKAGES='scipy' make` except that it is much faster.\\n\"\n \"The goal is to enable out-of-tree builds for binary packages that depend\\n\"\n \"on numpy or scipy.\\n\"\n \"Note: this is a private endpoint that should not be used outside of the Pyodide Makefile.\"\n )\n parser.add_argument(\"--download\", action=\"store_true\", help=\"Download xbuild env\")\n parser.add_argument(\"xbuildenv\", type=str, nargs=1)\n return parser\n\n\ndef download_xbuildenv(version: str, xbuildenv_path: Path) -> None:\n from shutil import rmtree, unpack_archive\n from tempfile import NamedTemporaryFile\n\n logger.info(\"Downloading xbuild environment\")\n rmtree(xbuildenv_path, ignore_errors=True)\n with NamedTemporaryFile(suffix=\".tar\") as f:\n urlretrieve(\n f\"https://github.com/pyodide/pyodide/releases/download/{version}/xbuildenv-{version}.tar.bz2\",\n f.name,\n )\n unpack_archive(f.name, xbuildenv_path)\n\n\ndef install_xbuildenv(version: str, xbuildenv_path: Path) -> None:\n logger.info(\"Installing xbuild environment\")\n xbuildenv_path = xbuildenv_path / \"xbuildenv\"\n pyodide_root = get_pyodide_root()\n xbuildenv_root = xbuildenv_path / \"pyodide-root\"\n host_site_packages = xbuildenv_root / Path(\n get_make_flag(\"HOSTSITEPACKAGES\")\n ).relative_to(pyodide_root)\n host_site_packages.mkdir(exist_ok=True, parents=True)\n result = subprocess.run(\n [\n \"pip\",\n \"install\",\n \"--no-user\",\n \"-t\",\n host_site_packages,\n \"-r\",\n xbuildenv_path / \"requirements.txt\",\n ],\n capture_output=True,\n encoding=\"utf8\",\n )\n if result.returncode != 0:\n exit_with_stdio(result)\n # Copy the site-packages-extras (coming from the cross-build-files meta.yaml\n # key) over the site-packages directory with the newly installed packages.\n shutil.copytree(\n xbuildenv_path / \"site-packages-extras\", host_site_packages, dirs_exist_ok=True\n )\n cdn_base = f\"https://cdn.jsdelivr.net/pyodide/v{version}/full/\"\n if (xbuildenv_root / \"repodata.json\").exists():\n repodata_bytes = (xbuildenv_root / \"repodata.json\").read_bytes()\n else:\n repodata_url = cdn_base + \"repodata.json\"\n with urlopen(repodata_url) as response:\n repodata_bytes = response.read()\n repodata = 
json.loads(repodata_bytes)\n version = repodata[\"info\"][\"version\"]\n create_pypa_index(repodata[\"packages\"], xbuildenv_root, cdn_base)\n\n\ndef main(args: argparse.Namespace) -> None:\n from . import __version__\n\n xbuildenv_path = Path(args.xbuildenv[0])\n version = __version__\n if args.download:\n download_xbuildenv(version, xbuildenv_path)\n install_xbuildenv(version, xbuildenv_path)\n", "path": "pyodide-build/pyodide_build/install_xbuildenv.py"}]}
| 1,457 | 105 |
gh_patches_debug_7623
|
rasdani/github-patches
|
git_diff
|
deis__deis-2029
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
chore(*): bump CoreOS to 459.0.0
Only the rackspace image is missing (I don't know if this could be merged without that)
Change to controller scheduler for fleet v0.8.3
Since v0.8.2, fleet returns the proper HTTP status code.
In the line https://github.com/deis/deis/blob/master/controller/scheduler/coreos.py#L58, 204 must be changed to 201.
(found using the logspout PR in v257.0.0 with fleet v0.8.3)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `controller/scheduler/coreos.py`
Content:
```
1 import cStringIO
2 import base64
3 import copy
4 import json
5 import httplib
6 import paramiko
7 import socket
8 import re
9 import time
10
11
12 MATCH = re.compile(
13 '(?P<app>[a-z0-9-]+)_?(?P<version>v[0-9]+)?\.?(?P<c_type>[a-z-_]+)?.(?P<c_num>[0-9]+)')
14
15
16 class UHTTPConnection(httplib.HTTPConnection):
17 """Subclass of Python library HTTPConnection that uses a Unix domain socket.
18 """
19
20 def __init__(self, path):
21 httplib.HTTPConnection.__init__(self, 'localhost')
22 self.path = path
23
24 def connect(self):
25 sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
26 sock.connect(self.path)
27 self.sock = sock
28
29
30 class FleetHTTPClient(object):
31
32 def __init__(self, cluster_name, hosts, auth, domain, options):
33 self.name = cluster_name
34 self.hosts = hosts
35 self.auth = auth
36 self.domain = domain
37 self.options = options
38 # single global connection
39 self.conn = UHTTPConnection('/var/run/fleet.sock')
40
41 # scheduler setup / teardown
42
43 def setUp(self):
44 pass
45
46 def tearDown(self):
47 pass
48
49 # connection helpers
50
51 def _put_unit(self, name, body):
52 headers = {'Content-Type': 'application/json'}
53 self.conn.request('PUT', '/v1-alpha/units/{name}.service'.format(**locals()),
54 headers=headers, body=json.dumps(body))
55 resp = self.conn.getresponse()
56 data = resp.read()
57 if resp.status != 204:
58 errmsg = "Failed to create unit: {} {} - {}".format(
59 resp.status, resp.reason, data)
60 raise RuntimeError(errmsg)
61 return data
62
63 def _delete_unit(self, name):
64 headers = {'Content-Type': 'application/json'}
65 self.conn.request('DELETE', '/v1-alpha/units/{name}.service'.format(**locals()),
66 headers=headers)
67 resp = self.conn.getresponse()
68 data = resp.read()
69 if resp.status not in (404, 204):
70 errmsg = "Failed to delete unit: {} {} - {}".format(
71 resp.status, resp.reason, data)
72 raise RuntimeError(errmsg)
73 return data
74
75 def _get_state(self, name=None):
76 headers = {'Content-Type': 'application/json'}
77 url = '/v1-alpha/state'
78 if name:
79 url += '?unitName={name}.service'.format(**locals())
80 self.conn.request('GET', url, headers=headers)
81 resp = self.conn.getresponse()
82 data = resp.read()
83 if resp.status not in (200,):
84 errmsg = "Failed to retrieve state: {} {} - {}".format(
85 resp.status, resp.reason, data)
86 raise RuntimeError(errmsg)
87 return json.loads(data)
88
89 def _get_machines(self):
90 headers = {'Content-Type': 'application/json'}
91 url = '/v1-alpha/machines'
92 self.conn.request('GET', url, headers=headers)
93 resp = self.conn.getresponse()
94 data = resp.read()
95 if resp.status not in (200,):
96 errmsg = "Failed to retrieve machines: {} {} - {}".format(
97 resp.status, resp.reason, data)
98 raise RuntimeError(errmsg)
99 return json.loads(data)
100
101 # container api
102
103 def create(self, name, image, command='', template=None, **kwargs):
104 """Create a container"""
105 self._create_container(name, image, command,
106 template or copy.deepcopy(CONTAINER_TEMPLATE), **kwargs)
107
108 def _create_container(self, name, image, command, unit, **kwargs):
109 l = locals().copy()
110 l.update(re.match(MATCH, name).groupdict())
111 # prepare memory limit for the container type
112 mem = kwargs.get('memory', {}).get(l['c_type'], None)
113 if mem:
114 l.update({'memory': '-m {}'.format(mem.lower())})
115 else:
116 l.update({'memory': ''})
117 # prepare memory limit for the container type
118 cpu = kwargs.get('cpu', {}).get(l['c_type'], None)
119 if cpu:
120 l.update({'cpu': '-c {}'.format(cpu)})
121 else:
122 l.update({'cpu': ''})
123 # should a special entrypoint be used
124 entrypoint = kwargs.get('entrypoint')
125 if entrypoint:
126 l.update({'entrypoint': '{}'.format(entrypoint)})
127 # construct unit from template
128 for f in unit:
129 f['value'] = f['value'].format(**l)
130 # prepare tags only if one was provided
131 tags = kwargs.get('tags', {})
132 if tags:
133 tagset = ' '.join(['"{}={}"'.format(k, v) for k, v in tags.items()])
134 unit.append({"section": "X-Fleet", "name": "MachineMetadata",
135 "value": tagset})
136 # post unit to fleet
137 self._put_unit(name, {"desiredState": "launched", "options": unit})
138
139 def start(self, name):
140 """Start a container"""
141 self._wait_for_container(name)
142
143 def _wait_for_container(self, name):
144 # we bump to 20 minutes here to match the timeout on the router and in the app unit files
145 for _ in range(1200):
146 states = self._get_state(name)
147 if states and len(states.get('states', [])) == 1:
148 state = states.get('states')[0]
149 subState = state.get('systemdSubState')
150 if subState == 'running' or subState == 'exited':
151 break
152 elif subState == 'failed':
153 raise RuntimeError('container failed to start')
154 time.sleep(1)
155 else:
156 raise RuntimeError('container failed to start')
157
158 def _wait_for_destroy(self, name):
159 for _ in range(30):
160 states = self._get_state(name)
161 if not states:
162 break
163 time.sleep(1)
164 else:
165 raise RuntimeError('timeout on container destroy')
166
167 def stop(self, name):
168 """Stop a container"""
169 raise NotImplementedError
170
171 def destroy(self, name):
172 """Destroy a container"""
173 # call all destroy functions, ignoring any errors
174 try:
175 self._destroy_container(name)
176 except:
177 pass
178 self._wait_for_destroy(name)
179
180 def _destroy_container(self, name):
181 return self._delete_unit(name)
182
183 def run(self, name, image, entrypoint, command): # noqa
184 """Run a one-off command"""
185 self._create_container(name, image, command, copy.deepcopy(RUN_TEMPLATE),
186 entrypoint=entrypoint)
187
188 # wait for the container to get scheduled
189 for _ in range(30):
190 states = self._get_state(name)
191 if states and len(states.get('states', [])) == 1:
192 state = states.get('states')[0]
193 break
194 time.sleep(1)
195 else:
196 raise RuntimeError('container did not report state')
197 machineID = state.get('machineID')
198
199 # find the machine
200 machines = self._get_machines()
201 if not machines:
202 raise RuntimeError('no available hosts to run command')
203
204 # find the machine's primaryIP
205 primaryIP = None
206 for m in machines.get('machines', []):
207 if m['id'] == machineID:
208 primaryIP = m['primaryIP']
209 if not primaryIP:
210 raise RuntimeError('could not find host')
211
212 # prepare ssh key
213 file_obj = cStringIO.StringIO(base64.b64decode(self.auth))
214 pkey = paramiko.RSAKey(file_obj=file_obj)
215
216 # grab output via docker logs over SSH
217 ssh = paramiko.SSHClient()
218 ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
219 ssh.connect(primaryIP, username="core", pkey=pkey)
220 # share a transport
221 tran = ssh.get_transport()
222
223 def _do_ssh(cmd):
224 chan = tran.open_session()
225 # get a pty so stdout/stderr look right
226 chan.get_pty()
227 out = chan.makefile()
228 chan.exec_command(cmd)
229 rc, output = chan.recv_exit_status(), out.read()
230 return rc, output
231
232 # wait for container to start
233 for _ in range(1200):
234 rc, _ = _do_ssh('docker inspect {name}'.format(**locals()))
235 if rc == 0:
236 break
237 time.sleep(1)
238 else:
239 raise RuntimeError('container failed to start on host')
240
241 # wait for container to complete
242 for _ in range(1200):
243 _rc, _output = _do_ssh('docker inspect {name}'.format(**locals()))
244 if _rc != 0:
245 raise RuntimeError('failed to inspect container')
246 _container = json.loads(_output)
247 finished_at = _container[0]["State"]["FinishedAt"]
248 if not finished_at.startswith('0001'):
249 break
250 time.sleep(1)
251 else:
252 raise RuntimeError('container timed out')
253
254 # gather container output
255 _rc, output = _do_ssh('docker logs {name}'.format(**locals()))
256 if _rc != 0:
257 raise RuntimeError('could not attach to container')
258
259 # determine container exit code
260 _rc, _output = _do_ssh('docker inspect {name}'.format(**locals()))
261 if _rc != 0:
262 raise RuntimeError('could not determine exit code')
263 container = json.loads(_output)
264 rc = container[0]["State"]["ExitCode"]
265
266 # cleanup
267 self._destroy_container(name)
268 self._wait_for_destroy(name)
269
270 # return rc and output
271 return rc, output
272
273 def attach(self, name):
274 """
275 Attach to a job's stdin, stdout and stderr
276 """
277 raise NotImplementedError
278
279 SchedulerClient = FleetHTTPClient
280
281
282 CONTAINER_TEMPLATE = [
283 {"section": "Unit", "name": "Description", "value": "{name}"},
284 {"section": "Service", "name": "ExecStartPre", "value": '''/bin/sh -c "IMAGE=$(etcdctl get /deis/registry/host 2>&1):$(etcdctl get /deis/registry/port 2>&1)/{image}; docker pull $IMAGE"'''}, # noqa
285 {"section": "Service", "name": "ExecStartPre", "value": '''/bin/sh -c "docker inspect {name} >/dev/null 2>&1 && docker rm -f {name} || true"'''}, # noqa
286 {"section": "Service", "name": "ExecStart", "value": '''/bin/sh -c "IMAGE=$(etcdctl get /deis/registry/host 2>&1):$(etcdctl get /deis/registry/port 2>&1)/{image}; port=$(docker inspect -f '{{{{range $k, $v := .ContainerConfig.ExposedPorts }}}}{{{{$k}}}}{{{{end}}}}' $IMAGE | cut -d/ -f1) ; docker run --name {name} {memory} {cpu} -P -e PORT=$port $IMAGE {command}"'''}, # noqa
287 {"section": "Service", "name": "ExecStop", "value": '''/usr/bin/docker rm -f {name}'''},
288 {"section": "Service", "name": "TimeoutStartSec", "value": "20m"},
289 {"section": "Service", "name": "RestartSec", "value": "5"},
290 {"section": "Service", "name": "Restart", "value": "on-failure"},
291 ]
292
293
294 RUN_TEMPLATE = [
295 {"section": "Unit", "name": "Description", "value": "{name} admin command"},
296 {"section": "Service", "name": "ExecStartPre", "value": '''/bin/sh -c "IMAGE=$(etcdctl get /deis/registry/host 2>&1):$(etcdctl get /deis/registry/port 2>&1)/{image}; docker pull $IMAGE"'''}, # noqa
297 {"section": "Service", "name": "ExecStartPre", "value": '''/bin/sh -c "docker inspect {name} >/dev/null 2>&1 && docker rm -f {name} || true"'''}, # noqa
298 {"section": "Service", "name": "ExecStart", "value": '''/bin/sh -c "IMAGE=$(etcdctl get /deis/registry/host 2>&1):$(etcdctl get /deis/registry/port 2>&1)/{image}; docker run --name {name} --entrypoint={entrypoint} -a stdout -a stderr $IMAGE {command}"'''}, # noqa
299 {"section": "Service", "name": "TimeoutStartSec", "value": "20m"},
300 ]
301
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/controller/scheduler/coreos.py b/controller/scheduler/coreos.py
--- a/controller/scheduler/coreos.py
+++ b/controller/scheduler/coreos.py
@@ -54,7 +54,7 @@
headers=headers, body=json.dumps(body))
resp = self.conn.getresponse()
data = resp.read()
- if resp.status != 204:
+ if not 200 <= resp.status <= 299:
errmsg = "Failed to create unit: {} {} - {}".format(
resp.status, resp.reason, data)
raise RuntimeError(errmsg)
|
{"golden_diff": "diff --git a/controller/scheduler/coreos.py b/controller/scheduler/coreos.py\n--- a/controller/scheduler/coreos.py\n+++ b/controller/scheduler/coreos.py\n@@ -54,7 +54,7 @@\n headers=headers, body=json.dumps(body))\n resp = self.conn.getresponse()\n data = resp.read()\n- if resp.status != 204:\n+ if not 200 <= resp.status <= 299:\n errmsg = \"Failed to create unit: {} {} - {}\".format(\n resp.status, resp.reason, data)\n raise RuntimeError(errmsg)\n", "issue": "chore(*): bump CoreOS to 459.0.0\nOnly the rackspace image is missing (I don't know if this could be merged without that)\n\nChange to controller scheduler for fleet v0.8.3\nSince v0.8.2 fleet return the proper http status code.\nIn the line https://github.com/deis/deis/blob/master/controller/scheduler/coreos.py#L58 must change 204 to 201.\n\n(founded using the logspout PR in v257.0.0 with fleet v0.8.3)\n\n", "before_files": [{"content": "import cStringIO\nimport base64\nimport copy\nimport json\nimport httplib\nimport paramiko\nimport socket\nimport re\nimport time\n\n\nMATCH = re.compile(\n '(?P<app>[a-z0-9-]+)_?(?P<version>v[0-9]+)?\\.?(?P<c_type>[a-z-_]+)?.(?P<c_num>[0-9]+)')\n\n\nclass UHTTPConnection(httplib.HTTPConnection):\n \"\"\"Subclass of Python library HTTPConnection that uses a Unix domain socket.\n \"\"\"\n\n def __init__(self, path):\n httplib.HTTPConnection.__init__(self, 'localhost')\n self.path = path\n\n def connect(self):\n sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)\n sock.connect(self.path)\n self.sock = sock\n\n\nclass FleetHTTPClient(object):\n\n def __init__(self, cluster_name, hosts, auth, domain, options):\n self.name = cluster_name\n self.hosts = hosts\n self.auth = auth\n self.domain = domain\n self.options = options\n # single global connection\n self.conn = UHTTPConnection('/var/run/fleet.sock')\n\n # scheduler setup / teardown\n\n def setUp(self):\n pass\n\n def tearDown(self):\n pass\n\n # connection helpers\n\n def _put_unit(self, name, body):\n headers = {'Content-Type': 'application/json'}\n self.conn.request('PUT', '/v1-alpha/units/{name}.service'.format(**locals()),\n headers=headers, body=json.dumps(body))\n resp = self.conn.getresponse()\n data = resp.read()\n if resp.status != 204:\n errmsg = \"Failed to create unit: {} {} - {}\".format(\n resp.status, resp.reason, data)\n raise RuntimeError(errmsg)\n return data\n\n def _delete_unit(self, name):\n headers = {'Content-Type': 'application/json'}\n self.conn.request('DELETE', '/v1-alpha/units/{name}.service'.format(**locals()),\n headers=headers)\n resp = self.conn.getresponse()\n data = resp.read()\n if resp.status not in (404, 204):\n errmsg = \"Failed to delete unit: {} {} - {}\".format(\n resp.status, resp.reason, data)\n raise RuntimeError(errmsg)\n return data\n\n def _get_state(self, name=None):\n headers = {'Content-Type': 'application/json'}\n url = '/v1-alpha/state'\n if name:\n url += '?unitName={name}.service'.format(**locals())\n self.conn.request('GET', url, headers=headers)\n resp = self.conn.getresponse()\n data = resp.read()\n if resp.status not in (200,):\n errmsg = \"Failed to retrieve state: {} {} - {}\".format(\n resp.status, resp.reason, data)\n raise RuntimeError(errmsg)\n return json.loads(data)\n\n def _get_machines(self):\n headers = {'Content-Type': 'application/json'}\n url = '/v1-alpha/machines'\n self.conn.request('GET', url, headers=headers)\n resp = self.conn.getresponse()\n data = resp.read()\n if resp.status not in (200,):\n errmsg = \"Failed to retrieve machines: {} {} - 
{}\".format(\n resp.status, resp.reason, data)\n raise RuntimeError(errmsg)\n return json.loads(data)\n\n # container api\n\n def create(self, name, image, command='', template=None, **kwargs):\n \"\"\"Create a container\"\"\"\n self._create_container(name, image, command,\n template or copy.deepcopy(CONTAINER_TEMPLATE), **kwargs)\n\n def _create_container(self, name, image, command, unit, **kwargs):\n l = locals().copy()\n l.update(re.match(MATCH, name).groupdict())\n # prepare memory limit for the container type\n mem = kwargs.get('memory', {}).get(l['c_type'], None)\n if mem:\n l.update({'memory': '-m {}'.format(mem.lower())})\n else:\n l.update({'memory': ''})\n # prepare memory limit for the container type\n cpu = kwargs.get('cpu', {}).get(l['c_type'], None)\n if cpu:\n l.update({'cpu': '-c {}'.format(cpu)})\n else:\n l.update({'cpu': ''})\n # should a special entrypoint be used\n entrypoint = kwargs.get('entrypoint')\n if entrypoint:\n l.update({'entrypoint': '{}'.format(entrypoint)})\n # construct unit from template\n for f in unit:\n f['value'] = f['value'].format(**l)\n # prepare tags only if one was provided\n tags = kwargs.get('tags', {})\n if tags:\n tagset = ' '.join(['\"{}={}\"'.format(k, v) for k, v in tags.items()])\n unit.append({\"section\": \"X-Fleet\", \"name\": \"MachineMetadata\",\n \"value\": tagset})\n # post unit to fleet\n self._put_unit(name, {\"desiredState\": \"launched\", \"options\": unit})\n\n def start(self, name):\n \"\"\"Start a container\"\"\"\n self._wait_for_container(name)\n\n def _wait_for_container(self, name):\n # we bump to 20 minutes here to match the timeout on the router and in the app unit files\n for _ in range(1200):\n states = self._get_state(name)\n if states and len(states.get('states', [])) == 1:\n state = states.get('states')[0]\n subState = state.get('systemdSubState')\n if subState == 'running' or subState == 'exited':\n break\n elif subState == 'failed':\n raise RuntimeError('container failed to start')\n time.sleep(1)\n else:\n raise RuntimeError('container failed to start')\n\n def _wait_for_destroy(self, name):\n for _ in range(30):\n states = self._get_state(name)\n if not states:\n break\n time.sleep(1)\n else:\n raise RuntimeError('timeout on container destroy')\n\n def stop(self, name):\n \"\"\"Stop a container\"\"\"\n raise NotImplementedError\n\n def destroy(self, name):\n \"\"\"Destroy a container\"\"\"\n # call all destroy functions, ignoring any errors\n try:\n self._destroy_container(name)\n except:\n pass\n self._wait_for_destroy(name)\n\n def _destroy_container(self, name):\n return self._delete_unit(name)\n\n def run(self, name, image, entrypoint, command): # noqa\n \"\"\"Run a one-off command\"\"\"\n self._create_container(name, image, command, copy.deepcopy(RUN_TEMPLATE),\n entrypoint=entrypoint)\n\n # wait for the container to get scheduled\n for _ in range(30):\n states = self._get_state(name)\n if states and len(states.get('states', [])) == 1:\n state = states.get('states')[0]\n break\n time.sleep(1)\n else:\n raise RuntimeError('container did not report state')\n machineID = state.get('machineID')\n\n # find the machine\n machines = self._get_machines()\n if not machines:\n raise RuntimeError('no available hosts to run command')\n\n # find the machine's primaryIP\n primaryIP = None\n for m in machines.get('machines', []):\n if m['id'] == machineID:\n primaryIP = m['primaryIP']\n if not primaryIP:\n raise RuntimeError('could not find host')\n\n # prepare ssh key\n file_obj = 
cStringIO.StringIO(base64.b64decode(self.auth))\n pkey = paramiko.RSAKey(file_obj=file_obj)\n\n # grab output via docker logs over SSH\n ssh = paramiko.SSHClient()\n ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())\n ssh.connect(primaryIP, username=\"core\", pkey=pkey)\n # share a transport\n tran = ssh.get_transport()\n\n def _do_ssh(cmd):\n chan = tran.open_session()\n # get a pty so stdout/stderr look right\n chan.get_pty()\n out = chan.makefile()\n chan.exec_command(cmd)\n rc, output = chan.recv_exit_status(), out.read()\n return rc, output\n\n # wait for container to start\n for _ in range(1200):\n rc, _ = _do_ssh('docker inspect {name}'.format(**locals()))\n if rc == 0:\n break\n time.sleep(1)\n else:\n raise RuntimeError('container failed to start on host')\n\n # wait for container to complete\n for _ in range(1200):\n _rc, _output = _do_ssh('docker inspect {name}'.format(**locals()))\n if _rc != 0:\n raise RuntimeError('failed to inspect container')\n _container = json.loads(_output)\n finished_at = _container[0][\"State\"][\"FinishedAt\"]\n if not finished_at.startswith('0001'):\n break\n time.sleep(1)\n else:\n raise RuntimeError('container timed out')\n\n # gather container output\n _rc, output = _do_ssh('docker logs {name}'.format(**locals()))\n if _rc != 0:\n raise RuntimeError('could not attach to container')\n\n # determine container exit code\n _rc, _output = _do_ssh('docker inspect {name}'.format(**locals()))\n if _rc != 0:\n raise RuntimeError('could not determine exit code')\n container = json.loads(_output)\n rc = container[0][\"State\"][\"ExitCode\"]\n\n # cleanup\n self._destroy_container(name)\n self._wait_for_destroy(name)\n\n # return rc and output\n return rc, output\n\n def attach(self, name):\n \"\"\"\n Attach to a job's stdin, stdout and stderr\n \"\"\"\n raise NotImplementedError\n\nSchedulerClient = FleetHTTPClient\n\n\nCONTAINER_TEMPLATE = [\n {\"section\": \"Unit\", \"name\": \"Description\", \"value\": \"{name}\"},\n {\"section\": \"Service\", \"name\": \"ExecStartPre\", \"value\": '''/bin/sh -c \"IMAGE=$(etcdctl get /deis/registry/host 2>&1):$(etcdctl get /deis/registry/port 2>&1)/{image}; docker pull $IMAGE\"'''}, # noqa\n {\"section\": \"Service\", \"name\": \"ExecStartPre\", \"value\": '''/bin/sh -c \"docker inspect {name} >/dev/null 2>&1 && docker rm -f {name} || true\"'''}, # noqa\n {\"section\": \"Service\", \"name\": \"ExecStart\", \"value\": '''/bin/sh -c \"IMAGE=$(etcdctl get /deis/registry/host 2>&1):$(etcdctl get /deis/registry/port 2>&1)/{image}; port=$(docker inspect -f '{{{{range $k, $v := .ContainerConfig.ExposedPorts }}}}{{{{$k}}}}{{{{end}}}}' $IMAGE | cut -d/ -f1) ; docker run --name {name} {memory} {cpu} -P -e PORT=$port $IMAGE {command}\"'''}, # noqa\n {\"section\": \"Service\", \"name\": \"ExecStop\", \"value\": '''/usr/bin/docker rm -f {name}'''},\n {\"section\": \"Service\", \"name\": \"TimeoutStartSec\", \"value\": \"20m\"},\n {\"section\": \"Service\", \"name\": \"RestartSec\", \"value\": \"5\"},\n {\"section\": \"Service\", \"name\": \"Restart\", \"value\": \"on-failure\"},\n]\n\n\nRUN_TEMPLATE = [\n {\"section\": \"Unit\", \"name\": \"Description\", \"value\": \"{name} admin command\"},\n {\"section\": \"Service\", \"name\": \"ExecStartPre\", \"value\": '''/bin/sh -c \"IMAGE=$(etcdctl get /deis/registry/host 2>&1):$(etcdctl get /deis/registry/port 2>&1)/{image}; docker pull $IMAGE\"'''}, # noqa\n {\"section\": \"Service\", \"name\": \"ExecStartPre\", \"value\": '''/bin/sh -c \"docker inspect {name} >/dev/null 2>&1 && 
docker rm -f {name} || true\"'''}, # noqa\n {\"section\": \"Service\", \"name\": \"ExecStart\", \"value\": '''/bin/sh -c \"IMAGE=$(etcdctl get /deis/registry/host 2>&1):$(etcdctl get /deis/registry/port 2>&1)/{image}; docker run --name {name} --entrypoint={entrypoint} -a stdout -a stderr $IMAGE {command}\"'''}, # noqa\n {\"section\": \"Service\", \"name\": \"TimeoutStartSec\", \"value\": \"20m\"},\n]\n", "path": "controller/scheduler/coreos.py"}], "after_files": [{"content": "import cStringIO\nimport base64\nimport copy\nimport json\nimport httplib\nimport paramiko\nimport socket\nimport re\nimport time\n\n\nMATCH = re.compile(\n '(?P<app>[a-z0-9-]+)_?(?P<version>v[0-9]+)?\\.?(?P<c_type>[a-z-_]+)?.(?P<c_num>[0-9]+)')\n\n\nclass UHTTPConnection(httplib.HTTPConnection):\n \"\"\"Subclass of Python library HTTPConnection that uses a Unix domain socket.\n \"\"\"\n\n def __init__(self, path):\n httplib.HTTPConnection.__init__(self, 'localhost')\n self.path = path\n\n def connect(self):\n sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)\n sock.connect(self.path)\n self.sock = sock\n\n\nclass FleetHTTPClient(object):\n\n def __init__(self, cluster_name, hosts, auth, domain, options):\n self.name = cluster_name\n self.hosts = hosts\n self.auth = auth\n self.domain = domain\n self.options = options\n # single global connection\n self.conn = UHTTPConnection('/var/run/fleet.sock')\n\n # scheduler setup / teardown\n\n def setUp(self):\n pass\n\n def tearDown(self):\n pass\n\n # connection helpers\n\n def _put_unit(self, name, body):\n headers = {'Content-Type': 'application/json'}\n self.conn.request('PUT', '/v1-alpha/units/{name}.service'.format(**locals()),\n headers=headers, body=json.dumps(body))\n resp = self.conn.getresponse()\n data = resp.read()\n if not 200 <= resp.status <= 299:\n errmsg = \"Failed to create unit: {} {} - {}\".format(\n resp.status, resp.reason, data)\n raise RuntimeError(errmsg)\n return data\n\n def _delete_unit(self, name):\n headers = {'Content-Type': 'application/json'}\n self.conn.request('DELETE', '/v1-alpha/units/{name}.service'.format(**locals()),\n headers=headers)\n resp = self.conn.getresponse()\n data = resp.read()\n if resp.status not in (404, 204):\n errmsg = \"Failed to delete unit: {} {} - {}\".format(\n resp.status, resp.reason, data)\n raise RuntimeError(errmsg)\n return data\n\n def _get_state(self, name=None):\n headers = {'Content-Type': 'application/json'}\n url = '/v1-alpha/state'\n if name:\n url += '?unitName={name}.service'.format(**locals())\n self.conn.request('GET', url, headers=headers)\n resp = self.conn.getresponse()\n data = resp.read()\n if resp.status not in (200,):\n errmsg = \"Failed to retrieve state: {} {} - {}\".format(\n resp.status, resp.reason, data)\n raise RuntimeError(errmsg)\n return json.loads(data)\n\n def _get_machines(self):\n headers = {'Content-Type': 'application/json'}\n url = '/v1-alpha/machines'\n self.conn.request('GET', url, headers=headers)\n resp = self.conn.getresponse()\n data = resp.read()\n if resp.status not in (200,):\n errmsg = \"Failed to retrieve machines: {} {} - {}\".format(\n resp.status, resp.reason, data)\n raise RuntimeError(errmsg)\n return json.loads(data)\n\n # container api\n\n def create(self, name, image, command='', template=None, **kwargs):\n \"\"\"Create a container\"\"\"\n self._create_container(name, image, command,\n template or copy.deepcopy(CONTAINER_TEMPLATE), **kwargs)\n\n def _create_container(self, name, image, command, unit, **kwargs):\n l = locals().copy()\n 
l.update(re.match(MATCH, name).groupdict())\n # prepare memory limit for the container type\n mem = kwargs.get('memory', {}).get(l['c_type'], None)\n if mem:\n l.update({'memory': '-m {}'.format(mem.lower())})\n else:\n l.update({'memory': ''})\n # prepare memory limit for the container type\n cpu = kwargs.get('cpu', {}).get(l['c_type'], None)\n if cpu:\n l.update({'cpu': '-c {}'.format(cpu)})\n else:\n l.update({'cpu': ''})\n # should a special entrypoint be used\n entrypoint = kwargs.get('entrypoint')\n if entrypoint:\n l.update({'entrypoint': '{}'.format(entrypoint)})\n # construct unit from template\n for f in unit:\n f['value'] = f['value'].format(**l)\n # prepare tags only if one was provided\n tags = kwargs.get('tags', {})\n if tags:\n tagset = ' '.join(['\"{}={}\"'.format(k, v) for k, v in tags.items()])\n unit.append({\"section\": \"X-Fleet\", \"name\": \"MachineMetadata\",\n \"value\": tagset})\n # post unit to fleet\n self._put_unit(name, {\"desiredState\": \"launched\", \"options\": unit})\n\n def start(self, name):\n \"\"\"Start a container\"\"\"\n self._wait_for_container(name)\n\n def _wait_for_container(self, name):\n # we bump to 20 minutes here to match the timeout on the router and in the app unit files\n for _ in range(1200):\n states = self._get_state(name)\n if states and len(states.get('states', [])) == 1:\n state = states.get('states')[0]\n subState = state.get('systemdSubState')\n if subState == 'running' or subState == 'exited':\n break\n elif subState == 'failed':\n raise RuntimeError('container failed to start')\n time.sleep(1)\n else:\n raise RuntimeError('container failed to start')\n\n def _wait_for_destroy(self, name):\n for _ in range(30):\n states = self._get_state(name)\n if not states:\n break\n time.sleep(1)\n else:\n raise RuntimeError('timeout on container destroy')\n\n def stop(self, name):\n \"\"\"Stop a container\"\"\"\n raise NotImplementedError\n\n def destroy(self, name):\n \"\"\"Destroy a container\"\"\"\n # call all destroy functions, ignoring any errors\n try:\n self._destroy_container(name)\n except:\n pass\n self._wait_for_destroy(name)\n\n def _destroy_container(self, name):\n return self._delete_unit(name)\n\n def run(self, name, image, entrypoint, command): # noqa\n \"\"\"Run a one-off command\"\"\"\n self._create_container(name, image, command, copy.deepcopy(RUN_TEMPLATE),\n entrypoint=entrypoint)\n\n # wait for the container to get scheduled\n for _ in range(30):\n states = self._get_state(name)\n if states and len(states.get('states', [])) == 1:\n state = states.get('states')[0]\n break\n time.sleep(1)\n else:\n raise RuntimeError('container did not report state')\n machineID = state.get('machineID')\n\n # find the machine\n machines = self._get_machines()\n if not machines:\n raise RuntimeError('no available hosts to run command')\n\n # find the machine's primaryIP\n primaryIP = None\n for m in machines.get('machines', []):\n if m['id'] == machineID:\n primaryIP = m['primaryIP']\n if not primaryIP:\n raise RuntimeError('could not find host')\n\n # prepare ssh key\n file_obj = cStringIO.StringIO(base64.b64decode(self.auth))\n pkey = paramiko.RSAKey(file_obj=file_obj)\n\n # grab output via docker logs over SSH\n ssh = paramiko.SSHClient()\n ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())\n ssh.connect(primaryIP, username=\"core\", pkey=pkey)\n # share a transport\n tran = ssh.get_transport()\n\n def _do_ssh(cmd):\n chan = tran.open_session()\n # get a pty so stdout/stderr look right\n chan.get_pty()\n out = chan.makefile()\n 
chan.exec_command(cmd)\n rc, output = chan.recv_exit_status(), out.read()\n return rc, output\n\n # wait for container to start\n for _ in range(1200):\n rc, _ = _do_ssh('docker inspect {name}'.format(**locals()))\n if rc == 0:\n break\n time.sleep(1)\n else:\n raise RuntimeError('container failed to start on host')\n\n # wait for container to complete\n for _ in range(1200):\n _rc, _output = _do_ssh('docker inspect {name}'.format(**locals()))\n if _rc != 0:\n raise RuntimeError('failed to inspect container')\n _container = json.loads(_output)\n finished_at = _container[0][\"State\"][\"FinishedAt\"]\n if not finished_at.startswith('0001'):\n break\n time.sleep(1)\n else:\n raise RuntimeError('container timed out')\n\n # gather container output\n _rc, output = _do_ssh('docker logs {name}'.format(**locals()))\n if _rc != 0:\n raise RuntimeError('could not attach to container')\n\n # determine container exit code\n _rc, _output = _do_ssh('docker inspect {name}'.format(**locals()))\n if _rc != 0:\n raise RuntimeError('could not determine exit code')\n container = json.loads(_output)\n rc = container[0][\"State\"][\"ExitCode\"]\n\n # cleanup\n self._destroy_container(name)\n self._wait_for_destroy(name)\n\n # return rc and output\n return rc, output\n\n def attach(self, name):\n \"\"\"\n Attach to a job's stdin, stdout and stderr\n \"\"\"\n raise NotImplementedError\n\nSchedulerClient = FleetHTTPClient\n\n\nCONTAINER_TEMPLATE = [\n {\"section\": \"Unit\", \"name\": \"Description\", \"value\": \"{name}\"},\n {\"section\": \"Service\", \"name\": \"ExecStartPre\", \"value\": '''/bin/sh -c \"IMAGE=$(etcdctl get /deis/registry/host 2>&1):$(etcdctl get /deis/registry/port 2>&1)/{image}; docker pull $IMAGE\"'''}, # noqa\n {\"section\": \"Service\", \"name\": \"ExecStartPre\", \"value\": '''/bin/sh -c \"docker inspect {name} >/dev/null 2>&1 && docker rm -f {name} || true\"'''}, # noqa\n {\"section\": \"Service\", \"name\": \"ExecStart\", \"value\": '''/bin/sh -c \"IMAGE=$(etcdctl get /deis/registry/host 2>&1):$(etcdctl get /deis/registry/port 2>&1)/{image}; port=$(docker inspect -f '{{{{range $k, $v := .ContainerConfig.ExposedPorts }}}}{{{{$k}}}}{{{{end}}}}' $IMAGE | cut -d/ -f1) ; docker run --name {name} {memory} {cpu} -P -e PORT=$port $IMAGE {command}\"'''}, # noqa\n {\"section\": \"Service\", \"name\": \"ExecStop\", \"value\": '''/usr/bin/docker rm -f {name}'''},\n {\"section\": \"Service\", \"name\": \"TimeoutStartSec\", \"value\": \"20m\"},\n {\"section\": \"Service\", \"name\": \"RestartSec\", \"value\": \"5\"},\n {\"section\": \"Service\", \"name\": \"Restart\", \"value\": \"on-failure\"},\n]\n\n\nRUN_TEMPLATE = [\n {\"section\": \"Unit\", \"name\": \"Description\", \"value\": \"{name} admin command\"},\n {\"section\": \"Service\", \"name\": \"ExecStartPre\", \"value\": '''/bin/sh -c \"IMAGE=$(etcdctl get /deis/registry/host 2>&1):$(etcdctl get /deis/registry/port 2>&1)/{image}; docker pull $IMAGE\"'''}, # noqa\n {\"section\": \"Service\", \"name\": \"ExecStartPre\", \"value\": '''/bin/sh -c \"docker inspect {name} >/dev/null 2>&1 && docker rm -f {name} || true\"'''}, # noqa\n {\"section\": \"Service\", \"name\": \"ExecStart\", \"value\": '''/bin/sh -c \"IMAGE=$(etcdctl get /deis/registry/host 2>&1):$(etcdctl get /deis/registry/port 2>&1)/{image}; docker run --name {name} --entrypoint={entrypoint} -a stdout -a stderr $IMAGE {command}\"'''}, # noqa\n {\"section\": \"Service\", \"name\": \"TimeoutStartSec\", \"value\": \"20m\"},\n]\n", "path": "controller/scheduler/coreos.py"}]}
| 4,010 | 129 |
gh_patches_debug_16969
|
rasdani/github-patches
|
git_diff
|
cal-itp__benefits-430
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update rendering of Agency index page
## Background
In tandem with #366, this ticket moves the bulk of the contextual information from the application index to the agency index.
We need more flexibility on this page than the current viewmodel / page template allows for.

## Tasks
* [x] Create/update text entries in English and Spanish for each paragraph, title, etc.
* [x] Can we embed the **bold** font style in the translation?
* [x] Create text entries for a pre-button label / subhead
* [x] Create a new template `core/agency_index.html` to use for this page
* [x] In the view use the new template
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `benefits/core/views.py`
Content:
```
1 """
2 The core application: view definition for the root of the webapp.
3 """
4 from django.http import HttpResponseBadRequest, HttpResponseNotFound, HttpResponseServerError
5 from django.template import loader
6 from django.template.response import TemplateResponse
7 from django.urls import reverse
8 from django.utils.translation import gettext as _
9
10 from . import middleware, models, session, viewmodels
11
12
13 def PageTemplateResponse(request, page_vm):
14 """Helper returns a TemplateResponse using the common page template."""
15 return TemplateResponse(request, "core/page.html", page_vm.context_dict())
16
17
18 def _index_content_title():
19 """Helper returns the content title for the common index page."""
20 return _("core.pages.index.content_title")
21
22
23 def _index_paragraphs():
24 """Helper returns the content paragraphs for the common index page."""
25 return [_("core.pages.index.p[0]"), _("core.pages.index.p[1]"), _("core.pages.index.p[2]")]
26
27
28 def _index_url():
29 """Helper computes the index url path."""
30 return reverse("core:index")
31
32
33 @middleware.pageview_decorator
34 def index(request):
35 """View handler for the main entry page."""
36 session.reset(request)
37
38 # generate a button to the landing page for each active agency
39 agencies = models.TransitAgency.all_active()
40 buttons = [viewmodels.Button.outline_primary(text=a.short_name, url=a.index_url) for a in agencies]
41 buttons[0].classes.append("mt-3")
42 buttons[0].label = _("core.pages.index.chooseprovider")
43
44 page = viewmodels.Page(
45 content_title=_index_content_title(),
46 paragraphs=_index_paragraphs(),
47 buttons=buttons,
48 classes="home",
49 )
50
51 return PageTemplateResponse(request, page)
52
53
54 @middleware.pageview_decorator
55 def agency_index(request, agency):
56 """View handler for an agency entry page."""
57 session.reset(request)
58 session.update(request, agency=agency, origin=agency.index_url)
59
60 page = viewmodels.Page(
61 content_title=_index_content_title(),
62 paragraphs=_index_paragraphs(),
63 button=viewmodels.Button.primary(text=_("core.pages.index.continue"), url=reverse("eligibility:index")),
64 classes="home",
65 )
66
67 return PageTemplateResponse(request, page)
68
69
70 @middleware.pageview_decorator
71 def help(request):
72 """View handler for the help page."""
73 if session.active_agency(request):
74 agency = session.agency(request)
75 buttons = viewmodels.Button.agency_contact_links(agency)
76 else:
77 buttons = [btn for a in models.TransitAgency.all_active() for btn in viewmodels.Button.agency_contact_links(a)]
78
79 buttons.append(viewmodels.Button.home(request, _("core.buttons.back")))
80
81 page = viewmodels.Page(
82 title=_("core.buttons.help"),
83 content_title=_("core.buttons.help"),
84 buttons=buttons,
85 classes="text-lg-center",
86 noimage=True,
87 )
88
89 return TemplateResponse(request, "core/help.html", page.context_dict())
90
91
92 @middleware.pageview_decorator
93 def bad_request(request, exception, template_name="400.html"):
94 """View handler for HTTP 400 Bad Request responses."""
95 if session.active_agency(request):
96 session.update(request, origin=session.agency(request).index_url)
97 else:
98 session.update(request, origin=_index_url())
99
100 home = viewmodels.Button.home(request)
101 page = viewmodels.ErrorPage.error(button=home)
102 t = loader.get_template(template_name)
103
104 return HttpResponseBadRequest(t.render(page.context_dict()))
105
106
107 @middleware.pageview_decorator
108 def csrf_failure(request, reason):
109 """
110 View handler for CSRF_FAILURE_VIEW with custom data.
111 """
112 if session.active_agency(request):
113 session.update(request, origin=session.agency(request).index_url)
114 else:
115 session.update(request, origin=_index_url())
116
117 home = viewmodels.Button.home(request)
118 page = viewmodels.ErrorPage.not_found(button=home, path=request.path)
119 t = loader.get_template("400.html")
120
121 return HttpResponseNotFound(t.render(page.context_dict()))
122
123
124 @middleware.pageview_decorator
125 def page_not_found(request, exception, template_name="404.html"):
126 """View handler for HTTP 404 Not Found responses."""
127 if session.active_agency(request):
128 session.update(request, origin=session.agency(request).index_url)
129 else:
130 session.update(request, origin=_index_url())
131
132 home = viewmodels.Button.home(request)
133 page = viewmodels.ErrorPage.not_found(button=home, path=request.path)
134 t = loader.get_template(template_name)
135
136 return HttpResponseNotFound(t.render(page.context_dict()))
137
138
139 @middleware.pageview_decorator
140 def server_error(request, template_name="500.html"):
141 """View handler for HTTP 500 Server Error responses."""
142 if session.active_agency(request):
143 session.update(request, origin=session.agency(request).index_url)
144 else:
145 session.update(request, origin=_index_url())
146
147 home = viewmodels.Button.home(request)
148 page = viewmodels.ErrorPage.error(button=home)
149 t = loader.get_template(template_name)
150
151 return HttpResponseServerError(t.render(page.context_dict()))
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/benefits/core/views.py b/benefits/core/views.py
--- a/benefits/core/views.py
+++ b/benefits/core/views.py
@@ -57,14 +57,19 @@
session.reset(request)
session.update(request, agency=agency, origin=agency.index_url)
+ button = viewmodels.Button.primary(text=_("core.pages.index.continue"), url=reverse("eligibility:index"))
+ button.label = _("core.pages.agency_index.button.label")
+
page = viewmodels.Page(
- content_title=_index_content_title(),
- paragraphs=_index_paragraphs(),
- button=viewmodels.Button.primary(text=_("core.pages.index.continue"), url=reverse("eligibility:index")),
+ content_title=_("core.pages.agency_index.content_title"),
+ button=button,
classes="home",
)
- return PageTemplateResponse(request, page)
+ help_page = reverse("core:help")
+ context_dict = {**page.context_dict(), **{"info_link": f"{help_page}#about"}}
+
+ return TemplateResponse(request, "core/agency_index.html", context_dict)
@middleware.pageview_decorator
|
{"golden_diff": "diff --git a/benefits/core/views.py b/benefits/core/views.py\n--- a/benefits/core/views.py\n+++ b/benefits/core/views.py\n@@ -57,14 +57,19 @@\n session.reset(request)\n session.update(request, agency=agency, origin=agency.index_url)\n \n+ button = viewmodels.Button.primary(text=_(\"core.pages.index.continue\"), url=reverse(\"eligibility:index\"))\n+ button.label = _(\"core.pages.agency_index.button.label\")\n+\n page = viewmodels.Page(\n- content_title=_index_content_title(),\n- paragraphs=_index_paragraphs(),\n- button=viewmodels.Button.primary(text=_(\"core.pages.index.continue\"), url=reverse(\"eligibility:index\")),\n+ content_title=_(\"core.pages.agency_index.content_title\"),\n+ button=button,\n classes=\"home\",\n )\n \n- return PageTemplateResponse(request, page)\n+ help_page = reverse(\"core:help\")\n+ context_dict = {**page.context_dict(), **{\"info_link\": f\"{help_page}#about\"}}\n+\n+ return TemplateResponse(request, \"core/agency_index.html\", context_dict)\n \n \n @middleware.pageview_decorator\n", "issue": "Update rendering of Agency index page\n## Background\r\n\r\nIn tandem with #366, this ticket moves the bulk of the contextual information from the application index to the agency index.\r\n\r\nWe need more flexibility on this page than the current viewmodel / page template allows for.\r\n\r\n\r\n\r\n## Tasks\r\n\r\n* [x] Create/update text entries in English and Spanish for each paragraph, title, etc.\r\n* [x] Can we embed the **bold** font style in the translation?\r\n* [x] Create text entries for a pre-button label / subhead\r\n* [x] Create a new template `core/agency_index.html` to use for this page\r\n* [x] In the view use the new template\n", "before_files": [{"content": "\"\"\"\nThe core application: view definition for the root of the webapp.\n\"\"\"\nfrom django.http import HttpResponseBadRequest, HttpResponseNotFound, HttpResponseServerError\nfrom django.template import loader\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\nfrom django.utils.translation import gettext as _\n\nfrom . 
import middleware, models, session, viewmodels\n\n\ndef PageTemplateResponse(request, page_vm):\n \"\"\"Helper returns a TemplateResponse using the common page template.\"\"\"\n return TemplateResponse(request, \"core/page.html\", page_vm.context_dict())\n\n\ndef _index_content_title():\n \"\"\"Helper returns the content title for the common index page.\"\"\"\n return _(\"core.pages.index.content_title\")\n\n\ndef _index_paragraphs():\n \"\"\"Helper returns the content paragraphs for the common index page.\"\"\"\n return [_(\"core.pages.index.p[0]\"), _(\"core.pages.index.p[1]\"), _(\"core.pages.index.p[2]\")]\n\n\ndef _index_url():\n \"\"\"Helper computes the index url path.\"\"\"\n return reverse(\"core:index\")\n\n\[email protected]_decorator\ndef index(request):\n \"\"\"View handler for the main entry page.\"\"\"\n session.reset(request)\n\n # generate a button to the landing page for each active agency\n agencies = models.TransitAgency.all_active()\n buttons = [viewmodels.Button.outline_primary(text=a.short_name, url=a.index_url) for a in agencies]\n buttons[0].classes.append(\"mt-3\")\n buttons[0].label = _(\"core.pages.index.chooseprovider\")\n\n page = viewmodels.Page(\n content_title=_index_content_title(),\n paragraphs=_index_paragraphs(),\n buttons=buttons,\n classes=\"home\",\n )\n\n return PageTemplateResponse(request, page)\n\n\[email protected]_decorator\ndef agency_index(request, agency):\n \"\"\"View handler for an agency entry page.\"\"\"\n session.reset(request)\n session.update(request, agency=agency, origin=agency.index_url)\n\n page = viewmodels.Page(\n content_title=_index_content_title(),\n paragraphs=_index_paragraphs(),\n button=viewmodels.Button.primary(text=_(\"core.pages.index.continue\"), url=reverse(\"eligibility:index\")),\n classes=\"home\",\n )\n\n return PageTemplateResponse(request, page)\n\n\[email protected]_decorator\ndef help(request):\n \"\"\"View handler for the help page.\"\"\"\n if session.active_agency(request):\n agency = session.agency(request)\n buttons = viewmodels.Button.agency_contact_links(agency)\n else:\n buttons = [btn for a in models.TransitAgency.all_active() for btn in viewmodels.Button.agency_contact_links(a)]\n\n buttons.append(viewmodels.Button.home(request, _(\"core.buttons.back\")))\n\n page = viewmodels.Page(\n title=_(\"core.buttons.help\"),\n content_title=_(\"core.buttons.help\"),\n buttons=buttons,\n classes=\"text-lg-center\",\n noimage=True,\n )\n\n return TemplateResponse(request, \"core/help.html\", page.context_dict())\n\n\[email protected]_decorator\ndef bad_request(request, exception, template_name=\"400.html\"):\n \"\"\"View handler for HTTP 400 Bad Request responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.error(button=home)\n t = loader.get_template(template_name)\n\n return HttpResponseBadRequest(t.render(page.context_dict()))\n\n\[email protected]_decorator\ndef csrf_failure(request, reason):\n \"\"\"\n View handler for CSRF_FAILURE_VIEW with custom data.\n \"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.not_found(button=home, path=request.path)\n t = loader.get_template(\"400.html\")\n\n return 
HttpResponseNotFound(t.render(page.context_dict()))\n\n\[email protected]_decorator\ndef page_not_found(request, exception, template_name=\"404.html\"):\n \"\"\"View handler for HTTP 404 Not Found responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.not_found(button=home, path=request.path)\n t = loader.get_template(template_name)\n\n return HttpResponseNotFound(t.render(page.context_dict()))\n\n\[email protected]_decorator\ndef server_error(request, template_name=\"500.html\"):\n \"\"\"View handler for HTTP 500 Server Error responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.error(button=home)\n t = loader.get_template(template_name)\n\n return HttpResponseServerError(t.render(page.context_dict()))\n", "path": "benefits/core/views.py"}], "after_files": [{"content": "\"\"\"\nThe core application: view definition for the root of the webapp.\n\"\"\"\nfrom django.http import HttpResponseBadRequest, HttpResponseNotFound, HttpResponseServerError\nfrom django.template import loader\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\nfrom django.utils.translation import gettext as _\n\nfrom . import middleware, models, session, viewmodels\n\n\ndef PageTemplateResponse(request, page_vm):\n \"\"\"Helper returns a TemplateResponse using the common page template.\"\"\"\n return TemplateResponse(request, \"core/page.html\", page_vm.context_dict())\n\n\ndef _index_content_title():\n \"\"\"Helper returns the content title for the common index page.\"\"\"\n return _(\"core.pages.index.content_title\")\n\n\ndef _index_paragraphs():\n \"\"\"Helper returns the content paragraphs for the common index page.\"\"\"\n return [_(\"core.pages.index.p[0]\"), _(\"core.pages.index.p[1]\"), _(\"core.pages.index.p[2]\")]\n\n\ndef _index_url():\n \"\"\"Helper computes the index url path.\"\"\"\n return reverse(\"core:index\")\n\n\[email protected]_decorator\ndef index(request):\n \"\"\"View handler for the main entry page.\"\"\"\n session.reset(request)\n\n # generate a button to the landing page for each active agency\n agencies = models.TransitAgency.all_active()\n buttons = [viewmodels.Button.outline_primary(text=a.short_name, url=a.index_url) for a in agencies]\n buttons[0].classes.append(\"mt-3\")\n buttons[0].label = _(\"core.pages.index.chooseprovider\")\n\n page = viewmodels.Page(\n content_title=_index_content_title(),\n paragraphs=_index_paragraphs(),\n buttons=buttons,\n classes=\"home\",\n )\n\n return PageTemplateResponse(request, page)\n\n\[email protected]_decorator\ndef agency_index(request, agency):\n \"\"\"View handler for an agency entry page.\"\"\"\n session.reset(request)\n session.update(request, agency=agency, origin=agency.index_url)\n\n button = viewmodels.Button.primary(text=_(\"core.pages.index.continue\"), url=reverse(\"eligibility:index\"))\n button.label = _(\"core.pages.agency_index.button.label\")\n\n page = viewmodels.Page(\n content_title=_(\"core.pages.agency_index.content_title\"),\n button=button,\n classes=\"home\",\n )\n\n help_page = reverse(\"core:help\")\n context_dict = {**page.context_dict(), **{\"info_link\": f\"{help_page}#about\"}}\n\n return 
TemplateResponse(request, \"core/agency_index.html\", context_dict)\n\n\[email protected]_decorator\ndef help(request):\n \"\"\"View handler for the help page.\"\"\"\n if session.active_agency(request):\n agency = session.agency(request)\n buttons = viewmodels.Button.agency_contact_links(agency)\n else:\n buttons = [btn for a in models.TransitAgency.all_active() for btn in viewmodels.Button.agency_contact_links(a)]\n\n buttons.append(viewmodels.Button.home(request, _(\"core.buttons.back\")))\n\n page = viewmodels.Page(\n title=_(\"core.buttons.help\"),\n content_title=_(\"core.buttons.help\"),\n buttons=buttons,\n classes=\"text-lg-center\",\n noimage=True,\n )\n\n return TemplateResponse(request, \"core/help.html\", page.context_dict())\n\n\[email protected]_decorator\ndef bad_request(request, exception, template_name=\"400.html\"):\n \"\"\"View handler for HTTP 400 Bad Request responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.error(button=home)\n t = loader.get_template(template_name)\n\n return HttpResponseBadRequest(t.render(page.context_dict()))\n\n\[email protected]_decorator\ndef csrf_failure(request, reason):\n \"\"\"\n View handler for CSRF_FAILURE_VIEW with custom data.\n \"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.not_found(button=home, path=request.path)\n t = loader.get_template(\"400.html\")\n\n return HttpResponseNotFound(t.render(page.context_dict()))\n\n\[email protected]_decorator\ndef page_not_found(request, exception, template_name=\"404.html\"):\n \"\"\"View handler for HTTP 404 Not Found responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.not_found(button=home, path=request.path)\n t = loader.get_template(template_name)\n\n return HttpResponseNotFound(t.render(page.context_dict()))\n\n\[email protected]_decorator\ndef server_error(request, template_name=\"500.html\"):\n \"\"\"View handler for HTTP 500 Server Error responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=_index_url())\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.error(button=home)\n t = loader.get_template(template_name)\n\n return HttpResponseServerError(t.render(page.context_dict()))\n", "path": "benefits/core/views.py"}]}
| 1,890 | 254 |
gh_patches_debug_44234
|
rasdani/github-patches
|
git_diff
|
deepset-ai__haystack-3953
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove frontmatter and add to `meta` in markdown converter
**Is your feature request related to a problem? Please describe.**
We would like to use the markdown converter for the `haystack-tutorials` search
**Describe the solution you'd like**
Frontmatter is often used to store some meta information of markdown files, so it would be a useful feature to automatically add these to the `meta` of the created documents.
It's also a good idea to remove the frontmatter from the created documents.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `haystack/nodes/file_converter/markdown.py`
Content:
```
1 import logging
2 import re
3 from pathlib import Path
4 from typing import Dict, List, Optional, Tuple, Any
5
6 try:
7 from bs4 import BeautifulSoup, NavigableString
8 from markdown import markdown
9 except (ImportError, ModuleNotFoundError) as ie:
10 from haystack.utils.import_utils import _optional_component_not_installed
11
12 _optional_component_not_installed(__name__, "preprocessing", ie)
13
14 from haystack.nodes.file_converter.base import BaseConverter
15 from haystack.schema import Document
16
17
18 logger = logging.getLogger(__name__)
19
20
21 class MarkdownConverter(BaseConverter):
22 def __init__(
23 self,
24 remove_numeric_tables: bool = False,
25 valid_languages: Optional[List[str]] = None,
26 id_hash_keys: Optional[List[str]] = None,
27 progress_bar: bool = True,
28 remove_code_snippets: bool = True,
29 extract_headlines: bool = False,
30 ):
31 """
32 :param remove_numeric_tables: Not applicable.
33 :param valid_languages: Not applicable.
34 :param id_hash_keys: Generate the document ID from a custom list of strings that refer to the document's
35 attributes. To make sure you don't have duplicate documents in your DocumentStore if texts are
36 not unique, you can modify the metadata and pass for example, `"meta"` to this field ([`"content"`, `"meta"`]).
37 In this case, the ID is generated by using the content and the defined metadata.
38 :param progress_bar: Show a progress bar for the conversion.
39 :param remove_code_snippets: Whether to remove snippets from the markdown file.
40 :param extract_headlines: Whether to extract headings from the markdown file.
41 """
42 super().__init__(
43 remove_numeric_tables=remove_numeric_tables,
44 valid_languages=valid_languages,
45 id_hash_keys=id_hash_keys,
46 progress_bar=progress_bar,
47 )
48
49 self.remove_code_snippets = remove_code_snippets
50 self.extract_headlines = extract_headlines
51
52 def convert(
53 self,
54 file_path: Path,
55 meta: Optional[Dict[str, Any]] = None,
56 remove_numeric_tables: Optional[bool] = None,
57 valid_languages: Optional[List[str]] = None,
58 encoding: Optional[str] = "utf-8",
59 id_hash_keys: Optional[List[str]] = None,
60 remove_code_snippets: Optional[bool] = None,
61 extract_headlines: Optional[bool] = None,
62 ) -> List[Document]:
63 """
64 Reads text from a markdown file and executes optional preprocessing steps.
65
66 :param file_path: path of the file to convert
67 :param meta: dictionary of meta data key-value pairs to append in the returned document.
68 :param encoding: Select the file encoding (default is `utf-8`)
69 :param remove_numeric_tables: Not applicable
70 :param valid_languages: Not applicable
71 :param id_hash_keys: Generate the document id from a custom list of strings that refer to the document's
72 attributes. If you want to ensure you don't have duplicate documents in your DocumentStore but texts are
73 not unique, you can modify the metadata and pass e.g. `"meta"` to this field (e.g. [`"content"`, `"meta"`]).
74 In this case the id will be generated by using the content and the defined metadata.
75 :param remove_code_snippets: Whether to remove snippets from the markdown file.
76 :param extract_headlines: Whether to extract headings from the markdown file.
77 """
78
79 id_hash_keys = id_hash_keys if id_hash_keys is not None else self.id_hash_keys
80 remove_code_snippets = remove_code_snippets if remove_code_snippets is not None else self.remove_code_snippets
81 extract_headlines = extract_headlines if extract_headlines is not None else self.extract_headlines
82
83 with open(file_path, encoding=encoding, errors="ignore") as f:
84 markdown_text = f.read()
85
86 # md -> html -> text since BeautifulSoup can extract text cleanly
87 html = markdown(markdown_text)
88
89 # remove code snippets
90 if remove_code_snippets:
91 html = re.sub(r"<pre>(.*?)</pre>", " ", html, flags=re.DOTALL)
92 html = re.sub(r"<code>(.*?)</code>", " ", html, flags=re.DOTALL)
93 soup = BeautifulSoup(html, "html.parser")
94
95 if extract_headlines:
96 text, headlines = self._extract_text_and_headlines(soup)
97 if meta is None:
98 meta = {}
99 meta["headlines"] = headlines
100 else:
101 text = soup.get_text()
102
103 document = Document(content=text, meta=meta, id_hash_keys=id_hash_keys)
104 return [document]
105
106 @staticmethod
107 def _extract_text_and_headlines(soup: BeautifulSoup) -> Tuple[str, List[Dict]]:
108 """
109 Extracts text and headings from a soup object.
110 """
111 headline_tags = {"h1", "h2", "h3", "h4", "h5", "h6"}
112 headlines = []
113 text = ""
114 for desc in soup.descendants:
115 if desc.name in headline_tags:
116 current_headline = desc.get_text()
117 current_start_idx = len(text)
118 current_level = int(desc.name[-1]) - 1
119 headlines.append({"headline": current_headline, "start_idx": current_start_idx, "level": current_level})
120
121 if isinstance(desc, NavigableString):
122 text += desc.get_text()
123
124 return text, headlines
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/haystack/nodes/file_converter/markdown.py b/haystack/nodes/file_converter/markdown.py
--- a/haystack/nodes/file_converter/markdown.py
+++ b/haystack/nodes/file_converter/markdown.py
@@ -4,6 +4,7 @@
from typing import Dict, List, Optional, Tuple, Any
try:
+ import frontmatter
from bs4 import BeautifulSoup, NavigableString
from markdown import markdown
except (ImportError, ModuleNotFoundError) as ie:
@@ -27,6 +28,7 @@
progress_bar: bool = True,
remove_code_snippets: bool = True,
extract_headlines: bool = False,
+ add_frontmatter_to_meta: bool = False,
):
"""
:param remove_numeric_tables: Not applicable.
@@ -38,6 +40,7 @@
:param progress_bar: Show a progress bar for the conversion.
:param remove_code_snippets: Whether to remove snippets from the markdown file.
:param extract_headlines: Whether to extract headings from the markdown file.
+ :param add_frontmatter_to_meta: Whether to add the contents of the frontmatter to `meta`.
"""
super().__init__(
remove_numeric_tables=remove_numeric_tables,
@@ -48,6 +51,7 @@
self.remove_code_snippets = remove_code_snippets
self.extract_headlines = extract_headlines
+ self.add_frontmatter_to_meta = add_frontmatter_to_meta
def convert(
self,
@@ -59,6 +63,7 @@
id_hash_keys: Optional[List[str]] = None,
remove_code_snippets: Optional[bool] = None,
extract_headlines: Optional[bool] = None,
+ add_frontmatter_to_meta: Optional[bool] = None,
) -> List[Document]:
"""
Reads text from a markdown file and executes optional preprocessing steps.
@@ -74,14 +79,18 @@
In this case the id will be generated by using the content and the defined metadata.
:param remove_code_snippets: Whether to remove snippets from the markdown file.
:param extract_headlines: Whether to extract headings from the markdown file.
+ :param add_frontmatter_to_meta: Whether to add the contents of the frontmatter to `meta`.
"""
id_hash_keys = id_hash_keys if id_hash_keys is not None else self.id_hash_keys
remove_code_snippets = remove_code_snippets if remove_code_snippets is not None else self.remove_code_snippets
extract_headlines = extract_headlines if extract_headlines is not None else self.extract_headlines
+ add_frontmatter_to_meta = (
+ add_frontmatter_to_meta if add_frontmatter_to_meta is not None else self.add_frontmatter_to_meta
+ )
with open(file_path, encoding=encoding, errors="ignore") as f:
- markdown_text = f.read()
+ metadata, markdown_text = frontmatter.parse(f.read())
# md -> html -> text since BeautifulSoup can extract text cleanly
html = markdown(markdown_text)
@@ -92,6 +101,12 @@
html = re.sub(r"<code>(.*?)</code>", " ", html, flags=re.DOTALL)
soup = BeautifulSoup(html, "html.parser")
+ if add_frontmatter_to_meta:
+ if meta is None:
+ meta = metadata
+ else:
+ meta.update(metadata)
+
if extract_headlines:
text, headlines = self._extract_text_and_headlines(soup)
if meta is None:
|
{"golden_diff": "diff --git a/haystack/nodes/file_converter/markdown.py b/haystack/nodes/file_converter/markdown.py\n--- a/haystack/nodes/file_converter/markdown.py\n+++ b/haystack/nodes/file_converter/markdown.py\n@@ -4,6 +4,7 @@\n from typing import Dict, List, Optional, Tuple, Any\n \n try:\n+ import frontmatter\n from bs4 import BeautifulSoup, NavigableString\n from markdown import markdown\n except (ImportError, ModuleNotFoundError) as ie:\n@@ -27,6 +28,7 @@\n progress_bar: bool = True,\n remove_code_snippets: bool = True,\n extract_headlines: bool = False,\n+ add_frontmatter_to_meta: bool = False,\n ):\n \"\"\"\n :param remove_numeric_tables: Not applicable.\n@@ -38,6 +40,7 @@\n :param progress_bar: Show a progress bar for the conversion.\n :param remove_code_snippets: Whether to remove snippets from the markdown file.\n :param extract_headlines: Whether to extract headings from the markdown file.\n+ :param add_frontmatter_to_meta: Whether to add the contents of the frontmatter to `meta`.\n \"\"\"\n super().__init__(\n remove_numeric_tables=remove_numeric_tables,\n@@ -48,6 +51,7 @@\n \n self.remove_code_snippets = remove_code_snippets\n self.extract_headlines = extract_headlines\n+ self.add_frontmatter_to_meta = add_frontmatter_to_meta\n \n def convert(\n self,\n@@ -59,6 +63,7 @@\n id_hash_keys: Optional[List[str]] = None,\n remove_code_snippets: Optional[bool] = None,\n extract_headlines: Optional[bool] = None,\n+ add_frontmatter_to_meta: Optional[bool] = None,\n ) -> List[Document]:\n \"\"\"\n Reads text from a markdown file and executes optional preprocessing steps.\n@@ -74,14 +79,18 @@\n In this case the id will be generated by using the content and the defined metadata.\n :param remove_code_snippets: Whether to remove snippets from the markdown file.\n :param extract_headlines: Whether to extract headings from the markdown file.\n+ :param add_frontmatter_to_meta: Whether to add the contents of the frontmatter to `meta`.\n \"\"\"\n \n id_hash_keys = id_hash_keys if id_hash_keys is not None else self.id_hash_keys\n remove_code_snippets = remove_code_snippets if remove_code_snippets is not None else self.remove_code_snippets\n extract_headlines = extract_headlines if extract_headlines is not None else self.extract_headlines\n+ add_frontmatter_to_meta = (\n+ add_frontmatter_to_meta if add_frontmatter_to_meta is not None else self.add_frontmatter_to_meta\n+ )\n \n with open(file_path, encoding=encoding, errors=\"ignore\") as f:\n- markdown_text = f.read()\n+ metadata, markdown_text = frontmatter.parse(f.read())\n \n # md -> html -> text since BeautifulSoup can extract text cleanly\n html = markdown(markdown_text)\n@@ -92,6 +101,12 @@\n html = re.sub(r\"<code>(.*?)</code>\", \" \", html, flags=re.DOTALL)\n soup = BeautifulSoup(html, \"html.parser\")\n \n+ if add_frontmatter_to_meta:\n+ if meta is None:\n+ meta = metadata\n+ else:\n+ meta.update(metadata)\n+\n if extract_headlines:\n text, headlines = self._extract_text_and_headlines(soup)\n if meta is None:\n", "issue": "Remove frontmatter and add to `meta` in markdown converter\n**Is your feature request related to a problem? Please describe.**\r\nWe would like to use the markdown converter for the `haystack-tutorials` search\r\n\r\n**Describe the solution you'd like**\r\nFrontmatter is often used to store some meta information of markdown files. So it would be a useful feature to be able to auto add these to the `meta` of the created documents\r\nIt's also a good idea to remove the fronmatter from the created documents too. 
\n", "before_files": [{"content": "import logging\nimport re\nfrom pathlib import Path\nfrom typing import Dict, List, Optional, Tuple, Any\n\ntry:\n from bs4 import BeautifulSoup, NavigableString\n from markdown import markdown\nexcept (ImportError, ModuleNotFoundError) as ie:\n from haystack.utils.import_utils import _optional_component_not_installed\n\n _optional_component_not_installed(__name__, \"preprocessing\", ie)\n\nfrom haystack.nodes.file_converter.base import BaseConverter\nfrom haystack.schema import Document\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass MarkdownConverter(BaseConverter):\n def __init__(\n self,\n remove_numeric_tables: bool = False,\n valid_languages: Optional[List[str]] = None,\n id_hash_keys: Optional[List[str]] = None,\n progress_bar: bool = True,\n remove_code_snippets: bool = True,\n extract_headlines: bool = False,\n ):\n \"\"\"\n :param remove_numeric_tables: Not applicable.\n :param valid_languages: Not applicable.\n :param id_hash_keys: Generate the document ID from a custom list of strings that refer to the document's\n attributes. To make sure you don't have duplicate documents in your DocumentStore if texts are\n not unique, you can modify the metadata and pass for example, `\"meta\"` to this field ([`\"content\"`, `\"meta\"`]).\n In this case, the ID is generated by using the content and the defined metadata.\n :param progress_bar: Show a progress bar for the conversion.\n :param remove_code_snippets: Whether to remove snippets from the markdown file.\n :param extract_headlines: Whether to extract headings from the markdown file.\n \"\"\"\n super().__init__(\n remove_numeric_tables=remove_numeric_tables,\n valid_languages=valid_languages,\n id_hash_keys=id_hash_keys,\n progress_bar=progress_bar,\n )\n\n self.remove_code_snippets = remove_code_snippets\n self.extract_headlines = extract_headlines\n\n def convert(\n self,\n file_path: Path,\n meta: Optional[Dict[str, Any]] = None,\n remove_numeric_tables: Optional[bool] = None,\n valid_languages: Optional[List[str]] = None,\n encoding: Optional[str] = \"utf-8\",\n id_hash_keys: Optional[List[str]] = None,\n remove_code_snippets: Optional[bool] = None,\n extract_headlines: Optional[bool] = None,\n ) -> List[Document]:\n \"\"\"\n Reads text from a markdown file and executes optional preprocessing steps.\n\n :param file_path: path of the file to convert\n :param meta: dictionary of meta data key-value pairs to append in the returned document.\n :param encoding: Select the file encoding (default is `utf-8`)\n :param remove_numeric_tables: Not applicable\n :param valid_languages: Not applicable\n :param id_hash_keys: Generate the document id from a custom list of strings that refer to the document's\n attributes. If you want to ensure you don't have duplicate documents in your DocumentStore but texts are\n not unique, you can modify the metadata and pass e.g. `\"meta\"` to this field (e.g. 
[`\"content\"`, `\"meta\"`]).\n In this case the id will be generated by using the content and the defined metadata.\n :param remove_code_snippets: Whether to remove snippets from the markdown file.\n :param extract_headlines: Whether to extract headings from the markdown file.\n \"\"\"\n\n id_hash_keys = id_hash_keys if id_hash_keys is not None else self.id_hash_keys\n remove_code_snippets = remove_code_snippets if remove_code_snippets is not None else self.remove_code_snippets\n extract_headlines = extract_headlines if extract_headlines is not None else self.extract_headlines\n\n with open(file_path, encoding=encoding, errors=\"ignore\") as f:\n markdown_text = f.read()\n\n # md -> html -> text since BeautifulSoup can extract text cleanly\n html = markdown(markdown_text)\n\n # remove code snippets\n if remove_code_snippets:\n html = re.sub(r\"<pre>(.*?)</pre>\", \" \", html, flags=re.DOTALL)\n html = re.sub(r\"<code>(.*?)</code>\", \" \", html, flags=re.DOTALL)\n soup = BeautifulSoup(html, \"html.parser\")\n\n if extract_headlines:\n text, headlines = self._extract_text_and_headlines(soup)\n if meta is None:\n meta = {}\n meta[\"headlines\"] = headlines\n else:\n text = soup.get_text()\n\n document = Document(content=text, meta=meta, id_hash_keys=id_hash_keys)\n return [document]\n\n @staticmethod\n def _extract_text_and_headlines(soup: BeautifulSoup) -> Tuple[str, List[Dict]]:\n \"\"\"\n Extracts text and headings from a soup object.\n \"\"\"\n headline_tags = {\"h1\", \"h2\", \"h3\", \"h4\", \"h5\", \"h6\"}\n headlines = []\n text = \"\"\n for desc in soup.descendants:\n if desc.name in headline_tags:\n current_headline = desc.get_text()\n current_start_idx = len(text)\n current_level = int(desc.name[-1]) - 1\n headlines.append({\"headline\": current_headline, \"start_idx\": current_start_idx, \"level\": current_level})\n\n if isinstance(desc, NavigableString):\n text += desc.get_text()\n\n return text, headlines\n", "path": "haystack/nodes/file_converter/markdown.py"}], "after_files": [{"content": "import logging\nimport re\nfrom pathlib import Path\nfrom typing import Dict, List, Optional, Tuple, Any\n\ntry:\n import frontmatter\n from bs4 import BeautifulSoup, NavigableString\n from markdown import markdown\nexcept (ImportError, ModuleNotFoundError) as ie:\n from haystack.utils.import_utils import _optional_component_not_installed\n\n _optional_component_not_installed(__name__, \"preprocessing\", ie)\n\nfrom haystack.nodes.file_converter.base import BaseConverter\nfrom haystack.schema import Document\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass MarkdownConverter(BaseConverter):\n def __init__(\n self,\n remove_numeric_tables: bool = False,\n valid_languages: Optional[List[str]] = None,\n id_hash_keys: Optional[List[str]] = None,\n progress_bar: bool = True,\n remove_code_snippets: bool = True,\n extract_headlines: bool = False,\n add_frontmatter_to_meta: bool = False,\n ):\n \"\"\"\n :param remove_numeric_tables: Not applicable.\n :param valid_languages: Not applicable.\n :param id_hash_keys: Generate the document ID from a custom list of strings that refer to the document's\n attributes. 
To make sure you don't have duplicate documents in your DocumentStore if texts are\n not unique, you can modify the metadata and pass for example, `\"meta\"` to this field ([`\"content\"`, `\"meta\"`]).\n In this case, the ID is generated by using the content and the defined metadata.\n :param progress_bar: Show a progress bar for the conversion.\n :param remove_code_snippets: Whether to remove snippets from the markdown file.\n :param extract_headlines: Whether to extract headings from the markdown file.\n :param add_frontmatter_to_meta: Whether to add the contents of the frontmatter to `meta`.\n \"\"\"\n super().__init__(\n remove_numeric_tables=remove_numeric_tables,\n valid_languages=valid_languages,\n id_hash_keys=id_hash_keys,\n progress_bar=progress_bar,\n )\n\n self.remove_code_snippets = remove_code_snippets\n self.extract_headlines = extract_headlines\n self.add_frontmatter_to_meta = add_frontmatter_to_meta\n\n def convert(\n self,\n file_path: Path,\n meta: Optional[Dict[str, Any]] = None,\n remove_numeric_tables: Optional[bool] = None,\n valid_languages: Optional[List[str]] = None,\n encoding: Optional[str] = \"utf-8\",\n id_hash_keys: Optional[List[str]] = None,\n remove_code_snippets: Optional[bool] = None,\n extract_headlines: Optional[bool] = None,\n add_frontmatter_to_meta: Optional[bool] = None,\n ) -> List[Document]:\n \"\"\"\n Reads text from a markdown file and executes optional preprocessing steps.\n\n :param file_path: path of the file to convert\n :param meta: dictionary of meta data key-value pairs to append in the returned document.\n :param encoding: Select the file encoding (default is `utf-8`)\n :param remove_numeric_tables: Not applicable\n :param valid_languages: Not applicable\n :param id_hash_keys: Generate the document id from a custom list of strings that refer to the document's\n attributes. If you want to ensure you don't have duplicate documents in your DocumentStore but texts are\n not unique, you can modify the metadata and pass e.g. `\"meta\"` to this field (e.g. 
[`\"content\"`, `\"meta\"`]).\n In this case the id will be generated by using the content and the defined metadata.\n :param remove_code_snippets: Whether to remove snippets from the markdown file.\n :param extract_headlines: Whether to extract headings from the markdown file.\n :param add_frontmatter_to_meta: Whether to add the contents of the frontmatter to `meta`.\n \"\"\"\n\n id_hash_keys = id_hash_keys if id_hash_keys is not None else self.id_hash_keys\n remove_code_snippets = remove_code_snippets if remove_code_snippets is not None else self.remove_code_snippets\n extract_headlines = extract_headlines if extract_headlines is not None else self.extract_headlines\n add_frontmatter_to_meta = (\n add_frontmatter_to_meta if add_frontmatter_to_meta is not None else self.add_frontmatter_to_meta\n )\n\n with open(file_path, encoding=encoding, errors=\"ignore\") as f:\n metadata, markdown_text = frontmatter.parse(f.read())\n\n # md -> html -> text since BeautifulSoup can extract text cleanly\n html = markdown(markdown_text)\n\n # remove code snippets\n if remove_code_snippets:\n html = re.sub(r\"<pre>(.*?)</pre>\", \" \", html, flags=re.DOTALL)\n html = re.sub(r\"<code>(.*?)</code>\", \" \", html, flags=re.DOTALL)\n soup = BeautifulSoup(html, \"html.parser\")\n\n if add_frontmatter_to_meta:\n if meta is None:\n meta = metadata\n else:\n meta.update(metadata)\n\n if extract_headlines:\n text, headlines = self._extract_text_and_headlines(soup)\n if meta is None:\n meta = {}\n meta[\"headlines\"] = headlines\n else:\n text = soup.get_text()\n\n document = Document(content=text, meta=meta, id_hash_keys=id_hash_keys)\n return [document]\n\n @staticmethod\n def _extract_text_and_headlines(soup: BeautifulSoup) -> Tuple[str, List[Dict]]:\n \"\"\"\n Extracts text and headings from a soup object.\n \"\"\"\n headline_tags = {\"h1\", \"h2\", \"h3\", \"h4\", \"h5\", \"h6\"}\n headlines = []\n text = \"\"\n for desc in soup.descendants:\n if desc.name in headline_tags:\n current_headline = desc.get_text()\n current_start_idx = len(text)\n current_level = int(desc.name[-1]) - 1\n headlines.append({\"headline\": current_headline, \"start_idx\": current_start_idx, \"level\": current_level})\n\n if isinstance(desc, NavigableString):\n text += desc.get_text()\n\n return text, headlines\n", "path": "haystack/nodes/file_converter/markdown.py"}]}
| 1,806 | 782 |
gh_patches_debug_25505 | rasdani/github-patches | git_diff | onnx__onnx-1648 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Proposal: Op Registry
Once we start getting into more complicated passes, having knowledge about operators becomes incredibly useful. What I propose is to maintain a registry of operators mapping to a set of descriptors.
For example, let's take a look at the [eliminate_nop_monotone_argmax](https://github.com/onnx/onnx/blob/master/onnx/optimizer/passes/eliminate_nop_monotone_argmax.h) pass. Within the pass we have to maintain an unordered set of ops that are monotone. The correct design would be to have a registry which we can ask whether or not a certain op is monotone. Once such a registry is implemented, passes can be generalized through descriptors; therefore, as we add new ops and their respective descriptors, our passes naturally scale out.
Some possible descriptors we could have are:
- monotonic
- elementwise independent (op works on an individual element without knowledge of other elements, such as log, but not softmax since it requires calculating a sum over an axis)
- elementwise dependent (softmax, logsoftmax, softmin, softplus)
- reduction
- shape transform (e.g. transpose, view)
@houseroad @bddppq What do you guys think?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `onnx/optimizer.py`
Content:
```
1 # ATTENTION: The code in this file is highly EXPERIMENTAL.
2 # Adventurous users should note that the APIs will probably change.
3
4 """onnx optimizer
5
6 This enables users to optimize their models.
7 """
8 from __future__ import absolute_import
9 from __future__ import division
10 from __future__ import print_function
11 from __future__ import unicode_literals
12
13 import onnx
14 import onnx.onnx_cpp2py_export.optimizer as C
15 from onnx import ModelProto
16 from typing import Text, Sequence, Optional
17
18 """Apply the optimization on the serialized ModelProto.
19
20 Arguments:
21 input (ModelProto): model
22 names (list of string): list of optimization names
23
24 Return:
25 return (ModelProto) optimized model
26
27 Supported pass names:
28 -- nop
29 -- eliminate_identity
30 -- eliminate_nop_transpose
31 -- eliminate_nop_pad
32 -- eliminate_unused_initializer
33 -- fuse_consecutive_squeezes
34 -- fuse_consecutive_transposes
35 -- fuse_add_bias_into_conv
36 -- fuse_transpose_into_gemm
37 """
38
39 get_available_passes = C.get_available_passes
40
41
42 def optimize(model, passes=None, fixed_point=False): # type: (ModelProto, Optional[Sequence[Text]], bool) -> ModelProto
43 if passes is None:
44 passes = ['eliminate_nop_transpose',
45 'eliminate_nop_pad',
46 'fuse_consecutive_transposes',
47 'fuse_transpose_into_gemm']
48 if not isinstance(model, ModelProto):
49 raise ValueError('Optimizer only accepts ModelProto, incorrect type: {}'.format(type(model)))
50
51 model_str = model.SerializeToString()
52 if fixed_point:
53 optimized_model_str = C.optimize_fixedpoint(model_str, passes)
54 else:
55 optimized_model_str = C.optimize(model_str, passes)
56
57 return onnx.load_from_string(optimized_model_str)
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/onnx/optimizer.py b/onnx/optimizer.py
--- a/onnx/optimizer.py
+++ b/onnx/optimizer.py
@@ -26,13 +26,23 @@
Supported pass names:
-- nop
+ -- eliminate_deadend
-- eliminate_identity
- -- eliminate_nop_transpose
+ -- eliminate_nop_dropout
+ -- eliminate_nop_monotone_argmax
-- eliminate_nop_pad
+ -- eliminate_nop_transpose
-- eliminate_unused_initializer
+ -- extract_constant_to_initializer
+ -- fuse_add_bias_into_conv
+ -- fuse_bn_into_conv
+ -- fuse_consecutive_concats
+ -- fuse_consecutive_log_softmax
+ -- fuse_consecutive_reduce_unsqueeze
-- fuse_consecutive_squeezes
-- fuse_consecutive_transposes
- -- fuse_add_bias_into_conv
+ -- fuse_matmul_add_bias_into_gemm
+ -- fuse_pad_into_conv
-- fuse_transpose_into_gemm
"""
@@ -45,8 +55,10 @@
'eliminate_nop_pad',
'fuse_consecutive_transposes',
'fuse_transpose_into_gemm']
+
if not isinstance(model, ModelProto):
- raise ValueError('Optimizer only accepts ModelProto, incorrect type: {}'.format(type(model)))
+ raise ValueError(
+ 'Optimizer only accepts ModelProto, incorrect type: {}'.format(type(model)))
model_str = model.SerializeToString()
if fixed_point:
|
{"golden_diff": "diff --git a/onnx/optimizer.py b/onnx/optimizer.py\n--- a/onnx/optimizer.py\n+++ b/onnx/optimizer.py\n@@ -26,13 +26,23 @@\n \n Supported pass names:\n -- nop\n+ -- eliminate_deadend\n -- eliminate_identity\n- -- eliminate_nop_transpose\n+ -- eliminate_nop_dropout\n+ -- eliminate_nop_monotone_argmax\n -- eliminate_nop_pad\n+ -- eliminate_nop_transpose\n -- eliminate_unused_initializer\n+ -- extract_constant_to_initializer\n+ -- fuse_add_bias_into_conv\n+ -- fuse_bn_into_conv\n+ -- fuse_consecutive_concats\n+ -- fuse_consecutive_log_softmax\n+ -- fuse_consecutive_reduce_unsqueeze\n -- fuse_consecutive_squeezes\n -- fuse_consecutive_transposes\n- -- fuse_add_bias_into_conv\n+ -- fuse_matmul_add_bias_into_gemm\n+ -- fuse_pad_into_conv\n -- fuse_transpose_into_gemm\n \"\"\"\n \n@@ -45,8 +55,10 @@\n 'eliminate_nop_pad',\n 'fuse_consecutive_transposes',\n 'fuse_transpose_into_gemm']\n+\n if not isinstance(model, ModelProto):\n- raise ValueError('Optimizer only accepts ModelProto, incorrect type: {}'.format(type(model)))\n+ raise ValueError(\n+ 'Optimizer only accepts ModelProto, incorrect type: {}'.format(type(model)))\n \n model_str = model.SerializeToString()\n if fixed_point:\n", "issue": "Proposal: Op Registry\nOnce we start getting into more complicated passes having knowledge about operators becomes incredibly useful. What I propose is to maintain a registry of operators mapping to a set of descriptors. \r\n\r\nFor example let's take a look at the [eliminate_nop_monotone_argmax](https://github.com/onnx/onnx/blob/master/onnx/optimizer/passes/eliminate_nop_monotone_argmax.h) pass. Within the pass we have to maintain an unordered set of ops that are monotone. The correct design would be to have a registry which we can ask whether or not if a certain op is monotone. Once such a registry is implemented, passes can be generalized through descriptors, therefore as we add new ops and their respective descriptors our passes naturally scale out.\r\n\r\nSome possible descriptors we could have are:\r\n- monotonic\r\n- elementwise independent (op works on an individual element without knowledge of other elements, such as log, but not softmax since it requires calculating a sum over an axis)\r\n- elementwise dependent (softmax, logsoftmax, softmin, softplus)\r\n- reduction\r\n- shape transform (e.g. 
transpose, view)\r\n\r\n@houseroad @bddppq What do you guys think?\n", "before_files": [{"content": "# ATTENTION: The code in this file is highly EXPERIMENTAL.\n# Adventurous users should note that the APIs will probably change.\n\n\"\"\"onnx optimizer\n\nThis enables users to optimize their models.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport onnx\nimport onnx.onnx_cpp2py_export.optimizer as C\nfrom onnx import ModelProto\nfrom typing import Text, Sequence, Optional\n\n\"\"\"Apply the optimization on the serialized ModelProto.\n\nArguments:\n input (ModelProto): model\n names (list of string): list of optimization names\n\nReturn:\n return (ModelProto) optimized model\n\nSupported pass names:\n -- nop\n -- eliminate_identity\n -- eliminate_nop_transpose\n -- eliminate_nop_pad\n -- eliminate_unused_initializer\n -- fuse_consecutive_squeezes\n -- fuse_consecutive_transposes\n -- fuse_add_bias_into_conv\n -- fuse_transpose_into_gemm\n\"\"\"\n\nget_available_passes = C.get_available_passes\n\n\ndef optimize(model, passes=None, fixed_point=False): # type: (ModelProto, Optional[Sequence[Text]], bool) -> ModelProto\n if passes is None:\n passes = ['eliminate_nop_transpose',\n 'eliminate_nop_pad',\n 'fuse_consecutive_transposes',\n 'fuse_transpose_into_gemm']\n if not isinstance(model, ModelProto):\n raise ValueError('Optimizer only accepts ModelProto, incorrect type: {}'.format(type(model)))\n\n model_str = model.SerializeToString()\n if fixed_point:\n optimized_model_str = C.optimize_fixedpoint(model_str, passes)\n else:\n optimized_model_str = C.optimize(model_str, passes)\n\n return onnx.load_from_string(optimized_model_str)\n", "path": "onnx/optimizer.py"}], "after_files": [{"content": "# ATTENTION: The code in this file is highly EXPERIMENTAL.\n# Adventurous users should note that the APIs will probably change.\n\n\"\"\"onnx optimizer\n\nThis enables users to optimize their models.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport onnx\nimport onnx.onnx_cpp2py_export.optimizer as C\nfrom onnx import ModelProto\nfrom typing import Text, Sequence, Optional\n\n\"\"\"Apply the optimization on the serialized ModelProto.\n\nArguments:\n input (ModelProto): model\n names (list of string): list of optimization names\n\nReturn:\n return (ModelProto) optimized model\n\nSupported pass names:\n -- nop\n -- eliminate_deadend\n -- eliminate_identity\n -- eliminate_nop_dropout\n -- eliminate_nop_monotone_argmax\n -- eliminate_nop_pad\n -- eliminate_nop_transpose\n -- eliminate_unused_initializer\n -- extract_constant_to_initializer\n -- fuse_add_bias_into_conv\n -- fuse_bn_into_conv\n -- fuse_consecutive_concats\n -- fuse_consecutive_log_softmax\n -- fuse_consecutive_reduce_unsqueeze\n -- fuse_consecutive_squeezes\n -- fuse_consecutive_transposes\n -- fuse_matmul_add_bias_into_gemm\n -- fuse_pad_into_conv\n -- fuse_transpose_into_gemm\n\"\"\"\n\nget_available_passes = C.get_available_passes\n\n\ndef optimize(model, passes=None, fixed_point=False): # type: (ModelProto, Optional[Sequence[Text]], bool) -> ModelProto\n if passes is None:\n passes = ['eliminate_nop_transpose',\n 'eliminate_nop_pad',\n 'fuse_consecutive_transposes',\n 'fuse_transpose_into_gemm']\n\n if not isinstance(model, ModelProto):\n raise ValueError(\n 'Optimizer only accepts ModelProto, 
incorrect type: {}'.format(type(model)))\n\n model_str = model.SerializeToString()\n if fixed_point:\n optimized_model_str = C.optimize_fixedpoint(model_str, passes)\n else:\n optimized_model_str = C.optimize(model_str, passes)\n\n return onnx.load_from_string(optimized_model_str)\n", "path": "onnx/optimizer.py"}]}
| 1,013 | 332 |
gh_patches_debug_37708 | rasdani/github-patches | git_diff | wagtail__wagtail-9196 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Adopt generic class based views for User management
| Part of the Epic https://github.com/wagtail/wagtail/issues/8365 |
|-----------------------------------------------|
### Is your proposal related to a problem?
* Wagtail 2.12 added generic class based views for use across Wagtail's admin.
* Many parts of the Wagtail admin are not yet migrated to this new approach, which makes it harder to share UI logic and styling/layouts.
* This issue specifically covers the user management (listing users, editing/deleting/creating users) views.
### Describe the solution you'd like
* Migrate the function views in [`wagtail/users/views/users.py`](https://github.com/wagtail/wagtail/blob/main/wagtail/users/views/users.py) to class based views.
* Ideally this would adopt the [Wagtail Model Viewset approach](https://github.com/wagtail/wagtail/blob/main/wagtail/admin/viewsets/model.py) as this would make the most sense; if not possible, using discrete model generic class based views would be ok.
* The templates for the views should also extend their generic model class view counterparts.
* See a reference implementation already adopted in the Groups (listing/editing etc) views. [`wagtail/users/views/groups.py`](https://github.com/wagtail/wagtail/blob/main/wagtail/users/views/groups.py)
### Describe alternatives you've considered
* Leave as is; however, this will make it harder to share UI elements and layout between similar listing views.
### Additional context
* This is part of a larger epic work to adopt class based views throughout wagtail - https://github.com/wagtail/wagtail/issues/8365
* Generic class based model views can be found in [`wagtail/admin/views/generic/models.py`](https://github.com/wagtail/wagtail/blob/main/wagtail/admin/views/generic/models.py)
* Generic class based model view templates can be found in [`wagtail/admin/templates/wagtailadmin/generic`](https://github.com/wagtail/wagtail/tree/main/wagtail/admin/templates/wagtailadmin/generic)
* A more recent set up of locales using the model viewset can be found in 7c86c4e14f8a1be15c519b88935c19cd563e43d9
#### Screenshots
##### Users Index
<img width="1448" alt="Screen Shot 2022-06-05 at 7 59 47 pm" src="https://user-images.githubusercontent.com/1396140/172045236-f5847f9c-794a-4e3e-b2ad-6ca85ccbefe0.png">
##### Create user
<img width="1172" alt="Screen Shot 2022-06-05 at 8 01 26 pm" src="https://user-images.githubusercontent.com/1396140/172045261-6b0af777-23c2-41d9-b7b2-c425b01f3ea4.png">
##### Delete user
<img width="1448" alt="Screen Shot 2022-06-05 at 8 00 07 pm" src="https://user-images.githubusercontent.com/1396140/172045295-db494e2d-6744-492a-b29e-e9d16b53571d.png">
##### Bulk Actions pages (note: may not be in scope)
<img width="1448" alt="Screen Shot 2022-06-05 at 8 00 27 pm" src="https://user-images.githubusercontent.com/1396140/172045327-9474d4bc-d65e-4f4f-aa55-beb9d1ef16bc.png">
<img width="1450" alt="Screen Shot 2022-06-05 at 8 00 16 pm" src="https://user-images.githubusercontent.com/1396140/172045330-b2fc7cd4-56aa-4538-9e18-767837ed4758.png">
<img width="1448" alt="Screen Shot 2022-06-05 at 8 00 44 pm" src="https://user-images.githubusercontent.com/1396140/172045323-113f842b-a52f-4ff3-a123-5228a7658a19.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/users/urls/users.py`
Content:
```
1 from django.urls import path
2
3 from wagtail.users.views import users
4
5 app_name = "wagtailusers_users"
6 urlpatterns = [
7 path("", users.Index.as_view(), name="index"),
8 path("add/", users.Create.as_view(), name="add"),
9 path("<str:user_id>/", users.edit, name="edit"),
10 path("<str:user_id>/delete/", users.delete, name="delete"),
11 ]
12
```
Path: `wagtail/users/views/users.py`
Content:
```
1 from django.conf import settings
2 from django.contrib.auth import get_user_model, update_session_auth_hash
3 from django.contrib.auth.models import Group
4 from django.core.exceptions import PermissionDenied
5 from django.db import transaction
6 from django.db.models import Q
7 from django.shortcuts import get_object_or_404, redirect
8 from django.template.response import TemplateResponse
9 from django.urls import reverse
10 from django.utils.translation import gettext as _
11 from django.utils.translation import gettext_lazy
12
13 from wagtail import hooks
14 from wagtail.admin import messages
15 from wagtail.admin.auth import permission_required
16 from wagtail.admin.views.generic import CreateView, IndexView
17 from wagtail.compat import AUTH_USER_APP_LABEL, AUTH_USER_MODEL_NAME
18 from wagtail.log_actions import log
19 from wagtail.permission_policies import ModelPermissionPolicy
20 from wagtail.users.forms import UserCreationForm, UserEditForm
21 from wagtail.users.utils import user_can_delete_user
22 from wagtail.utils.loading import get_custom_form
23
24 User = get_user_model()
25
26 # Typically we would check the permission 'auth.change_user' (and 'auth.add_user' /
27 # 'auth.delete_user') for user management actions, but this may vary according to
28 # the AUTH_USER_MODEL setting
29 add_user_perm = "{0}.add_{1}".format(AUTH_USER_APP_LABEL, AUTH_USER_MODEL_NAME.lower())
30 change_user_perm = "{0}.change_{1}".format(
31 AUTH_USER_APP_LABEL, AUTH_USER_MODEL_NAME.lower()
32 )
33 delete_user_perm = "{0}.delete_{1}".format(
34 AUTH_USER_APP_LABEL, AUTH_USER_MODEL_NAME.lower()
35 )
36
37
38 def get_user_creation_form():
39 form_setting = "WAGTAIL_USER_CREATION_FORM"
40 if hasattr(settings, form_setting):
41 return get_custom_form(form_setting)
42 else:
43 return UserCreationForm
44
45
46 def get_user_edit_form():
47 form_setting = "WAGTAIL_USER_EDIT_FORM"
48 if hasattr(settings, form_setting):
49 return get_custom_form(form_setting)
50 else:
51 return UserEditForm
52
53
54 def get_users_filter_query(q, model_fields):
55 conditions = Q()
56
57 for term in q.split():
58 if "username" in model_fields:
59 conditions |= Q(username__icontains=term)
60
61 if "first_name" in model_fields:
62 conditions |= Q(first_name__icontains=term)
63
64 if "last_name" in model_fields:
65 conditions |= Q(last_name__icontains=term)
66
67 if "email" in model_fields:
68 conditions |= Q(email__icontains=term)
69
70 return conditions
71
72
73 class Index(IndexView):
74 """
75 Lists the users for management within the admin.
76 """
77
78 any_permission_required = ["add", "change", "delete"]
79 permission_policy = ModelPermissionPolicy(User)
80 model = User
81 context_object_name = "users"
82 index_url_name = "wagtailusers_users:index"
83 add_url_name = "wagtailusers_users:add"
84 edit_url_name = "wagtailusers_users:edit"
85 default_ordering = "name"
86 paginate_by = 20
87 template_name = None
88 is_searchable = True
89 page_title = gettext_lazy("Users")
90
91 def setup(self, request, *args, **kwargs):
92 super().setup(request, *args, **kwargs)
93 setattr(self, "template_name", self.get_template())
94 self.group = get_object_or_404(Group, id=args[0]) if args else None
95 self.group_filter = Q(groups=self.group) if self.group else Q()
96 self.model_fields = [f.name for f in User._meta.get_fields()]
97
98 def get_valid_orderings(self):
99 return ["name", "username"]
100
101 def get_queryset(self):
102 if self.is_searching:
103 conditions = get_users_filter_query(self.search_query, self.model_fields)
104 users = User.objects.filter(self.group_filter & conditions)
105 else:
106 users = User.objects.filter(self.group_filter)
107
108 if self.locale:
109 users = users.filter(locale=self.locale)
110
111 if "last_name" in self.model_fields and "first_name" in self.model_fields:
112 users = users.order_by("last_name", "first_name")
113
114 if self.get_ordering() == "username":
115 users = users.order_by(User.USERNAME_FIELD)
116
117 return users
118
119 def get_template(self):
120 if self.request.headers.get("x-requested-with") == "XMLHttpRequest":
121 return "wagtailusers/users/results.html"
122 else:
123 return "wagtailusers/users/index.html"
124
125 def get_context_data(self, *args, object_list=None, **kwargs):
126 context_data = super().get_context_data(
127 *args, object_list=object_list, **kwargs
128 )
129 context_data["ordering"] = self.get_ordering()
130 context_data["group"] = self.group
131 if self.request.headers.get("x-requested-with") == "XMLHttpRequest":
132 return context_data
133
134 context_data.update(
135 {
136 "app_label": User._meta.app_label,
137 "model_name": User._meta.model_name,
138 }
139 )
140 return context_data
141
142
143 class Create(CreateView):
144 """
145 Provide the ability to create a user within the admin.
146 """
147
148 permission_policy = ModelPermissionPolicy(User)
149 permission_required = "add"
150 form_class = get_user_creation_form()
151 template_name = "wagtailusers/users/create.html"
152 add_url_name = "wagtailusers_users:add"
153 index_url_name = "wagtailusers_users:index"
154 edit_url_name = "wagtailusers_users:edit"
155 success_message = "User '{0}' created."
156 page_title = gettext_lazy("Add user")
157
158 def run_before_hook(self):
159 return self.run_hook(
160 "before_create_user",
161 self.request,
162 )
163
164 def run_after_hook(self):
165 return self.run_hook(
166 "after_create_user",
167 self.request,
168 self.object,
169 )
170
171 def get_add_url(self):
172 return None
173
174 def get_success_buttons(self):
175 return [
176 messages.button(
177 reverse(self.edit_url_name, args=(self.object.pk,)), _("Edit")
178 )
179 ]
180
181
182 @permission_required(change_user_perm)
183 def edit(request, user_id):
184 user = get_object_or_404(User, pk=user_id)
185 can_delete = user_can_delete_user(request.user, user)
186 editing_self = request.user == user
187
188 for fn in hooks.get_hooks("before_edit_user"):
189 result = fn(request, user)
190 if hasattr(result, "status_code"):
191 return result
192 if request.method == "POST":
193 form = get_user_edit_form()(
194 request.POST, request.FILES, instance=user, editing_self=editing_self
195 )
196 if form.is_valid():
197 with transaction.atomic():
198 user = form.save()
199 log(user, "wagtail.edit")
200
201 if user == request.user and "password1" in form.changed_data:
202 # User is changing their own password; need to update their session hash
203 update_session_auth_hash(request, user)
204
205 messages.success(
206 request,
207 _("User '{0}' updated.").format(user),
208 buttons=[
209 messages.button(
210 reverse("wagtailusers_users:edit", args=(user.pk,)), _("Edit")
211 )
212 ],
213 )
214 for fn in hooks.get_hooks("after_edit_user"):
215 result = fn(request, user)
216 if hasattr(result, "status_code"):
217 return result
218 return redirect("wagtailusers_users:index")
219 else:
220 messages.error(request, _("The user could not be saved due to errors."))
221 else:
222 form = get_user_edit_form()(instance=user, editing_self=editing_self)
223
224 return TemplateResponse(
225 request,
226 "wagtailusers/users/edit.html",
227 {
228 "user": user,
229 "form": form,
230 "can_delete": can_delete,
231 },
232 )
233
234
235 @permission_required(delete_user_perm)
236 def delete(request, user_id):
237 user = get_object_or_404(User, pk=user_id)
238
239 if not user_can_delete_user(request.user, user):
240 raise PermissionDenied
241
242 for fn in hooks.get_hooks("before_delete_user"):
243 result = fn(request, user)
244 if hasattr(result, "status_code"):
245 return result
246 if request.method == "POST":
247 with transaction.atomic():
248 log(user, "wagtail.delete")
249 user.delete()
250 messages.success(request, _("User '{0}' deleted.").format(user))
251 for fn in hooks.get_hooks("after_delete_user"):
252 result = fn(request, user)
253 if hasattr(result, "status_code"):
254 return result
255 return redirect("wagtailusers_users:index")
256
257 return TemplateResponse(
258 request,
259 "wagtailusers/users/confirm_delete.html",
260 {
261 "user": user,
262 },
263 )
264
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wagtail/users/urls/users.py b/wagtail/users/urls/users.py
--- a/wagtail/users/urls/users.py
+++ b/wagtail/users/urls/users.py
@@ -7,5 +7,5 @@
path("", users.Index.as_view(), name="index"),
path("add/", users.Create.as_view(), name="add"),
path("<str:user_id>/", users.edit, name="edit"),
- path("<str:user_id>/delete/", users.delete, name="delete"),
+ path("<str:pk>/delete/", users.Delete.as_view(), name="delete"),
]
diff --git a/wagtail/users/views/users.py b/wagtail/users/views/users.py
--- a/wagtail/users/views/users.py
+++ b/wagtail/users/views/users.py
@@ -13,7 +13,7 @@
from wagtail import hooks
from wagtail.admin import messages
from wagtail.admin.auth import permission_required
-from wagtail.admin.views.generic import CreateView, IndexView
+from wagtail.admin.views.generic import CreateView, DeleteView, IndexView
from wagtail.compat import AUTH_USER_APP_LABEL, AUTH_USER_MODEL_NAME
from wagtail.log_actions import log
from wagtail.permission_policies import ModelPermissionPolicy
@@ -232,32 +232,37 @@
)
-@permission_required(delete_user_perm)
-def delete(request, user_id):
- user = get_object_or_404(User, pk=user_id)
+class Delete(DeleteView):
+ """
+ Provide the ability to delete a user within the admin.
+ """
- if not user_can_delete_user(request.user, user):
- raise PermissionDenied
+ permission_policy = ModelPermissionPolicy(User)
+ permission_required = "delete"
+ model = User
+ template_name = "wagtailusers/users/confirm_delete.html"
+ delete_url_name = "wagtailusers_users:delete"
+ index_url_name = "wagtailusers_users:index"
+ page_title = gettext_lazy("Delete user")
+ context_object_name = "user"
+ success_message = _("User '{0}' deleted.")
- for fn in hooks.get_hooks("before_delete_user"):
- result = fn(request, user)
- if hasattr(result, "status_code"):
- return result
- if request.method == "POST":
- with transaction.atomic():
- log(user, "wagtail.delete")
- user.delete()
- messages.success(request, _("User '{0}' deleted.").format(user))
- for fn in hooks.get_hooks("after_delete_user"):
- result = fn(request, user)
- if hasattr(result, "status_code"):
- return result
- return redirect("wagtailusers_users:index")
+ def dispatch(self, request, *args, **kwargs):
+ self.object = self.get_object()
+ if not user_can_delete_user(self.request.user, self.object):
+ raise PermissionDenied
+ return super().dispatch(request, *args, **kwargs)
- return TemplateResponse(
- request,
- "wagtailusers/users/confirm_delete.html",
- {
- "user": user,
- },
- )
+ def run_before_hook(self):
+ return self.run_hook(
+ "before_delete_user",
+ self.request,
+ self.object,
+ )
+
+ def run_after_hook(self):
+ return self.run_hook(
+ "after_delete_user",
+ self.request,
+ self.object,
+ )
|
{"golden_diff": "diff --git a/wagtail/users/urls/users.py b/wagtail/users/urls/users.py\n--- a/wagtail/users/urls/users.py\n+++ b/wagtail/users/urls/users.py\n@@ -7,5 +7,5 @@\n path(\"\", users.Index.as_view(), name=\"index\"),\n path(\"add/\", users.Create.as_view(), name=\"add\"),\n path(\"<str:user_id>/\", users.edit, name=\"edit\"),\n- path(\"<str:user_id>/delete/\", users.delete, name=\"delete\"),\n+ path(\"<str:pk>/delete/\", users.Delete.as_view(), name=\"delete\"),\n ]\ndiff --git a/wagtail/users/views/users.py b/wagtail/users/views/users.py\n--- a/wagtail/users/views/users.py\n+++ b/wagtail/users/views/users.py\n@@ -13,7 +13,7 @@\n from wagtail import hooks\n from wagtail.admin import messages\n from wagtail.admin.auth import permission_required\n-from wagtail.admin.views.generic import CreateView, IndexView\n+from wagtail.admin.views.generic import CreateView, DeleteView, IndexView\n from wagtail.compat import AUTH_USER_APP_LABEL, AUTH_USER_MODEL_NAME\n from wagtail.log_actions import log\n from wagtail.permission_policies import ModelPermissionPolicy\n@@ -232,32 +232,37 @@\n )\n \n \n-@permission_required(delete_user_perm)\n-def delete(request, user_id):\n- user = get_object_or_404(User, pk=user_id)\n+class Delete(DeleteView):\n+ \"\"\"\n+ Provide the ability to delete a user within the admin.\n+ \"\"\"\n \n- if not user_can_delete_user(request.user, user):\n- raise PermissionDenied\n+ permission_policy = ModelPermissionPolicy(User)\n+ permission_required = \"delete\"\n+ model = User\n+ template_name = \"wagtailusers/users/confirm_delete.html\"\n+ delete_url_name = \"wagtailusers_users:delete\"\n+ index_url_name = \"wagtailusers_users:index\"\n+ page_title = gettext_lazy(\"Delete user\")\n+ context_object_name = \"user\"\n+ success_message = _(\"User '{0}' deleted.\")\n \n- for fn in hooks.get_hooks(\"before_delete_user\"):\n- result = fn(request, user)\n- if hasattr(result, \"status_code\"):\n- return result\n- if request.method == \"POST\":\n- with transaction.atomic():\n- log(user, \"wagtail.delete\")\n- user.delete()\n- messages.success(request, _(\"User '{0}' deleted.\").format(user))\n- for fn in hooks.get_hooks(\"after_delete_user\"):\n- result = fn(request, user)\n- if hasattr(result, \"status_code\"):\n- return result\n- return redirect(\"wagtailusers_users:index\")\n+ def dispatch(self, request, *args, **kwargs):\n+ self.object = self.get_object()\n+ if not user_can_delete_user(self.request.user, self.object):\n+ raise PermissionDenied\n+ return super().dispatch(request, *args, **kwargs)\n \n- return TemplateResponse(\n- request,\n- \"wagtailusers/users/confirm_delete.html\",\n- {\n- \"user\": user,\n- },\n- )\n+ def run_before_hook(self):\n+ return self.run_hook(\n+ \"before_delete_user\",\n+ self.request,\n+ self.object,\n+ )\n+\n+ def run_after_hook(self):\n+ return self.run_hook(\n+ \"after_delete_user\",\n+ self.request,\n+ self.object,\n+ )\n", "issue": "Adopt generic class based views for User management\n| Part of the Epic https://github.com/wagtail/wagtail/issues/8365 |\r\n|-----------------------------------------------|\r\n\r\n### Is your proposal related to a problem?\r\n\r\n* Wagtail 2.12 added generic class based views for use across Wagtail's admin.\r\n* Many parts of the Wagtail admin are not yet migrated to this new approach, which makes it harder to share UI logic and styling/layouts.\r\n* This issue specifically covers the user management (listing users, editing/deleting/creating users) views.\r\n\r\n### Describe the solution you'd like\r\n\r\n* 
Migrate the function views in [`wagtail/users/views/users.py`](https://github.com/wagtail/wagtail/blob/main/wagtail/users/views/users.py) to class based views.\r\n* Ideally this would adopt the [Wagtail Model Viewset approach](https://github.com/wagtail/wagtail/blob/main/wagtail/admin/viewsets/model.py) as this would make the most sense, if not possible using discrete model generic class based views would be ok.\r\n* The templates for the views should also extend their generic model class view counterparts.\r\n* See a reference implementation already adopted in the Groups (listing/editing etc) views. [`wagtail/users/views/groups.py`](https://github.com/wagtail/wagtail/blob/main/wagtail/users/views/groups.py)\r\n\r\n### Describe alternatives you've considered\r\n\r\n* Leave as is, however this will make it harder to share UI elements and layout between similar listing views.\r\n\r\n### Additional context\r\n\r\n* This is part of a larger epic work to adopt class based views throughout wagtail - https://github.com/wagtail/wagtail/issues/8365\r\n* Generic class based model views can be found in [`wagtail/admin/views/generic/models.py`](https://github.com/wagtail/wagtail/blob/main/wagtail/admin/views/generic/models.py)\r\n* Generic class based model view templates can be found in [`wagtail/admin/templates/wagtailadmin/generic`](https://github.com/wagtail/wagtail/tree/main/wagtail/admin/templates/wagtailadmin/generic)\r\n* A more recent set up of locales using the model viewset can be found in 7c86c4e14f8a1be15c519b88935c19cd563e43d9\r\n\r\n#### Screenshots\r\n\r\n##### Users Index\r\n\r\n<img width=\"1448\" alt=\"Screen Shot 2022-06-05 at 7 59 47 pm\" src=\"https://user-images.githubusercontent.com/1396140/172045236-f5847f9c-794a-4e3e-b2ad-6ca85ccbefe0.png\">\r\n\r\n##### Create user\r\n\r\n<img width=\"1172\" alt=\"Screen Shot 2022-06-05 at 8 01 26 pm\" src=\"https://user-images.githubusercontent.com/1396140/172045261-6b0af777-23c2-41d9-b7b2-c425b01f3ea4.png\">\r\n\r\n##### Delete user\r\n\r\n<img width=\"1448\" alt=\"Screen Shot 2022-06-05 at 8 00 07 pm\" src=\"https://user-images.githubusercontent.com/1396140/172045295-db494e2d-6744-492a-b29e-e9d16b53571d.png\">\r\n\r\n##### Bulk Actions pages (note: may not be in scope)\r\n\r\n<img width=\"1448\" alt=\"Screen Shot 2022-06-05 at 8 00 27 pm\" src=\"https://user-images.githubusercontent.com/1396140/172045327-9474d4bc-d65e-4f4f-aa55-beb9d1ef16bc.png\">\r\n\r\n<img width=\"1450\" alt=\"Screen Shot 2022-06-05 at 8 00 16 pm\" src=\"https://user-images.githubusercontent.com/1396140/172045330-b2fc7cd4-56aa-4538-9e18-767837ed4758.png\">\r\n\r\n<img width=\"1448\" alt=\"Screen Shot 2022-06-05 at 8 00 44 pm\" src=\"https://user-images.githubusercontent.com/1396140/172045323-113f842b-a52f-4ff3-a123-5228a7658a19.png\">\r\n\r\n\n", "before_files": [{"content": "from django.urls import path\n\nfrom wagtail.users.views import users\n\napp_name = \"wagtailusers_users\"\nurlpatterns = [\n path(\"\", users.Index.as_view(), name=\"index\"),\n path(\"add/\", users.Create.as_view(), name=\"add\"),\n path(\"<str:user_id>/\", users.edit, name=\"edit\"),\n path(\"<str:user_id>/delete/\", users.delete, name=\"delete\"),\n]\n", "path": "wagtail/users/urls/users.py"}, {"content": "from django.conf import settings\nfrom django.contrib.auth import get_user_model, update_session_auth_hash\nfrom django.contrib.auth.models import Group\nfrom django.core.exceptions import PermissionDenied\nfrom django.db import transaction\nfrom django.db.models import Q\nfrom 
django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\nfrom django.utils.translation import gettext as _\nfrom django.utils.translation import gettext_lazy\n\nfrom wagtail import hooks\nfrom wagtail.admin import messages\nfrom wagtail.admin.auth import permission_required\nfrom wagtail.admin.views.generic import CreateView, IndexView\nfrom wagtail.compat import AUTH_USER_APP_LABEL, AUTH_USER_MODEL_NAME\nfrom wagtail.log_actions import log\nfrom wagtail.permission_policies import ModelPermissionPolicy\nfrom wagtail.users.forms import UserCreationForm, UserEditForm\nfrom wagtail.users.utils import user_can_delete_user\nfrom wagtail.utils.loading import get_custom_form\n\nUser = get_user_model()\n\n# Typically we would check the permission 'auth.change_user' (and 'auth.add_user' /\n# 'auth.delete_user') for user management actions, but this may vary according to\n# the AUTH_USER_MODEL setting\nadd_user_perm = \"{0}.add_{1}\".format(AUTH_USER_APP_LABEL, AUTH_USER_MODEL_NAME.lower())\nchange_user_perm = \"{0}.change_{1}\".format(\n AUTH_USER_APP_LABEL, AUTH_USER_MODEL_NAME.lower()\n)\ndelete_user_perm = \"{0}.delete_{1}\".format(\n AUTH_USER_APP_LABEL, AUTH_USER_MODEL_NAME.lower()\n)\n\n\ndef get_user_creation_form():\n form_setting = \"WAGTAIL_USER_CREATION_FORM\"\n if hasattr(settings, form_setting):\n return get_custom_form(form_setting)\n else:\n return UserCreationForm\n\n\ndef get_user_edit_form():\n form_setting = \"WAGTAIL_USER_EDIT_FORM\"\n if hasattr(settings, form_setting):\n return get_custom_form(form_setting)\n else:\n return UserEditForm\n\n\ndef get_users_filter_query(q, model_fields):\n conditions = Q()\n\n for term in q.split():\n if \"username\" in model_fields:\n conditions |= Q(username__icontains=term)\n\n if \"first_name\" in model_fields:\n conditions |= Q(first_name__icontains=term)\n\n if \"last_name\" in model_fields:\n conditions |= Q(last_name__icontains=term)\n\n if \"email\" in model_fields:\n conditions |= Q(email__icontains=term)\n\n return conditions\n\n\nclass Index(IndexView):\n \"\"\"\n Lists the users for management within the admin.\n \"\"\"\n\n any_permission_required = [\"add\", \"change\", \"delete\"]\n permission_policy = ModelPermissionPolicy(User)\n model = User\n context_object_name = \"users\"\n index_url_name = \"wagtailusers_users:index\"\n add_url_name = \"wagtailusers_users:add\"\n edit_url_name = \"wagtailusers_users:edit\"\n default_ordering = \"name\"\n paginate_by = 20\n template_name = None\n is_searchable = True\n page_title = gettext_lazy(\"Users\")\n\n def setup(self, request, *args, **kwargs):\n super().setup(request, *args, **kwargs)\n setattr(self, \"template_name\", self.get_template())\n self.group = get_object_or_404(Group, id=args[0]) if args else None\n self.group_filter = Q(groups=self.group) if self.group else Q()\n self.model_fields = [f.name for f in User._meta.get_fields()]\n\n def get_valid_orderings(self):\n return [\"name\", \"username\"]\n\n def get_queryset(self):\n if self.is_searching:\n conditions = get_users_filter_query(self.search_query, self.model_fields)\n users = User.objects.filter(self.group_filter & conditions)\n else:\n users = User.objects.filter(self.group_filter)\n\n if self.locale:\n users = users.filter(locale=self.locale)\n\n if \"last_name\" in self.model_fields and \"first_name\" in self.model_fields:\n users = users.order_by(\"last_name\", \"first_name\")\n\n if self.get_ordering() == \"username\":\n users = 
users.order_by(User.USERNAME_FIELD)\n\n return users\n\n def get_template(self):\n if self.request.headers.get(\"x-requested-with\") == \"XMLHttpRequest\":\n return \"wagtailusers/users/results.html\"\n else:\n return \"wagtailusers/users/index.html\"\n\n def get_context_data(self, *args, object_list=None, **kwargs):\n context_data = super().get_context_data(\n *args, object_list=object_list, **kwargs\n )\n context_data[\"ordering\"] = self.get_ordering()\n context_data[\"group\"] = self.group\n if self.request.headers.get(\"x-requested-with\") == \"XMLHttpRequest\":\n return context_data\n\n context_data.update(\n {\n \"app_label\": User._meta.app_label,\n \"model_name\": User._meta.model_name,\n }\n )\n return context_data\n\n\nclass Create(CreateView):\n \"\"\"\n Provide the ability to create a user within the admin.\n \"\"\"\n\n permission_policy = ModelPermissionPolicy(User)\n permission_required = \"add\"\n form_class = get_user_creation_form()\n template_name = \"wagtailusers/users/create.html\"\n add_url_name = \"wagtailusers_users:add\"\n index_url_name = \"wagtailusers_users:index\"\n edit_url_name = \"wagtailusers_users:edit\"\n success_message = \"User '{0}' created.\"\n page_title = gettext_lazy(\"Add user\")\n\n def run_before_hook(self):\n return self.run_hook(\n \"before_create_user\",\n self.request,\n )\n\n def run_after_hook(self):\n return self.run_hook(\n \"after_create_user\",\n self.request,\n self.object,\n )\n\n def get_add_url(self):\n return None\n\n def get_success_buttons(self):\n return [\n messages.button(\n reverse(self.edit_url_name, args=(self.object.pk,)), _(\"Edit\")\n )\n ]\n\n\n@permission_required(change_user_perm)\ndef edit(request, user_id):\n user = get_object_or_404(User, pk=user_id)\n can_delete = user_can_delete_user(request.user, user)\n editing_self = request.user == user\n\n for fn in hooks.get_hooks(\"before_edit_user\"):\n result = fn(request, user)\n if hasattr(result, \"status_code\"):\n return result\n if request.method == \"POST\":\n form = get_user_edit_form()(\n request.POST, request.FILES, instance=user, editing_self=editing_self\n )\n if form.is_valid():\n with transaction.atomic():\n user = form.save()\n log(user, \"wagtail.edit\")\n\n if user == request.user and \"password1\" in form.changed_data:\n # User is changing their own password; need to update their session hash\n update_session_auth_hash(request, user)\n\n messages.success(\n request,\n _(\"User '{0}' updated.\").format(user),\n buttons=[\n messages.button(\n reverse(\"wagtailusers_users:edit\", args=(user.pk,)), _(\"Edit\")\n )\n ],\n )\n for fn in hooks.get_hooks(\"after_edit_user\"):\n result = fn(request, user)\n if hasattr(result, \"status_code\"):\n return result\n return redirect(\"wagtailusers_users:index\")\n else:\n messages.error(request, _(\"The user could not be saved due to errors.\"))\n else:\n form = get_user_edit_form()(instance=user, editing_self=editing_self)\n\n return TemplateResponse(\n request,\n \"wagtailusers/users/edit.html\",\n {\n \"user\": user,\n \"form\": form,\n \"can_delete\": can_delete,\n },\n )\n\n\n@permission_required(delete_user_perm)\ndef delete(request, user_id):\n user = get_object_or_404(User, pk=user_id)\n\n if not user_can_delete_user(request.user, user):\n raise PermissionDenied\n\n for fn in hooks.get_hooks(\"before_delete_user\"):\n result = fn(request, user)\n if hasattr(result, \"status_code\"):\n return result\n if request.method == \"POST\":\n with transaction.atomic():\n log(user, \"wagtail.delete\")\n user.delete()\n 
messages.success(request, _(\"User '{0}' deleted.\").format(user))\n for fn in hooks.get_hooks(\"after_delete_user\"):\n result = fn(request, user)\n if hasattr(result, \"status_code\"):\n return result\n return redirect(\"wagtailusers_users:index\")\n\n return TemplateResponse(\n request,\n \"wagtailusers/users/confirm_delete.html\",\n {\n \"user\": user,\n },\n )\n", "path": "wagtail/users/views/users.py"}], "after_files": [{"content": "from django.urls import path\n\nfrom wagtail.users.views import users\n\napp_name = \"wagtailusers_users\"\nurlpatterns = [\n path(\"\", users.Index.as_view(), name=\"index\"),\n path(\"add/\", users.Create.as_view(), name=\"add\"),\n path(\"<str:user_id>/\", users.edit, name=\"edit\"),\n path(\"<str:pk>/delete/\", users.Delete.as_view(), name=\"delete\"),\n]\n", "path": "wagtail/users/urls/users.py"}, {"content": "from django.conf import settings\nfrom django.contrib.auth import get_user_model, update_session_auth_hash\nfrom django.contrib.auth.models import Group\nfrom django.core.exceptions import PermissionDenied\nfrom django.db import transaction\nfrom django.db.models import Q\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\nfrom django.utils.translation import gettext as _\nfrom django.utils.translation import gettext_lazy\n\nfrom wagtail import hooks\nfrom wagtail.admin import messages\nfrom wagtail.admin.auth import permission_required\nfrom wagtail.admin.views.generic import CreateView, DeleteView, IndexView\nfrom wagtail.compat import AUTH_USER_APP_LABEL, AUTH_USER_MODEL_NAME\nfrom wagtail.log_actions import log\nfrom wagtail.permission_policies import ModelPermissionPolicy\nfrom wagtail.users.forms import UserCreationForm, UserEditForm\nfrom wagtail.users.utils import user_can_delete_user\nfrom wagtail.utils.loading import get_custom_form\n\nUser = get_user_model()\n\n# Typically we would check the permission 'auth.change_user' (and 'auth.add_user' /\n# 'auth.delete_user') for user management actions, but this may vary according to\n# the AUTH_USER_MODEL setting\nadd_user_perm = \"{0}.add_{1}\".format(AUTH_USER_APP_LABEL, AUTH_USER_MODEL_NAME.lower())\nchange_user_perm = \"{0}.change_{1}\".format(\n AUTH_USER_APP_LABEL, AUTH_USER_MODEL_NAME.lower()\n)\ndelete_user_perm = \"{0}.delete_{1}\".format(\n AUTH_USER_APP_LABEL, AUTH_USER_MODEL_NAME.lower()\n)\n\n\ndef get_user_creation_form():\n form_setting = \"WAGTAIL_USER_CREATION_FORM\"\n if hasattr(settings, form_setting):\n return get_custom_form(form_setting)\n else:\n return UserCreationForm\n\n\ndef get_user_edit_form():\n form_setting = \"WAGTAIL_USER_EDIT_FORM\"\n if hasattr(settings, form_setting):\n return get_custom_form(form_setting)\n else:\n return UserEditForm\n\n\ndef get_users_filter_query(q, model_fields):\n conditions = Q()\n\n for term in q.split():\n if \"username\" in model_fields:\n conditions |= Q(username__icontains=term)\n\n if \"first_name\" in model_fields:\n conditions |= Q(first_name__icontains=term)\n\n if \"last_name\" in model_fields:\n conditions |= Q(last_name__icontains=term)\n\n if \"email\" in model_fields:\n conditions |= Q(email__icontains=term)\n\n return conditions\n\n\nclass Index(IndexView):\n \"\"\"\n Lists the users for management within the admin.\n \"\"\"\n\n any_permission_required = [\"add\", \"change\", \"delete\"]\n permission_policy = ModelPermissionPolicy(User)\n model = User\n context_object_name = \"users\"\n index_url_name = 
\"wagtailusers_users:index\"\n add_url_name = \"wagtailusers_users:add\"\n edit_url_name = \"wagtailusers_users:edit\"\n default_ordering = \"name\"\n paginate_by = 20\n template_name = None\n is_searchable = True\n page_title = gettext_lazy(\"Users\")\n\n def setup(self, request, *args, **kwargs):\n super().setup(request, *args, **kwargs)\n setattr(self, \"template_name\", self.get_template())\n self.group = get_object_or_404(Group, id=args[0]) if args else None\n self.group_filter = Q(groups=self.group) if self.group else Q()\n self.model_fields = [f.name for f in User._meta.get_fields()]\n\n def get_valid_orderings(self):\n return [\"name\", \"username\"]\n\n def get_queryset(self):\n if self.is_searching:\n conditions = get_users_filter_query(self.search_query, self.model_fields)\n users = User.objects.filter(self.group_filter & conditions)\n else:\n users = User.objects.filter(self.group_filter)\n\n if self.locale:\n users = users.filter(locale=self.locale)\n\n if \"last_name\" in self.model_fields and \"first_name\" in self.model_fields:\n users = users.order_by(\"last_name\", \"first_name\")\n\n if self.get_ordering() == \"username\":\n users = users.order_by(User.USERNAME_FIELD)\n\n return users\n\n def get_template(self):\n if self.request.headers.get(\"x-requested-with\") == \"XMLHttpRequest\":\n return \"wagtailusers/users/results.html\"\n else:\n return \"wagtailusers/users/index.html\"\n\n def get_context_data(self, *args, object_list=None, **kwargs):\n context_data = super().get_context_data(\n *args, object_list=object_list, **kwargs\n )\n context_data[\"ordering\"] = self.get_ordering()\n context_data[\"group\"] = self.group\n if self.request.headers.get(\"x-requested-with\") == \"XMLHttpRequest\":\n return context_data\n\n context_data.update(\n {\n \"app_label\": User._meta.app_label,\n \"model_name\": User._meta.model_name,\n }\n )\n return context_data\n\n\nclass Create(CreateView):\n \"\"\"\n Provide the ability to create a user within the admin.\n \"\"\"\n\n permission_policy = ModelPermissionPolicy(User)\n permission_required = \"add\"\n form_class = get_user_creation_form()\n template_name = \"wagtailusers/users/create.html\"\n add_url_name = \"wagtailusers_users:add\"\n index_url_name = \"wagtailusers_users:index\"\n edit_url_name = \"wagtailusers_users:edit\"\n success_message = \"User '{0}' created.\"\n page_title = gettext_lazy(\"Add user\")\n\n def run_before_hook(self):\n return self.run_hook(\n \"before_create_user\",\n self.request,\n )\n\n def run_after_hook(self):\n return self.run_hook(\n \"after_create_user\",\n self.request,\n self.object,\n )\n\n def get_add_url(self):\n return None\n\n def get_success_buttons(self):\n return [\n messages.button(\n reverse(self.edit_url_name, args=(self.object.pk,)), _(\"Edit\")\n )\n ]\n\n\n@permission_required(change_user_perm)\ndef edit(request, user_id):\n user = get_object_or_404(User, pk=user_id)\n can_delete = user_can_delete_user(request.user, user)\n editing_self = request.user == user\n\n for fn in hooks.get_hooks(\"before_edit_user\"):\n result = fn(request, user)\n if hasattr(result, \"status_code\"):\n return result\n if request.method == \"POST\":\n form = get_user_edit_form()(\n request.POST, request.FILES, instance=user, editing_self=editing_self\n )\n if form.is_valid():\n with transaction.atomic():\n user = form.save()\n log(user, \"wagtail.edit\")\n\n if user == request.user and \"password1\" in form.changed_data:\n # User is changing their own password; need to update their session hash\n 
update_session_auth_hash(request, user)\n\n messages.success(\n request,\n _(\"User '{0}' updated.\").format(user),\n buttons=[\n messages.button(\n reverse(\"wagtailusers_users:edit\", args=(user.pk,)), _(\"Edit\")\n )\n ],\n )\n for fn in hooks.get_hooks(\"after_edit_user\"):\n result = fn(request, user)\n if hasattr(result, \"status_code\"):\n return result\n return redirect(\"wagtailusers_users:index\")\n else:\n messages.error(request, _(\"The user could not be saved due to errors.\"))\n else:\n form = get_user_edit_form()(instance=user, editing_self=editing_self)\n\n return TemplateResponse(\n request,\n \"wagtailusers/users/edit.html\",\n {\n \"user\": user,\n \"form\": form,\n \"can_delete\": can_delete,\n },\n )\n\n\nclass Delete(DeleteView):\n \"\"\"\n Provide the ability to delete a user within the admin.\n \"\"\"\n\n permission_policy = ModelPermissionPolicy(User)\n permission_required = \"delete\"\n model = User\n template_name = \"wagtailusers/users/confirm_delete.html\"\n delete_url_name = \"wagtailusers_users:delete\"\n index_url_name = \"wagtailusers_users:index\"\n page_title = gettext_lazy(\"Delete user\")\n context_object_name = \"user\"\n success_message = _(\"User '{0}' deleted.\")\n\n def dispatch(self, request, *args, **kwargs):\n self.object = self.get_object()\n if not user_can_delete_user(self.request.user, self.object):\n raise PermissionDenied\n return super().dispatch(request, *args, **kwargs)\n\n def run_before_hook(self):\n return self.run_hook(\n \"before_delete_user\",\n self.request,\n self.object,\n )\n\n def run_after_hook(self):\n return self.run_hook(\n \"after_delete_user\",\n self.request,\n self.object,\n )\n", "path": "wagtail/users/views/users.py"}]}
| 4,052 | 770 |
gh_patches_debug_21781
|
rasdani/github-patches
|
git_diff
|
cal-itp__benefits-344
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Refactor language files for multiple verifiers
## Background
We need message strings for multiple verifiers
## Tasks
* [x] Create a new [verifier fixture](https://github.com/cal-itp/benefits/blob/dev/fixtures/02_eligibilityverifier.json) from the existing one
* [x] With a new `pk` and `fields.name`
* [x] Adjust both to have different `msgid`, e.g. `eligibility.pages.confirm.dmv.*` and `eligibility.pages.confirm.mst.*`
* [x] Adjust the `django.po` files with new `msgid` and new sample content for the Courtesy Cards verifier for each language
* [x] Wire the above fixture to one of the [sample agencies](https://github.com/cal-itp/benefits/blob/dev/fixtures/04_transitagency.json)
## Testing
Once #322 is merged
* [x] Remove your local db and local compiled language files, then run `bin/init.sh`
* [x] Observe that the migrations run correctly, the fixture data is loaded, and the language files are compiled
* [x] Open the app locally and select the agency configured above with multiple verifiers
* [x] Observe the `eligibility:index` view with different verifier selections
* [x] Select one of the verifiers
* [x] Observe the verifier-specific `eligibility:start` view
* [x] Confirm verification of a test user
* [x] Go back to `eligibility:index` and select the other verifier
* [x] Observe the verifier-specific `eligibility:start` view
* [x] Confirm verification of a test user
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `benefits/eligibility/views.py`
Content:
```
1 """
2 The eligibility application: view definitions for the eligibility verification flow.
3 """
4 from django.contrib import messages
5 from django.shortcuts import redirect
6 from django.urls import reverse
7 from django.utils.decorators import decorator_from_middleware
8 from django.utils.translation import pgettext, gettext as _
9
10 from benefits.core import middleware, recaptcha, session, viewmodels
11 from benefits.core.models import EligibilityVerifier
12 from benefits.core.views import PageTemplateResponse, _index_image
13 from . import analytics, api, forms
14
15
16 @decorator_from_middleware(middleware.AgencySessionRequired)
17 def index(request):
18 """View handler for the eligibility verifier selection form."""
19
20 session.update(request, eligibility_types=[], origin=reverse("eligibility:index"))
21 agency = session.agency(request)
22
23 eligibility_start = reverse("eligibility:start")
24
25 page = viewmodels.Page(
26 title=_("eligibility.pages.index.title"),
27 content_title=_("eligibility.pages.index.content_title"),
28 forms=forms.EligibilityVerifierSelectionForm(agency=agency),
29 image=_index_image(),
30 )
31
32 if request.method == "POST":
33 form = forms.EligibilityVerifierSelectionForm(data=request.POST, agency=agency)
34
35 if form.is_valid():
36 verifier_id = form.cleaned_data.get("verifier")
37 verifier = EligibilityVerifier.objects.get(id=verifier_id)
38 session.update(request, verifier=verifier)
39
40 response = redirect(eligibility_start)
41 else:
42 # form was not valid, allow for correction/resubmission
43 page.forms = [form]
44 response = PageTemplateResponse(request, page)
45 else:
46 if agency.eligibility_verifiers.count() == 1:
47 verifier = agency.eligibility_verifiers.first()
48 session.update(request, verifier=verifier)
49 response = redirect(eligibility_start)
50 else:
51 response = PageTemplateResponse(request, page)
52
53 return response
54
55
56 @decorator_from_middleware(middleware.AgencySessionRequired)
57 @decorator_from_middleware(middleware.VerifierSessionRequired)
58 def start(request):
59 """View handler for the eligibility verification getting started screen."""
60
61 verifier = session.verifier(request)
62
63 page = viewmodels.Page(
64 title=_("eligibility.pages.start.title"),
65 content_title=_(verifier.start_content_title),
66 media=[
67 viewmodels.MediaItem(
68 icon=viewmodels.Icon("idcardcheck", pgettext("image alt text", "core.icons.idcardcheck")),
69 heading=_(verifier.start_item_name),
70 details=_(verifier.start_item_description),
71 ),
72 viewmodels.MediaItem(
73 icon=viewmodels.Icon("bankcardcheck", pgettext("image alt text", "core.icons.bankcardcheck")),
74 heading=_("eligibility.pages.start.items[1].title"),
75 details=_("eligibility.pages.start.items[1].text"),
76 ),
77 ],
78 paragraphs=[_(verifier.start_blurb)],
79 image=_index_image(),
80 button=viewmodels.Button.primary(text=_("eligibility.pages.start.button"), url=reverse("eligibility:confirm")),
81 )
82
83 return PageTemplateResponse(request, page)
84
85
86 @decorator_from_middleware(middleware.AgencySessionRequired)
87 @decorator_from_middleware(middleware.RateLimit)
88 @decorator_from_middleware(middleware.VerifierSessionRequired)
89 def confirm(request):
90 """View handler for the eligibility verification form."""
91
92 verifier = session.verifier(request)
93
94 page = viewmodels.Page(
95 title=_(verifier.form_title),
96 content_title=_(verifier.form_content_title),
97 paragraphs=[_(verifier.form_blurb)],
98 form=forms.EligibilityVerificationForm(auto_id=True, label_suffix="", verifier=verifier),
99 classes="text-lg-center",
100 )
101
102 if request.method == "POST":
103 analytics.started_eligibility(request)
104
105 form = forms.EligibilityVerificationForm(data=request.POST, verifier=verifier)
106 response = _verify(request, form)
107
108 if response is None:
109 # form was not valid, allow for correction/resubmission
110 analytics.returned_error(request, form.errors)
111 page.forms = [form]
112 response = PageTemplateResponse(request, page)
113 elif session.eligible(request):
114 eligibility = session.eligibility(request)
115 response = verified(request, [eligibility.name])
116 else:
117 response = PageTemplateResponse(request, page)
118
119 return response
120
121
122 def _verify(request, form):
123 """Helper calls the eligibility verification API with user input."""
124
125 if not form.is_valid():
126 if recaptcha.has_error(form):
127 messages.error(request, "Recaptcha failed. Please try again.")
128 return None
129
130 sub, name = form.cleaned_data.get("sub"), form.cleaned_data.get("name")
131
132 agency = session.agency(request)
133 verifier = session.verifier(request)
134 client = api.Client(agency, verifier)
135
136 response = client.verify(sub, name)
137
138 if response.error and any(response.error):
139 form.add_api_errors(response.error)
140 return None
141 elif any(response.eligibility):
142 return verified(request, response.eligibility)
143 else:
144 return unverified(request)
145
146
147 @decorator_from_middleware(middleware.AgencySessionRequired)
148 def verified(request, verified_types):
149 """View handler for the verified eligibility page."""
150
151 analytics.returned_success(request)
152
153 enrollment_index = reverse("enrollment:index")
154 session.update(request, eligibility_types=verified_types, origin=enrollment_index)
155
156 return redirect(enrollment_index)
157
158
159 @decorator_from_middleware(middleware.AgencySessionRequired)
160 def unverified(request):
161 """View handler for the unverified eligibility page."""
162
163 analytics.returned_fail(request)
164
165 # tel: link to agency phone number
166 agency = session.agency(request)
167 buttons = viewmodels.Button.agency_contact_links(agency)
168
169 page = viewmodels.Page(
170 title=_("eligibility.pages.unverified.title"),
171 content_title=_("eligibility.pages.unverified.content_title"),
172 icon=viewmodels.Icon("idcardquestion", pgettext("image alt text", "core.icons.idcardquestion")),
173 paragraphs=[_("eligibility.pages.unverified.p[0]"), _("eligibility.pages.unverified.p[1]")],
174 buttons=buttons,
175 classes="text-lg-center",
176 )
177
178 return PageTemplateResponse(request, page)
179
```
Path: `benefits/eligibility/forms.py`
Content:
```
1 """
2 The eligibility application: Form definition for the eligibility verification flow.
3 """
4 import logging
5
6 from django import forms
7 from django.utils.translation import gettext_lazy as _
8
9 from benefits.core import models, recaptcha, widgets
10
11
12 logger = logging.getLogger(__name__)
13
14
15 class EligibilityVerifierSelectionForm(forms.Form):
16 """Form to capture eligibility verifier selection."""
17
18 action_url = "eligibility:index"
19 method = "POST"
20
21 verifier = forms.ChoiceField(label="", widget=forms.RadioSelect)
22
23 submit_value = _("eligibility.pages.index.button")
24
25 def __init__(self, agency: models.TransitAgency, *args, **kwargs):
26 super().__init__(*args, **kwargs)
27 choices = [(v.id, v.selection_label) for v in agency.eligibility_verifiers.all()]
28 self.fields["verifier"].choices = choices
29
30
31 class EligibilityVerificationForm(forms.Form):
32 """Form to collect eligibility verification details."""
33
34 action_url = "eligibility:confirm"
35 method = "POST"
36
37 submit_value = _("eligibility.forms.confirm.submit")
38 submitting_value = _("eligibility.forms.confirm.submitting")
39
40 _error_messages = {
41 "invalid": _("eligibility.forms.confirm.errors.invalid"),
42 "missing": _("eligibility.forms.confirm.errors.missing"),
43 }
44
45 def __init__(self, verifier: models.EligibilityVerifier, *args, **kwargs):
46 super().__init__(*args, **kwargs)
47
48 sub_widget = widgets.FormControlTextInput(placeholder=verifier.form_sub_placeholder)
49 if verifier.form_sub_pattern:
50 sub_widget.attrs.update({"pattern": verifier.form_sub_pattern})
51
52 self.fields["sub"] = forms.CharField(label=_(verifier.form_sub_label), widget=sub_widget)
53
54 name_widget = widgets.FormControlTextInput(placeholder=verifier.form_name_placeholder)
55 if verifier.form_name_max_length:
56 name_widget.attrs.update({"maxlength": verifier.form_name_max_length})
57
58 self.fields["name"] = forms.CharField(label=_(verifier.form_name_label), widget=name_widget)
59
60 def add_api_errors(self, form_errors):
61 """Handle errors passed back from API server related to submitted form values."""
62
63 validation_errors = {
64 field: forms.ValidationError(self._error_messages.get(code, _("core.pages.error.title")), code=code)
65 for (field, code) in form_errors.items()
66 if field in self.fields
67 }
68
69 if len(validation_errors) > 0:
70 logger.warning("Form fields are invalid")
71
72 for (field, err) in validation_errors.items():
73 self.add_error(field, err)
74
75 def clean(self):
76 if not recaptcha.verify(self.data):
77 raise forms.ValidationError("reCAPTCHA failed")
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/benefits/eligibility/forms.py b/benefits/eligibility/forms.py
--- a/benefits/eligibility/forms.py
+++ b/benefits/eligibility/forms.py
@@ -20,11 +20,11 @@
verifier = forms.ChoiceField(label="", widget=forms.RadioSelect)
- submit_value = _("eligibility.pages.index.button")
+ submit_value = _("eligibility.buttons.continue")
def __init__(self, agency: models.TransitAgency, *args, **kwargs):
super().__init__(*args, **kwargs)
- choices = [(v.id, v.selection_label) for v in agency.eligibility_verifiers.all()]
+ choices = [(v.id, _(v.selection_label)) for v in agency.eligibility_verifiers.all()]
self.fields["verifier"].choices = choices
diff --git a/benefits/eligibility/views.py b/benefits/eligibility/views.py
--- a/benefits/eligibility/views.py
+++ b/benefits/eligibility/views.py
@@ -77,7 +77,7 @@
],
paragraphs=[_(verifier.start_blurb)],
image=_index_image(),
- button=viewmodels.Button.primary(text=_("eligibility.pages.start.button"), url=reverse("eligibility:confirm")),
+ button=viewmodels.Button.primary(text=_("eligibility.buttons.continue"), url=reverse("eligibility:confirm")),
)
return PageTemplateResponse(request, page)
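A brief illustration (with an assumed msgid — the real values live in the verifier fixtures and the `django.po` files) of why wrapping `v.selection_label` in `_()` matters: the label stored on the model is a message id, and only passing it through gettext at form-construction time, inside a configured Django project, yields the per-language text shown to the user:
```python
from django.utils.translation import gettext as _

# Hypothetical msgid; the actual value comes from the EligibilityVerifier fixture.
selection_label = "eligibility.pages.index.mst.label"

translated_choices = [(1, _(selection_label))]  # what the patched form builds
raw_choices = [(1, selection_label)]            # old behaviour: the raw msgid leaks into the UI
```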
|
{"golden_diff": "diff --git a/benefits/eligibility/forms.py b/benefits/eligibility/forms.py\n--- a/benefits/eligibility/forms.py\n+++ b/benefits/eligibility/forms.py\n@@ -20,11 +20,11 @@\n \n verifier = forms.ChoiceField(label=\"\", widget=forms.RadioSelect)\n \n- submit_value = _(\"eligibility.pages.index.button\")\n+ submit_value = _(\"eligibility.buttons.continue\")\n \n def __init__(self, agency: models.TransitAgency, *args, **kwargs):\n super().__init__(*args, **kwargs)\n- choices = [(v.id, v.selection_label) for v in agency.eligibility_verifiers.all()]\n+ choices = [(v.id, _(v.selection_label)) for v in agency.eligibility_verifiers.all()]\n self.fields[\"verifier\"].choices = choices\n \n \ndiff --git a/benefits/eligibility/views.py b/benefits/eligibility/views.py\n--- a/benefits/eligibility/views.py\n+++ b/benefits/eligibility/views.py\n@@ -77,7 +77,7 @@\n ],\n paragraphs=[_(verifier.start_blurb)],\n image=_index_image(),\n- button=viewmodels.Button.primary(text=_(\"eligibility.pages.start.button\"), url=reverse(\"eligibility:confirm\")),\n+ button=viewmodels.Button.primary(text=_(\"eligibility.buttons.continue\"), url=reverse(\"eligibility:confirm\")),\n )\n \n return PageTemplateResponse(request, page)\n", "issue": "Refactor language files for multiple verifiers\n## Background\r\n\r\nWe need message strings for multiple verifiers\r\n\r\n## Tasks\r\n\r\n* [x] Create a new [verifier fixture](https://github.com/cal-itp/benefits/blob/dev/fixtures/02_eligibilityverifier.json) from the existing one\r\n * [x] With a new `pk` and `fields.name`\r\n * [x] Adjust both to have different `msgid`, e.g. `eligibility.pages.confirm.dmv.*` and `eligibility.pages.confirm.mst.*`\r\n* [x] Adjust the `django.po` files with new `msgid` and new sample content for the Courtesy Cards verifier for each language\r\n* [x] Wire the above fixture to one of the [sample agencies](https://github.com/cal-itp/benefits/blob/dev/fixtures/04_transitagency.json)\r\n\r\n## Testing\r\n\r\nOnce #322 is merged\r\n\r\n* [x] Remove your local db and local compiled language files, then run `bin/init.sh`\r\n * [x] Observe that the migrations run correctly, the fixture data is loaded, and the language files are compiled\r\n* [x] Open the app locally and select the agency configured above with multiple verifiers\r\n * [x] Observe the `eligibility:index` view with different verifier selections\r\n* [x] Select one of the verifiers\r\n * [x] Observe the verifier-specific `eligibility:start` view\r\n * [x] Confirm verification of a test user\r\n* [x] Go back to `eligibility:index` and select the other verifier\r\n * [x] Observe the verifier-specific `eligibility:start` view\r\n * [x] Confirm verification of a test user\n", "before_files": [{"content": "\"\"\"\nThe eligibility application: view definitions for the eligibility verification flow.\n\"\"\"\nfrom django.contrib import messages\nfrom django.shortcuts import redirect\nfrom django.urls import reverse\nfrom django.utils.decorators import decorator_from_middleware\nfrom django.utils.translation import pgettext, gettext as _\n\nfrom benefits.core import middleware, recaptcha, session, viewmodels\nfrom benefits.core.models import EligibilityVerifier\nfrom benefits.core.views import PageTemplateResponse, _index_image\nfrom . 
import analytics, api, forms\n\n\n@decorator_from_middleware(middleware.AgencySessionRequired)\ndef index(request):\n \"\"\"View handler for the eligibility verifier selection form.\"\"\"\n\n session.update(request, eligibility_types=[], origin=reverse(\"eligibility:index\"))\n agency = session.agency(request)\n\n eligibility_start = reverse(\"eligibility:start\")\n\n page = viewmodels.Page(\n title=_(\"eligibility.pages.index.title\"),\n content_title=_(\"eligibility.pages.index.content_title\"),\n forms=forms.EligibilityVerifierSelectionForm(agency=agency),\n image=_index_image(),\n )\n\n if request.method == \"POST\":\n form = forms.EligibilityVerifierSelectionForm(data=request.POST, agency=agency)\n\n if form.is_valid():\n verifier_id = form.cleaned_data.get(\"verifier\")\n verifier = EligibilityVerifier.objects.get(id=verifier_id)\n session.update(request, verifier=verifier)\n\n response = redirect(eligibility_start)\n else:\n # form was not valid, allow for correction/resubmission\n page.forms = [form]\n response = PageTemplateResponse(request, page)\n else:\n if agency.eligibility_verifiers.count() == 1:\n verifier = agency.eligibility_verifiers.first()\n session.update(request, verifier=verifier)\n response = redirect(eligibility_start)\n else:\n response = PageTemplateResponse(request, page)\n\n return response\n\n\n@decorator_from_middleware(middleware.AgencySessionRequired)\n@decorator_from_middleware(middleware.VerifierSessionRequired)\ndef start(request):\n \"\"\"View handler for the eligibility verification getting started screen.\"\"\"\n\n verifier = session.verifier(request)\n\n page = viewmodels.Page(\n title=_(\"eligibility.pages.start.title\"),\n content_title=_(verifier.start_content_title),\n media=[\n viewmodels.MediaItem(\n icon=viewmodels.Icon(\"idcardcheck\", pgettext(\"image alt text\", \"core.icons.idcardcheck\")),\n heading=_(verifier.start_item_name),\n details=_(verifier.start_item_description),\n ),\n viewmodels.MediaItem(\n icon=viewmodels.Icon(\"bankcardcheck\", pgettext(\"image alt text\", \"core.icons.bankcardcheck\")),\n heading=_(\"eligibility.pages.start.items[1].title\"),\n details=_(\"eligibility.pages.start.items[1].text\"),\n ),\n ],\n paragraphs=[_(verifier.start_blurb)],\n image=_index_image(),\n button=viewmodels.Button.primary(text=_(\"eligibility.pages.start.button\"), url=reverse(\"eligibility:confirm\")),\n )\n\n return PageTemplateResponse(request, page)\n\n\n@decorator_from_middleware(middleware.AgencySessionRequired)\n@decorator_from_middleware(middleware.RateLimit)\n@decorator_from_middleware(middleware.VerifierSessionRequired)\ndef confirm(request):\n \"\"\"View handler for the eligibility verification form.\"\"\"\n\n verifier = session.verifier(request)\n\n page = viewmodels.Page(\n title=_(verifier.form_title),\n content_title=_(verifier.form_content_title),\n paragraphs=[_(verifier.form_blurb)],\n form=forms.EligibilityVerificationForm(auto_id=True, label_suffix=\"\", verifier=verifier),\n classes=\"text-lg-center\",\n )\n\n if request.method == \"POST\":\n analytics.started_eligibility(request)\n\n form = forms.EligibilityVerificationForm(data=request.POST, verifier=verifier)\n response = _verify(request, form)\n\n if response is None:\n # form was not valid, allow for correction/resubmission\n analytics.returned_error(request, form.errors)\n page.forms = [form]\n response = PageTemplateResponse(request, page)\n elif session.eligible(request):\n eligibility = session.eligibility(request)\n response = verified(request, 
[eligibility.name])\n else:\n response = PageTemplateResponse(request, page)\n\n return response\n\n\ndef _verify(request, form):\n \"\"\"Helper calls the eligibility verification API with user input.\"\"\"\n\n if not form.is_valid():\n if recaptcha.has_error(form):\n messages.error(request, \"Recaptcha failed. Please try again.\")\n return None\n\n sub, name = form.cleaned_data.get(\"sub\"), form.cleaned_data.get(\"name\")\n\n agency = session.agency(request)\n verifier = session.verifier(request)\n client = api.Client(agency, verifier)\n\n response = client.verify(sub, name)\n\n if response.error and any(response.error):\n form.add_api_errors(response.error)\n return None\n elif any(response.eligibility):\n return verified(request, response.eligibility)\n else:\n return unverified(request)\n\n\n@decorator_from_middleware(middleware.AgencySessionRequired)\ndef verified(request, verified_types):\n \"\"\"View handler for the verified eligibility page.\"\"\"\n\n analytics.returned_success(request)\n\n enrollment_index = reverse(\"enrollment:index\")\n session.update(request, eligibility_types=verified_types, origin=enrollment_index)\n\n return redirect(enrollment_index)\n\n\n@decorator_from_middleware(middleware.AgencySessionRequired)\ndef unverified(request):\n \"\"\"View handler for the unverified eligibility page.\"\"\"\n\n analytics.returned_fail(request)\n\n # tel: link to agency phone number\n agency = session.agency(request)\n buttons = viewmodels.Button.agency_contact_links(agency)\n\n page = viewmodels.Page(\n title=_(\"eligibility.pages.unverified.title\"),\n content_title=_(\"eligibility.pages.unverified.content_title\"),\n icon=viewmodels.Icon(\"idcardquestion\", pgettext(\"image alt text\", \"core.icons.idcardquestion\")),\n paragraphs=[_(\"eligibility.pages.unverified.p[0]\"), _(\"eligibility.pages.unverified.p[1]\")],\n buttons=buttons,\n classes=\"text-lg-center\",\n )\n\n return PageTemplateResponse(request, page)\n", "path": "benefits/eligibility/views.py"}, {"content": "\"\"\"\nThe eligibility application: Form definition for the eligibility verification flow.\n\"\"\"\nimport logging\n\nfrom django import forms\nfrom django.utils.translation import gettext_lazy as _\n\nfrom benefits.core import models, recaptcha, widgets\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass EligibilityVerifierSelectionForm(forms.Form):\n \"\"\"Form to capture eligibility verifier selection.\"\"\"\n\n action_url = \"eligibility:index\"\n method = \"POST\"\n\n verifier = forms.ChoiceField(label=\"\", widget=forms.RadioSelect)\n\n submit_value = _(\"eligibility.pages.index.button\")\n\n def __init__(self, agency: models.TransitAgency, *args, **kwargs):\n super().__init__(*args, **kwargs)\n choices = [(v.id, v.selection_label) for v in agency.eligibility_verifiers.all()]\n self.fields[\"verifier\"].choices = choices\n\n\nclass EligibilityVerificationForm(forms.Form):\n \"\"\"Form to collect eligibility verification details.\"\"\"\n\n action_url = \"eligibility:confirm\"\n method = \"POST\"\n\n submit_value = _(\"eligibility.forms.confirm.submit\")\n submitting_value = _(\"eligibility.forms.confirm.submitting\")\n\n _error_messages = {\n \"invalid\": _(\"eligibility.forms.confirm.errors.invalid\"),\n \"missing\": _(\"eligibility.forms.confirm.errors.missing\"),\n }\n\n def __init__(self, verifier: models.EligibilityVerifier, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n sub_widget = widgets.FormControlTextInput(placeholder=verifier.form_sub_placeholder)\n if 
verifier.form_sub_pattern:\n sub_widget.attrs.update({\"pattern\": verifier.form_sub_pattern})\n\n self.fields[\"sub\"] = forms.CharField(label=_(verifier.form_sub_label), widget=sub_widget)\n\n name_widget = widgets.FormControlTextInput(placeholder=verifier.form_name_placeholder)\n if verifier.form_name_max_length:\n name_widget.attrs.update({\"maxlength\": verifier.form_name_max_length})\n\n self.fields[\"name\"] = forms.CharField(label=_(verifier.form_name_label), widget=name_widget)\n\n def add_api_errors(self, form_errors):\n \"\"\"Handle errors passed back from API server related to submitted form values.\"\"\"\n\n validation_errors = {\n field: forms.ValidationError(self._error_messages.get(code, _(\"core.pages.error.title\")), code=code)\n for (field, code) in form_errors.items()\n if field in self.fields\n }\n\n if len(validation_errors) > 0:\n logger.warning(\"Form fields are invalid\")\n\n for (field, err) in validation_errors.items():\n self.add_error(field, err)\n\n def clean(self):\n if not recaptcha.verify(self.data):\n raise forms.ValidationError(\"reCAPTCHA failed\")\n", "path": "benefits/eligibility/forms.py"}], "after_files": [{"content": "\"\"\"\nThe eligibility application: view definitions for the eligibility verification flow.\n\"\"\"\nfrom django.contrib import messages\nfrom django.shortcuts import redirect\nfrom django.urls import reverse\nfrom django.utils.decorators import decorator_from_middleware\nfrom django.utils.translation import pgettext, gettext as _\n\nfrom benefits.core import middleware, recaptcha, session, viewmodels\nfrom benefits.core.models import EligibilityVerifier\nfrom benefits.core.views import PageTemplateResponse, _index_image\nfrom . import analytics, api, forms\n\n\n@decorator_from_middleware(middleware.AgencySessionRequired)\ndef index(request):\n \"\"\"View handler for the eligibility verifier selection form.\"\"\"\n\n session.update(request, eligibility_types=[], origin=reverse(\"eligibility:index\"))\n agency = session.agency(request)\n\n eligibility_start = reverse(\"eligibility:start\")\n\n page = viewmodels.Page(\n title=_(\"eligibility.pages.index.title\"),\n content_title=_(\"eligibility.pages.index.content_title\"),\n forms=forms.EligibilityVerifierSelectionForm(agency=agency),\n image=_index_image(),\n )\n\n if request.method == \"POST\":\n form = forms.EligibilityVerifierSelectionForm(data=request.POST, agency=agency)\n\n if form.is_valid():\n verifier_id = form.cleaned_data.get(\"verifier\")\n verifier = EligibilityVerifier.objects.get(id=verifier_id)\n session.update(request, verifier=verifier)\n\n response = redirect(eligibility_start)\n else:\n # form was not valid, allow for correction/resubmission\n page.forms = [form]\n response = PageTemplateResponse(request, page)\n else:\n if agency.eligibility_verifiers.count() == 1:\n verifier = agency.eligibility_verifiers.first()\n session.update(request, verifier=verifier)\n response = redirect(eligibility_start)\n else:\n response = PageTemplateResponse(request, page)\n\n return response\n\n\n@decorator_from_middleware(middleware.AgencySessionRequired)\n@decorator_from_middleware(middleware.VerifierSessionRequired)\ndef start(request):\n \"\"\"View handler for the eligibility verification getting started screen.\"\"\"\n\n verifier = session.verifier(request)\n\n page = viewmodels.Page(\n title=_(\"eligibility.pages.start.title\"),\n content_title=_(verifier.start_content_title),\n media=[\n viewmodels.MediaItem(\n icon=viewmodels.Icon(\"idcardcheck\", pgettext(\"image alt 
text\", \"core.icons.idcardcheck\")),\n heading=_(verifier.start_item_name),\n details=_(verifier.start_item_description),\n ),\n viewmodels.MediaItem(\n icon=viewmodels.Icon(\"bankcardcheck\", pgettext(\"image alt text\", \"core.icons.bankcardcheck\")),\n heading=_(\"eligibility.pages.start.items[1].title\"),\n details=_(\"eligibility.pages.start.items[1].text\"),\n ),\n ],\n paragraphs=[_(verifier.start_blurb)],\n image=_index_image(),\n button=viewmodels.Button.primary(text=_(\"eligibility.buttons.continue\"), url=reverse(\"eligibility:confirm\")),\n )\n\n return PageTemplateResponse(request, page)\n\n\n@decorator_from_middleware(middleware.AgencySessionRequired)\n@decorator_from_middleware(middleware.RateLimit)\n@decorator_from_middleware(middleware.VerifierSessionRequired)\ndef confirm(request):\n \"\"\"View handler for the eligibility verification form.\"\"\"\n\n verifier = session.verifier(request)\n\n page = viewmodels.Page(\n title=_(verifier.form_title),\n content_title=_(verifier.form_content_title),\n paragraphs=[_(verifier.form_blurb)],\n form=forms.EligibilityVerificationForm(auto_id=True, label_suffix=\"\", verifier=verifier),\n classes=\"text-lg-center\",\n )\n\n if request.method == \"POST\":\n analytics.started_eligibility(request)\n\n form = forms.EligibilityVerificationForm(data=request.POST, verifier=verifier)\n response = _verify(request, form)\n\n if response is None:\n # form was not valid, allow for correction/resubmission\n analytics.returned_error(request, form.errors)\n page.forms = [form]\n response = PageTemplateResponse(request, page)\n elif session.eligible(request):\n eligibility = session.eligibility(request)\n response = verified(request, [eligibility.name])\n else:\n response = PageTemplateResponse(request, page)\n\n return response\n\n\ndef _verify(request, form):\n \"\"\"Helper calls the eligibility verification API with user input.\"\"\"\n\n if not form.is_valid():\n if recaptcha.has_error(form):\n messages.error(request, \"Recaptcha failed. 
Please try again.\")\n return None\n\n sub, name = form.cleaned_data.get(\"sub\"), form.cleaned_data.get(\"name\")\n\n agency = session.agency(request)\n verifier = session.verifier(request)\n client = api.Client(agency, verifier)\n\n response = client.verify(sub, name)\n\n if response.error and any(response.error):\n form.add_api_errors(response.error)\n return None\n elif any(response.eligibility):\n return verified(request, response.eligibility)\n else:\n return unverified(request)\n\n\n@decorator_from_middleware(middleware.AgencySessionRequired)\ndef verified(request, verified_types):\n \"\"\"View handler for the verified eligibility page.\"\"\"\n\n analytics.returned_success(request)\n\n enrollment_index = reverse(\"enrollment:index\")\n session.update(request, eligibility_types=verified_types, origin=enrollment_index)\n\n return redirect(enrollment_index)\n\n\n@decorator_from_middleware(middleware.AgencySessionRequired)\ndef unverified(request):\n \"\"\"View handler for the unverified eligibility page.\"\"\"\n\n analytics.returned_fail(request)\n\n # tel: link to agency phone number\n agency = session.agency(request)\n buttons = viewmodels.Button.agency_contact_links(agency)\n\n page = viewmodels.Page(\n title=_(\"eligibility.pages.unverified.title\"),\n content_title=_(\"eligibility.pages.unverified.content_title\"),\n icon=viewmodels.Icon(\"idcardquestion\", pgettext(\"image alt text\", \"core.icons.idcardquestion\")),\n paragraphs=[_(\"eligibility.pages.unverified.p[0]\"), _(\"eligibility.pages.unverified.p[1]\")],\n buttons=buttons,\n classes=\"text-lg-center\",\n )\n\n return PageTemplateResponse(request, page)\n", "path": "benefits/eligibility/views.py"}, {"content": "\"\"\"\nThe eligibility application: Form definition for the eligibility verification flow.\n\"\"\"\nimport logging\n\nfrom django import forms\nfrom django.utils.translation import gettext_lazy as _\n\nfrom benefits.core import models, recaptcha, widgets\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass EligibilityVerifierSelectionForm(forms.Form):\n \"\"\"Form to capture eligibility verifier selection.\"\"\"\n\n action_url = \"eligibility:index\"\n method = \"POST\"\n\n verifier = forms.ChoiceField(label=\"\", widget=forms.RadioSelect)\n\n submit_value = _(\"eligibility.buttons.continue\")\n\n def __init__(self, agency: models.TransitAgency, *args, **kwargs):\n super().__init__(*args, **kwargs)\n choices = [(v.id, _(v.selection_label)) for v in agency.eligibility_verifiers.all()]\n self.fields[\"verifier\"].choices = choices\n\n\nclass EligibilityVerificationForm(forms.Form):\n \"\"\"Form to collect eligibility verification details.\"\"\"\n\n action_url = \"eligibility:confirm\"\n method = \"POST\"\n\n submit_value = _(\"eligibility.forms.confirm.submit\")\n submitting_value = _(\"eligibility.forms.confirm.submitting\")\n\n _error_messages = {\n \"invalid\": _(\"eligibility.forms.confirm.errors.invalid\"),\n \"missing\": _(\"eligibility.forms.confirm.errors.missing\"),\n }\n\n def __init__(self, verifier: models.EligibilityVerifier, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n sub_widget = widgets.FormControlTextInput(placeholder=verifier.form_sub_placeholder)\n if verifier.form_sub_pattern:\n sub_widget.attrs.update({\"pattern\": verifier.form_sub_pattern})\n\n self.fields[\"sub\"] = forms.CharField(label=_(verifier.form_sub_label), widget=sub_widget)\n\n name_widget = widgets.FormControlTextInput(placeholder=verifier.form_name_placeholder)\n if verifier.form_name_max_length:\n 
name_widget.attrs.update({\"maxlength\": verifier.form_name_max_length})\n\n self.fields[\"name\"] = forms.CharField(label=_(verifier.form_name_label), widget=name_widget)\n\n def add_api_errors(self, form_errors):\n \"\"\"Handle errors passed back from API server related to submitted form values.\"\"\"\n\n validation_errors = {\n field: forms.ValidationError(self._error_messages.get(code, _(\"core.pages.error.title\")), code=code)\n for (field, code) in form_errors.items()\n if field in self.fields\n }\n\n if len(validation_errors) > 0:\n logger.warning(\"Form fields are invalid\")\n\n for (field, err) in validation_errors.items():\n self.add_error(field, err)\n\n def clean(self):\n if not recaptcha.verify(self.data):\n raise forms.ValidationError(\"reCAPTCHA failed\")\n", "path": "benefits/eligibility/forms.py"}]}
| 3,111 | 323 |
gh_patches_debug_33056
|
rasdani/github-patches
|
git_diff
|
lhotse-speech__lhotse-265
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Loading CutSet from manifest may drop supervisions
Calling `CutSet.from_manifests()` on pre-computed manifests may result in dropping supervisions.
Steps to reproduce:
```
ROOT = "."
MANIFESTS = $(ROOT)/LJSpeech-1.1/lhotse_manifests
FEATS = $(ROOT)/LJSpeech-1.1/feats
lhotse obtain ljspeech $(ROOT)
lhotse prepare ljspeech $(ROOT)/LJSpeech-1.1/ $(MANIFESTS)
# Extract LibrosaFbank features
lhotse feat write-default-config $(MANIFESTS)/librosa_config.yaml -f librosa-fbank
lhotse feat extract \
$(MANIFESTS)/recordings.json \
$(FEATS) \
-f $(MANIFESTS)/librosa_config.yaml
# store cuts
lhotse cut simple $(MANIFESTS)/cuts.json \
-r $(MANIFESTS)/recordings.json \
-f $(FEATS)/feature_manifest.json.gz \
-s $(MANIFESTS)/supervisions.json
```
The stored cuts don't have supervisions. The problem is that the ends of the feature segments and the ends of the supervision segments do not match, because `SupervisionSegment.end` [rounds to 8 digits](https://github.com/lhotse-speech/lhotse/blob/61177e48cbc2e53f83541ed8b301d4d4ab1b54da/lhotse/supervision.py#L27), but `Features.end` does not.
As a result, the [SupervisionSet.find()](https://github.com/lhotse-speech/lhotse/blob/61177e48cbc2e53f83541ed8b301d4d4ab1b54da/lhotse/cut.py#L1286) call used when creating cuts fails to match features with supervisions correctly (a minimal illustration of the mismatch is sketched below).
I tried to disable the rounding, but that breaks the `test/cut/test_cut_truncate.py` test. A quick fix would be to round inside the find function or to use some tolerance, but I am not sure whether that would break something else. WYT @pzelasko ?
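To make the failure mode concrete, here is a minimal sketch (synthetic numbers, not taken from the actual LJSpeech manifests) of how the strict `segment.end <= end_before` comparison in `find()` can drop a supervision, and how a small tolerance would keep it:
```python
# SupervisionSegment.end rounds to 8 digits, but the feature/cut end is an unrounded float sum.
sup_end = round(0.0 + 0.8, ndigits=8)   # -> 0.8, as SupervisionSegment.end computes it
cut_end = 0.1 + 0.7                     # -> 0.7999999999999999, a typical unrounded sum

print(sup_end <= cut_end)               # False -> find() silently drops this supervision

tolerance = 1e-3                        # the kind of margin suggested above
print(sup_end <= cut_end + tolerance)   # True  -> the supervision is kept
```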
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lhotse/supervision.py`
Content:
```
1 import logging
2 from dataclasses import dataclass
3 from itertools import islice
4 from typing import Any, Callable, Dict, Iterable, List, Optional, Sequence
5
6 from lhotse.serialization import Serializable
7 from lhotse.utils import Seconds, asdict_nonull, exactly_one_not_null, fastcopy, \
8 index_by_id_and_check, \
9 perturb_num_samples, split_sequence
10
11
12 @dataclass(frozen=True, unsafe_hash=True)
13 class SupervisionSegment:
14 id: str
15 recording_id: str
16 start: Seconds
17 duration: Seconds
18 channel: int = 0
19 text: Optional[str] = None
20 language: Optional[str] = None
21 speaker: Optional[str] = None
22 gender: Optional[str] = None
23 custom: Optional[Dict[str, Any]] = None
24
25 @property
26 def end(self) -> Seconds:
27 return round(self.start + self.duration, ndigits=8)
28
29 def with_offset(self, offset: Seconds) -> 'SupervisionSegment':
30 """Return an identical ``SupervisionSegment``, but with the ``offset`` added to the ``start`` field."""
31 return fastcopy(self, start=round(self.start + offset, ndigits=8))
32
33 def perturb_speed(
34 self,
35 factor: float,
36 sampling_rate: int,
37 affix_id: bool = True
38 ) -> 'SupervisionSegment':
39 """
40 Return a ``SupervisionSegment`` that has time boundaries matching the
41 recording/cut perturbed with the same factor.
42
43 :param factor: The speed will be adjusted this many times (e.g. factor=1.1 means 1.1x faster).
44 :param sampling_rate: The sampling rate is necessary to accurately perturb the start
45 and duration (going through the sample counts).
46 :param affix_id: When true, we will modify the ``id`` and ``recording_id`` fields
47 by affixing it with "_sp{factor}".
48 :return: a modified copy of the current ``Recording``.
49 """
50 start_sample = round(self.start * sampling_rate)
51 num_samples = round(self.duration * sampling_rate)
52 new_start = perturb_num_samples(start_sample, factor) / sampling_rate
53 new_duration = perturb_num_samples(num_samples, factor) / sampling_rate
54 return fastcopy(
55 self,
56 id=f'{self.id}_sp{factor}' if affix_id else self.id,
57 recording_id=f'{self.recording_id}_sp{factor}' if affix_id else self.id,
58 start=new_start,
59 duration=new_duration
60 )
61
62 def trim(self, end: Seconds) -> 'SupervisionSegment':
63 """
64 Return an identical ``SupervisionSegment``, but ensure that ``self.start`` is not negative (in which case
65 it's set to 0) and ``self.end`` does not exceed the ``end`` parameter.
66
67 This method is useful for ensuring that the supervision does not exceed a cut's bounds,
68 in which case pass ``cut.duration`` as the ``end`` argument, since supervision times are relative to the cut.
69 """
70 start_exceeds_by = abs(min(0, self.start))
71 end_exceeds_by = max(0, self.end - end)
72 return fastcopy(self, start=max(0, self.start), duration=self.duration - end_exceeds_by - start_exceeds_by)
73
74 def map(self, transform_fn: Callable[['SupervisionSegment'], 'SupervisionSegment']) -> 'SupervisionSegment':
75 """
76 Return a copy of the current segment, transformed with ``transform_fn``.
77
78 :param transform_fn: a function that takes a segment as input, transforms it and returns a new segment.
79 :return: a modified ``SupervisionSegment``.
80 """
81 return transform_fn(self)
82
83 def transform_text(self, transform_fn: Callable[[str], str]) -> 'SupervisionSegment':
84 """
85 Return a copy of the current segment with transformed ``text`` field.
86 Useful for text normalization, phonetic transcription, etc.
87
88 :param transform_fn: a function that accepts a string and returns a string.
89 :return: a ``SupervisionSegment`` with adjusted text.
90 """
91 if self.text is None:
92 return self
93 return fastcopy(self, text=transform_fn(self.text))
94
95 @staticmethod
96 def from_dict(data: dict) -> 'SupervisionSegment':
97 return SupervisionSegment(**data)
98
99
100 @dataclass
101 class SupervisionSet(Serializable, Sequence[SupervisionSegment]):
102 """
103 SupervisionSet represents a collection of segments containing some supervision information.
104 The only required fields are the ID of the segment, ID of the corresponding recording,
105 and the start and duration of the segment in seconds.
106 All other fields, such as text, language or speaker, are deliberately optional
107 to support a wide range of tasks, as well as adding more supervision types in the future,
108 while retaining backwards compatibility.
109 """
110 segments: Dict[str, SupervisionSegment]
111
112 @staticmethod
113 def from_segments(segments: Iterable[SupervisionSegment]) -> 'SupervisionSet':
114 return SupervisionSet(segments=index_by_id_and_check(segments))
115
116 @staticmethod
117 def from_dicts(data: Iterable[Dict]) -> 'SupervisionSet':
118 return SupervisionSet.from_segments(SupervisionSegment.from_dict(s) for s in data)
119
120 def to_dicts(self) -> List[dict]:
121 return [asdict_nonull(s) for s in self]
122
123 def split(self, num_splits: int, shuffle: bool = False) -> List['SupervisionSet']:
124 """
125 Split the ``SupervisionSet`` into ``num_splits`` pieces of equal size.
126
127 :param num_splits: Requested number of splits.
128 :param shuffle: Optionally shuffle the supervisions order first.
129 :return: A list of ``SupervisionSet`` pieces.
130 """
131 return [
132 SupervisionSet.from_segments(subset) for subset in
133 split_sequence(self, num_splits=num_splits, shuffle=shuffle)
134 ]
135
136 def subset(self, first: Optional[int] = None, last: Optional[int] = None) -> 'SupervisionSet':
137 """
138 Return a new ``SupervisionSet`` according to the selected subset criterion.
139 Only a single argument to ``subset`` is supported at this time.
140
141 :param first: int, the number of first supervisions to keep.
142 :param last: int, the number of last supervisions to keep.
143 :return: a new ``SupervisionSet`` with the subset results.
144 """
145 assert exactly_one_not_null(first, last), "subset() can handle only one non-None arg."
146
147 if first is not None:
148 assert first > 0
149 if first > len(self):
150 logging.warning(f'SupervisionSet has only {len(self)} items but first {first} required; '
151 f'not doing anything.')
152 return self
153 return SupervisionSet.from_segments(islice(self, first))
154
155 if last is not None:
156 assert last > 0
157 if last > len(self):
158 logging.warning(f'SupervisionSet has only {len(self)} items but last {last} required; '
159 f'not doing anything.')
160 return self
161 return SupervisionSet.from_segments(islice(self, len(self) - last, len(self)))
162
163 def filter(self, predicate: Callable[[SupervisionSegment], bool]) -> 'SupervisionSet':
164 """
165 Return a new SupervisionSet with the SupervisionSegments that satisfy the `predicate`.
166
167 :param predicate: a function that takes a supervision as an argument and returns bool.
168 :return: a filtered SupervisionSet.
169 """
170 return SupervisionSet.from_segments(seg for seg in self if predicate(seg))
171
172 def map(self, transform_fn: Callable[[SupervisionSegment], SupervisionSegment]) -> 'SupervisionSet':
173 """
174 Map a ``transform_fn`` to the SupervisionSegments and return a new ``SupervisionSet``.
175
176 :param transform_fn: a function that modifies a supervision as an argument.
177 :return: a new ``SupervisionSet`` with modified segments.
178 """
179 return SupervisionSet.from_segments(s.map(transform_fn) for s in self)
180
181 def transform_text(self, transform_fn: Callable[[str], str]) -> 'SupervisionSet':
182 """
183 Return a copy of the current ``SupervisionSet`` with the segments having a transformed ``text`` field.
184 Useful for text normalization, phonetic transcription, etc.
185
186 :param transform_fn: a function that accepts a string and returns a string.
187 :return: a ``SupervisionSet`` with adjusted text.
188 """
189 return SupervisionSet.from_segments(s.transform_text(transform_fn) for s in self)
190
191 def find(
192 self,
193 recording_id: str,
194 channel: Optional[int] = None,
195 start_after: Seconds = 0,
196 end_before: Optional[Seconds] = None,
197 adjust_offset: bool = False
198 ) -> Iterable[SupervisionSegment]:
199 """
200 Return an iterable of segments that match the provided ``recording_id``.
201
202 :param recording_id: Desired recording ID.
203 :param channel: When specified, return supervisions in that channel - otherwise, in all channels.
204 :param start_after: When specified, return segments that start after the given value.
205 :param end_before: When specified, return segments that end before the given value.
206 :param adjust_offset: When true, return segments as if the recordings had started at ``start_after``.
207 This is useful for creating Cuts. Fom a user perspective, when dealing with a Cut, it is no
208 longer helpful to know when the supervisions starts in a recording - instead, it's useful to
209 know when the supervision starts relative to the start of the Cut.
210 In the anticipated use-case, ``start_after`` and ``end_before`` would be
211 the beginning and end of a cut;
212 this option converts the times to be relative to the start of the cut.
213 :return: An iterator over supervision segments satisfying all criteria.
214 """
215 segment_by_recording_id = self._index_by_recording_id_and_cache()
216 return (
217 # We only modify the offset - the duration remains the same, as we're only shifting the segment
218 # relative to the Cut's start, and not truncating anything.
219 segment.with_offset(-start_after) if adjust_offset else segment
220 for segment in segment_by_recording_id.get(recording_id, [])
221 if (channel is None or segment.channel == channel)
222 and segment.start >= start_after
223 and (end_before is None or segment.end <= end_before)
224 )
225
226 # This is a cache that significantly speeds up repeated ``find()`` queries.
227 _segments_by_recording_id: Optional[Dict[str, List[SupervisionSegment]]] = None
228
229 def _index_by_recording_id_and_cache(self):
230 if self._segments_by_recording_id is None:
231 from cytoolz import groupby
232 self._segments_by_recording_id = groupby(lambda seg: seg.recording_id, self)
233 return self._segments_by_recording_id
234
235 def __repr__(self) -> str:
236 return f'SupervisionSet(len={len(self)})'
237
238 def __getitem__(self, item: str) -> SupervisionSegment:
239 return self.segments[item]
240
241 def __iter__(self) -> Iterable[SupervisionSegment]:
242 return iter(self.segments.values())
243
244 def __len__(self) -> int:
245 return len(self.segments)
246
247 def __add__(self, other: 'SupervisionSet') -> 'SupervisionSet':
248 return SupervisionSet(segments={**self.segments, **other.segments})
249
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lhotse/supervision.py b/lhotse/supervision.py
--- a/lhotse/supervision.py
+++ b/lhotse/supervision.py
@@ -194,7 +194,8 @@
channel: Optional[int] = None,
start_after: Seconds = 0,
end_before: Optional[Seconds] = None,
- adjust_offset: bool = False
+ adjust_offset: bool = False,
+ tolerance: Seconds = 0.001
) -> Iterable[SupervisionSegment]:
"""
Return an iterable of segments that match the provided ``recording_id``.
@@ -210,6 +211,8 @@
In the anticipated use-case, ``start_after`` and ``end_before`` would be
the beginning and end of a cut;
this option converts the times to be relative to the start of the cut.
+ :param tolerance: Additional margin to account for floating point rounding errors
+ when comparing segment boundaries.
:return: An iterator over supervision segments satisfying all criteria.
"""
segment_by_recording_id = self._index_by_recording_id_and_cache()
@@ -219,8 +222,8 @@
segment.with_offset(-start_after) if adjust_offset else segment
for segment in segment_by_recording_id.get(recording_id, [])
if (channel is None or segment.channel == channel)
- and segment.start >= start_after
- and (end_before is None or segment.end <= end_before)
+ and segment.start >= start_after - tolerance
+ and (end_before is None or segment.end <= end_before + tolerance)
)
# This is a cache that significantly speeds up repeated ``find()`` queries.
|
{"golden_diff": "diff --git a/lhotse/supervision.py b/lhotse/supervision.py\n--- a/lhotse/supervision.py\n+++ b/lhotse/supervision.py\n@@ -194,7 +194,8 @@\n channel: Optional[int] = None,\n start_after: Seconds = 0,\n end_before: Optional[Seconds] = None,\n- adjust_offset: bool = False\n+ adjust_offset: bool = False,\n+ tolerance: Seconds = 0.001\n ) -> Iterable[SupervisionSegment]:\n \"\"\"\n Return an iterable of segments that match the provided ``recording_id``.\n@@ -210,6 +211,8 @@\n In the anticipated use-case, ``start_after`` and ``end_before`` would be\n the beginning and end of a cut;\n this option converts the times to be relative to the start of the cut.\n+ :param tolerance: Additional margin to account for floating point rounding errors\n+ when comparing segment boundaries.\n :return: An iterator over supervision segments satisfying all criteria.\n \"\"\"\n segment_by_recording_id = self._index_by_recording_id_and_cache()\n@@ -219,8 +222,8 @@\n segment.with_offset(-start_after) if adjust_offset else segment\n for segment in segment_by_recording_id.get(recording_id, [])\n if (channel is None or segment.channel == channel)\n- and segment.start >= start_after\n- and (end_before is None or segment.end <= end_before)\n+ and segment.start >= start_after - tolerance\n+ and (end_before is None or segment.end <= end_before + tolerance)\n )\n \n # This is a cache that significantly speeds up repeated ``find()`` queries.\n", "issue": "Loading CutSet from manifest may drop supervisions\nCalling `CutSet.from_manifests()` on pre-computed manifests may result in dopping supervisions. \r\nSteps to reproduce:\r\n\r\n```\r\nROOT = \".\"\r\nMANIFESTS = $(ROOT)/LJSpeech-1.1/lhotse_manifests\r\nFEATS = $(ROOT)/LJSpeech-1.1/feats\r\n\r\nlhotse obtain ljspeech $(ROOT)\r\nlhotse prepare ljspeech $(ROOT)/LJSpeech-1.1/ $(MANIFESTS)\r\n\r\n# Extract LibrosaFbank features\r\nlhotse feat write-default-config $(MANIFESTS)/librosa_config.yaml -f librosa-fbank\r\nlhotse feat extract \\\r\n$(MANIFESTS)/recordings.json \\\r\n$(FEATS) \\\r\n-f $(MANIFESTS)/librosa_config.yaml\r\n\r\n# store cuts\r\nlhotse cut simple $(MANIFESTS)/cuts.json \\\r\n-r $(MANIFESTS)/recordings.json \\\r\n-f $(FEATS)/feature_manifest.json.gz \\\r\n-s $(MANIFESTS)/supervisions.json\r\n```\r\nThe stored cuts don't have supervisions. The problem is that the ends of the feature segments and ends of the supervision segments do not match, because Supervision.end [rounds to 8 digits](https://github.com/lhotse-speech/lhotse/blob/61177e48cbc2e53f83541ed8b301d4d4ab1b54da/lhotse/supervision.py#L27), but the Features.end does not.\r\n\r\nThe SupervisionSegment.find then does not match features with supervisions correctly in [SupervisionSet.find()](https://github.com/lhotse-speech/lhotse/blob/61177e48cbc2e53f83541ed8b301d4d4ab1b54da/lhotse/cut.py#L1286). \r\n\r\nI tried to disable the rounding, but it triggers the `test/cut/test_cut_truncate.py` test. Quick fix would be to round digits inside the find function or use some tolerance, but I am not sure if this would not break something else. 
WYT @pzelasko ?\r\n\n", "before_files": [{"content": "import logging\nfrom dataclasses import dataclass\nfrom itertools import islice\nfrom typing import Any, Callable, Dict, Iterable, List, Optional, Sequence\n\nfrom lhotse.serialization import Serializable\nfrom lhotse.utils import Seconds, asdict_nonull, exactly_one_not_null, fastcopy, \\\n index_by_id_and_check, \\\n perturb_num_samples, split_sequence\n\n\n@dataclass(frozen=True, unsafe_hash=True)\nclass SupervisionSegment:\n id: str\n recording_id: str\n start: Seconds\n duration: Seconds\n channel: int = 0\n text: Optional[str] = None\n language: Optional[str] = None\n speaker: Optional[str] = None\n gender: Optional[str] = None\n custom: Optional[Dict[str, Any]] = None\n\n @property\n def end(self) -> Seconds:\n return round(self.start + self.duration, ndigits=8)\n\n def with_offset(self, offset: Seconds) -> 'SupervisionSegment':\n \"\"\"Return an identical ``SupervisionSegment``, but with the ``offset`` added to the ``start`` field.\"\"\"\n return fastcopy(self, start=round(self.start + offset, ndigits=8))\n\n def perturb_speed(\n self,\n factor: float,\n sampling_rate: int,\n affix_id: bool = True\n ) -> 'SupervisionSegment':\n \"\"\"\n Return a ``SupervisionSegment`` that has time boundaries matching the\n recording/cut perturbed with the same factor.\n\n :param factor: The speed will be adjusted this many times (e.g. factor=1.1 means 1.1x faster).\n :param sampling_rate: The sampling rate is necessary to accurately perturb the start\n and duration (going through the sample counts).\n :param affix_id: When true, we will modify the ``id`` and ``recording_id`` fields\n by affixing it with \"_sp{factor}\".\n :return: a modified copy of the current ``Recording``.\n \"\"\"\n start_sample = round(self.start * sampling_rate)\n num_samples = round(self.duration * sampling_rate)\n new_start = perturb_num_samples(start_sample, factor) / sampling_rate\n new_duration = perturb_num_samples(num_samples, factor) / sampling_rate\n return fastcopy(\n self,\n id=f'{self.id}_sp{factor}' if affix_id else self.id,\n recording_id=f'{self.recording_id}_sp{factor}' if affix_id else self.id,\n start=new_start,\n duration=new_duration\n )\n\n def trim(self, end: Seconds) -> 'SupervisionSegment':\n \"\"\"\n Return an identical ``SupervisionSegment``, but ensure that ``self.start`` is not negative (in which case\n it's set to 0) and ``self.end`` does not exceed the ``end`` parameter.\n\n This method is useful for ensuring that the supervision does not exceed a cut's bounds,\n in which case pass ``cut.duration`` as the ``end`` argument, since supervision times are relative to the cut.\n \"\"\"\n start_exceeds_by = abs(min(0, self.start))\n end_exceeds_by = max(0, self.end - end)\n return fastcopy(self, start=max(0, self.start), duration=self.duration - end_exceeds_by - start_exceeds_by)\n\n def map(self, transform_fn: Callable[['SupervisionSegment'], 'SupervisionSegment']) -> 'SupervisionSegment':\n \"\"\"\n Return a copy of the current segment, transformed with ``transform_fn``.\n\n :param transform_fn: a function that takes a segment as input, transforms it and returns a new segment.\n :return: a modified ``SupervisionSegment``.\n \"\"\"\n return transform_fn(self)\n\n def transform_text(self, transform_fn: Callable[[str], str]) -> 'SupervisionSegment':\n \"\"\"\n Return a copy of the current segment with transformed ``text`` field.\n Useful for text normalization, phonetic transcription, etc.\n\n :param transform_fn: a function that accepts a string and 
returns a string.\n :return: a ``SupervisionSegment`` with adjusted text.\n \"\"\"\n if self.text is None:\n return self\n return fastcopy(self, text=transform_fn(self.text))\n\n @staticmethod\n def from_dict(data: dict) -> 'SupervisionSegment':\n return SupervisionSegment(**data)\n\n\n@dataclass\nclass SupervisionSet(Serializable, Sequence[SupervisionSegment]):\n \"\"\"\n SupervisionSet represents a collection of segments containing some supervision information.\n The only required fields are the ID of the segment, ID of the corresponding recording,\n and the start and duration of the segment in seconds.\n All other fields, such as text, language or speaker, are deliberately optional\n to support a wide range of tasks, as well as adding more supervision types in the future,\n while retaining backwards compatibility.\n \"\"\"\n segments: Dict[str, SupervisionSegment]\n\n @staticmethod\n def from_segments(segments: Iterable[SupervisionSegment]) -> 'SupervisionSet':\n return SupervisionSet(segments=index_by_id_and_check(segments))\n\n @staticmethod\n def from_dicts(data: Iterable[Dict]) -> 'SupervisionSet':\n return SupervisionSet.from_segments(SupervisionSegment.from_dict(s) for s in data)\n\n def to_dicts(self) -> List[dict]:\n return [asdict_nonull(s) for s in self]\n\n def split(self, num_splits: int, shuffle: bool = False) -> List['SupervisionSet']:\n \"\"\"\n Split the ``SupervisionSet`` into ``num_splits`` pieces of equal size.\n\n :param num_splits: Requested number of splits.\n :param shuffle: Optionally shuffle the supervisions order first.\n :return: A list of ``SupervisionSet`` pieces.\n \"\"\"\n return [\n SupervisionSet.from_segments(subset) for subset in\n split_sequence(self, num_splits=num_splits, shuffle=shuffle)\n ]\n\n def subset(self, first: Optional[int] = None, last: Optional[int] = None) -> 'SupervisionSet':\n \"\"\"\n Return a new ``SupervisionSet`` according to the selected subset criterion.\n Only a single argument to ``subset`` is supported at this time.\n\n :param first: int, the number of first supervisions to keep.\n :param last: int, the number of last supervisions to keep.\n :return: a new ``SupervisionSet`` with the subset results.\n \"\"\"\n assert exactly_one_not_null(first, last), \"subset() can handle only one non-None arg.\"\n\n if first is not None:\n assert first > 0\n if first > len(self):\n logging.warning(f'SupervisionSet has only {len(self)} items but first {first} required; '\n f'not doing anything.')\n return self\n return SupervisionSet.from_segments(islice(self, first))\n\n if last is not None:\n assert last > 0\n if last > len(self):\n logging.warning(f'SupervisionSet has only {len(self)} items but last {last} required; '\n f'not doing anything.')\n return self\n return SupervisionSet.from_segments(islice(self, len(self) - last, len(self)))\n\n def filter(self, predicate: Callable[[SupervisionSegment], bool]) -> 'SupervisionSet':\n \"\"\"\n Return a new SupervisionSet with the SupervisionSegments that satisfy the `predicate`.\n\n :param predicate: a function that takes a supervision as an argument and returns bool.\n :return: a filtered SupervisionSet.\n \"\"\"\n return SupervisionSet.from_segments(seg for seg in self if predicate(seg))\n\n def map(self, transform_fn: Callable[[SupervisionSegment], SupervisionSegment]) -> 'SupervisionSet':\n \"\"\"\n Map a ``transform_fn`` to the SupervisionSegments and return a new ``SupervisionSet``.\n\n :param transform_fn: a function that modifies a supervision as an argument.\n :return: a new 
``SupervisionSet`` with modified segments.\n \"\"\"\n return SupervisionSet.from_segments(s.map(transform_fn) for s in self)\n\n def transform_text(self, transform_fn: Callable[[str], str]) -> 'SupervisionSet':\n \"\"\"\n Return a copy of the current ``SupervisionSet`` with the segments having a transformed ``text`` field.\n Useful for text normalization, phonetic transcription, etc.\n\n :param transform_fn: a function that accepts a string and returns a string.\n :return: a ``SupervisionSet`` with adjusted text.\n \"\"\"\n return SupervisionSet.from_segments(s.transform_text(transform_fn) for s in self)\n\n def find(\n self,\n recording_id: str,\n channel: Optional[int] = None,\n start_after: Seconds = 0,\n end_before: Optional[Seconds] = None,\n adjust_offset: bool = False\n ) -> Iterable[SupervisionSegment]:\n \"\"\"\n Return an iterable of segments that match the provided ``recording_id``.\n\n :param recording_id: Desired recording ID.\n :param channel: When specified, return supervisions in that channel - otherwise, in all channels.\n :param start_after: When specified, return segments that start after the given value.\n :param end_before: When specified, return segments that end before the given value.\n :param adjust_offset: When true, return segments as if the recordings had started at ``start_after``.\n This is useful for creating Cuts. Fom a user perspective, when dealing with a Cut, it is no\n longer helpful to know when the supervisions starts in a recording - instead, it's useful to\n know when the supervision starts relative to the start of the Cut.\n In the anticipated use-case, ``start_after`` and ``end_before`` would be\n the beginning and end of a cut;\n this option converts the times to be relative to the start of the cut.\n :return: An iterator over supervision segments satisfying all criteria.\n \"\"\"\n segment_by_recording_id = self._index_by_recording_id_and_cache()\n return (\n # We only modify the offset - the duration remains the same, as we're only shifting the segment\n # relative to the Cut's start, and not truncating anything.\n segment.with_offset(-start_after) if adjust_offset else segment\n for segment in segment_by_recording_id.get(recording_id, [])\n if (channel is None or segment.channel == channel)\n and segment.start >= start_after\n and (end_before is None or segment.end <= end_before)\n )\n\n # This is a cache that significantly speeds up repeated ``find()`` queries.\n _segments_by_recording_id: Optional[Dict[str, List[SupervisionSegment]]] = None\n\n def _index_by_recording_id_and_cache(self):\n if self._segments_by_recording_id is None:\n from cytoolz import groupby\n self._segments_by_recording_id = groupby(lambda seg: seg.recording_id, self)\n return self._segments_by_recording_id\n\n def __repr__(self) -> str:\n return f'SupervisionSet(len={len(self)})'\n\n def __getitem__(self, item: str) -> SupervisionSegment:\n return self.segments[item]\n\n def __iter__(self) -> Iterable[SupervisionSegment]:\n return iter(self.segments.values())\n\n def __len__(self) -> int:\n return len(self.segments)\n\n def __add__(self, other: 'SupervisionSet') -> 'SupervisionSet':\n return SupervisionSet(segments={**self.segments, **other.segments})\n", "path": "lhotse/supervision.py"}], "after_files": [{"content": "import logging\nfrom dataclasses import dataclass\nfrom itertools import islice\nfrom typing import Any, Callable, Dict, Iterable, List, Optional, Sequence\n\nfrom lhotse.serialization import Serializable\nfrom lhotse.utils import Seconds, asdict_nonull, 
exactly_one_not_null, fastcopy, \\\n index_by_id_and_check, \\\n perturb_num_samples, split_sequence\n\n\n@dataclass(frozen=True, unsafe_hash=True)\nclass SupervisionSegment:\n id: str\n recording_id: str\n start: Seconds\n duration: Seconds\n channel: int = 0\n text: Optional[str] = None\n language: Optional[str] = None\n speaker: Optional[str] = None\n gender: Optional[str] = None\n custom: Optional[Dict[str, Any]] = None\n\n @property\n def end(self) -> Seconds:\n return round(self.start + self.duration, ndigits=8)\n\n def with_offset(self, offset: Seconds) -> 'SupervisionSegment':\n \"\"\"Return an identical ``SupervisionSegment``, but with the ``offset`` added to the ``start`` field.\"\"\"\n return fastcopy(self, start=round(self.start + offset, ndigits=8))\n\n def perturb_speed(\n self,\n factor: float,\n sampling_rate: int,\n affix_id: bool = True\n ) -> 'SupervisionSegment':\n \"\"\"\n Return a ``SupervisionSegment`` that has time boundaries matching the\n recording/cut perturbed with the same factor.\n\n :param factor: The speed will be adjusted this many times (e.g. factor=1.1 means 1.1x faster).\n :param sampling_rate: The sampling rate is necessary to accurately perturb the start\n and duration (going through the sample counts).\n :param affix_id: When true, we will modify the ``id`` and ``recording_id`` fields\n by affixing it with \"_sp{factor}\".\n :return: a modified copy of the current ``Recording``.\n \"\"\"\n start_sample = round(self.start * sampling_rate)\n num_samples = round(self.duration * sampling_rate)\n new_start = perturb_num_samples(start_sample, factor) / sampling_rate\n new_duration = perturb_num_samples(num_samples, factor) / sampling_rate\n return fastcopy(\n self,\n id=f'{self.id}_sp{factor}' if affix_id else self.id,\n recording_id=f'{self.recording_id}_sp{factor}' if affix_id else self.id,\n start=new_start,\n duration=new_duration\n )\n\n def trim(self, end: Seconds) -> 'SupervisionSegment':\n \"\"\"\n Return an identical ``SupervisionSegment``, but ensure that ``self.start`` is not negative (in which case\n it's set to 0) and ``self.end`` does not exceed the ``end`` parameter.\n\n This method is useful for ensuring that the supervision does not exceed a cut's bounds,\n in which case pass ``cut.duration`` as the ``end`` argument, since supervision times are relative to the cut.\n \"\"\"\n start_exceeds_by = abs(min(0, self.start))\n end_exceeds_by = max(0, self.end - end)\n return fastcopy(self, start=max(0, self.start), duration=self.duration - end_exceeds_by - start_exceeds_by)\n\n def map(self, transform_fn: Callable[['SupervisionSegment'], 'SupervisionSegment']) -> 'SupervisionSegment':\n \"\"\"\n Return a copy of the current segment, transformed with ``transform_fn``.\n\n :param transform_fn: a function that takes a segment as input, transforms it and returns a new segment.\n :return: a modified ``SupervisionSegment``.\n \"\"\"\n return transform_fn(self)\n\n def transform_text(self, transform_fn: Callable[[str], str]) -> 'SupervisionSegment':\n \"\"\"\n Return a copy of the current segment with transformed ``text`` field.\n Useful for text normalization, phonetic transcription, etc.\n\n :param transform_fn: a function that accepts a string and returns a string.\n :return: a ``SupervisionSegment`` with adjusted text.\n \"\"\"\n if self.text is None:\n return self\n return fastcopy(self, text=transform_fn(self.text))\n\n @staticmethod\n def from_dict(data: dict) -> 'SupervisionSegment':\n return SupervisionSegment(**data)\n\n\n@dataclass\nclass 
SupervisionSet(Serializable, Sequence[SupervisionSegment]):\n \"\"\"\n SupervisionSet represents a collection of segments containing some supervision information.\n The only required fields are the ID of the segment, ID of the corresponding recording,\n and the start and duration of the segment in seconds.\n All other fields, such as text, language or speaker, are deliberately optional\n to support a wide range of tasks, as well as adding more supervision types in the future,\n while retaining backwards compatibility.\n \"\"\"\n segments: Dict[str, SupervisionSegment]\n\n @staticmethod\n def from_segments(segments: Iterable[SupervisionSegment]) -> 'SupervisionSet':\n return SupervisionSet(segments=index_by_id_and_check(segments))\n\n @staticmethod\n def from_dicts(data: Iterable[Dict]) -> 'SupervisionSet':\n return SupervisionSet.from_segments(SupervisionSegment.from_dict(s) for s in data)\n\n def to_dicts(self) -> List[dict]:\n return [asdict_nonull(s) for s in self]\n\n def split(self, num_splits: int, shuffle: bool = False) -> List['SupervisionSet']:\n \"\"\"\n Split the ``SupervisionSet`` into ``num_splits`` pieces of equal size.\n\n :param num_splits: Requested number of splits.\n :param shuffle: Optionally shuffle the supervisions order first.\n :return: A list of ``SupervisionSet`` pieces.\n \"\"\"\n return [\n SupervisionSet.from_segments(subset) for subset in\n split_sequence(self, num_splits=num_splits, shuffle=shuffle)\n ]\n\n def subset(self, first: Optional[int] = None, last: Optional[int] = None) -> 'SupervisionSet':\n \"\"\"\n Return a new ``SupervisionSet`` according to the selected subset criterion.\n Only a single argument to ``subset`` is supported at this time.\n\n :param first: int, the number of first supervisions to keep.\n :param last: int, the number of last supervisions to keep.\n :return: a new ``SupervisionSet`` with the subset results.\n \"\"\"\n assert exactly_one_not_null(first, last), \"subset() can handle only one non-None arg.\"\n\n if first is not None:\n assert first > 0\n if first > len(self):\n logging.warning(f'SupervisionSet has only {len(self)} items but first {first} required; '\n f'not doing anything.')\n return self\n return SupervisionSet.from_segments(islice(self, first))\n\n if last is not None:\n assert last > 0\n if last > len(self):\n logging.warning(f'SupervisionSet has only {len(self)} items but last {last} required; '\n f'not doing anything.')\n return self\n return SupervisionSet.from_segments(islice(self, len(self) - last, len(self)))\n\n def filter(self, predicate: Callable[[SupervisionSegment], bool]) -> 'SupervisionSet':\n \"\"\"\n Return a new SupervisionSet with the SupervisionSegments that satisfy the `predicate`.\n\n :param predicate: a function that takes a supervision as an argument and returns bool.\n :return: a filtered SupervisionSet.\n \"\"\"\n return SupervisionSet.from_segments(seg for seg in self if predicate(seg))\n\n def map(self, transform_fn: Callable[[SupervisionSegment], SupervisionSegment]) -> 'SupervisionSet':\n \"\"\"\n Map a ``transform_fn`` to the SupervisionSegments and return a new ``SupervisionSet``.\n\n :param transform_fn: a function that modifies a supervision as an argument.\n :return: a new ``SupervisionSet`` with modified segments.\n \"\"\"\n return SupervisionSet.from_segments(s.map(transform_fn) for s in self)\n\n def transform_text(self, transform_fn: Callable[[str], str]) -> 'SupervisionSet':\n \"\"\"\n Return a copy of the current ``SupervisionSet`` with the segments having a transformed 
``text`` field.\n Useful for text normalization, phonetic transcription, etc.\n\n :param transform_fn: a function that accepts a string and returns a string.\n :return: a ``SupervisionSet`` with adjusted text.\n \"\"\"\n return SupervisionSet.from_segments(s.transform_text(transform_fn) for s in self)\n\n def find(\n self,\n recording_id: str,\n channel: Optional[int] = None,\n start_after: Seconds = 0,\n end_before: Optional[Seconds] = None,\n adjust_offset: bool = False,\n tolerance: Seconds = 0.001\n ) -> Iterable[SupervisionSegment]:\n \"\"\"\n Return an iterable of segments that match the provided ``recording_id``.\n\n :param recording_id: Desired recording ID.\n :param channel: When specified, return supervisions in that channel - otherwise, in all channels.\n :param start_after: When specified, return segments that start after the given value.\n :param end_before: When specified, return segments that end before the given value.\n :param adjust_offset: When true, return segments as if the recordings had started at ``start_after``.\n This is useful for creating Cuts. Fom a user perspective, when dealing with a Cut, it is no\n longer helpful to know when the supervisions starts in a recording - instead, it's useful to\n know when the supervision starts relative to the start of the Cut.\n In the anticipated use-case, ``start_after`` and ``end_before`` would be\n the beginning and end of a cut;\n this option converts the times to be relative to the start of the cut.\n :param tolerance: Additional margin to account for floating point rounding errors\n when comparing segment boundaries.\n :return: An iterator over supervision segments satisfying all criteria.\n \"\"\"\n segment_by_recording_id = self._index_by_recording_id_and_cache()\n return (\n # We only modify the offset - the duration remains the same, as we're only shifting the segment\n # relative to the Cut's start, and not truncating anything.\n segment.with_offset(-start_after) if adjust_offset else segment\n for segment in segment_by_recording_id.get(recording_id, [])\n if (channel is None or segment.channel == channel)\n and segment.start >= start_after - tolerance\n and (end_before is None or segment.end <= end_before + tolerance)\n )\n\n # This is a cache that significantly speeds up repeated ``find()`` queries.\n _segments_by_recording_id: Optional[Dict[str, List[SupervisionSegment]]] = None\n\n def _index_by_recording_id_and_cache(self):\n if self._segments_by_recording_id is None:\n from cytoolz import groupby\n self._segments_by_recording_id = groupby(lambda seg: seg.recording_id, self)\n return self._segments_by_recording_id\n\n def __repr__(self) -> str:\n return f'SupervisionSet(len={len(self)})'\n\n def __getitem__(self, item: str) -> SupervisionSegment:\n return self.segments[item]\n\n def __iter__(self) -> Iterable[SupervisionSegment]:\n return iter(self.segments.values())\n\n def __len__(self) -> int:\n return len(self.segments)\n\n def __add__(self, other: 'SupervisionSet') -> 'SupervisionSet':\n return SupervisionSet(segments={**self.segments, **other.segments})\n", "path": "lhotse/supervision.py"}]}
| 3,945 | 386 |
gh_patches_debug_15751
|
rasdani/github-patches
|
git_diff
|
zulip__zulip-5463
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Too many redirects for an account during login of a deactivated account
I deactivated my Zulip account and then signed up again and tried to sign in. For a wrong password, it showed a wrong-password error; when I entered the correct credentials, Chrome showed
```
chat.zulip.org redirected you too many times.
Try clearing your cookies.
ERR_TOO_MANY_REDIRECTS
```
Clearing cookies did not work as a solution for it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zerver/forms.py`
Content:
```
1 from __future__ import absolute_import
2
3 from django import forms
4 from django.conf import settings
5 from django.contrib.auth.forms import SetPasswordForm, AuthenticationForm, \
6 PasswordResetForm
7 from django.core.exceptions import ValidationError
8 from django.core.urlresolvers import reverse
9 from django.core.validators import validate_email
10 from django.db.models.query import QuerySet
11 from django.utils.translation import ugettext as _
12 from jinja2 import Markup as mark_safe
13
14 from zerver.lib.actions import do_change_password, is_inactive, user_email_is_unique
15 from zerver.lib.name_restrictions import is_reserved_subdomain, is_disposable_domain
16 from zerver.lib.request import JsonableError
17 from zerver.lib.send_email import send_email
18 from zerver.lib.users import check_full_name
19 from zerver.lib.utils import get_subdomain, check_subdomain
20 from zerver.models import Realm, get_user_profile_by_email, UserProfile, \
21 get_realm_by_email_domain, get_realm, \
22 get_unique_open_realm, email_to_domain, email_allowed_for_realm
23 from zproject.backends import password_auth_enabled
24
25 import logging
26 import re
27 import DNS
28
29 from typing import Any, Callable, List, Optional, Text, Dict
30
31 MIT_VALIDATION_ERROR = u'That user does not exist at MIT or is a ' + \
32 u'<a href="https://ist.mit.edu/email-lists">mailing list</a>. ' + \
33 u'If you want to sign up an alias for Zulip, ' + \
34 u'<a href="mailto:[email protected]">contact us</a>.'
35 WRONG_SUBDOMAIN_ERROR = "Your Zulip account is not a member of the " + \
36 "organization associated with this subdomain. " + \
37 "Please contact %s with any questions!" % (settings.ZULIP_ADMINISTRATOR,)
38
39 def email_is_not_mit_mailing_list(email):
40 # type: (Text) -> None
41 """Prevent MIT mailing lists from signing up for Zulip"""
42 if "@mit.edu" in email:
43 username = email.rsplit("@", 1)[0]
44 # Check whether the user exists and can get mail.
45 try:
46 DNS.dnslookup("%s.pobox.ns.athena.mit.edu" % username, DNS.Type.TXT)
47 except DNS.Base.ServerError as e:
48 if e.rcode == DNS.Status.NXDOMAIN:
49 raise ValidationError(mark_safe(MIT_VALIDATION_ERROR))
50 else:
51 raise
52
53 class RegistrationForm(forms.Form):
54 MAX_PASSWORD_LENGTH = 100
55 full_name = forms.CharField(max_length=100)
56 # The required-ness of the password field gets overridden if it isn't
57 # actually required for a realm
58 password = forms.CharField(widget=forms.PasswordInput, max_length=MAX_PASSWORD_LENGTH,
59 required=False)
60 realm_subdomain = forms.CharField(max_length=Realm.MAX_REALM_SUBDOMAIN_LENGTH, required=False)
61 realm_org_type = forms.ChoiceField(((Realm.COMMUNITY, 'Community'),
62 (Realm.CORPORATE, 'Corporate')),
63 initial=Realm.COMMUNITY, required=False)
64
65 def __init__(self, *args, **kwargs):
66 # type: (*Any, **Any) -> None
67
68 # Since the superclass doesn't except random extra kwargs, we
69 # remove it from the kwargs dict before initializing.
70 realm_creation = kwargs['realm_creation']
71 del kwargs['realm_creation']
72
73 super(RegistrationForm, self).__init__(*args, **kwargs)
74 if settings.TERMS_OF_SERVICE:
75 self.fields['terms'] = forms.BooleanField(required=True)
76 self.fields['realm_name'] = forms.CharField(
77 max_length=Realm.MAX_REALM_NAME_LENGTH,
78 required=realm_creation)
79
80 def clean_full_name(self):
81 # type: () -> Text
82 try:
83 return check_full_name(self.cleaned_data['full_name'])
84 except JsonableError as e:
85 raise ValidationError(e.error)
86
87 def clean_realm_subdomain(self):
88 # type: () -> str
89 if settings.REALMS_HAVE_SUBDOMAINS:
90 error_strings = {
91 'too short': _("Subdomain needs to have length 3 or greater."),
92 'extremal dash': _("Subdomain cannot start or end with a '-'."),
93 'bad character': _("Subdomain can only have lowercase letters, numbers, and '-'s."),
94 'unavailable': _("Subdomain unavailable. Please choose a different one.")}
95 else:
96 error_strings = {
97 'too short': _("Short name needs at least 3 characters."),
98 'extremal dash': _("Short name cannot start or end with a '-'."),
99 'bad character': _("Short name can only have lowercase letters, numbers, and '-'s."),
100 'unavailable': _("Short name unavailable. Please choose a different one.")}
101 subdomain = self.cleaned_data['realm_subdomain']
102 if not subdomain:
103 return ''
104 if len(subdomain) < 3:
105 raise ValidationError(error_strings['too short'])
106 if subdomain[0] == '-' or subdomain[-1] == '-':
107 raise ValidationError(error_strings['extremal dash'])
108 if not re.match('^[a-z0-9-]*$', subdomain):
109 raise ValidationError(error_strings['bad character'])
110 if is_reserved_subdomain(subdomain) or \
111 get_realm(subdomain) is not None:
112 raise ValidationError(error_strings['unavailable'])
113 return subdomain
114
115 class ToSForm(forms.Form):
116 terms = forms.BooleanField(required=True)
117
118 class HomepageForm(forms.Form):
119 email = forms.EmailField(validators=[is_inactive])
120
121 def __init__(self, *args, **kwargs):
122 # type: (*Any, **Any) -> None
123 self.realm = kwargs.pop('realm', None)
124 super(HomepageForm, self).__init__(*args, **kwargs)
125
126 def clean_email(self):
127 # type: () -> str
128 """Returns the email if and only if the user's email address is
129 allowed to join the realm they are trying to join."""
130 email = self.cleaned_data['email']
131
132 if get_unique_open_realm():
133 return email
134
135 # Otherwise, the user is trying to join a specific realm.
136 realm = self.realm
137 if realm is None and not settings.REALMS_HAVE_SUBDOMAINS:
138 realm = get_realm_by_email_domain(email)
139
140 if realm is None:
141 if settings.REALMS_HAVE_SUBDOMAINS:
142 raise ValidationError(_("The organization you are trying to "
143 "join using {email} does not "
144 "exist.").format(email=email))
145 else:
146 raise ValidationError(_("Your email address, {email}, does not "
147 "correspond to any existing "
148 "organization.").format(email=email))
149
150 if realm.invite_required:
151 raise ValidationError(_("Please request an invite for {email} "
152 "from the organization "
153 "administrator.").format(email=email))
154
155 if not email_allowed_for_realm(email, realm):
156 raise ValidationError(
157 _("Your email address, {email}, is not in one of the domains "
158 "that are allowed to register for accounts in this organization.").format(
159 string_id=realm.string_id, email=email))
160
161 if realm.is_zephyr_mirror_realm:
162 email_is_not_mit_mailing_list(email)
163
164 return email
165
166 def email_is_not_disposable(email):
167 # type: (Text) -> None
168 if is_disposable_domain(email_to_domain(email)):
169 raise ValidationError(_("Please use your real email address."))
170
171 class RealmCreationForm(forms.Form):
172 # This form determines whether users can create a new realm.
173 email = forms.EmailField(validators=[user_email_is_unique, email_is_not_disposable])
174
175 class LoggingSetPasswordForm(SetPasswordForm):
176 def save(self, commit=True):
177 # type: (bool) -> UserProfile
178 do_change_password(self.user, self.cleaned_data['new_password1'],
179 commit=commit)
180 return self.user
181
182 class ZulipPasswordResetForm(PasswordResetForm):
183 def get_users(self, email):
184 # type: (str) -> QuerySet
185 """Given an email, return matching user(s) who should receive a reset.
186
187 This is modified from the original in that it allows non-bot
188 users who don't have a usable password to reset their
189 passwords.
190 """
191 if not password_auth_enabled:
192 logging.info("Password reset attempted for %s even though password auth is disabled." % (email,))
193 return []
194 result = UserProfile.objects.filter(email__iexact=email, is_active=True,
195 is_bot=False)
196 if len(result) == 0:
197 logging.info("Password reset attempted for %s; no active account." % (email,))
198 return result
199
200 def send_mail(self, subject_template_name, email_template_name,
201 context, from_email, to_email, html_email_template_name=None):
202 # type: (str, str, Dict[str, Any], str, str, str) -> None
203 """
204 Currently we don't support accounts in multiple subdomains using
205 a single email address. We override this function so that we do
206 not send a reset link to an email address if the reset attempt is
207 done on the subdomain which does not match user.realm.subdomain.
208
209 Once we start supporting accounts with the same email in
210 multiple subdomains, we may be able to refactor this function.
211
212 A second reason we override this function is so that we can send
213 the mail through the functions in zerver.lib.send_email, to match
214 how we send all other mail in the codebase.
215 """
216 user_realm = get_user_profile_by_email(to_email).realm
217 attempted_subdomain = get_subdomain(getattr(self, 'request'))
218 context['attempted_realm'] = False
219 if not check_subdomain(user_realm.subdomain, attempted_subdomain):
220 context['attempted_realm'] = get_realm(attempted_subdomain)
221
222 send_email('zerver/emails/password_reset', to_email, from_email=from_email,
223 context=context)
224
225 def save(self, *args, **kwargs):
226 # type: (*Any, **Any) -> None
227 """Currently we don't support accounts in multiple subdomains using
228 a single email addresss. We override this function so that we can
229 inject request parameter in context. This parameter will be used
230 by send_mail function.
231
232 Once we start supporting accounts with the same email in
233 multiple subdomains, we may be able to delete or refactor this
234 function.
235 """
236 setattr(self, 'request', kwargs.get('request'))
237 super(ZulipPasswordResetForm, self).save(*args, **kwargs)
238
239 class CreateUserForm(forms.Form):
240 full_name = forms.CharField(max_length=100)
241 email = forms.EmailField()
242
243 class OurAuthenticationForm(AuthenticationForm):
244 def clean_username(self):
245 # type: () -> str
246 email = self.cleaned_data['username']
247 try:
248 user_profile = get_user_profile_by_email(email)
249 except UserProfile.DoesNotExist:
250 return email
251
252 if user_profile.realm.deactivated:
253 error_msg = u"""Sorry for the trouble, but %s has been deactivated.
254
255 Please contact %s to reactivate this group.""" % (
256 user_profile.realm.name,
257 settings.ZULIP_ADMINISTRATOR)
258 raise ValidationError(mark_safe(error_msg))
259
260 if not check_subdomain(get_subdomain(self.request), user_profile.realm.subdomain):
261 logging.warning("User %s attempted to password login to wrong subdomain %s" %
262 (user_profile.email, get_subdomain(self.request)))
263 raise ValidationError(mark_safe(WRONG_SUBDOMAIN_ERROR))
264 return email
265
266 class MultiEmailField(forms.Field):
267 def to_python(self, emails):
268 # type: (Text) -> List[Text]
269 """Normalize data to a list of strings."""
270 if not emails:
271 return []
272
273 return [email.strip() for email in emails.split(',')]
274
275 def validate(self, emails):
276 # type: (List[Text]) -> None
277 """Check if value consists only of valid emails."""
278 super(MultiEmailField, self).validate(emails)
279 for email in emails:
280 validate_email(email)
281
282 class FindMyTeamForm(forms.Form):
283 emails = MultiEmailField(
284 help_text=_("Add up to 10 comma-separated email addresses."))
285
286 def clean_emails(self):
287 # type: () -> List[Text]
288 emails = self.cleaned_data['emails']
289 if len(emails) > 10:
290 raise forms.ValidationError(_("Please enter at most 10 emails."))
291
292 return emails
293
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/zerver/forms.py b/zerver/forms.py
--- a/zerver/forms.py
+++ b/zerver/forms.py
@@ -257,6 +257,12 @@
settings.ZULIP_ADMINISTRATOR)
raise ValidationError(mark_safe(error_msg))
+ if not user_profile.is_active:
+ error_msg = (u"Sorry for the trouble, but your account has been "
+ u"deactivated. Please contact %s to reactivate "
+ u"it.") % (settings.ZULIP_ADMINISTRATOR,)
+ raise ValidationError(mark_safe(error_msg))
+
if not check_subdomain(get_subdomain(self.request), user_profile.realm.subdomain):
logging.warning("User %s attempted to password login to wrong subdomain %s" %
(user_profile.email, get_subdomain(self.request)))
|
{"golden_diff": "diff --git a/zerver/forms.py b/zerver/forms.py\n--- a/zerver/forms.py\n+++ b/zerver/forms.py\n@@ -257,6 +257,12 @@\n settings.ZULIP_ADMINISTRATOR)\n raise ValidationError(mark_safe(error_msg))\n \n+ if not user_profile.is_active:\n+ error_msg = (u\"Sorry for the trouble, but your account has been \"\n+ u\"deactivated. Please contact %s to reactivate \"\n+ u\"it.\") % (settings.ZULIP_ADMINISTRATOR,)\n+ raise ValidationError(mark_safe(error_msg))\n+\n if not check_subdomain(get_subdomain(self.request), user_profile.realm.subdomain):\n logging.warning(\"User %s attempted to password login to wrong subdomain %s\" %\n (user_profile.email, get_subdomain(self.request)))\n", "issue": "Too many redirects for a account during login of deactivated account\nI deactivated my Zulip account and then signed up again and tried to sign in. For wrong password, it showed wrong password, when i entered correct credentials, chrome showed\r\n```\r\nchat.zulip.org redirected you too many times.\r\nTry clearing your cookies.\r\nERR_TOO_MANY_REDIRECTS\r\n```\r\nClearing cookies did not work as a solution for it.\n", "before_files": [{"content": "from __future__ import absolute_import\n\nfrom django import forms\nfrom django.conf import settings\nfrom django.contrib.auth.forms import SetPasswordForm, AuthenticationForm, \\\n PasswordResetForm\nfrom django.core.exceptions import ValidationError\nfrom django.core.urlresolvers import reverse\nfrom django.core.validators import validate_email\nfrom django.db.models.query import QuerySet\nfrom django.utils.translation import ugettext as _\nfrom jinja2 import Markup as mark_safe\n\nfrom zerver.lib.actions import do_change_password, is_inactive, user_email_is_unique\nfrom zerver.lib.name_restrictions import is_reserved_subdomain, is_disposable_domain\nfrom zerver.lib.request import JsonableError\nfrom zerver.lib.send_email import send_email\nfrom zerver.lib.users import check_full_name\nfrom zerver.lib.utils import get_subdomain, check_subdomain\nfrom zerver.models import Realm, get_user_profile_by_email, UserProfile, \\\n get_realm_by_email_domain, get_realm, \\\n get_unique_open_realm, email_to_domain, email_allowed_for_realm\nfrom zproject.backends import password_auth_enabled\n\nimport logging\nimport re\nimport DNS\n\nfrom typing import Any, Callable, List, Optional, Text, Dict\n\nMIT_VALIDATION_ERROR = u'That user does not exist at MIT or is a ' + \\\n u'<a href=\"https://ist.mit.edu/email-lists\">mailing list</a>. ' + \\\n u'If you want to sign up an alias for Zulip, ' + \\\n u'<a href=\"mailto:[email protected]\">contact us</a>.'\nWRONG_SUBDOMAIN_ERROR = \"Your Zulip account is not a member of the \" + \\\n \"organization associated with this subdomain. 
\" + \\\n \"Please contact %s with any questions!\" % (settings.ZULIP_ADMINISTRATOR,)\n\ndef email_is_not_mit_mailing_list(email):\n # type: (Text) -> None\n \"\"\"Prevent MIT mailing lists from signing up for Zulip\"\"\"\n if \"@mit.edu\" in email:\n username = email.rsplit(\"@\", 1)[0]\n # Check whether the user exists and can get mail.\n try:\n DNS.dnslookup(\"%s.pobox.ns.athena.mit.edu\" % username, DNS.Type.TXT)\n except DNS.Base.ServerError as e:\n if e.rcode == DNS.Status.NXDOMAIN:\n raise ValidationError(mark_safe(MIT_VALIDATION_ERROR))\n else:\n raise\n\nclass RegistrationForm(forms.Form):\n MAX_PASSWORD_LENGTH = 100\n full_name = forms.CharField(max_length=100)\n # The required-ness of the password field gets overridden if it isn't\n # actually required for a realm\n password = forms.CharField(widget=forms.PasswordInput, max_length=MAX_PASSWORD_LENGTH,\n required=False)\n realm_subdomain = forms.CharField(max_length=Realm.MAX_REALM_SUBDOMAIN_LENGTH, required=False)\n realm_org_type = forms.ChoiceField(((Realm.COMMUNITY, 'Community'),\n (Realm.CORPORATE, 'Corporate')),\n initial=Realm.COMMUNITY, required=False)\n\n def __init__(self, *args, **kwargs):\n # type: (*Any, **Any) -> None\n\n # Since the superclass doesn't except random extra kwargs, we\n # remove it from the kwargs dict before initializing.\n realm_creation = kwargs['realm_creation']\n del kwargs['realm_creation']\n\n super(RegistrationForm, self).__init__(*args, **kwargs)\n if settings.TERMS_OF_SERVICE:\n self.fields['terms'] = forms.BooleanField(required=True)\n self.fields['realm_name'] = forms.CharField(\n max_length=Realm.MAX_REALM_NAME_LENGTH,\n required=realm_creation)\n\n def clean_full_name(self):\n # type: () -> Text\n try:\n return check_full_name(self.cleaned_data['full_name'])\n except JsonableError as e:\n raise ValidationError(e.error)\n\n def clean_realm_subdomain(self):\n # type: () -> str\n if settings.REALMS_HAVE_SUBDOMAINS:\n error_strings = {\n 'too short': _(\"Subdomain needs to have length 3 or greater.\"),\n 'extremal dash': _(\"Subdomain cannot start or end with a '-'.\"),\n 'bad character': _(\"Subdomain can only have lowercase letters, numbers, and '-'s.\"),\n 'unavailable': _(\"Subdomain unavailable. Please choose a different one.\")}\n else:\n error_strings = {\n 'too short': _(\"Short name needs at least 3 characters.\"),\n 'extremal dash': _(\"Short name cannot start or end with a '-'.\"),\n 'bad character': _(\"Short name can only have lowercase letters, numbers, and '-'s.\"),\n 'unavailable': _(\"Short name unavailable. 
Please choose a different one.\")}\n subdomain = self.cleaned_data['realm_subdomain']\n if not subdomain:\n return ''\n if len(subdomain) < 3:\n raise ValidationError(error_strings['too short'])\n if subdomain[0] == '-' or subdomain[-1] == '-':\n raise ValidationError(error_strings['extremal dash'])\n if not re.match('^[a-z0-9-]*$', subdomain):\n raise ValidationError(error_strings['bad character'])\n if is_reserved_subdomain(subdomain) or \\\n get_realm(subdomain) is not None:\n raise ValidationError(error_strings['unavailable'])\n return subdomain\n\nclass ToSForm(forms.Form):\n terms = forms.BooleanField(required=True)\n\nclass HomepageForm(forms.Form):\n email = forms.EmailField(validators=[is_inactive])\n\n def __init__(self, *args, **kwargs):\n # type: (*Any, **Any) -> None\n self.realm = kwargs.pop('realm', None)\n super(HomepageForm, self).__init__(*args, **kwargs)\n\n def clean_email(self):\n # type: () -> str\n \"\"\"Returns the email if and only if the user's email address is\n allowed to join the realm they are trying to join.\"\"\"\n email = self.cleaned_data['email']\n\n if get_unique_open_realm():\n return email\n\n # Otherwise, the user is trying to join a specific realm.\n realm = self.realm\n if realm is None and not settings.REALMS_HAVE_SUBDOMAINS:\n realm = get_realm_by_email_domain(email)\n\n if realm is None:\n if settings.REALMS_HAVE_SUBDOMAINS:\n raise ValidationError(_(\"The organization you are trying to \"\n \"join using {email} does not \"\n \"exist.\").format(email=email))\n else:\n raise ValidationError(_(\"Your email address, {email}, does not \"\n \"correspond to any existing \"\n \"organization.\").format(email=email))\n\n if realm.invite_required:\n raise ValidationError(_(\"Please request an invite for {email} \"\n \"from the organization \"\n \"administrator.\").format(email=email))\n\n if not email_allowed_for_realm(email, realm):\n raise ValidationError(\n _(\"Your email address, {email}, is not in one of the domains \"\n \"that are allowed to register for accounts in this organization.\").format(\n string_id=realm.string_id, email=email))\n\n if realm.is_zephyr_mirror_realm:\n email_is_not_mit_mailing_list(email)\n\n return email\n\ndef email_is_not_disposable(email):\n # type: (Text) -> None\n if is_disposable_domain(email_to_domain(email)):\n raise ValidationError(_(\"Please use your real email address.\"))\n\nclass RealmCreationForm(forms.Form):\n # This form determines whether users can create a new realm.\n email = forms.EmailField(validators=[user_email_is_unique, email_is_not_disposable])\n\nclass LoggingSetPasswordForm(SetPasswordForm):\n def save(self, commit=True):\n # type: (bool) -> UserProfile\n do_change_password(self.user, self.cleaned_data['new_password1'],\n commit=commit)\n return self.user\n\nclass ZulipPasswordResetForm(PasswordResetForm):\n def get_users(self, email):\n # type: (str) -> QuerySet\n \"\"\"Given an email, return matching user(s) who should receive a reset.\n\n This is modified from the original in that it allows non-bot\n users who don't have a usable password to reset their\n passwords.\n \"\"\"\n if not password_auth_enabled:\n logging.info(\"Password reset attempted for %s even though password auth is disabled.\" % (email,))\n return []\n result = UserProfile.objects.filter(email__iexact=email, is_active=True,\n is_bot=False)\n if len(result) == 0:\n logging.info(\"Password reset attempted for %s; no active account.\" % (email,))\n return result\n\n def send_mail(self, subject_template_name, email_template_name,\n 
context, from_email, to_email, html_email_template_name=None):\n # type: (str, str, Dict[str, Any], str, str, str) -> None\n \"\"\"\n Currently we don't support accounts in multiple subdomains using\n a single email address. We override this function so that we do\n not send a reset link to an email address if the reset attempt is\n done on the subdomain which does not match user.realm.subdomain.\n\n Once we start supporting accounts with the same email in\n multiple subdomains, we may be able to refactor this function.\n\n A second reason we override this function is so that we can send\n the mail through the functions in zerver.lib.send_email, to match\n how we send all other mail in the codebase.\n \"\"\"\n user_realm = get_user_profile_by_email(to_email).realm\n attempted_subdomain = get_subdomain(getattr(self, 'request'))\n context['attempted_realm'] = False\n if not check_subdomain(user_realm.subdomain, attempted_subdomain):\n context['attempted_realm'] = get_realm(attempted_subdomain)\n\n send_email('zerver/emails/password_reset', to_email, from_email=from_email,\n context=context)\n\n def save(self, *args, **kwargs):\n # type: (*Any, **Any) -> None\n \"\"\"Currently we don't support accounts in multiple subdomains using\n a single email addresss. We override this function so that we can\n inject request parameter in context. This parameter will be used\n by send_mail function.\n\n Once we start supporting accounts with the same email in\n multiple subdomains, we may be able to delete or refactor this\n function.\n \"\"\"\n setattr(self, 'request', kwargs.get('request'))\n super(ZulipPasswordResetForm, self).save(*args, **kwargs)\n\nclass CreateUserForm(forms.Form):\n full_name = forms.CharField(max_length=100)\n email = forms.EmailField()\n\nclass OurAuthenticationForm(AuthenticationForm):\n def clean_username(self):\n # type: () -> str\n email = self.cleaned_data['username']\n try:\n user_profile = get_user_profile_by_email(email)\n except UserProfile.DoesNotExist:\n return email\n\n if user_profile.realm.deactivated:\n error_msg = u\"\"\"Sorry for the trouble, but %s has been deactivated.\n\nPlease contact %s to reactivate this group.\"\"\" % (\n user_profile.realm.name,\n settings.ZULIP_ADMINISTRATOR)\n raise ValidationError(mark_safe(error_msg))\n\n if not check_subdomain(get_subdomain(self.request), user_profile.realm.subdomain):\n logging.warning(\"User %s attempted to password login to wrong subdomain %s\" %\n (user_profile.email, get_subdomain(self.request)))\n raise ValidationError(mark_safe(WRONG_SUBDOMAIN_ERROR))\n return email\n\nclass MultiEmailField(forms.Field):\n def to_python(self, emails):\n # type: (Text) -> List[Text]\n \"\"\"Normalize data to a list of strings.\"\"\"\n if not emails:\n return []\n\n return [email.strip() for email in emails.split(',')]\n\n def validate(self, emails):\n # type: (List[Text]) -> None\n \"\"\"Check if value consists only of valid emails.\"\"\"\n super(MultiEmailField, self).validate(emails)\n for email in emails:\n validate_email(email)\n\nclass FindMyTeamForm(forms.Form):\n emails = MultiEmailField(\n help_text=_(\"Add up to 10 comma-separated email addresses.\"))\n\n def clean_emails(self):\n # type: () -> List[Text]\n emails = self.cleaned_data['emails']\n if len(emails) > 10:\n raise forms.ValidationError(_(\"Please enter at most 10 emails.\"))\n\n return emails\n", "path": "zerver/forms.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nfrom django import forms\nfrom django.conf import settings\nfrom 
django.contrib.auth.forms import SetPasswordForm, AuthenticationForm, \\\n PasswordResetForm\nfrom django.core.exceptions import ValidationError\nfrom django.core.urlresolvers import reverse\nfrom django.core.validators import validate_email\nfrom django.db.models.query import QuerySet\nfrom django.utils.translation import ugettext as _\nfrom jinja2 import Markup as mark_safe\n\nfrom zerver.lib.actions import do_change_password, is_inactive, user_email_is_unique\nfrom zerver.lib.name_restrictions import is_reserved_subdomain, is_disposable_domain\nfrom zerver.lib.request import JsonableError\nfrom zerver.lib.send_email import send_email\nfrom zerver.lib.users import check_full_name\nfrom zerver.lib.utils import get_subdomain, check_subdomain\nfrom zerver.models import Realm, get_user_profile_by_email, UserProfile, \\\n get_realm_by_email_domain, get_realm, \\\n get_unique_open_realm, email_to_domain, email_allowed_for_realm\nfrom zproject.backends import password_auth_enabled\n\nimport logging\nimport re\nimport DNS\n\nfrom typing import Any, Callable, List, Optional, Text, Dict\n\nMIT_VALIDATION_ERROR = u'That user does not exist at MIT or is a ' + \\\n u'<a href=\"https://ist.mit.edu/email-lists\">mailing list</a>. ' + \\\n u'If you want to sign up an alias for Zulip, ' + \\\n u'<a href=\"mailto:[email protected]\">contact us</a>.'\nWRONG_SUBDOMAIN_ERROR = \"Your Zulip account is not a member of the \" + \\\n \"organization associated with this subdomain. \" + \\\n \"Please contact %s with any questions!\" % (settings.ZULIP_ADMINISTRATOR,)\n\ndef email_is_not_mit_mailing_list(email):\n # type: (Text) -> None\n \"\"\"Prevent MIT mailing lists from signing up for Zulip\"\"\"\n if \"@mit.edu\" in email:\n username = email.rsplit(\"@\", 1)[0]\n # Check whether the user exists and can get mail.\n try:\n DNS.dnslookup(\"%s.pobox.ns.athena.mit.edu\" % username, DNS.Type.TXT)\n except DNS.Base.ServerError as e:\n if e.rcode == DNS.Status.NXDOMAIN:\n raise ValidationError(mark_safe(MIT_VALIDATION_ERROR))\n else:\n raise\n\nclass RegistrationForm(forms.Form):\n MAX_PASSWORD_LENGTH = 100\n full_name = forms.CharField(max_length=100)\n # The required-ness of the password field gets overridden if it isn't\n # actually required for a realm\n password = forms.CharField(widget=forms.PasswordInput, max_length=MAX_PASSWORD_LENGTH,\n required=False)\n realm_subdomain = forms.CharField(max_length=Realm.MAX_REALM_SUBDOMAIN_LENGTH, required=False)\n realm_org_type = forms.ChoiceField(((Realm.COMMUNITY, 'Community'),\n (Realm.CORPORATE, 'Corporate')),\n initial=Realm.COMMUNITY, required=False)\n\n def __init__(self, *args, **kwargs):\n # type: (*Any, **Any) -> None\n\n # Since the superclass doesn't except random extra kwargs, we\n # remove it from the kwargs dict before initializing.\n realm_creation = kwargs['realm_creation']\n del kwargs['realm_creation']\n\n super(RegistrationForm, self).__init__(*args, **kwargs)\n if settings.TERMS_OF_SERVICE:\n self.fields['terms'] = forms.BooleanField(required=True)\n self.fields['realm_name'] = forms.CharField(\n max_length=Realm.MAX_REALM_NAME_LENGTH,\n required=realm_creation)\n\n def clean_full_name(self):\n # type: () -> Text\n try:\n return check_full_name(self.cleaned_data['full_name'])\n except JsonableError as e:\n raise ValidationError(e.error)\n\n def clean_realm_subdomain(self):\n # type: () -> str\n if settings.REALMS_HAVE_SUBDOMAINS:\n error_strings = {\n 'too short': _(\"Subdomain needs to have length 3 or greater.\"),\n 'extremal dash': _(\"Subdomain 
cannot start or end with a '-'.\"),\n 'bad character': _(\"Subdomain can only have lowercase letters, numbers, and '-'s.\"),\n 'unavailable': _(\"Subdomain unavailable. Please choose a different one.\")}\n else:\n error_strings = {\n 'too short': _(\"Short name needs at least 3 characters.\"),\n 'extremal dash': _(\"Short name cannot start or end with a '-'.\"),\n 'bad character': _(\"Short name can only have lowercase letters, numbers, and '-'s.\"),\n 'unavailable': _(\"Short name unavailable. Please choose a different one.\")}\n subdomain = self.cleaned_data['realm_subdomain']\n if not subdomain:\n return ''\n if len(subdomain) < 3:\n raise ValidationError(error_strings['too short'])\n if subdomain[0] == '-' or subdomain[-1] == '-':\n raise ValidationError(error_strings['extremal dash'])\n if not re.match('^[a-z0-9-]*$', subdomain):\n raise ValidationError(error_strings['bad character'])\n if is_reserved_subdomain(subdomain) or \\\n get_realm(subdomain) is not None:\n raise ValidationError(error_strings['unavailable'])\n return subdomain\n\nclass ToSForm(forms.Form):\n terms = forms.BooleanField(required=True)\n\nclass HomepageForm(forms.Form):\n email = forms.EmailField(validators=[is_inactive])\n\n def __init__(self, *args, **kwargs):\n # type: (*Any, **Any) -> None\n self.realm = kwargs.pop('realm', None)\n super(HomepageForm, self).__init__(*args, **kwargs)\n\n def clean_email(self):\n # type: () -> str\n \"\"\"Returns the email if and only if the user's email address is\n allowed to join the realm they are trying to join.\"\"\"\n email = self.cleaned_data['email']\n\n if get_unique_open_realm():\n return email\n\n # Otherwise, the user is trying to join a specific realm.\n realm = self.realm\n if realm is None and not settings.REALMS_HAVE_SUBDOMAINS:\n realm = get_realm_by_email_domain(email)\n\n if realm is None:\n if settings.REALMS_HAVE_SUBDOMAINS:\n raise ValidationError(_(\"The organization you are trying to \"\n \"join using {email} does not \"\n \"exist.\").format(email=email))\n else:\n raise ValidationError(_(\"Your email address, {email}, does not \"\n \"correspond to any existing \"\n \"organization.\").format(email=email))\n\n if realm.invite_required:\n raise ValidationError(_(\"Please request an invite for {email} \"\n \"from the organization \"\n \"administrator.\").format(email=email))\n\n if not email_allowed_for_realm(email, realm):\n raise ValidationError(\n _(\"Your email address, {email}, is not in one of the domains \"\n \"that are allowed to register for accounts in this organization.\").format(\n string_id=realm.string_id, email=email))\n\n if realm.is_zephyr_mirror_realm:\n email_is_not_mit_mailing_list(email)\n\n return email\n\ndef email_is_not_disposable(email):\n # type: (Text) -> None\n if is_disposable_domain(email_to_domain(email)):\n raise ValidationError(_(\"Please use your real email address.\"))\n\nclass RealmCreationForm(forms.Form):\n # This form determines whether users can create a new realm.\n email = forms.EmailField(validators=[user_email_is_unique, email_is_not_disposable])\n\nclass LoggingSetPasswordForm(SetPasswordForm):\n def save(self, commit=True):\n # type: (bool) -> UserProfile\n do_change_password(self.user, self.cleaned_data['new_password1'],\n commit=commit)\n return self.user\n\nclass ZulipPasswordResetForm(PasswordResetForm):\n def get_users(self, email):\n # type: (str) -> QuerySet\n \"\"\"Given an email, return matching user(s) who should receive a reset.\n\n This is modified from the original in that it allows non-bot\n 
users who don't have a usable password to reset their\n passwords.\n \"\"\"\n if not password_auth_enabled:\n logging.info(\"Password reset attempted for %s even though password auth is disabled.\" % (email,))\n return []\n result = UserProfile.objects.filter(email__iexact=email, is_active=True,\n is_bot=False)\n if len(result) == 0:\n logging.info(\"Password reset attempted for %s; no active account.\" % (email,))\n return result\n\n def send_mail(self, subject_template_name, email_template_name,\n context, from_email, to_email, html_email_template_name=None):\n # type: (str, str, Dict[str, Any], str, str, str) -> None\n \"\"\"\n Currently we don't support accounts in multiple subdomains using\n a single email address. We override this function so that we do\n not send a reset link to an email address if the reset attempt is\n done on the subdomain which does not match user.realm.subdomain.\n\n Once we start supporting accounts with the same email in\n multiple subdomains, we may be able to refactor this function.\n\n A second reason we override this function is so that we can send\n the mail through the functions in zerver.lib.send_email, to match\n how we send all other mail in the codebase.\n \"\"\"\n user_realm = get_user_profile_by_email(to_email).realm\n attempted_subdomain = get_subdomain(getattr(self, 'request'))\n context['attempted_realm'] = False\n if not check_subdomain(user_realm.subdomain, attempted_subdomain):\n context['attempted_realm'] = get_realm(attempted_subdomain)\n\n send_email('zerver/emails/password_reset', to_email, from_email=from_email,\n context=context)\n\n def save(self, *args, **kwargs):\n # type: (*Any, **Any) -> None\n \"\"\"Currently we don't support accounts in multiple subdomains using\n a single email addresss. We override this function so that we can\n inject request parameter in context. This parameter will be used\n by send_mail function.\n\n Once we start supporting accounts with the same email in\n multiple subdomains, we may be able to delete or refactor this\n function.\n \"\"\"\n setattr(self, 'request', kwargs.get('request'))\n super(ZulipPasswordResetForm, self).save(*args, **kwargs)\n\nclass CreateUserForm(forms.Form):\n full_name = forms.CharField(max_length=100)\n email = forms.EmailField()\n\nclass OurAuthenticationForm(AuthenticationForm):\n def clean_username(self):\n # type: () -> str\n email = self.cleaned_data['username']\n try:\n user_profile = get_user_profile_by_email(email)\n except UserProfile.DoesNotExist:\n return email\n\n if user_profile.realm.deactivated:\n error_msg = u\"\"\"Sorry for the trouble, but %s has been deactivated.\n\nPlease contact %s to reactivate this group.\"\"\" % (\n user_profile.realm.name,\n settings.ZULIP_ADMINISTRATOR)\n raise ValidationError(mark_safe(error_msg))\n\n if not user_profile.is_active:\n error_msg = (u\"Sorry for the trouble, but your account has been \"\n u\"deactivated. 
Please contact %s to reactivate \"\n u\"it.\") % (settings.ZULIP_ADMINISTRATOR,)\n raise ValidationError(mark_safe(error_msg))\n\n if not check_subdomain(get_subdomain(self.request), user_profile.realm.subdomain):\n logging.warning(\"User %s attempted to password login to wrong subdomain %s\" %\n (user_profile.email, get_subdomain(self.request)))\n raise ValidationError(mark_safe(WRONG_SUBDOMAIN_ERROR))\n return email\n\nclass MultiEmailField(forms.Field):\n def to_python(self, emails):\n # type: (Text) -> List[Text]\n \"\"\"Normalize data to a list of strings.\"\"\"\n if not emails:\n return []\n\n return [email.strip() for email in emails.split(',')]\n\n def validate(self, emails):\n # type: (List[Text]) -> None\n \"\"\"Check if value consists only of valid emails.\"\"\"\n super(MultiEmailField, self).validate(emails)\n for email in emails:\n validate_email(email)\n\nclass FindMyTeamForm(forms.Form):\n emails = MultiEmailField(\n help_text=_(\"Add up to 10 comma-separated email addresses.\"))\n\n def clean_emails(self):\n # type: () -> List[Text]\n emails = self.cleaned_data['emails']\n if len(emails) > 10:\n raise forms.ValidationError(_(\"Please enter at most 10 emails.\"))\n\n return emails\n", "path": "zerver/forms.py"}]}
| 3,821 | 179 |
gh_patches_debug_36741
|
rasdani/github-patches
|
git_diff
|
cupy__cupy-3011
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Columnar matrix slice doesn't work properly with ravel
When creating a view into a 2d array using `[:, n:]` notation, the resulting view doesn't work properly with `ravel` or functions that depend on it.
```
import cupy as cp
a = cp.zeros((3, 3))
b = a[:, 1:]
b.ravel()[0] = 1
print(b)
b[0, 0] = 2
print(a)
```
``` output
[[0. 0.]
[0. 0.]
[0. 0.]]
[[0. 2. 0.]
[0. 0. 0.]
[0. 0. 0.]]
```
As you can see, `b[0, 0]` still works as expected, but the array produced by `ravel` is a copy. This is not the case for the array produced with a row-wise slice:
```
c = a[1:, :]
c.ravel()[0] = 3
print(a)
```
``` output
array([[0., 2., 0.],
[3., 0., 0.],
[0., 0., 0.]])
```
This affects me because I'm trying to fill out a tridiagonal matrix using `cupy`, but the upper diagonal can't be properly referenced:
```
d = cp.zeros((3, 3))
cp.fill_diagonal(d, 3)
cp.fill_diagonal(d[1:], 2)
cp.fill_diagonal(d[:, 1:], 1)
print(d)
```
``` output
[[3. 0. 0.]
[2. 3. 0.]
[0. 2. 3.]]
```
```
In [182]: cp.show_config()
CuPy Version : 7.1.0
CUDA Root : /usr/local/cuda
CUDA Build Version : 10000
CUDA Driver Version : 10000
CUDA Runtime Version : 10000
cuBLAS Version : 10000
cuFFT Version : 10000
cuRAND Version : 10000
cuSOLVER Version : (10, 0, 0)
cuSPARSE Version : 10000
NVRTC Version : (10, 0)
cuDNN Build Version : 7605
cuDNN Version : 7600
NCCL Build Version : 2406
NCCL Runtime Version : 2406
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cupy/indexing/insert.py`
Content:
```
1 import numpy
2
3 import cupy
4
5
6 def place(arr, mask, vals):
7 """Change elements of an array based on conditional and input values.
8
9 This function uses the first N elements of `vals`, where N is the number
10 of true values in `mask`.
11
12 Args:
13 arr (cupy.ndarray): Array to put data into.
14 mask (array-like): Boolean mask array. Must have the same size as `a`.
15 vals (array-like): Values to put into `a`. Only the first
16 N elements are used, where N is the number of True values in
17 `mask`. If `vals` is smaller than N, it will be repeated, and if
18 elements of `a` are to be masked, this sequence must be non-empty.
19
20 Examples
21 --------
22 >>> arr = np.arange(6).reshape(2, 3)
23 >>> np.place(arr, arr>2, [44, 55])
24 >>> arr
25 array([[ 0, 1, 2],
26 [44, 55, 44]])
27
28 .. warning::
29
30 This function may synchronize the device.
31
32 .. seealso:: :func:`numpy.place`
33 """
34 # TODO(niboshi): Avoid nonzero which may synchronize the device.
35 mask = cupy.asarray(mask)
36 if arr.size != mask.size:
37 raise ValueError('Mask and data must be the same size.')
38 vals = cupy.asarray(vals)
39
40 mask_indices = mask.ravel().nonzero()[0] # may synchronize
41 if mask_indices.size == 0:
42 return
43 if vals.size == 0:
44 raise ValueError('Cannot insert from an empty array.')
45 arr.put(mask_indices, vals, mode='wrap')
46
47
48 def put(a, ind, v, mode='wrap'):
49 """Replaces specified elements of an array with given values.
50
51 Args:
52 a (cupy.ndarray): Target array.
53 ind (array-like): Target indices, interpreted as integers.
54 v (array-like): Values to place in `a` at target indices.
55 If `v` is shorter than `ind` it will be repeated as necessary.
56 mode (str): How out-of-bounds indices will behave. Its value must be
57 either `'raise'`, `'wrap'` or `'clip'`. Otherwise,
58 :class:`TypeError` is raised.
59
60 .. note::
61 Default `mode` is set to `'wrap'` to avoid unintended performance drop.
62 If you need NumPy's behavior, please pass `mode='raise'` manually.
63
64 .. seealso:: :func:`numpy.put`
65 """
66 a.put(ind, v, mode=mode)
67
68
69 # TODO(okuta): Implement putmask
70
71
72 def fill_diagonal(a, val, wrap=False):
73 """Fills the main diagonal of the given array of any dimensionality.
74
75 For an array `a` with ``a.ndim > 2``, the diagonal is the list of
76 locations with indices ``a[i, i, ..., i]`` all identical. This function
77 modifies the input array in-place, it does not return a value.
78
79 Args:
80 a (cupy.ndarray): The array, at least 2-D.
81 val (scalar): The value to be written on the diagonal.
82 Its type must be compatible with that of the array a.
83 wrap (bool): If specified, the diagonal is "wrapped" after N columns.
84 This affects only tall matrices.
85
86 Examples
87 --------
88 >>> a = cupy.zeros((3, 3), int)
89 >>> cupy.fill_diagonal(a, 5)
90 >>> a
91 array([[5, 0, 0],
92 [0, 5, 0],
93 [0, 0, 5]])
94
95 .. seealso:: :func:`numpy.fill_diagonal`
96 """
97 # The followings are imported from the original numpy
98 if a.ndim < 2:
99 raise ValueError('array must be at least 2-d')
100 end = None
101 if a.ndim == 2:
102 step = a.shape[1] + 1
103 if not wrap:
104 end = a.shape[1] * a.shape[1]
105 else:
106 if not numpy.alltrue(numpy.diff(a.shape) == 0):
107 raise ValueError('All dimensions of input must be of equal length')
108 step = 1 + numpy.cumprod(a.shape[:-1]).sum()
109
110 # Since the current cupy does not support a.flat,
111 # we use a.ravel() instead of a.flat
112 a.ravel()[:end:step] = val
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cupy/indexing/insert.py b/cupy/indexing/insert.py
--- a/cupy/indexing/insert.py
+++ b/cupy/indexing/insert.py
@@ -1,6 +1,12 @@
+import string
+
import numpy
import cupy
+from cupy import util
+from cupy.core import _carray
+from cupy.core import _scalar
+from cupy.cuda import device
def place(arr, mask, vals):
@@ -69,6 +75,33 @@
# TODO(okuta): Implement putmask
+_fill_diagonal_template = string.Template(r'''
+#include <cupy/complex.cuh>
+#include <cupy/carray.cuh>
+extern "C" __global__
+void cupy_fill_diagonal(CArray<${type}, ${a_ndim}> a,
+ CIndexer<${a_ndim}> a_ind,
+ int start,
+ int stop,
+ int step,
+ CArray<${type}, ${val_ndim}> val,
+ CIndexer<${val_ndim}> val_ind) {
+ int n = (stop - start) / step + 1;
+ CUPY_FOR(i, n) {
+ a_ind.set(start + i * step);
+ val_ind.set(i % val_ind.size());
+ a[a_ind.get()] = val[val_ind.get()];
+ }
+}''')
+
+
[email protected](for_each_device=True)
+def _fill_diagonal_kernel(type, a_ndim, val_ndim):
+ code = _fill_diagonal_template.substitute(
+ type=type, a_ndim=a_ndim, val_ndim=val_ndim)
+ return cupy.RawKernel(code, 'cupy_fill_diagonal')
+
+
def fill_diagonal(a, val, wrap=False):
"""Fills the main diagonal of the given array of any dimensionality.
@@ -97,7 +130,7 @@
# The followings are imported from the original numpy
if a.ndim < 2:
raise ValueError('array must be at least 2-d')
- end = None
+ end = a.size
if a.ndim == 2:
step = a.shape[1] + 1
if not wrap:
@@ -107,6 +140,21 @@
raise ValueError('All dimensions of input must be of equal length')
step = 1 + numpy.cumprod(a.shape[:-1]).sum()
- # Since the current cupy does not support a.flat,
- # we use a.ravel() instead of a.flat
- a.ravel()[:end:step] = val
+ val = cupy.asarray(val, dtype=a.dtype)
+
+ dev_id = device.get_device_id()
+ for arr in [a, val]:
+ if arr.data.device_id != dev_id:
+ raise ValueError(
+ 'Array device must be same as the current '
+ 'device: array device = %d while current = %d'
+ % (arr.data.device_id, dev_id))
+
+ typename = _scalar.get_typename(a.dtype)
+ fill_diagonal_kernel = _fill_diagonal_kernel(typename, a.ndim, val.ndim)
+
+ size = end // step + 1
+ a_ind = _carray.Indexer(a.shape)
+ val_ind = _carray.Indexer(val.shape)
+ fill_diagonal_kernel.kernel.linear_launch(
+ size, (a, a_ind, 0, end, step, val, val_ind))
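
A short usage sketch of the patched behavior (illustrative only; assumes the diff above is applied and a CUDA device is available). Because the new RawKernel indexes the array through its strides instead of going through `ravel()`, the diagonal write now also reaches a column-sliced view:

```python
import cupy as cp

d = cp.zeros((3, 3))
cp.fill_diagonal(d, 3)          # main diagonal
cp.fill_diagonal(d[:, 1:], 1)   # upper diagonal via a non-contiguous view
print(d)
# [[3. 1. 0.]
#  [0. 3. 1.]
#  [0. 0. 3.]]
```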
|
{"golden_diff": "diff --git a/cupy/indexing/insert.py b/cupy/indexing/insert.py\n--- a/cupy/indexing/insert.py\n+++ b/cupy/indexing/insert.py\n@@ -1,6 +1,12 @@\n+import string\n+\n import numpy\n \n import cupy\n+from cupy import util\n+from cupy.core import _carray\n+from cupy.core import _scalar\n+from cupy.cuda import device\n \n \n def place(arr, mask, vals):\n@@ -69,6 +75,33 @@\n # TODO(okuta): Implement putmask\n \n \n+_fill_diagonal_template = string.Template(r'''\n+#include <cupy/complex.cuh>\n+#include <cupy/carray.cuh>\n+extern \"C\" __global__\n+void cupy_fill_diagonal(CArray<${type}, ${a_ndim}> a,\n+ CIndexer<${a_ndim}> a_ind,\n+ int start,\n+ int stop,\n+ int step,\n+ CArray<${type}, ${val_ndim}> val,\n+ CIndexer<${val_ndim}> val_ind) {\n+ int n = (stop - start) / step + 1;\n+ CUPY_FOR(i, n) {\n+ a_ind.set(start + i * step);\n+ val_ind.set(i % val_ind.size());\n+ a[a_ind.get()] = val[val_ind.get()];\n+ }\n+}''')\n+\n+\[email protected](for_each_device=True)\n+def _fill_diagonal_kernel(type, a_ndim, val_ndim):\n+ code = _fill_diagonal_template.substitute(\n+ type=type, a_ndim=a_ndim, val_ndim=val_ndim)\n+ return cupy.RawKernel(code, 'cupy_fill_diagonal')\n+\n+\n def fill_diagonal(a, val, wrap=False):\n \"\"\"Fills the main diagonal of the given array of any dimensionality.\n \n@@ -97,7 +130,7 @@\n # The followings are imported from the original numpy\n if a.ndim < 2:\n raise ValueError('array must be at least 2-d')\n- end = None\n+ end = a.size\n if a.ndim == 2:\n step = a.shape[1] + 1\n if not wrap:\n@@ -107,6 +140,21 @@\n raise ValueError('All dimensions of input must be of equal length')\n step = 1 + numpy.cumprod(a.shape[:-1]).sum()\n \n- # Since the current cupy does not support a.flat,\n- # we use a.ravel() instead of a.flat\n- a.ravel()[:end:step] = val\n+ val = cupy.asarray(val, dtype=a.dtype)\n+\n+ dev_id = device.get_device_id()\n+ for arr in [a, val]:\n+ if arr.data.device_id != dev_id:\n+ raise ValueError(\n+ 'Array device must be same as the current '\n+ 'device: array device = %d while current = %d'\n+ % (arr.data.device_id, dev_id))\n+\n+ typename = _scalar.get_typename(a.dtype)\n+ fill_diagonal_kernel = _fill_diagonal_kernel(typename, a.ndim, val.ndim)\n+\n+ size = end // step + 1\n+ a_ind = _carray.Indexer(a.shape)\n+ val_ind = _carray.Indexer(val.shape)\n+ fill_diagonal_kernel.kernel.linear_launch(\n+ size, (a, a_ind, 0, end, step, val, val_ind))\n", "issue": "Columnar matrix slice doesn't work properly with ravel\nWhen creating a view into a 2d array using `[:, n:]` notation, the resulting view doesn't work properly with `ravel` or functions that depend on it.\r\n\r\n```\r\nimport cupy as cp\r\na = cp.zeros((3, 3))\r\nb = a[:, 1:]\r\nb.ravel()[0] = 1\r\nprint(b)\r\nb[0, 0] = 2\r\nprint(a)\r\n```\r\n\r\n``` output\r\n[[0. 0.]\r\n [0. 0.]\r\n [0. 0.]]\r\n[[0. 2. 0.]\r\n [0. 0. 0.]\r\n [0. 0. 0.]]\r\n```\r\n\r\nAs you can see, `b[0, 0]` still works as expected, but the array produced by `ravel` is a copy. This is not the case for the array produced with a row-wise slice:\r\n\r\n```\r\nc = a[1:, :]\r\nc.ravel()[0] = 3\r\nprint(a)\r\n```\r\n\r\n``` output\r\narray([[0., 2., 0.],\r\n [3., 0., 0.],\r\n [0., 0., 0.]])\r\n```\r\n\r\nThis affects me because I'm trying to fill out a tridiagonal matrix using `cupy`, but the upper diagonal can't be properly referenced:\r\n```\r\nd = cp.zeros((3, 3))\r\ncp.fill_diagonal(d, 3)\r\ncp.fill_diagonal(d[1:], 2)\r\ncp.fill_diagonal(d[:, 1:], 1)\r\nprint(d)\r\n```\r\n\r\n``` output\r\n[[3. 0. 0.]\r\n [2. 3. 0.]\r\n [0. 2. 
3.]]\r\n```\r\n\r\n```\r\nIn [182]: cp.show_config()\r\nCuPy Version : 7.1.0\r\nCUDA Root : /usr/local/cuda\r\nCUDA Build Version : 10000\r\nCUDA Driver Version : 10000\r\nCUDA Runtime Version : 10000\r\ncuBLAS Version : 10000\r\ncuFFT Version : 10000\r\ncuRAND Version : 10000\r\ncuSOLVER Version : (10, 0, 0)\r\ncuSPARSE Version : 10000\r\nNVRTC Version : (10, 0)\r\ncuDNN Build Version : 7605\r\ncuDNN Version : 7600\r\nNCCL Build Version : 2406\r\nNCCL Runtime Version : 2406\r\n```\n", "before_files": [{"content": "import numpy\n\nimport cupy\n\n\ndef place(arr, mask, vals):\n \"\"\"Change elements of an array based on conditional and input values.\n\n This function uses the first N elements of `vals`, where N is the number\n of true values in `mask`.\n\n Args:\n arr (cupy.ndarray): Array to put data into.\n mask (array-like): Boolean mask array. Must have the same size as `a`.\n vals (array-like): Values to put into `a`. Only the first\n N elements are used, where N is the number of True values in\n `mask`. If `vals` is smaller than N, it will be repeated, and if\n elements of `a` are to be masked, this sequence must be non-empty.\n\n Examples\n --------\n >>> arr = np.arange(6).reshape(2, 3)\n >>> np.place(arr, arr>2, [44, 55])\n >>> arr\n array([[ 0, 1, 2],\n [44, 55, 44]])\n\n .. warning::\n\n This function may synchronize the device.\n\n .. seealso:: :func:`numpy.place`\n \"\"\"\n # TODO(niboshi): Avoid nonzero which may synchronize the device.\n mask = cupy.asarray(mask)\n if arr.size != mask.size:\n raise ValueError('Mask and data must be the same size.')\n vals = cupy.asarray(vals)\n\n mask_indices = mask.ravel().nonzero()[0] # may synchronize\n if mask_indices.size == 0:\n return\n if vals.size == 0:\n raise ValueError('Cannot insert from an empty array.')\n arr.put(mask_indices, vals, mode='wrap')\n\n\ndef put(a, ind, v, mode='wrap'):\n \"\"\"Replaces specified elements of an array with given values.\n\n Args:\n a (cupy.ndarray): Target array.\n ind (array-like): Target indices, interpreted as integers.\n v (array-like): Values to place in `a` at target indices.\n If `v` is shorter than `ind` it will be repeated as necessary.\n mode (str): How out-of-bounds indices will behave. Its value must be\n either `'raise'`, `'wrap'` or `'clip'`. Otherwise,\n :class:`TypeError` is raised.\n\n .. note::\n Default `mode` is set to `'wrap'` to avoid unintended performance drop.\n If you need NumPy's behavior, please pass `mode='raise'` manually.\n\n .. seealso:: :func:`numpy.put`\n \"\"\"\n a.put(ind, v, mode=mode)\n\n\n# TODO(okuta): Implement putmask\n\n\ndef fill_diagonal(a, val, wrap=False):\n \"\"\"Fills the main diagonal of the given array of any dimensionality.\n\n For an array `a` with ``a.ndim > 2``, the diagonal is the list of\n locations with indices ``a[i, i, ..., i]`` all identical. This function\n modifies the input array in-place, it does not return a value.\n\n Args:\n a (cupy.ndarray): The array, at least 2-D.\n val (scalar): The value to be written on the diagonal.\n Its type must be compatible with that of the array a.\n wrap (bool): If specified, the diagonal is \"wrapped\" after N columns.\n This affects only tall matrices.\n\n Examples\n --------\n >>> a = cupy.zeros((3, 3), int)\n >>> cupy.fill_diagonal(a, 5)\n >>> a\n array([[5, 0, 0],\n [0, 5, 0],\n [0, 0, 5]])\n\n .. 
seealso:: :func:`numpy.fill_diagonal`\n \"\"\"\n # The followings are imported from the original numpy\n if a.ndim < 2:\n raise ValueError('array must be at least 2-d')\n end = None\n if a.ndim == 2:\n step = a.shape[1] + 1\n if not wrap:\n end = a.shape[1] * a.shape[1]\n else:\n if not numpy.alltrue(numpy.diff(a.shape) == 0):\n raise ValueError('All dimensions of input must be of equal length')\n step = 1 + numpy.cumprod(a.shape[:-1]).sum()\n\n # Since the current cupy does not support a.flat,\n # we use a.ravel() instead of a.flat\n a.ravel()[:end:step] = val\n", "path": "cupy/indexing/insert.py"}], "after_files": [{"content": "import string\n\nimport numpy\n\nimport cupy\nfrom cupy import util\nfrom cupy.core import _carray\nfrom cupy.core import _scalar\nfrom cupy.cuda import device\n\n\ndef place(arr, mask, vals):\n \"\"\"Change elements of an array based on conditional and input values.\n\n This function uses the first N elements of `vals`, where N is the number\n of true values in `mask`.\n\n Args:\n arr (cupy.ndarray): Array to put data into.\n mask (array-like): Boolean mask array. Must have the same size as `a`.\n vals (array-like): Values to put into `a`. Only the first\n N elements are used, where N is the number of True values in\n `mask`. If `vals` is smaller than N, it will be repeated, and if\n elements of `a` are to be masked, this sequence must be non-empty.\n\n Examples\n --------\n >>> arr = np.arange(6).reshape(2, 3)\n >>> np.place(arr, arr>2, [44, 55])\n >>> arr\n array([[ 0, 1, 2],\n [44, 55, 44]])\n\n .. warning::\n\n This function may synchronize the device.\n\n .. seealso:: :func:`numpy.place`\n \"\"\"\n # TODO(niboshi): Avoid nonzero which may synchronize the device.\n mask = cupy.asarray(mask)\n if arr.size != mask.size:\n raise ValueError('Mask and data must be the same size.')\n vals = cupy.asarray(vals)\n\n mask_indices = mask.ravel().nonzero()[0] # may synchronize\n if mask_indices.size == 0:\n return\n if vals.size == 0:\n raise ValueError('Cannot insert from an empty array.')\n arr.put(mask_indices, vals, mode='wrap')\n\n\ndef put(a, ind, v, mode='wrap'):\n \"\"\"Replaces specified elements of an array with given values.\n\n Args:\n a (cupy.ndarray): Target array.\n ind (array-like): Target indices, interpreted as integers.\n v (array-like): Values to place in `a` at target indices.\n If `v` is shorter than `ind` it will be repeated as necessary.\n mode (str): How out-of-bounds indices will behave. Its value must be\n either `'raise'`, `'wrap'` or `'clip'`. Otherwise,\n :class:`TypeError` is raised.\n\n .. note::\n Default `mode` is set to `'wrap'` to avoid unintended performance drop.\n If you need NumPy's behavior, please pass `mode='raise'` manually.\n\n .. 
seealso:: :func:`numpy.put`\n \"\"\"\n a.put(ind, v, mode=mode)\n\n\n# TODO(okuta): Implement putmask\n\n\n_fill_diagonal_template = string.Template(r'''\n#include <cupy/complex.cuh>\n#include <cupy/carray.cuh>\nextern \"C\" __global__\nvoid cupy_fill_diagonal(CArray<${type}, ${a_ndim}> a,\n CIndexer<${a_ndim}> a_ind,\n int start,\n int stop,\n int step,\n CArray<${type}, ${val_ndim}> val,\n CIndexer<${val_ndim}> val_ind) {\n int n = (stop - start) / step + 1;\n CUPY_FOR(i, n) {\n a_ind.set(start + i * step);\n val_ind.set(i % val_ind.size());\n a[a_ind.get()] = val[val_ind.get()];\n }\n}''')\n\n\[email protected](for_each_device=True)\ndef _fill_diagonal_kernel(type, a_ndim, val_ndim):\n code = _fill_diagonal_template.substitute(\n type=type, a_ndim=a_ndim, val_ndim=val_ndim)\n return cupy.RawKernel(code, 'cupy_fill_diagonal')\n\n\ndef fill_diagonal(a, val, wrap=False):\n \"\"\"Fills the main diagonal of the given array of any dimensionality.\n\n For an array `a` with ``a.ndim > 2``, the diagonal is the list of\n locations with indices ``a[i, i, ..., i]`` all identical. This function\n modifies the input array in-place, it does not return a value.\n\n Args:\n a (cupy.ndarray): The array, at least 2-D.\n val (scalar): The value to be written on the diagonal.\n Its type must be compatible with that of the array a.\n wrap (bool): If specified, the diagonal is \"wrapped\" after N columns.\n This affects only tall matrices.\n\n Examples\n --------\n >>> a = cupy.zeros((3, 3), int)\n >>> cupy.fill_diagonal(a, 5)\n >>> a\n array([[5, 0, 0],\n [0, 5, 0],\n [0, 0, 5]])\n\n .. seealso:: :func:`numpy.fill_diagonal`\n \"\"\"\n # The followings are imported from the original numpy\n if a.ndim < 2:\n raise ValueError('array must be at least 2-d')\n end = a.size\n if a.ndim == 2:\n step = a.shape[1] + 1\n if not wrap:\n end = a.shape[1] * a.shape[1]\n else:\n if not numpy.alltrue(numpy.diff(a.shape) == 0):\n raise ValueError('All dimensions of input must be of equal length')\n step = 1 + numpy.cumprod(a.shape[:-1]).sum()\n\n val = cupy.asarray(val, dtype=a.dtype)\n\n dev_id = device.get_device_id()\n for arr in [a, val]:\n if arr.data.device_id != dev_id:\n raise ValueError(\n 'Array device must be same as the current '\n 'device: array device = %d while current = %d'\n % (arr.data.device_id, dev_id))\n\n typename = _scalar.get_typename(a.dtype)\n fill_diagonal_kernel = _fill_diagonal_kernel(typename, a.ndim, val.ndim)\n\n size = end // step + 1\n a_ind = _carray.Indexer(a.shape)\n val_ind = _carray.Indexer(val.shape)\n fill_diagonal_kernel.kernel.linear_launch(\n size, (a, a_ind, 0, end, step, val, val_ind))\n", "path": "cupy/indexing/insert.py"}]}
| 2,073 | 786 |
gh_patches_debug_1054
|
rasdani/github-patches
|
git_diff
|
liqd__a4-opin-906
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
styling of categories in dashboard (Safari)
When using Safari, the styling of categories in the dashboard is broken.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `euth/ideas/forms.py`
Content:
```
1 from adhocracy4.categories import forms as category_forms
2
3 from . import models
4
5
6 class IdeaForm(category_forms.CategorizableForm):
7 class Meta:
8 model = models.Idea
9 fields = ['name', 'description', 'image', 'category']
10
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/euth/ideas/forms.py b/euth/ideas/forms.py
--- a/euth/ideas/forms.py
+++ b/euth/ideas/forms.py
@@ -7,3 +7,7 @@
class Meta:
model = models.Idea
fields = ['name', 'description', 'image', 'category']
+
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.fields['category'].empty_label = '---'
|
{"golden_diff": "diff --git a/euth/ideas/forms.py b/euth/ideas/forms.py\n--- a/euth/ideas/forms.py\n+++ b/euth/ideas/forms.py\n@@ -7,3 +7,7 @@\n class Meta:\n model = models.Idea\n fields = ['name', 'description', 'image', 'category']\n+\n+ def __init__(self, *args, **kwargs):\n+ super().__init__(*args, **kwargs)\n+ self.fields['category'].empty_label = '---'\n", "issue": "styling of categories in dashboard (Safari)\nWhen using Safari the styling of categories in the dashboard is broken. \r\n\r\n\r\n\n", "before_files": [{"content": "from adhocracy4.categories import forms as category_forms\n\nfrom . import models\n\n\nclass IdeaForm(category_forms.CategorizableForm):\n class Meta:\n model = models.Idea\n fields = ['name', 'description', 'image', 'category']\n", "path": "euth/ideas/forms.py"}], "after_files": [{"content": "from adhocracy4.categories import forms as category_forms\n\nfrom . import models\n\n\nclass IdeaForm(category_forms.CategorizableForm):\n class Meta:\n model = models.Idea\n fields = ['name', 'description', 'image', 'category']\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields['category'].empty_label = '---'\n", "path": "euth/ideas/forms.py"}]}
| 416 | 111 |
gh_patches_debug_5823
|
rasdani/github-patches
|
git_diff
|
microsoft__botbuilder-python-1207
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[PORT] Remove Slack from the list of channels that support Suggested Actions
> Port this change from botbuilder-dotnet/master branch:
https://github.com/microsoft/botbuilder-dotnet/pull/4177
Fixes #2291
Removing Slack from the list of channels that do not support suggested actions.
# Changed projects
* Microsoft.Bot.Builder.Dialogs
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from botbuilder.core import TurnContext
5 from botframework.connector import Channels
6
7
8 class Channel:
9 """
10 Methods for determining channel-specific functionality.
11 """
12
13 @staticmethod
14 def supports_suggested_actions(channel_id: str, button_cnt: int = 100) -> bool:
15 """Determine if a number of Suggested Actions are supported by a Channel.
16
17 Args:
18 channel_id (str): The Channel to check the if Suggested Actions are supported in.
19 button_cnt (int, optional): Defaults to 100. The number of Suggested Actions to check for the Channel.
20
21 Returns:
22 bool: True if the Channel supports the button_cnt total Suggested Actions, False if the Channel does not
23 support that number of Suggested Actions.
24 """
25
26 max_actions = {
27 # https://developers.facebook.com/docs/messenger-platform/send-messages/quick-replies
28 Channels.facebook: 10,
29 Channels.skype: 10,
30 # https://developers.line.biz/en/reference/messaging-api/#items-object
31 Channels.line: 13,
32 # https://dev.kik.com/#/docs/messaging#text-response-object
33 Channels.kik: 20,
34 Channels.telegram: 100,
35 Channels.slack: 100,
36 Channels.emulator: 100,
37 Channels.direct_line: 100,
38 Channels.webchat: 100,
39 }
40 return (
41 button_cnt <= max_actions[channel_id]
42 if channel_id in max_actions
43 else False
44 )
45
46 @staticmethod
47 def supports_card_actions(channel_id: str, button_cnt: int = 100) -> bool:
48 """Determine if a number of Card Actions are supported by a Channel.
49
50 Args:
51 channel_id (str): The Channel to check if the Card Actions are supported in.
52 button_cnt (int, optional): Defaults to 100. The number of Card Actions to check for the Channel.
53
54 Returns:
55 bool: True if the Channel supports the button_cnt total Card Actions, False if the Channel does not support
56 that number of Card Actions.
57 """
58
59 max_actions = {
60 Channels.facebook: 3,
61 Channels.skype: 3,
62 Channels.ms_teams: 3,
63 Channels.line: 99,
64 Channels.slack: 100,
65 Channels.emulator: 100,
66 Channels.direct_line: 100,
67 Channels.webchat: 100,
68 Channels.cortana: 100,
69 }
70 return (
71 button_cnt <= max_actions[channel_id]
72 if channel_id in max_actions
73 else False
74 )
75
76 @staticmethod
77 def has_message_feed(channel_id: str) -> bool:
78 """Determine if a Channel has a Message Feed.
79
80 Args:
81 channel_id (str): The Channel to check for Message Feed.
82
83 Returns:
84 bool: True if the Channel has a Message Feed, False if it does not.
85 """
86
87 return not channel_id == Channels.cortana
88
89 @staticmethod
90 def max_action_title_length( # pylint: disable=unused-argument
91 channel_id: str,
92 ) -> int:
93 """Maximum length allowed for Action Titles.
94
95 Args:
96 channel_id (str): The Channel to determine Maximum Action Title Length.
97
98 Returns:
99 int: The total number of characters allowed for an Action Title on a specific Channel.
100 """
101
102 return 20
103
104 @staticmethod
105 def get_channel_id(turn_context: TurnContext) -> str:
106 """Get the Channel Id from the current Activity on the Turn Context.
107
108 Args:
109 turn_context (TurnContext): The Turn Context to retrieve the Activity's Channel Id from.
110
111 Returns:
112 str: The Channel Id from the Turn Context's Activity.
113 """
114
115 if turn_context.activity.channel_id is None:
116 return ""
117
118 return turn_context.activity.channel_id
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py b/libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py
--- a/libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py
+++ b/libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py
@@ -32,7 +32,6 @@
# https://dev.kik.com/#/docs/messaging#text-response-object
Channels.kik: 20,
Channels.telegram: 100,
- Channels.slack: 100,
Channels.emulator: 100,
Channels.direct_line: 100,
Channels.webchat: 100,
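
A small check of the behavioral change (illustrative; the import path follows the file location shown above, and `Channels` comes from `botframework.connector` as in the module's own imports). With Slack removed from `max_actions`, the lookup falls through to the `else False` branch no matter how few buttons are requested:

```python
from botbuilder.dialogs.choices.channel import Channel
from botframework.connector import Channels

print(Channel.supports_suggested_actions(Channels.slack, 3))     # False after this patch
print(Channel.supports_suggested_actions(Channels.telegram, 3))  # True (still listed, 3 <= 100)
```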
|
{"golden_diff": "diff --git a/libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py b/libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py\n--- a/libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py\n+++ b/libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py\n@@ -32,7 +32,6 @@\n # https://dev.kik.com/#/docs/messaging#text-response-object\n Channels.kik: 20,\n Channels.telegram: 100,\n- Channels.slack: 100,\n Channels.emulator: 100,\n Channels.direct_line: 100,\n Channels.webchat: 100,\n", "issue": "[PORT] Remove Slack from the list of channels that support Suggested Actions\n> Port this change from botbuilder-dotnet/master branch:\nhttps://github.com/microsoft/botbuilder-dotnet/pull/4177\n\nFixes #2291 \r\n\r\nRemoving Slack from the list of channels that do not support suggested actions.\n\n\r\n# Changed projects\r\n* Microsoft.Bot.Builder.Dialogs\r\n\r\n\r\n\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nfrom botbuilder.core import TurnContext\nfrom botframework.connector import Channels\n\n\nclass Channel:\n \"\"\"\n Methods for determining channel-specific functionality.\n \"\"\"\n\n @staticmethod\n def supports_suggested_actions(channel_id: str, button_cnt: int = 100) -> bool:\n \"\"\"Determine if a number of Suggested Actions are supported by a Channel.\n\n Args:\n channel_id (str): The Channel to check the if Suggested Actions are supported in.\n button_cnt (int, optional): Defaults to 100. The number of Suggested Actions to check for the Channel.\n\n Returns:\n bool: True if the Channel supports the button_cnt total Suggested Actions, False if the Channel does not\n support that number of Suggested Actions.\n \"\"\"\n\n max_actions = {\n # https://developers.facebook.com/docs/messenger-platform/send-messages/quick-replies\n Channels.facebook: 10,\n Channels.skype: 10,\n # https://developers.line.biz/en/reference/messaging-api/#items-object\n Channels.line: 13,\n # https://dev.kik.com/#/docs/messaging#text-response-object\n Channels.kik: 20,\n Channels.telegram: 100,\n Channels.slack: 100,\n Channels.emulator: 100,\n Channels.direct_line: 100,\n Channels.webchat: 100,\n }\n return (\n button_cnt <= max_actions[channel_id]\n if channel_id in max_actions\n else False\n )\n\n @staticmethod\n def supports_card_actions(channel_id: str, button_cnt: int = 100) -> bool:\n \"\"\"Determine if a number of Card Actions are supported by a Channel.\n\n Args:\n channel_id (str): The Channel to check if the Card Actions are supported in.\n button_cnt (int, optional): Defaults to 100. 
The number of Card Actions to check for the Channel.\n\n Returns:\n bool: True if the Channel supports the button_cnt total Card Actions, False if the Channel does not support\n that number of Card Actions.\n \"\"\"\n\n max_actions = {\n Channels.facebook: 3,\n Channels.skype: 3,\n Channels.ms_teams: 3,\n Channels.line: 99,\n Channels.slack: 100,\n Channels.emulator: 100,\n Channels.direct_line: 100,\n Channels.webchat: 100,\n Channels.cortana: 100,\n }\n return (\n button_cnt <= max_actions[channel_id]\n if channel_id in max_actions\n else False\n )\n\n @staticmethod\n def has_message_feed(channel_id: str) -> bool:\n \"\"\"Determine if a Channel has a Message Feed.\n\n Args:\n channel_id (str): The Channel to check for Message Feed.\n\n Returns:\n bool: True if the Channel has a Message Feed, False if it does not.\n \"\"\"\n\n return not channel_id == Channels.cortana\n\n @staticmethod\n def max_action_title_length( # pylint: disable=unused-argument\n channel_id: str,\n ) -> int:\n \"\"\"Maximum length allowed for Action Titles.\n\n Args:\n channel_id (str): The Channel to determine Maximum Action Title Length.\n\n Returns:\n int: The total number of characters allowed for an Action Title on a specific Channel.\n \"\"\"\n\n return 20\n\n @staticmethod\n def get_channel_id(turn_context: TurnContext) -> str:\n \"\"\"Get the Channel Id from the current Activity on the Turn Context.\n\n Args:\n turn_context (TurnContext): The Turn Context to retrieve the Activity's Channel Id from.\n\n Returns:\n str: The Channel Id from the Turn Context's Activity.\n \"\"\"\n\n if turn_context.activity.channel_id is None:\n return \"\"\n\n return turn_context.activity.channel_id\n", "path": "libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nfrom botbuilder.core import TurnContext\nfrom botframework.connector import Channels\n\n\nclass Channel:\n \"\"\"\n Methods for determining channel-specific functionality.\n \"\"\"\n\n @staticmethod\n def supports_suggested_actions(channel_id: str, button_cnt: int = 100) -> bool:\n \"\"\"Determine if a number of Suggested Actions are supported by a Channel.\n\n Args:\n channel_id (str): The Channel to check the if Suggested Actions are supported in.\n button_cnt (int, optional): Defaults to 100. The number of Suggested Actions to check for the Channel.\n\n Returns:\n bool: True if the Channel supports the button_cnt total Suggested Actions, False if the Channel does not\n support that number of Suggested Actions.\n \"\"\"\n\n max_actions = {\n # https://developers.facebook.com/docs/messenger-platform/send-messages/quick-replies\n Channels.facebook: 10,\n Channels.skype: 10,\n # https://developers.line.biz/en/reference/messaging-api/#items-object\n Channels.line: 13,\n # https://dev.kik.com/#/docs/messaging#text-response-object\n Channels.kik: 20,\n Channels.telegram: 100,\n Channels.emulator: 100,\n Channels.direct_line: 100,\n Channels.webchat: 100,\n }\n return (\n button_cnt <= max_actions[channel_id]\n if channel_id in max_actions\n else False\n )\n\n @staticmethod\n def supports_card_actions(channel_id: str, button_cnt: int = 100) -> bool:\n \"\"\"Determine if a number of Card Actions are supported by a Channel.\n\n Args:\n channel_id (str): The Channel to check if the Card Actions are supported in.\n button_cnt (int, optional): Defaults to 100. 
The number of Card Actions to check for the Channel.\n\n Returns:\n bool: True if the Channel supports the button_cnt total Card Actions, False if the Channel does not support\n that number of Card Actions.\n \"\"\"\n\n max_actions = {\n Channels.facebook: 3,\n Channels.skype: 3,\n Channels.ms_teams: 3,\n Channels.line: 99,\n Channels.slack: 100,\n Channels.emulator: 100,\n Channels.direct_line: 100,\n Channels.webchat: 100,\n Channels.cortana: 100,\n }\n return (\n button_cnt <= max_actions[channel_id]\n if channel_id in max_actions\n else False\n )\n\n @staticmethod\n def has_message_feed(channel_id: str) -> bool:\n \"\"\"Determine if a Channel has a Message Feed.\n\n Args:\n channel_id (str): The Channel to check for Message Feed.\n\n Returns:\n bool: True if the Channel has a Message Feed, False if it does not.\n \"\"\"\n\n return not channel_id == Channels.cortana\n\n @staticmethod\n def max_action_title_length( # pylint: disable=unused-argument\n channel_id: str,\n ) -> int:\n \"\"\"Maximum length allowed for Action Titles.\n\n Args:\n channel_id (str): The Channel to determine Maximum Action Title Length.\n\n Returns:\n int: The total number of characters allowed for an Action Title on a specific Channel.\n \"\"\"\n\n return 20\n\n @staticmethod\n def get_channel_id(turn_context: TurnContext) -> str:\n \"\"\"Get the Channel Id from the current Activity on the Turn Context.\n\n Args:\n turn_context (TurnContext): The Turn Context to retrieve the Activity's Channel Id from.\n\n Returns:\n str: The Channel Id from the Turn Context's Activity.\n \"\"\"\n\n if turn_context.activity.channel_id is None:\n return \"\"\n\n return turn_context.activity.channel_id\n", "path": "libraries/botbuilder-dialogs/botbuilder/dialogs/choices/channel.py"}]}
| 1,475 | 165 |
gh_patches_debug_20621
|
rasdani/github-patches
|
git_diff
|
lightly-ai__lightly-537
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NNCLR
Hi, thanks for your work.
In NNCLR's model:
"
# forward pass of first input x0
f0 = self.backbone(x0).flatten(start_dim=1)
z0 = self.projection_mlp(f0)
p0 = self.prediction_mlp(f0)
"
I think this is wrong; it should be:
"
# forward pass of first input x0
f0 = self.backbone(x0).flatten(start_dim=1)
z0 = self.projection_mlp(f0)
p0 = self.prediction_mlp(z0)
" .
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lightly/models/nnclr.py`
Content:
```
1 """ NNCLR Model """
2
3 # Copyright (c) 2021. Lightly AG and its affiliates.
4 # All Rights Reserved
5
6 import warnings
7
8 import torch
9 import torch.nn as nn
10
11 from lightly.models.modules import NNCLRProjectionHead
12 from lightly.models.modules import NNCLRPredictionHead
13
14
15 def _prediction_mlp(in_dims: int,
16 h_dims: int,
17 out_dims: int) -> nn.Sequential:
18 """Prediction MLP. The original paper's implementation has 2 layers, with
19 BN applied to its hidden fc layers but no BN or ReLU on the output fc layer.
20
21 Note that the hidden dimensions should be smaller than the input/output
22 dimensions (bottleneck structure). The default implementation using a
23 ResNet50 backbone has an input dimension of 2048, hidden dimension of 512,
24 and output dimension of 2048
25
26 Args:
27 in_dims:
28 Input dimension of the first linear layer.
29 h_dims:
30 Hidden dimension of all the fully connected layers (should be a
31 bottleneck!)
32 out_dims:
33 Output Dimension of the final linear layer.
34
35 Returns:
36 nn.Sequential:
37 The projection head.
38 """
39 l1 = nn.Sequential(nn.Linear(in_dims, h_dims),
40 nn.BatchNorm1d(h_dims),
41 nn.ReLU(inplace=True))
42
43 l2 = nn.Linear(h_dims, out_dims)
44
45 prediction = nn.Sequential(l1, l2)
46 return prediction
47
48
49 def _projection_mlp(num_ftrs: int,
50 h_dims: int,
51 out_dim: int,
52 num_layers: int = 3) -> nn.Sequential:
53 """Projection MLP. The original paper's implementation has 3 layers, with
54 BN applied to its hidden fc layers but no ReLU on the output fc layer.
55 The CIFAR-10 study used a MLP with only two layers.
56
57 Args:
58 in_dims:
59 Input dimension of the first linear layer.
60 h_dims:
61 Hidden dimension of all the fully connected layers.
62 out_dims:
63 Output Dimension of the final linear layer.
64 num_layers:
65 Controls the number of layers; must be 2 or 3. Defaults to 3.
66
67 Returns:
68 nn.Sequential:
69 The projection head.
70 """
71 l1 = nn.Sequential(nn.Linear(num_ftrs, h_dims),
72 nn.BatchNorm1d(h_dims),
73 nn.ReLU(inplace=True))
74
75 l2 = nn.Sequential(nn.Linear(h_dims, h_dims),
76 nn.BatchNorm1d(h_dims),
77 nn.ReLU(inplace=True))
78
79 l3 = nn.Sequential(nn.Linear(h_dims, out_dim),
80 nn.BatchNorm1d(out_dim))
81
82 if num_layers == 3:
83 projection = nn.Sequential(l1, l2, l3)
84 elif num_layers == 2:
85 projection = nn.Sequential(l1, l3)
86 else:
87 raise NotImplementedError("Only MLPs with 2 and 3 layers are implemented.")
88
89 return projection
90
91
92 class NNCLR(nn.Module):
93 """Implementation of the NNCLR[0] architecture
94
95 Recommended loss: :py:class:`lightly.loss.ntx_ent_loss.NTXentLoss`
96 Recommended module: :py:class:`lightly.models.modules.nn_memory_bank.NNmemoryBankModule`
97
98 [0] NNCLR, 2021, https://arxiv.org/abs/2104.14548
99
100 Attributes:
101 backbone:
102 Backbone model to extract features from images.
103 num_ftrs:
104 Dimension of the embedding (before the projection head).
105 proj_hidden_dim:
106 Dimension of the hidden layer of the projection head.
107 pred_hidden_dim:
108 Dimension of the hidden layer of the predicion head.
109 out_dim:
110 Dimension of the output (after the projection head).
111 num_mlp_layers:
112 Number of linear layers for MLP.
113
114 Examples:
115 >>> model = NNCLR(backbone)
116 >>> criterion = NTXentLoss(temperature=0.1)
117 >>>
118 >>> nn_replacer = NNmemoryBankModule(size=2 ** 16)
119 >>>
120 >>> # forward pass
121 >>> (z0, p0), (z1, p1) = model(x0, x1)
122 >>> z0 = nn_replacer(z0.detach(), update=False)
123 >>> z1 = nn_replacer(z1.detach(), update=True)
124 >>>
125 >>> loss = 0.5 * (criterion(z0, p1) + criterion(z1, p0))
126
127 """
128
129 def __init__(self,
130 backbone: nn.Module,
131 num_ftrs: int = 512,
132 proj_hidden_dim: int = 2048,
133 pred_hidden_dim: int = 4096,
134 out_dim: int = 256):
135
136 super(NNCLR, self).__init__()
137
138 self.backbone = backbone
139 self.num_ftrs = num_ftrs
140 self.proj_hidden_dim = proj_hidden_dim
141 self.pred_hidden_dim = pred_hidden_dim
142 self.out_dim = out_dim
143
144 self.projection_mlp = NNCLRProjectionHead(
145 num_ftrs,
146 proj_hidden_dim,
147 out_dim,
148 )
149
150 self.prediction_mlp = NNCLRPredictionHead(
151 num_ftrs,
152 pred_hidden_dim,
153 out_dim,
154 )
155
156 warnings.warn(Warning(
157 'The high-level building block NNCLR will be deprecated in version 1.2.0. '
158 + 'Use low-level building blocks instead. '
159 + 'See https://docs.lightly.ai/lightly.models.html for more information'),
160 PendingDeprecationWarning)
161
162 def forward(self,
163 x0: torch.Tensor,
164 x1: torch.Tensor = None,
165 return_features: bool = False):
166 """Embeds and projects the input images.
167
168 Extracts features with the backbone and applies the projection
169 head to the output space. If both x0 and x1 are not None, both will be
170 passed through the backbone and projection head. If x1 is None, only
171 x0 will be forwarded.
172
173 Args:
174 x0:
175 Tensor of shape bsz x channels x W x H.
176 x1:
177 Tensor of shape bsz x channels x W x H.
178 return_features:
179 Whether or not to return the intermediate features backbone(x).
180
181 Returns:
182 The output projection of x0 and (if x1 is not None) the output
183 projection of x1. If return_features is True, the output for each x
184 is a tuple (out, f) where f are the features before the projection
185 head.
186
187 Examples:
188 >>> # single input, single output
189 >>> out = model(x)
190 >>>
191 >>> # single input with return_features=True
192 >>> out, f = model(x, return_features=True)
193 >>>
194 >>> # two inputs, two outputs
195 >>> out0, out1 = model(x0, x1)
196 >>>
197 >>> # two inputs, two outputs with return_features=True
198 >>> (out0, f0), (out1, f1) = model(x0, x1, return_features=True)
199
200 """
201
202 # forward pass of first input x0
203 f0 = self.backbone(x0).flatten(start_dim=1)
204 z0 = self.projection_mlp(f0)
205 p0 = self.prediction_mlp(f0)
206
207 out0 = (z0, p0)
208
209 # append features if requested
210 if return_features:
211 out0 = (out0, f0)
212
213 # return out0 if x1 is None
214 if x1 is None:
215 return out0
216
217 # forward pass of second input x1
218 f1 = self.backbone(x1).flatten(start_dim=1)
219 z1 = self.projection_mlp(f1)
220 p1 = self.prediction_mlp(f1)
221
222 out1 = (z1, p1)
223
224 # append features if requested
225 if return_features:
226 out1 = (out1, f1)
227
228 # return both outputs
229 return out0, out1
230
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lightly/models/nnclr.py b/lightly/models/nnclr.py
--- a/lightly/models/nnclr.py
+++ b/lightly/models/nnclr.py
@@ -148,7 +148,7 @@
)
self.prediction_mlp = NNCLRPredictionHead(
- num_ftrs,
+ out_dim,
pred_hidden_dim,
out_dim,
)
@@ -202,7 +202,7 @@
# forward pass of first input x0
f0 = self.backbone(x0).flatten(start_dim=1)
z0 = self.projection_mlp(f0)
- p0 = self.prediction_mlp(f0)
+ p0 = self.prediction_mlp(z0)
out0 = (z0, p0)
@@ -217,7 +217,7 @@
# forward pass of second input x1
f1 = self.backbone(x1).flatten(start_dim=1)
z1 = self.projection_mlp(f1)
- p1 = self.prediction_mlp(f1)
+ p1 = self.prediction_mlp(z1)
out1 = (z1, p1)
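
An end-to-end sketch of the corrected forward pass (illustrative only: the ResNet-18 backbone, batch size, and image size are my own choices, not from the repository; the loss and memory-bank usage follow the class docstring above). With the fix, the prediction head takes the projection `z` as input, so its input dimension is `out_dim` rather than `num_ftrs`:

```python
import torch
import torchvision
from lightly.models.nnclr import NNCLR
from lightly.loss.ntx_ent_loss import NTXentLoss
from lightly.models.modules.nn_memory_bank import NNmemoryBankModule

resnet = torchvision.models.resnet18()
backbone = torch.nn.Sequential(*list(resnet.children())[:-1])  # drop the fc layer -> 512-d features

model = NNCLR(backbone, num_ftrs=512, out_dim=256)
criterion = NTXentLoss(temperature=0.1)
nn_replacer = NNmemoryBankModule(size=2 ** 16)

x0 = torch.randn(8, 3, 32, 32)  # two augmented views of the same batch
x1 = torch.randn(8, 3, 32, 32)

(z0, p0), (z1, p1) = model(x0, x1)       # p0 = prediction_mlp(z0), p1 = prediction_mlp(z1)
z0 = nn_replacer(z0.detach(), update=False)
z1 = nn_replacer(z1.detach(), update=True)
loss = 0.5 * (criterion(z0, p1) + criterion(z1, p0))
print(loss.item())
```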
|
{"golden_diff": "diff --git a/lightly/models/nnclr.py b/lightly/models/nnclr.py\n--- a/lightly/models/nnclr.py\n+++ b/lightly/models/nnclr.py\n@@ -148,7 +148,7 @@\n )\n \n self.prediction_mlp = NNCLRPredictionHead(\n- num_ftrs,\n+ out_dim,\n pred_hidden_dim,\n out_dim,\n )\n@@ -202,7 +202,7 @@\n # forward pass of first input x0\n f0 = self.backbone(x0).flatten(start_dim=1)\n z0 = self.projection_mlp(f0)\n- p0 = self.prediction_mlp(f0)\n+ p0 = self.prediction_mlp(z0)\n \n out0 = (z0, p0)\n \n@@ -217,7 +217,7 @@\n # forward pass of second input x1\n f1 = self.backbone(x1).flatten(start_dim=1)\n z1 = self.projection_mlp(f1)\n- p1 = self.prediction_mlp(f1)\n+ p1 = self.prediction_mlp(z1)\n \n out1 = (z1, p1)\n", "issue": "NNCLR\nHi,thanks for your work,\r\nIn NNCLR's model:\r\n \"\r\n# forward pass of first input x0\r\n f0 = self.backbone(x0).flatten(start_dim=1)\r\n z0 = self.projection_mlp(f0)\r\n p0 = self.prediction_mlp(f0)\r\n \"\r\n\r\nI think it's wrong, it should be that :\r\n\"\r\n # forward pass of first input x0\r\n f0 = self.backbone(x0).flatten(start_dim=1)\r\n z0 = self.projection_mlp(f0)\r\n p0 = self.prediction_mlp(z0)\r\n\" .\n", "before_files": [{"content": "\"\"\" NNCLR Model \"\"\"\n\n# Copyright (c) 2021. Lightly AG and its affiliates.\n# All Rights Reserved\n\nimport warnings\n\nimport torch\nimport torch.nn as nn\n\nfrom lightly.models.modules import NNCLRProjectionHead\nfrom lightly.models.modules import NNCLRPredictionHead\n\n\ndef _prediction_mlp(in_dims: int, \n h_dims: int, \n out_dims: int) -> nn.Sequential:\n \"\"\"Prediction MLP. The original paper's implementation has 2 layers, with \n BN applied to its hidden fc layers but no BN or ReLU on the output fc layer.\n\n Note that the hidden dimensions should be smaller than the input/output \n dimensions (bottleneck structure). The default implementation using a \n ResNet50 backbone has an input dimension of 2048, hidden dimension of 512, \n and output dimension of 2048\n\n Args:\n in_dims:\n Input dimension of the first linear layer.\n h_dims: \n Hidden dimension of all the fully connected layers (should be a\n bottleneck!)\n out_dims: \n Output Dimension of the final linear layer.\n\n Returns:\n nn.Sequential:\n The projection head.\n \"\"\"\n l1 = nn.Sequential(nn.Linear(in_dims, h_dims),\n nn.BatchNorm1d(h_dims),\n nn.ReLU(inplace=True))\n\n l2 = nn.Linear(h_dims, out_dims)\n\n prediction = nn.Sequential(l1, l2)\n return prediction\n\n\ndef _projection_mlp(num_ftrs: int,\n h_dims: int, \n out_dim: int, \n num_layers: int = 3) -> nn.Sequential:\n \"\"\"Projection MLP. The original paper's implementation has 3 layers, with \n BN applied to its hidden fc layers but no ReLU on the output fc layer. \n The CIFAR-10 study used a MLP with only two layers.\n\n Args:\n in_dims:\n Input dimension of the first linear layer.\n h_dims: \n Hidden dimension of all the fully connected layers.\n out_dims: \n Output Dimension of the final linear layer.\n num_layers:\n Controls the number of layers; must be 2 or 3. 
Defaults to 3.\n\n Returns:\n nn.Sequential:\n The projection head.\n \"\"\"\n l1 = nn.Sequential(nn.Linear(num_ftrs, h_dims),\n nn.BatchNorm1d(h_dims),\n nn.ReLU(inplace=True))\n\n l2 = nn.Sequential(nn.Linear(h_dims, h_dims),\n nn.BatchNorm1d(h_dims),\n nn.ReLU(inplace=True))\n\n l3 = nn.Sequential(nn.Linear(h_dims, out_dim),\n nn.BatchNorm1d(out_dim))\n\n if num_layers == 3:\n projection = nn.Sequential(l1, l2, l3)\n elif num_layers == 2:\n projection = nn.Sequential(l1, l3)\n else:\n raise NotImplementedError(\"Only MLPs with 2 and 3 layers are implemented.\")\n\n return projection\n\n\nclass NNCLR(nn.Module):\n \"\"\"Implementation of the NNCLR[0] architecture\n\n Recommended loss: :py:class:`lightly.loss.ntx_ent_loss.NTXentLoss`\n Recommended module: :py:class:`lightly.models.modules.nn_memory_bank.NNmemoryBankModule`\n\n [0] NNCLR, 2021, https://arxiv.org/abs/2104.14548\n\n Attributes:\n backbone:\n Backbone model to extract features from images.\n num_ftrs:\n Dimension of the embedding (before the projection head).\n proj_hidden_dim: \n Dimension of the hidden layer of the projection head.\n pred_hidden_dim:\n Dimension of the hidden layer of the predicion head.\n out_dim:\n Dimension of the output (after the projection head).\n num_mlp_layers:\n Number of linear layers for MLP.\n\n Examples:\n >>> model = NNCLR(backbone)\n >>> criterion = NTXentLoss(temperature=0.1)\n >>> \n >>> nn_replacer = NNmemoryBankModule(size=2 ** 16)\n >>>\n >>> # forward pass\n >>> (z0, p0), (z1, p1) = model(x0, x1)\n >>> z0 = nn_replacer(z0.detach(), update=False)\n >>> z1 = nn_replacer(z1.detach(), update=True)\n >>>\n >>> loss = 0.5 * (criterion(z0, p1) + criterion(z1, p0))\n\n \"\"\"\n\n def __init__(self,\n backbone: nn.Module,\n num_ftrs: int = 512,\n proj_hidden_dim: int = 2048,\n pred_hidden_dim: int = 4096,\n out_dim: int = 256):\n\n super(NNCLR, self).__init__()\n\n self.backbone = backbone\n self.num_ftrs = num_ftrs\n self.proj_hidden_dim = proj_hidden_dim\n self.pred_hidden_dim = pred_hidden_dim\n self.out_dim = out_dim\n\n self.projection_mlp = NNCLRProjectionHead(\n num_ftrs,\n proj_hidden_dim,\n out_dim,\n )\n \n self.prediction_mlp = NNCLRPredictionHead(\n num_ftrs,\n pred_hidden_dim,\n out_dim,\n )\n\n warnings.warn(Warning(\n 'The high-level building block NNCLR will be deprecated in version 1.2.0. '\n + 'Use low-level building blocks instead. '\n + 'See https://docs.lightly.ai/lightly.models.html for more information'),\n PendingDeprecationWarning)\n\n def forward(self,\n x0: torch.Tensor,\n x1: torch.Tensor = None,\n return_features: bool = False):\n \"\"\"Embeds and projects the input images.\n\n Extracts features with the backbone and applies the projection\n head to the output space. If both x0 and x1 are not None, both will be\n passed through the backbone and projection head. If x1 is None, only\n x0 will be forwarded.\n\n Args:\n x0:\n Tensor of shape bsz x channels x W x H.\n x1:\n Tensor of shape bsz x channels x W x H.\n return_features:\n Whether or not to return the intermediate features backbone(x).\n\n Returns:\n The output projection of x0 and (if x1 is not None) the output\n projection of x1. 
If return_features is True, the output for each x\n is a tuple (out, f) where f are the features before the projection\n head.\n\n Examples:\n >>> # single input, single output\n >>> out = model(x) \n >>> \n >>> # single input with return_features=True\n >>> out, f = model(x, return_features=True)\n >>>\n >>> # two inputs, two outputs\n >>> out0, out1 = model(x0, x1)\n >>>\n >>> # two inputs, two outputs with return_features=True\n >>> (out0, f0), (out1, f1) = model(x0, x1, return_features=True)\n\n \"\"\"\n \n # forward pass of first input x0\n f0 = self.backbone(x0).flatten(start_dim=1)\n z0 = self.projection_mlp(f0)\n p0 = self.prediction_mlp(f0)\n\n out0 = (z0, p0)\n\n # append features if requested\n if return_features:\n out0 = (out0, f0)\n\n # return out0 if x1 is None\n if x1 is None:\n return out0\n\n # forward pass of second input x1\n f1 = self.backbone(x1).flatten(start_dim=1)\n z1 = self.projection_mlp(f1)\n p1 = self.prediction_mlp(f1)\n\n out1 = (z1, p1)\n\n # append features if requested\n if return_features:\n out1 = (out1, f1)\n\n # return both outputs\n return out0, out1\n", "path": "lightly/models/nnclr.py"}], "after_files": [{"content": "\"\"\" NNCLR Model \"\"\"\n\n# Copyright (c) 2021. Lightly AG and its affiliates.\n# All Rights Reserved\n\nimport warnings\n\nimport torch\nimport torch.nn as nn\n\nfrom lightly.models.modules import NNCLRProjectionHead\nfrom lightly.models.modules import NNCLRPredictionHead\n\n\ndef _prediction_mlp(in_dims: int, \n h_dims: int, \n out_dims: int) -> nn.Sequential:\n \"\"\"Prediction MLP. The original paper's implementation has 2 layers, with \n BN applied to its hidden fc layers but no BN or ReLU on the output fc layer.\n\n Note that the hidden dimensions should be smaller than the input/output \n dimensions (bottleneck structure). The default implementation using a \n ResNet50 backbone has an input dimension of 2048, hidden dimension of 512, \n and output dimension of 2048\n\n Args:\n in_dims:\n Input dimension of the first linear layer.\n h_dims: \n Hidden dimension of all the fully connected layers (should be a\n bottleneck!)\n out_dims: \n Output Dimension of the final linear layer.\n\n Returns:\n nn.Sequential:\n The projection head.\n \"\"\"\n l1 = nn.Sequential(nn.Linear(in_dims, h_dims),\n nn.BatchNorm1d(h_dims),\n nn.ReLU(inplace=True))\n\n l2 = nn.Linear(h_dims, out_dims)\n\n prediction = nn.Sequential(l1, l2)\n return prediction\n\n\ndef _projection_mlp(num_ftrs: int,\n h_dims: int, \n out_dim: int, \n num_layers: int = 3) -> nn.Sequential:\n \"\"\"Projection MLP. The original paper's implementation has 3 layers, with \n BN applied to its hidden fc layers but no ReLU on the output fc layer. \n The CIFAR-10 study used a MLP with only two layers.\n\n Args:\n in_dims:\n Input dimension of the first linear layer.\n h_dims: \n Hidden dimension of all the fully connected layers.\n out_dims: \n Output Dimension of the final linear layer.\n num_layers:\n Controls the number of layers; must be 2 or 3. 
Defaults to 3.\n\n Returns:\n nn.Sequential:\n The projection head.\n \"\"\"\n l1 = nn.Sequential(nn.Linear(num_ftrs, h_dims),\n nn.BatchNorm1d(h_dims),\n nn.ReLU(inplace=True))\n\n l2 = nn.Sequential(nn.Linear(h_dims, h_dims),\n nn.BatchNorm1d(h_dims),\n nn.ReLU(inplace=True))\n\n l3 = nn.Sequential(nn.Linear(h_dims, out_dim),\n nn.BatchNorm1d(out_dim))\n\n if num_layers == 3:\n projection = nn.Sequential(l1, l2, l3)\n elif num_layers == 2:\n projection = nn.Sequential(l1, l3)\n else:\n raise NotImplementedError(\"Only MLPs with 2 and 3 layers are implemented.\")\n\n return projection\n\n\nclass NNCLR(nn.Module):\n \"\"\"Implementation of the NNCLR[0] architecture\n\n Recommended loss: :py:class:`lightly.loss.ntx_ent_loss.NTXentLoss`\n Recommended module: :py:class:`lightly.models.modules.nn_memory_bank.NNmemoryBankModule`\n\n [0] NNCLR, 2021, https://arxiv.org/abs/2104.14548\n\n Attributes:\n backbone:\n Backbone model to extract features from images.\n num_ftrs:\n Dimension of the embedding (before the projection head).\n proj_hidden_dim: \n Dimension of the hidden layer of the projection head.\n pred_hidden_dim:\n Dimension of the hidden layer of the predicion head.\n out_dim:\n Dimension of the output (after the projection head).\n num_mlp_layers:\n Number of linear layers for MLP.\n\n Examples:\n >>> model = NNCLR(backbone)\n >>> criterion = NTXentLoss(temperature=0.1)\n >>> \n >>> nn_replacer = NNmemoryBankModule(size=2 ** 16)\n >>>\n >>> # forward pass\n >>> (z0, p0), (z1, p1) = model(x0, x1)\n >>> z0 = nn_replacer(z0.detach(), update=False)\n >>> z1 = nn_replacer(z1.detach(), update=True)\n >>>\n >>> loss = 0.5 * (criterion(z0, p1) + criterion(z1, p0))\n\n \"\"\"\n\n def __init__(self,\n backbone: nn.Module,\n num_ftrs: int = 512,\n proj_hidden_dim: int = 2048,\n pred_hidden_dim: int = 4096,\n out_dim: int = 256):\n\n super(NNCLR, self).__init__()\n\n self.backbone = backbone\n self.num_ftrs = num_ftrs\n self.proj_hidden_dim = proj_hidden_dim\n self.pred_hidden_dim = pred_hidden_dim\n self.out_dim = out_dim\n\n self.projection_mlp = NNCLRProjectionHead(\n num_ftrs,\n proj_hidden_dim,\n out_dim,\n )\n \n self.prediction_mlp = NNCLRPredictionHead(\n out_dim,\n pred_hidden_dim,\n out_dim,\n )\n\n warnings.warn(Warning(\n 'The high-level building block NNCLR will be deprecated in version 1.2.0. '\n + 'Use low-level building blocks instead. '\n + 'See https://docs.lightly.ai/lightly.models.html for more information'),\n PendingDeprecationWarning)\n\n def forward(self,\n x0: torch.Tensor,\n x1: torch.Tensor = None,\n return_features: bool = False):\n \"\"\"Embeds and projects the input images.\n\n Extracts features with the backbone and applies the projection\n head to the output space. If both x0 and x1 are not None, both will be\n passed through the backbone and projection head. If x1 is None, only\n x0 will be forwarded.\n\n Args:\n x0:\n Tensor of shape bsz x channels x W x H.\n x1:\n Tensor of shape bsz x channels x W x H.\n return_features:\n Whether or not to return the intermediate features backbone(x).\n\n Returns:\n The output projection of x0 and (if x1 is not None) the output\n projection of x1. 
If return_features is True, the output for each x\n is a tuple (out, f) where f are the features before the projection\n head.\n\n Examples:\n >>> # single input, single output\n >>> out = model(x) \n >>> \n >>> # single input with return_features=True\n >>> out, f = model(x, return_features=True)\n >>>\n >>> # two inputs, two outputs\n >>> out0, out1 = model(x0, x1)\n >>>\n >>> # two inputs, two outputs with return_features=True\n >>> (out0, f0), (out1, f1) = model(x0, x1, return_features=True)\n\n \"\"\"\n \n # forward pass of first input x0\n f0 = self.backbone(x0).flatten(start_dim=1)\n z0 = self.projection_mlp(f0)\n p0 = self.prediction_mlp(z0)\n\n out0 = (z0, p0)\n\n # append features if requested\n if return_features:\n out0 = (out0, f0)\n\n # return out0 if x1 is None\n if x1 is None:\n return out0\n\n # forward pass of second input x1\n f1 = self.backbone(x1).flatten(start_dim=1)\n z1 = self.projection_mlp(f1)\n p1 = self.prediction_mlp(z1)\n\n out1 = (z1, p1)\n\n # append features if requested\n if return_features:\n out1 = (out1, f1)\n\n # return both outputs\n return out0, out1\n", "path": "lightly/models/nnclr.py"}]}
| 2,749 | 274 |
gh_patches_debug_8267
|
rasdani/github-patches
|
git_diff
|
ipython__ipython-7852
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unknown error with no read permissions
If you do not have read permission on a notebook (e.g., it has `-rw-------` and you are not the owner) and you try to open it, rather than saying permission denied, you get an error that implies there's something wrong with the notebook format:

In the notebook server, there is an unhandled permission denied traceback:
```
[E 11:09:31.650 NotebookApp] Unhandled error in API request
Traceback (most recent call last):
File "/Users/jhamrick/project/tools/ipython/IPython/html/base/handlers.py", line 365, in wrapper
result = yield gen.maybe_future(method(self, *args, **kwargs))
File "/usr/local/lib/python2.7/site-packages/tornado/gen.py", line 807, in run
value = future.result()
File "/usr/local/lib/python2.7/site-packages/tornado/concurrent.py", line 209, in result
raise_exc_info(self._exc_info)
File "/usr/local/lib/python2.7/site-packages/tornado/gen.py", line 212, in wrapper
yielded = next(result)
File "/Users/jhamrick/project/tools/ipython/IPython/html/services/contents/handlers.py", line 126, in get
path=path, type=type, format=format, content=content,
File "/Users/jhamrick/project/tools/ipython/IPython/html/services/contents/filemanager.py", line 347, in get
model = self._notebook_model(path, content=content)
File "/Users/jhamrick/project/tools/ipython/IPython/html/services/contents/filemanager.py", line 306, in _notebook_model
nb = self._read_notebook(os_path, as_version=4)
File "/Users/jhamrick/project/tools/ipython/IPython/html/services/contents/fileio.py", line 104, in _read_notebook
with self.open(os_path, 'r', encoding='utf-8') as f:
File "/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/Users/jhamrick/project/tools/ipython/IPython/html/services/contents/fileio.py", line 49, in open
with io.open(os_path, *args, **kwargs) as f:
IOError: [Errno 13] Permission denied: '/Users/jhamrick/project/ipynb/Untitled9.ipynb'
```
Perhaps it would be possible to return a more informative generic error message, like just the last line of the traceback or something along those lines?
--- END ISSUE ---
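One observation worth recording before looking at the files (my reading of the traceback, not necessarily the fix that was adopted): the error escapes even though `fileio.py` already wraps `io.open` in a `perm_to_403` guard, because on Python 2 `IOError` is not a subclass of `OSError`, so the guard's `except OSError` clause never fires. A tiny check of that class relationship:

```python
# Runs on both Python 2 and 3; only Python 3 aliases IOError to OSError.
print(issubclass(IOError, OSError))   # False on Python 2, True on Python 3
err = IOError(13, 'Permission denied')
print(isinstance(err, OSError))       # False on Python 2 -> `except OSError` misses it
```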
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `IPython/html/services/contents/fileio.py`
Content:
```
1 """
2 Utilities for file-based Contents/Checkpoints managers.
3 """
4
5 # Copyright (c) IPython Development Team.
6 # Distributed under the terms of the Modified BSD License.
7
8 import base64
9 from contextlib import contextmanager
10 import errno
11 import io
12 import os
13 import shutil
14
15 from tornado.web import HTTPError
16
17 from IPython.html.utils import (
18 to_api_path,
19 to_os_path,
20 )
21 from IPython import nbformat
22 from IPython.utils.io import atomic_writing
23 from IPython.utils.py3compat import str_to_unicode
24
25
26 class FileManagerMixin(object):
27 """
28 Mixin for ContentsAPI classes that interact with the filesystem.
29
30 Provides facilities for reading, writing, and copying both notebooks and
31 generic files.
32
33 Shared by FileContentsManager and FileCheckpoints.
34
35 Note
36 ----
37 Classes using this mixin must provide the following attributes:
38
39 root_dir : unicode
40 A directory against against which API-style paths are to be resolved.
41
42 log : logging.Logger
43 """
44
45 @contextmanager
46 def open(self, os_path, *args, **kwargs):
47 """wrapper around io.open that turns permission errors into 403"""
48 with self.perm_to_403(os_path):
49 with io.open(os_path, *args, **kwargs) as f:
50 yield f
51
52 @contextmanager
53 def atomic_writing(self, os_path, *args, **kwargs):
54 """wrapper around atomic_writing that turns permission errors to 403"""
55 with self.perm_to_403(os_path):
56 with atomic_writing(os_path, *args, **kwargs) as f:
57 yield f
58
59 @contextmanager
60 def perm_to_403(self, os_path=''):
61 """context manager for turning permission errors into 403."""
62 try:
63 yield
64 except OSError as e:
65 if e.errno in {errno.EPERM, errno.EACCES}:
66 # make 403 error message without root prefix
67 # this may not work perfectly on unicode paths on Python 2,
68 # but nobody should be doing that anyway.
69 if not os_path:
70 os_path = str_to_unicode(e.filename or 'unknown file')
71 path = to_api_path(os_path, root=self.root_dir)
72 raise HTTPError(403, u'Permission denied: %s' % path)
73 else:
74 raise
75
76 def _copy(self, src, dest):
77 """copy src to dest
78
79 like shutil.copy2, but log errors in copystat
80 """
81 shutil.copyfile(src, dest)
82 try:
83 shutil.copystat(src, dest)
84 except OSError:
85 self.log.debug("copystat on %s failed", dest, exc_info=True)
86
87 def _get_os_path(self, path):
88 """Given an API path, return its file system path.
89
90 Parameters
91 ----------
92 path : string
93 The relative API path to the named file.
94
95 Returns
96 -------
97 path : string
98 Native, absolute OS path to for a file.
99 """
100 return to_os_path(path, self.root_dir)
101
102 def _read_notebook(self, os_path, as_version=4):
103 """Read a notebook from an os path."""
104 with self.open(os_path, 'r', encoding='utf-8') as f:
105 try:
106 return nbformat.read(f, as_version=as_version)
107 except Exception as e:
108 raise HTTPError(
109 400,
110 u"Unreadable Notebook: %s %r" % (os_path, e),
111 )
112
113 def _save_notebook(self, os_path, nb):
114 """Save a notebook to an os_path."""
115 with self.atomic_writing(os_path, encoding='utf-8') as f:
116 nbformat.write(nb, f, version=nbformat.NO_CONVERT)
117
118 def _read_file(self, os_path, format):
119 """Read a non-notebook file.
120
121 os_path: The path to be read.
122 format:
123 If 'text', the contents will be decoded as UTF-8.
124 If 'base64', the raw bytes contents will be encoded as base64.
125 If not specified, try to decode as UTF-8, and fall back to base64
126 """
127 if not os.path.isfile(os_path):
128 raise HTTPError(400, "Cannot read non-file %s" % os_path)
129
130 with self.open(os_path, 'rb') as f:
131 bcontent = f.read()
132
133 if format is None or format == 'text':
134 # Try to interpret as unicode if format is unknown or if unicode
135 # was explicitly requested.
136 try:
137 return bcontent.decode('utf8'), 'text'
138 except UnicodeError:
139 if format == 'text':
140 raise HTTPError(
141 400,
142 "%s is not UTF-8 encoded" % os_path,
143 reason='bad format',
144 )
145 return base64.encodestring(bcontent).decode('ascii'), 'base64'
146
147 def _save_file(self, os_path, content, format):
148 """Save content of a generic file."""
149 if format not in {'text', 'base64'}:
150 raise HTTPError(
151 400,
152 "Must specify format of file contents as 'text' or 'base64'",
153 )
154 try:
155 if format == 'text':
156 bcontent = content.encode('utf8')
157 else:
158 b64_bytes = content.encode('ascii')
159 bcontent = base64.decodestring(b64_bytes)
160 except Exception as e:
161 raise HTTPError(
162 400, u'Encoding error saving %s: %s' % (os_path, e)
163 )
164
165 with self.atomic_writing(os_path, text=False) as f:
166 f.write(bcontent)
167
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/IPython/html/services/contents/fileio.py b/IPython/html/services/contents/fileio.py
--- a/IPython/html/services/contents/fileio.py
+++ b/IPython/html/services/contents/fileio.py
@@ -61,7 +61,7 @@
"""context manager for turning permission errors into 403."""
try:
yield
- except OSError as e:
+ except (OSError, IOError) as e:
if e.errno in {errno.EPERM, errno.EACCES}:
# make 403 error message without root prefix
# this may not work perfectly on unicode paths on Python 2,
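A self-contained sketch of the patched pattern, with names simplified: `HTTPError` below is a local stand-in for the `tornado.web.HTTPError` that `fileio.py` imports, and `read_notebook` is an illustrative caller rather than the real `_read_notebook`:

```python
# Sketch of the patched pattern; ``HTTPError`` is a stand-in for
# tornado.web.HTTPError and ``read_notebook`` is an illustrative caller.
import errno
import io
from contextlib import contextmanager


class HTTPError(Exception):
    """Stand-in for tornado.web.HTTPError."""
    def __init__(self, status_code, log_message):
        super(HTTPError, self).__init__(status_code, log_message)
        self.status_code = status_code
        self.log_message = log_message


@contextmanager
def perm_to_403(os_path):
    """Turn EPERM/EACCES from either exception class into a 403."""
    try:
        yield
    except (OSError, IOError) as e:  # IOError covers io.open on Python 2
        if e.errno in {errno.EPERM, errno.EACCES}:
            raise HTTPError(403, u"Permission denied: %s" % os_path)
        raise


def read_notebook(os_path):
    with perm_to_403(os_path):
        with io.open(os_path, "r", encoding="utf-8") as f:
            return f.read()
```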
|
{"golden_diff": "diff --git a/IPython/html/services/contents/fileio.py b/IPython/html/services/contents/fileio.py\n--- a/IPython/html/services/contents/fileio.py\n+++ b/IPython/html/services/contents/fileio.py\n@@ -61,7 +61,7 @@\n \"\"\"context manager for turning permission errors into 403.\"\"\"\n try:\n yield\n- except OSError as e:\n+ except (OSError, IOError) as e:\n if e.errno in {errno.EPERM, errno.EACCES}:\n # make 403 error message without root prefix\n # this may not work perfectly on unicode paths on Python 2,\n", "issue": "Unknown error with no read permissions\nIf you do not have read permission on a notebook (e.g., it has `-rw-------` and you are not the owner) and you try to open it, rather than saying permission denied, you get an error that implies there's something wrong with the notebook format:\n\n\n\nIn the notebook server, there is an unhandled permission denied traceback:\n\n```\n[E 11:09:31.650 NotebookApp] Unhandled error in API request\n Traceback (most recent call last):\n File \"/Users/jhamrick/project/tools/ipython/IPython/html/base/handlers.py\", line 365, in wrapper\n result = yield gen.maybe_future(method(self, *args, **kwargs))\n File \"/usr/local/lib/python2.7/site-packages/tornado/gen.py\", line 807, in run\n value = future.result()\n File \"/usr/local/lib/python2.7/site-packages/tornado/concurrent.py\", line 209, in result\n raise_exc_info(self._exc_info)\n File \"/usr/local/lib/python2.7/site-packages/tornado/gen.py\", line 212, in wrapper\n yielded = next(result)\n File \"/Users/jhamrick/project/tools/ipython/IPython/html/services/contents/handlers.py\", line 126, in get\n path=path, type=type, format=format, content=content,\n File \"/Users/jhamrick/project/tools/ipython/IPython/html/services/contents/filemanager.py\", line 347, in get\n model = self._notebook_model(path, content=content)\n File \"/Users/jhamrick/project/tools/ipython/IPython/html/services/contents/filemanager.py\", line 306, in _notebook_model\n nb = self._read_notebook(os_path, as_version=4)\n File \"/Users/jhamrick/project/tools/ipython/IPython/html/services/contents/fileio.py\", line 104, in _read_notebook\n with self.open(os_path, 'r', encoding='utf-8') as f:\n File \"/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py\", line 17, in __enter__\n return self.gen.next()\n File \"/Users/jhamrick/project/tools/ipython/IPython/html/services/contents/fileio.py\", line 49, in open\n with io.open(os_path, *args, **kwargs) as f:\n IOError: [Errno 13] Permission denied: '/Users/jhamrick/project/ipynb/Untitled9.ipynb'\n```\n\nPerhaps it would be possible to return a more informative generic error message, like just the last line of the traceback or something along those lines?\n\n", "before_files": [{"content": "\"\"\"\nUtilities for file-based Contents/Checkpoints managers.\n\"\"\"\n\n# Copyright (c) IPython Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nimport base64\nfrom contextlib import contextmanager\nimport errno\nimport io\nimport os\nimport shutil\n\nfrom tornado.web import HTTPError\n\nfrom IPython.html.utils import (\n to_api_path,\n to_os_path,\n)\nfrom IPython import nbformat\nfrom IPython.utils.io import atomic_writing\nfrom IPython.utils.py3compat import str_to_unicode\n\n\nclass FileManagerMixin(object):\n \"\"\"\n Mixin for ContentsAPI classes that interact with the filesystem.\n\n Provides facilities for reading, writing, and copying both notebooks and\n generic files.\n\n Shared by 
FileContentsManager and FileCheckpoints.\n\n Note\n ----\n Classes using this mixin must provide the following attributes:\n\n root_dir : unicode\n A directory against against which API-style paths are to be resolved.\n\n log : logging.Logger\n \"\"\"\n\n @contextmanager\n def open(self, os_path, *args, **kwargs):\n \"\"\"wrapper around io.open that turns permission errors into 403\"\"\"\n with self.perm_to_403(os_path):\n with io.open(os_path, *args, **kwargs) as f:\n yield f\n\n @contextmanager\n def atomic_writing(self, os_path, *args, **kwargs):\n \"\"\"wrapper around atomic_writing that turns permission errors to 403\"\"\"\n with self.perm_to_403(os_path):\n with atomic_writing(os_path, *args, **kwargs) as f:\n yield f\n\n @contextmanager\n def perm_to_403(self, os_path=''):\n \"\"\"context manager for turning permission errors into 403.\"\"\"\n try:\n yield\n except OSError as e:\n if e.errno in {errno.EPERM, errno.EACCES}:\n # make 403 error message without root prefix\n # this may not work perfectly on unicode paths on Python 2,\n # but nobody should be doing that anyway.\n if not os_path:\n os_path = str_to_unicode(e.filename or 'unknown file')\n path = to_api_path(os_path, root=self.root_dir)\n raise HTTPError(403, u'Permission denied: %s' % path)\n else:\n raise\n\n def _copy(self, src, dest):\n \"\"\"copy src to dest\n\n like shutil.copy2, but log errors in copystat\n \"\"\"\n shutil.copyfile(src, dest)\n try:\n shutil.copystat(src, dest)\n except OSError:\n self.log.debug(\"copystat on %s failed\", dest, exc_info=True)\n\n def _get_os_path(self, path):\n \"\"\"Given an API path, return its file system path.\n\n Parameters\n ----------\n path : string\n The relative API path to the named file.\n\n Returns\n -------\n path : string\n Native, absolute OS path to for a file.\n \"\"\"\n return to_os_path(path, self.root_dir)\n\n def _read_notebook(self, os_path, as_version=4):\n \"\"\"Read a notebook from an os path.\"\"\"\n with self.open(os_path, 'r', encoding='utf-8') as f:\n try:\n return nbformat.read(f, as_version=as_version)\n except Exception as e:\n raise HTTPError(\n 400,\n u\"Unreadable Notebook: %s %r\" % (os_path, e),\n )\n\n def _save_notebook(self, os_path, nb):\n \"\"\"Save a notebook to an os_path.\"\"\"\n with self.atomic_writing(os_path, encoding='utf-8') as f:\n nbformat.write(nb, f, version=nbformat.NO_CONVERT)\n\n def _read_file(self, os_path, format):\n \"\"\"Read a non-notebook file.\n\n os_path: The path to be read.\n format:\n If 'text', the contents will be decoded as UTF-8.\n If 'base64', the raw bytes contents will be encoded as base64.\n If not specified, try to decode as UTF-8, and fall back to base64\n \"\"\"\n if not os.path.isfile(os_path):\n raise HTTPError(400, \"Cannot read non-file %s\" % os_path)\n\n with self.open(os_path, 'rb') as f:\n bcontent = f.read()\n\n if format is None or format == 'text':\n # Try to interpret as unicode if format is unknown or if unicode\n # was explicitly requested.\n try:\n return bcontent.decode('utf8'), 'text'\n except UnicodeError:\n if format == 'text':\n raise HTTPError(\n 400,\n \"%s is not UTF-8 encoded\" % os_path,\n reason='bad format',\n )\n return base64.encodestring(bcontent).decode('ascii'), 'base64'\n\n def _save_file(self, os_path, content, format):\n \"\"\"Save content of a generic file.\"\"\"\n if format not in {'text', 'base64'}:\n raise HTTPError(\n 400,\n \"Must specify format of file contents as 'text' or 'base64'\",\n )\n try:\n if format == 'text':\n bcontent = content.encode('utf8')\n 
else:\n b64_bytes = content.encode('ascii')\n bcontent = base64.decodestring(b64_bytes)\n except Exception as e:\n raise HTTPError(\n 400, u'Encoding error saving %s: %s' % (os_path, e)\n )\n\n with self.atomic_writing(os_path, text=False) as f:\n f.write(bcontent)\n", "path": "IPython/html/services/contents/fileio.py"}], "after_files": [{"content": "\"\"\"\nUtilities for file-based Contents/Checkpoints managers.\n\"\"\"\n\n# Copyright (c) IPython Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nimport base64\nfrom contextlib import contextmanager\nimport errno\nimport io\nimport os\nimport shutil\n\nfrom tornado.web import HTTPError\n\nfrom IPython.html.utils import (\n to_api_path,\n to_os_path,\n)\nfrom IPython import nbformat\nfrom IPython.utils.io import atomic_writing\nfrom IPython.utils.py3compat import str_to_unicode\n\n\nclass FileManagerMixin(object):\n \"\"\"\n Mixin for ContentsAPI classes that interact with the filesystem.\n\n Provides facilities for reading, writing, and copying both notebooks and\n generic files.\n\n Shared by FileContentsManager and FileCheckpoints.\n\n Note\n ----\n Classes using this mixin must provide the following attributes:\n\n root_dir : unicode\n A directory against against which API-style paths are to be resolved.\n\n log : logging.Logger\n \"\"\"\n\n @contextmanager\n def open(self, os_path, *args, **kwargs):\n \"\"\"wrapper around io.open that turns permission errors into 403\"\"\"\n with self.perm_to_403(os_path):\n with io.open(os_path, *args, **kwargs) as f:\n yield f\n\n @contextmanager\n def atomic_writing(self, os_path, *args, **kwargs):\n \"\"\"wrapper around atomic_writing that turns permission errors to 403\"\"\"\n with self.perm_to_403(os_path):\n with atomic_writing(os_path, *args, **kwargs) as f:\n yield f\n\n @contextmanager\n def perm_to_403(self, os_path=''):\n \"\"\"context manager for turning permission errors into 403.\"\"\"\n try:\n yield\n except (OSError, IOError) as e:\n if e.errno in {errno.EPERM, errno.EACCES}:\n # make 403 error message without root prefix\n # this may not work perfectly on unicode paths on Python 2,\n # but nobody should be doing that anyway.\n if not os_path:\n os_path = str_to_unicode(e.filename or 'unknown file')\n path = to_api_path(os_path, root=self.root_dir)\n raise HTTPError(403, u'Permission denied: %s' % path)\n else:\n raise\n\n def _copy(self, src, dest):\n \"\"\"copy src to dest\n\n like shutil.copy2, but log errors in copystat\n \"\"\"\n shutil.copyfile(src, dest)\n try:\n shutil.copystat(src, dest)\n except OSError:\n self.log.debug(\"copystat on %s failed\", dest, exc_info=True)\n\n def _get_os_path(self, path):\n \"\"\"Given an API path, return its file system path.\n\n Parameters\n ----------\n path : string\n The relative API path to the named file.\n\n Returns\n -------\n path : string\n Native, absolute OS path to for a file.\n \"\"\"\n return to_os_path(path, self.root_dir)\n\n def _read_notebook(self, os_path, as_version=4):\n \"\"\"Read a notebook from an os path.\"\"\"\n with self.open(os_path, 'r', encoding='utf-8') as f:\n try:\n return nbformat.read(f, as_version=as_version)\n except Exception as e:\n raise HTTPError(\n 400,\n u\"Unreadable Notebook: %s %r\" % (os_path, e),\n )\n\n def _save_notebook(self, os_path, nb):\n \"\"\"Save a notebook to an os_path.\"\"\"\n with self.atomic_writing(os_path, encoding='utf-8') as f:\n nbformat.write(nb, f, version=nbformat.NO_CONVERT)\n\n def _read_file(self, os_path, format):\n \"\"\"Read a non-notebook 
file.\n\n os_path: The path to be read.\n format:\n If 'text', the contents will be decoded as UTF-8.\n If 'base64', the raw bytes contents will be encoded as base64.\n If not specified, try to decode as UTF-8, and fall back to base64\n \"\"\"\n if not os.path.isfile(os_path):\n raise HTTPError(400, \"Cannot read non-file %s\" % os_path)\n\n with self.open(os_path, 'rb') as f:\n bcontent = f.read()\n\n if format is None or format == 'text':\n # Try to interpret as unicode if format is unknown or if unicode\n # was explicitly requested.\n try:\n return bcontent.decode('utf8'), 'text'\n except UnicodeError:\n if format == 'text':\n raise HTTPError(\n 400,\n \"%s is not UTF-8 encoded\" % os_path,\n reason='bad format',\n )\n return base64.encodestring(bcontent).decode('ascii'), 'base64'\n\n def _save_file(self, os_path, content, format):\n \"\"\"Save content of a generic file.\"\"\"\n if format not in {'text', 'base64'}:\n raise HTTPError(\n 400,\n \"Must specify format of file contents as 'text' or 'base64'\",\n )\n try:\n if format == 'text':\n bcontent = content.encode('utf8')\n else:\n b64_bytes = content.encode('ascii')\n bcontent = base64.decodestring(b64_bytes)\n except Exception as e:\n raise HTTPError(\n 400, u'Encoding error saving %s: %s' % (os_path, e)\n )\n\n with self.atomic_writing(os_path, text=False) as f:\n f.write(bcontent)\n", "path": "IPython/html/services/contents/fileio.py"}]}
| 2,591 | 145 |
gh_patches_debug_32718
|
rasdani/github-patches
|
git_diff
|
lutris__lutris-1982
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wine cannot find the FreeType font library
Origin doesn't render when "Prefer system libraries" is enabled.

[lutris-origin-log.txt](https://github.com/lutris/lutris/files/3031911/lutris-origin-log.txt)
@aTosser
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lutris/util/linux.py`
Content:
```
1 """Linux specific platform code"""
2 import os
3 import shutil
4 import sys
5 import json
6 import platform
7 import resource
8 import subprocess
9 from collections import defaultdict
10 from lutris.vendor.distro import linux_distribution
11 from lutris.util.graphics import drivers
12 from lutris.util.graphics import glxinfo
13 from lutris.util.log import logger
14 from lutris.util.disks import get_drive_for_path
15
16 SYSTEM_COMPONENTS = {
17 "COMMANDS": [
18 "xrandr",
19 "fuser",
20 "glxinfo",
21 "vulkaninfo",
22 "optirun",
23 "primusrun",
24 "pvkrun",
25 "xboxdrv",
26 "pulseaudio",
27 "lsi-steam",
28 "fuser",
29 "7z",
30 "gtk-update-icon-cache",
31 "lspci",
32 "xgamma",
33 "ldconfig",
34 "strangle",
35 "Xephyr",
36 "nvidia-smi",
37 "wine",
38 "fluidsynth",
39 ],
40 "TERMINALS": [
41 "xterm",
42 "gnome-terminal",
43 "konsole",
44 "xfce4-terminal",
45 "pantheon-terminal",
46 "terminator",
47 "mate-terminal",
48 "urxvt",
49 "cool-retro-term",
50 "Eterm",
51 "guake",
52 "lilyterm",
53 "lxterminal",
54 "roxterm",
55 "rxvt",
56 "aterm",
57 "sakura",
58 "st",
59 "terminology",
60 "termite",
61 "tilix",
62 "wterm",
63 "kitty",
64 "yuakuake",
65 "qterminal",
66 ],
67 "LIBRARIES": {
68 "OPENGL": [
69 "libGL.so.1",
70 ],
71 "VULKAN": [
72 "libvulkan.so.1",
73 ],
74 "WINE": [
75 "libsqlite3.so.0"
76 ],
77 "RADEON": [
78 "libvulkan_radeon.so"
79 ],
80 "GAMEMODE": [
81 "libgamemodeauto.so"
82 ]
83 }
84 }
85
86
87 class LinuxSystem:
88 """Global cache for system commands"""
89 _cache = {}
90
91 soundfont_folders = [
92 '/usr/share/sounds/sf2',
93 '/usr/share/soundfonts',
94 ]
95
96 recommended_no_file_open = 524288
97 required_components = ["OPENGL"]
98 optional_components = ["VULKAN", "WINE", "GAMEMODE"]
99
100 def __init__(self):
101 for key in ("COMMANDS", "TERMINALS"):
102 self._cache[key] = {}
103 for command in SYSTEM_COMPONENTS[key]:
104 command_path = shutil.which(command)
105 if not command_path:
106 command_path = self.get_sbin_path(command)
107 if command_path:
108 self._cache[key][command] = command_path
109
110 # Detect if system is 64bit capable
111 self.is_64_bit = sys.maxsize > 2 ** 32
112 self.arch = self.get_arch()
113
114 self.populate_libraries()
115 self.populate_sound_fonts()
116 self.soft_limit, self.hard_limit = self.get_file_limits()
117 self.glxinfo = self.get_glxinfo()
118
119 @staticmethod
120 def get_sbin_path(command):
121 """Some distributions don't put sbin directories in $PATH"""
122 path_candidates = ["/sbin", "/usr/sbin"]
123 for candidate in path_candidates:
124 command_path = os.path.join(candidate, command)
125 if os.path.exists(command_path):
126 return command_path
127
128 @staticmethod
129 def get_file_limits():
130 return resource.getrlimit(resource.RLIMIT_NOFILE)
131
132 def has_enough_file_descriptors(self):
133 return self.hard_limit >= self.recommended_no_file_open
134
135 @staticmethod
136 def get_cpus():
137 """Parse the output of /proc/cpuinfo"""
138 cpus = [{}]
139 cpu_index = 0
140 with open("/proc/cpuinfo") as cpuinfo:
141 for line in cpuinfo.readlines():
142 if not line.strip():
143 cpu_index += 1
144 cpus.append({})
145 continue
146 key, value = line.split(":", 1)
147 cpus[cpu_index][key.strip()] = value.strip()
148 return [cpu for cpu in cpus if cpu]
149
150 @staticmethod
151 def get_drives():
152 """Return a list of drives with their filesystems"""
153 try:
154 output = subprocess.check_output(["lsblk", "-f", "--json"]).decode()
155 except subprocess.CalledProcessError as ex:
156 logger.error("Failed to get drive information: %s", ex)
157 return None
158 return [
159 drive for drive in json.loads(output)["blockdevices"]
160 if drive["fstype"] != "squashfs"
161 ]
162
163 @staticmethod
164 def get_ram_info():
165 """Return RAM information"""
166 try:
167 output = subprocess.check_output(["free"]).decode().split("\n")
168 except subprocess.CalledProcessError as ex:
169 logger.error("Failed to get RAM information: %s", ex)
170 return None
171 columns = output[0].split()
172 meminfo = {}
173 for parts in [line.split() for line in output[1:] if line]:
174 meminfo[parts[0].strip(":").lower()] = dict(zip(columns, parts[1:]))
175 return meminfo
176
177 @staticmethod
178 def get_dist_info():
179 """Return distribution information"""
180 return linux_distribution()
181
182 @staticmethod
183 def get_arch():
184 """Return the system architecture only if compatible
185 with the supported architectures from the Lutris API
186 """
187 machine = platform.machine()
188 if "64" in machine:
189 return "x86_64"
190 if "86" in machine:
191 return "i386"
192 if "armv7" in machine:
193 return "armv7"
194 logger.warning("Unsupported architecture %s", machine)
195
196 @property
197 def runtime_architectures(self):
198 if self.arch == "x86_64":
199 return ["i386", "x86_64"]
200 return ["i386"]
201
202 @property
203 def requirements(self):
204 return self.get_requirements()
205
206 @property
207 def critical_requirements(self):
208 return self.get_requirements(include_optional=False)
209
210 @property
211 def lib_folders(self):
212 return self.get_lib_folders()
213
214 def get_fs_type_for_path(self, path):
215 """Return the filesystem type a given path uses"""
216 path_drive = get_drive_for_path(path)
217 for drive in self.get_drives():
218 for partition in drive.get("children", []):
219 if "/dev/%s" % partition["name"] == path_drive:
220 return partition["fstype"]
221
222 def get_lib_folders(self):
223 # Use ldconfig to locate the correct locations for system libs.
224 _paths = [[], []]
225 _candidates = (
226 subprocess.Popen([self.get("ldconfig"), '-p'],
227 stdout=subprocess.PIPE,
228 universal_newlines=True)
229 ).communicate()[0].split('\n')
230 for req in self.requirements:
231 for candidate in _candidates:
232 for lib in SYSTEM_COMPONENTS["LIBRARIES"][req]:
233 if lib in candidate:
234 if 'x86-64' in candidate:
235 candidate = candidate.split(' => ')[1].split(lib)[0]
236 if candidate not in _paths[1]:
237 _paths[1].append(candidate)
238 else:
239 candidate = candidate.split(' => ')[1].split(lib)[0]
240 if candidate not in _paths[0]:
241 _paths[0].append(candidate)
242 return _paths
243
244 def get_glxinfo(self):
245 """Return a GlxInfo instance if the gfxinfo tool is available"""
246 if not self.get("glxinfo"):
247 return
248 _glxinfo = glxinfo.GlxInfo()
249 if not hasattr(_glxinfo, "display"):
250 logger.warning("Invalid glxinfo received")
251 return
252 return _glxinfo
253
254 def get_requirements(self, include_optional=True):
255 """Return used system requirements"""
256 _requirements = self.required_components.copy()
257 if include_optional:
258 _requirements += self.optional_components
259 if drivers.is_amd():
260 _requirements.append("RADEON")
261 return _requirements
262
263 def get(self, command):
264 """Return a system command path if available"""
265 return self._cache["COMMANDS"].get(command)
266
267 def get_terminals(self):
268 """Return list of installed terminals"""
269 return list(self._cache["TERMINALS"].values())
270
271 def get_soundfonts(self):
272 """Return path of available soundfonts"""
273 return self._cache["SOUNDFONTS"]
274
275 def iter_lib_folders(self):
276 """Loop over existing 32/64 bit library folders"""
277 for lib_paths in self.lib_folders:
278 if self.arch != 'x86_64':
279 # On non amd64 setups, only the first element is relevant
280 lib_paths = [lib_paths[0]]
281 yield lib_paths
282
283 def populate_libraries(self):
284 """Populates the LIBRARIES cache with what is found on the system"""
285 self._cache["LIBRARIES"] = {}
286 for arch in self.runtime_architectures:
287 self._cache["LIBRARIES"][arch] = defaultdict(list)
288 for lib_paths in self.iter_lib_folders():
289 for path in lib_paths:
290 for req in self.requirements:
291 for lib in SYSTEM_COMPONENTS["LIBRARIES"][req]:
292 for index, arch in enumerate(self.runtime_architectures):
293 if os.path.exists(os.path.join(path, lib)):
294 self._cache["LIBRARIES"][arch][req].append(lib)
295
296 def populate_sound_fonts(self):
297 """Populates the soundfont cache"""
298 self._cache["SOUNDFONTS"] = []
299 for folder in self.soundfont_folders:
300 if not os.path.exists(folder):
301 continue
302 for soundfont in os.listdir(folder):
303 self._cache["SOUNDFONTS"].append(soundfont)
304
305 def get_missing_requirement_libs(self, req):
306 """Return a list of sets of missing libraries for each supported architecture"""
307 required_libs = set(SYSTEM_COMPONENTS["LIBRARIES"][req])
308 return [
309 list(required_libs - set(self._cache["LIBRARIES"][arch][req]))
310 for arch in self.runtime_architectures
311 ]
312
313 def get_missing_libs(self):
314 """Return a dictionary of missing libraries"""
315 return {
316 req: self.get_missing_requirement_libs(req)
317 for req in self.requirements
318 }
319
320 def is_feature_supported(self, feature):
321 """Return whether the system has the necessary libs to support a feature"""
322 return not self.get_missing_requirement_libs(feature)[0]
323
324
325 LINUX_SYSTEM = LinuxSystem()
326
327
328 def gather_system_info():
329 """Get all system information in a single data structure"""
330 system_info = {}
331 if drivers.is_nvidia():
332 system_info["nvidia_driver"] = drivers.get_nvidia_driver_info()
333 system_info["nvidia_gpus"] = [
334 drivers.get_nvidia_gpu_info(gpu_id)
335 for gpu_id in drivers.get_nvidia_gpu_ids()
336 ]
337 system_info["gpus"] = [drivers.get_gpu_info(gpu) for gpu in drivers.get_gpus()]
338 system_info["env"] = dict(os.environ)
339 system_info["missing_libs"] = LINUX_SYSTEM.get_missing_libs()
340 system_info["cpus"] = LINUX_SYSTEM.get_cpus()
341 system_info["drives"] = LINUX_SYSTEM.get_drives()
342 system_info["ram"] = LINUX_SYSTEM.get_ram_info()
343 system_info["dist"] = LINUX_SYSTEM.get_dist_info()
344 system_info["glxinfo"] = glxinfo.GlxInfo().as_dict()
345 return system_info
346
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lutris/util/linux.py b/lutris/util/linux.py
--- a/lutris/util/linux.py
+++ b/lutris/util/linux.py
@@ -220,27 +220,26 @@
return partition["fstype"]
def get_lib_folders(self):
- # Use ldconfig to locate the correct locations for system libs.
- _paths = [[], []]
- _candidates = (
- subprocess.Popen([self.get("ldconfig"), '-p'],
- stdout=subprocess.PIPE,
- universal_newlines=True)
- ).communicate()[0].split('\n')
- for req in self.requirements:
- for candidate in _candidates:
- for lib in SYSTEM_COMPONENTS["LIBRARIES"][req]:
- if lib in candidate:
- if 'x86-64' in candidate:
- candidate = candidate.split(' => ')[1].split(lib)[0]
- if candidate not in _paths[1]:
- _paths[1].append(candidate)
- else:
- candidate = candidate.split(' => ')[1].split(lib)[0]
- if candidate not in _paths[0]:
- _paths[0].append(candidate)
+ # Use ldconfig to locate the correct locations for system libs. Sorting is done to preserve ordering for the distros that care.
+ _cand_dict = {}
+ for candidate in subprocess.run([self.get("ldconfig"), '-p'], stdout=subprocess.PIPE, universal_newlines=True).stdout.split('\n'):
+ if '=>' in candidate:
+ candidate = candidate.split(' => ')
+ if candidate[0] not in _cand_dict:
+ _cand_dict.update({candidate[0]: [candidate[1].rsplit('/', 1)[0]]})
+ else:
+ _cand_dict[candidate[0]].append(candidate[1].rsplit('/', 1)[0])
+ _paths = [[],[]]
+ for candidate, path in sorted(_cand_dict.items(), key = lambda cand: len(cand[1]), reverse = True):
+ if 'x86-64' in candidate:
+ if path[0] not in _paths[1]:
+ _paths[1].append(path[0])
+ elif 'libc' in candidate:
+ if path[0] not in _paths[0]:
+ _paths[0].append(path[0])
return _paths
+
def get_glxinfo(self):
"""Return a GlxInfo instance if the gfxinfo tool is available"""
if not self.get("glxinfo"):
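For context, a runnable sketch of the parsing strategy the rewritten `get_lib_folders` adopts. The real method shells out to `ldconfig -p` with `subprocess.run`; the sample output and the helper name below are illustrative only. The net effect is that the 32-bit and 64-bit directory lists are derived from every entry ldconfig knows about (bucketed by the `x86-64` / `libc` markers), rather than only from directories that happen to contain the handful of libraries Lutris checks for — which is presumably how the directory holding FreeType ends up being included.

```python
# Runnable sketch of the new strategy; SAMPLE_LDCONFIG and the helper name
# are illustrative, the real code runs ``ldconfig -p`` via subprocess.run.
SAMPLE_LDCONFIG = """\
\tlibfreetype.so.6 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libfreetype.so.6
\tlibfreetype.so.6 (libc6) => /usr/lib/i386-linux-gnu/libfreetype.so.6
\tlibGL.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libGL.so.1
\tlibGL.so.1 (libc6) => /usr/lib/i386-linux-gnu/libGL.so.1
"""


def lib_folders_from_ldconfig(output):
    """Return [32-bit dirs, 64-bit dirs] from ldconfig -p style output."""
    candidates = {}
    for line in output.splitlines():
        if " => " not in line:
            continue
        soname, path = line.split(" => ", 1)
        candidates.setdefault(soname, []).append(path.rsplit("/", 1)[0])
    paths = [[], []]
    for soname, dirs in sorted(candidates.items(),
                               key=lambda item: len(item[1]), reverse=True):
        bucket = 1 if "x86-64" in soname else 0 if "libc" in soname else None
        if bucket is not None and dirs[0] not in paths[bucket]:
            paths[bucket].append(dirs[0])
    return paths


print(lib_folders_from_ldconfig(SAMPLE_LDCONFIG))
# [['/usr/lib/i386-linux-gnu'], ['/usr/lib/x86_64-linux-gnu']]
```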
|
{"golden_diff": "diff --git a/lutris/util/linux.py b/lutris/util/linux.py\n--- a/lutris/util/linux.py\n+++ b/lutris/util/linux.py\n@@ -220,27 +220,26 @@\n return partition[\"fstype\"]\n \n def get_lib_folders(self):\n- # Use ldconfig to locate the correct locations for system libs.\n- _paths = [[], []]\n- _candidates = (\n- subprocess.Popen([self.get(\"ldconfig\"), '-p'],\n- stdout=subprocess.PIPE,\n- universal_newlines=True)\n- ).communicate()[0].split('\\n')\n- for req in self.requirements:\n- for candidate in _candidates:\n- for lib in SYSTEM_COMPONENTS[\"LIBRARIES\"][req]:\n- if lib in candidate:\n- if 'x86-64' in candidate:\n- candidate = candidate.split(' => ')[1].split(lib)[0]\n- if candidate not in _paths[1]:\n- _paths[1].append(candidate)\n- else:\n- candidate = candidate.split(' => ')[1].split(lib)[0]\n- if candidate not in _paths[0]:\n- _paths[0].append(candidate)\n+ # Use ldconfig to locate the correct locations for system libs. Sorting is done to preserve ordering for the distros that care.\n+ _cand_dict = {}\n+ for candidate in subprocess.run([self.get(\"ldconfig\"), '-p'], stdout=subprocess.PIPE, universal_newlines=True).stdout.split('\\n'):\n+ if '=>' in candidate:\n+ candidate = candidate.split(' => ')\n+ if candidate[0] not in _cand_dict:\n+ _cand_dict.update({candidate[0]: [candidate[1].rsplit('/', 1)[0]]})\n+ else:\n+ _cand_dict[candidate[0]].append(candidate[1].rsplit('/', 1)[0])\n+ _paths = [[],[]]\n+ for candidate, path in sorted(_cand_dict.items(), key = lambda cand: len(cand[1]), reverse = True):\n+ if 'x86-64' in candidate:\n+ if path[0] not in _paths[1]:\n+ _paths[1].append(path[0])\n+ elif 'libc' in candidate:\n+ if path[0] not in _paths[0]:\n+ _paths[0].append(path[0])\n return _paths\n \n+\n def get_glxinfo(self):\n \"\"\"Return a GlxInfo instance if the gfxinfo tool is available\"\"\"\n if not self.get(\"glxinfo\"):\n", "issue": "Wine cannot find the FreeType font library\nOrigin doesn't render when \"Prefer system libraries\" is enabled.\r\n\r\n\r\n\r\n[lutris-origin-log.txt](https://github.com/lutris/lutris/files/3031911/lutris-origin-log.txt)\r\n\r\n@aTosser \n", "before_files": [{"content": "\"\"\"Linux specific platform code\"\"\"\nimport os\nimport shutil\nimport sys\nimport json\nimport platform\nimport resource\nimport subprocess\nfrom collections import defaultdict\nfrom lutris.vendor.distro import linux_distribution\nfrom lutris.util.graphics import drivers\nfrom lutris.util.graphics import glxinfo\nfrom lutris.util.log import logger\nfrom lutris.util.disks import get_drive_for_path\n\nSYSTEM_COMPONENTS = {\n \"COMMANDS\": [\n \"xrandr\",\n \"fuser\",\n \"glxinfo\",\n \"vulkaninfo\",\n \"optirun\",\n \"primusrun\",\n \"pvkrun\",\n \"xboxdrv\",\n \"pulseaudio\",\n \"lsi-steam\",\n \"fuser\",\n \"7z\",\n \"gtk-update-icon-cache\",\n \"lspci\",\n \"xgamma\",\n \"ldconfig\",\n \"strangle\",\n \"Xephyr\",\n \"nvidia-smi\",\n \"wine\",\n \"fluidsynth\",\n ],\n \"TERMINALS\": [\n \"xterm\",\n \"gnome-terminal\",\n \"konsole\",\n \"xfce4-terminal\",\n \"pantheon-terminal\",\n \"terminator\",\n \"mate-terminal\",\n \"urxvt\",\n \"cool-retro-term\",\n \"Eterm\",\n \"guake\",\n \"lilyterm\",\n \"lxterminal\",\n \"roxterm\",\n \"rxvt\",\n \"aterm\",\n \"sakura\",\n \"st\",\n \"terminology\",\n \"termite\",\n \"tilix\",\n \"wterm\",\n \"kitty\",\n \"yuakuake\",\n \"qterminal\",\n ],\n \"LIBRARIES\": {\n \"OPENGL\": [\n \"libGL.so.1\",\n ],\n \"VULKAN\": [\n \"libvulkan.so.1\",\n ],\n \"WINE\": [\n \"libsqlite3.so.0\"\n ],\n \"RADEON\": [\n 
\"libvulkan_radeon.so\"\n ],\n \"GAMEMODE\": [\n \"libgamemodeauto.so\"\n ]\n }\n}\n\n\nclass LinuxSystem:\n \"\"\"Global cache for system commands\"\"\"\n _cache = {}\n\n soundfont_folders = [\n '/usr/share/sounds/sf2',\n '/usr/share/soundfonts',\n ]\n\n recommended_no_file_open = 524288\n required_components = [\"OPENGL\"]\n optional_components = [\"VULKAN\", \"WINE\", \"GAMEMODE\"]\n\n def __init__(self):\n for key in (\"COMMANDS\", \"TERMINALS\"):\n self._cache[key] = {}\n for command in SYSTEM_COMPONENTS[key]:\n command_path = shutil.which(command)\n if not command_path:\n command_path = self.get_sbin_path(command)\n if command_path:\n self._cache[key][command] = command_path\n\n # Detect if system is 64bit capable\n self.is_64_bit = sys.maxsize > 2 ** 32\n self.arch = self.get_arch()\n\n self.populate_libraries()\n self.populate_sound_fonts()\n self.soft_limit, self.hard_limit = self.get_file_limits()\n self.glxinfo = self.get_glxinfo()\n\n @staticmethod\n def get_sbin_path(command):\n \"\"\"Some distributions don't put sbin directories in $PATH\"\"\"\n path_candidates = [\"/sbin\", \"/usr/sbin\"]\n for candidate in path_candidates:\n command_path = os.path.join(candidate, command)\n if os.path.exists(command_path):\n return command_path\n\n @staticmethod\n def get_file_limits():\n return resource.getrlimit(resource.RLIMIT_NOFILE)\n\n def has_enough_file_descriptors(self):\n return self.hard_limit >= self.recommended_no_file_open\n\n @staticmethod\n def get_cpus():\n \"\"\"Parse the output of /proc/cpuinfo\"\"\"\n cpus = [{}]\n cpu_index = 0\n with open(\"/proc/cpuinfo\") as cpuinfo:\n for line in cpuinfo.readlines():\n if not line.strip():\n cpu_index += 1\n cpus.append({})\n continue\n key, value = line.split(\":\", 1)\n cpus[cpu_index][key.strip()] = value.strip()\n return [cpu for cpu in cpus if cpu]\n\n @staticmethod\n def get_drives():\n \"\"\"Return a list of drives with their filesystems\"\"\"\n try:\n output = subprocess.check_output([\"lsblk\", \"-f\", \"--json\"]).decode()\n except subprocess.CalledProcessError as ex:\n logger.error(\"Failed to get drive information: %s\", ex)\n return None\n return [\n drive for drive in json.loads(output)[\"blockdevices\"]\n if drive[\"fstype\"] != \"squashfs\"\n ]\n\n @staticmethod\n def get_ram_info():\n \"\"\"Return RAM information\"\"\"\n try:\n output = subprocess.check_output([\"free\"]).decode().split(\"\\n\")\n except subprocess.CalledProcessError as ex:\n logger.error(\"Failed to get RAM information: %s\", ex)\n return None\n columns = output[0].split()\n meminfo = {}\n for parts in [line.split() for line in output[1:] if line]:\n meminfo[parts[0].strip(\":\").lower()] = dict(zip(columns, parts[1:]))\n return meminfo\n\n @staticmethod\n def get_dist_info():\n \"\"\"Return distribution information\"\"\"\n return linux_distribution()\n\n @staticmethod\n def get_arch():\n \"\"\"Return the system architecture only if compatible\n with the supported architectures from the Lutris API\n \"\"\"\n machine = platform.machine()\n if \"64\" in machine:\n return \"x86_64\"\n if \"86\" in machine:\n return \"i386\"\n if \"armv7\" in machine:\n return \"armv7\"\n logger.warning(\"Unsupported architecture %s\", machine)\n\n @property\n def runtime_architectures(self):\n if self.arch == \"x86_64\":\n return [\"i386\", \"x86_64\"]\n return [\"i386\"]\n\n @property\n def requirements(self):\n return self.get_requirements()\n\n @property\n def critical_requirements(self):\n return self.get_requirements(include_optional=False)\n\n @property\n def 
lib_folders(self):\n return self.get_lib_folders()\n\n def get_fs_type_for_path(self, path):\n \"\"\"Return the filesystem type a given path uses\"\"\"\n path_drive = get_drive_for_path(path)\n for drive in self.get_drives():\n for partition in drive.get(\"children\", []):\n if \"/dev/%s\" % partition[\"name\"] == path_drive:\n return partition[\"fstype\"]\n\n def get_lib_folders(self):\n # Use ldconfig to locate the correct locations for system libs.\n _paths = [[], []]\n _candidates = (\n subprocess.Popen([self.get(\"ldconfig\"), '-p'],\n stdout=subprocess.PIPE,\n universal_newlines=True)\n ).communicate()[0].split('\\n')\n for req in self.requirements:\n for candidate in _candidates:\n for lib in SYSTEM_COMPONENTS[\"LIBRARIES\"][req]:\n if lib in candidate:\n if 'x86-64' in candidate:\n candidate = candidate.split(' => ')[1].split(lib)[0]\n if candidate not in _paths[1]:\n _paths[1].append(candidate)\n else:\n candidate = candidate.split(' => ')[1].split(lib)[0]\n if candidate not in _paths[0]:\n _paths[0].append(candidate)\n return _paths\n\n def get_glxinfo(self):\n \"\"\"Return a GlxInfo instance if the gfxinfo tool is available\"\"\"\n if not self.get(\"glxinfo\"):\n return\n _glxinfo = glxinfo.GlxInfo()\n if not hasattr(_glxinfo, \"display\"):\n logger.warning(\"Invalid glxinfo received\")\n return\n return _glxinfo\n\n def get_requirements(self, include_optional=True):\n \"\"\"Return used system requirements\"\"\"\n _requirements = self.required_components.copy()\n if include_optional:\n _requirements += self.optional_components\n if drivers.is_amd():\n _requirements.append(\"RADEON\")\n return _requirements\n\n def get(self, command):\n \"\"\"Return a system command path if available\"\"\"\n return self._cache[\"COMMANDS\"].get(command)\n\n def get_terminals(self):\n \"\"\"Return list of installed terminals\"\"\"\n return list(self._cache[\"TERMINALS\"].values())\n\n def get_soundfonts(self):\n \"\"\"Return path of available soundfonts\"\"\"\n return self._cache[\"SOUNDFONTS\"]\n\n def iter_lib_folders(self):\n \"\"\"Loop over existing 32/64 bit library folders\"\"\"\n for lib_paths in self.lib_folders:\n if self.arch != 'x86_64':\n # On non amd64 setups, only the first element is relevant\n lib_paths = [lib_paths[0]]\n yield lib_paths\n\n def populate_libraries(self):\n \"\"\"Populates the LIBRARIES cache with what is found on the system\"\"\"\n self._cache[\"LIBRARIES\"] = {}\n for arch in self.runtime_architectures:\n self._cache[\"LIBRARIES\"][arch] = defaultdict(list)\n for lib_paths in self.iter_lib_folders():\n for path in lib_paths:\n for req in self.requirements:\n for lib in SYSTEM_COMPONENTS[\"LIBRARIES\"][req]:\n for index, arch in enumerate(self.runtime_architectures):\n if os.path.exists(os.path.join(path, lib)):\n self._cache[\"LIBRARIES\"][arch][req].append(lib)\n\n def populate_sound_fonts(self):\n \"\"\"Populates the soundfont cache\"\"\"\n self._cache[\"SOUNDFONTS\"] = []\n for folder in self.soundfont_folders:\n if not os.path.exists(folder):\n continue\n for soundfont in os.listdir(folder):\n self._cache[\"SOUNDFONTS\"].append(soundfont)\n\n def get_missing_requirement_libs(self, req):\n \"\"\"Return a list of sets of missing libraries for each supported architecture\"\"\"\n required_libs = set(SYSTEM_COMPONENTS[\"LIBRARIES\"][req])\n return [\n list(required_libs - set(self._cache[\"LIBRARIES\"][arch][req]))\n for arch in self.runtime_architectures\n ]\n\n def get_missing_libs(self):\n \"\"\"Return a dictionary of missing libraries\"\"\"\n return {\n req: 
self.get_missing_requirement_libs(req)\n for req in self.requirements\n }\n\n def is_feature_supported(self, feature):\n \"\"\"Return whether the system has the necessary libs to support a feature\"\"\"\n return not self.get_missing_requirement_libs(feature)[0]\n\n\nLINUX_SYSTEM = LinuxSystem()\n\n\ndef gather_system_info():\n \"\"\"Get all system information in a single data structure\"\"\"\n system_info = {}\n if drivers.is_nvidia():\n system_info[\"nvidia_driver\"] = drivers.get_nvidia_driver_info()\n system_info[\"nvidia_gpus\"] = [\n drivers.get_nvidia_gpu_info(gpu_id)\n for gpu_id in drivers.get_nvidia_gpu_ids()\n ]\n system_info[\"gpus\"] = [drivers.get_gpu_info(gpu) for gpu in drivers.get_gpus()]\n system_info[\"env\"] = dict(os.environ)\n system_info[\"missing_libs\"] = LINUX_SYSTEM.get_missing_libs()\n system_info[\"cpus\"] = LINUX_SYSTEM.get_cpus()\n system_info[\"drives\"] = LINUX_SYSTEM.get_drives()\n system_info[\"ram\"] = LINUX_SYSTEM.get_ram_info()\n system_info[\"dist\"] = LINUX_SYSTEM.get_dist_info()\n system_info[\"glxinfo\"] = glxinfo.GlxInfo().as_dict()\n return system_info\n", "path": "lutris/util/linux.py"}], "after_files": [{"content": "\"\"\"Linux specific platform code\"\"\"\nimport os\nimport shutil\nimport sys\nimport json\nimport platform\nimport resource\nimport subprocess\nfrom collections import defaultdict\nfrom lutris.vendor.distro import linux_distribution\nfrom lutris.util.graphics import drivers\nfrom lutris.util.graphics import glxinfo\nfrom lutris.util.log import logger\nfrom lutris.util.disks import get_drive_for_path\n\nSYSTEM_COMPONENTS = {\n \"COMMANDS\": [\n \"xrandr\",\n \"fuser\",\n \"glxinfo\",\n \"vulkaninfo\",\n \"optirun\",\n \"primusrun\",\n \"pvkrun\",\n \"xboxdrv\",\n \"pulseaudio\",\n \"lsi-steam\",\n \"fuser\",\n \"7z\",\n \"gtk-update-icon-cache\",\n \"lspci\",\n \"xgamma\",\n \"ldconfig\",\n \"strangle\",\n \"Xephyr\",\n \"nvidia-smi\",\n \"wine\",\n \"fluidsynth\",\n ],\n \"TERMINALS\": [\n \"xterm\",\n \"gnome-terminal\",\n \"konsole\",\n \"xfce4-terminal\",\n \"pantheon-terminal\",\n \"terminator\",\n \"mate-terminal\",\n \"urxvt\",\n \"cool-retro-term\",\n \"Eterm\",\n \"guake\",\n \"lilyterm\",\n \"lxterminal\",\n \"roxterm\",\n \"rxvt\",\n \"aterm\",\n \"sakura\",\n \"st\",\n \"terminology\",\n \"termite\",\n \"tilix\",\n \"wterm\",\n \"kitty\",\n \"yuakuake\",\n \"qterminal\",\n ],\n \"LIBRARIES\": {\n \"OPENGL\": [\n \"libGL.so.1\",\n ],\n \"VULKAN\": [\n \"libvulkan.so.1\",\n ],\n \"WINE\": [\n \"libsqlite3.so.0\"\n ],\n \"RADEON\": [\n \"libvulkan_radeon.so\"\n ],\n \"GAMEMODE\": [\n \"libgamemodeauto.so\"\n ]\n }\n}\n\n\nclass LinuxSystem:\n \"\"\"Global cache for system commands\"\"\"\n _cache = {}\n\n soundfont_folders = [\n '/usr/share/sounds/sf2',\n '/usr/share/soundfonts',\n ]\n\n recommended_no_file_open = 524288\n required_components = [\"OPENGL\"]\n optional_components = [\"VULKAN\", \"WINE\", \"GAMEMODE\"]\n\n def __init__(self):\n for key in (\"COMMANDS\", \"TERMINALS\"):\n self._cache[key] = {}\n for command in SYSTEM_COMPONENTS[key]:\n command_path = shutil.which(command)\n if not command_path:\n command_path = self.get_sbin_path(command)\n if command_path:\n self._cache[key][command] = command_path\n\n # Detect if system is 64bit capable\n self.is_64_bit = sys.maxsize > 2 ** 32\n self.arch = self.get_arch()\n\n self.populate_libraries()\n self.populate_sound_fonts()\n self.soft_limit, self.hard_limit = self.get_file_limits()\n self.glxinfo = self.get_glxinfo()\n\n @staticmethod\n def get_sbin_path(command):\n 
\"\"\"Some distributions don't put sbin directories in $PATH\"\"\"\n path_candidates = [\"/sbin\", \"/usr/sbin\"]\n for candidate in path_candidates:\n command_path = os.path.join(candidate, command)\n if os.path.exists(command_path):\n return command_path\n\n @staticmethod\n def get_file_limits():\n return resource.getrlimit(resource.RLIMIT_NOFILE)\n\n def has_enough_file_descriptors(self):\n return self.hard_limit >= self.recommended_no_file_open\n\n @staticmethod\n def get_cpus():\n \"\"\"Parse the output of /proc/cpuinfo\"\"\"\n cpus = [{}]\n cpu_index = 0\n with open(\"/proc/cpuinfo\") as cpuinfo:\n for line in cpuinfo.readlines():\n if not line.strip():\n cpu_index += 1\n cpus.append({})\n continue\n key, value = line.split(\":\", 1)\n cpus[cpu_index][key.strip()] = value.strip()\n return [cpu for cpu in cpus if cpu]\n\n @staticmethod\n def get_drives():\n \"\"\"Return a list of drives with their filesystems\"\"\"\n try:\n output = subprocess.check_output([\"lsblk\", \"-f\", \"--json\"]).decode()\n except subprocess.CalledProcessError as ex:\n logger.error(\"Failed to get drive information: %s\", ex)\n return None\n return [\n drive for drive in json.loads(output)[\"blockdevices\"]\n if drive[\"fstype\"] != \"squashfs\"\n ]\n\n @staticmethod\n def get_ram_info():\n \"\"\"Return RAM information\"\"\"\n try:\n output = subprocess.check_output([\"free\"]).decode().split(\"\\n\")\n except subprocess.CalledProcessError as ex:\n logger.error(\"Failed to get RAM information: %s\", ex)\n return None\n columns = output[0].split()\n meminfo = {}\n for parts in [line.split() for line in output[1:] if line]:\n meminfo[parts[0].strip(\":\").lower()] = dict(zip(columns, parts[1:]))\n return meminfo\n\n @staticmethod\n def get_dist_info():\n \"\"\"Return distribution information\"\"\"\n return linux_distribution()\n\n @staticmethod\n def get_arch():\n \"\"\"Return the system architecture only if compatible\n with the supported architectures from the Lutris API\n \"\"\"\n machine = platform.machine()\n if \"64\" in machine:\n return \"x86_64\"\n if \"86\" in machine:\n return \"i386\"\n if \"armv7\" in machine:\n return \"armv7\"\n logger.warning(\"Unsupported architecture %s\", machine)\n\n @property\n def runtime_architectures(self):\n if self.arch == \"x86_64\":\n return [\"i386\", \"x86_64\"]\n return [\"i386\"]\n\n @property\n def requirements(self):\n return self.get_requirements()\n\n @property\n def critical_requirements(self):\n return self.get_requirements(include_optional=False)\n\n @property\n def lib_folders(self):\n return self.get_lib_folders()\n\n def get_fs_type_for_path(self, path):\n \"\"\"Return the filesystem type a given path uses\"\"\"\n path_drive = get_drive_for_path(path)\n for drive in self.get_drives():\n for partition in drive.get(\"children\", []):\n if \"/dev/%s\" % partition[\"name\"] == path_drive:\n return partition[\"fstype\"]\n\n def get_lib_folders(self):\n # Use ldconfig to locate the correct locations for system libs. 
Sorting is done to preserve ordering for the distros that care.\n _cand_dict = {}\n for candidate in subprocess.run([self.get(\"ldconfig\"), '-p'], stdout=subprocess.PIPE, universal_newlines=True).stdout.split('\\n'):\n if '=>' in candidate:\n candidate = candidate.split(' => ')\n if candidate[0] not in _cand_dict:\n _cand_dict.update({candidate[0]: [candidate[1].rsplit('/', 1)[0]]})\n else:\n _cand_dict[candidate[0]].append(candidate[1].rsplit('/', 1)[0])\n _paths = [[],[]]\n for candidate, path in sorted(_cand_dict.items(), key = lambda cand: len(cand[1]), reverse = True):\n if 'x86-64' in candidate:\n if path[0] not in _paths[1]:\n _paths[1].append(path[0])\n elif 'libc' in candidate:\n if path[0] not in _paths[0]:\n _paths[0].append(path[0])\n return _paths\n\n\n def get_glxinfo(self):\n \"\"\"Return a GlxInfo instance if the gfxinfo tool is available\"\"\"\n if not self.get(\"glxinfo\"):\n return\n _glxinfo = glxinfo.GlxInfo()\n if not hasattr(_glxinfo, \"display\"):\n logger.warning(\"Invalid glxinfo received\")\n return\n return _glxinfo\n\n def get_requirements(self, include_optional=True):\n \"\"\"Return used system requirements\"\"\"\n _requirements = self.required_components.copy()\n if include_optional:\n _requirements += self.optional_components\n if drivers.is_amd():\n _requirements.append(\"RADEON\")\n return _requirements\n\n def get(self, command):\n \"\"\"Return a system command path if available\"\"\"\n return self._cache[\"COMMANDS\"].get(command)\n\n def get_terminals(self):\n \"\"\"Return list of installed terminals\"\"\"\n return list(self._cache[\"TERMINALS\"].values())\n\n def get_soundfonts(self):\n \"\"\"Return path of available soundfonts\"\"\"\n return self._cache[\"SOUNDFONTS\"]\n\n def iter_lib_folders(self):\n \"\"\"Loop over existing 32/64 bit library folders\"\"\"\n for lib_paths in self.lib_folders:\n if self.arch != 'x86_64':\n # On non amd64 setups, only the first element is relevant\n lib_paths = [lib_paths[0]]\n yield lib_paths\n\n def populate_libraries(self):\n \"\"\"Populates the LIBRARIES cache with what is found on the system\"\"\"\n self._cache[\"LIBRARIES\"] = {}\n for arch in self.runtime_architectures:\n self._cache[\"LIBRARIES\"][arch] = defaultdict(list)\n for lib_paths in self.iter_lib_folders():\n for path in lib_paths:\n for req in self.requirements:\n for lib in SYSTEM_COMPONENTS[\"LIBRARIES\"][req]:\n for index, arch in enumerate(self.runtime_architectures):\n if os.path.exists(os.path.join(path, lib)):\n self._cache[\"LIBRARIES\"][arch][req].append(lib)\n\n def populate_sound_fonts(self):\n \"\"\"Populates the soundfont cache\"\"\"\n self._cache[\"SOUNDFONTS\"] = []\n for folder in self.soundfont_folders:\n if not os.path.exists(folder):\n continue\n for soundfont in os.listdir(folder):\n self._cache[\"SOUNDFONTS\"].append(soundfont)\n\n def get_missing_requirement_libs(self, req):\n \"\"\"Return a list of sets of missing libraries for each supported architecture\"\"\"\n required_libs = set(SYSTEM_COMPONENTS[\"LIBRARIES\"][req])\n return [\n list(required_libs - set(self._cache[\"LIBRARIES\"][arch][req]))\n for arch in self.runtime_architectures\n ]\n\n def get_missing_libs(self):\n \"\"\"Return a dictionary of missing libraries\"\"\"\n return {\n req: self.get_missing_requirement_libs(req)\n for req in self.requirements\n }\n\n def is_feature_supported(self, feature):\n \"\"\"Return whether the system has the necessary libs to support a feature\"\"\"\n return not self.get_missing_requirement_libs(feature)[0]\n\n\nLINUX_SYSTEM = 
LinuxSystem()\n\n\ndef gather_system_info():\n \"\"\"Get all system information in a single data structure\"\"\"\n system_info = {}\n if drivers.is_nvidia():\n system_info[\"nvidia_driver\"] = drivers.get_nvidia_driver_info()\n system_info[\"nvidia_gpus\"] = [\n drivers.get_nvidia_gpu_info(gpu_id)\n for gpu_id in drivers.get_nvidia_gpu_ids()\n ]\n system_info[\"gpus\"] = [drivers.get_gpu_info(gpu) for gpu in drivers.get_gpus()]\n system_info[\"env\"] = dict(os.environ)\n system_info[\"missing_libs\"] = LINUX_SYSTEM.get_missing_libs()\n system_info[\"cpus\"] = LINUX_SYSTEM.get_cpus()\n system_info[\"drives\"] = LINUX_SYSTEM.get_drives()\n system_info[\"ram\"] = LINUX_SYSTEM.get_ram_info()\n system_info[\"dist\"] = LINUX_SYSTEM.get_dist_info()\n system_info[\"glxinfo\"] = glxinfo.GlxInfo().as_dict()\n return system_info\n", "path": "lutris/util/linux.py"}]}
| 3,898 | 573 |
gh_patches_debug_62449
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-python-296
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tornado integration does not capture formdata
There is no obvious way to get just formdata from Tornado's request object (not mixed with other request params), but I haven't done a lot of research on that.
--- END ISSUE ---
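The split the reporter is asking about does exist on Tornado's request object: `request.arguments` merges query-string and body parameters, while `request.query_arguments` and `request.body_arguments` hold them separately (argument names are native strings, values are lists of byte strings). A short handler sketch — the class name is made up for illustration:

```python
# Handler sketch (Tornado 5+); the class name is made up. body_arguments is
# the piece the integration needs: form fields only, as {name: [bytes, ...]}.
from tornado.web import RequestHandler


class EchoFormHandler(RequestHandler):
    def post(self):
        merged = self.request.arguments          # query-string + body, mixed
        form_only = self.request.body_arguments  # body/form fields only
        self.set_header("X-Param-Count", str(len(merged)))
        self.write({
            name: [value.decode("latin1", "replace") for value in values]
            for name, values in form_only.items()
        })
```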
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/tornado.py`
Content:
```
1 import sys
2 import weakref
3 from inspect import iscoroutinefunction
4
5 from sentry_sdk.hub import Hub, _should_send_default_pii
6 from sentry_sdk.utils import (
7 event_from_exception,
8 capture_internal_exceptions,
9 transaction_from_function,
10 )
11 from sentry_sdk.integrations import Integration
12 from sentry_sdk.integrations._wsgi_common import (
13 RequestExtractor,
14 _filter_headers,
15 _is_json_content_type,
16 )
17 from sentry_sdk.integrations.logging import ignore_logger
18
19 from tornado.web import RequestHandler, HTTPError # type: ignore
20 from tornado.gen import coroutine # type: ignore
21
22 if False:
23 from typing import Any
24 from typing import List
25 from typing import Optional
26 from typing import Dict
27 from typing import Callable
28
29
30 class TornadoIntegration(Integration):
31 identifier = "tornado"
32
33 @staticmethod
34 def setup_once():
35 # type: () -> None
36 import tornado # type: ignore
37
38 tornado_version = getattr(tornado, "version_info", None)
39 if tornado_version is None or tornado_version < (5, 0):
40 raise RuntimeError("Tornado 5+ required")
41
42 if sys.version_info < (3, 7):
43 # Tornado is async. We better have contextvars or we're going to leak
44 # state between requests.
45 raise RuntimeError(
46 "The tornado integration for Sentry requires Python 3.7+"
47 )
48
49 ignore_logger("tornado.application")
50 ignore_logger("tornado.access")
51
52 old_execute = RequestHandler._execute
53
54 awaitable = iscoroutinefunction(old_execute)
55
56 if awaitable:
57 # Starting Tornado 6 RequestHandler._execute method is a standard Python coroutine (async/await)
58 # In that case our method should be a coroutine function too
59 async def sentry_execute_request_handler(self, *args, **kwargs):
60 # type: (Any, *List, **Any) -> Any
61 hub = Hub.current
62 integration = hub.get_integration(TornadoIntegration)
63 if integration is None:
64 return await old_execute(self, *args, **kwargs)
65
66 weak_handler = weakref.ref(self)
67
68 with Hub(hub) as hub:
69 with hub.configure_scope() as scope:
70 scope.add_event_processor(_make_event_processor(weak_handler))
71 return await old_execute(self, *args, **kwargs)
72
73 else:
74
75 @coroutine # type: ignore
76 def sentry_execute_request_handler(self, *args, **kwargs):
77 hub = Hub.current
78 integration = hub.get_integration(TornadoIntegration)
79 if integration is None:
80 return old_execute(self, *args, **kwargs)
81
82 weak_handler = weakref.ref(self)
83
84 with Hub(hub) as hub:
85 with hub.configure_scope() as scope:
86 scope.add_event_processor(_make_event_processor(weak_handler))
87 result = yield from old_execute(self, *args, **kwargs)
88 return result
89
90 RequestHandler._execute = sentry_execute_request_handler
91
92 old_log_exception = RequestHandler.log_exception
93
94 def sentry_log_exception(self, ty, value, tb, *args, **kwargs):
95 # type: (Any, type, BaseException, Any, *Any, **Any) -> Optional[Any]
96 _capture_exception(ty, value, tb)
97 return old_log_exception(self, ty, value, tb, *args, **kwargs)
98
99 RequestHandler.log_exception = sentry_log_exception
100
101
102 def _capture_exception(ty, value, tb):
103 # type: (type, BaseException, Any) -> None
104 hub = Hub.current
105 if hub.get_integration(TornadoIntegration) is None:
106 return
107 if isinstance(value, HTTPError):
108 return
109
110 event, hint = event_from_exception(
111 (ty, value, tb),
112 client_options=hub.client.options,
113 mechanism={"type": "tornado", "handled": False},
114 )
115
116 hub.capture_event(event, hint=hint)
117
118
119 def _make_event_processor(weak_handler):
120 # type: (Callable[[], RequestHandler]) -> Callable
121 def tornado_processor(event, hint):
122 # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]
123 handler = weak_handler()
124 if handler is None:
125 return event
126
127 request = handler.request
128
129 with capture_internal_exceptions():
130 method = getattr(handler, handler.request.method.lower())
131 event["transaction"] = transaction_from_function(method)
132
133 with capture_internal_exceptions():
134 extractor = TornadoRequestExtractor(request)
135 extractor.extract_into_event(event)
136
137 request_info = event["request"]
138
139 request_info["url"] = "%s://%s%s" % (
140 request.protocol,
141 request.host,
142 request.path,
143 )
144
145 request_info["query_string"] = request.query
146 request_info["method"] = request.method
147 request_info["env"] = {"REMOTE_ADDR": request.remote_ip}
148 request_info["headers"] = _filter_headers(dict(request.headers))
149
150 with capture_internal_exceptions():
151 if handler.current_user and _should_send_default_pii():
152 event.setdefault("user", {})["is_authenticated"] = True
153
154 return event
155
156 return tornado_processor
157
158
159 class TornadoRequestExtractor(RequestExtractor):
160 def content_length(self):
161 # type: () -> int
162 if self.request.body is None:
163 return 0
164 return len(self.request.body)
165
166 def cookies(self):
167 # type: () -> Dict
168 return {k: v.value for k, v in self.request.cookies.items()}
169
170 def raw_data(self):
171 # type: () -> bytes
172 return self.request.body
173
174 def form(self):
175 # type: () -> Optional[Any]
176 # TODO: Where to get formdata and nothing else?
177 return None
178
179 def is_json(self):
180 # type: () -> bool
181 return _is_json_content_type(self.request.headers.get("content-type"))
182
183 def files(self):
184 # type: () -> Dict
185 return {k: v[0] for k, v in self.request.files.items() if v}
186
187 def size_of_file(self, file):
188 return len(file.body or ())
189
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sentry_sdk/integrations/tornado.py b/sentry_sdk/integrations/tornado.py
--- a/sentry_sdk/integrations/tornado.py
+++ b/sentry_sdk/integrations/tornado.py
@@ -173,8 +173,10 @@
def form(self):
# type: () -> Optional[Any]
- # TODO: Where to get formdata and nothing else?
- return None
+ return {
+ k: [v.decode("latin1", "replace") for v in vs]
+ for k, vs in self.request.body_arguments.items()
+ }
def is_json(self):
# type: () -> bool
|
{"golden_diff": "diff --git a/sentry_sdk/integrations/tornado.py b/sentry_sdk/integrations/tornado.py\n--- a/sentry_sdk/integrations/tornado.py\n+++ b/sentry_sdk/integrations/tornado.py\n@@ -173,8 +173,10 @@\n \n def form(self):\n # type: () -> Optional[Any]\n- # TODO: Where to get formdata and nothing else?\n- return None\n+ return {\n+ k: [v.decode(\"latin1\", \"replace\") for v in vs]\n+ for k, vs in self.request.body_arguments.items()\n+ }\n \n def is_json(self):\n # type: () -> bool\n", "issue": "Tornado integration does not capture formdata\nThere is no obvious way to get just formdata from Tornado's request object (not mixed with other request params), but I haven't done a lot of research on that.\n", "before_files": [{"content": "import sys\nimport weakref\nfrom inspect import iscoroutinefunction\n\nfrom sentry_sdk.hub import Hub, _should_send_default_pii\nfrom sentry_sdk.utils import (\n event_from_exception,\n capture_internal_exceptions,\n transaction_from_function,\n)\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk.integrations._wsgi_common import (\n RequestExtractor,\n _filter_headers,\n _is_json_content_type,\n)\nfrom sentry_sdk.integrations.logging import ignore_logger\n\nfrom tornado.web import RequestHandler, HTTPError # type: ignore\nfrom tornado.gen import coroutine # type: ignore\n\nif False:\n from typing import Any\n from typing import List\n from typing import Optional\n from typing import Dict\n from typing import Callable\n\n\nclass TornadoIntegration(Integration):\n identifier = \"tornado\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n import tornado # type: ignore\n\n tornado_version = getattr(tornado, \"version_info\", None)\n if tornado_version is None or tornado_version < (5, 0):\n raise RuntimeError(\"Tornado 5+ required\")\n\n if sys.version_info < (3, 7):\n # Tornado is async. 
We better have contextvars or we're going to leak\n # state between requests.\n raise RuntimeError(\n \"The tornado integration for Sentry requires Python 3.7+\"\n )\n\n ignore_logger(\"tornado.application\")\n ignore_logger(\"tornado.access\")\n\n old_execute = RequestHandler._execute\n\n awaitable = iscoroutinefunction(old_execute)\n\n if awaitable:\n # Starting Tornado 6 RequestHandler._execute method is a standard Python coroutine (async/await)\n # In that case our method should be a coroutine function too\n async def sentry_execute_request_handler(self, *args, **kwargs):\n # type: (Any, *List, **Any) -> Any\n hub = Hub.current\n integration = hub.get_integration(TornadoIntegration)\n if integration is None:\n return await old_execute(self, *args, **kwargs)\n\n weak_handler = weakref.ref(self)\n\n with Hub(hub) as hub:\n with hub.configure_scope() as scope:\n scope.add_event_processor(_make_event_processor(weak_handler))\n return await old_execute(self, *args, **kwargs)\n\n else:\n\n @coroutine # type: ignore\n def sentry_execute_request_handler(self, *args, **kwargs):\n hub = Hub.current\n integration = hub.get_integration(TornadoIntegration)\n if integration is None:\n return old_execute(self, *args, **kwargs)\n\n weak_handler = weakref.ref(self)\n\n with Hub(hub) as hub:\n with hub.configure_scope() as scope:\n scope.add_event_processor(_make_event_processor(weak_handler))\n result = yield from old_execute(self, *args, **kwargs)\n return result\n\n RequestHandler._execute = sentry_execute_request_handler\n\n old_log_exception = RequestHandler.log_exception\n\n def sentry_log_exception(self, ty, value, tb, *args, **kwargs):\n # type: (Any, type, BaseException, Any, *Any, **Any) -> Optional[Any]\n _capture_exception(ty, value, tb)\n return old_log_exception(self, ty, value, tb, *args, **kwargs)\n\n RequestHandler.log_exception = sentry_log_exception\n\n\ndef _capture_exception(ty, value, tb):\n # type: (type, BaseException, Any) -> None\n hub = Hub.current\n if hub.get_integration(TornadoIntegration) is None:\n return\n if isinstance(value, HTTPError):\n return\n\n event, hint = event_from_exception(\n (ty, value, tb),\n client_options=hub.client.options,\n mechanism={\"type\": \"tornado\", \"handled\": False},\n )\n\n hub.capture_event(event, hint=hint)\n\n\ndef _make_event_processor(weak_handler):\n # type: (Callable[[], RequestHandler]) -> Callable\n def tornado_processor(event, hint):\n # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]\n handler = weak_handler()\n if handler is None:\n return event\n\n request = handler.request\n\n with capture_internal_exceptions():\n method = getattr(handler, handler.request.method.lower())\n event[\"transaction\"] = transaction_from_function(method)\n\n with capture_internal_exceptions():\n extractor = TornadoRequestExtractor(request)\n extractor.extract_into_event(event)\n\n request_info = event[\"request\"]\n\n request_info[\"url\"] = \"%s://%s%s\" % (\n request.protocol,\n request.host,\n request.path,\n )\n\n request_info[\"query_string\"] = request.query\n request_info[\"method\"] = request.method\n request_info[\"env\"] = {\"REMOTE_ADDR\": request.remote_ip}\n request_info[\"headers\"] = _filter_headers(dict(request.headers))\n\n with capture_internal_exceptions():\n if handler.current_user and _should_send_default_pii():\n event.setdefault(\"user\", {})[\"is_authenticated\"] = True\n\n return event\n\n return tornado_processor\n\n\nclass TornadoRequestExtractor(RequestExtractor):\n def content_length(self):\n # type: () -> int\n 
if self.request.body is None:\n return 0\n return len(self.request.body)\n\n def cookies(self):\n # type: () -> Dict\n return {k: v.value for k, v in self.request.cookies.items()}\n\n def raw_data(self):\n # type: () -> bytes\n return self.request.body\n\n def form(self):\n # type: () -> Optional[Any]\n # TODO: Where to get formdata and nothing else?\n return None\n\n def is_json(self):\n # type: () -> bool\n return _is_json_content_type(self.request.headers.get(\"content-type\"))\n\n def files(self):\n # type: () -> Dict\n return {k: v[0] for k, v in self.request.files.items() if v}\n\n def size_of_file(self, file):\n return len(file.body or ())\n", "path": "sentry_sdk/integrations/tornado.py"}], "after_files": [{"content": "import sys\nimport weakref\nfrom inspect import iscoroutinefunction\n\nfrom sentry_sdk.hub import Hub, _should_send_default_pii\nfrom sentry_sdk.utils import (\n event_from_exception,\n capture_internal_exceptions,\n transaction_from_function,\n)\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk.integrations._wsgi_common import (\n RequestExtractor,\n _filter_headers,\n _is_json_content_type,\n)\nfrom sentry_sdk.integrations.logging import ignore_logger\n\nfrom tornado.web import RequestHandler, HTTPError # type: ignore\nfrom tornado.gen import coroutine # type: ignore\n\nif False:\n from typing import Any\n from typing import List\n from typing import Optional\n from typing import Dict\n from typing import Callable\n\n\nclass TornadoIntegration(Integration):\n identifier = \"tornado\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n import tornado # type: ignore\n\n tornado_version = getattr(tornado, \"version_info\", None)\n if tornado_version is None or tornado_version < (5, 0):\n raise RuntimeError(\"Tornado 5+ required\")\n\n if sys.version_info < (3, 7):\n # Tornado is async. 
We better have contextvars or we're going to leak\n # state between requests.\n raise RuntimeError(\n \"The tornado integration for Sentry requires Python 3.7+\"\n )\n\n ignore_logger(\"tornado.application\")\n ignore_logger(\"tornado.access\")\n\n old_execute = RequestHandler._execute\n\n awaitable = iscoroutinefunction(old_execute)\n\n if awaitable:\n # Starting Tornado 6 RequestHandler._execute method is a standard Python coroutine (async/await)\n # In that case our method should be a coroutine function too\n async def sentry_execute_request_handler(self, *args, **kwargs):\n # type: (Any, *List, **Any) -> Any\n hub = Hub.current\n integration = hub.get_integration(TornadoIntegration)\n if integration is None:\n return await old_execute(self, *args, **kwargs)\n\n weak_handler = weakref.ref(self)\n\n with Hub(hub) as hub:\n with hub.configure_scope() as scope:\n scope.add_event_processor(_make_event_processor(weak_handler))\n return await old_execute(self, *args, **kwargs)\n\n else:\n\n @coroutine # type: ignore\n def sentry_execute_request_handler(self, *args, **kwargs):\n hub = Hub.current\n integration = hub.get_integration(TornadoIntegration)\n if integration is None:\n return old_execute(self, *args, **kwargs)\n\n weak_handler = weakref.ref(self)\n\n with Hub(hub) as hub:\n with hub.configure_scope() as scope:\n scope.add_event_processor(_make_event_processor(weak_handler))\n result = yield from old_execute(self, *args, **kwargs)\n return result\n\n RequestHandler._execute = sentry_execute_request_handler\n\n old_log_exception = RequestHandler.log_exception\n\n def sentry_log_exception(self, ty, value, tb, *args, **kwargs):\n # type: (Any, type, BaseException, Any, *Any, **Any) -> Optional[Any]\n _capture_exception(ty, value, tb)\n return old_log_exception(self, ty, value, tb, *args, **kwargs)\n\n RequestHandler.log_exception = sentry_log_exception\n\n\ndef _capture_exception(ty, value, tb):\n # type: (type, BaseException, Any) -> None\n hub = Hub.current\n if hub.get_integration(TornadoIntegration) is None:\n return\n if isinstance(value, HTTPError):\n return\n\n event, hint = event_from_exception(\n (ty, value, tb),\n client_options=hub.client.options,\n mechanism={\"type\": \"tornado\", \"handled\": False},\n )\n\n hub.capture_event(event, hint=hint)\n\n\ndef _make_event_processor(weak_handler):\n # type: (Callable[[], RequestHandler]) -> Callable\n def tornado_processor(event, hint):\n # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]\n handler = weak_handler()\n if handler is None:\n return event\n\n request = handler.request\n\n with capture_internal_exceptions():\n method = getattr(handler, handler.request.method.lower())\n event[\"transaction\"] = transaction_from_function(method)\n\n with capture_internal_exceptions():\n extractor = TornadoRequestExtractor(request)\n extractor.extract_into_event(event)\n\n request_info = event[\"request\"]\n\n request_info[\"url\"] = \"%s://%s%s\" % (\n request.protocol,\n request.host,\n request.path,\n )\n\n request_info[\"query_string\"] = request.query\n request_info[\"method\"] = request.method\n request_info[\"env\"] = {\"REMOTE_ADDR\": request.remote_ip}\n request_info[\"headers\"] = _filter_headers(dict(request.headers))\n\n with capture_internal_exceptions():\n if handler.current_user and _should_send_default_pii():\n event.setdefault(\"user\", {})[\"is_authenticated\"] = True\n\n return event\n\n return tornado_processor\n\n\nclass TornadoRequestExtractor(RequestExtractor):\n def content_length(self):\n # type: () -> int\n 
if self.request.body is None:\n return 0\n return len(self.request.body)\n\n def cookies(self):\n # type: () -> Dict\n return {k: v.value for k, v in self.request.cookies.items()}\n\n def raw_data(self):\n # type: () -> bytes\n return self.request.body\n\n def form(self):\n # type: () -> Optional[Any]\n return {\n k: [v.decode(\"latin1\", \"replace\") for v in vs]\n for k, vs in self.request.body_arguments.items()\n }\n\n def is_json(self):\n # type: () -> bool\n return _is_json_content_type(self.request.headers.get(\"content-type\"))\n\n def files(self):\n # type: () -> Dict\n return {k: v[0] for k, v in self.request.files.items() if v}\n\n def size_of_file(self, file):\n return len(file.body or ())\n", "path": "sentry_sdk/integrations/tornado.py"}]}
| 2,106 | 151 |
gh_patches_debug_39438
|
rasdani/github-patches
|
git_diff
|
aws-powertools__powertools-lambda-python-2004
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`BaseSegment` defines `put_annotation` as having return type `NoReturn`
### Expected Behaviour
The return type should be `None`. `NoReturn` means that the function never returns (e.g. it always throws an exception) and makes type checkers (e.g. mypy) mark all code following `put_annotation` as unreachable.
Also applies to `put_metadata`, `patch`, and `patch_all`.
### Current Behaviour
These methods break type checking in any module where they're used, because all code following a call to e.g. `put_annotation` is marked as unreachable.
### Code snippet
```python
with Tracer().provider.in_subsegment('test') as subsegment:
subsegment.put_metadata('x', 1)
print("hi") # marked as unreachable
```
### Possible Solution
Change the return type to `None`.
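A minimal sketch (placeholder functions, not the Powertools API) of how mypy with `--warn-unreachable` treats the two annotations:
```python
from typing import NoReturn


def put_annotation_noreturn(key: str, value: object) -> NoReturn:
    # NoReturn tells the type checker that a call to this never returns.
    raise NotImplementedError


def put_annotation_none(key: str, value: object) -> None:
    # None tells the type checker that execution continues after the call.
    ...


def with_noreturn() -> None:
    put_annotation_noreturn("x", 1)
    print("hi")  # mypy: statement is unreachable


def with_none() -> None:
    put_annotation_none("x", 1)
    print("hi")  # reachable, no warning
```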
### Steps to Reproduce
Use the code snippet above and run `mypy` on the file.
### AWS Lambda Powertools for Python version
latest
### AWS Lambda function runtime
3.9
### Packaging format used
PyPi
### Debugging logs
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `aws_lambda_powertools/tracing/base.py`
Content:
```
1 import abc
2 import numbers
3 import traceback
4 from contextlib import contextmanager
5 from typing import Any, Generator, List, NoReturn, Optional, Sequence, Union
6
7
8 class BaseSegment(abc.ABC):
9 """Holds common properties and methods on segment and subsegment."""
10
11 @abc.abstractmethod
12 def close(self, end_time: Optional[int] = None):
13 """Close the trace entity by setting `end_time`
14 and flip the in progress flag to False.
15
16 Parameters
17 ----------
18 end_time: int
19 Time in epoch seconds, by default current time will be used.
20 """
21
22 @abc.abstractmethod
23 def add_subsegment(self, subsegment: Any):
24 """Add input subsegment as a child subsegment."""
25
26 @abc.abstractmethod
27 def remove_subsegment(self, subsegment: Any):
28 """Remove input subsegment from child subsegments."""
29
30 @abc.abstractmethod
31 def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> NoReturn:
32 """Annotate segment or subsegment with a key-value pair.
33
34 Note: Annotations will be indexed for later search query.
35
36 Parameters
37 ----------
38 key: str
39 Metadata key
40 value: Union[str, numbers.Number, bool]
41 Annotation value
42 """
43
44 @abc.abstractmethod
45 def put_metadata(self, key: str, value: Any, namespace: str = "default") -> NoReturn:
46 """Add metadata to segment or subsegment. Metadata is not indexed
47 but can be later retrieved by BatchGetTraces API.
48
49 Parameters
50 ----------
51 key: str
52 Metadata key
53 value: Any
54 Any object that can be serialized into a JSON string
55 namespace: Set[str]
56 Metadata namespace, by default 'default'
57 """
58
59 @abc.abstractmethod
60 def add_exception(self, exception: BaseException, stack: List[traceback.StackSummary], remote: bool = False):
61 """Add an exception to trace entities.
62
63 Parameters
64 ----------
65 exception: Exception
66 Caught exception
67 stack: List[traceback.StackSummary]
68 List of traceback summaries
69
70 Output from `traceback.extract_stack()`.
71 remote: bool
72 Whether it's a client error (False) or downstream service error (True), by default False
73 """
74
75
76 class BaseProvider(abc.ABC):
77 @abc.abstractmethod
78 @contextmanager
79 def in_subsegment(self, name=None, **kwargs) -> Generator[BaseSegment, None, None]:
80 """Return a subsegment context manger.
81
82 Parameters
83 ----------
84 name: str
85 Subsegment name
86 kwargs: Optional[dict]
87 Optional parameters to be propagated to segment
88 """
89
90 @abc.abstractmethod
91 @contextmanager
92 def in_subsegment_async(self, name=None, **kwargs) -> Generator[BaseSegment, None, None]:
93 """Return a subsegment async context manger.
94
95 Parameters
96 ----------
97 name: str
98 Subsegment name
99 kwargs: Optional[dict]
100 Optional parameters to be propagated to segment
101 """
102
103 @abc.abstractmethod
104 def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> NoReturn:
105 """Annotate current active trace entity with a key-value pair.
106
107 Note: Annotations will be indexed for later search query.
108
109 Parameters
110 ----------
111 key: str
112 Metadata key
113 value: Union[str, numbers.Number, bool]
114 Annotation value
115 """
116
117 @abc.abstractmethod
118 def put_metadata(self, key: str, value: Any, namespace: str = "default") -> NoReturn:
119 """Add metadata to the current active trace entity.
120
121 Note: Metadata is not indexed but can be later retrieved by BatchGetTraces API.
122
123 Parameters
124 ----------
125 key: str
126 Metadata key
127 value: Any
128 Any object that can be serialized into a JSON string
129 namespace: Set[str]
130 Metadata namespace, by default 'default'
131 """
132
133 @abc.abstractmethod
134 def patch(self, modules: Sequence[str]) -> NoReturn:
135 """Instrument a set of supported libraries
136
137 Parameters
138 ----------
139 modules: Set[str]
140 Set of modules to be patched
141 """
142
143 @abc.abstractmethod
144 def patch_all(self) -> NoReturn:
145 """Instrument all supported libraries"""
146
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/aws_lambda_powertools/tracing/base.py b/aws_lambda_powertools/tracing/base.py
--- a/aws_lambda_powertools/tracing/base.py
+++ b/aws_lambda_powertools/tracing/base.py
@@ -2,7 +2,7 @@
import numbers
import traceback
from contextlib import contextmanager
-from typing import Any, Generator, List, NoReturn, Optional, Sequence, Union
+from typing import Any, Generator, List, Optional, Sequence, Union
class BaseSegment(abc.ABC):
@@ -28,7 +28,7 @@
"""Remove input subsegment from child subsegments."""
@abc.abstractmethod
- def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> NoReturn:
+ def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> None:
"""Annotate segment or subsegment with a key-value pair.
Note: Annotations will be indexed for later search query.
@@ -42,7 +42,7 @@
"""
@abc.abstractmethod
- def put_metadata(self, key: str, value: Any, namespace: str = "default") -> NoReturn:
+ def put_metadata(self, key: str, value: Any, namespace: str = "default") -> None:
"""Add metadata to segment or subsegment. Metadata is not indexed
but can be later retrieved by BatchGetTraces API.
@@ -101,7 +101,7 @@
"""
@abc.abstractmethod
- def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> NoReturn:
+ def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> None:
"""Annotate current active trace entity with a key-value pair.
Note: Annotations will be indexed for later search query.
@@ -115,7 +115,7 @@
"""
@abc.abstractmethod
- def put_metadata(self, key: str, value: Any, namespace: str = "default") -> NoReturn:
+ def put_metadata(self, key: str, value: Any, namespace: str = "default") -> None:
"""Add metadata to the current active trace entity.
Note: Metadata is not indexed but can be later retrieved by BatchGetTraces API.
@@ -131,7 +131,7 @@
"""
@abc.abstractmethod
- def patch(self, modules: Sequence[str]) -> NoReturn:
+ def patch(self, modules: Sequence[str]) -> None:
"""Instrument a set of supported libraries
Parameters
@@ -141,5 +141,5 @@
"""
@abc.abstractmethod
- def patch_all(self) -> NoReturn:
+ def patch_all(self) -> None:
"""Instrument all supported libraries"""
|
{"golden_diff": "diff --git a/aws_lambda_powertools/tracing/base.py b/aws_lambda_powertools/tracing/base.py\n--- a/aws_lambda_powertools/tracing/base.py\n+++ b/aws_lambda_powertools/tracing/base.py\n@@ -2,7 +2,7 @@\n import numbers\n import traceback\n from contextlib import contextmanager\n-from typing import Any, Generator, List, NoReturn, Optional, Sequence, Union\n+from typing import Any, Generator, List, Optional, Sequence, Union\n \n \n class BaseSegment(abc.ABC):\n@@ -28,7 +28,7 @@\n \"\"\"Remove input subsegment from child subsegments.\"\"\"\n \n @abc.abstractmethod\n- def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> NoReturn:\n+ def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> None:\n \"\"\"Annotate segment or subsegment with a key-value pair.\n \n Note: Annotations will be indexed for later search query.\n@@ -42,7 +42,7 @@\n \"\"\"\n \n @abc.abstractmethod\n- def put_metadata(self, key: str, value: Any, namespace: str = \"default\") -> NoReturn:\n+ def put_metadata(self, key: str, value: Any, namespace: str = \"default\") -> None:\n \"\"\"Add metadata to segment or subsegment. Metadata is not indexed\n but can be later retrieved by BatchGetTraces API.\n \n@@ -101,7 +101,7 @@\n \"\"\"\n \n @abc.abstractmethod\n- def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> NoReturn:\n+ def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> None:\n \"\"\"Annotate current active trace entity with a key-value pair.\n \n Note: Annotations will be indexed for later search query.\n@@ -115,7 +115,7 @@\n \"\"\"\n \n @abc.abstractmethod\n- def put_metadata(self, key: str, value: Any, namespace: str = \"default\") -> NoReturn:\n+ def put_metadata(self, key: str, value: Any, namespace: str = \"default\") -> None:\n \"\"\"Add metadata to the current active trace entity.\n \n Note: Metadata is not indexed but can be later retrieved by BatchGetTraces API.\n@@ -131,7 +131,7 @@\n \"\"\"\n \n @abc.abstractmethod\n- def patch(self, modules: Sequence[str]) -> NoReturn:\n+ def patch(self, modules: Sequence[str]) -> None:\n \"\"\"Instrument a set of supported libraries\n \n Parameters\n@@ -141,5 +141,5 @@\n \"\"\"\n \n @abc.abstractmethod\n- def patch_all(self) -> NoReturn:\n+ def patch_all(self) -> None:\n \"\"\"Instrument all supported libraries\"\"\"\n", "issue": "`BaseSegment` defines `put_annotation` as having return type `NoReturn`\n### Expected Behaviour\r\n\r\nThe return type should be `None`. `NoReturn` means that the function never returns (e.g. it always throws an exception) and makes type checkers (e.g. mypy) mark all code following `put_annotation` as unreachable.\r\n\r\nAlso applies to `put_metadata`, `patch`, and `patch_all`.\r\n\r\n\r\n### Current Behaviour\r\n\r\nThese methods breaks type checking in any modules they're used because all code following a call to e.g. 
`put_annotation` is marked as unreachable.\r\n\r\n### Code snippet\r\n\r\n```python\r\nwith Tracer().provider.in_subsegment('test') as subsegment:\r\n subsegment.put_metadata('x', 1)\r\n print(\"hi\") # marked as unreachable\r\n```\r\n\r\n\r\n### Possible Solution\r\n\r\nChange the return type to `None`.\r\n\r\n### Steps to Reproduce\r\n\r\nUse the code snippet above and run `mypy` on the file.\r\n\r\n### AWS Lambda Powertools for Python version\r\n\r\nlatest\r\n\r\n### AWS Lambda function runtime\r\n\r\n3.9\r\n\r\n### Packaging format used\r\n\r\nPyPi\r\n\r\n### Debugging logs\r\n\r\n_No response_\n", "before_files": [{"content": "import abc\nimport numbers\nimport traceback\nfrom contextlib import contextmanager\nfrom typing import Any, Generator, List, NoReturn, Optional, Sequence, Union\n\n\nclass BaseSegment(abc.ABC):\n \"\"\"Holds common properties and methods on segment and subsegment.\"\"\"\n\n @abc.abstractmethod\n def close(self, end_time: Optional[int] = None):\n \"\"\"Close the trace entity by setting `end_time`\n and flip the in progress flag to False.\n\n Parameters\n ----------\n end_time: int\n Time in epoch seconds, by default current time will be used.\n \"\"\"\n\n @abc.abstractmethod\n def add_subsegment(self, subsegment: Any):\n \"\"\"Add input subsegment as a child subsegment.\"\"\"\n\n @abc.abstractmethod\n def remove_subsegment(self, subsegment: Any):\n \"\"\"Remove input subsegment from child subsegments.\"\"\"\n\n @abc.abstractmethod\n def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> NoReturn:\n \"\"\"Annotate segment or subsegment with a key-value pair.\n\n Note: Annotations will be indexed for later search query.\n\n Parameters\n ----------\n key: str\n Metadata key\n value: Union[str, numbers.Number, bool]\n Annotation value\n \"\"\"\n\n @abc.abstractmethod\n def put_metadata(self, key: str, value: Any, namespace: str = \"default\") -> NoReturn:\n \"\"\"Add metadata to segment or subsegment. 
Metadata is not indexed\n but can be later retrieved by BatchGetTraces API.\n\n Parameters\n ----------\n key: str\n Metadata key\n value: Any\n Any object that can be serialized into a JSON string\n namespace: Set[str]\n Metadata namespace, by default 'default'\n \"\"\"\n\n @abc.abstractmethod\n def add_exception(self, exception: BaseException, stack: List[traceback.StackSummary], remote: bool = False):\n \"\"\"Add an exception to trace entities.\n\n Parameters\n ----------\n exception: Exception\n Caught exception\n stack: List[traceback.StackSummary]\n List of traceback summaries\n\n Output from `traceback.extract_stack()`.\n remote: bool\n Whether it's a client error (False) or downstream service error (True), by default False\n \"\"\"\n\n\nclass BaseProvider(abc.ABC):\n @abc.abstractmethod\n @contextmanager\n def in_subsegment(self, name=None, **kwargs) -> Generator[BaseSegment, None, None]:\n \"\"\"Return a subsegment context manger.\n\n Parameters\n ----------\n name: str\n Subsegment name\n kwargs: Optional[dict]\n Optional parameters to be propagated to segment\n \"\"\"\n\n @abc.abstractmethod\n @contextmanager\n def in_subsegment_async(self, name=None, **kwargs) -> Generator[BaseSegment, None, None]:\n \"\"\"Return a subsegment async context manger.\n\n Parameters\n ----------\n name: str\n Subsegment name\n kwargs: Optional[dict]\n Optional parameters to be propagated to segment\n \"\"\"\n\n @abc.abstractmethod\n def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> NoReturn:\n \"\"\"Annotate current active trace entity with a key-value pair.\n\n Note: Annotations will be indexed for later search query.\n\n Parameters\n ----------\n key: str\n Metadata key\n value: Union[str, numbers.Number, bool]\n Annotation value\n \"\"\"\n\n @abc.abstractmethod\n def put_metadata(self, key: str, value: Any, namespace: str = \"default\") -> NoReturn:\n \"\"\"Add metadata to the current active trace entity.\n\n Note: Metadata is not indexed but can be later retrieved by BatchGetTraces API.\n\n Parameters\n ----------\n key: str\n Metadata key\n value: Any\n Any object that can be serialized into a JSON string\n namespace: Set[str]\n Metadata namespace, by default 'default'\n \"\"\"\n\n @abc.abstractmethod\n def patch(self, modules: Sequence[str]) -> NoReturn:\n \"\"\"Instrument a set of supported libraries\n\n Parameters\n ----------\n modules: Set[str]\n Set of modules to be patched\n \"\"\"\n\n @abc.abstractmethod\n def patch_all(self) -> NoReturn:\n \"\"\"Instrument all supported libraries\"\"\"\n", "path": "aws_lambda_powertools/tracing/base.py"}], "after_files": [{"content": "import abc\nimport numbers\nimport traceback\nfrom contextlib import contextmanager\nfrom typing import Any, Generator, List, Optional, Sequence, Union\n\n\nclass BaseSegment(abc.ABC):\n \"\"\"Holds common properties and methods on segment and subsegment.\"\"\"\n\n @abc.abstractmethod\n def close(self, end_time: Optional[int] = None):\n \"\"\"Close the trace entity by setting `end_time`\n and flip the in progress flag to False.\n\n Parameters\n ----------\n end_time: int\n Time in epoch seconds, by default current time will be used.\n \"\"\"\n\n @abc.abstractmethod\n def add_subsegment(self, subsegment: Any):\n \"\"\"Add input subsegment as a child subsegment.\"\"\"\n\n @abc.abstractmethod\n def remove_subsegment(self, subsegment: Any):\n \"\"\"Remove input subsegment from child subsegments.\"\"\"\n\n @abc.abstractmethod\n def put_annotation(self, key: str, value: Union[str, numbers.Number, 
bool]) -> None:\n \"\"\"Annotate segment or subsegment with a key-value pair.\n\n Note: Annotations will be indexed for later search query.\n\n Parameters\n ----------\n key: str\n Metadata key\n value: Union[str, numbers.Number, bool]\n Annotation value\n \"\"\"\n\n @abc.abstractmethod\n def put_metadata(self, key: str, value: Any, namespace: str = \"default\") -> None:\n \"\"\"Add metadata to segment or subsegment. Metadata is not indexed\n but can be later retrieved by BatchGetTraces API.\n\n Parameters\n ----------\n key: str\n Metadata key\n value: Any\n Any object that can be serialized into a JSON string\n namespace: Set[str]\n Metadata namespace, by default 'default'\n \"\"\"\n\n @abc.abstractmethod\n def add_exception(self, exception: BaseException, stack: List[traceback.StackSummary], remote: bool = False):\n \"\"\"Add an exception to trace entities.\n\n Parameters\n ----------\n exception: Exception\n Caught exception\n stack: List[traceback.StackSummary]\n List of traceback summaries\n\n Output from `traceback.extract_stack()`.\n remote: bool\n Whether it's a client error (False) or downstream service error (True), by default False\n \"\"\"\n\n\nclass BaseProvider(abc.ABC):\n @abc.abstractmethod\n @contextmanager\n def in_subsegment(self, name=None, **kwargs) -> Generator[BaseSegment, None, None]:\n \"\"\"Return a subsegment context manger.\n\n Parameters\n ----------\n name: str\n Subsegment name\n kwargs: Optional[dict]\n Optional parameters to be propagated to segment\n \"\"\"\n\n @abc.abstractmethod\n @contextmanager\n def in_subsegment_async(self, name=None, **kwargs) -> Generator[BaseSegment, None, None]:\n \"\"\"Return a subsegment async context manger.\n\n Parameters\n ----------\n name: str\n Subsegment name\n kwargs: Optional[dict]\n Optional parameters to be propagated to segment\n \"\"\"\n\n @abc.abstractmethod\n def put_annotation(self, key: str, value: Union[str, numbers.Number, bool]) -> None:\n \"\"\"Annotate current active trace entity with a key-value pair.\n\n Note: Annotations will be indexed for later search query.\n\n Parameters\n ----------\n key: str\n Metadata key\n value: Union[str, numbers.Number, bool]\n Annotation value\n \"\"\"\n\n @abc.abstractmethod\n def put_metadata(self, key: str, value: Any, namespace: str = \"default\") -> None:\n \"\"\"Add metadata to the current active trace entity.\n\n Note: Metadata is not indexed but can be later retrieved by BatchGetTraces API.\n\n Parameters\n ----------\n key: str\n Metadata key\n value: Any\n Any object that can be serialized into a JSON string\n namespace: Set[str]\n Metadata namespace, by default 'default'\n \"\"\"\n\n @abc.abstractmethod\n def patch(self, modules: Sequence[str]) -> None:\n \"\"\"Instrument a set of supported libraries\n\n Parameters\n ----------\n modules: Set[str]\n Set of modules to be patched\n \"\"\"\n\n @abc.abstractmethod\n def patch_all(self) -> None:\n \"\"\"Instrument all supported libraries\"\"\"\n", "path": "aws_lambda_powertools/tracing/base.py"}]}
| 1,781 | 638 |
gh_patches_debug_18192
|
rasdani/github-patches
|
git_diff
|
qtile__qtile-2748
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CryptoTicker migration is incomplete
I previously had:
```
widget.CryptoTicker(format="BTC: {avg}", **widget_defaults)
```
but it seems avg is not an allowed key any more per:
```
2021-09-01 14:42:17,968 ERROR libqtile generic_poll_text.py:poll():L90 got exception polling widget
Traceback (most recent call last):
File "/home/tycho/.local/lib/python3.9/site-packages/libqtile/widget/generic_poll_text.py", line 88, in poll
text = self.parse(body)
File "/home/tycho/.local/lib/python3.9/site-packages/libqtile/widget/crypto_ticker.py", line 90, in parse
return self.format.format(**variables)
```
/cc @Graeme22
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libqtile/scripts/migrate.py`
Content:
```
1 # Copyright (c) 2021, Tycho Andersen. All rights reserved.
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to deal
5 # in the Software without restriction, including without limitation the rights
6 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
7 # copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
18 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
19 import filecmp
20 import os
21 import os.path
22 import shutil
23 import sys
24 from glob import glob
25
26 BACKUP_SUFFIX = ".migrate.bak"
27
28 try:
29 import bowler
30 except ImportError:
31 pass
32
33
34 def rename_hook(query, fro, to):
35 # could match on dotted_name< 'hook' '.' 'subscribe' '.' '{name}' >
36 # but the replacement gets more complicated...
37 selector = "'{name}'".format(name=fro)
38 q = query.select_pattern(selector)
39 q.current.kwargs["name"] = fro
40 return q.rename(to)
41
42
43 def client_name_updated(query):
44 """ Rename window_name_change -> client_name_updated"""
45 return rename_hook(query, "window_name_change", "client_name_updated")
46
47
48 def tile_master_windows_rename(query):
49 return (
50 query
51 .select_function("Tile")
52 .modify_argument("masterWindows", "master_length")
53 )
54
55
56 def threaded_poll_text_rename(query):
57 return (
58 query
59 .select_class("ThreadedPollText")
60 .rename("ThreadPoolText")
61 )
62
63
64 def pacman_to_checkupdates(query):
65 return (
66 query
67 .select_class("Pacman")
68 .rename("CheckUpdates")
69 )
70
71
72 def bitcoin_to_crypto(query):
73 return (
74 query
75 .select_class("BitcoinTicker")
76 .rename("CryptoTicker")
77 )
78
79
80 def hook_main_function(query):
81 def modify_main(node, capture, filename):
82 main = capture.get("function_def")
83 if main.prev_sibling:
84 for leaf in main.prev_sibling.leaves():
85 if "startup" == leaf.value:
86 return
87 args = capture.get("function_arguments")
88 if args:
89 args[0].remove()
90 main.prefix += "from libqtile import hook, qtile\n"
91 main.prefix += "@hook.subscribe.startup\n"
92
93 return (
94 query
95 .select_function("main")
96 .is_def()
97 .modify(modify_main)
98 )
99
100
101 # Deprecated new_at_current key replaced by new_client_position.
102 # In the node, we want to change the key name
103 # and adapts its value depending of the previous value :
104 # new_at_current=True => new_client_position=before_current
105 # new_at_current<>True => new_client_position=after_current
106 def update_node_nac(node, capture, filename):
107 key = capture.get("k")
108 key.value = "new_client_position"
109 val = capture.get("v")
110 if val.value == "True":
111 val.value = "'before_current'"
112 else:
113 val.value = "'after_current'"
114
115
116 def new_at_current_to_new_client_position(query):
117 old_pattern = """
118 argument< k="new_at_current" "=" v=any >
119 """
120 return (
121 query
122 .select(old_pattern)
123 .modify(update_node_nac)
124 )
125
126
127 MIGRATIONS = [
128 client_name_updated,
129 tile_master_windows_rename,
130 threaded_poll_text_rename,
131 pacman_to_checkupdates,
132 bitcoin_to_crypto,
133 hook_main_function,
134 new_at_current_to_new_client_position,
135 ]
136
137
138 MODULE_RENAMES = [
139 ("libqtile.command_graph", "libqtile.command.graph"),
140 ("libqtile.command_client", "libqtile.command.client"),
141 ("libqtile.command_interface", "libqtile.command.interface"),
142 ("libqtile.command_object", "libqtile.command.base"),
143 ("libqtile.window", "libqtile.backend.x11.window"),
144 ]
145
146 for (fro, to) in MODULE_RENAMES:
147 def f(query, fro=fro, to=to):
148 return (
149 query
150 .select_module(fro)
151 .rename(to)
152 )
153 MIGRATIONS.append(f)
154
155
156 def file_and_backup(config_dir):
157 for py in glob(os.path.join(config_dir, "*.py")):
158 backup = py + BACKUP_SUFFIX
159 yield py, backup
160
161
162 def do_migrate(args):
163 if "bowler" not in sys.modules:
164 print("bowler can't be found, not migrating config file")
165 print("install it and try again")
166 sys.exit(1)
167
168 config_dir = os.path.dirname(args.config)
169 for py, backup in file_and_backup(config_dir):
170 shutil.copyfile(py, backup)
171
172 for m in MIGRATIONS:
173 q = bowler.Query(config_dir)
174 m(q).execute(interactive=not args.yes, write=True)
175
176 changed = False
177 for py, backup in file_and_backup(config_dir):
178 backup = py + BACKUP_SUFFIX
179 if not filecmp.cmp(py, backup, shallow=False):
180 changed = True
181 break
182
183 if not changed:
184 print("Config unchanged.")
185 for _, backup in file_and_backup(config_dir):
186 os.remove(backup)
187
188
189 def add_subcommand(subparsers, parents):
190 parser = subparsers.add_parser(
191 "migrate",
192 parents=parents,
193 help="Migrate a configuration file to the current API"
194 )
195 parser.add_argument(
196 "-c",
197 "--config",
198 action="store",
199 default=os.path.expanduser(
200 os.path.join(os.getenv("XDG_CONFIG_HOME", "~/.config"), "qtile", "config.py")
201 ),
202 help="Use the specified configuration file (migrates every .py file in this directory)",
203 )
204 parser.add_argument(
205 "--yes",
206 action="store_true",
207 help="Automatically apply diffs with no confirmation",
208 )
209 parser.set_defaults(func=do_migrate)
210
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/libqtile/scripts/migrate.py b/libqtile/scripts/migrate.py
--- a/libqtile/scripts/migrate.py
+++ b/libqtile/scripts/migrate.py
@@ -69,10 +69,32 @@
)
+def reset_format(node, capture, filename):
+ args = capture.get("class_arguments")
+ if args:
+ if args[0].type == 260: # argument list
+ n_children = len(args[0].children)
+ for i in range(n_children):
+ # we only want to remove the format argument
+ if 'format' in str(args[0].children[i]):
+ # remove the argument and the trailing or preceeding comma
+ if i == n_children - 1: # last argument
+ args[0].children[i - 1].remove()
+ args[0].children[i - 1].remove()
+ else:
+ args[0].children[i].remove()
+ args[0].children[i].remove()
+
+ break
+ else: # there's only one argument
+ args[0].remove()
+
+
def bitcoin_to_crypto(query):
return (
query
.select_class("BitcoinTicker")
+ .modify(reset_format)
.rename("CryptoTicker")
)
|
{"golden_diff": "diff --git a/libqtile/scripts/migrate.py b/libqtile/scripts/migrate.py\n--- a/libqtile/scripts/migrate.py\n+++ b/libqtile/scripts/migrate.py\n@@ -69,10 +69,32 @@\n )\n \n \n+def reset_format(node, capture, filename):\n+ args = capture.get(\"class_arguments\")\n+ if args:\n+ if args[0].type == 260: # argument list\n+ n_children = len(args[0].children)\n+ for i in range(n_children):\n+ # we only want to remove the format argument\n+ if 'format' in str(args[0].children[i]):\n+ # remove the argument and the trailing or preceeding comma\n+ if i == n_children - 1: # last argument\n+ args[0].children[i - 1].remove()\n+ args[0].children[i - 1].remove()\n+ else:\n+ args[0].children[i].remove()\n+ args[0].children[i].remove()\n+\n+ break\n+ else: # there's only one argument\n+ args[0].remove()\n+\n+\n def bitcoin_to_crypto(query):\n return (\n query\n .select_class(\"BitcoinTicker\")\n+ .modify(reset_format)\n .rename(\"CryptoTicker\")\n )\n", "issue": "CryptoTicker migration is incomplete\nI previously had:\r\n\r\n```\r\nwidget.CryptoTicker(format=\"BTC: {avg}\", **widget_defaults)\r\n```\r\n\r\nbut it seems avg is not an allowed key any more per:\r\n\r\n```\r\n2021-09-01 14:42:17,968 ERROR libqtile generic_poll_text.py:poll():L90 got exception polling widget\r\nTraceback (most recent call last):\r\n File \"/home/tycho/.local/lib/python3.9/site-packages/libqtile/widget/generic_poll_text.py\", line 88, in poll\r\n text = self.parse(body)\r\n File \"/home/tycho/.local/lib/python3.9/site-packages/libqtile/widget/crypto_ticker.py\", line 90, in parse\r\n return self.format.format(**variables)\r\n```\r\n\r\n/cc @Graeme22 \nCryptoTicker migration is incomplete\nI previously had:\r\n\r\n```\r\nwidget.CryptoTicker(format=\"BTC: {avg}\", **widget_defaults)\r\n```\r\n\r\nbut it seems avg is not an allowed key any more per:\r\n\r\n```\r\n2021-09-01 14:42:17,968 ERROR libqtile generic_poll_text.py:poll():L90 got exception polling widget\r\nTraceback (most recent call last):\r\n File \"/home/tycho/.local/lib/python3.9/site-packages/libqtile/widget/generic_poll_text.py\", line 88, in poll\r\n text = self.parse(body)\r\n File \"/home/tycho/.local/lib/python3.9/site-packages/libqtile/widget/crypto_ticker.py\", line 90, in parse\r\n return self.format.format(**variables)\r\n```\r\n\r\n/cc @Graeme22 \n", "before_files": [{"content": "# Copyright (c) 2021, Tycho Andersen. All rights reserved.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nimport filecmp\nimport os\nimport os.path\nimport shutil\nimport sys\nfrom glob import glob\n\nBACKUP_SUFFIX = \".migrate.bak\"\n\ntry:\n import bowler\nexcept ImportError:\n pass\n\n\ndef rename_hook(query, fro, to):\n # could match on dotted_name< 'hook' '.' 'subscribe' '.' '{name}' >\n # but the replacement gets more complicated...\n selector = \"'{name}'\".format(name=fro)\n q = query.select_pattern(selector)\n q.current.kwargs[\"name\"] = fro\n return q.rename(to)\n\n\ndef client_name_updated(query):\n \"\"\" Rename window_name_change -> client_name_updated\"\"\"\n return rename_hook(query, \"window_name_change\", \"client_name_updated\")\n\n\ndef tile_master_windows_rename(query):\n return (\n query\n .select_function(\"Tile\")\n .modify_argument(\"masterWindows\", \"master_length\")\n )\n\n\ndef threaded_poll_text_rename(query):\n return (\n query\n .select_class(\"ThreadedPollText\")\n .rename(\"ThreadPoolText\")\n )\n\n\ndef pacman_to_checkupdates(query):\n return (\n query\n .select_class(\"Pacman\")\n .rename(\"CheckUpdates\")\n )\n\n\ndef bitcoin_to_crypto(query):\n return (\n query\n .select_class(\"BitcoinTicker\")\n .rename(\"CryptoTicker\")\n )\n\n\ndef hook_main_function(query):\n def modify_main(node, capture, filename):\n main = capture.get(\"function_def\")\n if main.prev_sibling:\n for leaf in main.prev_sibling.leaves():\n if \"startup\" == leaf.value:\n return\n args = capture.get(\"function_arguments\")\n if args:\n args[0].remove()\n main.prefix += \"from libqtile import hook, qtile\\n\"\n main.prefix += \"@hook.subscribe.startup\\n\"\n\n return (\n query\n .select_function(\"main\")\n .is_def()\n .modify(modify_main)\n )\n\n\n# Deprecated new_at_current key replaced by new_client_position.\n# In the node, we want to change the key name\n# and adapts its value depending of the previous value :\n# new_at_current=True => new_client_position=before_current\n# new_at_current<>True => new_client_position=after_current\ndef update_node_nac(node, capture, filename):\n key = capture.get(\"k\")\n key.value = \"new_client_position\"\n val = capture.get(\"v\")\n if val.value == \"True\":\n val.value = \"'before_current'\"\n else:\n val.value = \"'after_current'\"\n\n\ndef new_at_current_to_new_client_position(query):\n old_pattern = \"\"\"\n argument< k=\"new_at_current\" \"=\" v=any >\n \"\"\"\n return (\n query\n .select(old_pattern)\n .modify(update_node_nac)\n )\n\n\nMIGRATIONS = [\n client_name_updated,\n tile_master_windows_rename,\n threaded_poll_text_rename,\n pacman_to_checkupdates,\n bitcoin_to_crypto,\n hook_main_function,\n new_at_current_to_new_client_position,\n]\n\n\nMODULE_RENAMES = [\n (\"libqtile.command_graph\", \"libqtile.command.graph\"),\n (\"libqtile.command_client\", \"libqtile.command.client\"),\n (\"libqtile.command_interface\", \"libqtile.command.interface\"),\n (\"libqtile.command_object\", \"libqtile.command.base\"),\n (\"libqtile.window\", \"libqtile.backend.x11.window\"),\n]\n\nfor (fro, to) in MODULE_RENAMES:\n def f(query, fro=fro, to=to):\n return (\n query\n .select_module(fro)\n .rename(to)\n )\n MIGRATIONS.append(f)\n\n\ndef file_and_backup(config_dir):\n for py in glob(os.path.join(config_dir, \"*.py\")):\n backup = py + BACKUP_SUFFIX\n yield py, backup\n\n\ndef do_migrate(args):\n if 
\"bowler\" not in sys.modules:\n print(\"bowler can't be found, not migrating config file\")\n print(\"install it and try again\")\n sys.exit(1)\n\n config_dir = os.path.dirname(args.config)\n for py, backup in file_and_backup(config_dir):\n shutil.copyfile(py, backup)\n\n for m in MIGRATIONS:\n q = bowler.Query(config_dir)\n m(q).execute(interactive=not args.yes, write=True)\n\n changed = False\n for py, backup in file_and_backup(config_dir):\n backup = py + BACKUP_SUFFIX\n if not filecmp.cmp(py, backup, shallow=False):\n changed = True\n break\n\n if not changed:\n print(\"Config unchanged.\")\n for _, backup in file_and_backup(config_dir):\n os.remove(backup)\n\n\ndef add_subcommand(subparsers, parents):\n parser = subparsers.add_parser(\n \"migrate\",\n parents=parents,\n help=\"Migrate a configuration file to the current API\"\n )\n parser.add_argument(\n \"-c\",\n \"--config\",\n action=\"store\",\n default=os.path.expanduser(\n os.path.join(os.getenv(\"XDG_CONFIG_HOME\", \"~/.config\"), \"qtile\", \"config.py\")\n ),\n help=\"Use the specified configuration file (migrates every .py file in this directory)\",\n )\n parser.add_argument(\n \"--yes\",\n action=\"store_true\",\n help=\"Automatically apply diffs with no confirmation\",\n )\n parser.set_defaults(func=do_migrate)\n", "path": "libqtile/scripts/migrate.py"}], "after_files": [{"content": "# Copyright (c) 2021, Tycho Andersen. All rights reserved.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nimport filecmp\nimport os\nimport os.path\nimport shutil\nimport sys\nfrom glob import glob\n\nBACKUP_SUFFIX = \".migrate.bak\"\n\ntry:\n import bowler\nexcept ImportError:\n pass\n\n\ndef rename_hook(query, fro, to):\n # could match on dotted_name< 'hook' '.' 'subscribe' '.' 
'{name}' >\n # but the replacement gets more complicated...\n selector = \"'{name}'\".format(name=fro)\n q = query.select_pattern(selector)\n q.current.kwargs[\"name\"] = fro\n return q.rename(to)\n\n\ndef client_name_updated(query):\n \"\"\" Rename window_name_change -> client_name_updated\"\"\"\n return rename_hook(query, \"window_name_change\", \"client_name_updated\")\n\n\ndef tile_master_windows_rename(query):\n return (\n query\n .select_function(\"Tile\")\n .modify_argument(\"masterWindows\", \"master_length\")\n )\n\n\ndef threaded_poll_text_rename(query):\n return (\n query\n .select_class(\"ThreadedPollText\")\n .rename(\"ThreadPoolText\")\n )\n\n\ndef pacman_to_checkupdates(query):\n return (\n query\n .select_class(\"Pacman\")\n .rename(\"CheckUpdates\")\n )\n\n\ndef reset_format(node, capture, filename):\n args = capture.get(\"class_arguments\")\n if args:\n if args[0].type == 260: # argument list\n n_children = len(args[0].children)\n for i in range(n_children):\n # we only want to remove the format argument\n if 'format' in str(args[0].children[i]):\n # remove the argument and the trailing or preceeding comma\n if i == n_children - 1: # last argument\n args[0].children[i - 1].remove()\n args[0].children[i - 1].remove()\n else:\n args[0].children[i].remove()\n args[0].children[i].remove()\n\n break\n else: # there's only one argument\n args[0].remove()\n\n\ndef bitcoin_to_crypto(query):\n return (\n query\n .select_class(\"BitcoinTicker\")\n .modify(reset_format)\n .rename(\"CryptoTicker\")\n )\n\n\ndef hook_main_function(query):\n def modify_main(node, capture, filename):\n main = capture.get(\"function_def\")\n if main.prev_sibling:\n for leaf in main.prev_sibling.leaves():\n if \"startup\" == leaf.value:\n return\n args = capture.get(\"function_arguments\")\n if args:\n args[0].remove()\n main.prefix += \"from libqtile import hook, qtile\\n\"\n main.prefix += \"@hook.subscribe.startup\\n\"\n\n return (\n query\n .select_function(\"main\")\n .is_def()\n .modify(modify_main)\n )\n\n\n# Deprecated new_at_current key replaced by new_client_position.\n# In the node, we want to change the key name\n# and adapts its value depending of the previous value :\n# new_at_current=True => new_client_position=before_current\n# new_at_current<>True => new_client_position=after_current\ndef update_node_nac(node, capture, filename):\n key = capture.get(\"k\")\n key.value = \"new_client_position\"\n val = capture.get(\"v\")\n if val.value == \"True\":\n val.value = \"'before_current'\"\n else:\n val.value = \"'after_current'\"\n\n\ndef new_at_current_to_new_client_position(query):\n old_pattern = \"\"\"\n argument< k=\"new_at_current\" \"=\" v=any >\n \"\"\"\n return (\n query\n .select(old_pattern)\n .modify(update_node_nac)\n )\n\n\nMIGRATIONS = [\n client_name_updated,\n tile_master_windows_rename,\n threaded_poll_text_rename,\n pacman_to_checkupdates,\n bitcoin_to_crypto,\n hook_main_function,\n new_at_current_to_new_client_position,\n]\n\n\nMODULE_RENAMES = [\n (\"libqtile.command_graph\", \"libqtile.command.graph\"),\n (\"libqtile.command_client\", \"libqtile.command.client\"),\n (\"libqtile.command_interface\", \"libqtile.command.interface\"),\n (\"libqtile.command_object\", \"libqtile.command.base\"),\n (\"libqtile.window\", \"libqtile.backend.x11.window\"),\n]\n\nfor (fro, to) in MODULE_RENAMES:\n def f(query, fro=fro, to=to):\n return (\n query\n .select_module(fro)\n .rename(to)\n )\n MIGRATIONS.append(f)\n\n\ndef file_and_backup(config_dir):\n for py in 
glob(os.path.join(config_dir, \"*.py\")):\n backup = py + BACKUP_SUFFIX\n yield py, backup\n\n\ndef do_migrate(args):\n if \"bowler\" not in sys.modules:\n print(\"bowler can't be found, not migrating config file\")\n print(\"install it and try again\")\n sys.exit(1)\n\n config_dir = os.path.dirname(args.config)\n for py, backup in file_and_backup(config_dir):\n shutil.copyfile(py, backup)\n\n for m in MIGRATIONS:\n q = bowler.Query(config_dir)\n m(q).execute(interactive=not args.yes, write=True)\n\n changed = False\n for py, backup in file_and_backup(config_dir):\n backup = py + BACKUP_SUFFIX\n if not filecmp.cmp(py, backup, shallow=False):\n changed = True\n break\n\n if not changed:\n print(\"Config unchanged.\")\n for _, backup in file_and_backup(config_dir):\n os.remove(backup)\n\n\ndef add_subcommand(subparsers, parents):\n parser = subparsers.add_parser(\n \"migrate\",\n parents=parents,\n help=\"Migrate a configuration file to the current API\"\n )\n parser.add_argument(\n \"-c\",\n \"--config\",\n action=\"store\",\n default=os.path.expanduser(\n os.path.join(os.getenv(\"XDG_CONFIG_HOME\", \"~/.config\"), \"qtile\", \"config.py\")\n ),\n help=\"Use the specified configuration file (migrates every .py file in this directory)\",\n )\n parser.add_argument(\n \"--yes\",\n action=\"store_true\",\n help=\"Automatically apply diffs with no confirmation\",\n )\n parser.set_defaults(func=do_migrate)\n", "path": "libqtile/scripts/migrate.py"}]}
| 2,578 | 298 |
gh_patches_debug_43024 | rasdani/github-patches | git_diff | comic__grand-challenge.org-2087 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Move `wait_for_build_completion` to acks late micro short queue and use delays
`wait_for_build_completion` uses a countdown that causes the same issues described in https://github.com/comic/grand-challenge.org/blob/ce130d451034377b0f3dfd83590268e48a66d518/app/grandchallenge/components/tasks.py#L255-L271.
This task is not resource-intensive so:
- If the task is not idempotent it should be made so
- It should be run on the acks-late-micro-short queue
- Retries should use the same strategy as https://github.com/comic/grand-challenge.org/blob/ce130d451034377b0f3dfd83590268e48a66d518/app/grandchallenge/components/tasks.py#L323-L327
--- END ISSUE ---
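
For illustration, a minimal sketch of the shape the issue is asking for — an idempotent polling task that hands re-scheduling to a retry helper rather than calling `self.retry(countdown=30)`. Everything here is assumed for the example: `fetch_build_status`, `record_final_status` and `schedule_retry` are placeholders (the last stands in for the `_retry` helper the issue links to), and the authoritative change is the one shown in the golden diff further down.
```python
from celery import shared_task

MAX_RETRIES = 120  # assumed cap, mirroring the old max_retries=120


@shared_task(acks_late=True)  # routed to the small acks-late queue via settings (assumed)
def poll_build_status(*, build_pk, retries=0):
    """Idempotent poll: running it twice for the same build_pk is harmless."""
    status = fetch_build_status(build_pk)  # hypothetical read-only lookup

    if status == "IN_PROGRESS":
        if retries >= MAX_RETRIES:
            raise RuntimeError(f"Build {build_pk} did not finish in time")
        # Hand the delay to a retry helper instead of re-queueing with a countdown,
        # which is exactly what the issue wants to avoid.
        schedule_retry(
            task=poll_build_status,
            signature_kwargs={"kwargs": {"build_pk": build_pk, "retries": retries + 1}},
        )
        return

    record_final_status(build_pk, status)  # hypothetical follow-up step
```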
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/codebuild/models.py`
Content:
```
1 import gzip
2 from tempfile import NamedTemporaryFile
3
4 import boto3
5 from django.conf import settings
6 from django.core import files
7 from django.db import models
8 from django.utils.translation import gettext_lazy as _
9
10 from grandchallenge.algorithms.models import AlgorithmImage
11 from grandchallenge.core.models import UUIDModel
12 from grandchallenge.core.storage import private_s3_storage
13 from grandchallenge.github.models import GitHubWebhookMessage
14
15
16 class BuildStatusChoices(models.TextChoices):
17 """From https://docs.aws.amazon.com/codebuild/latest/APIReference/API_Build.html"""
18
19 SUCCEEDED = "SUCCEEDED", _("Succeeded")
20 FAILED = "FAILED", _("Failed")
21 FAULT = "FAULT", _("Fault")
22 TIMED_OUT = "TIMED_OUT", _("Timed Out")
23 IN_PROGRESS = "IN_PROGRESS", _("In Progress")
24 STOPPED = "STOPPED", _("Stopped")
25
26
27 class Build(UUIDModel):
28 webhook_message = models.ForeignKey(
29 GitHubWebhookMessage, on_delete=models.SET_NULL, null=True
30 )
31 algorithm_image = models.OneToOneField(
32 AlgorithmImage, on_delete=models.SET_NULL, null=True
33 )
34 build_config = models.JSONField()
35 build_id = models.CharField(max_length=1024)
36 status = models.CharField(
37 choices=BuildStatusChoices.choices, max_length=11
38 )
39 build_log = models.TextField(blank=True)
40
41 BuildStatusChoices = BuildStatusChoices
42 __client = None
43
44 @property
45 def client(self):
46 if self.__client is None:
47 self.__client = boto3.client("codebuild")
48 return self.__client
49
50 @property
51 def build_number(self):
52 return self.build_id.split(":")[-1]
53
54 def refresh_status(self):
55 build_statuses = self.client.batch_get_builds(ids=[self.build_id])
56 self.status = build_statuses["builds"][0]["buildStatus"]
57
58 def refresh_logs(self):
59 try:
60 with private_s3_storage.open(
61 f"codebuild/logs/{self.build_number}.gz"
62 ) as file:
63 self.build_log = gzip.open(file).read().decode("utf-8")
64 except FileNotFoundError:
65 self.build_log = "Log file not available."
66
67 def add_image_to_algorithm(self):
68 with private_s3_storage.open(
69 f"codebuild/artifacts/{self.build_number}/{self.build_config['projectName']}/container-image.tar.gz"
70 ) as file:
71 with NamedTemporaryFile(delete=True) as tmp_file:
72 with open(tmp_file.name, "wb") as fd:
73 for chunk in file.chunks():
74 fd.write(chunk)
75
76 tmp_file.flush()
77 temp_file = files.File(
78 tmp_file, name=f"{str(self.pk)}.tar.gz",
79 )
80
81 self.algorithm_image.image = temp_file
82 self.algorithm_image.save()
83
84 def _create_build(self):
85 self.build_config = {
86 "projectName": settings.CODEBUILD_PROJECT_NAME,
87 "sourceLocationOverride": f"{settings.PRIVATE_S3_STORAGE_KWARGS['bucket_name']}/{self.webhook_message.zipfile.name}",
88 "sourceTypeOverride": "S3",
89 "environmentVariablesOverride": [
90 {
91 "name": "IMAGE_REPO_NAME",
92 "value": f"{AlgorithmImage._meta.app_label}/{AlgorithmImage._meta.model_name}",
93 },
94 {"name": "IMAGE_TAG", "value": str(self.algorithm_image.pk)},
95 ],
96 }
97
98 build_data = self.client.start_build(**self.build_config)
99
100 self.build_id = build_data["build"]["id"]
101 self.status = build_data["build"]["buildStatus"]
102
103 def save(self, *args, **kwargs):
104 if self._state.adding:
105 self._create_build()
106
107 super().save(*args, **kwargs)
108
109 @property
110 def animate(self):
111 return self.status == BuildStatusChoices.IN_PROGRESS
112
113 @property
114 def status_context(self):
115 if self.status == BuildStatusChoices.SUCCEEDED:
116 return "success"
117 elif self.status in {BuildStatusChoices.STOPPED}:
118 return "warning"
119 elif self.status in {
120 BuildStatusChoices.FAILED,
121 BuildStatusChoices.FAULT,
122 BuildStatusChoices.TIMED_OUT,
123 }:
124 return "danger"
125 elif self.status in {BuildStatusChoices.IN_PROGRESS}:
126 return "info"
127 else:
128 return "secondary"
129
130 class Meta:
131 indexes = [
132 models.Index(fields=["build_id"]),
133 ]
134
```
Path: `app/grandchallenge/codebuild/tasks.py`
Content:
```
1 from celery import shared_task
2 from django.apps import apps
3 from django.core.exceptions import ObjectDoesNotExist
4 from django.db.transaction import on_commit
5
6 from grandchallenge.algorithms.models import Algorithm, AlgorithmImage
7
8
9 @shared_task()
10 def create_codebuild_build(*, pk):
11 GitHubWebhookMessage = apps.get_model( # noqa: N806
12 app_label="github", model_name="GitHubWebhookMessage"
13 )
14 ghwm = GitHubWebhookMessage.objects.get(pk=pk)
15
16 Build = apps.get_model( # noqa: N806
17 app_label="codebuild", model_name="Build"
18 )
19
20 if Build.objects.filter(webhook_message=ghwm).exists():
21 # Build already exists
22 return
23
24 try:
25 algorithm = Algorithm.objects.get(
26 repo_name=ghwm.payload["repository"]["full_name"]
27 )
28 except ObjectDoesNotExist:
29 # Repository is not linked to algorithm
30 return
31
32 algorithm_image = AlgorithmImage.objects.create(
33 algorithm=algorithm,
34 requires_gpu=algorithm.image_requires_gpu,
35 requires_memory_gb=algorithm.image_requires_memory_gb,
36 )
37
38 build = Build.objects.create(
39 webhook_message=ghwm, algorithm_image=algorithm_image,
40 )
41
42 on_commit(
43 lambda: wait_for_build_completion.apply_async(
44 kwargs={"build_pk": str(build.pk)}
45 )
46 )
47
48
49 @shared_task(bind=True, max_retries=120)
50 def wait_for_build_completion(self, *, build_pk):
51 Build = apps.get_model( # noqa: N806
52 app_label="codebuild", model_name="Build"
53 )
54
55 build = Build.objects.get(pk=build_pk)
56 build.refresh_status()
57
58 if build.status == build.BuildStatusChoices.IN_PROGRESS:
59 self.retry(countdown=30)
60 else:
61 build.refresh_logs()
62 build.save()
63 if build.status == build.BuildStatusChoices.SUCCEEDED:
64 on_commit(
65 lambda: add_image_to_algorithm.apply_async(
66 kwargs={"build_pk": str(build_pk)}
67 )
68 )
69
70
71 @shared_task
72 def add_image_to_algorithm(*, build_pk):
73 Build = apps.get_model( # noqa: N806
74 app_label="codebuild", model_name="Build"
75 )
76
77 build = Build.objects.get(pk=build_pk)
78 build.add_image_to_algorithm()
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/app/grandchallenge/codebuild/models.py b/app/grandchallenge/codebuild/models.py
--- a/app/grandchallenge/codebuild/models.py
+++ b/app/grandchallenge/codebuild/models.py
@@ -65,6 +65,7 @@
self.build_log = "Log file not available."
def add_image_to_algorithm(self):
+ # TODO, this would be much faster using S3 copy, can then run on a smaller queue
with private_s3_storage.open(
f"codebuild/artifacts/{self.build_number}/{self.build_config['projectName']}/container-image.tar.gz"
) as file:
diff --git a/app/grandchallenge/codebuild/tasks.py b/app/grandchallenge/codebuild/tasks.py
--- a/app/grandchallenge/codebuild/tasks.py
+++ b/app/grandchallenge/codebuild/tasks.py
@@ -1,12 +1,15 @@
from celery import shared_task
from django.apps import apps
+from django.conf import settings
from django.core.exceptions import ObjectDoesNotExist
+from django.db import transaction
from django.db.transaction import on_commit
from grandchallenge.algorithms.models import Algorithm, AlgorithmImage
+from grandchallenge.components.tasks import _retry
-@shared_task()
+@shared_task(**settings.CELERY_TASK_DECORATOR_KWARGS["acks-late-micro-short"])
def create_codebuild_build(*, pk):
GitHubWebhookMessage = apps.get_model( # noqa: N806
app_label="github", model_name="GitHubWebhookMessage"
@@ -29,50 +32,59 @@
# Repository is not linked to algorithm
return
- algorithm_image = AlgorithmImage.objects.create(
- algorithm=algorithm,
- requires_gpu=algorithm.image_requires_gpu,
- requires_memory_gb=algorithm.image_requires_memory_gb,
- )
-
- build = Build.objects.create(
- webhook_message=ghwm, algorithm_image=algorithm_image,
- )
+ with transaction.atomic():
+ algorithm_image = AlgorithmImage.objects.create(
+ algorithm=algorithm,
+ requires_gpu=algorithm.image_requires_gpu,
+ requires_memory_gb=algorithm.image_requires_memory_gb,
+ )
+ build = Build.objects.create(
+ webhook_message=ghwm, algorithm_image=algorithm_image,
+ )
- on_commit(
- lambda: wait_for_build_completion.apply_async(
- kwargs={"build_pk": str(build.pk)}
+ on_commit(
+ lambda: wait_for_build_completion.apply_async(
+ kwargs={"build_pk": str(build.pk)}
+ )
)
- )
-@shared_task(bind=True, max_retries=120)
-def wait_for_build_completion(self, *, build_pk):
+@shared_task(**settings.CELERY_TASK_DECORATOR_KWARGS["acks-late-micro-short"])
+def wait_for_build_completion(*, build_pk, retries=0):
Build = apps.get_model( # noqa: N806
app_label="codebuild", model_name="Build"
)
build = Build.objects.get(pk=build_pk)
- build.refresh_status()
-
- if build.status == build.BuildStatusChoices.IN_PROGRESS:
- self.retry(countdown=30)
- else:
- build.refresh_logs()
- build.save()
- if build.status == build.BuildStatusChoices.SUCCEEDED:
- on_commit(
- lambda: add_image_to_algorithm.apply_async(
- kwargs={"build_pk": str(build_pk)}
- )
+
+ with transaction.atomic():
+ build.refresh_status()
+
+ if build.status == build.BuildStatusChoices.IN_PROGRESS:
+ _retry(
+ task=wait_for_build_completion,
+ signature_kwargs={"kwargs": {"build_pk": build_pk}},
+ retries=retries,
)
+ return
+ else:
+ build.refresh_logs()
+ build.save()
+ if build.status == build.BuildStatusChoices.SUCCEEDED:
+ on_commit(
+ lambda: add_image_to_algorithm.apply_async(
+ kwargs={"build_pk": str(build_pk)}
+ )
+ )
-@shared_task
+@shared_task(**settings.CELERY_TASK_DECORATOR_KWARGS["acks-late-2xlarge"])
def add_image_to_algorithm(*, build_pk):
Build = apps.get_model( # noqa: N806
app_label="codebuild", model_name="Build"
)
build = Build.objects.get(pk=build_pk)
- build.add_image_to_algorithm()
+
+ if not build.algorithm_image.image:
+ build.add_image_to_algorithm()
|
{"golden_diff": "diff --git a/app/grandchallenge/codebuild/models.py b/app/grandchallenge/codebuild/models.py\n--- a/app/grandchallenge/codebuild/models.py\n+++ b/app/grandchallenge/codebuild/models.py\n@@ -65,6 +65,7 @@\n self.build_log = \"Log file not available.\"\n \n def add_image_to_algorithm(self):\n+ # TODO, this would be much faster using S3 copy, can then run on a smaller queue\n with private_s3_storage.open(\n f\"codebuild/artifacts/{self.build_number}/{self.build_config['projectName']}/container-image.tar.gz\"\n ) as file:\ndiff --git a/app/grandchallenge/codebuild/tasks.py b/app/grandchallenge/codebuild/tasks.py\n--- a/app/grandchallenge/codebuild/tasks.py\n+++ b/app/grandchallenge/codebuild/tasks.py\n@@ -1,12 +1,15 @@\n from celery import shared_task\n from django.apps import apps\n+from django.conf import settings\n from django.core.exceptions import ObjectDoesNotExist\n+from django.db import transaction\n from django.db.transaction import on_commit\n \n from grandchallenge.algorithms.models import Algorithm, AlgorithmImage\n+from grandchallenge.components.tasks import _retry\n \n \n-@shared_task()\n+@shared_task(**settings.CELERY_TASK_DECORATOR_KWARGS[\"acks-late-micro-short\"])\n def create_codebuild_build(*, pk):\n GitHubWebhookMessage = apps.get_model( # noqa: N806\n app_label=\"github\", model_name=\"GitHubWebhookMessage\"\n@@ -29,50 +32,59 @@\n # Repository is not linked to algorithm\n return\n \n- algorithm_image = AlgorithmImage.objects.create(\n- algorithm=algorithm,\n- requires_gpu=algorithm.image_requires_gpu,\n- requires_memory_gb=algorithm.image_requires_memory_gb,\n- )\n-\n- build = Build.objects.create(\n- webhook_message=ghwm, algorithm_image=algorithm_image,\n- )\n+ with transaction.atomic():\n+ algorithm_image = AlgorithmImage.objects.create(\n+ algorithm=algorithm,\n+ requires_gpu=algorithm.image_requires_gpu,\n+ requires_memory_gb=algorithm.image_requires_memory_gb,\n+ )\n+ build = Build.objects.create(\n+ webhook_message=ghwm, algorithm_image=algorithm_image,\n+ )\n \n- on_commit(\n- lambda: wait_for_build_completion.apply_async(\n- kwargs={\"build_pk\": str(build.pk)}\n+ on_commit(\n+ lambda: wait_for_build_completion.apply_async(\n+ kwargs={\"build_pk\": str(build.pk)}\n+ )\n )\n- )\n \n \n-@shared_task(bind=True, max_retries=120)\n-def wait_for_build_completion(self, *, build_pk):\n+@shared_task(**settings.CELERY_TASK_DECORATOR_KWARGS[\"acks-late-micro-short\"])\n+def wait_for_build_completion(*, build_pk, retries=0):\n Build = apps.get_model( # noqa: N806\n app_label=\"codebuild\", model_name=\"Build\"\n )\n \n build = Build.objects.get(pk=build_pk)\n- build.refresh_status()\n-\n- if build.status == build.BuildStatusChoices.IN_PROGRESS:\n- self.retry(countdown=30)\n- else:\n- build.refresh_logs()\n- build.save()\n- if build.status == build.BuildStatusChoices.SUCCEEDED:\n- on_commit(\n- lambda: add_image_to_algorithm.apply_async(\n- kwargs={\"build_pk\": str(build_pk)}\n- )\n+\n+ with transaction.atomic():\n+ build.refresh_status()\n+\n+ if build.status == build.BuildStatusChoices.IN_PROGRESS:\n+ _retry(\n+ task=wait_for_build_completion,\n+ signature_kwargs={\"kwargs\": {\"build_pk\": build_pk}},\n+ retries=retries,\n )\n+ return\n+ else:\n+ build.refresh_logs()\n+ build.save()\n+ if build.status == build.BuildStatusChoices.SUCCEEDED:\n+ on_commit(\n+ lambda: add_image_to_algorithm.apply_async(\n+ kwargs={\"build_pk\": str(build_pk)}\n+ )\n+ )\n \n \n-@shared_task\n+@shared_task(**settings.CELERY_TASK_DECORATOR_KWARGS[\"acks-late-2xlarge\"])\n def 
add_image_to_algorithm(*, build_pk):\n Build = apps.get_model( # noqa: N806\n app_label=\"codebuild\", model_name=\"Build\"\n )\n \n build = Build.objects.get(pk=build_pk)\n- build.add_image_to_algorithm()\n+\n+ if not build.algorithm_image.image:\n+ build.add_image_to_algorithm()\n", "issue": "Move `wait_for_build_completion` to acks late micro short queue and use delays\n`wait_for_build_completion` uses a countdown that causes the same issues described in https://github.com/comic/grand-challenge.org/blob/ce130d451034377b0f3dfd83590268e48a66d518/app/grandchallenge/components/tasks.py#L255-L271. \r\n\r\nThis task is not resource-intensive so:\r\n- If the task is not idempotent it should be made so\r\n- It should be run on the acks-late-micro-short queue\r\n- Retries should use the same strategy as https://github.com/comic/grand-challenge.org/blob/ce130d451034377b0f3dfd83590268e48a66d518/app/grandchallenge/components/tasks.py#L323-L327\n", "before_files": [{"content": "import gzip\nfrom tempfile import NamedTemporaryFile\n\nimport boto3\nfrom django.conf import settings\nfrom django.core import files\nfrom django.db import models\nfrom django.utils.translation import gettext_lazy as _\n\nfrom grandchallenge.algorithms.models import AlgorithmImage\nfrom grandchallenge.core.models import UUIDModel\nfrom grandchallenge.core.storage import private_s3_storage\nfrom grandchallenge.github.models import GitHubWebhookMessage\n\n\nclass BuildStatusChoices(models.TextChoices):\n \"\"\"From https://docs.aws.amazon.com/codebuild/latest/APIReference/API_Build.html\"\"\"\n\n SUCCEEDED = \"SUCCEEDED\", _(\"Succeeded\")\n FAILED = \"FAILED\", _(\"Failed\")\n FAULT = \"FAULT\", _(\"Fault\")\n TIMED_OUT = \"TIMED_OUT\", _(\"Timed Out\")\n IN_PROGRESS = \"IN_PROGRESS\", _(\"In Progress\")\n STOPPED = \"STOPPED\", _(\"Stopped\")\n\n\nclass Build(UUIDModel):\n webhook_message = models.ForeignKey(\n GitHubWebhookMessage, on_delete=models.SET_NULL, null=True\n )\n algorithm_image = models.OneToOneField(\n AlgorithmImage, on_delete=models.SET_NULL, null=True\n )\n build_config = models.JSONField()\n build_id = models.CharField(max_length=1024)\n status = models.CharField(\n choices=BuildStatusChoices.choices, max_length=11\n )\n build_log = models.TextField(blank=True)\n\n BuildStatusChoices = BuildStatusChoices\n __client = None\n\n @property\n def client(self):\n if self.__client is None:\n self.__client = boto3.client(\"codebuild\")\n return self.__client\n\n @property\n def build_number(self):\n return self.build_id.split(\":\")[-1]\n\n def refresh_status(self):\n build_statuses = self.client.batch_get_builds(ids=[self.build_id])\n self.status = build_statuses[\"builds\"][0][\"buildStatus\"]\n\n def refresh_logs(self):\n try:\n with private_s3_storage.open(\n f\"codebuild/logs/{self.build_number}.gz\"\n ) as file:\n self.build_log = gzip.open(file).read().decode(\"utf-8\")\n except FileNotFoundError:\n self.build_log = \"Log file not available.\"\n\n def add_image_to_algorithm(self):\n with private_s3_storage.open(\n f\"codebuild/artifacts/{self.build_number}/{self.build_config['projectName']}/container-image.tar.gz\"\n ) as file:\n with NamedTemporaryFile(delete=True) as tmp_file:\n with open(tmp_file.name, \"wb\") as fd:\n for chunk in file.chunks():\n fd.write(chunk)\n\n tmp_file.flush()\n temp_file = files.File(\n tmp_file, name=f\"{str(self.pk)}.tar.gz\",\n )\n\n self.algorithm_image.image = temp_file\n self.algorithm_image.save()\n\n def _create_build(self):\n self.build_config = {\n \"projectName\": 
settings.CODEBUILD_PROJECT_NAME,\n \"sourceLocationOverride\": f\"{settings.PRIVATE_S3_STORAGE_KWARGS['bucket_name']}/{self.webhook_message.zipfile.name}\",\n \"sourceTypeOverride\": \"S3\",\n \"environmentVariablesOverride\": [\n {\n \"name\": \"IMAGE_REPO_NAME\",\n \"value\": f\"{AlgorithmImage._meta.app_label}/{AlgorithmImage._meta.model_name}\",\n },\n {\"name\": \"IMAGE_TAG\", \"value\": str(self.algorithm_image.pk)},\n ],\n }\n\n build_data = self.client.start_build(**self.build_config)\n\n self.build_id = build_data[\"build\"][\"id\"]\n self.status = build_data[\"build\"][\"buildStatus\"]\n\n def save(self, *args, **kwargs):\n if self._state.adding:\n self._create_build()\n\n super().save(*args, **kwargs)\n\n @property\n def animate(self):\n return self.status == BuildStatusChoices.IN_PROGRESS\n\n @property\n def status_context(self):\n if self.status == BuildStatusChoices.SUCCEEDED:\n return \"success\"\n elif self.status in {BuildStatusChoices.STOPPED}:\n return \"warning\"\n elif self.status in {\n BuildStatusChoices.FAILED,\n BuildStatusChoices.FAULT,\n BuildStatusChoices.TIMED_OUT,\n }:\n return \"danger\"\n elif self.status in {BuildStatusChoices.IN_PROGRESS}:\n return \"info\"\n else:\n return \"secondary\"\n\n class Meta:\n indexes = [\n models.Index(fields=[\"build_id\"]),\n ]\n", "path": "app/grandchallenge/codebuild/models.py"}, {"content": "from celery import shared_task\nfrom django.apps import apps\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.db.transaction import on_commit\n\nfrom grandchallenge.algorithms.models import Algorithm, AlgorithmImage\n\n\n@shared_task()\ndef create_codebuild_build(*, pk):\n GitHubWebhookMessage = apps.get_model( # noqa: N806\n app_label=\"github\", model_name=\"GitHubWebhookMessage\"\n )\n ghwm = GitHubWebhookMessage.objects.get(pk=pk)\n\n Build = apps.get_model( # noqa: N806\n app_label=\"codebuild\", model_name=\"Build\"\n )\n\n if Build.objects.filter(webhook_message=ghwm).exists():\n # Build already exists\n return\n\n try:\n algorithm = Algorithm.objects.get(\n repo_name=ghwm.payload[\"repository\"][\"full_name\"]\n )\n except ObjectDoesNotExist:\n # Repository is not linked to algorithm\n return\n\n algorithm_image = AlgorithmImage.objects.create(\n algorithm=algorithm,\n requires_gpu=algorithm.image_requires_gpu,\n requires_memory_gb=algorithm.image_requires_memory_gb,\n )\n\n build = Build.objects.create(\n webhook_message=ghwm, algorithm_image=algorithm_image,\n )\n\n on_commit(\n lambda: wait_for_build_completion.apply_async(\n kwargs={\"build_pk\": str(build.pk)}\n )\n )\n\n\n@shared_task(bind=True, max_retries=120)\ndef wait_for_build_completion(self, *, build_pk):\n Build = apps.get_model( # noqa: N806\n app_label=\"codebuild\", model_name=\"Build\"\n )\n\n build = Build.objects.get(pk=build_pk)\n build.refresh_status()\n\n if build.status == build.BuildStatusChoices.IN_PROGRESS:\n self.retry(countdown=30)\n else:\n build.refresh_logs()\n build.save()\n if build.status == build.BuildStatusChoices.SUCCEEDED:\n on_commit(\n lambda: add_image_to_algorithm.apply_async(\n kwargs={\"build_pk\": str(build_pk)}\n )\n )\n\n\n@shared_task\ndef add_image_to_algorithm(*, build_pk):\n Build = apps.get_model( # noqa: N806\n app_label=\"codebuild\", model_name=\"Build\"\n )\n\n build = Build.objects.get(pk=build_pk)\n build.add_image_to_algorithm()\n", "path": "app/grandchallenge/codebuild/tasks.py"}], "after_files": [{"content": "import gzip\nfrom tempfile import NamedTemporaryFile\n\nimport boto3\nfrom django.conf 
import settings\nfrom django.core import files\nfrom django.db import models\nfrom django.utils.translation import gettext_lazy as _\n\nfrom grandchallenge.algorithms.models import AlgorithmImage\nfrom grandchallenge.core.models import UUIDModel\nfrom grandchallenge.core.storage import private_s3_storage\nfrom grandchallenge.github.models import GitHubWebhookMessage\n\n\nclass BuildStatusChoices(models.TextChoices):\n \"\"\"From https://docs.aws.amazon.com/codebuild/latest/APIReference/API_Build.html\"\"\"\n\n SUCCEEDED = \"SUCCEEDED\", _(\"Succeeded\")\n FAILED = \"FAILED\", _(\"Failed\")\n FAULT = \"FAULT\", _(\"Fault\")\n TIMED_OUT = \"TIMED_OUT\", _(\"Timed Out\")\n IN_PROGRESS = \"IN_PROGRESS\", _(\"In Progress\")\n STOPPED = \"STOPPED\", _(\"Stopped\")\n\n\nclass Build(UUIDModel):\n webhook_message = models.ForeignKey(\n GitHubWebhookMessage, on_delete=models.SET_NULL, null=True\n )\n algorithm_image = models.OneToOneField(\n AlgorithmImage, on_delete=models.SET_NULL, null=True\n )\n build_config = models.JSONField()\n build_id = models.CharField(max_length=1024)\n status = models.CharField(\n choices=BuildStatusChoices.choices, max_length=11\n )\n build_log = models.TextField(blank=True)\n\n BuildStatusChoices = BuildStatusChoices\n __client = None\n\n @property\n def client(self):\n if self.__client is None:\n self.__client = boto3.client(\"codebuild\")\n return self.__client\n\n @property\n def build_number(self):\n return self.build_id.split(\":\")[-1]\n\n def refresh_status(self):\n build_statuses = self.client.batch_get_builds(ids=[self.build_id])\n self.status = build_statuses[\"builds\"][0][\"buildStatus\"]\n\n def refresh_logs(self):\n try:\n with private_s3_storage.open(\n f\"codebuild/logs/{self.build_number}.gz\"\n ) as file:\n self.build_log = gzip.open(file).read().decode(\"utf-8\")\n except FileNotFoundError:\n self.build_log = \"Log file not available.\"\n\n def add_image_to_algorithm(self):\n # TODO, this would be much faster using S3 copy, can then run on a smaller queue\n with private_s3_storage.open(\n f\"codebuild/artifacts/{self.build_number}/{self.build_config['projectName']}/container-image.tar.gz\"\n ) as file:\n with NamedTemporaryFile(delete=True) as tmp_file:\n with open(tmp_file.name, \"wb\") as fd:\n for chunk in file.chunks():\n fd.write(chunk)\n\n tmp_file.flush()\n temp_file = files.File(\n tmp_file, name=f\"{str(self.pk)}.tar.gz\",\n )\n\n self.algorithm_image.image = temp_file\n self.algorithm_image.save()\n\n def _create_build(self):\n self.build_config = {\n \"projectName\": settings.CODEBUILD_PROJECT_NAME,\n \"sourceLocationOverride\": f\"{settings.PRIVATE_S3_STORAGE_KWARGS['bucket_name']}/{self.webhook_message.zipfile.name}\",\n \"sourceTypeOverride\": \"S3\",\n \"environmentVariablesOverride\": [\n {\n \"name\": \"IMAGE_REPO_NAME\",\n \"value\": f\"{AlgorithmImage._meta.app_label}/{AlgorithmImage._meta.model_name}\",\n },\n {\"name\": \"IMAGE_TAG\", \"value\": str(self.algorithm_image.pk)},\n ],\n }\n\n build_data = self.client.start_build(**self.build_config)\n\n self.build_id = build_data[\"build\"][\"id\"]\n self.status = build_data[\"build\"][\"buildStatus\"]\n\n def save(self, *args, **kwargs):\n if self._state.adding:\n self._create_build()\n\n super().save(*args, **kwargs)\n\n @property\n def animate(self):\n return self.status == BuildStatusChoices.IN_PROGRESS\n\n @property\n def status_context(self):\n if self.status == BuildStatusChoices.SUCCEEDED:\n return \"success\"\n elif self.status in {BuildStatusChoices.STOPPED}:\n return 
\"warning\"\n elif self.status in {\n BuildStatusChoices.FAILED,\n BuildStatusChoices.FAULT,\n BuildStatusChoices.TIMED_OUT,\n }:\n return \"danger\"\n elif self.status in {BuildStatusChoices.IN_PROGRESS}:\n return \"info\"\n else:\n return \"secondary\"\n\n class Meta:\n indexes = [\n models.Index(fields=[\"build_id\"]),\n ]\n", "path": "app/grandchallenge/codebuild/models.py"}, {"content": "from celery import shared_task\nfrom django.apps import apps\nfrom django.conf import settings\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.db import transaction\nfrom django.db.transaction import on_commit\n\nfrom grandchallenge.algorithms.models import Algorithm, AlgorithmImage\nfrom grandchallenge.components.tasks import _retry\n\n\n@shared_task(**settings.CELERY_TASK_DECORATOR_KWARGS[\"acks-late-micro-short\"])\ndef create_codebuild_build(*, pk):\n GitHubWebhookMessage = apps.get_model( # noqa: N806\n app_label=\"github\", model_name=\"GitHubWebhookMessage\"\n )\n ghwm = GitHubWebhookMessage.objects.get(pk=pk)\n\n Build = apps.get_model( # noqa: N806\n app_label=\"codebuild\", model_name=\"Build\"\n )\n\n if Build.objects.filter(webhook_message=ghwm).exists():\n # Build already exists\n return\n\n try:\n algorithm = Algorithm.objects.get(\n repo_name=ghwm.payload[\"repository\"][\"full_name\"]\n )\n except ObjectDoesNotExist:\n # Repository is not linked to algorithm\n return\n\n with transaction.atomic():\n algorithm_image = AlgorithmImage.objects.create(\n algorithm=algorithm,\n requires_gpu=algorithm.image_requires_gpu,\n requires_memory_gb=algorithm.image_requires_memory_gb,\n )\n build = Build.objects.create(\n webhook_message=ghwm, algorithm_image=algorithm_image,\n )\n\n on_commit(\n lambda: wait_for_build_completion.apply_async(\n kwargs={\"build_pk\": str(build.pk)}\n )\n )\n\n\n@shared_task(**settings.CELERY_TASK_DECORATOR_KWARGS[\"acks-late-micro-short\"])\ndef wait_for_build_completion(*, build_pk, retries=0):\n Build = apps.get_model( # noqa: N806\n app_label=\"codebuild\", model_name=\"Build\"\n )\n\n build = Build.objects.get(pk=build_pk)\n\n with transaction.atomic():\n build.refresh_status()\n\n if build.status == build.BuildStatusChoices.IN_PROGRESS:\n _retry(\n task=wait_for_build_completion,\n signature_kwargs={\"kwargs\": {\"build_pk\": build_pk}},\n retries=retries,\n )\n return\n else:\n build.refresh_logs()\n build.save()\n if build.status == build.BuildStatusChoices.SUCCEEDED:\n on_commit(\n lambda: add_image_to_algorithm.apply_async(\n kwargs={\"build_pk\": str(build_pk)}\n )\n )\n\n\n@shared_task(**settings.CELERY_TASK_DECORATOR_KWARGS[\"acks-late-2xlarge\"])\ndef add_image_to_algorithm(*, build_pk):\n Build = apps.get_model( # noqa: N806\n app_label=\"codebuild\", model_name=\"Build\"\n )\n\n build = Build.objects.get(pk=build_pk)\n\n if not build.algorithm_image.image:\n build.add_image_to_algorithm()\n", "path": "app/grandchallenge/codebuild/tasks.py"}]}
| 2,398 | 991 |
gh_patches_debug_30096 | rasdani/github-patches | git_diff | SeldonIO__MLServer-464 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Install requirements system-wide when building custom images
Currently, custom dependencies coming from a `requirements.txt` get installed on the base environment for user `1000` (i.e. the default non-`root` user in the image). Therefore, if the image is loaded with a different user ID, these custom dependencies are not available.
To work around this, we should install these custom deps under `root`, so that they are installed system-wide and accessible by any user.
--- END ISSUE ---
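
As an editorial aside, the sketch below shows the general idea in the form it would take inside the `DockerfileTemplate` string of `mlserver/cli/constants.py`: run the dependency install as `root` so packages land system-wide, then restore ownership and switch back to the unprivileged user. The fragment name and the exact `chown`/`chmod` values are assumptions for illustration; the authoritative change is the golden diff below.
```python
# Hypothetical fragment of the DockerfileTemplate string in mlserver/cli/constants.py.
ENV_INSTALL_FRAGMENT = """
USER root
# Install custom dependencies system-wide so they resolve for any runtime UID
RUN ./hack/build-env.sh . ./envs/base && \\
    chown -R 1000:0 ./envs/base && \\
    chmod -R 776 ./envs/base
USER 1000
"""
```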
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `runtimes/sklearn/mlserver_sklearn/encoding.py`
Content:
```
1 from dataclasses import dataclass
2 from typing import List, Union
3
4 import numpy as np
5 import pandas as pd
6
7 from mlserver.codecs import NumpyCodec, PandasCodec
8 from mlserver.errors import InferenceError
9 from mlserver.types import ResponseOutput
10
11 # TODO: more support!
12 SKLearnResponse = Union[np.ndarray, pd.DataFrame]
13
14
15 @dataclass
16 class SKLearnPayload:
17 """Class for keeping track of requested outputs
18 and corresponding model responses."""
19
20 requested_output: ResponseOutput
21 model_output: SKLearnResponse
22
23
24 def to_outputs(sklearn_payloads: List[SKLearnPayload]) -> List[ResponseOutput]:
25 """
26 Encodes a list of SKLearn payloads into a list of proto-able ResponseOutputs.
27
28 :param sklearn_payloads: List of requested outputs + the responses from the
29 SKLearn model
30 :return: response_outputs: List of encoded response outputs
31 :raises: InferenceError if multiple columnar responses were returned by the model
32 """
33 response_outputs = []
34
35 all_output_names = [p.requested_output.name for p in sklearn_payloads]
36
37 for payload in sklearn_payloads:
38 if _is_columnar_data(payload) and len(sklearn_payloads) > 1:
39 raise InferenceError(
40 f"{payload.requested_output.name} returned columnar data of type"
41 f" {type(payload.model_output)} and {all_output_names} were"
42 f" requested. Cannot encode multiple columnar data responses"
43 f" one response."
44 )
45
46 if isinstance(payload.model_output, pd.DataFrame):
47 # Immediately return the outputs of columnar data encoding,
48 # don't try to jam more outputs together in one response.
49 return PandasCodec.encode("some-model", payload.model_output).outputs
50
51 response_output = NumpyCodec.encode(
52 name=payload.requested_output.name, payload=payload.model_output
53 )
54 response_outputs.append(response_output)
55
56 return response_outputs
57
58
59 def _is_columnar_data(payload: SKLearnPayload) -> bool:
60 return isinstance(payload.model_output, pd.DataFrame)
61
```
Path: `runtimes/sklearn/mlserver_sklearn/sklearn.py`
Content:
```
1 from typing import List
2
3 import joblib
4 from mlserver_sklearn import encoding
5 from mlserver_sklearn.encoding import SKLearnPayload
6 from sklearn.pipeline import Pipeline
7
8 from mlserver import types
9 from mlserver.codecs import NumpyRequestCodec
10 from mlserver.errors import InferenceError
11 from mlserver.model import MLModel
12 from mlserver.types import InferenceResponse
13 from mlserver.utils import get_model_uri
14
15 PREDICT_OUTPUT = "predict"
16 PREDICT_PROBA_OUTPUT = "predict_proba"
17 VALID_OUTPUTS = [PREDICT_OUTPUT, PREDICT_PROBA_OUTPUT]
18
19 WELLKNOWN_MODEL_FILENAMES = ["model.joblib", "model.pickle", "model.pkl"]
20
21
22 class SKLearnModel(MLModel):
23 """
24 Implementation of the MLModel interface to load and serve `scikit-learn`
25 models persisted with `joblib`.
26 """
27
28 async def load(self) -> bool:
29 # TODO: Log info message
30 model_uri = await get_model_uri(
31 self._settings, wellknown_filenames=WELLKNOWN_MODEL_FILENAMES
32 )
33 self._model = joblib.load(model_uri)
34
35 self.ready = True
36 return self.ready
37
38 async def predict(self, payload: types.InferenceRequest) -> types.InferenceResponse:
39 payload = self._check_request(payload)
40
41 model_responses = self._get_model_outputs(payload)
42
43 return InferenceResponse(
44 model_name=self.name,
45 model_version=self.version,
46 outputs=encoding.to_outputs(sklearn_payloads=model_responses),
47 )
48
49 def _check_request(self, payload: types.InferenceRequest) -> types.InferenceRequest:
50 if not payload.outputs:
51 # By default, only return the result of `predict()`
52 payload.outputs = [types.RequestOutput(name=PREDICT_OUTPUT)]
53 else:
54 for request_output in payload.outputs:
55 if request_output.name not in VALID_OUTPUTS:
56 raise InferenceError(
57 f"SKLearnModel only supports '{PREDICT_OUTPUT}' and "
58 f"'{PREDICT_PROBA_OUTPUT}' as outputs "
59 f"({request_output.name} was received)"
60 )
61
62 # Regression models do not support `predict_proba`
63 if PREDICT_PROBA_OUTPUT in [o.name for o in payload.outputs]:
64 # Ensure model supports it
65 maybe_regressor = self._model
66 if isinstance(self._model, Pipeline):
67 maybe_regressor = maybe_regressor.steps[-1][-1]
68
69 if not hasattr(maybe_regressor, PREDICT_PROBA_OUTPUT):
70 raise InferenceError(
71 f"{type(maybe_regressor)} models do not support "
72 f"'{PREDICT_PROBA_OUTPUT}"
73 )
74
75 return payload
76
77 def _get_model_outputs(
78 self, payload: types.InferenceRequest
79 ) -> List[SKLearnPayload]:
80 decoded_request = self.decode_request(payload, default_codec=NumpyRequestCodec)
81
82 outputs = []
83 for request_output in payload.outputs: # type: ignore
84 predict_fn = getattr(self._model, request_output.name)
85 y = predict_fn(decoded_request)
86 payload = SKLearnPayload(requested_output=request_output, model_output=y)
87 outputs.append(payload)
88
89 return outputs
90
```
Path: `mlserver/cli/constants.py`
Content:
```
1 DockerfileName = "Dockerfile"
2 DockerfileTemplate = """
3 FROM continuumio/miniconda3:4.10.3 AS env-builder
4 SHELL ["/bin/bash", "-c"]
5
6 ARG MLSERVER_ENV_NAME="mlserver-custom-env" \\
7 MLSERVER_ENV_TARBALL="./envs/base.tar.gz"
8
9 RUN conda config --add channels conda-forge && \\
10 conda install conda-pack
11
12 # The `[]` character range will ensure that Docker doesn't complain if the
13 # files don't exist:
14 # https://stackoverflow.com/a/65138098/5015573
15 COPY \\
16 ./environment.ym[l] \\
17 ./environment.yam[l] \\
18 ./conda.ym[l] \\
19 ./conda.yam[l] \\
20 .
21 RUN mkdir $(dirname $MLSERVER_ENV_TARBALL); \\
22 for envFile in environment.yml environment.yaml conda.yml conda.yaml; do \\
23 if [[ -f $envFile ]]; then \\
24 conda env create \
25 --name $MLSERVER_ENV_NAME \\
26 --file $envFile; \\
27 conda-pack \
28 -n $MLSERVER_ENV_NAME \\
29 -o $MLSERVER_ENV_TARBALL; \\
30 fi \\
31 done; \\
32 chmod -R 776 $(dirname $MLSERVER_ENV_TARBALL)
33
34 FROM seldonio/mlserver:{version}-slim
35 SHELL ["/bin/bash", "-c"]
36
37 # Copy all potential sources for custom environments
38 COPY \\
39 --chown=1000 \\
40 --from=env-builder \\
41 /envs/base.tar.g[z] \\
42 ./envs/base.tar.gz
43 COPY \\
44 ./settings.jso[n] \\
45 ./model-settings.jso[n] \\
46 ./requirements.tx[t] \\
47 .
48
49 RUN ./hack/build-env.sh . ./envs/base
50
51 # Copy everything else
52 COPY . .
53
54 # Override MLServer's own `CMD` to activate the embedded environment
55 # (optionally activating the hot-loaded one as well).
56 CMD source ./hack/activate-env.sh ./envs/base.tar.gz ./envs/base && \\
57 mlserver start $MLSERVER_MODELS_DIR
58 """
59
60 DockerignoreName = ".dockerignore"
61 Dockerignore = """
62 # Binaries for programs and plugins
63 *.exe
64 *.exe~
65 *.dll
66 *.so
67 *.dylib
68 *.pyc
69 *.pyo
70 *.pyd
71 bin
72
73 # Mac file system
74 **/.DS_Store
75
76 # Python dev
77 __pycache__
78 .Python
79 env
80 pip-log.txt
81 pip-delete-this-directory.txt
82 .mypy_cache
83 eggs/
84 .eggs/
85 *.egg-info/
86 ./pytest_cache
87 .tox
88 build/
89 dist/
90
91 # Notebook Checkpoints
92 .ipynb_checkpoints
93
94 .coverage
95 .coverage.*
96 .cache
97 nosetests.xml
98 coverage.xml
99 *,cover
100 *.log
101 .git
102 """
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mlserver/cli/constants.py b/mlserver/cli/constants.py
--- a/mlserver/cli/constants.py
+++ b/mlserver/cli/constants.py
@@ -46,7 +46,13 @@
./requirements.tx[t] \\
.
-RUN ./hack/build-env.sh . ./envs/base
+USER root
+# Install dependencies system-wide, to ensure that they are available for every
+# user
+RUN ./hack/build-env.sh . ./envs/base && \
+ chown -R 1000:0 ./envs/base && \\
+ chmod -R 776 ./envs/base
+USER 1000
# Copy everything else
COPY . .
diff --git a/runtimes/sklearn/mlserver_sklearn/encoding.py b/runtimes/sklearn/mlserver_sklearn/encoding.py
--- a/runtimes/sklearn/mlserver_sklearn/encoding.py
+++ b/runtimes/sklearn/mlserver_sklearn/encoding.py
@@ -6,7 +6,7 @@
from mlserver.codecs import NumpyCodec, PandasCodec
from mlserver.errors import InferenceError
-from mlserver.types import ResponseOutput
+from mlserver.types import RequestOutput, ResponseOutput
# TODO: more support!
SKLearnResponse = Union[np.ndarray, pd.DataFrame]
@@ -17,7 +17,7 @@
"""Class for keeping track of requested outputs
and corresponding model responses."""
- requested_output: ResponseOutput
+ requested_output: RequestOutput
model_output: SKLearnResponse
diff --git a/runtimes/sklearn/mlserver_sklearn/sklearn.py b/runtimes/sklearn/mlserver_sklearn/sklearn.py
--- a/runtimes/sklearn/mlserver_sklearn/sklearn.py
+++ b/runtimes/sklearn/mlserver_sklearn/sklearn.py
@@ -83,7 +83,7 @@
for request_output in payload.outputs: # type: ignore
predict_fn = getattr(self._model, request_output.name)
y = predict_fn(decoded_request)
- payload = SKLearnPayload(requested_output=request_output, model_output=y)
- outputs.append(payload)
+ output = SKLearnPayload(requested_output=request_output, model_output=y)
+ outputs.append(output)
return outputs
|
{"golden_diff": "diff --git a/mlserver/cli/constants.py b/mlserver/cli/constants.py\n--- a/mlserver/cli/constants.py\n+++ b/mlserver/cli/constants.py\n@@ -46,7 +46,13 @@\n ./requirements.tx[t] \\\\\n .\n \n-RUN ./hack/build-env.sh . ./envs/base\n+USER root\n+# Install dependencies system-wide, to ensure that they are available for every\n+# user\n+RUN ./hack/build-env.sh . ./envs/base && \\\n+ chown -R 1000:0 ./envs/base && \\\\\n+ chmod -R 776 ./envs/base\n+USER 1000\n \n # Copy everything else\n COPY . .\ndiff --git a/runtimes/sklearn/mlserver_sklearn/encoding.py b/runtimes/sklearn/mlserver_sklearn/encoding.py\n--- a/runtimes/sklearn/mlserver_sklearn/encoding.py\n+++ b/runtimes/sklearn/mlserver_sklearn/encoding.py\n@@ -6,7 +6,7 @@\n \n from mlserver.codecs import NumpyCodec, PandasCodec\n from mlserver.errors import InferenceError\n-from mlserver.types import ResponseOutput\n+from mlserver.types import RequestOutput, ResponseOutput\n \n # TODO: more support!\n SKLearnResponse = Union[np.ndarray, pd.DataFrame]\n@@ -17,7 +17,7 @@\n \"\"\"Class for keeping track of requested outputs\n and corresponding model responses.\"\"\"\n \n- requested_output: ResponseOutput\n+ requested_output: RequestOutput\n model_output: SKLearnResponse\n \n \ndiff --git a/runtimes/sklearn/mlserver_sklearn/sklearn.py b/runtimes/sklearn/mlserver_sklearn/sklearn.py\n--- a/runtimes/sklearn/mlserver_sklearn/sklearn.py\n+++ b/runtimes/sklearn/mlserver_sklearn/sklearn.py\n@@ -83,7 +83,7 @@\n for request_output in payload.outputs: # type: ignore\n predict_fn = getattr(self._model, request_output.name)\n y = predict_fn(decoded_request)\n- payload = SKLearnPayload(requested_output=request_output, model_output=y)\n- outputs.append(payload)\n+ output = SKLearnPayload(requested_output=request_output, model_output=y)\n+ outputs.append(output)\n \n return outputs\n", "issue": "Install requirements system-wide when building custom images\nCurrently, custom dependencies coming from a `requirements.txt` get installed on the base environment for user `1000` (i.e. the default non-`root` user in the image). 
Therefore, if the image is loaded with a different user ID, these custom dependencies are not available.\r\n\r\nTo work around this, we should install these custom deps under `root`, so that they are installed system-wide and accessible by any user.\n", "before_files": [{"content": "from dataclasses import dataclass\nfrom typing import List, Union\n\nimport numpy as np\nimport pandas as pd\n\nfrom mlserver.codecs import NumpyCodec, PandasCodec\nfrom mlserver.errors import InferenceError\nfrom mlserver.types import ResponseOutput\n\n# TODO: more support!\nSKLearnResponse = Union[np.ndarray, pd.DataFrame]\n\n\n@dataclass\nclass SKLearnPayload:\n \"\"\"Class for keeping track of requested outputs\n and corresponding model responses.\"\"\"\n\n requested_output: ResponseOutput\n model_output: SKLearnResponse\n\n\ndef to_outputs(sklearn_payloads: List[SKLearnPayload]) -> List[ResponseOutput]:\n \"\"\"\n Encodes a list of SKLearn payloads into a list of proto-able ResponseOutputs.\n\n :param sklearn_payloads: List of requested outputs + the responses from the\n SKLearn model\n :return: response_outputs: List of encoded response outputs\n :raises: InferenceError if multiple columnar responses were returned by the model\n \"\"\"\n response_outputs = []\n\n all_output_names = [p.requested_output.name for p in sklearn_payloads]\n\n for payload in sklearn_payloads:\n if _is_columnar_data(payload) and len(sklearn_payloads) > 1:\n raise InferenceError(\n f\"{payload.requested_output.name} returned columnar data of type\"\n f\" {type(payload.model_output)} and {all_output_names} were\"\n f\" requested. Cannot encode multiple columnar data responses\"\n f\" one response.\"\n )\n\n if isinstance(payload.model_output, pd.DataFrame):\n # Immediately return the outputs of columnar data encoding,\n # don't try to jam more outputs together in one response.\n return PandasCodec.encode(\"some-model\", payload.model_output).outputs\n\n response_output = NumpyCodec.encode(\n name=payload.requested_output.name, payload=payload.model_output\n )\n response_outputs.append(response_output)\n\n return response_outputs\n\n\ndef _is_columnar_data(payload: SKLearnPayload) -> bool:\n return isinstance(payload.model_output, pd.DataFrame)\n", "path": "runtimes/sklearn/mlserver_sklearn/encoding.py"}, {"content": "from typing import List\n\nimport joblib\nfrom mlserver_sklearn import encoding\nfrom mlserver_sklearn.encoding import SKLearnPayload\nfrom sklearn.pipeline import Pipeline\n\nfrom mlserver import types\nfrom mlserver.codecs import NumpyRequestCodec\nfrom mlserver.errors import InferenceError\nfrom mlserver.model import MLModel\nfrom mlserver.types import InferenceResponse\nfrom mlserver.utils import get_model_uri\n\nPREDICT_OUTPUT = \"predict\"\nPREDICT_PROBA_OUTPUT = \"predict_proba\"\nVALID_OUTPUTS = [PREDICT_OUTPUT, PREDICT_PROBA_OUTPUT]\n\nWELLKNOWN_MODEL_FILENAMES = [\"model.joblib\", \"model.pickle\", \"model.pkl\"]\n\n\nclass SKLearnModel(MLModel):\n \"\"\"\n Implementation of the MLModel interface to load and serve `scikit-learn`\n models persisted with `joblib`.\n \"\"\"\n\n async def load(self) -> bool:\n # TODO: Log info message\n model_uri = await get_model_uri(\n self._settings, wellknown_filenames=WELLKNOWN_MODEL_FILENAMES\n )\n self._model = joblib.load(model_uri)\n\n self.ready = True\n return self.ready\n\n async def predict(self, payload: types.InferenceRequest) -> types.InferenceResponse:\n payload = self._check_request(payload)\n\n model_responses = self._get_model_outputs(payload)\n\n return 
InferenceResponse(\n model_name=self.name,\n model_version=self.version,\n outputs=encoding.to_outputs(sklearn_payloads=model_responses),\n )\n\n def _check_request(self, payload: types.InferenceRequest) -> types.InferenceRequest:\n if not payload.outputs:\n # By default, only return the result of `predict()`\n payload.outputs = [types.RequestOutput(name=PREDICT_OUTPUT)]\n else:\n for request_output in payload.outputs:\n if request_output.name not in VALID_OUTPUTS:\n raise InferenceError(\n f\"SKLearnModel only supports '{PREDICT_OUTPUT}' and \"\n f\"'{PREDICT_PROBA_OUTPUT}' as outputs \"\n f\"({request_output.name} was received)\"\n )\n\n # Regression models do not support `predict_proba`\n if PREDICT_PROBA_OUTPUT in [o.name for o in payload.outputs]:\n # Ensure model supports it\n maybe_regressor = self._model\n if isinstance(self._model, Pipeline):\n maybe_regressor = maybe_regressor.steps[-1][-1]\n\n if not hasattr(maybe_regressor, PREDICT_PROBA_OUTPUT):\n raise InferenceError(\n f\"{type(maybe_regressor)} models do not support \"\n f\"'{PREDICT_PROBA_OUTPUT}\"\n )\n\n return payload\n\n def _get_model_outputs(\n self, payload: types.InferenceRequest\n ) -> List[SKLearnPayload]:\n decoded_request = self.decode_request(payload, default_codec=NumpyRequestCodec)\n\n outputs = []\n for request_output in payload.outputs: # type: ignore\n predict_fn = getattr(self._model, request_output.name)\n y = predict_fn(decoded_request)\n payload = SKLearnPayload(requested_output=request_output, model_output=y)\n outputs.append(payload)\n\n return outputs\n", "path": "runtimes/sklearn/mlserver_sklearn/sklearn.py"}, {"content": "DockerfileName = \"Dockerfile\"\nDockerfileTemplate = \"\"\"\nFROM continuumio/miniconda3:4.10.3 AS env-builder\nSHELL [\"/bin/bash\", \"-c\"]\n\nARG MLSERVER_ENV_NAME=\"mlserver-custom-env\" \\\\\n MLSERVER_ENV_TARBALL=\"./envs/base.tar.gz\"\n\nRUN conda config --add channels conda-forge && \\\\\n conda install conda-pack\n\n# The `[]` character range will ensure that Docker doesn't complain if the\n# files don't exist:\n# https://stackoverflow.com/a/65138098/5015573\nCOPY \\\\\n ./environment.ym[l] \\\\\n ./environment.yam[l] \\\\\n ./conda.ym[l] \\\\\n ./conda.yam[l] \\\\\n .\nRUN mkdir $(dirname $MLSERVER_ENV_TARBALL); \\\\\n for envFile in environment.yml environment.yaml conda.yml conda.yaml; do \\\\\n if [[ -f $envFile ]]; then \\\\\n conda env create \\\n --name $MLSERVER_ENV_NAME \\\\\n --file $envFile; \\\\\n conda-pack \\\n -n $MLSERVER_ENV_NAME \\\\\n -o $MLSERVER_ENV_TARBALL; \\\\\n fi \\\\\n done; \\\\\n chmod -R 776 $(dirname $MLSERVER_ENV_TARBALL)\n\nFROM seldonio/mlserver:{version}-slim\nSHELL [\"/bin/bash\", \"-c\"]\n\n# Copy all potential sources for custom environments\nCOPY \\\\\n --chown=1000 \\\\\n --from=env-builder \\\\\n /envs/base.tar.g[z] \\\\\n ./envs/base.tar.gz\nCOPY \\\\\n ./settings.jso[n] \\\\\n ./model-settings.jso[n] \\\\\n ./requirements.tx[t] \\\\\n .\n\nRUN ./hack/build-env.sh . ./envs/base\n\n# Copy everything else\nCOPY . 
.\n\n# Override MLServer's own `CMD` to activate the embedded environment\n# (optionally activating the hot-loaded one as well).\nCMD source ./hack/activate-env.sh ./envs/base.tar.gz ./envs/base && \\\\\n mlserver start $MLSERVER_MODELS_DIR\n\"\"\"\n\nDockerignoreName = \".dockerignore\"\nDockerignore = \"\"\"\n# Binaries for programs and plugins\n*.exe\n*.exe~\n*.dll\n*.so\n*.dylib\n*.pyc\n*.pyo\n*.pyd\nbin\n\n# Mac file system\n**/.DS_Store\n\n# Python dev\n__pycache__\n.Python\nenv\npip-log.txt\npip-delete-this-directory.txt\n.mypy_cache\neggs/\n.eggs/\n*.egg-info/\n./pytest_cache\n.tox\nbuild/\ndist/\n\n# Notebook Checkpoints\n.ipynb_checkpoints\n\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*,cover\n*.log\n.git\n\"\"\"\n", "path": "mlserver/cli/constants.py"}], "after_files": [{"content": "from dataclasses import dataclass\nfrom typing import List, Union\n\nimport numpy as np\nimport pandas as pd\n\nfrom mlserver.codecs import NumpyCodec, PandasCodec\nfrom mlserver.errors import InferenceError\nfrom mlserver.types import RequestOutput, ResponseOutput\n\n# TODO: more support!\nSKLearnResponse = Union[np.ndarray, pd.DataFrame]\n\n\n@dataclass\nclass SKLearnPayload:\n \"\"\"Class for keeping track of requested outputs\n and corresponding model responses.\"\"\"\n\n requested_output: RequestOutput\n model_output: SKLearnResponse\n\n\ndef to_outputs(sklearn_payloads: List[SKLearnPayload]) -> List[ResponseOutput]:\n \"\"\"\n Encodes a list of SKLearn payloads into a list of proto-able ResponseOutputs.\n\n :param sklearn_payloads: List of requested outputs + the responses from the\n SKLearn model\n :return: response_outputs: List of encoded response outputs\n :raises: InferenceError if multiple columnar responses were returned by the model\n \"\"\"\n response_outputs = []\n\n all_output_names = [p.requested_output.name for p in sklearn_payloads]\n\n for payload in sklearn_payloads:\n if _is_columnar_data(payload) and len(sklearn_payloads) > 1:\n raise InferenceError(\n f\"{payload.requested_output.name} returned columnar data of type\"\n f\" {type(payload.model_output)} and {all_output_names} were\"\n f\" requested. 
Cannot encode multiple columnar data responses\"\n f\" one response.\"\n )\n\n if isinstance(payload.model_output, pd.DataFrame):\n # Immediately return the outputs of columnar data encoding,\n # don't try to jam more outputs together in one response.\n return PandasCodec.encode(\"some-model\", payload.model_output).outputs\n\n response_output = NumpyCodec.encode(\n name=payload.requested_output.name, payload=payload.model_output\n )\n response_outputs.append(response_output)\n\n return response_outputs\n\n\ndef _is_columnar_data(payload: SKLearnPayload) -> bool:\n return isinstance(payload.model_output, pd.DataFrame)\n", "path": "runtimes/sklearn/mlserver_sklearn/encoding.py"}, {"content": "from typing import List\n\nimport joblib\nfrom mlserver_sklearn import encoding\nfrom mlserver_sklearn.encoding import SKLearnPayload\nfrom sklearn.pipeline import Pipeline\n\nfrom mlserver import types\nfrom mlserver.codecs import NumpyRequestCodec\nfrom mlserver.errors import InferenceError\nfrom mlserver.model import MLModel\nfrom mlserver.types import InferenceResponse\nfrom mlserver.utils import get_model_uri\n\nPREDICT_OUTPUT = \"predict\"\nPREDICT_PROBA_OUTPUT = \"predict_proba\"\nVALID_OUTPUTS = [PREDICT_OUTPUT, PREDICT_PROBA_OUTPUT]\n\nWELLKNOWN_MODEL_FILENAMES = [\"model.joblib\", \"model.pickle\", \"model.pkl\"]\n\n\nclass SKLearnModel(MLModel):\n \"\"\"\n Implementation of the MLModel interface to load and serve `scikit-learn`\n models persisted with `joblib`.\n \"\"\"\n\n async def load(self) -> bool:\n # TODO: Log info message\n model_uri = await get_model_uri(\n self._settings, wellknown_filenames=WELLKNOWN_MODEL_FILENAMES\n )\n self._model = joblib.load(model_uri)\n\n self.ready = True\n return self.ready\n\n async def predict(self, payload: types.InferenceRequest) -> types.InferenceResponse:\n payload = self._check_request(payload)\n\n model_responses = self._get_model_outputs(payload)\n\n return InferenceResponse(\n model_name=self.name,\n model_version=self.version,\n outputs=encoding.to_outputs(sklearn_payloads=model_responses),\n )\n\n def _check_request(self, payload: types.InferenceRequest) -> types.InferenceRequest:\n if not payload.outputs:\n # By default, only return the result of `predict()`\n payload.outputs = [types.RequestOutput(name=PREDICT_OUTPUT)]\n else:\n for request_output in payload.outputs:\n if request_output.name not in VALID_OUTPUTS:\n raise InferenceError(\n f\"SKLearnModel only supports '{PREDICT_OUTPUT}' and \"\n f\"'{PREDICT_PROBA_OUTPUT}' as outputs \"\n f\"({request_output.name} was received)\"\n )\n\n # Regression models do not support `predict_proba`\n if PREDICT_PROBA_OUTPUT in [o.name for o in payload.outputs]:\n # Ensure model supports it\n maybe_regressor = self._model\n if isinstance(self._model, Pipeline):\n maybe_regressor = maybe_regressor.steps[-1][-1]\n\n if not hasattr(maybe_regressor, PREDICT_PROBA_OUTPUT):\n raise InferenceError(\n f\"{type(maybe_regressor)} models do not support \"\n f\"'{PREDICT_PROBA_OUTPUT}\"\n )\n\n return payload\n\n def _get_model_outputs(\n self, payload: types.InferenceRequest\n ) -> List[SKLearnPayload]:\n decoded_request = self.decode_request(payload, default_codec=NumpyRequestCodec)\n\n outputs = []\n for request_output in payload.outputs: # type: ignore\n predict_fn = getattr(self._model, request_output.name)\n y = predict_fn(decoded_request)\n output = SKLearnPayload(requested_output=request_output, model_output=y)\n outputs.append(output)\n\n return outputs\n", "path": 
"runtimes/sklearn/mlserver_sklearn/sklearn.py"}, {"content": "DockerfileName = \"Dockerfile\"\nDockerfileTemplate = \"\"\"\nFROM continuumio/miniconda3:4.10.3 AS env-builder\nSHELL [\"/bin/bash\", \"-c\"]\n\nARG MLSERVER_ENV_NAME=\"mlserver-custom-env\" \\\\\n MLSERVER_ENV_TARBALL=\"./envs/base.tar.gz\"\n\nRUN conda config --add channels conda-forge && \\\\\n conda install conda-pack\n\n# The `[]` character range will ensure that Docker doesn't complain if the\n# files don't exist:\n# https://stackoverflow.com/a/65138098/5015573\nCOPY \\\\\n ./environment.ym[l] \\\\\n ./environment.yam[l] \\\\\n ./conda.ym[l] \\\\\n ./conda.yam[l] \\\\\n .\nRUN mkdir $(dirname $MLSERVER_ENV_TARBALL); \\\\\n for envFile in environment.yml environment.yaml conda.yml conda.yaml; do \\\\\n if [[ -f $envFile ]]; then \\\\\n conda env create \\\n --name $MLSERVER_ENV_NAME \\\\\n --file $envFile; \\\\\n conda-pack \\\n -n $MLSERVER_ENV_NAME \\\\\n -o $MLSERVER_ENV_TARBALL; \\\\\n fi \\\\\n done; \\\\\n chmod -R 776 $(dirname $MLSERVER_ENV_TARBALL)\n\nFROM seldonio/mlserver:{version}-slim\nSHELL [\"/bin/bash\", \"-c\"]\n\n# Copy all potential sources for custom environments\nCOPY \\\\\n --chown=1000 \\\\\n --from=env-builder \\\\\n /envs/base.tar.g[z] \\\\\n ./envs/base.tar.gz\nCOPY \\\\\n ./settings.jso[n] \\\\\n ./model-settings.jso[n] \\\\\n ./requirements.tx[t] \\\\\n .\n\nUSER root\n# Install dependencies system-wide, to ensure that they are available for every\n# user\nRUN ./hack/build-env.sh . ./envs/base && \\\n chown -R 1000:0 ./envs/base && \\\\\n chmod -R 776 ./envs/base\nUSER 1000\n\n# Copy everything else\nCOPY . .\n\n# Override MLServer's own `CMD` to activate the embedded environment\n# (optionally activating the hot-loaded one as well).\nCMD source ./hack/activate-env.sh ./envs/base.tar.gz ./envs/base && \\\\\n mlserver start $MLSERVER_MODELS_DIR\n\"\"\"\n\nDockerignoreName = \".dockerignore\"\nDockerignore = \"\"\"\n# Binaries for programs and plugins\n*.exe\n*.exe~\n*.dll\n*.so\n*.dylib\n*.pyc\n*.pyo\n*.pyd\nbin\n\n# Mac file system\n**/.DS_Store\n\n# Python dev\n__pycache__\n.Python\nenv\npip-log.txt\npip-delete-this-directory.txt\n.mypy_cache\neggs/\n.eggs/\n*.egg-info/\n./pytest_cache\n.tox\nbuild/\ndist/\n\n# Notebook Checkpoints\n.ipynb_checkpoints\n\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*,cover\n*.log\n.git\n\"\"\"\n", "path": "mlserver/cli/constants.py"}]}
| 2,652 | 508 |
gh_patches_debug_11273 | rasdani/github-patches | git_diff | pre-commit__pre-commit-387 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
do not recommend `pre-commit autoupdate` on failure of `pre-commit autoupdate`
It would be preferable to recommend something that has a chance of fixing the problem.
Instructions to reproduce are in #385.
--- END ISSUE ---
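
As a hedged illustration of the direction the issue suggests (not the actual fix), the message that points users at `pre-commit autoupdate` could be made aware of whether autoupdate is the command that is already failing; the helper name and flag below are invented for the example.
```python
# Hypothetical reshaping of the error emitted from Repository.hooks
# (pre_commit/repository.py): only recommend `pre-commit autoupdate` when we
# are not already running it.
def missing_hook_message(hook_id, repo, running_autoupdate=False):
    advice = (
        'Check that the configured sha of this repository is correct.'
        if running_autoupdate
        else 'Often `pre-commit autoupdate` fixes this.'
    )
    return (
        '`{}` is not present in repository {}. '
        'Typo? Perhaps it is introduced in a newer version? {}'.format(
            hook_id, repo, advice,
        )
    )
```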
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/repository.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import io
4 import json
5 import logging
6 import os
7 import shutil
8 from collections import defaultdict
9
10 import pkg_resources
11 from cached_property import cached_property
12
13 from pre_commit import five
14 from pre_commit import git
15 from pre_commit.clientlib.validate_config import is_local_hooks
16 from pre_commit.clientlib.validate_manifest import MANIFEST_JSON_SCHEMA
17 from pre_commit.jsonschema_extensions import apply_defaults
18 from pre_commit.languages.all import languages
19 from pre_commit.languages.helpers import environment_dir
20 from pre_commit.manifest import Manifest
21 from pre_commit.prefixed_command_runner import PrefixedCommandRunner
22
23
24 logger = logging.getLogger('pre_commit')
25
26 _pre_commit_version = pkg_resources.parse_version(
27 pkg_resources.get_distribution('pre-commit').version
28 )
29
30 # Bump when installation changes in a backwards / forwards incompatible way
31 INSTALLED_STATE_VERSION = '1'
32
33
34 class Repository(object):
35 def __init__(self, repo_config, repo_path_getter):
36 self.repo_config = repo_config
37 self.repo_path_getter = repo_path_getter
38 self.__installed = False
39
40 @classmethod
41 def create(cls, config, store):
42 if is_local_hooks(config):
43 return LocalRepository(config)
44 else:
45 repo_path_getter = store.get_repo_path_getter(
46 config['repo'], config['sha']
47 )
48 return cls(config, repo_path_getter)
49
50 @cached_property
51 def repo_url(self):
52 return self.repo_config['repo']
53
54 @cached_property
55 def sha(self):
56 return self.repo_config['sha']
57
58 @cached_property
59 def languages(self):
60 return set(
61 (hook['language'], hook['language_version'])
62 for _, hook in self.hooks
63 )
64
65 @cached_property
66 def additional_dependencies(self):
67 dep_dict = defaultdict(lambda: defaultdict(set))
68 for _, hook in self.hooks:
69 dep_dict[hook['language']][hook['language_version']].update(
70 hook.get('additional_dependencies', []),
71 )
72 return dep_dict
73
74 @cached_property
75 def hooks(self):
76 for hook in self.repo_config['hooks']:
77 if hook['id'] not in self.manifest.hooks:
78 logger.error(
79 '`{0}` is not present in repository {1}. '
80 'Typo? Perhaps it is introduced in a newer version? '
81 'Often `pre-commit autoupdate` fixes this.'.format(
82 hook['id'], self.repo_config['repo'],
83 )
84 )
85 exit(1)
86 hook_version = pkg_resources.parse_version(
87 self.manifest.hooks[hook['id']]['minimum_pre_commit_version'],
88 )
89 if hook_version > _pre_commit_version:
90 logger.error(
91 'The hook `{0}` requires pre-commit version {1} but '
92 'version {2} is installed. '
93 'Perhaps run `pip install --upgrade pre-commit`.'.format(
94 hook['id'], hook_version, _pre_commit_version,
95 )
96 )
97 exit(1)
98 return tuple(
99 (hook['id'], dict(self.manifest.hooks[hook['id']], **hook))
100 for hook in self.repo_config['hooks']
101 )
102
103 @cached_property
104 def manifest(self):
105 return Manifest(self.repo_path_getter)
106
107 @cached_property
108 def cmd_runner(self):
109 return PrefixedCommandRunner(self.repo_path_getter.repo_path)
110
111 def require_installed(self):
112 if self.__installed:
113 return
114
115 self.install()
116 self.__installed = True
117
118 def install(self):
119 """Install the hook repository."""
120 def state(language_name, language_version):
121 return {
122 'additional_dependencies': sorted(
123 self.additional_dependencies[
124 language_name
125 ][language_version],
126 )
127 }
128
129 def state_filename(venv, suffix=''):
130 return self.cmd_runner.path(
131 venv, '.install_state_v' + INSTALLED_STATE_VERSION + suffix,
132 )
133
134 def read_state(venv):
135 if not os.path.exists(state_filename(venv)):
136 return None
137 else:
138 return json.loads(io.open(state_filename(venv)).read())
139
140 def write_state(venv, language_name, language_version):
141 with io.open(
142 state_filename(venv, suffix='staging'), 'w',
143 ) as state_file:
144 state_file.write(five.to_text(json.dumps(
145 state(language_name, language_version),
146 )))
147 # Move the file into place atomically to indicate we've installed
148 os.rename(
149 state_filename(venv, suffix='staging'),
150 state_filename(venv),
151 )
152
153 def language_is_installed(language_name, language_version):
154 language = languages[language_name]
155 venv = environment_dir(language.ENVIRONMENT_DIR, language_version)
156 return (
157 venv is None or
158 read_state(venv) == state(language_name, language_version)
159 )
160
161 if not all(
162 language_is_installed(language_name, language_version)
163 for language_name, language_version in self.languages
164 ):
165 logger.info(
166 'Installing environment for {0}.'.format(self.repo_url)
167 )
168 logger.info('Once installed this environment will be reused.')
169 logger.info('This may take a few minutes...')
170
171 for language_name, language_version in self.languages:
172 if language_is_installed(language_name, language_version):
173 continue
174
175 language = languages[language_name]
176 venv = environment_dir(language.ENVIRONMENT_DIR, language_version)
177
178 # There's potentially incomplete cleanup from previous runs
179 # Clean it up!
180 if self.cmd_runner.exists(venv):
181 shutil.rmtree(self.cmd_runner.path(venv))
182
183 language.install_environment(
184 self.cmd_runner, language_version,
185 self.additional_dependencies[language_name][language_version],
186 )
187 # Write our state to indicate we're installed
188 write_state(venv, language_name, language_version)
189
190 def run_hook(self, hook, file_args):
191 """Run a hook.
192
193 Args:
194 hook - Hook dictionary
195 file_args - List of files to run
196 """
197 self.require_installed()
198 return languages[hook['language']].run_hook(
199 self.cmd_runner, hook, file_args,
200 )
201
202
203 class LocalRepository(Repository):
204 def __init__(self, repo_config):
205 super(LocalRepository, self).__init__(repo_config, None)
206
207 @cached_property
208 def hooks(self):
209 return tuple(
210 (hook['id'], apply_defaults(hook, MANIFEST_JSON_SCHEMA['items']))
211 for hook in self.repo_config['hooks']
212 )
213
214 @cached_property
215 def cmd_runner(self):
216 return PrefixedCommandRunner(git.get_root())
217
218 @cached_property
219 def sha(self):
220 raise NotImplementedError
221
222 @cached_property
223 def manifest(self):
224 raise NotImplementedError
225
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pre_commit/repository.py b/pre_commit/repository.py
--- a/pre_commit/repository.py
+++ b/pre_commit/repository.py
@@ -78,7 +78,9 @@
logger.error(
'`{0}` is not present in repository {1}. '
'Typo? Perhaps it is introduced in a newer version? '
- 'Often `pre-commit autoupdate` fixes this.'.format(
+ 'Often you can fix this by removing the hook, running '
+ '`pre-commit autoupdate`, '
+ 'and then adding the hook.'.format(
hook['id'], self.repo_config['repo'],
)
)
|
{"golden_diff": "diff --git a/pre_commit/repository.py b/pre_commit/repository.py\n--- a/pre_commit/repository.py\n+++ b/pre_commit/repository.py\n@@ -78,7 +78,9 @@\n logger.error(\n '`{0}` is not present in repository {1}. '\n 'Typo? Perhaps it is introduced in a newer version? '\n- 'Often `pre-commit autoupdate` fixes this.'.format(\n+ 'Often you can fix this by removing the hook, running '\n+ '`pre-commit autoupdate`, '\n+ 'and then adding the hook.'.format(\n hook['id'], self.repo_config['repo'],\n )\n )\n", "issue": "do not recommend `pre-commit autoupdate` on failure of `pre-commit autoupdate` \nIt would be preferable to recommend something that has chance of fixing the problem\n\ninstruction to reproduce in #385\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport io\nimport json\nimport logging\nimport os\nimport shutil\nfrom collections import defaultdict\n\nimport pkg_resources\nfrom cached_property import cached_property\n\nfrom pre_commit import five\nfrom pre_commit import git\nfrom pre_commit.clientlib.validate_config import is_local_hooks\nfrom pre_commit.clientlib.validate_manifest import MANIFEST_JSON_SCHEMA\nfrom pre_commit.jsonschema_extensions import apply_defaults\nfrom pre_commit.languages.all import languages\nfrom pre_commit.languages.helpers import environment_dir\nfrom pre_commit.manifest import Manifest\nfrom pre_commit.prefixed_command_runner import PrefixedCommandRunner\n\n\nlogger = logging.getLogger('pre_commit')\n\n_pre_commit_version = pkg_resources.parse_version(\n pkg_resources.get_distribution('pre-commit').version\n)\n\n# Bump when installation changes in a backwards / forwards incompatible way\nINSTALLED_STATE_VERSION = '1'\n\n\nclass Repository(object):\n def __init__(self, repo_config, repo_path_getter):\n self.repo_config = repo_config\n self.repo_path_getter = repo_path_getter\n self.__installed = False\n\n @classmethod\n def create(cls, config, store):\n if is_local_hooks(config):\n return LocalRepository(config)\n else:\n repo_path_getter = store.get_repo_path_getter(\n config['repo'], config['sha']\n )\n return cls(config, repo_path_getter)\n\n @cached_property\n def repo_url(self):\n return self.repo_config['repo']\n\n @cached_property\n def sha(self):\n return self.repo_config['sha']\n\n @cached_property\n def languages(self):\n return set(\n (hook['language'], hook['language_version'])\n for _, hook in self.hooks\n )\n\n @cached_property\n def additional_dependencies(self):\n dep_dict = defaultdict(lambda: defaultdict(set))\n for _, hook in self.hooks:\n dep_dict[hook['language']][hook['language_version']].update(\n hook.get('additional_dependencies', []),\n )\n return dep_dict\n\n @cached_property\n def hooks(self):\n for hook in self.repo_config['hooks']:\n if hook['id'] not in self.manifest.hooks:\n logger.error(\n '`{0}` is not present in repository {1}. '\n 'Typo? Perhaps it is introduced in a newer version? '\n 'Often `pre-commit autoupdate` fixes this.'.format(\n hook['id'], self.repo_config['repo'],\n )\n )\n exit(1)\n hook_version = pkg_resources.parse_version(\n self.manifest.hooks[hook['id']]['minimum_pre_commit_version'],\n )\n if hook_version > _pre_commit_version:\n logger.error(\n 'The hook `{0}` requires pre-commit version {1} but '\n 'version {2} is installed. 
'\n 'Perhaps run `pip install --upgrade pre-commit`.'.format(\n hook['id'], hook_version, _pre_commit_version,\n )\n )\n exit(1)\n return tuple(\n (hook['id'], dict(self.manifest.hooks[hook['id']], **hook))\n for hook in self.repo_config['hooks']\n )\n\n @cached_property\n def manifest(self):\n return Manifest(self.repo_path_getter)\n\n @cached_property\n def cmd_runner(self):\n return PrefixedCommandRunner(self.repo_path_getter.repo_path)\n\n def require_installed(self):\n if self.__installed:\n return\n\n self.install()\n self.__installed = True\n\n def install(self):\n \"\"\"Install the hook repository.\"\"\"\n def state(language_name, language_version):\n return {\n 'additional_dependencies': sorted(\n self.additional_dependencies[\n language_name\n ][language_version],\n )\n }\n\n def state_filename(venv, suffix=''):\n return self.cmd_runner.path(\n venv, '.install_state_v' + INSTALLED_STATE_VERSION + suffix,\n )\n\n def read_state(venv):\n if not os.path.exists(state_filename(venv)):\n return None\n else:\n return json.loads(io.open(state_filename(venv)).read())\n\n def write_state(venv, language_name, language_version):\n with io.open(\n state_filename(venv, suffix='staging'), 'w',\n ) as state_file:\n state_file.write(five.to_text(json.dumps(\n state(language_name, language_version),\n )))\n # Move the file into place atomically to indicate we've installed\n os.rename(\n state_filename(venv, suffix='staging'),\n state_filename(venv),\n )\n\n def language_is_installed(language_name, language_version):\n language = languages[language_name]\n venv = environment_dir(language.ENVIRONMENT_DIR, language_version)\n return (\n venv is None or\n read_state(venv) == state(language_name, language_version)\n )\n\n if not all(\n language_is_installed(language_name, language_version)\n for language_name, language_version in self.languages\n ):\n logger.info(\n 'Installing environment for {0}.'.format(self.repo_url)\n )\n logger.info('Once installed this environment will be reused.')\n logger.info('This may take a few minutes...')\n\n for language_name, language_version in self.languages:\n if language_is_installed(language_name, language_version):\n continue\n\n language = languages[language_name]\n venv = environment_dir(language.ENVIRONMENT_DIR, language_version)\n\n # There's potentially incomplete cleanup from previous runs\n # Clean it up!\n if self.cmd_runner.exists(venv):\n shutil.rmtree(self.cmd_runner.path(venv))\n\n language.install_environment(\n self.cmd_runner, language_version,\n self.additional_dependencies[language_name][language_version],\n )\n # Write our state to indicate we're installed\n write_state(venv, language_name, language_version)\n\n def run_hook(self, hook, file_args):\n \"\"\"Run a hook.\n\n Args:\n hook - Hook dictionary\n file_args - List of files to run\n \"\"\"\n self.require_installed()\n return languages[hook['language']].run_hook(\n self.cmd_runner, hook, file_args,\n )\n\n\nclass LocalRepository(Repository):\n def __init__(self, repo_config):\n super(LocalRepository, self).__init__(repo_config, None)\n\n @cached_property\n def hooks(self):\n return tuple(\n (hook['id'], apply_defaults(hook, MANIFEST_JSON_SCHEMA['items']))\n for hook in self.repo_config['hooks']\n )\n\n @cached_property\n def cmd_runner(self):\n return PrefixedCommandRunner(git.get_root())\n\n @cached_property\n def sha(self):\n raise NotImplementedError\n\n @cached_property\n def manifest(self):\n raise NotImplementedError\n", "path": "pre_commit/repository.py"}], "after_files": [{"content": 
"from __future__ import unicode_literals\n\nimport io\nimport json\nimport logging\nimport os\nimport shutil\nfrom collections import defaultdict\n\nimport pkg_resources\nfrom cached_property import cached_property\n\nfrom pre_commit import five\nfrom pre_commit import git\nfrom pre_commit.clientlib.validate_config import is_local_hooks\nfrom pre_commit.clientlib.validate_manifest import MANIFEST_JSON_SCHEMA\nfrom pre_commit.jsonschema_extensions import apply_defaults\nfrom pre_commit.languages.all import languages\nfrom pre_commit.languages.helpers import environment_dir\nfrom pre_commit.manifest import Manifest\nfrom pre_commit.prefixed_command_runner import PrefixedCommandRunner\n\n\nlogger = logging.getLogger('pre_commit')\n\n_pre_commit_version = pkg_resources.parse_version(\n pkg_resources.get_distribution('pre-commit').version\n)\n\n# Bump when installation changes in a backwards / forwards incompatible way\nINSTALLED_STATE_VERSION = '1'\n\n\nclass Repository(object):\n def __init__(self, repo_config, repo_path_getter):\n self.repo_config = repo_config\n self.repo_path_getter = repo_path_getter\n self.__installed = False\n\n @classmethod\n def create(cls, config, store):\n if is_local_hooks(config):\n return LocalRepository(config)\n else:\n repo_path_getter = store.get_repo_path_getter(\n config['repo'], config['sha']\n )\n return cls(config, repo_path_getter)\n\n @cached_property\n def repo_url(self):\n return self.repo_config['repo']\n\n @cached_property\n def sha(self):\n return self.repo_config['sha']\n\n @cached_property\n def languages(self):\n return set(\n (hook['language'], hook['language_version'])\n for _, hook in self.hooks\n )\n\n @cached_property\n def additional_dependencies(self):\n dep_dict = defaultdict(lambda: defaultdict(set))\n for _, hook in self.hooks:\n dep_dict[hook['language']][hook['language_version']].update(\n hook.get('additional_dependencies', []),\n )\n return dep_dict\n\n @cached_property\n def hooks(self):\n for hook in self.repo_config['hooks']:\n if hook['id'] not in self.manifest.hooks:\n logger.error(\n '`{0}` is not present in repository {1}. '\n 'Typo? Perhaps it is introduced in a newer version? '\n 'Often you can fix this by removing the hook, running '\n '`pre-commit autoupdate`, '\n 'and then adding the hook.'.format(\n hook['id'], self.repo_config['repo'],\n )\n )\n exit(1)\n hook_version = pkg_resources.parse_version(\n self.manifest.hooks[hook['id']]['minimum_pre_commit_version'],\n )\n if hook_version > _pre_commit_version:\n logger.error(\n 'The hook `{0}` requires pre-commit version {1} but '\n 'version {2} is installed. 
'\n 'Perhaps run `pip install --upgrade pre-commit`.'.format(\n hook['id'], hook_version, _pre_commit_version,\n )\n )\n exit(1)\n return tuple(\n (hook['id'], dict(self.manifest.hooks[hook['id']], **hook))\n for hook in self.repo_config['hooks']\n )\n\n @cached_property\n def manifest(self):\n return Manifest(self.repo_path_getter)\n\n @cached_property\n def cmd_runner(self):\n return PrefixedCommandRunner(self.repo_path_getter.repo_path)\n\n def require_installed(self):\n if self.__installed:\n return\n\n self.install()\n self.__installed = True\n\n def install(self):\n \"\"\"Install the hook repository.\"\"\"\n def state(language_name, language_version):\n return {\n 'additional_dependencies': sorted(\n self.additional_dependencies[\n language_name\n ][language_version],\n )\n }\n\n def state_filename(venv, suffix=''):\n return self.cmd_runner.path(\n venv, '.install_state_v' + INSTALLED_STATE_VERSION + suffix,\n )\n\n def read_state(venv):\n if not os.path.exists(state_filename(venv)):\n return None\n else:\n return json.loads(io.open(state_filename(venv)).read())\n\n def write_state(venv, language_name, language_version):\n with io.open(\n state_filename(venv, suffix='staging'), 'w',\n ) as state_file:\n state_file.write(five.to_text(json.dumps(\n state(language_name, language_version),\n )))\n # Move the file into place atomically to indicate we've installed\n os.rename(\n state_filename(venv, suffix='staging'),\n state_filename(venv),\n )\n\n def language_is_installed(language_name, language_version):\n language = languages[language_name]\n venv = environment_dir(language.ENVIRONMENT_DIR, language_version)\n return (\n venv is None or\n read_state(venv) == state(language_name, language_version)\n )\n\n if not all(\n language_is_installed(language_name, language_version)\n for language_name, language_version in self.languages\n ):\n logger.info(\n 'Installing environment for {0}.'.format(self.repo_url)\n )\n logger.info('Once installed this environment will be reused.')\n logger.info('This may take a few minutes...')\n\n for language_name, language_version in self.languages:\n if language_is_installed(language_name, language_version):\n continue\n\n language = languages[language_name]\n venv = environment_dir(language.ENVIRONMENT_DIR, language_version)\n\n # There's potentially incomplete cleanup from previous runs\n # Clean it up!\n if self.cmd_runner.exists(venv):\n shutil.rmtree(self.cmd_runner.path(venv))\n\n language.install_environment(\n self.cmd_runner, language_version,\n self.additional_dependencies[language_name][language_version],\n )\n # Write our state to indicate we're installed\n write_state(venv, language_name, language_version)\n\n def run_hook(self, hook, file_args):\n \"\"\"Run a hook.\n\n Args:\n hook - Hook dictionary\n file_args - List of files to run\n \"\"\"\n self.require_installed()\n return languages[hook['language']].run_hook(\n self.cmd_runner, hook, file_args,\n )\n\n\nclass LocalRepository(Repository):\n def __init__(self, repo_config):\n super(LocalRepository, self).__init__(repo_config, None)\n\n @cached_property\n def hooks(self):\n return tuple(\n (hook['id'], apply_defaults(hook, MANIFEST_JSON_SCHEMA['items']))\n for hook in self.repo_config['hooks']\n )\n\n @cached_property\n def cmd_runner(self):\n return PrefixedCommandRunner(git.get_root())\n\n @cached_property\n def sha(self):\n raise NotImplementedError\n\n @cached_property\n def manifest(self):\n raise NotImplementedError\n", "path": "pre_commit/repository.py"}]}
| 2,327 | 142 |
gh_patches_debug_36558
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-23023
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Create comment when linking to existing Jira issue
## Summary
It would be nice if, when linking to an existing Jira issue, Sentry created a comment with a link back to the Sentry issue.
## Motivation
- Sentry already creates a backlink when creating a _new_ Jira issue
- Being able to navigate from a Jira issue to the original Sentry issue is just as important and useful as navigating from Sentry to Jira.
## Additional Context
When creating a new Jira issue, there is a backlink:



When linking to an existing Jira issue, there is no comment and no backlink.
We are using on-premises Jira Server.
--- END ISSUE ---
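A minimal sketch of the requested behaviour is given below. The `create_comment` call and the `Linked Sentry Issue: [...|...]` wiki-markup format come from the patch later in this entry; the helper function name and its arguments are hypothetical and used only for illustration.

```python
def add_backlink_comment(jira_client, issue_key, group_short_id, group_url):
    # Post the same kind of backlink comment that Sentry already leaves when it
    # creates a new Jira issue, so that a linked existing issue also points back
    # to the originating Sentry issue.
    comment = "Linked Sentry Issue: [{}|{}]".format(group_short_id, group_url)
    jira_client.create_comment(issue_key, comment)


# Example usage with made-up values:
# add_backlink_comment(client, "PROJ-42", "ACME-1A",
#                      "https://sentry.example.com/organizations/acme/issues/123/")
```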
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/integrations/jira_server/integration.py`
Content:
```
1 import logging
2
3 from cryptography.hazmat.primitives.serialization import load_pem_private_key
4 from cryptography.hazmat.backends import default_backend
5 from django import forms
6 from django.core.urlresolvers import reverse
7 from django.core.validators import URLValidator
8 from django.utils.translation import ugettext as _
9 from django.views.decorators.csrf import csrf_exempt
10 from urllib.parse import urlparse
11
12 from sentry.integrations import (
13 IntegrationFeatures,
14 IntegrationProvider,
15 IntegrationMetadata,
16 FeatureDescription,
17 )
18 from sentry.shared_integrations.exceptions import IntegrationError, ApiError
19 from sentry.integrations.jira import JiraIntegration
20 from sentry.pipeline import PipelineView
21 from sentry.utils.hashlib import sha1_text
22 from sentry.utils.decorators import classproperty
23 from sentry.web.helpers import render_to_response
24 from .client import JiraServer, JiraServerSetupClient, JiraServerClient
25
26
27 logger = logging.getLogger("sentry.integrations.jira_server")
28
29 DESCRIPTION = """
30 Connect your Sentry organization into one or more of your Jira Server instances.
31 Get started streamlining your bug squashing workflow by unifying your Sentry and
32 Jira instances together.
33 """
34
35 FEATURE_DESCRIPTIONS = [
36 FeatureDescription(
37 """
38 Create and link Sentry issue groups directly to a Jira ticket in any of your
39 projects, providing a quick way to jump from Sentry bug to tracked ticket!
40 """,
41 IntegrationFeatures.ISSUE_BASIC,
42 ),
43 FeatureDescription(
44 """
45 Automatically synchronize assignees to and from Jira. Don't get confused
46 who's fixing what, let us handle ensuring your issues and tickets match up
47 to your Sentry and Jira assignees.
48 """,
49 IntegrationFeatures.ISSUE_SYNC,
50 ),
51 FeatureDescription(
52 """
53 Synchronize Comments on Sentry Issues directly to the linked Jira ticket.
54 """,
55 IntegrationFeatures.ISSUE_SYNC,
56 ),
57 ]
58
59 setup_alert = {
60 "type": "warning",
61 "icon": "icon-warning-sm",
62 "text": "Your Jira instance must be able to communicate with Sentry."
63 " Sentry makes outbound requests from a [static set of IP"
64 " addresses](https://docs.sentry.io/ip-ranges/) that you may wish"
65 " to allow in your firewall to support this integration.",
66 }
67
68
69 metadata = IntegrationMetadata(
70 description=_(DESCRIPTION.strip()),
71 features=FEATURE_DESCRIPTIONS,
72 author="The Sentry Team",
73 noun=_("Installation"),
74 issue_url="https://github.com/getsentry/sentry/issues/new?assignees=&labels=Component:%20Integrations&template=bug_report.md&title=Jira%20Server%20Integration%20Problem",
75 source_url="https://github.com/getsentry/sentry/tree/master/src/sentry/integrations/jira_server",
76 aspects={"alerts": [setup_alert]},
77 )
78
79
80 class InstallationForm(forms.Form):
81 url = forms.CharField(
82 label=_("Jira URL"),
83 help_text=_("The base URL for your Jira Server instance, including the host and protocol."),
84 widget=forms.TextInput(attrs={"placeholder": "https://jira.example.com"}),
85 validators=[URLValidator()],
86 )
87 verify_ssl = forms.BooleanField(
88 label=_("Verify SSL"),
89 help_text=_(
90 "By default, we verify SSL certificates " "when making requests to your Jira instance."
91 ),
92 widget=forms.CheckboxInput(),
93 required=False,
94 initial=True,
95 )
96 consumer_key = forms.CharField(
97 label=_("Jira Consumer Key"),
98 widget=forms.TextInput(attrs={"placeholder": "sentry-consumer-key"}),
99 )
100 private_key = forms.CharField(
101 label=_("Jira Consumer Private Key"),
102 widget=forms.Textarea(
103 attrs={
104 "placeholder": "-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----"
105 }
106 ),
107 )
108
109 def clean_url(self):
110 """Strip off trailing / as they cause invalid URLs downstream"""
111 return self.cleaned_data["url"].rstrip("/")
112
113 def clean_private_key(self):
114 data = self.cleaned_data["private_key"]
115
116 try:
117 load_pem_private_key(data.encode("utf-8"), None, default_backend())
118 except Exception:
119 raise forms.ValidationError(
120 "Private key must be a valid SSH private key encoded in a PEM format."
121 )
122 return data
123
124 def clean_consumer_key(self):
125 data = self.cleaned_data["consumer_key"]
126 if len(data) > 200:
127 raise forms.ValidationError("Consumer key is limited to 200 characters.")
128 return data
129
130
131 class InstallationConfigView(PipelineView):
132 """
133 Collect the OAuth client credentials from the user.
134 """
135
136 def dispatch(self, request, pipeline):
137 if request.method == "POST":
138 form = InstallationForm(request.POST)
139 if form.is_valid():
140 form_data = form.cleaned_data
141
142 pipeline.bind_state("installation_data", form_data)
143 return pipeline.next_step()
144 else:
145 form = InstallationForm()
146
147 return render_to_response(
148 template="sentry/integrations/jira-server-config.html",
149 context={"form": form},
150 request=request,
151 )
152
153
154 class OAuthLoginView(PipelineView):
155 """
156 Start the OAuth dance by creating a request token
157 and redirecting the user to approve it.
158 """
159
160 @csrf_exempt
161 def dispatch(self, request, pipeline):
162 if "oauth_token" in request.GET:
163 return pipeline.next_step()
164
165 config = pipeline.fetch_state("installation_data")
166 client = JiraServerSetupClient(
167 config.get("url"),
168 config.get("consumer_key"),
169 config.get("private_key"),
170 config.get("verify_ssl"),
171 )
172 try:
173 request_token = client.get_request_token()
174 pipeline.bind_state("request_token", request_token)
175 authorize_url = client.get_authorize_url(request_token)
176
177 return self.redirect(authorize_url)
178 except ApiError as error:
179 logger.info(
180 "identity.jira-server.request-token",
181 extra={"url": config.get("url"), "error": error},
182 )
183 return pipeline.error("Could not fetch a request token from Jira. %s" % error)
184
185
186 class OAuthCallbackView(PipelineView):
187 """
188 Complete the OAuth dance by exchanging our request token
189 into an access token.
190 """
191
192 @csrf_exempt
193 def dispatch(self, request, pipeline):
194 config = pipeline.fetch_state("installation_data")
195 client = JiraServerSetupClient(
196 config.get("url"),
197 config.get("consumer_key"),
198 config.get("private_key"),
199 config.get("verify_ssl"),
200 )
201
202 try:
203 access_token = client.get_access_token(
204 pipeline.fetch_state("request_token"), request.GET["oauth_token"]
205 )
206 pipeline.bind_state("access_token", access_token)
207
208 return pipeline.next_step()
209 except ApiError as error:
210 logger.info("identity.jira-server.access-token", extra={"error": error})
211 return pipeline.error("Could not fetch an access token from Jira")
212
213
214 class JiraServerIntegration(JiraIntegration):
215 """
216 IntegrationInstallation implementation for Jira-Server
217 """
218
219 default_identity = None
220
221 @classproperty
222 def use_email_scope(cls):
223 # jira server doesn't need the email scope since it's not restricted by GDPR
224 return False
225
226 def get_client(self):
227 if self.default_identity is None:
228 self.default_identity = self.get_default_identity()
229
230 return JiraServerClient(
231 self.model.metadata["base_url"],
232 JiraServer(self.default_identity.data),
233 self.model.metadata["verify_ssl"],
234 )
235
236 def get_link_issue_config(self, group, **kwargs):
237 fields = super(JiraIntegration, self).get_link_issue_config(group, **kwargs)
238 org = group.organization
239 autocomplete_url = reverse(
240 "sentry-extensions-jiraserver-search", args=[org.slug, self.model.id]
241 )
242 for field in fields:
243 if field["name"] == "externalIssue":
244 field["url"] = autocomplete_url
245 field["type"] = "select"
246 return fields
247
248 def search_url(self, org_slug):
249 return reverse("sentry-extensions-jiraserver-search", args=[org_slug, self.model.id])
250
251
252 class JiraServerIntegrationProvider(IntegrationProvider):
253 key = "jira_server"
254 name = "Jira Server"
255 metadata = metadata
256 integration_cls = JiraServerIntegration
257
258 needs_default_identity = True
259
260 can_add = True
261
262 features = frozenset([IntegrationFeatures.ISSUE_BASIC, IntegrationFeatures.ISSUE_SYNC])
263
264 setup_dialog_config = {"width": 1030, "height": 1000}
265
266 def get_pipeline_views(self):
267 return [InstallationConfigView(), OAuthLoginView(), OAuthCallbackView()]
268
269 def build_integration(self, state):
270 install = state["installation_data"]
271 access_token = state["access_token"]
272
273 webhook_secret = sha1_text(install["private_key"]).hexdigest()
274
275 hostname = urlparse(install["url"]).netloc
276 external_id = "{}:{}".format(hostname, install["consumer_key"])[:64]
277
278 credentials = {
279 "consumer_key": install["consumer_key"],
280 "private_key": install["private_key"],
281 "access_token": access_token["oauth_token"],
282 "access_token_secret": access_token["oauth_token_secret"],
283 }
284 # Create the webhook before the integration record exists
285 # so that if it fails we don't persist a broken integration.
286 self.create_webhook(external_id, webhook_secret, install, credentials)
287
288 return {
289 "name": install["consumer_key"],
290 "provider": "jira_server",
291 "external_id": external_id,
292 "metadata": {
293 "base_url": install["url"],
294 "domain_name": hostname,
295 "verify_ssl": install["verify_ssl"],
296 "webhook_secret": webhook_secret,
297 },
298 "user_identity": {
299 "type": "jira_server",
300 "external_id": external_id,
301 "scopes": [],
302 "data": credentials,
303 },
304 }
305
306 def create_webhook(self, external_id, webhook_secret, install, credentials):
307 client = JiraServerSetupClient(
308 install["url"], install["consumer_key"], install["private_key"], install["verify_ssl"]
309 )
310 try:
311 client.create_issue_webhook(external_id, webhook_secret, credentials)
312 except ApiError as err:
313 logger.info(
314 "jira-server.webhook.failed",
315 extra={"error": str(err), "external_id": external_id},
316 )
317 try:
318 details = next(x for x in err.json["messages"][0].values())
319 except (KeyError, TypeError, StopIteration):
320 details = ""
321 message = f"Could not create issue webhook in Jira. {details}"
322 raise IntegrationError(message)
323
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/sentry/integrations/jira_server/integration.py b/src/sentry/integrations/jira_server/integration.py
--- a/src/sentry/integrations/jira_server/integration.py
+++ b/src/sentry/integrations/jira_server/integration.py
@@ -18,8 +18,9 @@
from sentry.shared_integrations.exceptions import IntegrationError, ApiError
from sentry.integrations.jira import JiraIntegration
from sentry.pipeline import PipelineView
-from sentry.utils.hashlib import sha1_text
from sentry.utils.decorators import classproperty
+from sentry.utils.hashlib import sha1_text
+from sentry.utils.http import absolute_uri
from sentry.web.helpers import render_to_response
from .client import JiraServer, JiraServerSetupClient, JiraServerClient
@@ -234,7 +235,8 @@
)
def get_link_issue_config(self, group, **kwargs):
- fields = super(JiraIntegration, self).get_link_issue_config(group, **kwargs)
+ fields = super().get_link_issue_config(group, **kwargs)
+
org = group.organization
autocomplete_url = reverse(
"sentry-extensions-jiraserver-search", args=[org.slug, self.model.id]
@@ -243,11 +245,35 @@
if field["name"] == "externalIssue":
field["url"] = autocomplete_url
field["type"] = "select"
+
+ default_comment = "Linked Sentry Issue: [{}|{}]".format(
+ group.qualified_short_id,
+ absolute_uri(group.get_absolute_url(params={"referrer": "jira_server"})),
+ )
+ fields.append(
+ {
+ "name": "comment",
+ "label": "Comment",
+ "default": default_comment,
+ "type": "textarea",
+ "autosize": True,
+ "maxRows": 10,
+ }
+ )
+
return fields
def search_url(self, org_slug):
return reverse("sentry-extensions-jiraserver-search", args=[org_slug, self.model.id])
+ def after_link_issue(self, external_issue, data=None, **kwargs):
+ super().after_link_issue(external_issue, **kwargs)
+
+ if data:
+ comment = data.get("comment")
+ if comment:
+ self.get_client().create_comment(external_issue.key, comment)
+
class JiraServerIntegrationProvider(IntegrationProvider):
key = "jira_server"
|
{"golden_diff": "diff --git a/src/sentry/integrations/jira_server/integration.py b/src/sentry/integrations/jira_server/integration.py\n--- a/src/sentry/integrations/jira_server/integration.py\n+++ b/src/sentry/integrations/jira_server/integration.py\n@@ -18,8 +18,9 @@\n from sentry.shared_integrations.exceptions import IntegrationError, ApiError\n from sentry.integrations.jira import JiraIntegration\n from sentry.pipeline import PipelineView\n-from sentry.utils.hashlib import sha1_text\n from sentry.utils.decorators import classproperty\n+from sentry.utils.hashlib import sha1_text\n+from sentry.utils.http import absolute_uri\n from sentry.web.helpers import render_to_response\n from .client import JiraServer, JiraServerSetupClient, JiraServerClient\n \n@@ -234,7 +235,8 @@\n )\n \n def get_link_issue_config(self, group, **kwargs):\n- fields = super(JiraIntegration, self).get_link_issue_config(group, **kwargs)\n+ fields = super().get_link_issue_config(group, **kwargs)\n+\n org = group.organization\n autocomplete_url = reverse(\n \"sentry-extensions-jiraserver-search\", args=[org.slug, self.model.id]\n@@ -243,11 +245,35 @@\n if field[\"name\"] == \"externalIssue\":\n field[\"url\"] = autocomplete_url\n field[\"type\"] = \"select\"\n+\n+ default_comment = \"Linked Sentry Issue: [{}|{}]\".format(\n+ group.qualified_short_id,\n+ absolute_uri(group.get_absolute_url(params={\"referrer\": \"jira_server\"})),\n+ )\n+ fields.append(\n+ {\n+ \"name\": \"comment\",\n+ \"label\": \"Comment\",\n+ \"default\": default_comment,\n+ \"type\": \"textarea\",\n+ \"autosize\": True,\n+ \"maxRows\": 10,\n+ }\n+ )\n+\n return fields\n \n def search_url(self, org_slug):\n return reverse(\"sentry-extensions-jiraserver-search\", args=[org_slug, self.model.id])\n \n+ def after_link_issue(self, external_issue, data=None, **kwargs):\n+ super().after_link_issue(external_issue, **kwargs)\n+\n+ if data:\n+ comment = data.get(\"comment\")\n+ if comment:\n+ self.get_client().create_comment(external_issue.key, comment)\n+\n \n class JiraServerIntegrationProvider(IntegrationProvider):\n key = \"jira_server\"\n", "issue": "Create comment when linking to existing Jira issue\n## Summary\r\n\r\nIt would be nice if when linking to an existing Jira issue Sentry created a comment with a link back to the Sentry issue.\r\n\r\n## Motivation\r\n\r\n- Sentry already creates a backlink when creating a _new_ Jira issue\r\n- Being able to navigate from Jira issue to the original Sentry issue is just as important and useful as navigating from Sentry to Jira.\r\n\r\n## Additional Context\r\n\r\nWhen creating a new Jira issue, there is a backlink:\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nWhen linking to an existing Jira issue, there is no comment and no backlink.\r\n\r\nWe are using on-premises Jira Server.\n", "before_files": [{"content": "import logging\n\nfrom cryptography.hazmat.primitives.serialization import load_pem_private_key\nfrom cryptography.hazmat.backends import default_backend\nfrom django import forms\nfrom django.core.urlresolvers import reverse\nfrom django.core.validators import URLValidator\nfrom django.utils.translation import ugettext as _\nfrom django.views.decorators.csrf import csrf_exempt\nfrom urllib.parse import urlparse\n\nfrom sentry.integrations import (\n IntegrationFeatures,\n IntegrationProvider,\n IntegrationMetadata,\n FeatureDescription,\n)\nfrom sentry.shared_integrations.exceptions import IntegrationError, ApiError\nfrom sentry.integrations.jira import JiraIntegration\nfrom sentry.pipeline import PipelineView\nfrom 
sentry.utils.hashlib import sha1_text\nfrom sentry.utils.decorators import classproperty\nfrom sentry.web.helpers import render_to_response\nfrom .client import JiraServer, JiraServerSetupClient, JiraServerClient\n\n\nlogger = logging.getLogger(\"sentry.integrations.jira_server\")\n\nDESCRIPTION = \"\"\"\nConnect your Sentry organization into one or more of your Jira Server instances.\nGet started streamlining your bug squashing workflow by unifying your Sentry and\nJira instances together.\n\"\"\"\n\nFEATURE_DESCRIPTIONS = [\n FeatureDescription(\n \"\"\"\n Create and link Sentry issue groups directly to a Jira ticket in any of your\n projects, providing a quick way to jump from Sentry bug to tracked ticket!\n \"\"\",\n IntegrationFeatures.ISSUE_BASIC,\n ),\n FeatureDescription(\n \"\"\"\n Automatically synchronize assignees to and from Jira. Don't get confused\n who's fixing what, let us handle ensuring your issues and tickets match up\n to your Sentry and Jira assignees.\n \"\"\",\n IntegrationFeatures.ISSUE_SYNC,\n ),\n FeatureDescription(\n \"\"\"\n Synchronize Comments on Sentry Issues directly to the linked Jira ticket.\n \"\"\",\n IntegrationFeatures.ISSUE_SYNC,\n ),\n]\n\nsetup_alert = {\n \"type\": \"warning\",\n \"icon\": \"icon-warning-sm\",\n \"text\": \"Your Jira instance must be able to communicate with Sentry.\"\n \" Sentry makes outbound requests from a [static set of IP\"\n \" addresses](https://docs.sentry.io/ip-ranges/) that you may wish\"\n \" to allow in your firewall to support this integration.\",\n}\n\n\nmetadata = IntegrationMetadata(\n description=_(DESCRIPTION.strip()),\n features=FEATURE_DESCRIPTIONS,\n author=\"The Sentry Team\",\n noun=_(\"Installation\"),\n issue_url=\"https://github.com/getsentry/sentry/issues/new?assignees=&labels=Component:%20Integrations&template=bug_report.md&title=Jira%20Server%20Integration%20Problem\",\n source_url=\"https://github.com/getsentry/sentry/tree/master/src/sentry/integrations/jira_server\",\n aspects={\"alerts\": [setup_alert]},\n)\n\n\nclass InstallationForm(forms.Form):\n url = forms.CharField(\n label=_(\"Jira URL\"),\n help_text=_(\"The base URL for your Jira Server instance, including the host and protocol.\"),\n widget=forms.TextInput(attrs={\"placeholder\": \"https://jira.example.com\"}),\n validators=[URLValidator()],\n )\n verify_ssl = forms.BooleanField(\n label=_(\"Verify SSL\"),\n help_text=_(\n \"By default, we verify SSL certificates \" \"when making requests to your Jira instance.\"\n ),\n widget=forms.CheckboxInput(),\n required=False,\n initial=True,\n )\n consumer_key = forms.CharField(\n label=_(\"Jira Consumer Key\"),\n widget=forms.TextInput(attrs={\"placeholder\": \"sentry-consumer-key\"}),\n )\n private_key = forms.CharField(\n label=_(\"Jira Consumer Private Key\"),\n widget=forms.Textarea(\n attrs={\n \"placeholder\": \"-----BEGIN RSA PRIVATE KEY-----\\n...\\n-----END RSA PRIVATE KEY-----\"\n }\n ),\n )\n\n def clean_url(self):\n \"\"\"Strip off trailing / as they cause invalid URLs downstream\"\"\"\n return self.cleaned_data[\"url\"].rstrip(\"/\")\n\n def clean_private_key(self):\n data = self.cleaned_data[\"private_key\"]\n\n try:\n load_pem_private_key(data.encode(\"utf-8\"), None, default_backend())\n except Exception:\n raise forms.ValidationError(\n \"Private key must be a valid SSH private key encoded in a PEM format.\"\n )\n return data\n\n def clean_consumer_key(self):\n data = self.cleaned_data[\"consumer_key\"]\n if len(data) > 200:\n raise forms.ValidationError(\"Consumer key is 
limited to 200 characters.\")\n return data\n\n\nclass InstallationConfigView(PipelineView):\n \"\"\"\n Collect the OAuth client credentials from the user.\n \"\"\"\n\n def dispatch(self, request, pipeline):\n if request.method == \"POST\":\n form = InstallationForm(request.POST)\n if form.is_valid():\n form_data = form.cleaned_data\n\n pipeline.bind_state(\"installation_data\", form_data)\n return pipeline.next_step()\n else:\n form = InstallationForm()\n\n return render_to_response(\n template=\"sentry/integrations/jira-server-config.html\",\n context={\"form\": form},\n request=request,\n )\n\n\nclass OAuthLoginView(PipelineView):\n \"\"\"\n Start the OAuth dance by creating a request token\n and redirecting the user to approve it.\n \"\"\"\n\n @csrf_exempt\n def dispatch(self, request, pipeline):\n if \"oauth_token\" in request.GET:\n return pipeline.next_step()\n\n config = pipeline.fetch_state(\"installation_data\")\n client = JiraServerSetupClient(\n config.get(\"url\"),\n config.get(\"consumer_key\"),\n config.get(\"private_key\"),\n config.get(\"verify_ssl\"),\n )\n try:\n request_token = client.get_request_token()\n pipeline.bind_state(\"request_token\", request_token)\n authorize_url = client.get_authorize_url(request_token)\n\n return self.redirect(authorize_url)\n except ApiError as error:\n logger.info(\n \"identity.jira-server.request-token\",\n extra={\"url\": config.get(\"url\"), \"error\": error},\n )\n return pipeline.error(\"Could not fetch a request token from Jira. %s\" % error)\n\n\nclass OAuthCallbackView(PipelineView):\n \"\"\"\n Complete the OAuth dance by exchanging our request token\n into an access token.\n \"\"\"\n\n @csrf_exempt\n def dispatch(self, request, pipeline):\n config = pipeline.fetch_state(\"installation_data\")\n client = JiraServerSetupClient(\n config.get(\"url\"),\n config.get(\"consumer_key\"),\n config.get(\"private_key\"),\n config.get(\"verify_ssl\"),\n )\n\n try:\n access_token = client.get_access_token(\n pipeline.fetch_state(\"request_token\"), request.GET[\"oauth_token\"]\n )\n pipeline.bind_state(\"access_token\", access_token)\n\n return pipeline.next_step()\n except ApiError as error:\n logger.info(\"identity.jira-server.access-token\", extra={\"error\": error})\n return pipeline.error(\"Could not fetch an access token from Jira\")\n\n\nclass JiraServerIntegration(JiraIntegration):\n \"\"\"\n IntegrationInstallation implementation for Jira-Server\n \"\"\"\n\n default_identity = None\n\n @classproperty\n def use_email_scope(cls):\n # jira server doesn't need the email scope since it's not restricted by GDPR\n return False\n\n def get_client(self):\n if self.default_identity is None:\n self.default_identity = self.get_default_identity()\n\n return JiraServerClient(\n self.model.metadata[\"base_url\"],\n JiraServer(self.default_identity.data),\n self.model.metadata[\"verify_ssl\"],\n )\n\n def get_link_issue_config(self, group, **kwargs):\n fields = super(JiraIntegration, self).get_link_issue_config(group, **kwargs)\n org = group.organization\n autocomplete_url = reverse(\n \"sentry-extensions-jiraserver-search\", args=[org.slug, self.model.id]\n )\n for field in fields:\n if field[\"name\"] == \"externalIssue\":\n field[\"url\"] = autocomplete_url\n field[\"type\"] = \"select\"\n return fields\n\n def search_url(self, org_slug):\n return reverse(\"sentry-extensions-jiraserver-search\", args=[org_slug, self.model.id])\n\n\nclass JiraServerIntegrationProvider(IntegrationProvider):\n key = \"jira_server\"\n name = \"Jira Server\"\n 
metadata = metadata\n integration_cls = JiraServerIntegration\n\n needs_default_identity = True\n\n can_add = True\n\n features = frozenset([IntegrationFeatures.ISSUE_BASIC, IntegrationFeatures.ISSUE_SYNC])\n\n setup_dialog_config = {\"width\": 1030, \"height\": 1000}\n\n def get_pipeline_views(self):\n return [InstallationConfigView(), OAuthLoginView(), OAuthCallbackView()]\n\n def build_integration(self, state):\n install = state[\"installation_data\"]\n access_token = state[\"access_token\"]\n\n webhook_secret = sha1_text(install[\"private_key\"]).hexdigest()\n\n hostname = urlparse(install[\"url\"]).netloc\n external_id = \"{}:{}\".format(hostname, install[\"consumer_key\"])[:64]\n\n credentials = {\n \"consumer_key\": install[\"consumer_key\"],\n \"private_key\": install[\"private_key\"],\n \"access_token\": access_token[\"oauth_token\"],\n \"access_token_secret\": access_token[\"oauth_token_secret\"],\n }\n # Create the webhook before the integration record exists\n # so that if it fails we don't persist a broken integration.\n self.create_webhook(external_id, webhook_secret, install, credentials)\n\n return {\n \"name\": install[\"consumer_key\"],\n \"provider\": \"jira_server\",\n \"external_id\": external_id,\n \"metadata\": {\n \"base_url\": install[\"url\"],\n \"domain_name\": hostname,\n \"verify_ssl\": install[\"verify_ssl\"],\n \"webhook_secret\": webhook_secret,\n },\n \"user_identity\": {\n \"type\": \"jira_server\",\n \"external_id\": external_id,\n \"scopes\": [],\n \"data\": credentials,\n },\n }\n\n def create_webhook(self, external_id, webhook_secret, install, credentials):\n client = JiraServerSetupClient(\n install[\"url\"], install[\"consumer_key\"], install[\"private_key\"], install[\"verify_ssl\"]\n )\n try:\n client.create_issue_webhook(external_id, webhook_secret, credentials)\n except ApiError as err:\n logger.info(\n \"jira-server.webhook.failed\",\n extra={\"error\": str(err), \"external_id\": external_id},\n )\n try:\n details = next(x for x in err.json[\"messages\"][0].values())\n except (KeyError, TypeError, StopIteration):\n details = \"\"\n message = f\"Could not create issue webhook in Jira. 
{details}\"\n raise IntegrationError(message)\n", "path": "src/sentry/integrations/jira_server/integration.py"}], "after_files": [{"content": "import logging\n\nfrom cryptography.hazmat.primitives.serialization import load_pem_private_key\nfrom cryptography.hazmat.backends import default_backend\nfrom django import forms\nfrom django.core.urlresolvers import reverse\nfrom django.core.validators import URLValidator\nfrom django.utils.translation import ugettext as _\nfrom django.views.decorators.csrf import csrf_exempt\nfrom urllib.parse import urlparse\n\nfrom sentry.integrations import (\n IntegrationFeatures,\n IntegrationProvider,\n IntegrationMetadata,\n FeatureDescription,\n)\nfrom sentry.shared_integrations.exceptions import IntegrationError, ApiError\nfrom sentry.integrations.jira import JiraIntegration\nfrom sentry.pipeline import PipelineView\nfrom sentry.utils.decorators import classproperty\nfrom sentry.utils.hashlib import sha1_text\nfrom sentry.utils.http import absolute_uri\nfrom sentry.web.helpers import render_to_response\nfrom .client import JiraServer, JiraServerSetupClient, JiraServerClient\n\n\nlogger = logging.getLogger(\"sentry.integrations.jira_server\")\n\nDESCRIPTION = \"\"\"\nConnect your Sentry organization into one or more of your Jira Server instances.\nGet started streamlining your bug squashing workflow by unifying your Sentry and\nJira instances together.\n\"\"\"\n\nFEATURE_DESCRIPTIONS = [\n FeatureDescription(\n \"\"\"\n Create and link Sentry issue groups directly to a Jira ticket in any of your\n projects, providing a quick way to jump from Sentry bug to tracked ticket!\n \"\"\",\n IntegrationFeatures.ISSUE_BASIC,\n ),\n FeatureDescription(\n \"\"\"\n Automatically synchronize assignees to and from Jira. Don't get confused\n who's fixing what, let us handle ensuring your issues and tickets match up\n to your Sentry and Jira assignees.\n \"\"\",\n IntegrationFeatures.ISSUE_SYNC,\n ),\n FeatureDescription(\n \"\"\"\n Synchronize Comments on Sentry Issues directly to the linked Jira ticket.\n \"\"\",\n IntegrationFeatures.ISSUE_SYNC,\n ),\n]\n\nsetup_alert = {\n \"type\": \"warning\",\n \"icon\": \"icon-warning-sm\",\n \"text\": \"Your Jira instance must be able to communicate with Sentry.\"\n \" Sentry makes outbound requests from a [static set of IP\"\n \" addresses](https://docs.sentry.io/ip-ranges/) that you may wish\"\n \" to allow in your firewall to support this integration.\",\n}\n\n\nmetadata = IntegrationMetadata(\n description=_(DESCRIPTION.strip()),\n features=FEATURE_DESCRIPTIONS,\n author=\"The Sentry Team\",\n noun=_(\"Installation\"),\n issue_url=\"https://github.com/getsentry/sentry/issues/new?assignees=&labels=Component:%20Integrations&template=bug_report.md&title=Jira%20Server%20Integration%20Problem\",\n source_url=\"https://github.com/getsentry/sentry/tree/master/src/sentry/integrations/jira_server\",\n aspects={\"alerts\": [setup_alert]},\n)\n\n\nclass InstallationForm(forms.Form):\n url = forms.CharField(\n label=_(\"Jira URL\"),\n help_text=_(\"The base URL for your Jira Server instance, including the host and protocol.\"),\n widget=forms.TextInput(attrs={\"placeholder\": \"https://jira.example.com\"}),\n validators=[URLValidator()],\n )\n verify_ssl = forms.BooleanField(\n label=_(\"Verify SSL\"),\n help_text=_(\n \"By default, we verify SSL certificates \" \"when making requests to your Jira instance.\"\n ),\n widget=forms.CheckboxInput(),\n required=False,\n initial=True,\n )\n consumer_key = forms.CharField(\n label=_(\"Jira 
Consumer Key\"),\n widget=forms.TextInput(attrs={\"placeholder\": \"sentry-consumer-key\"}),\n )\n private_key = forms.CharField(\n label=_(\"Jira Consumer Private Key\"),\n widget=forms.Textarea(\n attrs={\n \"placeholder\": \"-----BEGIN RSA PRIVATE KEY-----\\n...\\n-----END RSA PRIVATE KEY-----\"\n }\n ),\n )\n\n def clean_url(self):\n \"\"\"Strip off trailing / as they cause invalid URLs downstream\"\"\"\n return self.cleaned_data[\"url\"].rstrip(\"/\")\n\n def clean_private_key(self):\n data = self.cleaned_data[\"private_key\"]\n\n try:\n load_pem_private_key(data.encode(\"utf-8\"), None, default_backend())\n except Exception:\n raise forms.ValidationError(\n \"Private key must be a valid SSH private key encoded in a PEM format.\"\n )\n return data\n\n def clean_consumer_key(self):\n data = self.cleaned_data[\"consumer_key\"]\n if len(data) > 200:\n raise forms.ValidationError(\"Consumer key is limited to 200 characters.\")\n return data\n\n\nclass InstallationConfigView(PipelineView):\n \"\"\"\n Collect the OAuth client credentials from the user.\n \"\"\"\n\n def dispatch(self, request, pipeline):\n if request.method == \"POST\":\n form = InstallationForm(request.POST)\n if form.is_valid():\n form_data = form.cleaned_data\n\n pipeline.bind_state(\"installation_data\", form_data)\n return pipeline.next_step()\n else:\n form = InstallationForm()\n\n return render_to_response(\n template=\"sentry/integrations/jira-server-config.html\",\n context={\"form\": form},\n request=request,\n )\n\n\nclass OAuthLoginView(PipelineView):\n \"\"\"\n Start the OAuth dance by creating a request token\n and redirecting the user to approve it.\n \"\"\"\n\n @csrf_exempt\n def dispatch(self, request, pipeline):\n if \"oauth_token\" in request.GET:\n return pipeline.next_step()\n\n config = pipeline.fetch_state(\"installation_data\")\n client = JiraServerSetupClient(\n config.get(\"url\"),\n config.get(\"consumer_key\"),\n config.get(\"private_key\"),\n config.get(\"verify_ssl\"),\n )\n try:\n request_token = client.get_request_token()\n pipeline.bind_state(\"request_token\", request_token)\n authorize_url = client.get_authorize_url(request_token)\n\n return self.redirect(authorize_url)\n except ApiError as error:\n logger.info(\n \"identity.jira-server.request-token\",\n extra={\"url\": config.get(\"url\"), \"error\": error},\n )\n return pipeline.error(\"Could not fetch a request token from Jira. 
%s\" % error)\n\n\nclass OAuthCallbackView(PipelineView):\n \"\"\"\n Complete the OAuth dance by exchanging our request token\n into an access token.\n \"\"\"\n\n @csrf_exempt\n def dispatch(self, request, pipeline):\n config = pipeline.fetch_state(\"installation_data\")\n client = JiraServerSetupClient(\n config.get(\"url\"),\n config.get(\"consumer_key\"),\n config.get(\"private_key\"),\n config.get(\"verify_ssl\"),\n )\n\n try:\n access_token = client.get_access_token(\n pipeline.fetch_state(\"request_token\"), request.GET[\"oauth_token\"]\n )\n pipeline.bind_state(\"access_token\", access_token)\n\n return pipeline.next_step()\n except ApiError as error:\n logger.info(\"identity.jira-server.access-token\", extra={\"error\": error})\n return pipeline.error(\"Could not fetch an access token from Jira\")\n\n\nclass JiraServerIntegration(JiraIntegration):\n \"\"\"\n IntegrationInstallation implementation for Jira-Server\n \"\"\"\n\n default_identity = None\n\n @classproperty\n def use_email_scope(cls):\n # jira server doesn't need the email scope since it's not restricted by GDPR\n return False\n\n def get_client(self):\n if self.default_identity is None:\n self.default_identity = self.get_default_identity()\n\n return JiraServerClient(\n self.model.metadata[\"base_url\"],\n JiraServer(self.default_identity.data),\n self.model.metadata[\"verify_ssl\"],\n )\n\n def get_link_issue_config(self, group, **kwargs):\n fields = super().get_link_issue_config(group, **kwargs)\n\n org = group.organization\n autocomplete_url = reverse(\n \"sentry-extensions-jiraserver-search\", args=[org.slug, self.model.id]\n )\n for field in fields:\n if field[\"name\"] == \"externalIssue\":\n field[\"url\"] = autocomplete_url\n field[\"type\"] = \"select\"\n\n default_comment = \"Linked Sentry Issue: [{}|{}]\".format(\n group.qualified_short_id,\n absolute_uri(group.get_absolute_url(params={\"referrer\": \"jira_server\"})),\n )\n fields.append(\n {\n \"name\": \"comment\",\n \"label\": \"Comment\",\n \"default\": default_comment,\n \"type\": \"textarea\",\n \"autosize\": True,\n \"maxRows\": 10,\n }\n )\n\n return fields\n\n def search_url(self, org_slug):\n return reverse(\"sentry-extensions-jiraserver-search\", args=[org_slug, self.model.id])\n\n def after_link_issue(self, external_issue, data=None, **kwargs):\n super().after_link_issue(external_issue, **kwargs)\n\n if data:\n comment = data.get(\"comment\")\n if comment:\n self.get_client().create_comment(external_issue.key, comment)\n\n\nclass JiraServerIntegrationProvider(IntegrationProvider):\n key = \"jira_server\"\n name = \"Jira Server\"\n metadata = metadata\n integration_cls = JiraServerIntegration\n\n needs_default_identity = True\n\n can_add = True\n\n features = frozenset([IntegrationFeatures.ISSUE_BASIC, IntegrationFeatures.ISSUE_SYNC])\n\n setup_dialog_config = {\"width\": 1030, \"height\": 1000}\n\n def get_pipeline_views(self):\n return [InstallationConfigView(), OAuthLoginView(), OAuthCallbackView()]\n\n def build_integration(self, state):\n install = state[\"installation_data\"]\n access_token = state[\"access_token\"]\n\n webhook_secret = sha1_text(install[\"private_key\"]).hexdigest()\n\n hostname = urlparse(install[\"url\"]).netloc\n external_id = \"{}:{}\".format(hostname, install[\"consumer_key\"])[:64]\n\n credentials = {\n \"consumer_key\": install[\"consumer_key\"],\n \"private_key\": install[\"private_key\"],\n \"access_token\": access_token[\"oauth_token\"],\n \"access_token_secret\": access_token[\"oauth_token_secret\"],\n }\n # 
Create the webhook before the integration record exists\n # so that if it fails we don't persist a broken integration.\n self.create_webhook(external_id, webhook_secret, install, credentials)\n\n return {\n \"name\": install[\"consumer_key\"],\n \"provider\": \"jira_server\",\n \"external_id\": external_id,\n \"metadata\": {\n \"base_url\": install[\"url\"],\n \"domain_name\": hostname,\n \"verify_ssl\": install[\"verify_ssl\"],\n \"webhook_secret\": webhook_secret,\n },\n \"user_identity\": {\n \"type\": \"jira_server\",\n \"external_id\": external_id,\n \"scopes\": [],\n \"data\": credentials,\n },\n }\n\n def create_webhook(self, external_id, webhook_secret, install, credentials):\n client = JiraServerSetupClient(\n install[\"url\"], install[\"consumer_key\"], install[\"private_key\"], install[\"verify_ssl\"]\n )\n try:\n client.create_issue_webhook(external_id, webhook_secret, credentials)\n except ApiError as err:\n logger.info(\n \"jira-server.webhook.failed\",\n extra={\"error\": str(err), \"external_id\": external_id},\n )\n try:\n details = next(x for x in err.json[\"messages\"][0].values())\n except (KeyError, TypeError, StopIteration):\n details = \"\"\n message = f\"Could not create issue webhook in Jira. {details}\"\n raise IntegrationError(message)\n", "path": "src/sentry/integrations/jira_server/integration.py"}]}
| 3,749 | 556 |
gh_patches_debug_25290
|
rasdani/github-patches
|
git_diff
|
mkdocs__mkdocs-166
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exit gracefully when misconfigured
Currently if `mkdocs.yaml` does not exist or if `pages`/`project_name` are not set, we'll exit with an exception. Really we should print a simple error to stderr and exit with an appropriate error code.
--- END ISSUE ---
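Editorial note: a minimal sketch (not part of the original issue) of the graceful-exit pattern being asked for, i.e. writing a message to stderr and exiting with an errno-style code instead of letting an assertion raise; the function name and message text are placeholders:

```python
import errno
import os
import sys


def require_config(filename="mkdocs.yml"):
    """Exit with a message and an errno-style code instead of raising."""
    if not os.path.exists(filename):
        sys.stderr.write("Config file '%s' does not exist.\n" % filename)
        sys.exit(errno.ENOENT)
    return filename
```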
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mkdocs/config.py`
Content:
```
1 # coding: utf-8
2
3 from mkdocs import utils
4 from mkdocs.compat import urlparse
5 import os
6 import yaml
7
8 DEFAULT_CONFIG = {
9 'site_name': None,
10 'pages': None,
11
12 'site_url': None,
13 'site_description': None,
14 'site_author': None,
15 'site_favicon': None,
16
17 'theme': 'mkdocs',
18 'docs_dir': 'docs',
19 'site_dir': 'site',
20 'theme_dir': None,
21
22 'copyright': None,
23 'google-analytics': None,
24
25 # The address on which to serve the livereloading docs server.
26 'dev_addr': '127.0.0.1:8000',
27
28 # If `True`, use `<page_name>/index.hmtl` style files with hyperlinks to the directory.
29 # If `False`, use `<page_name>.html style file with hyperlinks to the file.
30 # True generates nicer URLs, but False is useful if browsing the output on a filesystem.
31 'use_directory_urls': True,
32
33 # Specify a link to the project source repo to be included
34 # in the documentation pages.
35 'repo_url': None,
36
37 # A name to use for the link to the project source repo.
38 # Default: If repo_url is unset then None, otherwise
39 # "GitHub" or "Bitbucket" for known url or Hostname for unknown urls.
40 'repo_name': None,
41
42 # Specify which css or javascript files from the docs
43 # directionary should be additionally included in the site.
44 # Default: List of all .css and .js files in the docs dir.
45 'extra_css': None,
46 'extra_javascript': None,
47
48 # Determine if the site should include the nav and next/prev elements.
49 # Default: True if the site has more than one page, False otherwise.
50 'include_nav': None,
51 'include_next_prev': None,
52
53 # PyMarkdown extension names.
54 'markdown_extensions': (),
55
56 # Determine if the site should generate a json search index and include
57 # search elements in the theme. - TODO
58 'include_search': False,
59
60 # Determine if the site should include a 404.html page.
61 # TODO: Implment this. Make this None, have it True if a 404.html
62 # template exists in the theme or docs dir.
63 'include_404': False,
64
65 # Determine if the site should include a sitemap.xml page.
66 # TODO: Implement this. Make this None, have it True if a sitemap.xml
67 # template exists in the theme or docs dir.
68 'include_sitemap': False,
69 }
70
71
72 def load_config(filename='mkdocs.yml', options=None):
73 options = options or {}
74 if 'config' in options:
75 filename = options['config']
76 assert os.path.exists(filename), "Config file '%s' does not exist." % filename
77 with open(filename, 'r') as fp:
78 user_config = yaml.load(fp)
79 user_config.update(options)
80 return validate_config(user_config)
81
82
83 def validate_config(user_config):
84 config = DEFAULT_CONFIG.copy()
85 config.update(user_config)
86
87 assert config['site_name'], "Config must contain 'site_name' setting."
88
89 # If not specified, then the 'pages' config simply includes all
90 # markdown files in the docs dir, without generating any header items
91 # for them.
92 pages = []
93 extra_css = []
94 extra_javascript = []
95 for (dirpath, dirnames, filenames) in os.walk(config['docs_dir']):
96 for filename in sorted(filenames):
97 fullpath = os.path.join(dirpath, filename)
98 relpath = os.path.relpath(fullpath, config['docs_dir'])
99
100 if utils.is_markdown_file(filename):
101 # index pages should always be the first listed page.
102 if os.path.splitext(relpath)[0] == 'index':
103 pages.insert(0, relpath)
104 else:
105 pages.append(relpath)
106 elif utils.is_css_file(filename):
107 extra_css.append(relpath)
108 elif utils.is_javascript_file(filename):
109 extra_javascript.append(relpath)
110
111 if config['pages'] is None:
112 config['pages'] = pages
113
114 if config['extra_css'] is None:
115 config['extra_css'] = extra_css
116
117 if config['extra_javascript'] is None:
118 config['extra_javascript'] = extra_javascript
119
120 package_dir = os.path.dirname(__file__)
121 theme_dir = [os.path.join(package_dir, 'themes', config['theme'])]
122
123 if config['theme_dir'] is not None:
124 theme_dir.insert(0, config['theme_dir'])
125
126 config['theme_dir'] = theme_dir
127
128 if config['repo_url'] is not None and config['repo_name'] is None:
129 repo_host = urlparse(config['repo_url']).netloc.lower()
130 if repo_host == 'github.com':
131 config['repo_name'] = 'GitHub'
132 elif repo_host == 'bitbucket.com':
133 config['repo_name'] = 'Bitbucket'
134 else:
135 config['repo_name'] = repo_host.split('.')[0].title()
136
137 if config['include_next_prev'] is None:
138 config['include_next_prev'] = len(config['pages']) > 1
139
140 if config['include_nav'] is None:
141 config['include_nav'] = len(config['pages']) > 1
142
143 # To Do:
144
145 # The docs dir must exist.
146 # The theme dir must exist.
147 # Ensure 'theme' is one of 'mkdocs', 'readthedocs', 'custom'
148 # A homepage 'index' must exist.
149 # The theme 'base.html' file must exist.
150 # Cannot set repo_name without setting repo_url.
151 # Cannot set 'include_next_prev: true' when only one page exists.
152 # Cannot set 'include_nav: true' when only one page exists.
153 # Error if any config keys provided that are not in the DEFAULT_CONFIG.
154
155 return config
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mkdocs/config.py b/mkdocs/config.py
--- a/mkdocs/config.py
+++ b/mkdocs/config.py
@@ -2,7 +2,9 @@
from mkdocs import utils
from mkdocs.compat import urlparse
+import errno
import os
+import sys
import yaml
DEFAULT_CONFIG = {
@@ -73,7 +75,9 @@
options = options or {}
if 'config' in options:
filename = options['config']
- assert os.path.exists(filename), "Config file '%s' does not exist." % filename
+ if not os.path.exists(filename):
+ sys.stderr.write("Config file '%s' does not exist." % filename)
+ sys.exit(errno.ENOENT)
with open(filename, 'r') as fp:
user_config = yaml.load(fp)
user_config.update(options)
@@ -84,7 +88,9 @@
config = DEFAULT_CONFIG.copy()
config.update(user_config)
- assert config['site_name'], "Config must contain 'site_name' setting."
+ if not config['site_name']:
+ sys.stderr.write("Config must contain 'site_name' setting.")
+ sys.exit(errno.EINVAL)
# If not specified, then the 'pages' config simply includes all
# markdown files in the docs dir, without generating any header items
|
{"golden_diff": "diff --git a/mkdocs/config.py b/mkdocs/config.py\n--- a/mkdocs/config.py\n+++ b/mkdocs/config.py\n@@ -2,7 +2,9 @@\n \n from mkdocs import utils\n from mkdocs.compat import urlparse\n+import errno\n import os\n+import sys\n import yaml\n \n DEFAULT_CONFIG = {\n@@ -73,7 +75,9 @@\n options = options or {}\n if 'config' in options:\n filename = options['config']\n- assert os.path.exists(filename), \"Config file '%s' does not exist.\" % filename\n+ if not os.path.exists(filename):\n+ sys.stderr.write(\"Config file '%s' does not exist.\" % filename)\n+ sys.exit(errno.ENOENT)\n with open(filename, 'r') as fp:\n user_config = yaml.load(fp)\n user_config.update(options)\n@@ -84,7 +88,9 @@\n config = DEFAULT_CONFIG.copy()\n config.update(user_config)\n \n- assert config['site_name'], \"Config must contain 'site_name' setting.\"\n+ if not config['site_name']:\n+ sys.stderr.write(\"Config must contain 'site_name' setting.\")\n+ sys.exit(errno.EINVAL)\n \n # If not specified, then the 'pages' config simply includes all\n # markdown files in the docs dir, without generating any header items\n", "issue": "Exit gracefully when misconfigured\nCurrently if `mkdocs.yaml` does not exist or if `pages`/`project_name` are not set, we'll exit with an exception. Really we should print a simple error to stderr and exit with an appropriate error code.\n\n", "before_files": [{"content": "# coding: utf-8\n\nfrom mkdocs import utils\nfrom mkdocs.compat import urlparse\nimport os\nimport yaml\n\nDEFAULT_CONFIG = {\n 'site_name': None,\n 'pages': None,\n\n 'site_url': None,\n 'site_description': None,\n 'site_author': None,\n 'site_favicon': None,\n\n 'theme': 'mkdocs',\n 'docs_dir': 'docs',\n 'site_dir': 'site',\n 'theme_dir': None,\n\n 'copyright': None,\n 'google-analytics': None,\n\n # The address on which to serve the livereloading docs server.\n 'dev_addr': '127.0.0.1:8000',\n\n # If `True`, use `<page_name>/index.hmtl` style files with hyperlinks to the directory.\n # If `False`, use `<page_name>.html style file with hyperlinks to the file.\n # True generates nicer URLs, but False is useful if browsing the output on a filesystem.\n 'use_directory_urls': True,\n\n # Specify a link to the project source repo to be included\n # in the documentation pages.\n 'repo_url': None,\n\n # A name to use for the link to the project source repo.\n # Default: If repo_url is unset then None, otherwise\n # \"GitHub\" or \"Bitbucket\" for known url or Hostname for unknown urls.\n 'repo_name': None,\n\n # Specify which css or javascript files from the docs\n # directionary should be additionally included in the site.\n # Default: List of all .css and .js files in the docs dir.\n 'extra_css': None,\n 'extra_javascript': None,\n\n # Determine if the site should include the nav and next/prev elements.\n # Default: True if the site has more than one page, False otherwise.\n 'include_nav': None,\n 'include_next_prev': None,\n\n # PyMarkdown extension names.\n 'markdown_extensions': (),\n\n # Determine if the site should generate a json search index and include\n # search elements in the theme. - TODO\n 'include_search': False,\n\n # Determine if the site should include a 404.html page.\n # TODO: Implment this. Make this None, have it True if a 404.html\n # template exists in the theme or docs dir.\n 'include_404': False,\n\n # Determine if the site should include a sitemap.xml page.\n # TODO: Implement this. 
Make this None, have it True if a sitemap.xml\n # template exists in the theme or docs dir.\n 'include_sitemap': False,\n}\n\n\ndef load_config(filename='mkdocs.yml', options=None):\n options = options or {}\n if 'config' in options:\n filename = options['config']\n assert os.path.exists(filename), \"Config file '%s' does not exist.\" % filename\n with open(filename, 'r') as fp:\n user_config = yaml.load(fp)\n user_config.update(options)\n return validate_config(user_config)\n\n\ndef validate_config(user_config):\n config = DEFAULT_CONFIG.copy()\n config.update(user_config)\n\n assert config['site_name'], \"Config must contain 'site_name' setting.\"\n\n # If not specified, then the 'pages' config simply includes all\n # markdown files in the docs dir, without generating any header items\n # for them.\n pages = []\n extra_css = []\n extra_javascript = []\n for (dirpath, dirnames, filenames) in os.walk(config['docs_dir']):\n for filename in sorted(filenames):\n fullpath = os.path.join(dirpath, filename)\n relpath = os.path.relpath(fullpath, config['docs_dir'])\n\n if utils.is_markdown_file(filename):\n # index pages should always be the first listed page.\n if os.path.splitext(relpath)[0] == 'index':\n pages.insert(0, relpath)\n else:\n pages.append(relpath)\n elif utils.is_css_file(filename):\n extra_css.append(relpath)\n elif utils.is_javascript_file(filename):\n extra_javascript.append(relpath)\n\n if config['pages'] is None:\n config['pages'] = pages\n\n if config['extra_css'] is None:\n config['extra_css'] = extra_css\n\n if config['extra_javascript'] is None:\n config['extra_javascript'] = extra_javascript\n\n package_dir = os.path.dirname(__file__)\n theme_dir = [os.path.join(package_dir, 'themes', config['theme'])]\n\n if config['theme_dir'] is not None:\n theme_dir.insert(0, config['theme_dir'])\n\n config['theme_dir'] = theme_dir\n\n if config['repo_url'] is not None and config['repo_name'] is None:\n repo_host = urlparse(config['repo_url']).netloc.lower()\n if repo_host == 'github.com':\n config['repo_name'] = 'GitHub'\n elif repo_host == 'bitbucket.com':\n config['repo_name'] = 'Bitbucket'\n else:\n config['repo_name'] = repo_host.split('.')[0].title()\n\n if config['include_next_prev'] is None:\n config['include_next_prev'] = len(config['pages']) > 1\n\n if config['include_nav'] is None:\n config['include_nav'] = len(config['pages']) > 1\n\n # To Do:\n\n # The docs dir must exist.\n # The theme dir must exist.\n # Ensure 'theme' is one of 'mkdocs', 'readthedocs', 'custom'\n # A homepage 'index' must exist.\n # The theme 'base.html' file must exist.\n # Cannot set repo_name without setting repo_url.\n # Cannot set 'include_next_prev: true' when only one page exists.\n # Cannot set 'include_nav: true' when only one page exists.\n # Error if any config keys provided that are not in the DEFAULT_CONFIG.\n\n return config\n", "path": "mkdocs/config.py"}], "after_files": [{"content": "# coding: utf-8\n\nfrom mkdocs import utils\nfrom mkdocs.compat import urlparse\nimport errno\nimport os\nimport sys\nimport yaml\n\nDEFAULT_CONFIG = {\n 'site_name': None,\n 'pages': None,\n\n 'site_url': None,\n 'site_description': None,\n 'site_author': None,\n 'site_favicon': None,\n\n 'theme': 'mkdocs',\n 'docs_dir': 'docs',\n 'site_dir': 'site',\n 'theme_dir': None,\n\n 'copyright': None,\n 'google-analytics': None,\n\n # The address on which to serve the livereloading docs server.\n 'dev_addr': '127.0.0.1:8000',\n\n # If `True`, use `<page_name>/index.hmtl` style files with hyperlinks to the 
directory.\n # If `False`, use `<page_name>.html style file with hyperlinks to the file.\n # True generates nicer URLs, but False is useful if browsing the output on a filesystem.\n 'use_directory_urls': True,\n\n # Specify a link to the project source repo to be included\n # in the documentation pages.\n 'repo_url': None,\n\n # A name to use for the link to the project source repo.\n # Default: If repo_url is unset then None, otherwise\n # \"GitHub\" or \"Bitbucket\" for known url or Hostname for unknown urls.\n 'repo_name': None,\n\n # Specify which css or javascript files from the docs\n # directionary should be additionally included in the site.\n # Default: List of all .css and .js files in the docs dir.\n 'extra_css': None,\n 'extra_javascript': None,\n\n # Determine if the site should include the nav and next/prev elements.\n # Default: True if the site has more than one page, False otherwise.\n 'include_nav': None,\n 'include_next_prev': None,\n\n # PyMarkdown extension names.\n 'markdown_extensions': (),\n\n # Determine if the site should generate a json search index and include\n # search elements in the theme. - TODO\n 'include_search': False,\n\n # Determine if the site should include a 404.html page.\n # TODO: Implment this. Make this None, have it True if a 404.html\n # template exists in the theme or docs dir.\n 'include_404': False,\n\n # Determine if the site should include a sitemap.xml page.\n # TODO: Implement this. Make this None, have it True if a sitemap.xml\n # template exists in the theme or docs dir.\n 'include_sitemap': False,\n}\n\n\ndef load_config(filename='mkdocs.yml', options=None):\n options = options or {}\n if 'config' in options:\n filename = options['config']\n if not os.path.exists(filename):\n sys.stderr.write(\"Config file '%s' does not exist.\" % filename)\n sys.exit(errno.ENOENT)\n with open(filename, 'r') as fp:\n user_config = yaml.load(fp)\n user_config.update(options)\n return validate_config(user_config)\n\n\ndef validate_config(user_config):\n config = DEFAULT_CONFIG.copy()\n config.update(user_config)\n\n if not config['site_name']:\n sys.stderr.write(\"Config must contain 'site_name' setting.\")\n sys.exit(errno.EINVAL)\n\n # If not specified, then the 'pages' config simply includes all\n # markdown files in the docs dir, without generating any header items\n # for them.\n pages = []\n extra_css = []\n extra_javascript = []\n for (dirpath, dirnames, filenames) in os.walk(config['docs_dir']):\n for filename in sorted(filenames):\n fullpath = os.path.join(dirpath, filename)\n relpath = os.path.relpath(fullpath, config['docs_dir'])\n\n if utils.is_markdown_file(filename):\n # index pages should always be the first listed page.\n if os.path.splitext(relpath)[0] == 'index':\n pages.insert(0, relpath)\n else:\n pages.append(relpath)\n elif utils.is_css_file(filename):\n extra_css.append(relpath)\n elif utils.is_javascript_file(filename):\n extra_javascript.append(relpath)\n\n if config['pages'] is None:\n config['pages'] = pages\n\n if config['extra_css'] is None:\n config['extra_css'] = extra_css\n\n if config['extra_javascript'] is None:\n config['extra_javascript'] = extra_javascript\n\n package_dir = os.path.dirname(__file__)\n theme_dir = [os.path.join(package_dir, 'themes', config['theme'])]\n\n if config['theme_dir'] is not None:\n theme_dir.insert(0, config['theme_dir'])\n\n config['theme_dir'] = theme_dir\n\n if config['repo_url'] is not None and config['repo_name'] is None:\n repo_host = urlparse(config['repo_url']).netloc.lower()\n if 
repo_host == 'github.com':\n config['repo_name'] = 'GitHub'\n elif repo_host == 'bitbucket.com':\n config['repo_name'] = 'Bitbucket'\n else:\n config['repo_name'] = repo_host.split('.')[0].title()\n\n if config['include_next_prev'] is None:\n config['include_next_prev'] = len(config['pages']) > 1\n\n if config['include_nav'] is None:\n config['include_nav'] = len(config['pages']) > 1\n\n # To Do:\n\n # The docs dir must exist.\n # The theme dir must exist.\n # Ensure 'theme' is one of 'mkdocs', 'readthedocs', 'custom'\n # A homepage 'index' must exist.\n # The theme 'base.html' file must exist.\n # Cannot set repo_name without setting repo_url.\n # Cannot set 'include_next_prev: true' when only one page exists.\n # Cannot set 'include_nav: true' when only one page exists.\n # Error if any config keys provided that are not in the DEFAULT_CONFIG.\n\n return config\n", "path": "mkdocs/config.py"}]}
| 1,988 | 297 |
gh_patches_debug_33574
|
rasdani/github-patches
|
git_diff
|
getnikola__nikola-3501
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
dispatch PANDOC_OPTIONS base on input extensions
<!--
Before creating an issue:
* make sure you are using an up-to-date version of Nikola
* search for existing issues that might be related
Describe your requested features as precisely as possible. -->
# Summary
Currently, `COMPILERS` only has a key `pandoc`, and `PANDOC_OPTIONS` is set for all the extensions specified there. The proposed feature simply suggests adding configurable dispatch logic so that different extensions can call pandoc with different sets of `PANDOC_OPTIONS`.
# Example use case
My personal use for this would be to convert one of my sites, which currently uses `.md` and `.markdown`, so that they mean different target formats. Both will be converted to HTML, but one of them is a slideshow and the other is a normal "article"/post.
Another possible use case: if one wants to specify markdown extensions in pandoc, say `-t markdown+...-...`, then `COMPILERS['pandoc'] = ['.rst', '.md']` won't work with `PANDOC_OPTIONS = ['-t', 'markdown+...-...']`, because that would be the wrong input format for `.rst`.
# Suggested implementation
Probably the natural way is to promote
```py
PANDOC_OPTIONS: Union[list, Dict[str, list]]
```
where the keys are those in `COMPILERS['pandoc']`.
If it makes sense, I can send a PR doing that. It seems that it only requires changes in <https://github.com/getnikola/nikola/blob/master/nikola/plugins/compile/pandoc.py#L52-L56>; is that true?
--- END ISSUE ---
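Editorial note: a hypothetical `conf.py` fragment (the extensions, pandoc formats, and flags are chosen for illustration only, not taken from the issue) showing what the proposed `Dict[str, list]` form could look like, with the keys matching the extensions listed under `COMPILERS['pandoc']`:

```python
# conf.py (sketch) -- all three extensions are compiled by pandoc,
# but each gets its own argument list under the proposed dict form.
COMPILERS = {
    "pandoc": [".rst", ".md", ".markdown"],
}

PANDOC_OPTIONS = {
    ".rst": [],                                       # plain reST -> HTML
    ".md": ["-f", "markdown+smart"],                  # regular posts
    ".markdown": ["-t", "revealjs", "--standalone"],  # slide decks
}
```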
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nikola/plugins/compile/pandoc.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2021 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Page compiler plugin for pandoc.
28
29 You will need, of course, to install pandoc
30 """
31
32
33 import io
34 import os
35 import subprocess
36
37 from nikola.plugin_categories import PageCompiler
38 from nikola.utils import req_missing, makedirs, write_metadata
39
40
41 class CompilePandoc(PageCompiler):
42 """Compile markups into HTML using pandoc."""
43
44 name = "pandoc"
45 friendly_name = "pandoc"
46
47 def set_site(self, site):
48 """Set Nikola site."""
49 self.config_dependencies = [str(site.config['PANDOC_OPTIONS'])]
50 super().set_site(site)
51
52 def compile(self, source, dest, is_two_file=True, post=None, lang=None):
53 """Compile the source file into HTML and save as dest."""
54 makedirs(os.path.dirname(dest))
55 try:
56 subprocess.check_call(['pandoc', '-o', dest, source] + self.site.config['PANDOC_OPTIONS'])
57 with open(dest, 'r', encoding='utf-8-sig') as inf:
58 output, shortcode_deps = self.site.apply_shortcodes(inf.read())
59 with open(dest, 'w', encoding='utf-8') as outf:
60 outf.write(output)
61 if post is None:
62 if shortcode_deps:
63 self.logger.error(
64 "Cannot save dependencies for post {0} (post unknown)",
65 source)
66 else:
67 post._depfile[dest] += shortcode_deps
68 except OSError as e:
69 if e.strreror == 'No such file or directory':
70 req_missing(['pandoc'], 'build this site (compile with pandoc)', python=False)
71
72 def compile_string(self, data, source_path=None, is_two_file=True, post=None, lang=None):
73 """Compile into HTML strings."""
74 raise ValueError("Pandoc compiler does not support compile_string due to multiple output formats")
75
76 def create_post(self, path, **kw):
77 """Create a new post."""
78 content = kw.pop('content', None)
79 onefile = kw.pop('onefile', False)
80 # is_page is not used by create_post as of now.
81 kw.pop('is_page', False)
82 metadata = {}
83 metadata.update(self.default_metadata)
84 metadata.update(kw)
85 makedirs(os.path.dirname(path))
86 if not content.endswith('\n'):
87 content += '\n'
88 with io.open(path, "w+", encoding="utf8") as fd:
89 if onefile:
90 fd.write(write_metadata(metadata, comment_wrap=True, site=self.site, compiler=self))
91 fd.write(content)
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nikola/plugins/compile/pandoc.py b/nikola/plugins/compile/pandoc.py
--- a/nikola/plugins/compile/pandoc.py
+++ b/nikola/plugins/compile/pandoc.py
@@ -33,6 +33,8 @@
import io
import os
import subprocess
+from typing import List
+from pathlib import Path
from nikola.plugin_categories import PageCompiler
from nikola.utils import req_missing, makedirs, write_metadata
@@ -49,11 +51,30 @@
self.config_dependencies = [str(site.config['PANDOC_OPTIONS'])]
super().set_site(site)
+ def _get_pandoc_options(self, source: str) -> List[str]:
+ """Obtain pandoc args from config depending on type and file extensions."""
+ # Union[List[str], Dict[str, List[str]]]
+ config_options = self.site.config['PANDOC_OPTIONS']
+ type_ = type(config_options)
+ if type_ is list:
+ pandoc_options = config_options
+ elif type_ is dict:
+ ext = Path(source).suffix
+ try:
+ pandoc_options = config_options[ext]
+ except KeyError:
+ self.logger.warn('Setting PANDOC_OPTIONS to [], because extension {} is not defined in PANDOC_OPTIONS: {}.'.format(ext, config_options))
+ pandoc_options = []
+ else:
+ self.logger.warn('Setting PANDOC_OPTIONS to [], because PANDOC_OPTIONS is expected to be of type Union[List[str], Dict[str, List[str]]] but this is not: {}'.format(config_options))
+ pandoc_options = []
+ return pandoc_options
+
def compile(self, source, dest, is_two_file=True, post=None, lang=None):
"""Compile the source file into HTML and save as dest."""
makedirs(os.path.dirname(dest))
try:
- subprocess.check_call(['pandoc', '-o', dest, source] + self.site.config['PANDOC_OPTIONS'])
+ subprocess.check_call(['pandoc', '-o', dest, source] + self._get_pandoc_options(source))
with open(dest, 'r', encoding='utf-8-sig') as inf:
output, shortcode_deps = self.site.apply_shortcodes(inf.read())
with open(dest, 'w', encoding='utf-8') as outf:
|
{"golden_diff": "diff --git a/nikola/plugins/compile/pandoc.py b/nikola/plugins/compile/pandoc.py\n--- a/nikola/plugins/compile/pandoc.py\n+++ b/nikola/plugins/compile/pandoc.py\n@@ -33,6 +33,8 @@\n import io\n import os\n import subprocess\n+from typing import List\n+from pathlib import Path\n \n from nikola.plugin_categories import PageCompiler\n from nikola.utils import req_missing, makedirs, write_metadata\n@@ -49,11 +51,30 @@\n self.config_dependencies = [str(site.config['PANDOC_OPTIONS'])]\n super().set_site(site)\n \n+ def _get_pandoc_options(self, source: str) -> List[str]:\n+ \"\"\"Obtain pandoc args from config depending on type and file extensions.\"\"\"\n+ # Union[List[str], Dict[str, List[str]]]\n+ config_options = self.site.config['PANDOC_OPTIONS']\n+ type_ = type(config_options)\n+ if type_ is list:\n+ pandoc_options = config_options\n+ elif type_ is dict:\n+ ext = Path(source).suffix\n+ try:\n+ pandoc_options = config_options[ext]\n+ except KeyError:\n+ self.logger.warn('Setting PANDOC_OPTIONS to [], because extension {} is not defined in PANDOC_OPTIONS: {}.'.format(ext, config_options))\n+ pandoc_options = []\n+ else:\n+ self.logger.warn('Setting PANDOC_OPTIONS to [], because PANDOC_OPTIONS is expected to be of type Union[List[str], Dict[str, List[str]]] but this is not: {}'.format(config_options))\n+ pandoc_options = []\n+ return pandoc_options\n+\n def compile(self, source, dest, is_two_file=True, post=None, lang=None):\n \"\"\"Compile the source file into HTML and save as dest.\"\"\"\n makedirs(os.path.dirname(dest))\n try:\n- subprocess.check_call(['pandoc', '-o', dest, source] + self.site.config['PANDOC_OPTIONS'])\n+ subprocess.check_call(['pandoc', '-o', dest, source] + self._get_pandoc_options(source))\n with open(dest, 'r', encoding='utf-8-sig') as inf:\n output, shortcode_deps = self.site.apply_shortcodes(inf.read())\n with open(dest, 'w', encoding='utf-8') as outf:\n", "issue": "dispatch PANDOC_OPTIONS base on input extensions\n<!--\r\nBefore creating an issue:\r\n* make sure you are using an up-to-date version of Nikola\r\n* search for existing issues that might be related\r\n\r\nDescribe your requested features as precisely as possible. -->\r\n\r\n# Summary\r\n\r\nCurrently, `COMPILERS` only has a key `pandoc`, and `PANDOC_OPTIONS` is set for all the extensions specified there. The proposed feature simply suggests to have configurable dispatch logic to allow multiple extensions calling pandoc with a different set of `PANDOC_OPTIONS`.\r\n\r\n# Example use case\r\n\r\nMy personal use for this would be to convert one of my site currently using `.md` and `.markdown` to mean a different target format. Both will be converted to HTML, but one of them is slideshow, another is normal \"article\"/post.\r\n\r\nAnother possible use case is, if one want to specify markdown extensions in pandoc, say `-t markdown+...-...`, then `COMPILERS['pandoc'] = ['.rst', '.md']` won't work with `PANDOC_OPTIONS = ['-t', 'markdown+...-...']` because that would be wrong input format for `.rst`.\r\n\r\n# Suggested implementation\r\n\r\nProbably the natural way is to promote\r\n\r\n```py\r\nPANDOC_OPTIONS: Union[list, Dict[str, list]]\r\n```\r\n\r\nwhere the keys are those in `COMPILERS['pandoc']`.\r\n\r\nIf it makes sense I can send a PR doing that. 
It seems that it only requires changes in <https://github.com/getnikola/nikola/blob/master/nikola/plugins/compile/pandoc.py#L52-L56>, is it true?\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2021 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Page compiler plugin for pandoc.\n\nYou will need, of course, to install pandoc\n\"\"\"\n\n\nimport io\nimport os\nimport subprocess\n\nfrom nikola.plugin_categories import PageCompiler\nfrom nikola.utils import req_missing, makedirs, write_metadata\n\n\nclass CompilePandoc(PageCompiler):\n \"\"\"Compile markups into HTML using pandoc.\"\"\"\n\n name = \"pandoc\"\n friendly_name = \"pandoc\"\n\n def set_site(self, site):\n \"\"\"Set Nikola site.\"\"\"\n self.config_dependencies = [str(site.config['PANDOC_OPTIONS'])]\n super().set_site(site)\n\n def compile(self, source, dest, is_two_file=True, post=None, lang=None):\n \"\"\"Compile the source file into HTML and save as dest.\"\"\"\n makedirs(os.path.dirname(dest))\n try:\n subprocess.check_call(['pandoc', '-o', dest, source] + self.site.config['PANDOC_OPTIONS'])\n with open(dest, 'r', encoding='utf-8-sig') as inf:\n output, shortcode_deps = self.site.apply_shortcodes(inf.read())\n with open(dest, 'w', encoding='utf-8') as outf:\n outf.write(output)\n if post is None:\n if shortcode_deps:\n self.logger.error(\n \"Cannot save dependencies for post {0} (post unknown)\",\n source)\n else:\n post._depfile[dest] += shortcode_deps\n except OSError as e:\n if e.strreror == 'No such file or directory':\n req_missing(['pandoc'], 'build this site (compile with pandoc)', python=False)\n\n def compile_string(self, data, source_path=None, is_two_file=True, post=None, lang=None):\n \"\"\"Compile into HTML strings.\"\"\"\n raise ValueError(\"Pandoc compiler does not support compile_string due to multiple output formats\")\n\n def create_post(self, path, **kw):\n \"\"\"Create a new post.\"\"\"\n content = kw.pop('content', None)\n onefile = kw.pop('onefile', False)\n # is_page is not used by create_post as of now.\n kw.pop('is_page', False)\n metadata = {}\n metadata.update(self.default_metadata)\n metadata.update(kw)\n makedirs(os.path.dirname(path))\n if not content.endswith('\\n'):\n content += '\\n'\n with io.open(path, \"w+\", encoding=\"utf8\") as fd:\n if onefile:\n fd.write(write_metadata(metadata, comment_wrap=True, site=self.site, compiler=self))\n fd.write(content)\n", "path": 
"nikola/plugins/compile/pandoc.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2021 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Page compiler plugin for pandoc.\n\nYou will need, of course, to install pandoc\n\"\"\"\n\n\nimport io\nimport os\nimport subprocess\nfrom typing import List\nfrom pathlib import Path\n\nfrom nikola.plugin_categories import PageCompiler\nfrom nikola.utils import req_missing, makedirs, write_metadata\n\n\nclass CompilePandoc(PageCompiler):\n \"\"\"Compile markups into HTML using pandoc.\"\"\"\n\n name = \"pandoc\"\n friendly_name = \"pandoc\"\n\n def set_site(self, site):\n \"\"\"Set Nikola site.\"\"\"\n self.config_dependencies = [str(site.config['PANDOC_OPTIONS'])]\n super().set_site(site)\n\n def _get_pandoc_options(self, source: str) -> List[str]:\n \"\"\"Obtain pandoc args from config depending on type and file extensions.\"\"\"\n # Union[List[str], Dict[str, List[str]]]\n config_options = self.site.config['PANDOC_OPTIONS']\n type_ = type(config_options)\n if type_ is list:\n pandoc_options = config_options\n elif type_ is dict:\n ext = Path(source).suffix\n try:\n pandoc_options = config_options[ext]\n except KeyError:\n self.logger.warn('Setting PANDOC_OPTIONS to [], because extension {} is not defined in PANDOC_OPTIONS: {}.'.format(ext, config_options))\n pandoc_options = []\n else:\n self.logger.warn('Setting PANDOC_OPTIONS to [], because PANDOC_OPTIONS is expected to be of type Union[List[str], Dict[str, List[str]]] but this is not: {}'.format(config_options))\n pandoc_options = []\n return pandoc_options\n\n def compile(self, source, dest, is_two_file=True, post=None, lang=None):\n \"\"\"Compile the source file into HTML and save as dest.\"\"\"\n makedirs(os.path.dirname(dest))\n try:\n subprocess.check_call(['pandoc', '-o', dest, source] + self._get_pandoc_options(source))\n with open(dest, 'r', encoding='utf-8-sig') as inf:\n output, shortcode_deps = self.site.apply_shortcodes(inf.read())\n with open(dest, 'w', encoding='utf-8') as outf:\n outf.write(output)\n if post is None:\n if shortcode_deps:\n self.logger.error(\n \"Cannot save dependencies for post {0} (post unknown)\",\n source)\n else:\n post._depfile[dest] += shortcode_deps\n except OSError as e:\n if e.strreror == 'No such file or directory':\n req_missing(['pandoc'], 'build this site (compile with pandoc)', python=False)\n\n def compile_string(self, data, source_path=None, is_two_file=True, 
post=None, lang=None):\n \"\"\"Compile into HTML strings.\"\"\"\n raise ValueError(\"Pandoc compiler does not support compile_string due to multiple output formats\")\n\n def create_post(self, path, **kw):\n \"\"\"Create a new post.\"\"\"\n content = kw.pop('content', None)\n onefile = kw.pop('onefile', False)\n # is_page is not used by create_post as of now.\n kw.pop('is_page', False)\n metadata = {}\n metadata.update(self.default_metadata)\n metadata.update(kw)\n makedirs(os.path.dirname(path))\n if not content.endswith('\\n'):\n content += '\\n'\n with io.open(path, \"w+\", encoding=\"utf8\") as fd:\n if onefile:\n fd.write(write_metadata(metadata, comment_wrap=True, site=self.site, compiler=self))\n fd.write(content)\n", "path": "nikola/plugins/compile/pandoc.py"}]}
| 1,597 | 526 |
gh_patches_debug_3348
|
rasdani/github-patches
|
git_diff
|
Mailu__Mailu-811
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dovecot service takes long to start
```
docker stack ps mailu
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
yh6u3jrckt32 mailu_redis.1 redis:alpine prod02.usrpro.com Running Running 4 minutes ago
otpt6os15sy3 mailu_smtp.1 mailu/postfix:master prod01.usrpro.com Running Running 4 minutes ago
artmzgamypoy mailu_webdav.1 mailu/radicale:master prod03.usrpro.com Running Running 4 minutes ago
x5ynw6h5ap3p mailu_imap.1 usrpro/dovecot:workaround-lucene prod03.usrpro.com Running Starting 4 minutes ago
n674xs649jlw mailu_admin.1 mailu/admin:master prod01.usrpro.com Running Running 4 minutes ago
xrmri5cqjzqn mailu_antivirus.1 mailu/clamav:master prod02.usrpro.com Running Running 2 minutes ago
koiuyxlj2cpt mailu_antispam.1 mailu/rspamd:master prod03.usrpro.com Running Running 4 minutes ago
hdpu4gq182ha mailu_webmail.1 mailu/rainloop:master prod01.usrpro.com Running Running 4 minutes ago
wgqkwnz6wyxp mailu_front.1 mailu/nginx:master prod01.usrpro.com Running Running 4 minutes ago
v2eznworzs0g mailu_front.2 mailu/nginx:master prod02.usrpro.com Running Running 4 minutes ago
nckm82d8aryd mailu_front.3 mailu/nginx:master prod03.usrpro.com Running Running 4 minutes ago
```
In `start.py`, it is most probably stuck on:
```
multiprocessing.Process(target=start_podop).start()
```
This still has to be confirmed by adding some logging, as requested in issue #588.
--- END ISSUE ---
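Editorial note: a rough sketch of the kind of logging the reporter mentions; the labels and wrapped calls below are illustrative, not taken from issue #588, but timing each startup step like this would show whether the delay comes from the Podop process or from the recursive `chown` over `/mail`:

```python
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("dovecot-start")


def timed(label, fn, *args):
    # Log how long each startup step takes so the slow one stands out.
    started = time.monotonic()
    result = fn(*args)
    log.info("%s finished in %.1fs", label, time.monotonic() - started)
    return result

# Wrapping the suspect calls in start.py would look roughly like:
#   timed("podop", lambda: multiprocessing.Process(target=start_podop).start())
#   timed("chown", os.system, "chown -R mail:mail /mail /var/lib/dovecot /conf")
```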
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/dovecot/start.py`
Content:
```
1 #!/usr/bin/python3
2
3 import jinja2
4 import os
5 import socket
6 import glob
7 import multiprocessing
8 import tenacity
9
10 from tenacity import retry
11 from podop import run_server
12
13
14 def start_podop():
15 os.setuid(8)
16 run_server(3 if "DEBUG" in os.environ else 0, "dovecot", "/tmp/podop.socket", [
17 ("quota", "url", "http://admin/internal/dovecot/§"),
18 ("auth", "url", "http://admin/internal/dovecot/§"),
19 ("sieve", "url", "http://admin/internal/dovecot/§"),
20 ])
21
22 convert = lambda src, dst: open(dst, "w").write(jinja2.Template(open(src).read()).render(**os.environ))
23
24 # Actual startup script
25 resolve = retry(socket.gethostbyname, stop=tenacity.stop_after_attempt(100), wait=tenacity.wait_random(min=2, max=5))
26 os.environ["FRONT_ADDRESS"] = resolve(os.environ.get("FRONT_ADDRESS", "front"))
27 os.environ["REDIS_ADDRESS"] = resolve(os.environ.get("REDIS_ADDRESS", "redis"))
28 if os.environ["WEBMAIL"] != "none":
29 os.environ["WEBMAIL_ADDRESS"] = resolve(os.environ.get("WEBMAIL_ADDRESS", "webmail"))
30
31 for dovecot_file in glob.glob("/conf/*.conf"):
32 convert(dovecot_file, os.path.join("/etc/dovecot", os.path.basename(dovecot_file)))
33
34 # Run Podop, then postfix
35 multiprocessing.Process(target=start_podop).start()
36 os.system("chown -R mail:mail /mail /var/lib/dovecot /conf")
37 os.execv("/usr/sbin/dovecot", ["dovecot", "-c", "/etc/dovecot/dovecot.conf", "-F"])
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/core/dovecot/start.py b/core/dovecot/start.py
--- a/core/dovecot/start.py
+++ b/core/dovecot/start.py
@@ -33,5 +33,6 @@
# Run Podop, then postfix
multiprocessing.Process(target=start_podop).start()
-os.system("chown -R mail:mail /mail /var/lib/dovecot /conf")
+os.system("chown mail:mail /mail")
+os.system("chown -R mail:mail /var/lib/dovecot /conf")
os.execv("/usr/sbin/dovecot", ["dovecot", "-c", "/etc/dovecot/dovecot.conf", "-F"])
|
{"golden_diff": "diff --git a/core/dovecot/start.py b/core/dovecot/start.py\n--- a/core/dovecot/start.py\n+++ b/core/dovecot/start.py\n@@ -33,5 +33,6 @@\n \n # Run Podop, then postfix\n multiprocessing.Process(target=start_podop).start()\n-os.system(\"chown -R mail:mail /mail /var/lib/dovecot /conf\")\n+os.system(\"chown mail:mail /mail\")\n+os.system(\"chown -R mail:mail /var/lib/dovecot /conf\")\n os.execv(\"/usr/sbin/dovecot\", [\"dovecot\", \"-c\", \"/etc/dovecot/dovecot.conf\", \"-F\"])\n", "issue": "Dovecot service takes long to start\n```\r\ndocker stack ps mailu\r\nID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS\r\nyh6u3jrckt32 mailu_redis.1 redis:alpine prod02.usrpro.com Running Running 4 minutes ago \r\notpt6os15sy3 mailu_smtp.1 mailu/postfix:master prod01.usrpro.com Running Running 4 minutes ago \r\nartmzgamypoy mailu_webdav.1 mailu/radicale:master prod03.usrpro.com Running Running 4 minutes ago \r\nx5ynw6h5ap3p mailu_imap.1 usrpro/dovecot:workaround-lucene prod03.usrpro.com Running Starting 4 minutes ago \r\nn674xs649jlw mailu_admin.1 mailu/admin:master prod01.usrpro.com Running Running 4 minutes ago \r\nxrmri5cqjzqn mailu_antivirus.1 mailu/clamav:master prod02.usrpro.com Running Running 2 minutes ago \r\nkoiuyxlj2cpt mailu_antispam.1 mailu/rspamd:master prod03.usrpro.com Running Running 4 minutes ago \r\nhdpu4gq182ha mailu_webmail.1 mailu/rainloop:master prod01.usrpro.com Running Running 4 minutes ago \r\nwgqkwnz6wyxp mailu_front.1 mailu/nginx:master prod01.usrpro.com Running Running 4 minutes ago \r\nv2eznworzs0g mailu_front.2 mailu/nginx:master prod02.usrpro.com Running Running 4 minutes ago \r\nnckm82d8aryd mailu_front.3 mailu/nginx:master prod03.usrpro.com Running Running 4 minutes ago \r\n```\r\nIn `start.py`, most probably it is stuck on:\r\n```\r\nmultiprocessing.Process(target=start_podop).start()\r\n```\r\n\r\nHave to confirm with writing some logging, like requested in issue #588 .\n", "before_files": [{"content": "#!/usr/bin/python3\n\nimport jinja2\nimport os\nimport socket\nimport glob\nimport multiprocessing\nimport tenacity\n\nfrom tenacity import retry\nfrom podop import run_server\n\n\ndef start_podop():\n os.setuid(8)\n run_server(3 if \"DEBUG\" in os.environ else 0, \"dovecot\", \"/tmp/podop.socket\", [\n\t\t(\"quota\", \"url\", \"http://admin/internal/dovecot/\u00a7\"),\n\t\t(\"auth\", \"url\", \"http://admin/internal/dovecot/\u00a7\"),\n\t\t(\"sieve\", \"url\", \"http://admin/internal/dovecot/\u00a7\"),\n ])\n\nconvert = lambda src, dst: open(dst, \"w\").write(jinja2.Template(open(src).read()).render(**os.environ))\n\n# Actual startup script\nresolve = retry(socket.gethostbyname, stop=tenacity.stop_after_attempt(100), wait=tenacity.wait_random(min=2, max=5))\nos.environ[\"FRONT_ADDRESS\"] = resolve(os.environ.get(\"FRONT_ADDRESS\", \"front\"))\nos.environ[\"REDIS_ADDRESS\"] = resolve(os.environ.get(\"REDIS_ADDRESS\", \"redis\"))\nif os.environ[\"WEBMAIL\"] != \"none\":\n os.environ[\"WEBMAIL_ADDRESS\"] = resolve(os.environ.get(\"WEBMAIL_ADDRESS\", \"webmail\"))\n\nfor dovecot_file in glob.glob(\"/conf/*.conf\"):\n convert(dovecot_file, os.path.join(\"/etc/dovecot\", os.path.basename(dovecot_file)))\n\n# Run Podop, then postfix\nmultiprocessing.Process(target=start_podop).start()\nos.system(\"chown -R mail:mail /mail /var/lib/dovecot /conf\")\nos.execv(\"/usr/sbin/dovecot\", [\"dovecot\", \"-c\", \"/etc/dovecot/dovecot.conf\", \"-F\"])\n", "path": "core/dovecot/start.py"}], "after_files": [{"content": "#!/usr/bin/python3\n\nimport jinja2\nimport 
os\nimport socket\nimport glob\nimport multiprocessing\nimport tenacity\n\nfrom tenacity import retry\nfrom podop import run_server\n\n\ndef start_podop():\n os.setuid(8)\n run_server(3 if \"DEBUG\" in os.environ else 0, \"dovecot\", \"/tmp/podop.socket\", [\n\t\t(\"quota\", \"url\", \"http://admin/internal/dovecot/\u00a7\"),\n\t\t(\"auth\", \"url\", \"http://admin/internal/dovecot/\u00a7\"),\n\t\t(\"sieve\", \"url\", \"http://admin/internal/dovecot/\u00a7\"),\n ])\n\nconvert = lambda src, dst: open(dst, \"w\").write(jinja2.Template(open(src).read()).render(**os.environ))\n\n# Actual startup script\nresolve = retry(socket.gethostbyname, stop=tenacity.stop_after_attempt(100), wait=tenacity.wait_random(min=2, max=5))\nos.environ[\"FRONT_ADDRESS\"] = resolve(os.environ.get(\"FRONT_ADDRESS\", \"front\"))\nos.environ[\"REDIS_ADDRESS\"] = resolve(os.environ.get(\"REDIS_ADDRESS\", \"redis\"))\nif os.environ[\"WEBMAIL\"] != \"none\":\n os.environ[\"WEBMAIL_ADDRESS\"] = resolve(os.environ.get(\"WEBMAIL_ADDRESS\", \"webmail\"))\n\nfor dovecot_file in glob.glob(\"/conf/*.conf\"):\n convert(dovecot_file, os.path.join(\"/etc/dovecot\", os.path.basename(dovecot_file)))\n\n# Run Podop, then postfix\nmultiprocessing.Process(target=start_podop).start()\nos.system(\"chown mail:mail /mail\")\nos.system(\"chown -R mail:mail /var/lib/dovecot /conf\")\nos.execv(\"/usr/sbin/dovecot\", [\"dovecot\", \"-c\", \"/etc/dovecot/dovecot.conf\", \"-F\"])\n", "path": "core/dovecot/start.py"}]}
| 1,236 | 150 |
gh_patches_debug_6841
|
rasdani/github-patches
|
git_diff
|
activeloopai__deeplake-1738
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[FEATURE] Do not overwrite Python logging configuration
## 🚨🚨 Feature Request
Be able to use both `logging` and `hub` in the same Python script
### Current behavior
I found that the following code has undesired behavior: when the line `import hub` is added, the logging configuration is overwritten.
```python
import logging
import hub # this will overwrite the logging config
logging.basicConfig(filename = "sample.log", level = logging.DEBUG,
format = '%(asctime)s: %(filename)s: %(levelname)s: %(message)s')
logger = logging.getLogger(__name__)
logger.debug("this is a debug message")
logger.info("this is an info message")
logger.warning("this is a warning message")
logger.error("this is an error message")
```
--- END ISSUE ---
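Editorial note: the root cause is that `hub/client/log.py` (shown below) calls `logging.basicConfig()` at import time, which attaches a handler to the root logger and therefore makes the user's later `basicConfig()` call a no-op. A minimal sketch of the library-friendly alternative, configuring only the package's own logger rather than the root logger (this is also the approach the patch below takes):

```python
import logging
import sys

logger = logging.getLogger("hub")


def configure_logger(debug=0):
    level = logging.DEBUG if debug == 1 else logging.INFO
    handler = logging.StreamHandler(stream=sys.stdout)
    handler.setLevel(level)
    handler.setFormatter(logging.Formatter("%(message)s"))
    logger.setLevel(level)
    logger.addHandler(handler)
    logger.propagate = False  # don't hand records to the (user-owned) root logger
```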
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hub/client/log.py`
Content:
```
1 """
2 License:
3 This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0.
4 If a copy of the MPL was not distributed with this file, You can obtain one at https://mozilla.org/MPL/2.0/.
5 """
6
7 import logging
8 import sys
9
10 logger = logging.getLogger("hub")
11
12
13 def configure_logger(debug=0):
14 log_level = logging.DEBUG if debug == 1 else logging.INFO
15 logging.basicConfig(format="%(message)s", level=log_level, stream=sys.stdout)
16
17
18 configure_logger(0)
19
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/hub/client/log.py b/hub/client/log.py
--- a/hub/client/log.py
+++ b/hub/client/log.py
@@ -12,7 +12,17 @@
def configure_logger(debug=0):
log_level = logging.DEBUG if debug == 1 else logging.INFO
- logging.basicConfig(format="%(message)s", level=log_level, stream=sys.stdout)
+ logger.setLevel(log_level)
+
+ stream_handler = logging.StreamHandler(stream=sys.stdout)
+ stream_handler.setLevel(log_level)
+
+ formatter = logging.Formatter("%(message)s")
+
+ stream_handler.setFormatter(formatter)
+
+ logger.addHandler(stream_handler)
+ logger.propagate = False
configure_logger(0)
|
{"golden_diff": "diff --git a/hub/client/log.py b/hub/client/log.py\n--- a/hub/client/log.py\n+++ b/hub/client/log.py\n@@ -12,7 +12,17 @@\n \n def configure_logger(debug=0):\n log_level = logging.DEBUG if debug == 1 else logging.INFO\n- logging.basicConfig(format=\"%(message)s\", level=log_level, stream=sys.stdout)\n+ logger.setLevel(log_level)\n+\n+ stream_handler = logging.StreamHandler(stream=sys.stdout)\n+ stream_handler.setLevel(log_level)\n+\n+ formatter = logging.Formatter(\"%(message)s\")\n+\n+ stream_handler.setFormatter(formatter)\n+\n+ logger.addHandler(stream_handler)\n+ logger.propagate = False\n \n \n configure_logger(0)\n", "issue": "[FEATURE]Do not overwrite python logging configuration\n## \ud83d\udea8\ud83d\udea8 Feature Request\r\n\r\nBe able to use both `logging` and `hub` in the same python script\r\n\r\n### Current behavior\r\n\r\nI found the following code has undesired behavior: when adding the line `import hub`, the logging configuration will be overwritten.\r\n\r\n```python\r\nimport logging\r\nimport hub # this will overwrite the logging config\r\n\r\nlogging.basicConfig(filename = \"sample.log\", level = logging.DEBUG,\r\n format = '%(asctime)s: %(filename)s: %(levelname)s: %(message)s')\r\n\r\nlogger = logging.getLogger(__name__)\r\n\r\nlogger.debug(\"this is a debug message\")\r\nlogger.info(\"this is an info message\")\r\nlogger.warning(\"this is a warning message\")\r\nlogger.error(\"this is an error message\")\r\n\r\n```\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nLicense:\nThis Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0.\nIf a copy of the MPL was not distributed with this file, You can obtain one at https://mozilla.org/MPL/2.0/.\n\"\"\"\n\nimport logging\nimport sys\n\nlogger = logging.getLogger(\"hub\")\n\n\ndef configure_logger(debug=0):\n log_level = logging.DEBUG if debug == 1 else logging.INFO\n logging.basicConfig(format=\"%(message)s\", level=log_level, stream=sys.stdout)\n\n\nconfigure_logger(0)\n", "path": "hub/client/log.py"}], "after_files": [{"content": "\"\"\"\nLicense:\nThis Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0.\nIf a copy of the MPL was not distributed with this file, You can obtain one at https://mozilla.org/MPL/2.0/.\n\"\"\"\n\nimport logging\nimport sys\n\nlogger = logging.getLogger(\"hub\")\n\n\ndef configure_logger(debug=0):\n log_level = logging.DEBUG if debug == 1 else logging.INFO\n logger.setLevel(log_level)\n\n stream_handler = logging.StreamHandler(stream=sys.stdout)\n stream_handler.setLevel(log_level)\n\n formatter = logging.Formatter(\"%(message)s\")\n\n stream_handler.setFormatter(formatter)\n\n logger.addHandler(stream_handler)\n logger.propagate = False\n\n\nconfigure_logger(0)\n", "path": "hub/client/log.py"}]}
| 567 | 157 |
gh_patches_debug_24390
|
rasdani/github-patches
|
git_diff
|
pulp__pulpcore-4188
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reclaim space for repository fails with Cannot delete some instances of model 'Artifact' because they are referenced through protected foreign keys: 'ContentArtifact.artifact'."
**Version**
3.16, but probably all versions
**Describe the bug**
- Reclaim space for repository fails with the following error.
~~~
Task paused with error: "("Cannot delete some instances of model 'Artifact' because they are referenced through protected foreign keys: 'ContentArtifact.artifact'.", {<ContentArtifact: pk=452959ad-c045-4e85-bf9f-6651ba37f57d>})"
~~~
**To Reproduce**
See BZ
**Additional context**
https://bugzilla.redhat.com/show_bug.cgi?id=2169322
--- END ISSUE ---
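Editorial note: `Artifact.delete()` raises Django's `ProtectedError` when another `ContentArtifact` row still references the same artifact, so the reclaim loop has to tolerate shared artifacts instead of failing the whole task. A minimal sketch of that pattern (the helper name is hypothetical):

```python
from django.db.models.deletion import ProtectedError


def delete_unreferenced(artifacts):
    """Delete artifacts one at a time, skipping any that are still referenced."""
    skipped = 0
    for artifact in artifacts.iterator():
        try:
            artifact.delete()  # delete() also removes the file from storage
        except ProtectedError:
            # Another ContentArtifact still points at this artifact (it is
            # shared with a content unit outside the reclaim set); skip it.
            skipped += 1
    return skipped
```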
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/app/tasks/reclaim_space.py`
Content:
```
1 from pulpcore.app.models import (
2 Artifact,
3 Content,
4 ContentArtifact,
5 ProgressReport,
6 PublishedMetadata,
7 Repository,
8 RepositoryVersion,
9 )
10 from pulpcore.app.util import get_domain
11
12
13 def reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):
14 """
15 This task frees-up disk space by removing Artifact files from the filesystem for Content
16 exclusive to the list of provided repos.
17
18 Note: content marked as `proctected` will be excluded from the reclaim disk space.
19
20 Kwargs:
21 repo_pks (list): A list of repo pks the disk reclaim space is performed on.
22 keeplist_rv_pks (list): A list of repo version pks that will be excluded from the reclaim
23 disk space.
24 force (bool): If True, uploaded content will be taken into account.
25
26 """
27 reclaimed_repos = Repository.objects.filter(pk__in=repo_pks)
28 for repo in reclaimed_repos:
29 repo.invalidate_cache(everything=True)
30
31 domain = get_domain()
32 rest_of_repos = Repository.objects.filter(pulp_domain=domain).exclude(pk__in=repo_pks)
33 c_keep_qs = Content.objects.filter(repositories__in=rest_of_repos)
34 c_reclaim_qs = Content.objects.filter(repositories__in=repo_pks)
35 c_reclaim_qs = c_reclaim_qs.exclude(
36 pk__in=c_keep_qs, pulp_type=PublishedMetadata.get_pulp_type()
37 )
38
39 if keeplist_rv_pks:
40 rv_qs = RepositoryVersion.objects.filter(pk__in=keeplist_rv_pks)
41 rv_content = Content.objects.none()
42 for rv in rv_qs.iterator():
43 rv_content |= rv.content
44 c_reclaim_qs = c_reclaim_qs.exclude(pk__in=rv_content)
45
46 content_distinct = c_reclaim_qs.distinct("pulp_type")
47 unprotected = []
48 for content in content_distinct:
49 if not content.cast().PROTECTED_FROM_RECLAIM:
50 unprotected.append(content.pulp_type)
51
52 ca_qs = ContentArtifact.objects.select_related("content", "artifact").filter(
53 content__in=c_reclaim_qs.values("pk"), artifact__isnull=False
54 )
55 if not force:
56 ca_qs = ca_qs.filter(remoteartifact__isnull=False)
57 artifact_pks = set()
58 ca_to_update = []
59 for ca in ca_qs.iterator():
60 if ca.content.pulp_type in unprotected:
61 artifact_pks.add(ca.artifact.pk)
62 ca.artifact = None
63 ca_to_update.append(ca)
64
65 ContentArtifact.objects.bulk_update(objs=ca_to_update, fields=["artifact"], batch_size=1000)
66 artifacts_to_delete = Artifact.objects.filter(pk__in=artifact_pks)
67 progress_bar = ProgressReport(
68 message="Reclaim disk space",
69 total=artifacts_to_delete.count(),
70 code="reclaim-space.artifact",
71 done=0,
72 state="running",
73 )
74 progress_bar.save()
75
76 counter = 0
77 interval = 100
78 for artifact in artifacts_to_delete.iterator():
79 # we need to manually call delete() because it cleans up the file on the filesystem
80 artifact.delete()
81 progress_bar.done += 1
82 counter += 1
83
84 if counter >= interval:
85 progress_bar.save()
86 counter = 0
87
88 progress_bar.state = "completed"
89 progress_bar.save()
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pulpcore/app/tasks/reclaim_space.py b/pulpcore/app/tasks/reclaim_space.py
--- a/pulpcore/app/tasks/reclaim_space.py
+++ b/pulpcore/app/tasks/reclaim_space.py
@@ -1,3 +1,7 @@
+from logging import getLogger
+
+from django.db.models.deletion import ProtectedError
+
from pulpcore.app.models import (
Artifact,
Content,
@@ -9,6 +13,8 @@
)
from pulpcore.app.util import get_domain
+log = getLogger(__name__)
+
def reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):
"""
@@ -76,10 +82,16 @@
counter = 0
interval = 100
for artifact in artifacts_to_delete.iterator():
- # we need to manually call delete() because it cleans up the file on the filesystem
- artifact.delete()
- progress_bar.done += 1
- counter += 1
+ try:
+ # we need to manually call delete() because it cleans up the file on the filesystem
+ artifact.delete()
+ except ProtectedError as e:
+ # Rarely artifact could be shared between to different content units.
+ # Just log and skip the artifact deletion in this case
+ log.info(e)
+ else:
+ progress_bar.done += 1
+ counter += 1
if counter >= interval:
progress_bar.save()
|
{"golden_diff": "diff --git a/pulpcore/app/tasks/reclaim_space.py b/pulpcore/app/tasks/reclaim_space.py\n--- a/pulpcore/app/tasks/reclaim_space.py\n+++ b/pulpcore/app/tasks/reclaim_space.py\n@@ -1,3 +1,7 @@\n+from logging import getLogger\n+\n+from django.db.models.deletion import ProtectedError\n+\n from pulpcore.app.models import (\n Artifact,\n Content,\n@@ -9,6 +13,8 @@\n )\n from pulpcore.app.util import get_domain\n \n+log = getLogger(__name__)\n+\n \n def reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):\n \"\"\"\n@@ -76,10 +82,16 @@\n counter = 0\n interval = 100\n for artifact in artifacts_to_delete.iterator():\n- # we need to manually call delete() because it cleans up the file on the filesystem\n- artifact.delete()\n- progress_bar.done += 1\n- counter += 1\n+ try:\n+ # we need to manually call delete() because it cleans up the file on the filesystem\n+ artifact.delete()\n+ except ProtectedError as e:\n+ # Rarely artifact could be shared between to different content units.\n+ # Just log and skip the artifact deletion in this case\n+ log.info(e)\n+ else:\n+ progress_bar.done += 1\n+ counter += 1\n \n if counter >= interval:\n progress_bar.save()\n", "issue": "Reclaim space for repository fails with Cannot delete some instances of model 'Artifact' because they are referenced through protected foreign keys: 'ContentArtifact.artifact'.\"\n**Version**\r\n3.16, but probably all versions\r\n\r\n**Describe the bug**\r\n\r\n- Reclaim space for repository fails with the following error.\r\n\r\n ~~~\r\n Task paused with error: \"(\"Cannot delete some instances of model 'Artifact' because they are referenced through protected foreign keys: 'ContentArtifact.artifact'.\", {<ContentArtifact: pk=452959ad-c045-4e85-bf9f-6651ba37f57d>})\"\r\n ~~~\r\n\r\n**To Reproduce**\r\nSee BZ\r\n\r\n**Additional context**\r\nhttps://bugzilla.redhat.com/show_bug.cgi?id=2169322\r\n\n", "before_files": [{"content": "from pulpcore.app.models import (\n Artifact,\n Content,\n ContentArtifact,\n ProgressReport,\n PublishedMetadata,\n Repository,\n RepositoryVersion,\n)\nfrom pulpcore.app.util import get_domain\n\n\ndef reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):\n \"\"\"\n This task frees-up disk space by removing Artifact files from the filesystem for Content\n exclusive to the list of provided repos.\n\n Note: content marked as `proctected` will be excluded from the reclaim disk space.\n\n Kwargs:\n repo_pks (list): A list of repo pks the disk reclaim space is performed on.\n keeplist_rv_pks (list): A list of repo version pks that will be excluded from the reclaim\n disk space.\n force (bool): If True, uploaded content will be taken into account.\n\n \"\"\"\n reclaimed_repos = Repository.objects.filter(pk__in=repo_pks)\n for repo in reclaimed_repos:\n repo.invalidate_cache(everything=True)\n\n domain = get_domain()\n rest_of_repos = Repository.objects.filter(pulp_domain=domain).exclude(pk__in=repo_pks)\n c_keep_qs = Content.objects.filter(repositories__in=rest_of_repos)\n c_reclaim_qs = Content.objects.filter(repositories__in=repo_pks)\n c_reclaim_qs = c_reclaim_qs.exclude(\n pk__in=c_keep_qs, pulp_type=PublishedMetadata.get_pulp_type()\n )\n\n if keeplist_rv_pks:\n rv_qs = RepositoryVersion.objects.filter(pk__in=keeplist_rv_pks)\n rv_content = Content.objects.none()\n for rv in rv_qs.iterator():\n rv_content |= rv.content\n c_reclaim_qs = c_reclaim_qs.exclude(pk__in=rv_content)\n\n content_distinct = c_reclaim_qs.distinct(\"pulp_type\")\n unprotected = []\n for content in 
content_distinct:\n if not content.cast().PROTECTED_FROM_RECLAIM:\n unprotected.append(content.pulp_type)\n\n ca_qs = ContentArtifact.objects.select_related(\"content\", \"artifact\").filter(\n content__in=c_reclaim_qs.values(\"pk\"), artifact__isnull=False\n )\n if not force:\n ca_qs = ca_qs.filter(remoteartifact__isnull=False)\n artifact_pks = set()\n ca_to_update = []\n for ca in ca_qs.iterator():\n if ca.content.pulp_type in unprotected:\n artifact_pks.add(ca.artifact.pk)\n ca.artifact = None\n ca_to_update.append(ca)\n\n ContentArtifact.objects.bulk_update(objs=ca_to_update, fields=[\"artifact\"], batch_size=1000)\n artifacts_to_delete = Artifact.objects.filter(pk__in=artifact_pks)\n progress_bar = ProgressReport(\n message=\"Reclaim disk space\",\n total=artifacts_to_delete.count(),\n code=\"reclaim-space.artifact\",\n done=0,\n state=\"running\",\n )\n progress_bar.save()\n\n counter = 0\n interval = 100\n for artifact in artifacts_to_delete.iterator():\n # we need to manually call delete() because it cleans up the file on the filesystem\n artifact.delete()\n progress_bar.done += 1\n counter += 1\n\n if counter >= interval:\n progress_bar.save()\n counter = 0\n\n progress_bar.state = \"completed\"\n progress_bar.save()\n", "path": "pulpcore/app/tasks/reclaim_space.py"}], "after_files": [{"content": "from logging import getLogger\n\nfrom django.db.models.deletion import ProtectedError\n\nfrom pulpcore.app.models import (\n Artifact,\n Content,\n ContentArtifact,\n ProgressReport,\n PublishedMetadata,\n Repository,\n RepositoryVersion,\n)\nfrom pulpcore.app.util import get_domain\n\nlog = getLogger(__name__)\n\n\ndef reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):\n \"\"\"\n This task frees-up disk space by removing Artifact files from the filesystem for Content\n exclusive to the list of provided repos.\n\n Note: content marked as `proctected` will be excluded from the reclaim disk space.\n\n Kwargs:\n repo_pks (list): A list of repo pks the disk reclaim space is performed on.\n keeplist_rv_pks (list): A list of repo version pks that will be excluded from the reclaim\n disk space.\n force (bool): If True, uploaded content will be taken into account.\n\n \"\"\"\n reclaimed_repos = Repository.objects.filter(pk__in=repo_pks)\n for repo in reclaimed_repos:\n repo.invalidate_cache(everything=True)\n\n domain = get_domain()\n rest_of_repos = Repository.objects.filter(pulp_domain=domain).exclude(pk__in=repo_pks)\n c_keep_qs = Content.objects.filter(repositories__in=rest_of_repos)\n c_reclaim_qs = Content.objects.filter(repositories__in=repo_pks)\n c_reclaim_qs = c_reclaim_qs.exclude(\n pk__in=c_keep_qs, pulp_type=PublishedMetadata.get_pulp_type()\n )\n\n if keeplist_rv_pks:\n rv_qs = RepositoryVersion.objects.filter(pk__in=keeplist_rv_pks)\n rv_content = Content.objects.none()\n for rv in rv_qs.iterator():\n rv_content |= rv.content\n c_reclaim_qs = c_reclaim_qs.exclude(pk__in=rv_content)\n\n content_distinct = c_reclaim_qs.distinct(\"pulp_type\")\n unprotected = []\n for content in content_distinct:\n if not content.cast().PROTECTED_FROM_RECLAIM:\n unprotected.append(content.pulp_type)\n\n ca_qs = ContentArtifact.objects.select_related(\"content\", \"artifact\").filter(\n content__in=c_reclaim_qs.values(\"pk\"), artifact__isnull=False\n )\n if not force:\n ca_qs = ca_qs.filter(remoteartifact__isnull=False)\n artifact_pks = set()\n ca_to_update = []\n for ca in ca_qs.iterator():\n if ca.content.pulp_type in unprotected:\n artifact_pks.add(ca.artifact.pk)\n ca.artifact = None\n 
ca_to_update.append(ca)\n\n ContentArtifact.objects.bulk_update(objs=ca_to_update, fields=[\"artifact\"], batch_size=1000)\n artifacts_to_delete = Artifact.objects.filter(pk__in=artifact_pks)\n progress_bar = ProgressReport(\n message=\"Reclaim disk space\",\n total=artifacts_to_delete.count(),\n code=\"reclaim-space.artifact\",\n done=0,\n state=\"running\",\n )\n progress_bar.save()\n\n counter = 0\n interval = 100\n for artifact in artifacts_to_delete.iterator():\n try:\n # we need to manually call delete() because it cleans up the file on the filesystem\n artifact.delete()\n except ProtectedError as e:\n # Rarely artifact could be shared between to different content units.\n # Just log and skip the artifact deletion in this case\n log.info(e)\n else:\n progress_bar.done += 1\n counter += 1\n\n if counter >= interval:\n progress_bar.save()\n counter = 0\n\n progress_bar.state = \"completed\"\n progress_bar.save()\n", "path": "pulpcore/app/tasks/reclaim_space.py"}]}
| 1,349 | 322 |
gh_patches_debug_42862
|
rasdani/github-patches
|
git_diff
|
Textualize__textual-3830
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make Title of a Collapsible Widget Editable
Cannot update the title property of the Collapsible widget; looking at the CollapsibleTitle class, it seems to only be set at init.
Looks like this was an intentional choice?
But I think it would be helpful to make the title property editable.
E.g. changing content in the collapsed state and not wanting to toggle it open to get a quick status update, so write something in .title...
~~~
from textual.app import App, ComposeResult
from textual.containers import Container, Horizontal, VerticalScroll
from textual.widgets import Header, Footer, Collapsible, Placeholder
class AnApp(App):
TITLE = "Collapsible Titles Should Update"
BINDINGS = [("d", "toggle_dark", "Update"),
("c", "collapse_or_expand(True)", "Collapse"),
("e", "collapse_or_expand(False)", "Expand")]
def action_collapse_or_expand(self, collapse: bool) -> None:
for child in self.walk_children(Collapsible):
child.collapsed = collapse
def compose(self) -> ComposeResult:
"""Create child widgets for the app."""
yield Header(show_clock=True)
yield VerticalScroll(
Collapsible(
Horizontal(
Placeholder(variant="text"),
Placeholder(variant="size"),
Placeholder(variant="text"),
),
title="Top_Stuff",id="top"
)
)
yield Footer()
def action_toggle_dark(self) -> None:
"""An action to toggle dark mode."""
top_collapsible_widget = self.query_one("#top")
top_collapsible_widget.title = "UPDATED_Top_Stuff"
self.app.title = "UPDATED_App_Title"
self.dark = not self.dark
if __name__ == "__main__":
app = AnApp()
app.run()
~~~
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/textual/widgets/_collapsible.py`
Content:
```
1 from __future__ import annotations
2
3 from rich.console import RenderableType
4 from rich.text import Text
5
6 from .. import events
7 from ..app import ComposeResult
8 from ..binding import Binding
9 from ..containers import Container
10 from ..css.query import NoMatches
11 from ..message import Message
12 from ..reactive import reactive
13 from ..widget import Widget
14
15 __all__ = ["Collapsible", "CollapsibleTitle"]
16
17
18 class CollapsibleTitle(Widget, can_focus=True):
19 """Title and symbol for the Collapsible."""
20
21 DEFAULT_CSS = """
22 CollapsibleTitle {
23 width: auto;
24 height: auto;
25 padding: 0 1 0 1;
26 }
27
28 CollapsibleTitle:hover {
29 background: $foreground 10%;
30 color: $text;
31 }
32
33 CollapsibleTitle:focus {
34 background: $accent;
35 color: $text;
36 }
37 """
38
39 BINDINGS = [Binding("enter", "toggle", "Toggle collapsible", show=False)]
40 """
41 | Key(s) | Description |
42 | :- | :- |
43 | enter | Toggle the collapsible. |
44 """
45
46 collapsed = reactive(True)
47
48 def __init__(
49 self,
50 *,
51 label: str,
52 collapsed_symbol: str,
53 expanded_symbol: str,
54 collapsed: bool,
55 ) -> None:
56 super().__init__()
57 self.collapsed_symbol = collapsed_symbol
58 self.expanded_symbol = expanded_symbol
59 self.label = label
60 self.collapse = collapsed
61
62 class Toggle(Message):
63 """Request toggle."""
64
65 async def _on_click(self, event: events.Click) -> None:
66 """Inform ancestor we want to toggle."""
67 event.stop()
68 self.post_message(self.Toggle())
69
70 def action_toggle(self) -> None:
71 """Toggle the state of the parent collapsible."""
72 self.post_message(self.Toggle())
73
74 def render(self) -> RenderableType:
75 """Compose right/down arrow and label."""
76 if self.collapsed:
77 return Text(f"{self.collapsed_symbol} {self.label}")
78 else:
79 return Text(f"{self.expanded_symbol} {self.label}")
80
81
82 class Collapsible(Widget):
83 """A collapsible container."""
84
85 collapsed = reactive(True)
86
87 DEFAULT_CSS = """
88 Collapsible {
89 width: 1fr;
90 height: auto;
91 background: $boost;
92 border-top: hkey $background;
93 padding-bottom: 1;
94 padding-left: 1;
95 }
96
97 Collapsible.-collapsed > Contents {
98 display: none;
99 }
100 """
101
102 class Toggled(Message):
103 """Parent class subclassed by `Collapsible` messages.
104
105 Can be handled with `on(Collapsible.Toggled)` if you want to handle expansions
106 and collapsed in the same way, or you can handle the specific events individually.
107 """
108
109 def __init__(self, collapsible: Collapsible) -> None:
110 """Create an instance of the message.
111
112 Args:
113 collapsible: The `Collapsible` widget that was toggled.
114 """
115 self.collapsible: Collapsible = collapsible
116 """The collapsible that was toggled."""
117 super().__init__()
118
119 @property
120 def control(self) -> Collapsible:
121 """An alias for [Toggled.collapsible][textual.widgets.Collapsible.Toggled.collapsible]."""
122 return self.collapsible
123
124 class Expanded(Toggled):
125 """Event sent when the `Collapsible` widget is expanded.
126
127 Can be handled using `on_collapsible_expanded` in a subclass of
128 [`Collapsible`][textual.widgets.Collapsible] or in a parent widget in the DOM.
129 """
130
131 class Collapsed(Toggled):
132 """Event sent when the `Collapsible` widget is collapsed.
133
134 Can be handled using `on_collapsible_collapsed` in a subclass of
135 [`Collapsible`][textual.widgets.Collapsible] or in a parent widget in the DOM.
136 """
137
138 class Contents(Container):
139 DEFAULT_CSS = """
140 Contents {
141 width: 100%;
142 height: auto;
143 padding: 1 0 0 3;
144 }
145 """
146
147 def __init__(
148 self,
149 *children: Widget,
150 title: str = "Toggle",
151 collapsed: bool = True,
152 collapsed_symbol: str = "▶",
153 expanded_symbol: str = "▼",
154 name: str | None = None,
155 id: str | None = None,
156 classes: str | None = None,
157 disabled: bool = False,
158 ) -> None:
159 """Initialize a Collapsible widget.
160
161 Args:
162 *children: Contents that will be collapsed/expanded.
163 title: Title of the collapsed/expanded contents.
164 collapsed: Default status of the contents.
165 collapsed_symbol: Collapsed symbol before the title.
166 expanded_symbol: Expanded symbol before the title.
167 name: The name of the collapsible.
168 id: The ID of the collapsible in the DOM.
169 classes: The CSS classes of the collapsible.
170 disabled: Whether the collapsible is disabled or not.
171 """
172 self._title = CollapsibleTitle(
173 label=title,
174 collapsed_symbol=collapsed_symbol,
175 expanded_symbol=expanded_symbol,
176 collapsed=collapsed,
177 )
178 self._contents_list: list[Widget] = list(children)
179 super().__init__(name=name, id=id, classes=classes, disabled=disabled)
180 self.collapsed = collapsed
181
182 def _on_collapsible_title_toggle(self, event: CollapsibleTitle.Toggle) -> None:
183 event.stop()
184 self.collapsed = not self.collapsed
185 if self.collapsed:
186 self.post_message(self.Collapsed(self))
187 else:
188 self.post_message(self.Expanded(self))
189
190 def _watch_collapsed(self, collapsed: bool) -> None:
191 """Update collapsed state when reactive is changed."""
192 self._update_collapsed(collapsed)
193
194 def _update_collapsed(self, collapsed: bool) -> None:
195 """Update children to match collapsed state."""
196 try:
197 self._title.collapsed = collapsed
198 self.set_class(collapsed, "-collapsed")
199 except NoMatches:
200 pass
201
202 def _on_mount(self, event: events.Mount) -> None:
203 """Initialise collapsed state."""
204 self._update_collapsed(self.collapsed)
205
206 def compose(self) -> ComposeResult:
207 yield self._title
208 yield self.Contents(*self._contents_list)
209
210 def compose_add_child(self, widget: Widget) -> None:
211 """When using the context manager compose syntax, we want to attach nodes to the contents.
212
213 Args:
214 widget: A Widget to add.
215 """
216 self._contents_list.append(widget)
217
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/textual/widgets/_collapsible.py b/src/textual/widgets/_collapsible.py
--- a/src/textual/widgets/_collapsible.py
+++ b/src/textual/widgets/_collapsible.py
@@ -1,8 +1,5 @@
from __future__ import annotations
-from rich.console import RenderableType
-from rich.text import Text
-
from .. import events
from ..app import ComposeResult
from ..binding import Binding
@@ -11,11 +8,12 @@
from ..message import Message
from ..reactive import reactive
from ..widget import Widget
+from ..widgets import Static
__all__ = ["Collapsible", "CollapsibleTitle"]
-class CollapsibleTitle(Widget, can_focus=True):
+class CollapsibleTitle(Static, can_focus=True):
"""Title and symbol for the Collapsible."""
DEFAULT_CSS = """
@@ -44,6 +42,7 @@
"""
collapsed = reactive(True)
+ label = reactive("Toggle")
def __init__(
self,
@@ -57,7 +56,9 @@
self.collapsed_symbol = collapsed_symbol
self.expanded_symbol = expanded_symbol
self.label = label
- self.collapse = collapsed
+ self.collapsed = collapsed
+ self._collapsed_label = f"{collapsed_symbol} {label}"
+ self._expanded_label = f"{expanded_symbol} {label}"
class Toggle(Message):
"""Request toggle."""
@@ -71,18 +72,26 @@
"""Toggle the state of the parent collapsible."""
self.post_message(self.Toggle())
- def render(self) -> RenderableType:
- """Compose right/down arrow and label."""
+ def _watch_label(self, label: str) -> None:
+ self._collapsed_label = f"{self.collapsed_symbol} {label}"
+ self._expanded_label = f"{self.expanded_symbol} {label}"
if self.collapsed:
- return Text(f"{self.collapsed_symbol} {self.label}")
+ self.update(self._collapsed_label)
+ else:
+ self.update(self._expanded_label)
+
+ def _watch_collapsed(self, collapsed: bool) -> None:
+ if collapsed:
+ self.update(self._collapsed_label)
else:
- return Text(f"{self.expanded_symbol} {self.label}")
+ self.update(self._expanded_label)
class Collapsible(Widget):
"""A collapsible container."""
collapsed = reactive(True)
+ title = reactive("Toggle")
DEFAULT_CSS = """
Collapsible {
@@ -169,14 +178,15 @@
classes: The CSS classes of the collapsible.
disabled: Whether the collapsible is disabled or not.
"""
+ super().__init__(name=name, id=id, classes=classes, disabled=disabled)
self._title = CollapsibleTitle(
label=title,
collapsed_symbol=collapsed_symbol,
expanded_symbol=expanded_symbol,
collapsed=collapsed,
)
+ self.title = title
self._contents_list: list[Widget] = list(children)
- super().__init__(name=name, id=id, classes=classes, disabled=disabled)
self.collapsed = collapsed
def _on_collapsible_title_toggle(self, event: CollapsibleTitle.Toggle) -> None:
@@ -214,3 +224,6 @@
widget: A Widget to add.
"""
self._contents_list.append(widget)
+
+ def _watch_title(self, title: str) -> None:
+ self._title.label = title
|
{"golden_diff": "diff --git a/src/textual/widgets/_collapsible.py b/src/textual/widgets/_collapsible.py\n--- a/src/textual/widgets/_collapsible.py\n+++ b/src/textual/widgets/_collapsible.py\n@@ -1,8 +1,5 @@\n from __future__ import annotations\n \n-from rich.console import RenderableType\n-from rich.text import Text\n-\n from .. import events\n from ..app import ComposeResult\n from ..binding import Binding\n@@ -11,11 +8,12 @@\n from ..message import Message\n from ..reactive import reactive\n from ..widget import Widget\n+from ..widgets import Static\n \n __all__ = [\"Collapsible\", \"CollapsibleTitle\"]\n \n \n-class CollapsibleTitle(Widget, can_focus=True):\n+class CollapsibleTitle(Static, can_focus=True):\n \"\"\"Title and symbol for the Collapsible.\"\"\"\n \n DEFAULT_CSS = \"\"\"\n@@ -44,6 +42,7 @@\n \"\"\"\n \n collapsed = reactive(True)\n+ label = reactive(\"Toggle\")\n \n def __init__(\n self,\n@@ -57,7 +56,9 @@\n self.collapsed_symbol = collapsed_symbol\n self.expanded_symbol = expanded_symbol\n self.label = label\n- self.collapse = collapsed\n+ self.collapsed = collapsed\n+ self._collapsed_label = f\"{collapsed_symbol} {label}\"\n+ self._expanded_label = f\"{expanded_symbol} {label}\"\n \n class Toggle(Message):\n \"\"\"Request toggle.\"\"\"\n@@ -71,18 +72,26 @@\n \"\"\"Toggle the state of the parent collapsible.\"\"\"\n self.post_message(self.Toggle())\n \n- def render(self) -> RenderableType:\n- \"\"\"Compose right/down arrow and label.\"\"\"\n+ def _watch_label(self, label: str) -> None:\n+ self._collapsed_label = f\"{self.collapsed_symbol} {label}\"\n+ self._expanded_label = f\"{self.expanded_symbol} {label}\"\n if self.collapsed:\n- return Text(f\"{self.collapsed_symbol} {self.label}\")\n+ self.update(self._collapsed_label)\n+ else:\n+ self.update(self._expanded_label)\n+\n+ def _watch_collapsed(self, collapsed: bool) -> None:\n+ if collapsed:\n+ self.update(self._collapsed_label)\n else:\n- return Text(f\"{self.expanded_symbol} {self.label}\")\n+ self.update(self._expanded_label)\n \n \n class Collapsible(Widget):\n \"\"\"A collapsible container.\"\"\"\n \n collapsed = reactive(True)\n+ title = reactive(\"Toggle\")\n \n DEFAULT_CSS = \"\"\"\n Collapsible {\n@@ -169,14 +178,15 @@\n classes: The CSS classes of the collapsible.\n disabled: Whether the collapsible is disabled or not.\n \"\"\"\n+ super().__init__(name=name, id=id, classes=classes, disabled=disabled)\n self._title = CollapsibleTitle(\n label=title,\n collapsed_symbol=collapsed_symbol,\n expanded_symbol=expanded_symbol,\n collapsed=collapsed,\n )\n+ self.title = title\n self._contents_list: list[Widget] = list(children)\n- super().__init__(name=name, id=id, classes=classes, disabled=disabled)\n self.collapsed = collapsed\n \n def _on_collapsible_title_toggle(self, event: CollapsibleTitle.Toggle) -> None:\n@@ -214,3 +224,6 @@\n widget: A Widget to add.\n \"\"\"\n self._contents_list.append(widget)\n+\n+ def _watch_title(self, title: str) -> None:\n+ self._title.label = title\n", "issue": "Make Title of a Collapsible Widget Editable\nCan not update title property of the Collapsible widget , looking at CollapsibleTitle class it seems to only set at init\r\nLooks like this was an intentional choice ?\r\nBut I think it would be helpful to make the title property editable ,\r\nEg: changing content in the collapsed state and don't want to toggle it open to get a quick status update, so write something in .title... 
\r\n\r\n~~~\r\nfrom textual.app import App, ComposeResult\r\nfrom textual.containers import Container, Horizontal, VerticalScroll\r\nfrom textual.widgets import Header, Footer, Collapsible, Placeholder\r\n\r\nclass AnApp(App):\r\n\r\n TITLE = \"Collapsible Titles Should Update\"\r\n\r\n BINDINGS = [(\"d\", \"toggle_dark\", \"Update\"),\r\n (\"c\", \"collapse_or_expand(True)\", \"Collapse\"),\r\n (\"e\", \"collapse_or_expand(False)\", \"Expand\")]\r\n \r\n def action_collapse_or_expand(self, collapse: bool) -> None:\r\n for child in self.walk_children(Collapsible):\r\n child.collapsed = collapse\r\n\r\n def compose(self) -> ComposeResult:\r\n \"\"\"Create child widgets for the app.\"\"\"\r\n yield Header(show_clock=True)\r\n yield VerticalScroll(\r\n Collapsible(\r\n Horizontal(\r\n Placeholder(variant=\"text\"),\r\n Placeholder(variant=\"size\"),\r\n Placeholder(variant=\"text\"),\r\n ),\r\n title=\"Top_Stuff\",id=\"top\"\r\n )\r\n )\r\n yield Footer()\r\n\r\n def action_toggle_dark(self) -> None:\r\n \"\"\"An action to toggle dark mode.\"\"\"\r\n top_collapsible_widget = self.query_one(\"#top\")\r\n top_collapsible_widget.title = \"UPDATED_Top_Stuff\"\r\n self.app.title = \"UPDATED_App_Title\"\r\n self.dark = not self.dark\r\n\r\n\r\nif __name__ == \"__main__\":\r\n app = AnApp()\r\n app.run()\r\n~~~\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom rich.console import RenderableType\nfrom rich.text import Text\n\nfrom .. import events\nfrom ..app import ComposeResult\nfrom ..binding import Binding\nfrom ..containers import Container\nfrom ..css.query import NoMatches\nfrom ..message import Message\nfrom ..reactive import reactive\nfrom ..widget import Widget\n\n__all__ = [\"Collapsible\", \"CollapsibleTitle\"]\n\n\nclass CollapsibleTitle(Widget, can_focus=True):\n \"\"\"Title and symbol for the Collapsible.\"\"\"\n\n DEFAULT_CSS = \"\"\"\n CollapsibleTitle {\n width: auto;\n height: auto;\n padding: 0 1 0 1;\n }\n\n CollapsibleTitle:hover {\n background: $foreground 10%;\n color: $text;\n }\n\n CollapsibleTitle:focus {\n background: $accent;\n color: $text;\n }\n \"\"\"\n\n BINDINGS = [Binding(\"enter\", \"toggle\", \"Toggle collapsible\", show=False)]\n \"\"\"\n | Key(s) | Description |\n | :- | :- |\n | enter | Toggle the collapsible. 
|\n \"\"\"\n\n collapsed = reactive(True)\n\n def __init__(\n self,\n *,\n label: str,\n collapsed_symbol: str,\n expanded_symbol: str,\n collapsed: bool,\n ) -> None:\n super().__init__()\n self.collapsed_symbol = collapsed_symbol\n self.expanded_symbol = expanded_symbol\n self.label = label\n self.collapse = collapsed\n\n class Toggle(Message):\n \"\"\"Request toggle.\"\"\"\n\n async def _on_click(self, event: events.Click) -> None:\n \"\"\"Inform ancestor we want to toggle.\"\"\"\n event.stop()\n self.post_message(self.Toggle())\n\n def action_toggle(self) -> None:\n \"\"\"Toggle the state of the parent collapsible.\"\"\"\n self.post_message(self.Toggle())\n\n def render(self) -> RenderableType:\n \"\"\"Compose right/down arrow and label.\"\"\"\n if self.collapsed:\n return Text(f\"{self.collapsed_symbol} {self.label}\")\n else:\n return Text(f\"{self.expanded_symbol} {self.label}\")\n\n\nclass Collapsible(Widget):\n \"\"\"A collapsible container.\"\"\"\n\n collapsed = reactive(True)\n\n DEFAULT_CSS = \"\"\"\n Collapsible {\n width: 1fr;\n height: auto;\n background: $boost;\n border-top: hkey $background;\n padding-bottom: 1;\n padding-left: 1;\n }\n\n Collapsible.-collapsed > Contents {\n display: none;\n }\n \"\"\"\n\n class Toggled(Message):\n \"\"\"Parent class subclassed by `Collapsible` messages.\n\n Can be handled with `on(Collapsible.Toggled)` if you want to handle expansions\n and collapsed in the same way, or you can handle the specific events individually.\n \"\"\"\n\n def __init__(self, collapsible: Collapsible) -> None:\n \"\"\"Create an instance of the message.\n\n Args:\n collapsible: The `Collapsible` widget that was toggled.\n \"\"\"\n self.collapsible: Collapsible = collapsible\n \"\"\"The collapsible that was toggled.\"\"\"\n super().__init__()\n\n @property\n def control(self) -> Collapsible:\n \"\"\"An alias for [Toggled.collapsible][textual.widgets.Collapsible.Toggled.collapsible].\"\"\"\n return self.collapsible\n\n class Expanded(Toggled):\n \"\"\"Event sent when the `Collapsible` widget is expanded.\n\n Can be handled using `on_collapsible_expanded` in a subclass of\n [`Collapsible`][textual.widgets.Collapsible] or in a parent widget in the DOM.\n \"\"\"\n\n class Collapsed(Toggled):\n \"\"\"Event sent when the `Collapsible` widget is collapsed.\n\n Can be handled using `on_collapsible_collapsed` in a subclass of\n [`Collapsible`][textual.widgets.Collapsible] or in a parent widget in the DOM.\n \"\"\"\n\n class Contents(Container):\n DEFAULT_CSS = \"\"\"\n Contents {\n width: 100%;\n height: auto;\n padding: 1 0 0 3;\n }\n \"\"\"\n\n def __init__(\n self,\n *children: Widget,\n title: str = \"Toggle\",\n collapsed: bool = True,\n collapsed_symbol: str = \"\u25b6\",\n expanded_symbol: str = \"\u25bc\",\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n disabled: bool = False,\n ) -> None:\n \"\"\"Initialize a Collapsible widget.\n\n Args:\n *children: Contents that will be collapsed/expanded.\n title: Title of the collapsed/expanded contents.\n collapsed: Default status of the contents.\n collapsed_symbol: Collapsed symbol before the title.\n expanded_symbol: Expanded symbol before the title.\n name: The name of the collapsible.\n id: The ID of the collapsible in the DOM.\n classes: The CSS classes of the collapsible.\n disabled: Whether the collapsible is disabled or not.\n \"\"\"\n self._title = CollapsibleTitle(\n label=title,\n collapsed_symbol=collapsed_symbol,\n expanded_symbol=expanded_symbol,\n collapsed=collapsed,\n )\n 
self._contents_list: list[Widget] = list(children)\n super().__init__(name=name, id=id, classes=classes, disabled=disabled)\n self.collapsed = collapsed\n\n def _on_collapsible_title_toggle(self, event: CollapsibleTitle.Toggle) -> None:\n event.stop()\n self.collapsed = not self.collapsed\n if self.collapsed:\n self.post_message(self.Collapsed(self))\n else:\n self.post_message(self.Expanded(self))\n\n def _watch_collapsed(self, collapsed: bool) -> None:\n \"\"\"Update collapsed state when reactive is changed.\"\"\"\n self._update_collapsed(collapsed)\n\n def _update_collapsed(self, collapsed: bool) -> None:\n \"\"\"Update children to match collapsed state.\"\"\"\n try:\n self._title.collapsed = collapsed\n self.set_class(collapsed, \"-collapsed\")\n except NoMatches:\n pass\n\n def _on_mount(self, event: events.Mount) -> None:\n \"\"\"Initialise collapsed state.\"\"\"\n self._update_collapsed(self.collapsed)\n\n def compose(self) -> ComposeResult:\n yield self._title\n yield self.Contents(*self._contents_list)\n\n def compose_add_child(self, widget: Widget) -> None:\n \"\"\"When using the context manager compose syntax, we want to attach nodes to the contents.\n\n Args:\n widget: A Widget to add.\n \"\"\"\n self._contents_list.append(widget)\n", "path": "src/textual/widgets/_collapsible.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom .. import events\nfrom ..app import ComposeResult\nfrom ..binding import Binding\nfrom ..containers import Container\nfrom ..css.query import NoMatches\nfrom ..message import Message\nfrom ..reactive import reactive\nfrom ..widget import Widget\nfrom ..widgets import Static\n\n__all__ = [\"Collapsible\", \"CollapsibleTitle\"]\n\n\nclass CollapsibleTitle(Static, can_focus=True):\n \"\"\"Title and symbol for the Collapsible.\"\"\"\n\n DEFAULT_CSS = \"\"\"\n CollapsibleTitle {\n width: auto;\n height: auto;\n padding: 0 1 0 1;\n }\n\n CollapsibleTitle:hover {\n background: $foreground 10%;\n color: $text;\n }\n\n CollapsibleTitle:focus {\n background: $accent;\n color: $text;\n }\n \"\"\"\n\n BINDINGS = [Binding(\"enter\", \"toggle\", \"Toggle collapsible\", show=False)]\n \"\"\"\n | Key(s) | Description |\n | :- | :- |\n | enter | Toggle the collapsible. 
|\n \"\"\"\n\n collapsed = reactive(True)\n label = reactive(\"Toggle\")\n\n def __init__(\n self,\n *,\n label: str,\n collapsed_symbol: str,\n expanded_symbol: str,\n collapsed: bool,\n ) -> None:\n super().__init__()\n self.collapsed_symbol = collapsed_symbol\n self.expanded_symbol = expanded_symbol\n self.label = label\n self.collapsed = collapsed\n self._collapsed_label = f\"{collapsed_symbol} {label}\"\n self._expanded_label = f\"{expanded_symbol} {label}\"\n\n class Toggle(Message):\n \"\"\"Request toggle.\"\"\"\n\n async def _on_click(self, event: events.Click) -> None:\n \"\"\"Inform ancestor we want to toggle.\"\"\"\n event.stop()\n self.post_message(self.Toggle())\n\n def action_toggle(self) -> None:\n \"\"\"Toggle the state of the parent collapsible.\"\"\"\n self.post_message(self.Toggle())\n\n def _watch_label(self, label: str) -> None:\n self._collapsed_label = f\"{self.collapsed_symbol} {label}\"\n self._expanded_label = f\"{self.expanded_symbol} {label}\"\n if self.collapsed:\n self.update(self._collapsed_label)\n else:\n self.update(self._expanded_label)\n\n def _watch_collapsed(self, collapsed: bool) -> None:\n if collapsed:\n self.update(self._collapsed_label)\n else:\n self.update(self._expanded_label)\n\n\nclass Collapsible(Widget):\n \"\"\"A collapsible container.\"\"\"\n\n collapsed = reactive(True)\n title = reactive(\"Toggle\")\n\n DEFAULT_CSS = \"\"\"\n Collapsible {\n width: 1fr;\n height: auto;\n background: $boost;\n border-top: hkey $background;\n padding-bottom: 1;\n padding-left: 1;\n }\n\n Collapsible.-collapsed > Contents {\n display: none;\n }\n \"\"\"\n\n class Toggled(Message):\n \"\"\"Parent class subclassed by `Collapsible` messages.\n\n Can be handled with `on(Collapsible.Toggled)` if you want to handle expansions\n and collapsed in the same way, or you can handle the specific events individually.\n \"\"\"\n\n def __init__(self, collapsible: Collapsible) -> None:\n \"\"\"Create an instance of the message.\n\n Args:\n collapsible: The `Collapsible` widget that was toggled.\n \"\"\"\n self.collapsible: Collapsible = collapsible\n \"\"\"The collapsible that was toggled.\"\"\"\n super().__init__()\n\n @property\n def control(self) -> Collapsible:\n \"\"\"An alias for [Toggled.collapsible][textual.widgets.Collapsible.Toggled.collapsible].\"\"\"\n return self.collapsible\n\n class Expanded(Toggled):\n \"\"\"Event sent when the `Collapsible` widget is expanded.\n\n Can be handled using `on_collapsible_expanded` in a subclass of\n [`Collapsible`][textual.widgets.Collapsible] or in a parent widget in the DOM.\n \"\"\"\n\n class Collapsed(Toggled):\n \"\"\"Event sent when the `Collapsible` widget is collapsed.\n\n Can be handled using `on_collapsible_collapsed` in a subclass of\n [`Collapsible`][textual.widgets.Collapsible] or in a parent widget in the DOM.\n \"\"\"\n\n class Contents(Container):\n DEFAULT_CSS = \"\"\"\n Contents {\n width: 100%;\n height: auto;\n padding: 1 0 0 3;\n }\n \"\"\"\n\n def __init__(\n self,\n *children: Widget,\n title: str = \"Toggle\",\n collapsed: bool = True,\n collapsed_symbol: str = \"\u25b6\",\n expanded_symbol: str = \"\u25bc\",\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n disabled: bool = False,\n ) -> None:\n \"\"\"Initialize a Collapsible widget.\n\n Args:\n *children: Contents that will be collapsed/expanded.\n title: Title of the collapsed/expanded contents.\n collapsed: Default status of the contents.\n collapsed_symbol: Collapsed symbol before the title.\n expanded_symbol: 
Expanded symbol before the title.\n name: The name of the collapsible.\n id: The ID of the collapsible in the DOM.\n classes: The CSS classes of the collapsible.\n disabled: Whether the collapsible is disabled or not.\n \"\"\"\n super().__init__(name=name, id=id, classes=classes, disabled=disabled)\n self._title = CollapsibleTitle(\n label=title,\n collapsed_symbol=collapsed_symbol,\n expanded_symbol=expanded_symbol,\n collapsed=collapsed,\n )\n self.title = title\n self._contents_list: list[Widget] = list(children)\n self.collapsed = collapsed\n\n def _on_collapsible_title_toggle(self, event: CollapsibleTitle.Toggle) -> None:\n event.stop()\n self.collapsed = not self.collapsed\n if self.collapsed:\n self.post_message(self.Collapsed(self))\n else:\n self.post_message(self.Expanded(self))\n\n def _watch_collapsed(self, collapsed: bool) -> None:\n \"\"\"Update collapsed state when reactive is changed.\"\"\"\n self._update_collapsed(collapsed)\n\n def _update_collapsed(self, collapsed: bool) -> None:\n \"\"\"Update children to match collapsed state.\"\"\"\n try:\n self._title.collapsed = collapsed\n self.set_class(collapsed, \"-collapsed\")\n except NoMatches:\n pass\n\n def _on_mount(self, event: events.Mount) -> None:\n \"\"\"Initialise collapsed state.\"\"\"\n self._update_collapsed(self.collapsed)\n\n def compose(self) -> ComposeResult:\n yield self._title\n yield self.Contents(*self._contents_list)\n\n def compose_add_child(self, widget: Widget) -> None:\n \"\"\"When using the context manager compose syntax, we want to attach nodes to the contents.\n\n Args:\n widget: A Widget to add.\n \"\"\"\n self._contents_list.append(widget)\n\n def _watch_title(self, title: str) -> None:\n self._title.label = title\n", "path": "src/textual/widgets/_collapsible.py"}]}
| 2,627 | 781 |
gh_patches_debug_53978
|
rasdani/github-patches
|
git_diff
|
scikit-hep__pyhf-1091
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add CodeMeta JSON-LD Context File
# Description
As part of an IRIS-HEP initiative to improve the citation quality/ease for all software it has been recommended by @danielskatz that projects:
- Track authors/contributors to software as it is developed
- Create a citation metadata file in the software repo and keep it up to date
- Link Github and Zenodo to get DOIs for major releases
- In the README and docs clearly says how to cite the software
`pyhf` already does all of this, as:
- [x] There is an [`AUTHORS` file](https://github.com/scikit-hep/pyhf/blob/31d34a0edd91593d75181cc3ad71563a52600793/AUTHORS) as well as a [listing of the authors in the docs](https://scikit-hep.org/pyhf/index.html#authors)
- [x] There is a [`.zenodo.json`](https://github.com/scikit-hep/pyhf/blob/2b0a1e2ba41a2aa1233511524e3aa78e696ead34/.zenodo.json) that formats Zenodo with the correct metadata and adds the ORCID of the authors
- [x] There is a Zenodo DOI that is kept in sync with our release schedule (almost) fully automatically (almost as it requires 1 button click). [](https://doi.org/10.5281/zenodo.1169739)
- [x] There is a Citation section in both the [`README`](https://github.com/scikit-hep/pyhf/blob/2b0a1e2ba41a2aa1233511524e3aa78e696ead34/README.rst#citation) and the [docs webpage](https://scikit-hep.org/pyhf/citations.html) with the preferred citation
It has been pointed out though that we should also look into adding a [CodeMeta Project JSON-LD Context File](https://codemeta.github.io/jsonld/) as with it
> zenodo will read [it] and will be basically the same as [schema.org](http://schema.org/) so that this will help google index software, and tools be able to use the metadata file in [softwareheritage.org](http://softwareheritage.org/) to automatically build a citation for a particular commit.
(cc @gordonwatts as this might be of interest)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup
2
3 extras_require = {
4 'shellcomplete': ['click_completion'],
5 'tensorflow': [
6 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major
7 'tensorflow-probability~=0.10.0',
8 ],
9 'torch': ['torch~=1.2'],
10 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],
11 'xmlio': ['uproot~=3.6'], # Future proof against uproot4 API changes
12 'minuit': ['iminuit~=1.4.3'], # v1.5.0 breaks pyhf for 32b TensorFlow and PyTorch
13 }
14 extras_require['backends'] = sorted(
15 set(
16 extras_require['tensorflow']
17 + extras_require['torch']
18 + extras_require['jax']
19 + extras_require['minuit']
20 )
21 )
22 extras_require['contrib'] = sorted(set(['matplotlib', 'requests']))
23 extras_require['lint'] = sorted(set(['pyflakes', 'black']))
24
25 extras_require['test'] = sorted(
26 set(
27 extras_require['backends']
28 + extras_require['xmlio']
29 + extras_require['contrib']
30 + extras_require['shellcomplete']
31 + [
32 'pytest~=6.0',
33 'pytest-cov>=2.5.1',
34 'pytest-mock',
35 'pytest-benchmark[histogram]',
36 'pytest-console-scripts',
37 'pytest-mpl',
38 'pydocstyle',
39 'coverage>=4.0', # coveralls
40 'papermill~=2.0',
41 'nteract-scrapbook~=0.2',
42 'jupyter',
43 'uproot~=3.3',
44 'graphviz',
45 'jsonpatch',
46 ]
47 )
48 )
49 extras_require['docs'] = sorted(
50 set(
51 [
52 'sphinx>=3.1.2',
53 'sphinxcontrib-bibtex',
54 'sphinx-click',
55 'sphinx_rtd_theme',
56 'nbsphinx',
57 'ipywidgets',
58 'sphinx-issues',
59 'sphinx-copybutton>0.2.9',
60 ]
61 )
62 )
63 extras_require['develop'] = sorted(
64 set(
65 extras_require['docs']
66 + extras_require['lint']
67 + extras_require['test']
68 + ['nbdime', 'bump2version', 'ipython', 'pre-commit', 'check-manifest', 'twine']
69 )
70 )
71 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
72
73
74 setup(
75 extras_require=extras_require,
76 use_scm_version=lambda: {'local_scheme': lambda version: ''},
77 )
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -65,7 +65,15 @@
extras_require['docs']
+ extras_require['lint']
+ extras_require['test']
- + ['nbdime', 'bump2version', 'ipython', 'pre-commit', 'check-manifest', 'twine']
+ + [
+ 'nbdime',
+ 'bump2version',
+ 'ipython',
+ 'pre-commit',
+ 'check-manifest',
+ 'codemetapy>=0.3.4',
+ 'twine',
+ ]
)
)
extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -65,7 +65,15 @@\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n- + ['nbdime', 'bump2version', 'ipython', 'pre-commit', 'check-manifest', 'twine']\n+ + [\n+ 'nbdime',\n+ 'bump2version',\n+ 'ipython',\n+ 'pre-commit',\n+ 'check-manifest',\n+ 'codemetapy>=0.3.4',\n+ 'twine',\n+ ]\n )\n )\n extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n", "issue": "Add CodeMeta JSON-LD Context File\n# Description\r\n\r\nAs part of an IRIS-HEP initiative to improve the citation quality/ease for all software it has been recommended by @danielskatz that projects:\r\n- Track authors/contributors to software as it is developed\r\n- Create a citation metadata file in the software repo and keep it up to date\r\n- Link Github and Zenodo to get DOIs for major releases\r\n- In the README and docs clearly says how to cite the software\r\n\r\n`pyhf` already does all of this, as:\r\n- [x] There is an [`AUTHORS` file](https://github.com/scikit-hep/pyhf/blob/31d34a0edd91593d75181cc3ad71563a52600793/AUTHORS) as well as a [listing of the authors in the docs](https://scikit-hep.org/pyhf/index.html#authors)\r\n- [x] There is a [`.zenodo.json`](https://github.com/scikit-hep/pyhf/blob/2b0a1e2ba41a2aa1233511524e3aa78e696ead34/.zenodo.json) that formats Zenodo with the correct metadata and adds the ORCID of the authors\r\n- [x] There is a Zenodo DOI that is kept in sync with our release schedule (almost) fully automatically (almost as it requires 1 button click). [](https://doi.org/10.5281/zenodo.1169739)\r\n- [x] There is a Citation section in both the [`README`](https://github.com/scikit-hep/pyhf/blob/2b0a1e2ba41a2aa1233511524e3aa78e696ead34/README.rst#citation) and the [docs webpage](https://scikit-hep.org/pyhf/citations.html) with the preferred citation\r\n\r\nIt has been pointed out though that we should also look into adding a [CodeMeta Project JSON-LD Context File](https://codemeta.github.io/jsonld/) as with it\r\n\r\n> zenodo will read [it] and will be basically the same as [schema.org](http://schema.org/) so that this will help google index software, and tools be able to use the metadata file in [softwareheritage.org](http://softwareheritage.org/) to automatically build a citation for a particular commit.\r\n\r\n(cc @gordonwatts as this might be of interest)\n", "before_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major\n 'tensorflow-probability~=0.10.0',\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n 'xmlio': ['uproot~=3.6'], # Future proof against uproot4 API changes\n 'minuit': ['iminuit~=1.4.3'], # v1.5.0 breaks pyhf for 32b TensorFlow and PyTorch\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted(set(['matplotlib', 'requests']))\nextras_require['lint'] = sorted(set(['pyflakes', 'black']))\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'pytest~=6.0',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # 
coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'uproot~=3.3',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n [\n 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + ['nbdime', 'bump2version', 'ipython', 'pre-commit', 'check-manifest', 'twine']\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major\n 'tensorflow-probability~=0.10.0',\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n 'xmlio': ['uproot~=3.6'], # Future proof against uproot4 API changes\n 'minuit': ['iminuit~=1.4.3'], # v1.5.0 breaks pyhf for 32b TensorFlow and PyTorch\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted(set(['matplotlib', 'requests']))\nextras_require['lint'] = sorted(set(['pyflakes', 'black']))\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'pytest~=6.0',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'uproot~=3.3',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n [\n 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + [\n 'nbdime',\n 'bump2version',\n 'ipython',\n 'pre-commit',\n 'check-manifest',\n 'codemetapy>=0.3.4',\n 'twine',\n ]\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}]}
| 1,591 | 163 |
gh_patches_debug_9416
|
rasdani/github-patches
|
git_diff
|
searxng__searxng-1686
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: bing videos engine
**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**
Repository: https://github.com/searxng/searxng
Branch: master
Version: 2022.08.14-649d1a14
<!-- Check if these values are correct -->
**How did you install SearXNG?**
searxng-docker
**What happened?**
bing video search crashed
**How To Reproduce**
search for anything
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
**Screenshots & Logs**
<!-- If applicable, add screenshots, logs to help explain your problem. -->
**Additional context**
<!-- Add any other context about the problem here. -->
**Technical report**
Error
* Error: IndexError
* Percentage: 100
* Parameters: `()`
* File name: `searx/engines/bing_videos.py:84`
* Function: `response`
* Code: `metadata = loads(result.xpath('.//div[@class="vrhdata"]/@vrhm')[0])`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/engines/bing_videos.py`
Content:
```
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 # lint: pylint
3 """Bing (Videos)
4
5 """
6
7 from json import loads
8 from urllib.parse import urlencode
9
10 from lxml import html
11
12 from searx.utils import match_language
13 from searx.engines.bing import language_aliases
14
15 from searx.engines.bing import ( # pylint: disable=unused-import
16 _fetch_supported_languages,
17 supported_languages_url,
18 )
19
20 about = {
21 "website": 'https://www.bing.com/videos',
22 "wikidata_id": 'Q4914152',
23 "official_api_documentation": 'https://www.microsoft.com/en-us/bing/apis/bing-video-search-api',
24 "use_official_api": False,
25 "require_api_key": False,
26 "results": 'HTML',
27 }
28
29 categories = ['videos', 'web']
30 paging = True
31 safesearch = True
32 time_range_support = True
33 send_accept_language_header = True
34 number_of_results = 28
35
36 base_url = 'https://www.bing.com/'
37 search_string = (
38 # fmt: off
39 'videos/search'
40 '?{query}'
41 '&count={count}'
42 '&first={first}'
43 '&scope=video'
44 '&FORM=QBLH'
45 # fmt: on
46 )
47 time_range_string = '&qft=+filterui:videoage-lt{interval}'
48 time_range_dict = {'day': '1440', 'week': '10080', 'month': '43200', 'year': '525600'}
49
50 # safesearch definitions
51 safesearch_types = {2: 'STRICT', 1: 'DEMOTE', 0: 'OFF'}
52
53
54 # do search-request
55 def request(query, params):
56 offset = ((params['pageno'] - 1) * number_of_results) + 1
57
58 search_path = search_string.format(query=urlencode({'q': query}), count=number_of_results, first=offset)
59
60 # safesearch cookie
61 params['cookies']['SRCHHPGUSR'] = 'ADLT=' + safesearch_types.get(params['safesearch'], 'DEMOTE')
62
63 # language cookie
64 language = match_language(params['language'], supported_languages, language_aliases).lower()
65 params['cookies']['_EDGE_S'] = 'mkt=' + language + '&F=1'
66
67 # query and paging
68 params['url'] = base_url + search_path
69
70 # time range
71 if params['time_range'] in time_range_dict:
72 params['url'] += time_range_string.format(interval=time_range_dict[params['time_range']])
73
74 return params
75
76
77 # get response from search-request
78 def response(resp):
79 results = []
80
81 dom = html.fromstring(resp.text)
82
83 for result in dom.xpath('//div[@class="dg_u"]'):
84 metadata = loads(result.xpath('.//div[@class="vrhdata"]/@vrhm')[0])
85 info = ' - '.join(result.xpath('.//div[@class="mc_vtvc_meta_block"]//span/text()')).strip()
86 content = '{0} - {1}'.format(metadata['du'], info)
87 thumbnail = '{0}th?id={1}'.format(base_url, metadata['thid'])
88 results.append(
89 {
90 'url': metadata['murl'],
91 'thumbnail': thumbnail,
92 'title': metadata.get('vt', ''),
93 'content': content,
94 'template': 'videos.html',
95 }
96 )
97
98 return results
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/searx/engines/bing_videos.py b/searx/engines/bing_videos.py
--- a/searx/engines/bing_videos.py
+++ b/searx/engines/bing_videos.py
@@ -80,7 +80,7 @@
dom = html.fromstring(resp.text)
- for result in dom.xpath('//div[@class="dg_u"]'):
+ for result in dom.xpath('//div[@class="dg_u"]/div[contains(@class, "mc_vtvc")]'):
metadata = loads(result.xpath('.//div[@class="vrhdata"]/@vrhm')[0])
info = ' - '.join(result.xpath('.//div[@class="mc_vtvc_meta_block"]//span/text()')).strip()
content = '{0} - {1}'.format(metadata['du'], info)
|
{"golden_diff": "diff --git a/searx/engines/bing_videos.py b/searx/engines/bing_videos.py\n--- a/searx/engines/bing_videos.py\n+++ b/searx/engines/bing_videos.py\n@@ -80,7 +80,7 @@\n \n dom = html.fromstring(resp.text)\n \n- for result in dom.xpath('//div[@class=\"dg_u\"]'):\n+ for result in dom.xpath('//div[@class=\"dg_u\"]/div[contains(@class, \"mc_vtvc\")]'):\n metadata = loads(result.xpath('.//div[@class=\"vrhdata\"]/@vrhm')[0])\n info = ' - '.join(result.xpath('.//div[@class=\"mc_vtvc_meta_block\"]//span/text()')).strip()\n content = '{0} - {1}'.format(metadata['du'], info)\n", "issue": "Bug: bing videos engine\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\nRepository: https://github.com/searxng/searxng\r\nBranch: master\r\nVersion: 2022.08.14-649d1a14\r\n<!-- Check if these values are correct -->\r\n\r\n**How did you install SearXNG?**\r\nsearxng-docker\r\n\r\n**What happened?**\r\nbing video search crashed\r\n\r\n**How To Reproduce**\r\nsearch for anything\r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n**Screenshots & Logs**\r\n<!-- If applicable, add screenshots, logs to help explain your problem. -->\r\n\r\n**Additional context**\r\n<!-- Add any other context about the problem here. -->\r\n\r\n**Technical report**\r\n\r\nError\r\n * Error: IndexError\r\n * Percentage: 100\r\n * Parameters: `()`\r\n * File name: `searx/engines/bing_videos.py:84`\r\n * Function: `response`\r\n * Code: `metadata = loads(result.xpath('.//div[@class=\"vrhdata\"]/@vrhm')[0])`\r\n\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Bing (Videos)\n\n\"\"\"\n\nfrom json import loads\nfrom urllib.parse import urlencode\n\nfrom lxml import html\n\nfrom searx.utils import match_language\nfrom searx.engines.bing import language_aliases\n\nfrom searx.engines.bing import ( # pylint: disable=unused-import\n _fetch_supported_languages,\n supported_languages_url,\n)\n\nabout = {\n \"website\": 'https://www.bing.com/videos',\n \"wikidata_id\": 'Q4914152',\n \"official_api_documentation\": 'https://www.microsoft.com/en-us/bing/apis/bing-video-search-api',\n \"use_official_api\": False,\n \"require_api_key\": False,\n \"results\": 'HTML',\n}\n\ncategories = ['videos', 'web']\npaging = True\nsafesearch = True\ntime_range_support = True\nsend_accept_language_header = True\nnumber_of_results = 28\n\nbase_url = 'https://www.bing.com/'\nsearch_string = (\n # fmt: off\n 'videos/search'\n '?{query}'\n '&count={count}'\n '&first={first}'\n '&scope=video'\n '&FORM=QBLH'\n # fmt: on\n)\ntime_range_string = '&qft=+filterui:videoage-lt{interval}'\ntime_range_dict = {'day': '1440', 'week': '10080', 'month': '43200', 'year': '525600'}\n\n# safesearch definitions\nsafesearch_types = {2: 'STRICT', 1: 'DEMOTE', 0: 'OFF'}\n\n\n# do search-request\ndef request(query, params):\n offset = ((params['pageno'] - 1) * number_of_results) + 1\n\n search_path = search_string.format(query=urlencode({'q': query}), count=number_of_results, first=offset)\n\n # safesearch cookie\n params['cookies']['SRCHHPGUSR'] = 'ADLT=' + safesearch_types.get(params['safesearch'], 'DEMOTE')\n\n # language cookie\n language = match_language(params['language'], supported_languages, language_aliases).lower()\n params['cookies']['_EDGE_S'] = 'mkt=' + language + '&F=1'\n\n # query and paging\n params['url'] = base_url + search_path\n\n # time range\n if params['time_range'] in 
time_range_dict:\n params['url'] += time_range_string.format(interval=time_range_dict[params['time_range']])\n\n return params\n\n\n# get response from search-request\ndef response(resp):\n results = []\n\n dom = html.fromstring(resp.text)\n\n for result in dom.xpath('//div[@class=\"dg_u\"]'):\n metadata = loads(result.xpath('.//div[@class=\"vrhdata\"]/@vrhm')[0])\n info = ' - '.join(result.xpath('.//div[@class=\"mc_vtvc_meta_block\"]//span/text()')).strip()\n content = '{0} - {1}'.format(metadata['du'], info)\n thumbnail = '{0}th?id={1}'.format(base_url, metadata['thid'])\n results.append(\n {\n 'url': metadata['murl'],\n 'thumbnail': thumbnail,\n 'title': metadata.get('vt', ''),\n 'content': content,\n 'template': 'videos.html',\n }\n )\n\n return results\n", "path": "searx/engines/bing_videos.py"}], "after_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"Bing (Videos)\n\n\"\"\"\n\nfrom json import loads\nfrom urllib.parse import urlencode\n\nfrom lxml import html\n\nfrom searx.utils import match_language\nfrom searx.engines.bing import language_aliases\n\nfrom searx.engines.bing import ( # pylint: disable=unused-import\n _fetch_supported_languages,\n supported_languages_url,\n)\n\nabout = {\n \"website\": 'https://www.bing.com/videos',\n \"wikidata_id\": 'Q4914152',\n \"official_api_documentation\": 'https://www.microsoft.com/en-us/bing/apis/bing-video-search-api',\n \"use_official_api\": False,\n \"require_api_key\": False,\n \"results\": 'HTML',\n}\n\ncategories = ['videos', 'web']\npaging = True\nsafesearch = True\ntime_range_support = True\nsend_accept_language_header = True\nnumber_of_results = 28\n\nbase_url = 'https://www.bing.com/'\nsearch_string = (\n # fmt: off\n 'videos/search'\n '?{query}'\n '&count={count}'\n '&first={first}'\n '&scope=video'\n '&FORM=QBLH'\n # fmt: on\n)\ntime_range_string = '&qft=+filterui:videoage-lt{interval}'\ntime_range_dict = {'day': '1440', 'week': '10080', 'month': '43200', 'year': '525600'}\n\n# safesearch definitions\nsafesearch_types = {2: 'STRICT', 1: 'DEMOTE', 0: 'OFF'}\n\n\n# do search-request\ndef request(query, params):\n offset = ((params['pageno'] - 1) * number_of_results) + 1\n\n search_path = search_string.format(query=urlencode({'q': query}), count=number_of_results, first=offset)\n\n # safesearch cookie\n params['cookies']['SRCHHPGUSR'] = 'ADLT=' + safesearch_types.get(params['safesearch'], 'DEMOTE')\n\n # language cookie\n language = match_language(params['language'], supported_languages, language_aliases).lower()\n params['cookies']['_EDGE_S'] = 'mkt=' + language + '&F=1'\n\n # query and paging\n params['url'] = base_url + search_path\n\n # time range\n if params['time_range'] in time_range_dict:\n params['url'] += time_range_string.format(interval=time_range_dict[params['time_range']])\n\n return params\n\n\n# get response from search-request\ndef response(resp):\n results = []\n\n dom = html.fromstring(resp.text)\n\n for result in dom.xpath('//div[@class=\"dg_u\"]/div[contains(@class, \"mc_vtvc\")]'):\n metadata = loads(result.xpath('.//div[@class=\"vrhdata\"]/@vrhm')[0])\n info = ' - '.join(result.xpath('.//div[@class=\"mc_vtvc_meta_block\"]//span/text()')).strip()\n content = '{0} - {1}'.format(metadata['du'], info)\n thumbnail = '{0}th?id={1}'.format(base_url, metadata['thid'])\n results.append(\n {\n 'url': metadata['murl'],\n 'thumbnail': thumbnail,\n 'title': metadata.get('vt', ''),\n 'content': content,\n 'template': 'videos.html',\n }\n )\n\n return results\n", "path": 
"searx/engines/bing_videos.py"}]}
| 1,492 | 186 |
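The narrowed XPath in the bing_videos patch above only keeps result containers that actually hold video metadata; a minimal lxml sketch with hypothetical markup shows why the broader selector could trigger the `IndexError` from the report:

```python
from lxml import html

# Hypothetical markup: one dg_u tile that carries video metadata and one that does not.
doc = html.fromstring(
    '<div>'
    '<div class="dg_u"><div class="mc_vtvc"><div class="vrhdata" vrhm="{}"></div></div></div>'
    '<div class="dg_u"><div class="mc_other"></div></div>'
    '</div>'
)

old = doc.xpath('//div[@class="dg_u"]')                                   # 2 matches
new = doc.xpath('//div[@class="dg_u"]/div[contains(@class, "mc_vtvc")]')  # 1 match

# With the old selector, the metadata-less tile makes
# result.xpath('.//div[@class="vrhdata"]/@vrhm')[0] raise IndexError.
print(len(old), len(new))
```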
gh_patches_debug_24839
|
rasdani/github-patches
|
git_diff
|
kymatio__kymatio-312
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BUG scattering3d_qm7.py
It seems that line 180 should be modified as follows:
Current:
order_0, order_1, order_2 = [], [], []
Should be:
order_0, orders_1_and_2 = [], []
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/3d/scattering3d_qm7.py`
Content:
```
1 """
2 3D scattering quantum chemistry regression
3 ==========================================
4 This uses the 3D scattering on a standard dataset.
5 """
6
7 import numpy as np
8 import time
9 import torch
10 import os
11
12 from sklearn import linear_model, model_selection, preprocessing, pipeline
13 from kymatio.scattering3d import HarmonicScattering3D
14 from kymatio.scattering3d.utils import compute_integrals, generate_weighted_sum_of_gaussians
15 from kymatio.datasets import fetch_qm7
16 from kymatio.caching import get_cache_dir
17 from scipy.spatial.distance import pdist
18
19
20 def evaluate_linear_regression(X, y, n_folds=5):
21 """
22 Evaluates linear ridge regression predictions of y using X.
23
24 Parameters
25 ----------
26 X: numpy array
27 input features, shape (N, D)
28 y: numpy array
29 target value, shape (N, 1)
30
31 """
32 n_datapoints = X.shape[0]
33 P = np.random.permutation(n_datapoints).reshape((n_folds, -1))
34 cross_val_folds = []
35
36 for i_fold in range(n_folds):
37 fold = (np.concatenate(P[np.arange(n_folds) != i_fold], axis=0), P[i_fold])
38 cross_val_folds.append(fold)
39
40 alphas = 10.**(-np.arange(0, 10))
41 for i, alpha in enumerate(alphas):
42 regressor = pipeline.make_pipeline(
43 preprocessing.StandardScaler(), linear_model.Ridge(alpha=alpha))
44 y_prediction = model_selection.cross_val_predict(
45 regressor, X=X, y=y, cv=cross_val_folds)
46 MAE = np.mean(np.abs(y_prediction - y))
47 RMSE = np.sqrt(np.mean((y_prediction - y)**2))
48 print('Ridge regression, alpha: {}, MAE: {}, RMSE: {}'.format(
49 alpha, MAE, RMSE))
50
51
52 def get_valence(charges):
53 """
54 Returns the number valence electrons of a particle given the
55 nuclear charge.
56
57 Parameters
58 ----------
59 charges: numpy array
60 array containing the nuclear charges, arbitrary size
61
62 Returns
63 -------
64 valence_charges : numpy array
65 same size as the input
66 """
67 return (
68 charges * (charges <= 2) +
69 (charges - 2) * np.logical_and(charges > 2, charges <= 10) +
70 (charges - 10) * np.logical_and(charges > 10, charges <= 18))
71
72
73 def get_qm7_energies():
74 """
75 Loads the energies of the molecules of the QM7 dataset.
76
77 Returns
78 -------
79 energies: numpy array
80 array containing the energies of the molecules
81 """
82 qm7 = fetch_qm7()
83 return qm7['energies']
84
85
86
87 def get_qm7_positions_and_charges(sigma, overlapping_precision=1e-1):
88 """
89 Loads the positions and charges of the molecules of the QM7 dataset.
90 QM7 is a dataset of 7165 organic molecules with up to 7 non-hydrogen
91 atoms, whose energies were computed with a quantun chemistry
92 computational method named Density Functional Theory.
93 This dataset has been made available to train machine learning models
94 to predict these energies.
95
96 Parameters
97 ----------
98 sigma : float
99 width parameter of the Gaussian that represents a particle
100
101 overlapping_precision : float, optional
102 affects the scaling of the positions. The positions are re-scaled
103 such that two Gaussian functions of width sigma centerd at the qm7
104 positions overlapp with amplitude <= the overlapping_precision
105
106 Returns
107 -------
108 positions, charges, valence_charges: torch arrays
109 array containing the positions, charges and valence charges
110 of the QM7 database molecules
111 """
112 qm7 = fetch_qm7(align=True)
113 positions = qm7['positions']
114 charges = qm7['charges'].astype('float32')
115 valence_charges = get_valence(charges)
116
117 # normalize positions
118 min_dist = np.inf
119 for i in range(positions.shape[0]):
120 n_atoms = np.sum(charges[i] != 0)
121 pos = positions[i, :n_atoms, :]
122 min_dist = min(min_dist, pdist(pos).min())
123 delta = sigma * np.sqrt(-8 * np.log(overlapping_precision))
124 positions = positions * delta / min_dist
125
126 return (torch.from_numpy(positions),
127 torch.from_numpy(charges),
128 torch.from_numpy(valence_charges))
129
130
131 def compute_qm7_solid_harmonic_scattering_coefficients(
132 M=192, N=128, O=96, sigma=2., J=2, L=3,
133 integral_powers=(0.5, 1., 2., 3.), batch_size=16):
134 """
135 Computes the scattering coefficients of the molecules of the
136 QM7 database. Channels used are full charges, valence charges
137 and core charges. Linear regression of the qm7 energies with
138 the given values gives MAE 2.75, RMSE 4.18 (kcal.mol-1).
139
140 Parameters
141 ----------
142 M, N, O: int
143 dimensions of the numerical grid
144 sigma : float
145 width parameter of the Gaussian that represents a particle
146 J: int
147 maximal scale of the solid harmonic wavelets
148 L: int
149 maximal first order of the solid harmonic wavelets
150 integral_powers: list of int
151 powers for the integrals
152 batch_size: int
153 size of the batch for computations
154
155 Returns
156 -------
157 order_0: torch tensor
158 array containing zeroth-order scattering coefficients
159 orders_1_and_2: torch tensor
160 array containing first- and second-order scattering coefficients
161 """
162 cuda = torch.cuda.is_available()
163 grid = torch.from_numpy(
164 np.fft.ifftshift(
165 np.mgrid[-M//2:-M//2+M, -N//2:-N//2+N, -O//2:-O//2+O].astype('float32'),
166 axes=(1, 2, 3)))
167 pos, full_charges, valence_charges = get_qm7_positions_and_charges(sigma)
168
169 n_molecules = pos.size(0)
170 n_batches = np.ceil(n_molecules / batch_size).astype(int)
171
172 scattering = HarmonicScattering3D(J=J, shape=(M, N, O), L=L, sigma_0=sigma)
173
174 if cuda:
175 grid = grid.cuda()
176 pos = pos.cuda()
177 full_charges = full_charges.cuda()
178 valence_charges = valence_charges.cuda()
179 scattering.cuda()
180
181 order_0, order_1, order_2 = [], [], []
182 print('Computing solid harmonic scattering coefficients of {} molecules '
183 'of QM7 database on {}'.format(pos.size(0), 'GPU' if cuda else 'CPU'))
184 print('sigma: {}, L: {}, J: {}, integral powers: {}'.format(sigma, L, J, integral_powers))
185
186 this_time = None
187 last_time = None
188 for i in range(n_batches):
189 this_time = time.time()
190 if last_time is not None:
191 dt = this_time - last_time
192 print("Iteration {} ETA: [{:02}:{:02}:{:02}]".format(
193 i + 1, int(((n_batches - i - 1) * dt) // 3600),
194 int((((n_batches - i - 1) * dt) // 60) % 60),
195 int(((n_batches - i - 1) * dt) % 60)), end='\r')
196 else:
197 print("Iteration {} ETA: {}".format(i + 1,'-'),end='\r')
198 last_time = this_time
199 time.sleep(1)
200
201 start, end = i * batch_size, min((i + 1) * batch_size, n_molecules)
202
203 pos_batch = pos[start:end]
204 full_batch = full_charges[start:end]
205 val_batch = valence_charges[start:end]
206
207 full_density_batch = generate_weighted_sum_of_gaussians(
208 grid, pos_batch, full_batch, sigma, cuda=cuda)
209 full_order_0 = compute_integrals(full_density_batch, integral_powers)
210 scattering.max_order = 2
211 scattering.method = 'integral'
212 scattering.integral_powers = integral_powers
213 full_scattering = scattering(full_density_batch)
214
215 val_density_batch = generate_weighted_sum_of_gaussians(
216 grid, pos_batch, val_batch, sigma, cuda=cuda)
217 val_order_0 = compute_integrals(val_density_batch, integral_powers)
218 val_scattering= scattering(val_density_batch)
219
220 core_density_batch = full_density_batch - val_density_batch
221 core_order_0 = compute_integrals(core_density_batch, integral_powers)
222 core_scattering = scattering(core_density_batch)
223
224
225 order_0.append(
226 torch.stack([full_order_0, val_order_0, core_order_0], dim=-1))
227 orders_1_and_2.append(
228 torch.stack(
229 [full_scattering, val_scattering, core_scattering], dim=-1))
230
231 order_0 = torch.cat(order_0, dim=0)
232 orders_1_and_2 = torch.cat(orders_1_and_2, dim=0)
233
234 return order_0, orders_1_and_2
235
236 M, N, O, J, L = 192, 128, 96, 2, 3
237 integral_powers = [0.5, 1., 2., 3.]
238 sigma = 2.
239
240 order_0, orders_1_and_2 = compute_qm7_solid_harmonic_scattering_coefficients(
241 M=M, N=N, O=O, J=J, L=L, integral_powers=integral_powers,
242 sigma=sigma, batch_size=8)
243
244 n_molecules = order_0.size(0)
245
246 np_order_0 = order_0.numpy().reshape((n_molecules, -1))
247 np_orders_1_and_2 = orders_1_and_2.numpy().reshape((n_molecules, -1))
248
249 basename = 'qm7_L_{}_J_{}_sigma_{}_MNO_{}_powers_{}.npy'.format(
250 L, J, sigma, (M, N, O), integral_powers)
251 cachedir = get_cache_dir("qm7/experiments")
252 np.save(os.path.join(cachedir, 'order_0_' + basename), np_order_0)
253 np.save(os.path.join(
254 cachedir, 'orders_1_and_2_' + basename), np_orders_1_and_2)
255
256 scattering_coef = np.concatenate([np_order_0, np_orders_1_and_2], axis=1)
257 target = get_qm7_energies()
258
259 evaluate_linear_regression(scattering_coef, target)
260
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/3d/scattering3d_qm7.py b/examples/3d/scattering3d_qm7.py
--- a/examples/3d/scattering3d_qm7.py
+++ b/examples/3d/scattering3d_qm7.py
@@ -88,7 +88,7 @@
"""
Loads the positions and charges of the molecules of the QM7 dataset.
QM7 is a dataset of 7165 organic molecules with up to 7 non-hydrogen
- atoms, whose energies were computed with a quantun chemistry
+ atoms, whose energies were computed with a quantum chemistry
computational method named Density Functional Theory.
This dataset has been made available to train machine learning models
to predict these energies.
@@ -178,7 +178,7 @@
valence_charges = valence_charges.cuda()
scattering.cuda()
- order_0, order_1, order_2 = [], [], []
+ order_0, orders_1_and_2 = [], []
print('Computing solid harmonic scattering coefficients of {} molecules '
'of QM7 database on {}'.format(pos.size(0), 'GPU' if cuda else 'CPU'))
print('sigma: {}, L: {}, J: {}, integral powers: {}'.format(sigma, L, J, integral_powers))
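The renamed accumulators matter because the batch loop appends to `orders_1_and_2` and the function later concatenates and returns it; with the original `order_0, order_1, order_2 = [], [], []` line, the `orders_1_and_2.append(...)` call raises a NameError. A stripped-down sketch of the pattern, using placeholder tensor shapes (the real coefficient shapes depend on J, L and the integral powers):

```python
import torch

# Names must match the appends inside the batch loop.
order_0, orders_1_and_2 = [], []

for _ in range(3):  # stand-in for the per-batch loop
    order_0.append(torch.zeros(8, 4, 3))          # placeholder zeroth-order coefficients
    orders_1_and_2.append(torch.zeros(8, 12, 3))  # placeholder first/second-order coefficients

order_0 = torch.cat(order_0, dim=0)
orders_1_and_2 = torch.cat(orders_1_and_2, dim=0)
print(order_0.shape, orders_1_and_2.shape)  # torch.Size([24, 4, 3]) torch.Size([24, 12, 3])
```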
|
{"golden_diff": "diff --git a/examples/3d/scattering3d_qm7.py b/examples/3d/scattering3d_qm7.py\n--- a/examples/3d/scattering3d_qm7.py\n+++ b/examples/3d/scattering3d_qm7.py\n@@ -88,7 +88,7 @@\n \"\"\"\n Loads the positions and charges of the molecules of the QM7 dataset.\n QM7 is a dataset of 7165 organic molecules with up to 7 non-hydrogen\n- atoms, whose energies were computed with a quantun chemistry\n+ atoms, whose energies were computed with a quantum chemistry\n computational method named Density Functional Theory.\n This dataset has been made available to train machine learning models\n to predict these energies.\n@@ -178,7 +178,7 @@\n valence_charges = valence_charges.cuda()\n scattering.cuda()\n \n- order_0, order_1, order_2 = [], [], []\n+ order_0, orders_1_and_2 = [], []\n print('Computing solid harmonic scattering coefficients of {} molecules '\n 'of QM7 database on {}'.format(pos.size(0), 'GPU' if cuda else 'CPU'))\n print('sigma: {}, L: {}, J: {}, integral powers: {}'.format(sigma, L, J, integral_powers))\n", "issue": "BUG scattering3d_qm7.py\nIt seems that line 180 should be modified as follows:\r\nCurrent:\r\norder_0, order_1, order_2 = [], [], []\r\nShould be:\r\norder_0, orders_1_and_2 = [], []\n", "before_files": [{"content": "\"\"\"\n3D scattering quantum chemistry regression\n==========================================\nThis uses the 3D scattering on a standard dataset.\n\"\"\"\n\nimport numpy as np\nimport time\nimport torch\nimport os\n\nfrom sklearn import linear_model, model_selection, preprocessing, pipeline\nfrom kymatio.scattering3d import HarmonicScattering3D\nfrom kymatio.scattering3d.utils import compute_integrals, generate_weighted_sum_of_gaussians\nfrom kymatio.datasets import fetch_qm7\nfrom kymatio.caching import get_cache_dir\nfrom scipy.spatial.distance import pdist\n\n\ndef evaluate_linear_regression(X, y, n_folds=5):\n \"\"\"\n Evaluates linear ridge regression predictions of y using X.\n\n Parameters\n ----------\n X: numpy array\n input features, shape (N, D)\n y: numpy array\n target value, shape (N, 1)\n\n \"\"\"\n n_datapoints = X.shape[0]\n P = np.random.permutation(n_datapoints).reshape((n_folds, -1))\n cross_val_folds = []\n\n for i_fold in range(n_folds):\n fold = (np.concatenate(P[np.arange(n_folds) != i_fold], axis=0), P[i_fold])\n cross_val_folds.append(fold)\n\n alphas = 10.**(-np.arange(0, 10))\n for i, alpha in enumerate(alphas):\n regressor = pipeline.make_pipeline(\n preprocessing.StandardScaler(), linear_model.Ridge(alpha=alpha))\n y_prediction = model_selection.cross_val_predict(\n regressor, X=X, y=y, cv=cross_val_folds)\n MAE = np.mean(np.abs(y_prediction - y))\n RMSE = np.sqrt(np.mean((y_prediction - y)**2))\n print('Ridge regression, alpha: {}, MAE: {}, RMSE: {}'.format(\n alpha, MAE, RMSE))\n\n\ndef get_valence(charges):\n \"\"\"\n Returns the number valence electrons of a particle given the\n nuclear charge.\n\n Parameters\n ----------\n charges: numpy array\n array containing the nuclear charges, arbitrary size\n\n Returns\n -------\n valence_charges : numpy array\n same size as the input\n \"\"\"\n return (\n charges * (charges <= 2) +\n (charges - 2) * np.logical_and(charges > 2, charges <= 10) +\n (charges - 10) * np.logical_and(charges > 10, charges <= 18))\n\n\ndef get_qm7_energies():\n \"\"\"\n Loads the energies of the molecules of the QM7 dataset.\n\n Returns\n -------\n energies: numpy array\n array containing the energies of the molecules\n \"\"\"\n qm7 = fetch_qm7()\n return qm7['energies']\n\n\n\ndef 
get_qm7_positions_and_charges(sigma, overlapping_precision=1e-1):\n \"\"\"\n Loads the positions and charges of the molecules of the QM7 dataset.\n QM7 is a dataset of 7165 organic molecules with up to 7 non-hydrogen\n atoms, whose energies were computed with a quantun chemistry\n computational method named Density Functional Theory.\n This dataset has been made available to train machine learning models\n to predict these energies.\n\n Parameters\n ----------\n sigma : float\n width parameter of the Gaussian that represents a particle\n\n overlapping_precision : float, optional\n affects the scaling of the positions. The positions are re-scaled\n such that two Gaussian functions of width sigma centerd at the qm7\n positions overlapp with amplitude <= the overlapping_precision\n\n Returns\n -------\n positions, charges, valence_charges: torch arrays\n array containing the positions, charges and valence charges\n of the QM7 database molecules\n \"\"\"\n qm7 = fetch_qm7(align=True)\n positions = qm7['positions']\n charges = qm7['charges'].astype('float32')\n valence_charges = get_valence(charges)\n\n # normalize positions\n min_dist = np.inf\n for i in range(positions.shape[0]):\n n_atoms = np.sum(charges[i] != 0)\n pos = positions[i, :n_atoms, :]\n min_dist = min(min_dist, pdist(pos).min())\n delta = sigma * np.sqrt(-8 * np.log(overlapping_precision))\n positions = positions * delta / min_dist\n\n return (torch.from_numpy(positions),\n torch.from_numpy(charges),\n torch.from_numpy(valence_charges))\n\n\ndef compute_qm7_solid_harmonic_scattering_coefficients(\n M=192, N=128, O=96, sigma=2., J=2, L=3,\n integral_powers=(0.5, 1., 2., 3.), batch_size=16):\n \"\"\"\n Computes the scattering coefficients of the molecules of the\n QM7 database. Channels used are full charges, valence charges\n and core charges. 
Linear regression of the qm7 energies with\n the given values gives MAE 2.75, RMSE 4.18 (kcal.mol-1).\n\n Parameters\n ----------\n M, N, O: int\n dimensions of the numerical grid\n sigma : float\n width parameter of the Gaussian that represents a particle\n J: int\n maximal scale of the solid harmonic wavelets\n L: int\n maximal first order of the solid harmonic wavelets\n integral_powers: list of int\n powers for the integrals\n batch_size: int\n size of the batch for computations\n\n Returns\n -------\n order_0: torch tensor\n array containing zeroth-order scattering coefficients\n orders_1_and_2: torch tensor\n array containing first- and second-order scattering coefficients\n \"\"\"\n cuda = torch.cuda.is_available()\n grid = torch.from_numpy(\n np.fft.ifftshift(\n np.mgrid[-M//2:-M//2+M, -N//2:-N//2+N, -O//2:-O//2+O].astype('float32'),\n axes=(1, 2, 3)))\n pos, full_charges, valence_charges = get_qm7_positions_and_charges(sigma)\n\n n_molecules = pos.size(0)\n n_batches = np.ceil(n_molecules / batch_size).astype(int)\n\n scattering = HarmonicScattering3D(J=J, shape=(M, N, O), L=L, sigma_0=sigma)\n\n if cuda:\n grid = grid.cuda()\n pos = pos.cuda()\n full_charges = full_charges.cuda()\n valence_charges = valence_charges.cuda()\n scattering.cuda()\n\n order_0, order_1, order_2 = [], [], []\n print('Computing solid harmonic scattering coefficients of {} molecules '\n 'of QM7 database on {}'.format(pos.size(0), 'GPU' if cuda else 'CPU'))\n print('sigma: {}, L: {}, J: {}, integral powers: {}'.format(sigma, L, J, integral_powers))\n\n this_time = None\n last_time = None\n for i in range(n_batches):\n this_time = time.time()\n if last_time is not None:\n dt = this_time - last_time\n print(\"Iteration {} ETA: [{:02}:{:02}:{:02}]\".format(\n i + 1, int(((n_batches - i - 1) * dt) // 3600),\n int((((n_batches - i - 1) * dt) // 60) % 60),\n int(((n_batches - i - 1) * dt) % 60)), end='\\r')\n else:\n print(\"Iteration {} ETA: {}\".format(i + 1,'-'),end='\\r')\n last_time = this_time\n time.sleep(1)\n\n start, end = i * batch_size, min((i + 1) * batch_size, n_molecules)\n\n pos_batch = pos[start:end]\n full_batch = full_charges[start:end]\n val_batch = valence_charges[start:end]\n\n full_density_batch = generate_weighted_sum_of_gaussians(\n grid, pos_batch, full_batch, sigma, cuda=cuda)\n full_order_0 = compute_integrals(full_density_batch, integral_powers)\n scattering.max_order = 2\n scattering.method = 'integral'\n scattering.integral_powers = integral_powers\n full_scattering = scattering(full_density_batch)\n\n val_density_batch = generate_weighted_sum_of_gaussians(\n grid, pos_batch, val_batch, sigma, cuda=cuda)\n val_order_0 = compute_integrals(val_density_batch, integral_powers)\n val_scattering= scattering(val_density_batch)\n\n core_density_batch = full_density_batch - val_density_batch\n core_order_0 = compute_integrals(core_density_batch, integral_powers)\n core_scattering = scattering(core_density_batch)\n\n\n order_0.append(\n torch.stack([full_order_0, val_order_0, core_order_0], dim=-1))\n orders_1_and_2.append(\n torch.stack(\n [full_scattering, val_scattering, core_scattering], dim=-1))\n\n order_0 = torch.cat(order_0, dim=0)\n orders_1_and_2 = torch.cat(orders_1_and_2, dim=0)\n\n return order_0, orders_1_and_2\n\nM, N, O, J, L = 192, 128, 96, 2, 3\nintegral_powers = [0.5, 1., 2., 3.]\nsigma = 2.\n\norder_0, orders_1_and_2 = compute_qm7_solid_harmonic_scattering_coefficients(\n M=M, N=N, O=O, J=J, L=L, integral_powers=integral_powers,\n sigma=sigma, batch_size=8)\n\nn_molecules = 
order_0.size(0)\n\nnp_order_0 = order_0.numpy().reshape((n_molecules, -1))\nnp_orders_1_and_2 = orders_1_and_2.numpy().reshape((n_molecules, -1))\n\nbasename = 'qm7_L_{}_J_{}_sigma_{}_MNO_{}_powers_{}.npy'.format(\n L, J, sigma, (M, N, O), integral_powers)\ncachedir = get_cache_dir(\"qm7/experiments\")\nnp.save(os.path.join(cachedir, 'order_0_' + basename), np_order_0)\nnp.save(os.path.join(\n cachedir, 'orders_1_and_2_' + basename), np_orders_1_and_2)\n\nscattering_coef = np.concatenate([np_order_0, np_orders_1_and_2], axis=1)\ntarget = get_qm7_energies()\n\nevaluate_linear_regression(scattering_coef, target)\n", "path": "examples/3d/scattering3d_qm7.py"}], "after_files": [{"content": "\"\"\"\n3D scattering quantum chemistry regression\n==========================================\nThis uses the 3D scattering on a standard dataset.\n\"\"\"\n\nimport numpy as np\nimport time\nimport torch\nimport os\n\nfrom sklearn import linear_model, model_selection, preprocessing, pipeline\nfrom kymatio.scattering3d import HarmonicScattering3D\nfrom kymatio.scattering3d.utils import compute_integrals, generate_weighted_sum_of_gaussians\nfrom kymatio.datasets import fetch_qm7\nfrom kymatio.caching import get_cache_dir\nfrom scipy.spatial.distance import pdist\n\n\ndef evaluate_linear_regression(X, y, n_folds=5):\n \"\"\"\n Evaluates linear ridge regression predictions of y using X.\n\n Parameters\n ----------\n X: numpy array\n input features, shape (N, D)\n y: numpy array\n target value, shape (N, 1)\n\n \"\"\"\n n_datapoints = X.shape[0]\n P = np.random.permutation(n_datapoints).reshape((n_folds, -1))\n cross_val_folds = []\n\n for i_fold in range(n_folds):\n fold = (np.concatenate(P[np.arange(n_folds) != i_fold], axis=0), P[i_fold])\n cross_val_folds.append(fold)\n\n alphas = 10.**(-np.arange(0, 10))\n for i, alpha in enumerate(alphas):\n regressor = pipeline.make_pipeline(\n preprocessing.StandardScaler(), linear_model.Ridge(alpha=alpha))\n y_prediction = model_selection.cross_val_predict(\n regressor, X=X, y=y, cv=cross_val_folds)\n MAE = np.mean(np.abs(y_prediction - y))\n RMSE = np.sqrt(np.mean((y_prediction - y)**2))\n print('Ridge regression, alpha: {}, MAE: {}, RMSE: {}'.format(\n alpha, MAE, RMSE))\n\n\ndef get_valence(charges):\n \"\"\"\n Returns the number valence electrons of a particle given the\n nuclear charge.\n\n Parameters\n ----------\n charges: numpy array\n array containing the nuclear charges, arbitrary size\n\n Returns\n -------\n valence_charges : numpy array\n same size as the input\n \"\"\"\n return (\n charges * (charges <= 2) +\n (charges - 2) * np.logical_and(charges > 2, charges <= 10) +\n (charges - 10) * np.logical_and(charges > 10, charges <= 18))\n\n\ndef get_qm7_energies():\n \"\"\"\n Loads the energies of the molecules of the QM7 dataset.\n\n Returns\n -------\n energies: numpy array\n array containing the energies of the molecules\n \"\"\"\n qm7 = fetch_qm7()\n return qm7['energies']\n\n\n\ndef get_qm7_positions_and_charges(sigma, overlapping_precision=1e-1):\n \"\"\"\n Loads the positions and charges of the molecules of the QM7 dataset.\n QM7 is a dataset of 7165 organic molecules with up to 7 non-hydrogen\n atoms, whose energies were computed with a quantum chemistry\n computational method named Density Functional Theory.\n This dataset has been made available to train machine learning models\n to predict these energies.\n\n Parameters\n ----------\n sigma : float\n width parameter of the Gaussian that represents a particle\n\n overlapping_precision : float, 
optional\n affects the scaling of the positions. The positions are re-scaled\n such that two Gaussian functions of width sigma centerd at the qm7\n positions overlapp with amplitude <= the overlapping_precision\n\n Returns\n -------\n positions, charges, valence_charges: torch arrays\n array containing the positions, charges and valence charges\n of the QM7 database molecules\n \"\"\"\n qm7 = fetch_qm7(align=True)\n positions = qm7['positions']\n charges = qm7['charges'].astype('float32')\n valence_charges = get_valence(charges)\n\n # normalize positions\n min_dist = np.inf\n for i in range(positions.shape[0]):\n n_atoms = np.sum(charges[i] != 0)\n pos = positions[i, :n_atoms, :]\n min_dist = min(min_dist, pdist(pos).min())\n delta = sigma * np.sqrt(-8 * np.log(overlapping_precision))\n positions = positions * delta / min_dist\n\n return (torch.from_numpy(positions),\n torch.from_numpy(charges),\n torch.from_numpy(valence_charges))\n\n\ndef compute_qm7_solid_harmonic_scattering_coefficients(\n M=192, N=128, O=96, sigma=2., J=2, L=3,\n integral_powers=(0.5, 1., 2., 3.), batch_size=16):\n \"\"\"\n Computes the scattering coefficients of the molecules of the\n QM7 database. Channels used are full charges, valence charges\n and core charges. Linear regression of the qm7 energies with\n the given values gives MAE 2.75, RMSE 4.18 (kcal.mol-1).\n\n Parameters\n ----------\n M, N, O: int\n dimensions of the numerical grid\n sigma : float\n width parameter of the Gaussian that represents a particle\n J: int\n maximal scale of the solid harmonic wavelets\n L: int\n maximal first order of the solid harmonic wavelets\n integral_powers: list of int\n powers for the integrals\n batch_size: int\n size of the batch for computations\n\n Returns\n -------\n order_0: torch tensor\n array containing zeroth-order scattering coefficients\n orders_1_and_2: torch tensor\n array containing first- and second-order scattering coefficients\n \"\"\"\n cuda = torch.cuda.is_available()\n grid = torch.from_numpy(\n np.fft.ifftshift(\n np.mgrid[-M//2:-M//2+M, -N//2:-N//2+N, -O//2:-O//2+O].astype('float32'),\n axes=(1, 2, 3)))\n pos, full_charges, valence_charges = get_qm7_positions_and_charges(sigma)\n\n n_molecules = pos.size(0)\n n_batches = np.ceil(n_molecules / batch_size).astype(int)\n\n scattering = HarmonicScattering3D(J=J, shape=(M, N, O), L=L, sigma_0=sigma)\n\n if cuda:\n grid = grid.cuda()\n pos = pos.cuda()\n full_charges = full_charges.cuda()\n valence_charges = valence_charges.cuda()\n scattering.cuda()\n\n order_0, orders_1_and_2 = [], []\n print('Computing solid harmonic scattering coefficients of {} molecules '\n 'of QM7 database on {}'.format(pos.size(0), 'GPU' if cuda else 'CPU'))\n print('sigma: {}, L: {}, J: {}, integral powers: {}'.format(sigma, L, J, integral_powers))\n\n this_time = None\n last_time = None\n for i in range(n_batches):\n this_time = time.time()\n if last_time is not None:\n dt = this_time - last_time\n print(\"Iteration {} ETA: [{:02}:{:02}:{:02}]\".format(\n i + 1, int(((n_batches - i - 1) * dt) // 3600),\n int((((n_batches - i - 1) * dt) // 60) % 60),\n int(((n_batches - i - 1) * dt) % 60)), end='\\r')\n else:\n print(\"Iteration {} ETA: {}\".format(i + 1,'-'),end='\\r')\n last_time = this_time\n time.sleep(1)\n\n start, end = i * batch_size, min((i + 1) * batch_size, n_molecules)\n\n pos_batch = pos[start:end]\n full_batch = full_charges[start:end]\n val_batch = valence_charges[start:end]\n\n full_density_batch = generate_weighted_sum_of_gaussians(\n grid, pos_batch, full_batch, 
sigma, cuda=cuda)\n full_order_0 = compute_integrals(full_density_batch, integral_powers)\n scattering.max_order = 2\n scattering.method = 'integral'\n scattering.integral_powers = integral_powers\n full_scattering = scattering(full_density_batch)\n\n val_density_batch = generate_weighted_sum_of_gaussians(\n grid, pos_batch, val_batch, sigma, cuda=cuda)\n val_order_0 = compute_integrals(val_density_batch, integral_powers)\n val_scattering= scattering(val_density_batch)\n\n core_density_batch = full_density_batch - val_density_batch\n core_order_0 = compute_integrals(core_density_batch, integral_powers)\n core_scattering = scattering(core_density_batch)\n\n\n order_0.append(\n torch.stack([full_order_0, val_order_0, core_order_0], dim=-1))\n orders_1_and_2.append(\n torch.stack(\n [full_scattering, val_scattering, core_scattering], dim=-1))\n\n order_0 = torch.cat(order_0, dim=0)\n orders_1_and_2 = torch.cat(orders_1_and_2, dim=0)\n\n return order_0, orders_1_and_2\n\nM, N, O, J, L = 192, 128, 96, 2, 3\nintegral_powers = [0.5, 1., 2., 3.]\nsigma = 2.\n\norder_0, orders_1_and_2 = compute_qm7_solid_harmonic_scattering_coefficients(\n M=M, N=N, O=O, J=J, L=L, integral_powers=integral_powers,\n sigma=sigma, batch_size=8)\n\nn_molecules = order_0.size(0)\n\nnp_order_0 = order_0.numpy().reshape((n_molecules, -1))\nnp_orders_1_and_2 = orders_1_and_2.numpy().reshape((n_molecules, -1))\n\nbasename = 'qm7_L_{}_J_{}_sigma_{}_MNO_{}_powers_{}.npy'.format(\n L, J, sigma, (M, N, O), integral_powers)\ncachedir = get_cache_dir(\"qm7/experiments\")\nnp.save(os.path.join(cachedir, 'order_0_' + basename), np_order_0)\nnp.save(os.path.join(\n cachedir, 'orders_1_and_2_' + basename), np_orders_1_and_2)\n\nscattering_coef = np.concatenate([np_order_0, np_orders_1_and_2], axis=1)\ntarget = get_qm7_energies()\n\nevaluate_linear_regression(scattering_coef, target)\n", "path": "examples/3d/scattering3d_qm7.py"}]}
| 3,427 | 296 |
gh_patches_debug_22806
|
rasdani/github-patches
|
git_diff
|
sopel-irc__sopel-1859
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
announce: Batch up to MAXTARGETS channels at a time if available
The simple loop in `announce.py` is kind of...meh. The number of lines it actually sends to the server could be reduced as much as 75% depending on how many channels Sopel is in when `.announce` is called.
Some IRC servers (Hybrid, for example) send a `MAXTARGETS` field in the `ISUPPORT` properties when a client connects. If this is present, it would be useful for `announce.py` to have that information and send its announcement messages to `MAXTARGETS` channels at a time.
Long-term thing, more of an optimization than anything.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/modules/announce.py`
Content:
```
1 # coding=utf-8
2 """
3 announce.py - Sopel Announcement Plugin
4 Sends announcements to all channels the bot has joined.
5 Copyright © 2013, Elad Alfassa, <[email protected]>
6 Licensed under the Eiffel Forum License 2.
7
8 https://sopel.chat
9 """
10 from __future__ import absolute_import, division, print_function, unicode_literals
11
12 from sopel import plugin
13
14
15 @plugin.command('announce')
16 @plugin.example('.announce Some important message here')
17 @plugin.require_admin('Sorry, I can\'t let you do that', reply=True)
18 @plugin.output_prefix('[ANNOUNCEMENT] ')
19 def announce(bot, trigger):
20 """Send an announcement to all channels the bot is in"""
21 for channel in bot.channels:
22 bot.say(trigger.group(2), channel)
23 bot.reply('Announce complete.')
24
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sopel/modules/announce.py b/sopel/modules/announce.py
--- a/sopel/modules/announce.py
+++ b/sopel/modules/announce.py
@@ -12,12 +12,30 @@
from sopel import plugin
+def _chunks(items, size):
+ """Break a list of items into groups.
+
+ :param items: the collection of items to chunk
+ :type items: :term:`iterable`
+ :param int size: the size of each chunk
+ """
+ # from https://stackoverflow.com/a/312464/5991 with modified names for readability
+ for delim in range(0, len(items), size):
+ yield items[delim:delim + size]
+
+
@plugin.command('announce')
@plugin.example('.announce Some important message here')
@plugin.require_admin('Sorry, I can\'t let you do that', reply=True)
@plugin.output_prefix('[ANNOUNCEMENT] ')
def announce(bot, trigger):
- """Send an announcement to all channels the bot is in"""
- for channel in bot.channels:
- bot.say(trigger.group(2), channel)
+ """Send an announcement to all channels the bot is in."""
+ if trigger.group(2) is None:
+ bot.reply('Announce what? I need a message to say.')
+ return
+
+ size = bot.isupport.get('TARGMAX', {}).get('PRIVMSG', 1)
+ channels = _chunks(bot.channels.keys(), size)
+ for cgroup in channels:
+ bot.say(trigger.group(2), ','.join(cgroup))
bot.reply('Announce complete.')
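A quick usage sketch of the `_chunks` helper, assuming a plain list of channel names stands in for `bot.channels.keys()` and a made-up target limit of 4 in place of the `TARGMAX` lookup:

```python
def _chunks(items, size):
    # Yield successive groups of at most `size` items (same helper as in the patch).
    for delim in range(0, len(items), size):
        yield items[delim:delim + size]

channels = ['#dev', '#ops', '#random', '#help', '#news', '#bots']  # stand-in for bot.channels.keys()
targmax = 4  # stand-in for bot.isupport.get('TARGMAX', {}).get('PRIVMSG', 1)

for group in _chunks(channels, targmax):
    print(','.join(group))
# #dev,#ops,#random,#help
# #news,#bots
```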
|
{"golden_diff": "diff --git a/sopel/modules/announce.py b/sopel/modules/announce.py\n--- a/sopel/modules/announce.py\n+++ b/sopel/modules/announce.py\n@@ -12,12 +12,30 @@\n from sopel import plugin\n \n \n+def _chunks(items, size):\n+ \"\"\"Break a list of items into groups.\n+\n+ :param items: the collection of items to chunk\n+ :type items: :term:`iterable`\n+ :param int size: the size of each chunk\n+ \"\"\"\n+ # from https://stackoverflow.com/a/312464/5991 with modified names for readability\n+ for delim in range(0, len(items), size):\n+ yield items[delim:delim + size]\n+\n+\n @plugin.command('announce')\n @plugin.example('.announce Some important message here')\n @plugin.require_admin('Sorry, I can\\'t let you do that', reply=True)\n @plugin.output_prefix('[ANNOUNCEMENT] ')\n def announce(bot, trigger):\n- \"\"\"Send an announcement to all channels the bot is in\"\"\"\n- for channel in bot.channels:\n- bot.say(trigger.group(2), channel)\n+ \"\"\"Send an announcement to all channels the bot is in.\"\"\"\n+ if trigger.group(2) is None:\n+ bot.reply('Announce what? I need a message to say.')\n+ return\n+\n+ size = bot.isupport.get('TARGMAX', {}).get('PRIVMSG', 1)\n+ channels = _chunks(bot.channels.keys(), size)\n+ for cgroup in channels:\n+ bot.say(trigger.group(2), ','.join(cgroup))\n bot.reply('Announce complete.')\n", "issue": "announce: Batch up to MAXTARGETS channels at a time if available\nThe simple loop in `announce.py` is kind of...meh. The number of lines it actually sends to the server could be reduced as much as 75% depending on how many channels Sopel is in when `.announce` is called.\r\n\r\nSome IRC servers (Hybrid, for example) send a `MAXTARGETS` field in the `ISUPPORT` properties when a client connects. If this is present, it would be useful for `announce.py` to have that information and send its announcement messages to `MAXTARGETS` channels at a time.\r\n\r\nLong-term thing, more of an optimization than anything.\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"\nannounce.py - Sopel Announcement Plugin\nSends announcements to all channels the bot has joined.\nCopyright \u00a9 2013, Elad Alfassa, <[email protected]>\nLicensed under the Eiffel Forum License 2.\n\nhttps://sopel.chat\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom sopel import plugin\n\n\[email protected]('announce')\[email protected]('.announce Some important message here')\[email protected]_admin('Sorry, I can\\'t let you do that', reply=True)\[email protected]_prefix('[ANNOUNCEMENT] ')\ndef announce(bot, trigger):\n \"\"\"Send an announcement to all channels the bot is in\"\"\"\n for channel in bot.channels:\n bot.say(trigger.group(2), channel)\n bot.reply('Announce complete.')\n", "path": "sopel/modules/announce.py"}], "after_files": [{"content": "# coding=utf-8\n\"\"\"\nannounce.py - Sopel Announcement Plugin\nSends announcements to all channels the bot has joined.\nCopyright \u00a9 2013, Elad Alfassa, <[email protected]>\nLicensed under the Eiffel Forum License 2.\n\nhttps://sopel.chat\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom sopel import plugin\n\n\ndef _chunks(items, size):\n \"\"\"Break a list of items into groups.\n\n :param items: the collection of items to chunk\n :type items: :term:`iterable`\n :param int size: the size of each chunk\n \"\"\"\n # from https://stackoverflow.com/a/312464/5991 with modified names for readability\n for delim in range(0, len(items), size):\n yield 
items[delim:delim + size]\n\n\[email protected]('announce')\[email protected]('.announce Some important message here')\[email protected]_admin('Sorry, I can\\'t let you do that', reply=True)\[email protected]_prefix('[ANNOUNCEMENT] ')\ndef announce(bot, trigger):\n \"\"\"Send an announcement to all channels the bot is in.\"\"\"\n if trigger.group(2) is None:\n bot.reply('Announce what? I need a message to say.')\n return\n\n size = bot.isupport.get('TARGMAX', {}).get('PRIVMSG', 1)\n channels = _chunks(bot.channels.keys(), size)\n for cgroup in channels:\n bot.say(trigger.group(2), ','.join(cgroup))\n bot.reply('Announce complete.')\n", "path": "sopel/modules/announce.py"}]}
| 625 | 373 |
gh_patches_debug_40819
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-4393
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
GitLab SAST encounters error parsing the report
**Describe the issue**
`gitlab_sast` was added to the output in #4315. GitLab SAST produced the following error when attempting to parse the report.
```
[Schema] property '/vulnerabilities/0/identifiers/0/url' is not of type: string
[Schema] property '/vulnerabilities/0/links/0/url' is not of type: string
```
I found that `url` value is set to `null` which is not supported e.g.
```json
"vulnerabilities": [
{
"id": "..."
"identifiers": [
{
"name": "CKV_AWS_283",
"type": "checkov",
"url": null,
"value": "CKV_AWS_283"
}
],
"links": [
{
"url": null
}
],
...
]
```
`url` is not a required field so instead of setting the value to `null`, drop the key altogether
schema reference: https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/sast-report-format.json
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/common/output/gitlab_sast.py`
Content:
```
1 from __future__ import annotations
2
3 from datetime import datetime
4 from typing import TYPE_CHECKING, Any
5 from uuid import uuid4
6
7 from checkov.common.output.cyclonedx_consts import SCA_CHECKTYPES
8 from checkov.version import version
9
10 if TYPE_CHECKING:
11 from checkov.common.output.record import Record
12 from checkov.common.output.report import Report
13
14 DEFAULT_SEVERITY_GITLAB_LEVEL = "Unknown"
15 SEVERITY_TO_GITLAB_LEVEL = {
16 "critical": "Critical",
17 "high": "High",
18 "medium": "Medium",
19 "low": "Low",
20 "none": "Info",
21 }
22
23
24 class GitLabSast:
25 def __init__(self, reports: list[Report]):
26 self.reports = reports
27
28 self.sast_json = self.create_sast_json()
29
30 def create_sast_json(self) -> dict[str, Any]:
31 return {
32 "schema": "https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/raw/v15.0.4/dist/sast-report-format.json",
33 "version": "15.0.4",
34 "scan": self._create_scan(),
35 "vulnerabilities": self._create_vulnerabilities(),
36 }
37
38 def _create_scan(self) -> dict[str, Any]:
39 current_datetime = datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%S")
40 scanner = {
41 "id": "checkov",
42 "name": "Checkov",
43 "url": "https://www.checkov.io/",
44 "vendor": {
45 "name": "Bridgecrew",
46 },
47 "version": version,
48 }
49
50 return {
51 "start_time": current_datetime, # needs to be done properly in a later stage
52 "end_time": current_datetime,
53 "analyzer": scanner, # same for us
54 "scanner": scanner,
55 "status": "success",
56 "type": "sast",
57 }
58
59 def _create_vulnerabilities(self) -> list[dict[str, Any]]:
60 vulnerabilities = []
61
62 for report in self.reports:
63 if report.check_type in SCA_CHECKTYPES:
64 for check in report.failed_checks:
65 vulnerability = None
66 if check.check_id.startswith("BC_LIC"):
67 vulnerability = self._create_license_vulnerability(record=check)
68 elif check.check_id.startswith(("BC_VUL", "CKV_CVE")):
69 vulnerability = self._create_cve_vulnerability(record=check)
70
71 if vulnerability:
72 vulnerabilities.append(vulnerability)
73 else:
74 for check in report.failed_checks:
75 vulnerabilities.append(self._create_iac_vulnerability(record=check))
76
77 return vulnerabilities
78
79 def _create_iac_vulnerability(self, record: Record) -> dict[str, Any]:
80 severity = record.severity.name.lower() if record.severity else ""
81
82 return {
83 "id": str(uuid4()),
84 "identifiers": [
85 {
86 "name": record.check_id,
87 "type": "checkov",
88 "url": record.guideline,
89 "value": record.check_id,
90 }
91 ],
92 "links": [
93 {
94 "url": record.guideline,
95 }
96 ],
97 "location": {
98 "file": record.repo_file_path.lstrip("/"),
99 "start_line": record.file_line_range[0],
100 "end_line": record.file_line_range[1],
101 },
102 "name": record.check_name,
103 "description": f"Further info can be found {record.guideline}",
104 "severity": SEVERITY_TO_GITLAB_LEVEL.get(severity, DEFAULT_SEVERITY_GITLAB_LEVEL),
105 "solution": f"Further info can be found {record.guideline}",
106 }
107
108 def _create_cve_vulnerability(self, record: Record) -> dict[str, Any] | None:
109 details = record.vulnerability_details
110 if not details:
111 # this shouldn't happen
112 return None
113
114 severity = record.severity.name.lower() if record.severity else ""
115
116 return {
117 "id": str(uuid4()),
118 "identifiers": [
119 {
120 "name": record.short_description,
121 "type": "cve",
122 "url": details.get("link"),
123 "value": details["id"],
124 }
125 ],
126 "links": [
127 {
128 "url": details.get("link"),
129 }
130 ],
131 "location": {
132 "file": record.repo_file_path.lstrip("/"),
133 },
134 "name": record.short_description,
135 "description": details.get("description"),
136 "severity": SEVERITY_TO_GITLAB_LEVEL.get(severity, DEFAULT_SEVERITY_GITLAB_LEVEL),
137 "solution": details.get("status"),
138 }
139
140 def _create_license_vulnerability(self, record: Record) -> dict[str, Any] | None:
141 details = record.vulnerability_details
142 if not details:
143 # this shouldn't happen
144 return None
145
146 return {
147 "id": str(uuid4()),
148 "identifiers": [
149 {
150 "name": record.check_id,
151 "type": "license",
152 "value": record.check_id,
153 }
154 ],
155 "location": {
156 "file": record.repo_file_path.lstrip("/"),
157 },
158 "name": record.short_description,
159 "description": f"Package {details['package_name']}@{details['package_version']} has license {details['license']}",
160 }
161
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/checkov/common/output/gitlab_sast.py b/checkov/common/output/gitlab_sast.py
--- a/checkov/common/output/gitlab_sast.py
+++ b/checkov/common/output/gitlab_sast.py
@@ -79,21 +79,15 @@
def _create_iac_vulnerability(self, record: Record) -> dict[str, Any]:
severity = record.severity.name.lower() if record.severity else ""
- return {
+ vulnerability: "dict[str, Any]" = {
"id": str(uuid4()),
"identifiers": [
{
"name": record.check_id,
"type": "checkov",
- "url": record.guideline,
"value": record.check_id,
}
],
- "links": [
- {
- "url": record.guideline,
- }
- ],
"location": {
"file": record.repo_file_path.lstrip("/"),
"start_line": record.file_line_range[0],
@@ -105,6 +99,17 @@
"solution": f"Further info can be found {record.guideline}",
}
+ if record.guideline:
+ # url can't be None
+ vulnerability["identifiers"][0]["url"] = record.guideline
+ vulnerability["links"] = [
+ {
+ "url": record.guideline,
+ }
+ ]
+
+ return vulnerability
+
def _create_cve_vulnerability(self, record: Record) -> dict[str, Any] | None:
details = record.vulnerability_details
if not details:
@@ -113,21 +118,15 @@
severity = record.severity.name.lower() if record.severity else ""
- return {
+ vulnerability: "dict[str, Any]" = {
"id": str(uuid4()),
"identifiers": [
{
"name": record.short_description,
"type": "cve",
- "url": details.get("link"),
"value": details["id"],
}
],
- "links": [
- {
- "url": details.get("link"),
- }
- ],
"location": {
"file": record.repo_file_path.lstrip("/"),
},
@@ -137,6 +136,18 @@
"solution": details.get("status"),
}
+ link = details.get("link")
+ if link:
+ # url can't be None
+ vulnerability["identifiers"][0]["url"] = link
+ vulnerability["links"] = [
+ {
+ "url": link,
+ }
+ ]
+
+ return vulnerability
+
def _create_license_vulnerability(self, record: Record) -> dict[str, Any] | None:
details = record.vulnerability_details
if not details:
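The patched builders attach `url` and `links` only when a link is present, so records without a guideline serialize without those keys instead of emitting `null`. A minimal sketch of that idea, using `build_identifier` as a hypothetical helper and a placeholder guideline URL:

```python
import json

def build_identifier(check_id, guideline=None):
    # Hypothetical helper mirroring the patched pattern: build the dict without
    # `url`, then attach it only when a guideline link actually exists.
    identifier = {"name": check_id, "type": "checkov", "value": check_id}
    if guideline:
        identifier["url"] = guideline
    return identifier

print(json.dumps(build_identifier("CKV_AWS_283")))
# {"name": "CKV_AWS_283", "type": "checkov", "value": "CKV_AWS_283"}
print(json.dumps(build_identifier("CKV_AWS_283", "https://docs.example.com/ckv-aws-283")))
# {"name": "CKV_AWS_283", "type": "checkov", "value": "CKV_AWS_283", "url": "https://docs.example.com/ckv-aws-283"}
```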
|
{"golden_diff": "diff --git a/checkov/common/output/gitlab_sast.py b/checkov/common/output/gitlab_sast.py\n--- a/checkov/common/output/gitlab_sast.py\n+++ b/checkov/common/output/gitlab_sast.py\n@@ -79,21 +79,15 @@\n def _create_iac_vulnerability(self, record: Record) -> dict[str, Any]:\n severity = record.severity.name.lower() if record.severity else \"\"\n \n- return {\n+ vulnerability: \"dict[str, Any]\" = {\n \"id\": str(uuid4()),\n \"identifiers\": [\n {\n \"name\": record.check_id,\n \"type\": \"checkov\",\n- \"url\": record.guideline,\n \"value\": record.check_id,\n }\n ],\n- \"links\": [\n- {\n- \"url\": record.guideline,\n- }\n- ],\n \"location\": {\n \"file\": record.repo_file_path.lstrip(\"/\"),\n \"start_line\": record.file_line_range[0],\n@@ -105,6 +99,17 @@\n \"solution\": f\"Further info can be found {record.guideline}\",\n }\n \n+ if record.guideline:\n+ # url can't be None\n+ vulnerability[\"identifiers\"][0][\"url\"] = record.guideline\n+ vulnerability[\"links\"] = [\n+ {\n+ \"url\": record.guideline,\n+ }\n+ ]\n+\n+ return vulnerability\n+\n def _create_cve_vulnerability(self, record: Record) -> dict[str, Any] | None:\n details = record.vulnerability_details\n if not details:\n@@ -113,21 +118,15 @@\n \n severity = record.severity.name.lower() if record.severity else \"\"\n \n- return {\n+ vulnerability: \"dict[str, Any]\" = {\n \"id\": str(uuid4()),\n \"identifiers\": [\n {\n \"name\": record.short_description,\n \"type\": \"cve\",\n- \"url\": details.get(\"link\"),\n \"value\": details[\"id\"],\n }\n ],\n- \"links\": [\n- {\n- \"url\": details.get(\"link\"),\n- }\n- ],\n \"location\": {\n \"file\": record.repo_file_path.lstrip(\"/\"),\n },\n@@ -137,6 +136,18 @@\n \"solution\": details.get(\"status\"),\n }\n \n+ link = details.get(\"link\")\n+ if link:\n+ # url can't be None\n+ vulnerability[\"identifiers\"][0][\"url\"] = link\n+ vulnerability[\"links\"] = [\n+ {\n+ \"url\": link,\n+ }\n+ ]\n+\n+ return vulnerability\n+\n def _create_license_vulnerability(self, record: Record) -> dict[str, Any] | None:\n details = record.vulnerability_details\n if not details:\n", "issue": "GitLab SAST encounters error parsing the report\n**Describe the issue**\r\n`gitlab_sast` was added to the output in #4315. 
GitLab SAST produced the following error when attempting to parse the report.\r\n\r\n```\r\n[Schema] property '/vulnerabilities/0/identifiers/0/url' is not of type: string\r\n[Schema] property '/vulnerabilities/0/links/0/url' is not of type: string\r\n```\r\n\r\nI found that `url` value is set to `null` which is not supported e.g.\r\n\r\n```json\r\n \"vulnerabilities\": [\r\n {\r\n \"id\": \"...\"\r\n \"identifiers\": [\r\n {\r\n \"name\": \"CKV_AWS_283\",\r\n \"type\": \"checkov\",\r\n \"url\": null,\r\n \"value\": \"CKV_AWS_283\"\r\n }\r\n ],\r\n \"links\": [\r\n {\r\n \"url\": null\r\n }\r\n ],\r\n ...\r\n ]\r\n```\r\n\r\n`url` is not a required field so instead of setting the value to `null`, drop the key altogether\r\n\r\nschema reference: https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/blob/master/dist/sast-report-format.json\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom datetime import datetime\nfrom typing import TYPE_CHECKING, Any\nfrom uuid import uuid4\n\nfrom checkov.common.output.cyclonedx_consts import SCA_CHECKTYPES\nfrom checkov.version import version\n\nif TYPE_CHECKING:\n from checkov.common.output.record import Record\n from checkov.common.output.report import Report\n\nDEFAULT_SEVERITY_GITLAB_LEVEL = \"Unknown\"\nSEVERITY_TO_GITLAB_LEVEL = {\n \"critical\": \"Critical\",\n \"high\": \"High\",\n \"medium\": \"Medium\",\n \"low\": \"Low\",\n \"none\": \"Info\",\n}\n\n\nclass GitLabSast:\n def __init__(self, reports: list[Report]):\n self.reports = reports\n\n self.sast_json = self.create_sast_json()\n\n def create_sast_json(self) -> dict[str, Any]:\n return {\n \"schema\": \"https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/raw/v15.0.4/dist/sast-report-format.json\",\n \"version\": \"15.0.4\",\n \"scan\": self._create_scan(),\n \"vulnerabilities\": self._create_vulnerabilities(),\n }\n\n def _create_scan(self) -> dict[str, Any]:\n current_datetime = datetime.utcnow().strftime(\"%Y-%m-%dT%H:%M:%S\")\n scanner = {\n \"id\": \"checkov\",\n \"name\": \"Checkov\",\n \"url\": \"https://www.checkov.io/\",\n \"vendor\": {\n \"name\": \"Bridgecrew\",\n },\n \"version\": version,\n }\n\n return {\n \"start_time\": current_datetime, # needs to be done properly in a later stage\n \"end_time\": current_datetime,\n \"analyzer\": scanner, # same for us\n \"scanner\": scanner,\n \"status\": \"success\",\n \"type\": \"sast\",\n }\n\n def _create_vulnerabilities(self) -> list[dict[str, Any]]:\n vulnerabilities = []\n\n for report in self.reports:\n if report.check_type in SCA_CHECKTYPES:\n for check in report.failed_checks:\n vulnerability = None\n if check.check_id.startswith(\"BC_LIC\"):\n vulnerability = self._create_license_vulnerability(record=check)\n elif check.check_id.startswith((\"BC_VUL\", \"CKV_CVE\")):\n vulnerability = self._create_cve_vulnerability(record=check)\n\n if vulnerability:\n vulnerabilities.append(vulnerability)\n else:\n for check in report.failed_checks:\n vulnerabilities.append(self._create_iac_vulnerability(record=check))\n\n return vulnerabilities\n\n def _create_iac_vulnerability(self, record: Record) -> dict[str, Any]:\n severity = record.severity.name.lower() if record.severity else \"\"\n\n return {\n \"id\": str(uuid4()),\n \"identifiers\": [\n {\n \"name\": record.check_id,\n \"type\": \"checkov\",\n \"url\": record.guideline,\n \"value\": record.check_id,\n }\n ],\n \"links\": [\n {\n \"url\": record.guideline,\n }\n ],\n \"location\": {\n \"file\": 
record.repo_file_path.lstrip(\"/\"),\n \"start_line\": record.file_line_range[0],\n \"end_line\": record.file_line_range[1],\n },\n \"name\": record.check_name,\n \"description\": f\"Further info can be found {record.guideline}\",\n \"severity\": SEVERITY_TO_GITLAB_LEVEL.get(severity, DEFAULT_SEVERITY_GITLAB_LEVEL),\n \"solution\": f\"Further info can be found {record.guideline}\",\n }\n\n def _create_cve_vulnerability(self, record: Record) -> dict[str, Any] | None:\n details = record.vulnerability_details\n if not details:\n # this shouldn't happen\n return None\n\n severity = record.severity.name.lower() if record.severity else \"\"\n\n return {\n \"id\": str(uuid4()),\n \"identifiers\": [\n {\n \"name\": record.short_description,\n \"type\": \"cve\",\n \"url\": details.get(\"link\"),\n \"value\": details[\"id\"],\n }\n ],\n \"links\": [\n {\n \"url\": details.get(\"link\"),\n }\n ],\n \"location\": {\n \"file\": record.repo_file_path.lstrip(\"/\"),\n },\n \"name\": record.short_description,\n \"description\": details.get(\"description\"),\n \"severity\": SEVERITY_TO_GITLAB_LEVEL.get(severity, DEFAULT_SEVERITY_GITLAB_LEVEL),\n \"solution\": details.get(\"status\"),\n }\n\n def _create_license_vulnerability(self, record: Record) -> dict[str, Any] | None:\n details = record.vulnerability_details\n if not details:\n # this shouldn't happen\n return None\n\n return {\n \"id\": str(uuid4()),\n \"identifiers\": [\n {\n \"name\": record.check_id,\n \"type\": \"license\",\n \"value\": record.check_id,\n }\n ],\n \"location\": {\n \"file\": record.repo_file_path.lstrip(\"/\"),\n },\n \"name\": record.short_description,\n \"description\": f\"Package {details['package_name']}@{details['package_version']} has license {details['license']}\",\n }\n", "path": "checkov/common/output/gitlab_sast.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom datetime import datetime\nfrom typing import TYPE_CHECKING, Any\nfrom uuid import uuid4\n\nfrom checkov.common.output.cyclonedx_consts import SCA_CHECKTYPES\nfrom checkov.version import version\n\nif TYPE_CHECKING:\n from checkov.common.output.record import Record\n from checkov.common.output.report import Report\n\nDEFAULT_SEVERITY_GITLAB_LEVEL = \"Unknown\"\nSEVERITY_TO_GITLAB_LEVEL = {\n \"critical\": \"Critical\",\n \"high\": \"High\",\n \"medium\": \"Medium\",\n \"low\": \"Low\",\n \"none\": \"Info\",\n}\n\n\nclass GitLabSast:\n def __init__(self, reports: list[Report]):\n self.reports = reports\n\n self.sast_json = self.create_sast_json()\n\n def create_sast_json(self) -> dict[str, Any]:\n return {\n \"schema\": \"https://gitlab.com/gitlab-org/security-products/security-report-schemas/-/raw/v15.0.4/dist/sast-report-format.json\",\n \"version\": \"15.0.4\",\n \"scan\": self._create_scan(),\n \"vulnerabilities\": self._create_vulnerabilities(),\n }\n\n def _create_scan(self) -> dict[str, Any]:\n current_datetime = datetime.utcnow().strftime(\"%Y-%m-%dT%H:%M:%S\")\n scanner = {\n \"id\": \"checkov\",\n \"name\": \"Checkov\",\n \"url\": \"https://www.checkov.io/\",\n \"vendor\": {\n \"name\": \"Bridgecrew\",\n },\n \"version\": version,\n }\n\n return {\n \"start_time\": current_datetime, # needs to be done properly in a later stage\n \"end_time\": current_datetime,\n \"analyzer\": scanner, # same for us\n \"scanner\": scanner,\n \"status\": \"success\",\n \"type\": \"sast\",\n }\n\n def _create_vulnerabilities(self) -> list[dict[str, Any]]:\n vulnerabilities = []\n\n for report in self.reports:\n if report.check_type in 
SCA_CHECKTYPES:\n for check in report.failed_checks:\n vulnerability = None\n if check.check_id.startswith(\"BC_LIC\"):\n vulnerability = self._create_license_vulnerability(record=check)\n elif check.check_id.startswith((\"BC_VUL\", \"CKV_CVE\")):\n vulnerability = self._create_cve_vulnerability(record=check)\n\n if vulnerability:\n vulnerabilities.append(vulnerability)\n else:\n for check in report.failed_checks:\n vulnerabilities.append(self._create_iac_vulnerability(record=check))\n\n return vulnerabilities\n\n def _create_iac_vulnerability(self, record: Record) -> dict[str, Any]:\n severity = record.severity.name.lower() if record.severity else \"\"\n\n vulnerability: \"dict[str, Any]\" = {\n \"id\": str(uuid4()),\n \"identifiers\": [\n {\n \"name\": record.check_id,\n \"type\": \"checkov\",\n \"value\": record.check_id,\n }\n ],\n \"location\": {\n \"file\": record.repo_file_path.lstrip(\"/\"),\n \"start_line\": record.file_line_range[0],\n \"end_line\": record.file_line_range[1],\n },\n \"name\": record.check_name,\n \"description\": f\"Further info can be found {record.guideline}\",\n \"severity\": SEVERITY_TO_GITLAB_LEVEL.get(severity, DEFAULT_SEVERITY_GITLAB_LEVEL),\n \"solution\": f\"Further info can be found {record.guideline}\",\n }\n\n if record.guideline:\n # url can't be None\n vulnerability[\"identifiers\"][0][\"url\"] = record.guideline\n vulnerability[\"links\"] = [\n {\n \"url\": record.guideline,\n }\n ]\n\n return vulnerability\n\n def _create_cve_vulnerability(self, record: Record) -> dict[str, Any] | None:\n details = record.vulnerability_details\n if not details:\n # this shouldn't happen\n return None\n\n severity = record.severity.name.lower() if record.severity else \"\"\n\n vulnerability: \"dict[str, Any]\" = {\n \"id\": str(uuid4()),\n \"identifiers\": [\n {\n \"name\": record.short_description,\n \"type\": \"cve\",\n \"value\": details[\"id\"],\n }\n ],\n \"location\": {\n \"file\": record.repo_file_path.lstrip(\"/\"),\n },\n \"name\": record.short_description,\n \"description\": details.get(\"description\"),\n \"severity\": SEVERITY_TO_GITLAB_LEVEL.get(severity, DEFAULT_SEVERITY_GITLAB_LEVEL),\n \"solution\": details.get(\"status\"),\n }\n\n link = details.get(\"link\")\n if link:\n # url can't be None\n vulnerability[\"identifiers\"][0][\"url\"] = link\n vulnerability[\"links\"] = [\n {\n \"url\": link,\n }\n ]\n\n return vulnerability\n\n def _create_license_vulnerability(self, record: Record) -> dict[str, Any] | None:\n details = record.vulnerability_details\n if not details:\n # this shouldn't happen\n return None\n\n return {\n \"id\": str(uuid4()),\n \"identifiers\": [\n {\n \"name\": record.check_id,\n \"type\": \"license\",\n \"value\": record.check_id,\n }\n ],\n \"location\": {\n \"file\": record.repo_file_path.lstrip(\"/\"),\n },\n \"name\": record.short_description,\n \"description\": f\"Package {details['package_name']}@{details['package_version']} has license {details['license']}\",\n }\n", "path": "checkov/common/output/gitlab_sast.py"}]}
| 2,074 | 636 |
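The checkov record above boils down to one pattern: optional fields in the GitLab SAST report must be omitted entirely rather than emitted as `null`. Below is a minimal, self-contained sketch of that pattern; the `build_identifier` helper and the sample check IDs and URL are illustrative stand-ins, not part of checkov's actual API.

```python
# Sketch: build a GitLab SAST identifier entry, dropping the optional "url"
# key when there is no guideline link, so the report never contains null URLs.
from __future__ import annotations

import json
from typing import Any


def build_identifier(check_id: str, guideline: str | None) -> dict[str, Any]:
    identifier: dict[str, Any] = {
        "name": check_id,
        "type": "checkov",
        "value": check_id,
    }
    if guideline:
        # "url" is optional in the schema, but when present it must be a string.
        identifier["url"] = guideline
    return identifier


if __name__ == "__main__":
    # No guideline available: the "url" key is absent instead of null.
    print(json.dumps(build_identifier("CKV_AWS_283", None), indent=2))
    # Guideline available: "url" is a plain string, which the schema accepts.
    print(json.dumps(build_identifier("CKV_AWS_21", "https://example.org/guideline"), indent=2))
```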
gh_patches_debug_16507
|
rasdani/github-patches
|
git_diff
|
azavea__raster-vision-725
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot handle CRS w/o EPSG
```
Checking for existing output [####################################] 100%
Saving command configuration to /opt/data/rv_root/chip/xview-object_detection/command-config.json...
Saving command configuration to /opt/data/rv_root/train/xview-object-detection-mobilenet/command-config.json...
Saving command configuration to /opt/data/rv_root/bundle/xview-object-detection-mobilenet/command-config.json...
Saving command configuration to /opt/data/rv_root/predict/xview-object-detection-mobilenet/command-config.json...
Saving command configuration to /opt/data/rv_root/eval/xview-object-detection-mobilenet/command-config.json...
python -m rastervision run_command /opt/data/rv_root/chip/xview-object_detection/command-config.json
Making training chips...
2019-03-20 16:00:47:rastervision.utils.files: INFO - Downloading s3://azavea-nyc-ml/cogs/000227.tif to /tmp/tmp98w0p7cl/tmpypmuuz6t/tmpqbscq03c/s3/azavea-nyc-ml/cogs/000227.tif
Traceback (most recent call last):
File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/src/rastervision/__main__.py", line 17, in <module>
rv.main()
File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/opt/src/rastervision/cli/main.py", line 253, in run_command
rv.runner.CommandRunner.run(command_config_uri)
File "/opt/src/rastervision/runner/command_runner.py", line 11, in run
CommandRunner.run_from_proto(msg)
File "/opt/src/rastervision/runner/command_runner.py", line 17, in run_from_proto
command.run()
File "/opt/src/rastervision/command/chip_command.py", line 22, in run
map(lambda s: s.create_scene(cc.task, tmp_dir), cc.train_scenes))
File "/opt/src/rastervision/command/chip_command.py", line 22, in <lambda>
map(lambda s: s.create_scene(cc.task, tmp_dir), cc.train_scenes))
File "/opt/src/rastervision/data/scene_config.py", line 36, in create_scene
raster_source = self.raster_source.create_source(tmp_dir)
File "/opt/src/rastervision/data/raster_source/geotiff_source_config.py", line 70, in create_source
y_shift_meters=y_shift_meters)
File "/opt/src/rastervision/data/raster_source/geotiff_source.py", line 46, in __init__
super().__init__(raster_transformers, temp_dir, channel_order)
File "/opt/src/rastervision/data/raster_source/rasterio_source.py", line 47, in __init__
with self.activate():
File "/opt/src/rastervision/data/activate_mixin.py", line 21, in __enter__
self.activate()
File "/opt/src/rastervision/data/activate_mixin.py", line 54, in do_activate
self._activate()
File "/opt/src/rastervision/data/raster_source/geotiff_source.py", line 104, in _activate
super()._activate()
File "/opt/src/rastervision/data/raster_source/rasterio_source.py", line 106, in _activate
self._set_crs_transformer()
File "/opt/src/rastervision/data/raster_source/geotiff_source.py", line 56, in _set_crs_transformer
self.image_dataset)
File "/opt/src/rastervision/data/crs_transformer/rasterio_crs_transformer.py", line 59, in from_dataset
image_crs = dataset.crs['init']
File "/usr/lib/python3.5/collections/__init__.py", line 986, in __getitem__
raise KeyError(key)
KeyError: 'init'
/tmp/tmpkon0yhf2/tmpyysq6a82/Makefile:6: recipe for target '0' failed
make: *** [0] Error 1
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rastervision/data/crs_transformer/rasterio_crs_transformer.py`
Content:
```
1 import pyproj
2
3 from rasterio.transform import (rowcol, xy)
4
5 from rastervision.data.crs_transformer import (CRSTransformer,
6 IdentityCRSTransformer)
7
8
9 class RasterioCRSTransformer(CRSTransformer):
10 """Transformer for a RasterioRasterSource."""
11
12 def __init__(self, transform, image_crs, map_crs='epsg:4326'):
13 """Construct transformer.
14
15 Args:
16 image_dataset: Rasterio DatasetReader
17 map_crs: CRS code
18 """
19 self.map_proj = pyproj.Proj(init=map_crs)
20 self.image_proj = pyproj.Proj(init=image_crs)
21
22 super().__init__(image_crs, map_crs, transform)
23
24 def map_to_pixel(self, map_point):
25 """Transform point from map to pixel-based coordinates.
26
27 Args:
28 map_point: (x, y) tuple in map coordinates
29
30 Returns:
31 (x, y) tuple in pixel coordinates
32 """
33 image_point = pyproj.transform(self.map_proj, self.image_proj,
34 map_point[0], map_point[1])
35 pixel_point = rowcol(self.transform, image_point[0], image_point[1])
36 pixel_point = (pixel_point[1], pixel_point[0])
37 return pixel_point
38
39 def pixel_to_map(self, pixel_point):
40 """Transform point from pixel to map-based coordinates.
41
42 Args:
43 pixel_point: (x, y) tuple in pixel coordinates
44
45 Returns:
46 (x, y) tuple in map coordinates
47 """
48 image_point = xy(self.transform, int(pixel_point[1]),
49 int(pixel_point[0]))
50 map_point = pyproj.transform(self.image_proj, self.map_proj,
51 image_point[0], image_point[1])
52 return map_point
53
54 @classmethod
55 def from_dataset(cls, dataset, map_crs='epsg:4326'):
56 if dataset.crs is None:
57 return IdentityCRSTransformer()
58 transform = dataset.transform
59 image_crs = dataset.crs['init']
60 return cls(transform, image_crs, map_crs)
61
62 def get_affine_transform(self):
63 return self.transform
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/rastervision/data/crs_transformer/rasterio_crs_transformer.py b/rastervision/data/crs_transformer/rasterio_crs_transformer.py
--- a/rastervision/data/crs_transformer/rasterio_crs_transformer.py
+++ b/rastervision/data/crs_transformer/rasterio_crs_transformer.py
@@ -17,7 +17,7 @@
map_crs: CRS code
"""
self.map_proj = pyproj.Proj(init=map_crs)
- self.image_proj = pyproj.Proj(init=image_crs)
+ self.image_proj = pyproj.Proj(image_crs)
super().__init__(image_crs, map_crs, transform)
@@ -56,7 +56,7 @@
if dataset.crs is None:
return IdentityCRSTransformer()
transform = dataset.transform
- image_crs = dataset.crs['init']
+ image_crs = dataset.crs
return cls(transform, image_crs, map_crs)
def get_affine_transform(self):
|
{"golden_diff": "diff --git a/rastervision/data/crs_transformer/rasterio_crs_transformer.py b/rastervision/data/crs_transformer/rasterio_crs_transformer.py\n--- a/rastervision/data/crs_transformer/rasterio_crs_transformer.py\n+++ b/rastervision/data/crs_transformer/rasterio_crs_transformer.py\n@@ -17,7 +17,7 @@\n map_crs: CRS code\n \"\"\"\n self.map_proj = pyproj.Proj(init=map_crs)\n- self.image_proj = pyproj.Proj(init=image_crs)\n+ self.image_proj = pyproj.Proj(image_crs)\n \n super().__init__(image_crs, map_crs, transform)\n \n@@ -56,7 +56,7 @@\n if dataset.crs is None:\n return IdentityCRSTransformer()\n transform = dataset.transform\n- image_crs = dataset.crs['init']\n+ image_crs = dataset.crs\n return cls(transform, image_crs, map_crs)\n \n def get_affine_transform(self):\n", "issue": "Cannot handle CRS w/o EPSG\n```\r\nChecking for existing output [####################################] 100%\r\nSaving command configuration to /opt/data/rv_root/chip/xview-object_detection/command-config.json...\r\nSaving command configuration to /opt/data/rv_root/train/xview-object-detection-mobilenet/command-config.json...\r\nSaving command configuration to /opt/data/rv_root/bundle/xview-object-detection-mobilenet/command-config.json...\r\nSaving command configuration to /opt/data/rv_root/predict/xview-object-detection-mobilenet/command-config.json...\r\nSaving command configuration to /opt/data/rv_root/eval/xview-object-detection-mobilenet/command-config.json...\r\npython -m rastervision run_command /opt/data/rv_root/chip/xview-object_detection/command-config.json\r\nMaking training chips...\r\n2019-03-20 16:00:47:rastervision.utils.files: INFO - Downloading s3://azavea-nyc-ml/cogs/000227.tif to /tmp/tmp98w0p7cl/tmpypmuuz6t/tmpqbscq03c/s3/azavea-nyc-ml/cogs/000227.tif\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.5/runpy.py\", line 184, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/lib/python3.5/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/opt/src/rastervision/__main__.py\", line 17, in <module>\r\n rv.main()\r\n File \"/usr/local/lib/python3.5/dist-packages/click/core.py\", line 722, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/local/lib/python3.5/dist-packages/click/core.py\", line 697, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/local/lib/python3.5/dist-packages/click/core.py\", line 1066, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/local/lib/python3.5/dist-packages/click/core.py\", line 895, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/local/lib/python3.5/dist-packages/click/core.py\", line 535, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/opt/src/rastervision/cli/main.py\", line 253, in run_command\r\n rv.runner.CommandRunner.run(command_config_uri)\r\n File \"/opt/src/rastervision/runner/command_runner.py\", line 11, in run\r\n CommandRunner.run_from_proto(msg)\r\n File \"/opt/src/rastervision/runner/command_runner.py\", line 17, in run_from_proto\r\n command.run()\r\n File \"/opt/src/rastervision/command/chip_command.py\", line 22, in run\r\n map(lambda s: s.create_scene(cc.task, tmp_dir), cc.train_scenes))\r\n File \"/opt/src/rastervision/command/chip_command.py\", line 22, in <lambda>\r\n map(lambda s: s.create_scene(cc.task, tmp_dir), cc.train_scenes))\r\n File \"/opt/src/rastervision/data/scene_config.py\", line 36, in create_scene\r\n raster_source = self.raster_source.create_source(tmp_dir)\r\n File 
\"/opt/src/rastervision/data/raster_source/geotiff_source_config.py\", line 70, in create_source\r\n y_shift_meters=y_shift_meters)\r\n File \"/opt/src/rastervision/data/raster_source/geotiff_source.py\", line 46, in __init__\r\n super().__init__(raster_transformers, temp_dir, channel_order)\r\n File \"/opt/src/rastervision/data/raster_source/rasterio_source.py\", line 47, in __init__\r\n with self.activate():\r\n File \"/opt/src/rastervision/data/activate_mixin.py\", line 21, in __enter__\r\n self.activate()\r\n File \"/opt/src/rastervision/data/activate_mixin.py\", line 54, in do_activate\r\n self._activate()\r\n File \"/opt/src/rastervision/data/raster_source/geotiff_source.py\", line 104, in _activate\r\n super()._activate()\r\n File \"/opt/src/rastervision/data/raster_source/rasterio_source.py\", line 106, in _activate\r\n self._set_crs_transformer()\r\n File \"/opt/src/rastervision/data/raster_source/geotiff_source.py\", line 56, in _set_crs_transformer\r\n self.image_dataset)\r\n File \"/opt/src/rastervision/data/crs_transformer/rasterio_crs_transformer.py\", line 59, in from_dataset\r\n image_crs = dataset.crs['init']\r\n File \"/usr/lib/python3.5/collections/__init__.py\", line 986, in __getitem__\r\n raise KeyError(key)\r\nKeyError: 'init'\r\n/tmp/tmpkon0yhf2/tmpyysq6a82/Makefile:6: recipe for target '0' failed\r\nmake: *** [0] Error 1\r\n```\n", "before_files": [{"content": "import pyproj\n\nfrom rasterio.transform import (rowcol, xy)\n\nfrom rastervision.data.crs_transformer import (CRSTransformer,\n IdentityCRSTransformer)\n\n\nclass RasterioCRSTransformer(CRSTransformer):\n \"\"\"Transformer for a RasterioRasterSource.\"\"\"\n\n def __init__(self, transform, image_crs, map_crs='epsg:4326'):\n \"\"\"Construct transformer.\n\n Args:\n image_dataset: Rasterio DatasetReader\n map_crs: CRS code\n \"\"\"\n self.map_proj = pyproj.Proj(init=map_crs)\n self.image_proj = pyproj.Proj(init=image_crs)\n\n super().__init__(image_crs, map_crs, transform)\n\n def map_to_pixel(self, map_point):\n \"\"\"Transform point from map to pixel-based coordinates.\n\n Args:\n map_point: (x, y) tuple in map coordinates\n\n Returns:\n (x, y) tuple in pixel coordinates\n \"\"\"\n image_point = pyproj.transform(self.map_proj, self.image_proj,\n map_point[0], map_point[1])\n pixel_point = rowcol(self.transform, image_point[0], image_point[1])\n pixel_point = (pixel_point[1], pixel_point[0])\n return pixel_point\n\n def pixel_to_map(self, pixel_point):\n \"\"\"Transform point from pixel to map-based coordinates.\n\n Args:\n pixel_point: (x, y) tuple in pixel coordinates\n\n Returns:\n (x, y) tuple in map coordinates\n \"\"\"\n image_point = xy(self.transform, int(pixel_point[1]),\n int(pixel_point[0]))\n map_point = pyproj.transform(self.image_proj, self.map_proj,\n image_point[0], image_point[1])\n return map_point\n\n @classmethod\n def from_dataset(cls, dataset, map_crs='epsg:4326'):\n if dataset.crs is None:\n return IdentityCRSTransformer()\n transform = dataset.transform\n image_crs = dataset.crs['init']\n return cls(transform, image_crs, map_crs)\n\n def get_affine_transform(self):\n return self.transform\n", "path": "rastervision/data/crs_transformer/rasterio_crs_transformer.py"}], "after_files": [{"content": "import pyproj\n\nfrom rasterio.transform import (rowcol, xy)\n\nfrom rastervision.data.crs_transformer import (CRSTransformer,\n IdentityCRSTransformer)\n\n\nclass RasterioCRSTransformer(CRSTransformer):\n \"\"\"Transformer for a RasterioRasterSource.\"\"\"\n\n def __init__(self, transform, 
image_crs, map_crs='epsg:4326'):\n \"\"\"Construct transformer.\n\n Args:\n image_dataset: Rasterio DatasetReader\n map_crs: CRS code\n \"\"\"\n self.map_proj = pyproj.Proj(init=map_crs)\n self.image_proj = pyproj.Proj(image_crs)\n\n super().__init__(image_crs, map_crs, transform)\n\n def map_to_pixel(self, map_point):\n \"\"\"Transform point from map to pixel-based coordinates.\n\n Args:\n map_point: (x, y) tuple in map coordinates\n\n Returns:\n (x, y) tuple in pixel coordinates\n \"\"\"\n image_point = pyproj.transform(self.map_proj, self.image_proj,\n map_point[0], map_point[1])\n pixel_point = rowcol(self.transform, image_point[0], image_point[1])\n pixel_point = (pixel_point[1], pixel_point[0])\n return pixel_point\n\n def pixel_to_map(self, pixel_point):\n \"\"\"Transform point from pixel to map-based coordinates.\n\n Args:\n pixel_point: (x, y) tuple in pixel coordinates\n\n Returns:\n (x, y) tuple in map coordinates\n \"\"\"\n image_point = xy(self.transform, int(pixel_point[1]),\n int(pixel_point[0]))\n map_point = pyproj.transform(self.image_proj, self.map_proj,\n image_point[0], image_point[1])\n return map_point\n\n @classmethod\n def from_dataset(cls, dataset, map_crs='epsg:4326'):\n if dataset.crs is None:\n return IdentityCRSTransformer()\n transform = dataset.transform\n image_crs = dataset.crs\n return cls(transform, image_crs, map_crs)\n\n def get_affine_transform(self):\n return self.transform\n", "path": "rastervision/data/crs_transformer/rasterio_crs_transformer.py"}]}
| 2,043 | 241 |
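The raster-vision record above hinges on the fact that `dataset.crs['init']` only works for EPSG-coded rasters, whereas handing the CRS definition itself to pyproj works for any projection. The sketch below illustrates that idea using the modern pyproj `Transformer` API rather than the legacy `Proj(init=...)` / `pyproj.transform` calls in the patched file; the UTM PROJ string is just one example of a CRS with no EPSG authority code.

```python
# Sketch: a CRS defined only by a PROJ string (no EPSG code) can still be fed
# to pyproj directly, which is the behaviour the patch above relies on.
import pyproj

image_crs = pyproj.CRS.from_user_input("+proj=utm +zone=18 +datum=WGS84 +units=m +no_defs")
map_crs = pyproj.CRS.from_user_input("EPSG:4326")

to_image = pyproj.Transformer.from_crs(map_crs, image_crs, always_xy=True)
to_map = pyproj.Transformer.from_crs(image_crs, map_crs, always_xy=True)

x, y = to_image.transform(-75.0, 40.0)  # lon/lat -> projected metres
lon, lat = to_map.transform(x, y)       # and back again
print((round(x, 1), round(y, 1)), (round(lon, 6), round(lat, 6)))
```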
gh_patches_debug_3302
|
rasdani/github-patches
|
git_diff
|
Project-MONAI__MONAI-2093
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Enhance the log of SegmentationSaver
**Is your feature request related to a problem? Please describe.**
We need to print out more info when saving segmentation results, now it's only:
```
saved all the model outputs into files.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `monai/data/nifti_saver.py`
Content:
```
1 # Copyright 2020 - 2021 MONAI Consortium
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 # http://www.apache.org/licenses/LICENSE-2.0
6 # Unless required by applicable law or agreed to in writing, software
7 # distributed under the License is distributed on an "AS IS" BASIS,
8 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
9 # See the License for the specific language governing permissions and
10 # limitations under the License.
11
12 from typing import Dict, Optional, Union
13
14 import numpy as np
15 import torch
16
17 from monai.config import DtypeLike
18 from monai.data.nifti_writer import write_nifti
19 from monai.data.utils import create_file_basename
20 from monai.utils import GridSampleMode, GridSamplePadMode
21 from monai.utils import ImageMetaKey as Key
22
23
24 class NiftiSaver:
25 """
26 Save the data as NIfTI file, it can support single data content or a batch of data.
27 Typically, the data can be segmentation predictions, call `save` for single data
28 or call `save_batch` to save a batch of data together. If no meta data provided,
29 use index from 0 as the filename prefix.
30
31 NB: image should include channel dimension: [B],C,H,W,[D].
32 """
33
34 def __init__(
35 self,
36 output_dir: str = "./",
37 output_postfix: str = "seg",
38 output_ext: str = ".nii.gz",
39 resample: bool = True,
40 mode: Union[GridSampleMode, str] = GridSampleMode.BILINEAR,
41 padding_mode: Union[GridSamplePadMode, str] = GridSamplePadMode.BORDER,
42 align_corners: bool = False,
43 dtype: DtypeLike = np.float64,
44 output_dtype: DtypeLike = np.float32,
45 squeeze_end_dims: bool = True,
46 data_root_dir: str = "",
47 ) -> None:
48 """
49 Args:
50 output_dir: output image directory.
51 output_postfix: a string appended to all output file names.
52 output_ext: output file extension name.
53 resample: whether to resample before saving the data array.
54 mode: {``"bilinear"``, ``"nearest"``}
55 This option is used when ``resample = True``.
56 Interpolation mode to calculate output values. Defaults to ``"bilinear"``.
57 See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample
58 padding_mode: {``"zeros"``, ``"border"``, ``"reflection"``}
59 This option is used when ``resample = True``.
60 Padding mode for outside grid values. Defaults to ``"border"``.
61 See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample
62 align_corners: Geometrically, we consider the pixels of the input as squares rather than points.
63 See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample
64 dtype: data type for resampling computation. Defaults to ``np.float64`` for best precision.
65 If None, use the data type of input data.
66 output_dtype: data type for saving data. Defaults to ``np.float32``.
67 squeeze_end_dims: if True, any trailing singleton dimensions will be removed (after the channel
68 has been moved to the end). So if input is (C,H,W,D), this will be altered to (H,W,D,C), and
69 then if C==1, it will be saved as (H,W,D). If D also ==1, it will be saved as (H,W). If false,
70 image will always be saved as (H,W,D,C).
71 data_root_dir: if not empty, it specifies the beginning parts of the input file's
72 absolute path. it's used to compute `input_file_rel_path`, the relative path to the file from
73 `data_root_dir` to preserve folder structure when saving in case there are files in different
74 folders with the same file names. for example:
75 input_file_name: /foo/bar/test1/image.nii,
76 postfix: seg
77 output_ext: nii.gz
78 output_dir: /output,
79 data_root_dir: /foo/bar,
80 output will be: /output/test1/image/image_seg.nii.gz
81
82 """
83 self.output_dir = output_dir
84 self.output_postfix = output_postfix
85 self.output_ext = output_ext
86 self.resample = resample
87 self.mode: GridSampleMode = GridSampleMode(mode)
88 self.padding_mode: GridSamplePadMode = GridSamplePadMode(padding_mode)
89 self.align_corners = align_corners
90 self.dtype = dtype
91 self.output_dtype = output_dtype
92 self._data_index = 0
93 self.squeeze_end_dims = squeeze_end_dims
94 self.data_root_dir = data_root_dir
95
96 def save(self, data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:
97 """
98 Save data into a Nifti file.
99 The meta_data could optionally have the following keys:
100
101 - ``'filename_or_obj'`` -- for output file name creation, corresponding to filename or object.
102 - ``'original_affine'`` -- for data orientation handling, defaulting to an identity matrix.
103 - ``'affine'`` -- for data output affine, defaulting to an identity matrix.
104 - ``'spatial_shape'`` -- for data output shape.
105 - ``'patch_index'`` -- if the data is a patch of big image, append the patch index to filename.
106
107 When meta_data is specified, the saver will try to resample batch data from the space
108 defined by "affine" to the space defined by "original_affine".
109
110 If meta_data is None, use the default index (starting from 0) as the filename.
111
112 Args:
113 data: target data content that to be saved as a NIfTI format file.
114 Assuming the data shape starts with a channel dimension and followed by spatial dimensions.
115 meta_data: the meta data information corresponding to the data.
116
117 See Also
118 :py:meth:`monai.data.nifti_writer.write_nifti`
119 """
120 filename = meta_data[Key.FILENAME_OR_OBJ] if meta_data else str(self._data_index)
121 self._data_index += 1
122 original_affine = meta_data.get("original_affine", None) if meta_data else None
123 affine = meta_data.get("affine", None) if meta_data else None
124 spatial_shape = meta_data.get("spatial_shape", None) if meta_data else None
125 patch_index = meta_data.get(Key.PATCH_INDEX, None) if meta_data else None
126
127 if isinstance(data, torch.Tensor):
128 data = data.detach().cpu().numpy()
129
130 path = create_file_basename(self.output_postfix, filename, self.output_dir, self.data_root_dir, patch_index)
131 path = f"{path}{self.output_ext}"
132 # change data shape to be (channel, h, w, d)
133 while len(data.shape) < 4:
134 data = np.expand_dims(data, -1)
135 # change data to "channel last" format and write to nifti format file
136 data = np.moveaxis(np.asarray(data), 0, -1)
137
138 # if desired, remove trailing singleton dimensions
139 if self.squeeze_end_dims:
140 while data.shape[-1] == 1:
141 data = np.squeeze(data, -1)
142
143 write_nifti(
144 data,
145 file_name=path,
146 affine=affine,
147 target_affine=original_affine,
148 resample=self.resample,
149 output_spatial_shape=spatial_shape,
150 mode=self.mode,
151 padding_mode=self.padding_mode,
152 align_corners=self.align_corners,
153 dtype=self.dtype,
154 output_dtype=self.output_dtype,
155 )
156
157 def save_batch(self, batch_data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:
158 """
159 Save a batch of data into Nifti format files.
160
161 Spatially it supports up to three dimensions, that is, H, HW, HWD for
162 1D, 2D, 3D respectively (with resampling supports for 2D and 3D only).
163
164 When saving multiple time steps or multiple channels `batch_data`,
165 time and/or modality axes should be appended after the batch dimensions.
166 For example, the shape of a batch of 2D eight-class
167 segmentation probabilities to be saved could be `(batch, 8, 64, 64)`;
168 in this case each item in the batch will be saved as (64, 64, 1, 8)
169 NIfTI file (the third dimension is reserved as a spatial dimension).
170
171 Args:
172 batch_data: target batch data content that save into NIfTI format.
173 meta_data: every key-value in the meta_data is corresponding to a batch of data.
174
175 """
176 for i, data in enumerate(batch_data): # save a batch of files
177 self.save(data=data, meta_data={k: meta_data[k][i] for k in meta_data} if meta_data is not None else None)
178
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/monai/data/nifti_saver.py b/monai/data/nifti_saver.py
--- a/monai/data/nifti_saver.py
+++ b/monai/data/nifti_saver.py
@@ -153,6 +153,7 @@
dtype=self.dtype,
output_dtype=self.output_dtype,
)
+ print(f"file written: {path}.")
def save_batch(self, batch_data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:
"""
|
{"golden_diff": "diff --git a/monai/data/nifti_saver.py b/monai/data/nifti_saver.py\n--- a/monai/data/nifti_saver.py\n+++ b/monai/data/nifti_saver.py\n@@ -153,6 +153,7 @@\n dtype=self.dtype,\n output_dtype=self.output_dtype,\n )\n+ print(f\"file written: {path}.\")\n \n def save_batch(self, batch_data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:\n \"\"\"\n", "issue": "Enhance the log of SegmentationSaver\n**Is your feature request related to a problem? Please describe.**\r\nWe need to print out more info when saving segmentation results, now it's only:\r\n```\r\nsaved all the model outputs into files.\r\n```\r\n\n", "before_files": [{"content": "# Copyright 2020 - 2021 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Dict, Optional, Union\n\nimport numpy as np\nimport torch\n\nfrom monai.config import DtypeLike\nfrom monai.data.nifti_writer import write_nifti\nfrom monai.data.utils import create_file_basename\nfrom monai.utils import GridSampleMode, GridSamplePadMode\nfrom monai.utils import ImageMetaKey as Key\n\n\nclass NiftiSaver:\n \"\"\"\n Save the data as NIfTI file, it can support single data content or a batch of data.\n Typically, the data can be segmentation predictions, call `save` for single data\n or call `save_batch` to save a batch of data together. If no meta data provided,\n use index from 0 as the filename prefix.\n\n NB: image should include channel dimension: [B],C,H,W,[D].\n \"\"\"\n\n def __init__(\n self,\n output_dir: str = \"./\",\n output_postfix: str = \"seg\",\n output_ext: str = \".nii.gz\",\n resample: bool = True,\n mode: Union[GridSampleMode, str] = GridSampleMode.BILINEAR,\n padding_mode: Union[GridSamplePadMode, str] = GridSamplePadMode.BORDER,\n align_corners: bool = False,\n dtype: DtypeLike = np.float64,\n output_dtype: DtypeLike = np.float32,\n squeeze_end_dims: bool = True,\n data_root_dir: str = \"\",\n ) -> None:\n \"\"\"\n Args:\n output_dir: output image directory.\n output_postfix: a string appended to all output file names.\n output_ext: output file extension name.\n resample: whether to resample before saving the data array.\n mode: {``\"bilinear\"``, ``\"nearest\"``}\n This option is used when ``resample = True``.\n Interpolation mode to calculate output values. Defaults to ``\"bilinear\"``.\n See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample\n padding_mode: {``\"zeros\"``, ``\"border\"``, ``\"reflection\"``}\n This option is used when ``resample = True``.\n Padding mode for outside grid values. Defaults to ``\"border\"``.\n See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample\n align_corners: Geometrically, we consider the pixels of the input as squares rather than points.\n See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample\n dtype: data type for resampling computation. Defaults to ``np.float64`` for best precision.\n If None, use the data type of input data.\n output_dtype: data type for saving data. 
Defaults to ``np.float32``.\n squeeze_end_dims: if True, any trailing singleton dimensions will be removed (after the channel\n has been moved to the end). So if input is (C,H,W,D), this will be altered to (H,W,D,C), and\n then if C==1, it will be saved as (H,W,D). If D also ==1, it will be saved as (H,W). If false,\n image will always be saved as (H,W,D,C).\n data_root_dir: if not empty, it specifies the beginning parts of the input file's\n absolute path. it's used to compute `input_file_rel_path`, the relative path to the file from\n `data_root_dir` to preserve folder structure when saving in case there are files in different\n folders with the same file names. for example:\n input_file_name: /foo/bar/test1/image.nii,\n postfix: seg\n output_ext: nii.gz\n output_dir: /output,\n data_root_dir: /foo/bar,\n output will be: /output/test1/image/image_seg.nii.gz\n\n \"\"\"\n self.output_dir = output_dir\n self.output_postfix = output_postfix\n self.output_ext = output_ext\n self.resample = resample\n self.mode: GridSampleMode = GridSampleMode(mode)\n self.padding_mode: GridSamplePadMode = GridSamplePadMode(padding_mode)\n self.align_corners = align_corners\n self.dtype = dtype\n self.output_dtype = output_dtype\n self._data_index = 0\n self.squeeze_end_dims = squeeze_end_dims\n self.data_root_dir = data_root_dir\n\n def save(self, data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:\n \"\"\"\n Save data into a Nifti file.\n The meta_data could optionally have the following keys:\n\n - ``'filename_or_obj'`` -- for output file name creation, corresponding to filename or object.\n - ``'original_affine'`` -- for data orientation handling, defaulting to an identity matrix.\n - ``'affine'`` -- for data output affine, defaulting to an identity matrix.\n - ``'spatial_shape'`` -- for data output shape.\n - ``'patch_index'`` -- if the data is a patch of big image, append the patch index to filename.\n\n When meta_data is specified, the saver will try to resample batch data from the space\n defined by \"affine\" to the space defined by \"original_affine\".\n\n If meta_data is None, use the default index (starting from 0) as the filename.\n\n Args:\n data: target data content that to be saved as a NIfTI format file.\n Assuming the data shape starts with a channel dimension and followed by spatial dimensions.\n meta_data: the meta data information corresponding to the data.\n\n See Also\n :py:meth:`monai.data.nifti_writer.write_nifti`\n \"\"\"\n filename = meta_data[Key.FILENAME_OR_OBJ] if meta_data else str(self._data_index)\n self._data_index += 1\n original_affine = meta_data.get(\"original_affine\", None) if meta_data else None\n affine = meta_data.get(\"affine\", None) if meta_data else None\n spatial_shape = meta_data.get(\"spatial_shape\", None) if meta_data else None\n patch_index = meta_data.get(Key.PATCH_INDEX, None) if meta_data else None\n\n if isinstance(data, torch.Tensor):\n data = data.detach().cpu().numpy()\n\n path = create_file_basename(self.output_postfix, filename, self.output_dir, self.data_root_dir, patch_index)\n path = f\"{path}{self.output_ext}\"\n # change data shape to be (channel, h, w, d)\n while len(data.shape) < 4:\n data = np.expand_dims(data, -1)\n # change data to \"channel last\" format and write to nifti format file\n data = np.moveaxis(np.asarray(data), 0, -1)\n\n # if desired, remove trailing singleton dimensions\n if self.squeeze_end_dims:\n while data.shape[-1] == 1:\n data = np.squeeze(data, -1)\n\n write_nifti(\n data,\n 
file_name=path,\n affine=affine,\n target_affine=original_affine,\n resample=self.resample,\n output_spatial_shape=spatial_shape,\n mode=self.mode,\n padding_mode=self.padding_mode,\n align_corners=self.align_corners,\n dtype=self.dtype,\n output_dtype=self.output_dtype,\n )\n\n def save_batch(self, batch_data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:\n \"\"\"\n Save a batch of data into Nifti format files.\n\n Spatially it supports up to three dimensions, that is, H, HW, HWD for\n 1D, 2D, 3D respectively (with resampling supports for 2D and 3D only).\n\n When saving multiple time steps or multiple channels `batch_data`,\n time and/or modality axes should be appended after the batch dimensions.\n For example, the shape of a batch of 2D eight-class\n segmentation probabilities to be saved could be `(batch, 8, 64, 64)`;\n in this case each item in the batch will be saved as (64, 64, 1, 8)\n NIfTI file (the third dimension is reserved as a spatial dimension).\n\n Args:\n batch_data: target batch data content that save into NIfTI format.\n meta_data: every key-value in the meta_data is corresponding to a batch of data.\n\n \"\"\"\n for i, data in enumerate(batch_data): # save a batch of files\n self.save(data=data, meta_data={k: meta_data[k][i] for k in meta_data} if meta_data is not None else None)\n", "path": "monai/data/nifti_saver.py"}], "after_files": [{"content": "# Copyright 2020 - 2021 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Dict, Optional, Union\n\nimport numpy as np\nimport torch\n\nfrom monai.config import DtypeLike\nfrom monai.data.nifti_writer import write_nifti\nfrom monai.data.utils import create_file_basename\nfrom monai.utils import GridSampleMode, GridSamplePadMode\nfrom monai.utils import ImageMetaKey as Key\n\n\nclass NiftiSaver:\n \"\"\"\n Save the data as NIfTI file, it can support single data content or a batch of data.\n Typically, the data can be segmentation predictions, call `save` for single data\n or call `save_batch` to save a batch of data together. 
If no meta data provided,\n use index from 0 as the filename prefix.\n\n NB: image should include channel dimension: [B],C,H,W,[D].\n \"\"\"\n\n def __init__(\n self,\n output_dir: str = \"./\",\n output_postfix: str = \"seg\",\n output_ext: str = \".nii.gz\",\n resample: bool = True,\n mode: Union[GridSampleMode, str] = GridSampleMode.BILINEAR,\n padding_mode: Union[GridSamplePadMode, str] = GridSamplePadMode.BORDER,\n align_corners: bool = False,\n dtype: DtypeLike = np.float64,\n output_dtype: DtypeLike = np.float32,\n squeeze_end_dims: bool = True,\n data_root_dir: str = \"\",\n ) -> None:\n \"\"\"\n Args:\n output_dir: output image directory.\n output_postfix: a string appended to all output file names.\n output_ext: output file extension name.\n resample: whether to resample before saving the data array.\n mode: {``\"bilinear\"``, ``\"nearest\"``}\n This option is used when ``resample = True``.\n Interpolation mode to calculate output values. Defaults to ``\"bilinear\"``.\n See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample\n padding_mode: {``\"zeros\"``, ``\"border\"``, ``\"reflection\"``}\n This option is used when ``resample = True``.\n Padding mode for outside grid values. Defaults to ``\"border\"``.\n See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample\n align_corners: Geometrically, we consider the pixels of the input as squares rather than points.\n See also: https://pytorch.org/docs/stable/nn.functional.html#grid-sample\n dtype: data type for resampling computation. Defaults to ``np.float64`` for best precision.\n If None, use the data type of input data.\n output_dtype: data type for saving data. Defaults to ``np.float32``.\n squeeze_end_dims: if True, any trailing singleton dimensions will be removed (after the channel\n has been moved to the end). So if input is (C,H,W,D), this will be altered to (H,W,D,C), and\n then if C==1, it will be saved as (H,W,D). If D also ==1, it will be saved as (H,W). If false,\n image will always be saved as (H,W,D,C).\n data_root_dir: if not empty, it specifies the beginning parts of the input file's\n absolute path. it's used to compute `input_file_rel_path`, the relative path to the file from\n `data_root_dir` to preserve folder structure when saving in case there are files in different\n folders with the same file names. 
for example:\n input_file_name: /foo/bar/test1/image.nii,\n postfix: seg\n output_ext: nii.gz\n output_dir: /output,\n data_root_dir: /foo/bar,\n output will be: /output/test1/image/image_seg.nii.gz\n\n \"\"\"\n self.output_dir = output_dir\n self.output_postfix = output_postfix\n self.output_ext = output_ext\n self.resample = resample\n self.mode: GridSampleMode = GridSampleMode(mode)\n self.padding_mode: GridSamplePadMode = GridSamplePadMode(padding_mode)\n self.align_corners = align_corners\n self.dtype = dtype\n self.output_dtype = output_dtype\n self._data_index = 0\n self.squeeze_end_dims = squeeze_end_dims\n self.data_root_dir = data_root_dir\n\n def save(self, data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:\n \"\"\"\n Save data into a Nifti file.\n The meta_data could optionally have the following keys:\n\n - ``'filename_or_obj'`` -- for output file name creation, corresponding to filename or object.\n - ``'original_affine'`` -- for data orientation handling, defaulting to an identity matrix.\n - ``'affine'`` -- for data output affine, defaulting to an identity matrix.\n - ``'spatial_shape'`` -- for data output shape.\n - ``'patch_index'`` -- if the data is a patch of big image, append the patch index to filename.\n\n When meta_data is specified, the saver will try to resample batch data from the space\n defined by \"affine\" to the space defined by \"original_affine\".\n\n If meta_data is None, use the default index (starting from 0) as the filename.\n\n Args:\n data: target data content that to be saved as a NIfTI format file.\n Assuming the data shape starts with a channel dimension and followed by spatial dimensions.\n meta_data: the meta data information corresponding to the data.\n\n See Also\n :py:meth:`monai.data.nifti_writer.write_nifti`\n \"\"\"\n filename = meta_data[Key.FILENAME_OR_OBJ] if meta_data else str(self._data_index)\n self._data_index += 1\n original_affine = meta_data.get(\"original_affine\", None) if meta_data else None\n affine = meta_data.get(\"affine\", None) if meta_data else None\n spatial_shape = meta_data.get(\"spatial_shape\", None) if meta_data else None\n patch_index = meta_data.get(Key.PATCH_INDEX, None) if meta_data else None\n\n if isinstance(data, torch.Tensor):\n data = data.detach().cpu().numpy()\n\n path = create_file_basename(self.output_postfix, filename, self.output_dir, self.data_root_dir, patch_index)\n path = f\"{path}{self.output_ext}\"\n # change data shape to be (channel, h, w, d)\n while len(data.shape) < 4:\n data = np.expand_dims(data, -1)\n # change data to \"channel last\" format and write to nifti format file\n data = np.moveaxis(np.asarray(data), 0, -1)\n\n # if desired, remove trailing singleton dimensions\n if self.squeeze_end_dims:\n while data.shape[-1] == 1:\n data = np.squeeze(data, -1)\n\n write_nifti(\n data,\n file_name=path,\n affine=affine,\n target_affine=original_affine,\n resample=self.resample,\n output_spatial_shape=spatial_shape,\n mode=self.mode,\n padding_mode=self.padding_mode,\n align_corners=self.align_corners,\n dtype=self.dtype,\n output_dtype=self.output_dtype,\n )\n print(f\"file written: {path}.\")\n\n def save_batch(self, batch_data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict] = None) -> None:\n \"\"\"\n Save a batch of data into Nifti format files.\n\n Spatially it supports up to three dimensions, that is, H, HW, HWD for\n 1D, 2D, 3D respectively (with resampling supports for 2D and 3D only).\n\n When saving multiple time steps or multiple channels 
`batch_data`,\n time and/or modality axes should be appended after the batch dimensions.\n For example, the shape of a batch of 2D eight-class\n segmentation probabilities to be saved could be `(batch, 8, 64, 64)`;\n in this case each item in the batch will be saved as (64, 64, 1, 8)\n NIfTI file (the third dimension is reserved as a spatial dimension).\n\n Args:\n batch_data: target batch data content that save into NIfTI format.\n meta_data: every key-value in the meta_data is corresponding to a batch of data.\n\n \"\"\"\n for i, data in enumerate(batch_data): # save a batch of files\n self.save(data=data, meta_data={k: meta_data[k][i] for k in meta_data} if meta_data is not None else None)\n", "path": "monai/data/nifti_saver.py"}]}
| 2,777 | 125 |
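The MONAI record's entire golden diff is a one-line log statement, so its value is easiest to see from the caller's side. The usage sketch below assumes the MONAI version shown in the record (where `monai.data.NiftiSaver` is available) together with `nibabel` installed for the actual NIfTI writing; the file names and output directory are illustrative.

```python
# Sketch: saving a small batch with NiftiSaver; after the patch each item
# additionally prints a "file written: <path>." line naming the output file.
import numpy as np
from monai.data import NiftiSaver

saver = NiftiSaver(output_dir="./out", output_postfix="seg")

# Two fake single-channel 8x8x8 segmentations with per-item meta data.
batch = np.random.randint(0, 2, size=(2, 1, 8, 8, 8)).astype(np.float32)
meta = {"filename_or_obj": ["case_001.nii.gz", "case_002.nii.gz"]}

saver.save_batch(batch, meta)
# Before the patch nothing names the individual files; with it, each written
# path is printed as it is produced.
```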
gh_patches_debug_23949
|
rasdani/github-patches
|
git_diff
|
pypa__virtualenv-1866
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
.gitignore added to places it should not
Refs #1806
Virtualenv now drops a .gitignore file. While I understand where you are coming from, it is a mistake to assume virtualenv owns the target directory. For example, I and many people I work with typically do this:
```
git clone [email protected]:stefanholek/foo
cd foo
virtualenv .
./bin/pip install -e .
./bin/python -m unittest
```
Recent versions of virtualenv add a .gitignore file into my sandbox. To make matters worse it even **overwrites** an existing .gitignore file! This is very clearly wrong.
I suggest to not write the .gitignore file unless virtualenv has created the target directory, or at least don't write it if the target directory already contains a .git directory and/or a .gitignore file.
Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/virtualenv/create/creator.py`
Content:
```
1 from __future__ import absolute_import, print_function, unicode_literals
2
3 import json
4 import logging
5 import os
6 import sys
7 from abc import ABCMeta, abstractmethod
8 from argparse import ArgumentTypeError
9 from ast import literal_eval
10 from collections import OrderedDict
11 from textwrap import dedent
12
13 from six import add_metaclass
14
15 from virtualenv.discovery.cached_py_info import LogCmd
16 from virtualenv.info import WIN_CPYTHON_2
17 from virtualenv.util.path import Path, safe_delete
18 from virtualenv.util.six import ensure_str, ensure_text
19 from virtualenv.util.subprocess import run_cmd
20 from virtualenv.version import __version__
21
22 from .pyenv_cfg import PyEnvCfg
23
24 HERE = Path(os.path.abspath(__file__)).parent
25 DEBUG_SCRIPT = HERE / "debug.py"
26
27
28 class CreatorMeta(object):
29 def __init__(self):
30 self.error = None
31
32
33 @add_metaclass(ABCMeta)
34 class Creator(object):
35 """A class that given a python Interpreter creates a virtual environment"""
36
37 def __init__(self, options, interpreter):
38 """Construct a new virtual environment creator.
39
40 :param options: the CLI option as parsed from :meth:`add_parser_arguments`
41 :param interpreter: the interpreter to create virtual environment from
42 """
43 self.interpreter = interpreter
44 self._debug = None
45 self.dest = Path(options.dest)
46 self.clear = options.clear
47 self.pyenv_cfg = PyEnvCfg.from_folder(self.dest)
48 self.app_data = options.app_data
49
50 def __repr__(self):
51 return ensure_str(self.__unicode__())
52
53 def __unicode__(self):
54 return "{}({})".format(self.__class__.__name__, ", ".join("{}={}".format(k, v) for k, v in self._args()))
55
56 def _args(self):
57 return [
58 ("dest", ensure_text(str(self.dest))),
59 ("clear", self.clear),
60 ]
61
62 @classmethod
63 def can_create(cls, interpreter):
64 """Determine if we can create a virtual environment.
65
66 :param interpreter: the interpreter in question
67 :return: ``None`` if we can't create, any other object otherwise that will be forwarded to \
68 :meth:`add_parser_arguments`
69 """
70 return True
71
72 @classmethod
73 def add_parser_arguments(cls, parser, interpreter, meta, app_data):
74 """Add CLI arguments for the creator.
75
76 :param parser: the CLI parser
77 :param app_data: the application data folder
78 :param interpreter: the interpreter we're asked to create virtual environment for
79 :param meta: value as returned by :meth:`can_create`
80 """
81 parser.add_argument(
82 "dest", help="directory to create virtualenv at", type=cls.validate_dest,
83 )
84 parser.add_argument(
85 "--clear",
86 dest="clear",
87 action="store_true",
88 help="remove the destination directory if exist before starting (will overwrite files otherwise)",
89 default=False,
90 )
91
92 @abstractmethod
93 def create(self):
94 """Perform the virtual environment creation."""
95 raise NotImplementedError
96
97 @classmethod
98 def validate_dest(cls, raw_value):
99 """No path separator in the path, valid chars and must be write-able"""
100
101 def non_write_able(dest, value):
102 common = Path(*os.path.commonprefix([value.parts, dest.parts]))
103 raise ArgumentTypeError(
104 "the destination {} is not write-able at {}".format(dest.relative_to(common), common),
105 )
106
107 # the file system must be able to encode
108 # note in newer CPython this is always utf-8 https://www.python.org/dev/peps/pep-0529/
109 encoding = sys.getfilesystemencoding()
110 refused = OrderedDict()
111 kwargs = {"errors": "ignore"} if encoding != "mbcs" else {}
112 for char in ensure_text(raw_value):
113 try:
114 trip = char.encode(encoding, **kwargs).decode(encoding)
115 if trip == char:
116 continue
117 raise ValueError(trip)
118 except ValueError:
119 refused[char] = None
120 if refused:
121 raise ArgumentTypeError(
122 "the file system codec ({}) cannot handle characters {!r} within {!r}".format(
123 encoding, "".join(refused.keys()), raw_value,
124 ),
125 )
126 if os.pathsep in raw_value:
127 raise ArgumentTypeError(
128 "destination {!r} must not contain the path separator ({}) as this would break "
129 "the activation scripts".format(raw_value, os.pathsep),
130 )
131
132 value = Path(raw_value)
133 if value.exists() and value.is_file():
134 raise ArgumentTypeError("the destination {} already exists and is a file".format(value))
135 if (3, 3) <= sys.version_info <= (3, 6):
136 # pre 3.6 resolve is always strict, aka must exists, sidestep by using os.path operation
137 dest = Path(os.path.realpath(raw_value))
138 else:
139 dest = Path(os.path.abspath(str(value))).resolve() # on Windows absolute does not imply resolve so use both
140 value = dest
141 while dest:
142 if dest.exists():
143 if os.access(ensure_text(str(dest)), os.W_OK):
144 break
145 else:
146 non_write_able(dest, value)
147 base, _ = dest.parent, dest.name
148 if base == dest:
149 non_write_able(dest, value) # pragma: no cover
150 dest = base
151 return str(value)
152
153 def run(self):
154 if self.dest.exists() and self.clear:
155 logging.debug("delete %s", self.dest)
156 safe_delete(self.dest)
157 self.create()
158 self.set_pyenv_cfg()
159 self.setup_ignore_vcs()
160
161 def set_pyenv_cfg(self):
162 self.pyenv_cfg.content = OrderedDict()
163 self.pyenv_cfg["home"] = self.interpreter.system_exec_prefix
164 self.pyenv_cfg["implementation"] = self.interpreter.implementation
165 self.pyenv_cfg["version_info"] = ".".join(str(i) for i in self.interpreter.version_info)
166 self.pyenv_cfg["virtualenv"] = __version__
167
168 def setup_ignore_vcs(self):
169 """Generate ignore instructions for version control systems."""
170 # mark this folder to be ignored by VCS, handle https://www.python.org/dev/peps/pep-0610/#registered-vcs
171 (self.dest / ".gitignore").write_text(
172 dedent(
173 """
174 # created by virtualenv automatically
175 *
176 """,
177 ).lstrip(),
178 )
179 # Mercurial - does not support the .hgignore file inside a subdirectory directly, but only if included via the
180 # subinclude directive from root, at which point on might as well ignore the directory itself, see
181 # https://www.selenic.com/mercurial/hgignore.5.html for more details
182 # Bazaar - does not support ignore files in sub-directories, only at root level via .bzrignore
183 # Subversion - does not support ignore files, requires direct manipulation with the svn tool
184
185 @property
186 def debug(self):
187 """
188 :return: debug information about the virtual environment (only valid after :meth:`create` has run)
189 """
190 if self._debug is None and self.exe is not None:
191 self._debug = get_env_debug_info(self.exe, self.debug_script(), self.app_data)
192 return self._debug
193
194 # noinspection PyMethodMayBeStatic
195 def debug_script(self):
196 return DEBUG_SCRIPT
197
198
199 def get_env_debug_info(env_exe, debug_script, app_data):
200 env = os.environ.copy()
201 env.pop(str("PYTHONPATH"), None)
202
203 with app_data.ensure_extracted(debug_script) as debug_script:
204 cmd = [str(env_exe), str(debug_script)]
205 if WIN_CPYTHON_2:
206 cmd = [ensure_text(i) for i in cmd]
207 logging.debug(str("debug via %r"), LogCmd(cmd))
208 code, out, err = run_cmd(cmd)
209
210 # noinspection PyBroadException
211 try:
212 if code != 0:
213 result = literal_eval(out)
214 else:
215 result = json.loads(out)
216 if err:
217 result["err"] = err
218 except Exception as exception:
219 return {"out": out, "err": err, "returncode": code, "exception": repr(exception)}
220 if "sys" in result and "path" in result["sys"]:
221 del result["sys"]["path"][0]
222 return result
223
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/virtualenv/create/creator.py b/src/virtualenv/create/creator.py
--- a/src/virtualenv/create/creator.py
+++ b/src/virtualenv/create/creator.py
@@ -168,14 +168,16 @@
def setup_ignore_vcs(self):
"""Generate ignore instructions for version control systems."""
# mark this folder to be ignored by VCS, handle https://www.python.org/dev/peps/pep-0610/#registered-vcs
- (self.dest / ".gitignore").write_text(
- dedent(
- """
- # created by virtualenv automatically
- *
- """,
- ).lstrip(),
- )
+ git_ignore = self.dest / ".gitignore"
+ if not git_ignore.exists():
+ git_ignore.write_text(
+ dedent(
+ """
+ # created by virtualenv automatically
+ *
+ """,
+ ).lstrip(),
+ )
# Mercurial - does not support the .hgignore file inside a subdirectory directly, but only if included via the
# subinclude directive from root, at which point on might as well ignore the directory itself, see
# https://www.selenic.com/mercurial/hgignore.5.html for more details
|
{"golden_diff": "diff --git a/src/virtualenv/create/creator.py b/src/virtualenv/create/creator.py\n--- a/src/virtualenv/create/creator.py\n+++ b/src/virtualenv/create/creator.py\n@@ -168,14 +168,16 @@\n def setup_ignore_vcs(self):\n \"\"\"Generate ignore instructions for version control systems.\"\"\"\n # mark this folder to be ignored by VCS, handle https://www.python.org/dev/peps/pep-0610/#registered-vcs\n- (self.dest / \".gitignore\").write_text(\n- dedent(\n- \"\"\"\n- # created by virtualenv automatically\n- *\n- \"\"\",\n- ).lstrip(),\n- )\n+ git_ignore = self.dest / \".gitignore\"\n+ if not git_ignore.exists():\n+ git_ignore.write_text(\n+ dedent(\n+ \"\"\"\n+ # created by virtualenv automatically\n+ *\n+ \"\"\",\n+ ).lstrip(),\n+ )\n # Mercurial - does not support the .hgignore file inside a subdirectory directly, but only if included via the\n # subinclude directive from root, at which point on might as well ignore the directory itself, see\n # https://www.selenic.com/mercurial/hgignore.5.html for more details\n", "issue": ".gitignore added to places it should not\nRefs #1806\r\n\r\nVirtualenv now drops a .gitignore file. While I understand where you are coming from, it is a mistake to assume virtualenv owns the target directory. For example, I and many people I work with typically do this:\r\n\r\n```\r\ngit clone [email protected]:stefanholek/foo\r\ncd foo\r\nvirtualenv .\r\n./bin/pip install -e .\r\n./bin/python -m unittest\r\n```\r\n\r\nRecent versions of virtualenv add a .gitignore file into my sandbox. To make matters worse it even **overwrites** an existing .gitignore file! This is very clearly wrong.\r\n\r\nI suggest to not write the .gitignore file unless virtualenv has created the target directory, or at least don't write it if the target directory already contains a .git directory and/or a .gitignore file.\r\n\r\nThanks!\n", "before_files": [{"content": "from __future__ import absolute_import, print_function, unicode_literals\n\nimport json\nimport logging\nimport os\nimport sys\nfrom abc import ABCMeta, abstractmethod\nfrom argparse import ArgumentTypeError\nfrom ast import literal_eval\nfrom collections import OrderedDict\nfrom textwrap import dedent\n\nfrom six import add_metaclass\n\nfrom virtualenv.discovery.cached_py_info import LogCmd\nfrom virtualenv.info import WIN_CPYTHON_2\nfrom virtualenv.util.path import Path, safe_delete\nfrom virtualenv.util.six import ensure_str, ensure_text\nfrom virtualenv.util.subprocess import run_cmd\nfrom virtualenv.version import __version__\n\nfrom .pyenv_cfg import PyEnvCfg\n\nHERE = Path(os.path.abspath(__file__)).parent\nDEBUG_SCRIPT = HERE / \"debug.py\"\n\n\nclass CreatorMeta(object):\n def __init__(self):\n self.error = None\n\n\n@add_metaclass(ABCMeta)\nclass Creator(object):\n \"\"\"A class that given a python Interpreter creates a virtual environment\"\"\"\n\n def __init__(self, options, interpreter):\n \"\"\"Construct a new virtual environment creator.\n\n :param options: the CLI option as parsed from :meth:`add_parser_arguments`\n :param interpreter: the interpreter to create virtual environment from\n \"\"\"\n self.interpreter = interpreter\n self._debug = None\n self.dest = Path(options.dest)\n self.clear = options.clear\n self.pyenv_cfg = PyEnvCfg.from_folder(self.dest)\n self.app_data = options.app_data\n\n def __repr__(self):\n return ensure_str(self.__unicode__())\n\n def __unicode__(self):\n return \"{}({})\".format(self.__class__.__name__, \", \".join(\"{}={}\".format(k, v) for k, v in self._args()))\n\n def 
_args(self):\n return [\n (\"dest\", ensure_text(str(self.dest))),\n (\"clear\", self.clear),\n ]\n\n @classmethod\n def can_create(cls, interpreter):\n \"\"\"Determine if we can create a virtual environment.\n\n :param interpreter: the interpreter in question\n :return: ``None`` if we can't create, any other object otherwise that will be forwarded to \\\n :meth:`add_parser_arguments`\n \"\"\"\n return True\n\n @classmethod\n def add_parser_arguments(cls, parser, interpreter, meta, app_data):\n \"\"\"Add CLI arguments for the creator.\n\n :param parser: the CLI parser\n :param app_data: the application data folder\n :param interpreter: the interpreter we're asked to create virtual environment for\n :param meta: value as returned by :meth:`can_create`\n \"\"\"\n parser.add_argument(\n \"dest\", help=\"directory to create virtualenv at\", type=cls.validate_dest,\n )\n parser.add_argument(\n \"--clear\",\n dest=\"clear\",\n action=\"store_true\",\n help=\"remove the destination directory if exist before starting (will overwrite files otherwise)\",\n default=False,\n )\n\n @abstractmethod\n def create(self):\n \"\"\"Perform the virtual environment creation.\"\"\"\n raise NotImplementedError\n\n @classmethod\n def validate_dest(cls, raw_value):\n \"\"\"No path separator in the path, valid chars and must be write-able\"\"\"\n\n def non_write_able(dest, value):\n common = Path(*os.path.commonprefix([value.parts, dest.parts]))\n raise ArgumentTypeError(\n \"the destination {} is not write-able at {}\".format(dest.relative_to(common), common),\n )\n\n # the file system must be able to encode\n # note in newer CPython this is always utf-8 https://www.python.org/dev/peps/pep-0529/\n encoding = sys.getfilesystemencoding()\n refused = OrderedDict()\n kwargs = {\"errors\": \"ignore\"} if encoding != \"mbcs\" else {}\n for char in ensure_text(raw_value):\n try:\n trip = char.encode(encoding, **kwargs).decode(encoding)\n if trip == char:\n continue\n raise ValueError(trip)\n except ValueError:\n refused[char] = None\n if refused:\n raise ArgumentTypeError(\n \"the file system codec ({}) cannot handle characters {!r} within {!r}\".format(\n encoding, \"\".join(refused.keys()), raw_value,\n ),\n )\n if os.pathsep in raw_value:\n raise ArgumentTypeError(\n \"destination {!r} must not contain the path separator ({}) as this would break \"\n \"the activation scripts\".format(raw_value, os.pathsep),\n )\n\n value = Path(raw_value)\n if value.exists() and value.is_file():\n raise ArgumentTypeError(\"the destination {} already exists and is a file\".format(value))\n if (3, 3) <= sys.version_info <= (3, 6):\n # pre 3.6 resolve is always strict, aka must exists, sidestep by using os.path operation\n dest = Path(os.path.realpath(raw_value))\n else:\n dest = Path(os.path.abspath(str(value))).resolve() # on Windows absolute does not imply resolve so use both\n value = dest\n while dest:\n if dest.exists():\n if os.access(ensure_text(str(dest)), os.W_OK):\n break\n else:\n non_write_able(dest, value)\n base, _ = dest.parent, dest.name\n if base == dest:\n non_write_able(dest, value) # pragma: no cover\n dest = base\n return str(value)\n\n def run(self):\n if self.dest.exists() and self.clear:\n logging.debug(\"delete %s\", self.dest)\n safe_delete(self.dest)\n self.create()\n self.set_pyenv_cfg()\n self.setup_ignore_vcs()\n\n def set_pyenv_cfg(self):\n self.pyenv_cfg.content = OrderedDict()\n self.pyenv_cfg[\"home\"] = self.interpreter.system_exec_prefix\n self.pyenv_cfg[\"implementation\"] = 
self.interpreter.implementation\n self.pyenv_cfg[\"version_info\"] = \".\".join(str(i) for i in self.interpreter.version_info)\n self.pyenv_cfg[\"virtualenv\"] = __version__\n\n def setup_ignore_vcs(self):\n \"\"\"Generate ignore instructions for version control systems.\"\"\"\n # mark this folder to be ignored by VCS, handle https://www.python.org/dev/peps/pep-0610/#registered-vcs\n (self.dest / \".gitignore\").write_text(\n dedent(\n \"\"\"\n # created by virtualenv automatically\n *\n \"\"\",\n ).lstrip(),\n )\n # Mercurial - does not support the .hgignore file inside a subdirectory directly, but only if included via the\n # subinclude directive from root, at which point on might as well ignore the directory itself, see\n # https://www.selenic.com/mercurial/hgignore.5.html for more details\n # Bazaar - does not support ignore files in sub-directories, only at root level via .bzrignore\n # Subversion - does not support ignore files, requires direct manipulation with the svn tool\n\n @property\n def debug(self):\n \"\"\"\n :return: debug information about the virtual environment (only valid after :meth:`create` has run)\n \"\"\"\n if self._debug is None and self.exe is not None:\n self._debug = get_env_debug_info(self.exe, self.debug_script(), self.app_data)\n return self._debug\n\n # noinspection PyMethodMayBeStatic\n def debug_script(self):\n return DEBUG_SCRIPT\n\n\ndef get_env_debug_info(env_exe, debug_script, app_data):\n env = os.environ.copy()\n env.pop(str(\"PYTHONPATH\"), None)\n\n with app_data.ensure_extracted(debug_script) as debug_script:\n cmd = [str(env_exe), str(debug_script)]\n if WIN_CPYTHON_2:\n cmd = [ensure_text(i) for i in cmd]\n logging.debug(str(\"debug via %r\"), LogCmd(cmd))\n code, out, err = run_cmd(cmd)\n\n # noinspection PyBroadException\n try:\n if code != 0:\n result = literal_eval(out)\n else:\n result = json.loads(out)\n if err:\n result[\"err\"] = err\n except Exception as exception:\n return {\"out\": out, \"err\": err, \"returncode\": code, \"exception\": repr(exception)}\n if \"sys\" in result and \"path\" in result[\"sys\"]:\n del result[\"sys\"][\"path\"][0]\n return result\n", "path": "src/virtualenv/create/creator.py"}], "after_files": [{"content": "from __future__ import absolute_import, print_function, unicode_literals\n\nimport json\nimport logging\nimport os\nimport sys\nfrom abc import ABCMeta, abstractmethod\nfrom argparse import ArgumentTypeError\nfrom ast import literal_eval\nfrom collections import OrderedDict\nfrom textwrap import dedent\n\nfrom six import add_metaclass\n\nfrom virtualenv.discovery.cached_py_info import LogCmd\nfrom virtualenv.info import WIN_CPYTHON_2\nfrom virtualenv.util.path import Path, safe_delete\nfrom virtualenv.util.six import ensure_str, ensure_text\nfrom virtualenv.util.subprocess import run_cmd\nfrom virtualenv.version import __version__\n\nfrom .pyenv_cfg import PyEnvCfg\n\nHERE = Path(os.path.abspath(__file__)).parent\nDEBUG_SCRIPT = HERE / \"debug.py\"\n\n\nclass CreatorMeta(object):\n def __init__(self):\n self.error = None\n\n\n@add_metaclass(ABCMeta)\nclass Creator(object):\n \"\"\"A class that given a python Interpreter creates a virtual environment\"\"\"\n\n def __init__(self, options, interpreter):\n \"\"\"Construct a new virtual environment creator.\n\n :param options: the CLI option as parsed from :meth:`add_parser_arguments`\n :param interpreter: the interpreter to create virtual environment from\n \"\"\"\n self.interpreter = interpreter\n self._debug = None\n self.dest = Path(options.dest)\n 
self.clear = options.clear\n self.pyenv_cfg = PyEnvCfg.from_folder(self.dest)\n self.app_data = options.app_data\n\n def __repr__(self):\n return ensure_str(self.__unicode__())\n\n def __unicode__(self):\n return \"{}({})\".format(self.__class__.__name__, \", \".join(\"{}={}\".format(k, v) for k, v in self._args()))\n\n def _args(self):\n return [\n (\"dest\", ensure_text(str(self.dest))),\n (\"clear\", self.clear),\n ]\n\n @classmethod\n def can_create(cls, interpreter):\n \"\"\"Determine if we can create a virtual environment.\n\n :param interpreter: the interpreter in question\n :return: ``None`` if we can't create, any other object otherwise that will be forwarded to \\\n :meth:`add_parser_arguments`\n \"\"\"\n return True\n\n @classmethod\n def add_parser_arguments(cls, parser, interpreter, meta, app_data):\n \"\"\"Add CLI arguments for the creator.\n\n :param parser: the CLI parser\n :param app_data: the application data folder\n :param interpreter: the interpreter we're asked to create virtual environment for\n :param meta: value as returned by :meth:`can_create`\n \"\"\"\n parser.add_argument(\n \"dest\", help=\"directory to create virtualenv at\", type=cls.validate_dest,\n )\n parser.add_argument(\n \"--clear\",\n dest=\"clear\",\n action=\"store_true\",\n help=\"remove the destination directory if exist before starting (will overwrite files otherwise)\",\n default=False,\n )\n\n @abstractmethod\n def create(self):\n \"\"\"Perform the virtual environment creation.\"\"\"\n raise NotImplementedError\n\n @classmethod\n def validate_dest(cls, raw_value):\n \"\"\"No path separator in the path, valid chars and must be write-able\"\"\"\n\n def non_write_able(dest, value):\n common = Path(*os.path.commonprefix([value.parts, dest.parts]))\n raise ArgumentTypeError(\n \"the destination {} is not write-able at {}\".format(dest.relative_to(common), common),\n )\n\n # the file system must be able to encode\n # note in newer CPython this is always utf-8 https://www.python.org/dev/peps/pep-0529/\n encoding = sys.getfilesystemencoding()\n refused = OrderedDict()\n kwargs = {\"errors\": \"ignore\"} if encoding != \"mbcs\" else {}\n for char in ensure_text(raw_value):\n try:\n trip = char.encode(encoding, **kwargs).decode(encoding)\n if trip == char:\n continue\n raise ValueError(trip)\n except ValueError:\n refused[char] = None\n if refused:\n raise ArgumentTypeError(\n \"the file system codec ({}) cannot handle characters {!r} within {!r}\".format(\n encoding, \"\".join(refused.keys()), raw_value,\n ),\n )\n if os.pathsep in raw_value:\n raise ArgumentTypeError(\n \"destination {!r} must not contain the path separator ({}) as this would break \"\n \"the activation scripts\".format(raw_value, os.pathsep),\n )\n\n value = Path(raw_value)\n if value.exists() and value.is_file():\n raise ArgumentTypeError(\"the destination {} already exists and is a file\".format(value))\n if (3, 3) <= sys.version_info <= (3, 6):\n # pre 3.6 resolve is always strict, aka must exists, sidestep by using os.path operation\n dest = Path(os.path.realpath(raw_value))\n else:\n dest = Path(os.path.abspath(str(value))).resolve() # on Windows absolute does not imply resolve so use both\n value = dest\n while dest:\n if dest.exists():\n if os.access(ensure_text(str(dest)), os.W_OK):\n break\n else:\n non_write_able(dest, value)\n base, _ = dest.parent, dest.name\n if base == dest:\n non_write_able(dest, value) # pragma: no cover\n dest = base\n return str(value)\n\n def run(self):\n if self.dest.exists() and self.clear:\n 
logging.debug(\"delete %s\", self.dest)\n safe_delete(self.dest)\n self.create()\n self.set_pyenv_cfg()\n self.setup_ignore_vcs()\n\n def set_pyenv_cfg(self):\n self.pyenv_cfg.content = OrderedDict()\n self.pyenv_cfg[\"home\"] = self.interpreter.system_exec_prefix\n self.pyenv_cfg[\"implementation\"] = self.interpreter.implementation\n self.pyenv_cfg[\"version_info\"] = \".\".join(str(i) for i in self.interpreter.version_info)\n self.pyenv_cfg[\"virtualenv\"] = __version__\n\n def setup_ignore_vcs(self):\n \"\"\"Generate ignore instructions for version control systems.\"\"\"\n # mark this folder to be ignored by VCS, handle https://www.python.org/dev/peps/pep-0610/#registered-vcs\n git_ignore = self.dest / \".gitignore\"\n if not git_ignore.exists():\n git_ignore.write_text(\n dedent(\n \"\"\"\n # created by virtualenv automatically\n *\n \"\"\",\n ).lstrip(),\n )\n # Mercurial - does not support the .hgignore file inside a subdirectory directly, but only if included via the\n # subinclude directive from root, at which point on might as well ignore the directory itself, see\n # https://www.selenic.com/mercurial/hgignore.5.html for more details\n # Bazaar - does not support ignore files in sub-directories, only at root level via .bzrignore\n # Subversion - does not support ignore files, requires direct manipulation with the svn tool\n\n @property\n def debug(self):\n \"\"\"\n :return: debug information about the virtual environment (only valid after :meth:`create` has run)\n \"\"\"\n if self._debug is None and self.exe is not None:\n self._debug = get_env_debug_info(self.exe, self.debug_script(), self.app_data)\n return self._debug\n\n # noinspection PyMethodMayBeStatic\n def debug_script(self):\n return DEBUG_SCRIPT\n\n\ndef get_env_debug_info(env_exe, debug_script, app_data):\n env = os.environ.copy()\n env.pop(str(\"PYTHONPATH\"), None)\n\n with app_data.ensure_extracted(debug_script) as debug_script:\n cmd = [str(env_exe), str(debug_script)]\n if WIN_CPYTHON_2:\n cmd = [ensure_text(i) for i in cmd]\n logging.debug(str(\"debug via %r\"), LogCmd(cmd))\n code, out, err = run_cmd(cmd)\n\n # noinspection PyBroadException\n try:\n if code != 0:\n result = literal_eval(out)\n else:\n result = json.loads(out)\n if err:\n result[\"err\"] = err\n except Exception as exception:\n return {\"out\": out, \"err\": err, \"returncode\": code, \"exception\": repr(exception)}\n if \"sys\" in result and \"path\" in result[\"sys\"]:\n del result[\"sys\"][\"path\"][0]\n return result\n", "path": "src/virtualenv/create/creator.py"}]}
| 2,834 | 291 |
gh_patches_debug_23661
|
rasdani/github-patches
|
git_diff
|
mitmproxy__mitmproxy-2367
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support removing SSL/TLS from upstream https connection
I'm trying to use an upstream proxy that expects plain HTTP bytes instead of TLS-encapsulated HTTP for HTTPS URLs. I've tried to modify the scheme and other things but mitmproxy still continues to send a TLS client hello after the initial upstream CONNECT.
It appears that this is currently unsupported in mitmproxy?
##### Steps to reproduce the problem:
uplink.py:
```
from mitmproxy import ctx
def http_connect(flow):
    flow.request.scheme = 'http'
    flow.request.port = 80
    ctx.log.info('script: connect '+repr(flow))

def request(flow):
    flow.request.scheme = 'http'
    flow.request.port = 80
    ctx.log.info('script: request '+repr(flow))
```
1. socat -v tcp-listen:8082,reuseaddr,fork TCP:upstreamproxy:999
2. mitmproxy -p 8081 -s uplink.py -U http://localhost:8082 --insecure --no-upstream-cert
3. curl -v -k -x http://127.0.0.1:8081 https://www.google.com
The output of socat is:
```
> 2017/05/11 10:11:56.527763 length=119 from=0 to=118
CONNECT www.google.com:80 HTTP/1.1\r
Host: www.google.com:443\r
User-Agent: curl/7.51.0\r
Proxy-Connection: Keep-Alive\r
\r
< 2017/05/11 10:11:56.720058 length=116 from=0 to=115
HTTP/1.1 200 OK\r
Content-Length: 0\r
Date: Thu, 11 May 2017 08:11:56 GMT\r
Content-Type: text/plain; charset=utf-8\r
\r
> 2017/05/11 10:11:56.753419 length=164 from=119 to=282
............i..RF+,Q..2.=.......
oA5T].Lt....2.,.+.$.#.
. .0./.(.'.........k.g.9.3.....=.<.5./.....@.\v.......
.......#...\r. ........................................< 2017/05/11 10:11:56.753987 length=82 from=116 to=197
HTTP/1.1 502 Uplink Bad Gateway\r
Content-Length: 131\r
Content-Type: text/plain\r
\r
< 2017/05/11 10:11:56.754159 length=131 from=198 to=328
malformed HTTP request "\\x16\\x03\\x01\\x00\\x9f\\x01\\x00\\x00\\x9b\\x03\\x03\\xd2i\\xe3\\xacRF+,Q\\xa4\\xec2\\xfe=\\x11\\x11\\xf1\\xfe\\xcf\\x0e\\u007f"^C⏎
```
As you can see above, the TLS client hello `\\x16\\x03\\x01\\x00` is sent. The upstream proxy doesn't like it and replies with error 502.
##### Expected behavior
If configured/scripted, mitmproxy should send plain HTTP bytes upstream for an HTTPS URL.
##### Any other comments? What have you tried so far?
I also tried some variants of uplink.py, also the script from issue #1731 for example. I also tried switching off `--insecure` or `--no-upstream-cert`. I'm currently using those flags because Burp (which I actually intend to use instead of curl) apparently doesn't send SNI [properly](https://support.portswigger.net/customer/portal/questions/16827939-burp-does-not-set-sni-on-the-outgoing-connection-to-an-ssl-enabled-web-server).
##### System information
```
Mitmproxy version: 2.0.2 (1.2.0dev0000-0x8cba0352e)
Python version: 3.6.1
Platform: Darwin-16.5.0-x86_64-i386-64bit
SSL version: OpenSSL 1.1.0e 16 Feb 2017
Mac version: 10.12.4 ('', '', '') x86_64
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/proxy/root_context.py`
Content:
```
1 from mitmproxy import log
2 from mitmproxy import exceptions
3 from mitmproxy.proxy import protocol
4 from mitmproxy.proxy import modes
5 from mitmproxy.proxy.protocol import http
6
7
8 class RootContext:
9
10 """
11 The outermost context provided to the root layer.
12 As a consequence, every layer has access to methods and attributes defined here.
13
14 Attributes:
15 client_conn:
16 The :py:class:`client connection <mitmproxy.connections.ClientConnection>`.
17 channel:
18 A :py:class:`~mitmproxy.controller.Channel` to communicate with the FlowMaster.
19 Provides :py:meth:`.ask() <mitmproxy.controller.Channel.ask>` and
20 :py:meth:`.tell() <mitmproxy.controller.Channel.tell>` methods.
21 config:
22 The :py:class:`proxy server's configuration <mitmproxy.proxy.ProxyConfig>`
23 """
24
25 def __init__(self, client_conn, config, channel):
26 self.client_conn = client_conn
27 self.channel = channel
28 self.config = config
29
30 def next_layer(self, top_layer):
31 """
32 This function determines the next layer in the protocol stack.
33
34 Arguments:
35 top_layer: the current innermost layer.
36
37 Returns:
38 The next layer
39 """
40 layer = self._next_layer(top_layer)
41 return self.channel.ask("next_layer", layer)
42
43 def _next_layer(self, top_layer):
44 try:
45 d = top_layer.client_conn.rfile.peek(3)
46 except exceptions.TcpException as e:
47 raise exceptions.ProtocolException(str(e))
48 client_tls = protocol.is_tls_record_magic(d)
49
50 # 1. check for --ignore
51 if self.config.check_ignore:
52 ignore = self.config.check_ignore(top_layer.server_conn.address)
53 if not ignore and client_tls:
54 try:
55 client_hello = protocol.TlsClientHello.from_client_conn(self.client_conn)
56 except exceptions.TlsProtocolException as e:
57 self.log("Cannot parse Client Hello: %s" % repr(e), "error")
58 else:
59 ignore = self.config.check_ignore((client_hello.sni, 443))
60 if ignore:
61 return protocol.RawTCPLayer(top_layer, ignore=True)
62
63 # 2. Always insert a TLS layer, even if there's neither client nor server tls.
64 # An inline script may upgrade from http to https,
65 # in which case we need some form of TLS layer.
66 if isinstance(top_layer, modes.ReverseProxy):
67 return protocol.TlsLayer(
68 top_layer,
69 client_tls,
70 top_layer.server_tls,
71 top_layer.server_conn.address[0]
72 )
73 if isinstance(top_layer, protocol.ServerConnectionMixin) or isinstance(top_layer, protocol.UpstreamConnectLayer):
74 return protocol.TlsLayer(top_layer, client_tls, client_tls)
75
76 # 3. In Http Proxy mode and Upstream Proxy mode, the next layer is fixed.
77 if isinstance(top_layer, protocol.TlsLayer):
78 if isinstance(top_layer.ctx, modes.HttpProxy):
79 return protocol.Http1Layer(top_layer, http.HTTPMode.regular)
80 if isinstance(top_layer.ctx, modes.HttpUpstreamProxy):
81 return protocol.Http1Layer(top_layer, http.HTTPMode.upstream)
82
83 # 4. Check for other TLS cases (e.g. after CONNECT).
84 if client_tls:
85 return protocol.TlsLayer(top_layer, True, True)
86
87 # 4. Check for --tcp
88 if self.config.check_tcp(top_layer.server_conn.address):
89 return protocol.RawTCPLayer(top_layer)
90
91 # 5. Check for TLS ALPN (HTTP1/HTTP2)
92 if isinstance(top_layer, protocol.TlsLayer):
93 alpn = top_layer.client_conn.get_alpn_proto_negotiated()
94 if alpn == b'h2':
95 return protocol.Http2Layer(top_layer, http.HTTPMode.transparent)
96 if alpn == b'http/1.1':
97 return protocol.Http1Layer(top_layer, http.HTTPMode.transparent)
98
99 # 6. Assume HTTP1 by default
100 return protocol.Http1Layer(top_layer, http.HTTPMode.transparent)
101
102 def log(self, msg, level, subs=()):
103 """
104 Send a log message to the master.
105 """
106 full_msg = [
107 "{}:{}: {}".format(self.client_conn.address[0], self.client_conn.address[1], msg)
108 ]
109 for i in subs:
110 full_msg.append(" -> " + i)
111 full_msg = "\n".join(full_msg)
112 self.channel.tell("log", log.LogEntry(full_msg, level))
113
```
Path: `mitmproxy/log.py`
Content:
```
1
2 class LogEntry:
3 def __init__(self, msg, level):
4 self.msg = msg
5 self.level = level
6
7
8 class Log:
9 """
10 The central logger, exposed to scripts as mitmproxy.ctx.log.
11 """
12 def __init__(self, master):
13 self.master = master
14
15 def debug(self, txt):
16 """
17 Log with level debug.
18 """
19 self(txt, "debug")
20
21 def info(self, txt):
22 """
23 Log with level info.
24 """
25 self(txt, "info")
26
27 def alert(self, txt):
28 """
29 Log with level alert. Alerts have the same urgency as info, but
30 signals to interctive tools that the user's attention should be
31 drawn to the output even if they're not currently looking at the
32 event log.
33 """
34 self(txt, "alert")
35
36 def warn(self, txt):
37 """
38 Log with level warn.
39 """
40 self(txt, "warn")
41
42 def error(self, txt):
43 """
44 Log with level error.
45 """
46 self(txt, "error")
47
48 def __call__(self, text, level="info"):
49 self.master.add_log(text, level)
50
51
52 def log_tier(level):
53 return dict(error=0, warn=1, info=2, alert=2, debug=3).get(level)
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mitmproxy/log.py b/mitmproxy/log.py
--- a/mitmproxy/log.py
+++ b/mitmproxy/log.py
@@ -4,6 +4,9 @@
self.msg = msg
self.level = level
+ def __repr__(self):
+ return "LogEntry({}, {})".format(self.msg, self.level)
+
class Log:
"""
diff --git a/mitmproxy/proxy/root_context.py b/mitmproxy/proxy/root_context.py
--- a/mitmproxy/proxy/root_context.py
+++ b/mitmproxy/proxy/root_context.py
@@ -70,8 +70,16 @@
top_layer.server_tls,
top_layer.server_conn.address[0]
)
- if isinstance(top_layer, protocol.ServerConnectionMixin) or isinstance(top_layer, protocol.UpstreamConnectLayer):
+ if isinstance(top_layer, protocol.ServerConnectionMixin):
return protocol.TlsLayer(top_layer, client_tls, client_tls)
+ if isinstance(top_layer, protocol.UpstreamConnectLayer):
+ # if the user manually sets a scheme for connect requests, we use this to decide if we
+ # want TLS or not.
+ if top_layer.connect_request.scheme:
+ tls = top_layer.connect_request.scheme == "https"
+ else:
+ tls = client_tls
+ return protocol.TlsLayer(top_layer, client_tls, tls)
# 3. In Http Proxy mode and Upstream Proxy mode, the next layer is fixed.
if isinstance(top_layer, protocol.TlsLayer):
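With this patch, upstream TLS is governed by the scheme that an addon sets on the CONNECT request, so a script along the lines of the issue's uplink.py can strip TLS towards the upstream proxy. A minimal sketch (based on the script quoted in the issue; the port rewrite is only needed if the upstream target should change as well):

```
from mitmproxy import ctx

def http_connect(flow):
    # Runs for CONNECT requests in upstream mode; with the patch above, a scheme
    # of "http" on the CONNECT request suppresses the upstream TLS layer.
    flow.request.scheme = "http"
    flow.request.port = 80
    ctx.log.info("uplink: forwarding plain HTTP upstream for " + repr(flow.request.host))
```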
|
{"golden_diff": "diff --git a/mitmproxy/log.py b/mitmproxy/log.py\n--- a/mitmproxy/log.py\n+++ b/mitmproxy/log.py\n@@ -4,6 +4,9 @@\n self.msg = msg\n self.level = level\n \n+ def __repr__(self):\n+ return \"LogEntry({}, {})\".format(self.msg, self.level)\n+\n \n class Log:\n \"\"\"\ndiff --git a/mitmproxy/proxy/root_context.py b/mitmproxy/proxy/root_context.py\n--- a/mitmproxy/proxy/root_context.py\n+++ b/mitmproxy/proxy/root_context.py\n@@ -70,8 +70,16 @@\n top_layer.server_tls,\n top_layer.server_conn.address[0]\n )\n- if isinstance(top_layer, protocol.ServerConnectionMixin) or isinstance(top_layer, protocol.UpstreamConnectLayer):\n+ if isinstance(top_layer, protocol.ServerConnectionMixin):\n return protocol.TlsLayer(top_layer, client_tls, client_tls)\n+ if isinstance(top_layer, protocol.UpstreamConnectLayer):\n+ # if the user manually sets a scheme for connect requests, we use this to decide if we\n+ # want TLS or not.\n+ if top_layer.connect_request.scheme:\n+ tls = top_layer.connect_request.scheme == \"https\"\n+ else:\n+ tls = client_tls\n+ return protocol.TlsLayer(top_layer, client_tls, tls)\n \n # 3. In Http Proxy mode and Upstream Proxy mode, the next layer is fixed.\n if isinstance(top_layer, protocol.TlsLayer):\n", "issue": "Support removing SSL/TLS from upstream https connection\nI'm trying to use an upstream proxy that expects plain HTTP bytes instead of TLS-encapsulated HTTP for HTTPS URLs. I've tried to modify the scheme and other things but mitmproxy still continues to send a TLS client hello after the initial upstream CONNECT. \r\n\r\nIt appears that this is currently unsupported in mitmproxy?\r\n\r\n##### Steps to reproduce the problem:\r\n\r\nuplink.py:\r\n\r\n```\r\nfrom mitmproxy import ctx\r\n\r\ndef http_connect(flow):\r\n flow.request.scheme = 'http'\r\n flow.request.port = 80\r\n ctx.log.info('script: connect '+repr(flow))\r\n\r\ndef request(flow):\r\n flow.request.scheme = 'http'\r\n flow.request.port = 80\r\n ctx.log.info('script: request '+repr(flow))\r\n```\r\n\r\n\r\n1. socat -v tcp-listen:8082,reuseaddr,fork TCP:upstreamproxy:999\r\n2. mitmproxy -p 8081 -s uplink.py -U http://localhost:8082 --insecure --no-upstream-cert\r\n3. curl -v -k -x http://127.0.0.1:8081 https://www.google.com\r\n\r\nThe output of socat is:\r\n\r\n```\r\n> 2017/05/11 10:11:56.527763 length=119 from=0 to=118\r\nCONNECT www.google.com:80 HTTP/1.1\\r\r\nHost: www.google.com:443\\r\r\nUser-Agent: curl/7.51.0\\r\r\nProxy-Connection: Keep-Alive\\r\r\n\\r\r\n< 2017/05/11 10:11:56.720058 length=116 from=0 to=115\r\nHTTP/1.1 200 OK\\r\r\nContent-Length: 0\\r\r\nDate: Thu, 11 May 2017 08:11:56 GMT\\r\r\nContent-Type: text/plain; charset=utf-8\\r\r\n\\r\r\n> 2017/05/11 10:11:56.753419 length=164 from=119 to=282\r\n............i..RF+,Q..2.=.......\r\noA5T].Lt....2.,.+.$.#.\r\n. .0./.(.'.........k.g.9.3.....=.<.5./.....@.\\v.......\r\n.......#...\\r. ........................................< 2017/05/11 10:11:56.753987 length=82 from=116 to=197\r\nHTTP/1.1 502 Uplink Bad Gateway\\r\r\nContent-Length: 131\\r\r\nContent-Type: text/plain\\r\r\n\\r\r\n< 2017/05/11 10:11:56.754159 length=131 from=198 to=328\r\nmalformed HTTP request \"\\\\x16\\\\x03\\\\x01\\\\x00\\\\x9f\\\\x01\\\\x00\\\\x00\\\\x9b\\\\x03\\\\x03\\\\xd2i\\\\xe3\\\\xacRF+,Q\\\\xa4\\\\xec2\\\\xfe=\\\\x11\\\\x11\\\\xf1\\\\xfe\\\\xcf\\\\x0e\\\\u007f\"^C\u23ce\r\n```\r\n\r\nAs you can see above, the TLS client hello `\\\\x16\\\\x03\\\\x01\\\\x00` is sent. 
The upstream proxy doesn't like it and replies with error 502.\r\n\r\n##### Expected behavior\r\n\r\nIf configured/scripted, mitmproxy should send upstream HTTP bytes for a HTTPS URL.\r\n\r\n##### Any other comments? What have you tried so far?\r\n\r\nI also tried some variants of uplink.py, also the script from issue #1731 for example. I also tried switching off `--insecure` or `--no-upstream-cert`. I'm currently using those flags because Burp (which I actually intend to use instead of curl) apparently doesn't send SNI [properly](https://support.portswigger.net/customer/portal/questions/16827939-burp-does-not-set-sni-on-the-outgoing-connection-to-an-ssl-enabled-web-server).\r\n\r\n##### System information\r\n\r\n```\r\nMitmproxy version: 2.0.2 (1.2.0dev0000-0x8cba0352e)\r\nPython version: 3.6.1\r\nPlatform: Darwin-16.5.0-x86_64-i386-64bit\r\nSSL version: OpenSSL 1.1.0e 16 Feb 2017\r\nMac version: 10.12.4 ('', '', '') x86_64\r\n```\nSupport removing SSL/TLS from upstream https connection\nI'm trying to use an upstream proxy that expects plain HTTP bytes instead of TLS-encapsulated HTTP for HTTPS URLs. I've tried to modify the scheme and other things but mitmproxy still continues to send a TLS client hello after the initial upstream CONNECT. \r\n\r\nIt appears that this is currently unsupported in mitmproxy?\r\n\r\n##### Steps to reproduce the problem:\r\n\r\nuplink.py:\r\n\r\n```\r\nfrom mitmproxy import ctx\r\n\r\ndef http_connect(flow):\r\n flow.request.scheme = 'http'\r\n flow.request.port = 80\r\n ctx.log.info('script: connect '+repr(flow))\r\n\r\ndef request(flow):\r\n flow.request.scheme = 'http'\r\n flow.request.port = 80\r\n ctx.log.info('script: request '+repr(flow))\r\n```\r\n\r\n\r\n1. socat -v tcp-listen:8082,reuseaddr,fork TCP:upstreamproxy:999\r\n2. mitmproxy -p 8081 -s uplink.py -U http://localhost:8082 --insecure --no-upstream-cert\r\n3. curl -v -k -x http://127.0.0.1:8081 https://www.google.com\r\n\r\nThe output of socat is:\r\n\r\n```\r\n> 2017/05/11 10:11:56.527763 length=119 from=0 to=118\r\nCONNECT www.google.com:80 HTTP/1.1\\r\r\nHost: www.google.com:443\\r\r\nUser-Agent: curl/7.51.0\\r\r\nProxy-Connection: Keep-Alive\\r\r\n\\r\r\n< 2017/05/11 10:11:56.720058 length=116 from=0 to=115\r\nHTTP/1.1 200 OK\\r\r\nContent-Length: 0\\r\r\nDate: Thu, 11 May 2017 08:11:56 GMT\\r\r\nContent-Type: text/plain; charset=utf-8\\r\r\n\\r\r\n> 2017/05/11 10:11:56.753419 length=164 from=119 to=282\r\n............i..RF+,Q..2.=.......\r\noA5T].Lt....2.,.+.$.#.\r\n. .0./.(.'.........k.g.9.3.....=.<.5./.....@.\\v.......\r\n.......#...\\r. ........................................< 2017/05/11 10:11:56.753987 length=82 from=116 to=197\r\nHTTP/1.1 502 Uplink Bad Gateway\\r\r\nContent-Length: 131\\r\r\nContent-Type: text/plain\\r\r\n\\r\r\n< 2017/05/11 10:11:56.754159 length=131 from=198 to=328\r\nmalformed HTTP request \"\\\\x16\\\\x03\\\\x01\\\\x00\\\\x9f\\\\x01\\\\x00\\\\x00\\\\x9b\\\\x03\\\\x03\\\\xd2i\\\\xe3\\\\xacRF+,Q\\\\xa4\\\\xec2\\\\xfe=\\\\x11\\\\x11\\\\xf1\\\\xfe\\\\xcf\\\\x0e\\\\u007f\"^C\u23ce\r\n```\r\n\r\nAs you can see above, the TLS client hello `\\\\x16\\\\x03\\\\x01\\\\x00` is sent. The upstream proxy doesn't like it and replies with error 502.\r\n\r\n##### Expected behavior\r\n\r\nIf configured/scripted, mitmproxy should send upstream HTTP bytes for a HTTPS URL.\r\n\r\n##### Any other comments? What have you tried so far?\r\n\r\nI also tried some variants of uplink.py, also the script from issue #1731 for example. 
I also tried switching off `--insecure` or `--no-upstream-cert`. I'm currently using those flags because Burp (which I actually intend to use instead of curl) apparently doesn't send SNI [properly](https://support.portswigger.net/customer/portal/questions/16827939-burp-does-not-set-sni-on-the-outgoing-connection-to-an-ssl-enabled-web-server).\r\n\r\n##### System information\r\n\r\n```\r\nMitmproxy version: 2.0.2 (1.2.0dev0000-0x8cba0352e)\r\nPython version: 3.6.1\r\nPlatform: Darwin-16.5.0-x86_64-i386-64bit\r\nSSL version: OpenSSL 1.1.0e 16 Feb 2017\r\nMac version: 10.12.4 ('', '', '') x86_64\r\n```\n", "before_files": [{"content": "from mitmproxy import log\nfrom mitmproxy import exceptions\nfrom mitmproxy.proxy import protocol\nfrom mitmproxy.proxy import modes\nfrom mitmproxy.proxy.protocol import http\n\n\nclass RootContext:\n\n \"\"\"\n The outermost context provided to the root layer.\n As a consequence, every layer has access to methods and attributes defined here.\n\n Attributes:\n client_conn:\n The :py:class:`client connection <mitmproxy.connections.ClientConnection>`.\n channel:\n A :py:class:`~mitmproxy.controller.Channel` to communicate with the FlowMaster.\n Provides :py:meth:`.ask() <mitmproxy.controller.Channel.ask>` and\n :py:meth:`.tell() <mitmproxy.controller.Channel.tell>` methods.\n config:\n The :py:class:`proxy server's configuration <mitmproxy.proxy.ProxyConfig>`\n \"\"\"\n\n def __init__(self, client_conn, config, channel):\n self.client_conn = client_conn\n self.channel = channel\n self.config = config\n\n def next_layer(self, top_layer):\n \"\"\"\n This function determines the next layer in the protocol stack.\n\n Arguments:\n top_layer: the current innermost layer.\n\n Returns:\n The next layer\n \"\"\"\n layer = self._next_layer(top_layer)\n return self.channel.ask(\"next_layer\", layer)\n\n def _next_layer(self, top_layer):\n try:\n d = top_layer.client_conn.rfile.peek(3)\n except exceptions.TcpException as e:\n raise exceptions.ProtocolException(str(e))\n client_tls = protocol.is_tls_record_magic(d)\n\n # 1. check for --ignore\n if self.config.check_ignore:\n ignore = self.config.check_ignore(top_layer.server_conn.address)\n if not ignore and client_tls:\n try:\n client_hello = protocol.TlsClientHello.from_client_conn(self.client_conn)\n except exceptions.TlsProtocolException as e:\n self.log(\"Cannot parse Client Hello: %s\" % repr(e), \"error\")\n else:\n ignore = self.config.check_ignore((client_hello.sni, 443))\n if ignore:\n return protocol.RawTCPLayer(top_layer, ignore=True)\n\n # 2. Always insert a TLS layer, even if there's neither client nor server tls.\n # An inline script may upgrade from http to https,\n # in which case we need some form of TLS layer.\n if isinstance(top_layer, modes.ReverseProxy):\n return protocol.TlsLayer(\n top_layer,\n client_tls,\n top_layer.server_tls,\n top_layer.server_conn.address[0]\n )\n if isinstance(top_layer, protocol.ServerConnectionMixin) or isinstance(top_layer, protocol.UpstreamConnectLayer):\n return protocol.TlsLayer(top_layer, client_tls, client_tls)\n\n # 3. In Http Proxy mode and Upstream Proxy mode, the next layer is fixed.\n if isinstance(top_layer, protocol.TlsLayer):\n if isinstance(top_layer.ctx, modes.HttpProxy):\n return protocol.Http1Layer(top_layer, http.HTTPMode.regular)\n if isinstance(top_layer.ctx, modes.HttpUpstreamProxy):\n return protocol.Http1Layer(top_layer, http.HTTPMode.upstream)\n\n # 4. Check for other TLS cases (e.g. 
after CONNECT).\n if client_tls:\n return protocol.TlsLayer(top_layer, True, True)\n\n # 4. Check for --tcp\n if self.config.check_tcp(top_layer.server_conn.address):\n return protocol.RawTCPLayer(top_layer)\n\n # 5. Check for TLS ALPN (HTTP1/HTTP2)\n if isinstance(top_layer, protocol.TlsLayer):\n alpn = top_layer.client_conn.get_alpn_proto_negotiated()\n if alpn == b'h2':\n return protocol.Http2Layer(top_layer, http.HTTPMode.transparent)\n if alpn == b'http/1.1':\n return protocol.Http1Layer(top_layer, http.HTTPMode.transparent)\n\n # 6. Assume HTTP1 by default\n return protocol.Http1Layer(top_layer, http.HTTPMode.transparent)\n\n def log(self, msg, level, subs=()):\n \"\"\"\n Send a log message to the master.\n \"\"\"\n full_msg = [\n \"{}:{}: {}\".format(self.client_conn.address[0], self.client_conn.address[1], msg)\n ]\n for i in subs:\n full_msg.append(\" -> \" + i)\n full_msg = \"\\n\".join(full_msg)\n self.channel.tell(\"log\", log.LogEntry(full_msg, level))\n", "path": "mitmproxy/proxy/root_context.py"}, {"content": "\nclass LogEntry:\n def __init__(self, msg, level):\n self.msg = msg\n self.level = level\n\n\nclass Log:\n \"\"\"\n The central logger, exposed to scripts as mitmproxy.ctx.log.\n \"\"\"\n def __init__(self, master):\n self.master = master\n\n def debug(self, txt):\n \"\"\"\n Log with level debug.\n \"\"\"\n self(txt, \"debug\")\n\n def info(self, txt):\n \"\"\"\n Log with level info.\n \"\"\"\n self(txt, \"info\")\n\n def alert(self, txt):\n \"\"\"\n Log with level alert. Alerts have the same urgency as info, but\n signals to interctive tools that the user's attention should be\n drawn to the output even if they're not currently looking at the\n event log.\n \"\"\"\n self(txt, \"alert\")\n\n def warn(self, txt):\n \"\"\"\n Log with level warn.\n \"\"\"\n self(txt, \"warn\")\n\n def error(self, txt):\n \"\"\"\n Log with level error.\n \"\"\"\n self(txt, \"error\")\n\n def __call__(self, text, level=\"info\"):\n self.master.add_log(text, level)\n\n\ndef log_tier(level):\n return dict(error=0, warn=1, info=2, alert=2, debug=3).get(level)\n", "path": "mitmproxy/log.py"}], "after_files": [{"content": "from mitmproxy import log\nfrom mitmproxy import exceptions\nfrom mitmproxy.proxy import protocol\nfrom mitmproxy.proxy import modes\nfrom mitmproxy.proxy.protocol import http\n\n\nclass RootContext:\n\n \"\"\"\n The outermost context provided to the root layer.\n As a consequence, every layer has access to methods and attributes defined here.\n\n Attributes:\n client_conn:\n The :py:class:`client connection <mitmproxy.connections.ClientConnection>`.\n channel:\n A :py:class:`~mitmproxy.controller.Channel` to communicate with the FlowMaster.\n Provides :py:meth:`.ask() <mitmproxy.controller.Channel.ask>` and\n :py:meth:`.tell() <mitmproxy.controller.Channel.tell>` methods.\n config:\n The :py:class:`proxy server's configuration <mitmproxy.proxy.ProxyConfig>`\n \"\"\"\n\n def __init__(self, client_conn, config, channel):\n self.client_conn = client_conn\n self.channel = channel\n self.config = config\n\n def next_layer(self, top_layer):\n \"\"\"\n This function determines the next layer in the protocol stack.\n\n Arguments:\n top_layer: the current innermost layer.\n\n Returns:\n The next layer\n \"\"\"\n layer = self._next_layer(top_layer)\n return self.channel.ask(\"next_layer\", layer)\n\n def _next_layer(self, top_layer):\n try:\n d = top_layer.client_conn.rfile.peek(3)\n except exceptions.TcpException as e:\n raise exceptions.ProtocolException(str(e))\n client_tls = 
protocol.is_tls_record_magic(d)\n\n # 1. check for --ignore\n if self.config.check_ignore:\n ignore = self.config.check_ignore(top_layer.server_conn.address)\n if not ignore and client_tls:\n try:\n client_hello = protocol.TlsClientHello.from_client_conn(self.client_conn)\n except exceptions.TlsProtocolException as e:\n self.log(\"Cannot parse Client Hello: %s\" % repr(e), \"error\")\n else:\n ignore = self.config.check_ignore((client_hello.sni, 443))\n if ignore:\n return protocol.RawTCPLayer(top_layer, ignore=True)\n\n # 2. Always insert a TLS layer, even if there's neither client nor server tls.\n # An inline script may upgrade from http to https,\n # in which case we need some form of TLS layer.\n if isinstance(top_layer, modes.ReverseProxy):\n return protocol.TlsLayer(\n top_layer,\n client_tls,\n top_layer.server_tls,\n top_layer.server_conn.address[0]\n )\n if isinstance(top_layer, protocol.ServerConnectionMixin):\n return protocol.TlsLayer(top_layer, client_tls, client_tls)\n if isinstance(top_layer, protocol.UpstreamConnectLayer):\n # if the user manually sets a scheme for connect requests, we use this to decide if we\n # want TLS or not.\n if top_layer.connect_request.scheme:\n tls = top_layer.connect_request.scheme == \"https\"\n else:\n tls = client_tls\n return protocol.TlsLayer(top_layer, client_tls, tls)\n\n # 3. In Http Proxy mode and Upstream Proxy mode, the next layer is fixed.\n if isinstance(top_layer, protocol.TlsLayer):\n if isinstance(top_layer.ctx, modes.HttpProxy):\n return protocol.Http1Layer(top_layer, http.HTTPMode.regular)\n if isinstance(top_layer.ctx, modes.HttpUpstreamProxy):\n return protocol.Http1Layer(top_layer, http.HTTPMode.upstream)\n\n # 4. Check for other TLS cases (e.g. after CONNECT).\n if client_tls:\n return protocol.TlsLayer(top_layer, True, True)\n\n # 4. Check for --tcp\n if self.config.check_tcp(top_layer.server_conn.address):\n return protocol.RawTCPLayer(top_layer)\n\n # 5. Check for TLS ALPN (HTTP1/HTTP2)\n if isinstance(top_layer, protocol.TlsLayer):\n alpn = top_layer.client_conn.get_alpn_proto_negotiated()\n if alpn == b'h2':\n return protocol.Http2Layer(top_layer, http.HTTPMode.transparent)\n if alpn == b'http/1.1':\n return protocol.Http1Layer(top_layer, http.HTTPMode.transparent)\n\n # 6. Assume HTTP1 by default\n return protocol.Http1Layer(top_layer, http.HTTPMode.transparent)\n\n def log(self, msg, level, subs=()):\n \"\"\"\n Send a log message to the master.\n \"\"\"\n full_msg = [\n \"{}:{}: {}\".format(self.client_conn.address[0], self.client_conn.address[1], msg)\n ]\n for i in subs:\n full_msg.append(\" -> \" + i)\n full_msg = \"\\n\".join(full_msg)\n self.channel.tell(\"log\", log.LogEntry(full_msg, level))\n", "path": "mitmproxy/proxy/root_context.py"}, {"content": "\nclass LogEntry:\n def __init__(self, msg, level):\n self.msg = msg\n self.level = level\n\n def __repr__(self):\n return \"LogEntry({}, {})\".format(self.msg, self.level)\n\n\nclass Log:\n \"\"\"\n The central logger, exposed to scripts as mitmproxy.ctx.log.\n \"\"\"\n def __init__(self, master):\n self.master = master\n\n def debug(self, txt):\n \"\"\"\n Log with level debug.\n \"\"\"\n self(txt, \"debug\")\n\n def info(self, txt):\n \"\"\"\n Log with level info.\n \"\"\"\n self(txt, \"info\")\n\n def alert(self, txt):\n \"\"\"\n Log with level alert. 
Alerts have the same urgency as info, but\n signals to interctive tools that the user's attention should be\n drawn to the output even if they're not currently looking at the\n event log.\n \"\"\"\n self(txt, \"alert\")\n\n def warn(self, txt):\n \"\"\"\n Log with level warn.\n \"\"\"\n self(txt, \"warn\")\n\n def error(self, txt):\n \"\"\"\n Log with level error.\n \"\"\"\n self(txt, \"error\")\n\n def __call__(self, text, level=\"info\"):\n self.master.add_log(text, level)\n\n\ndef log_tier(level):\n return dict(error=0, warn=1, info=2, alert=2, debug=3).get(level)\n", "path": "mitmproxy/log.py"}]}
| 4,086 | 333 |
gh_patches_debug_4156
|
rasdani/github-patches
|
git_diff
|
mampfes__hacs_waste_collection_schedule-190
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Berlin Recycling broken due to SSL misconfiguration
Berlin Recycling is currently broken and fails with an `SSLCertVerificationError`:
> Logger: waste_collection_schedule.scraper
> Source: custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py:118
> Integration: waste_collection_schedule (documentation)
> First occurred: 22. März 2022, 16:39:19 (2 occurrences)
> Last logged: 01:12:00
>
> fetch failed for source Berline Recycling: Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib_response = self._make_request( File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 386, in _make_request self._validate_conn(conn) File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 1040, in _validate_conn conn.connect() File "/usr/local/lib/python3.9/site-packages/urllib3/connection.py", line 414, in connect self.sock = ssl_wrap_socket( File "/usr/local/lib/python3.9/site-packages/urllib3/util/ssl_.py", line 449, in ssl_wrap_socket ssl_sock = _ssl_wrap_socket_impl( File "/usr/local/lib/python3.9/site-packages/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl return ssl_context.wrap_socket(sock, server_hostname=server_hostname) File "/usr/local/lib/python3.9/ssl.py", line 500, in wrap_socket return self.sslsocket_class._create( File "/usr/local/lib/python3.9/ssl.py", line 1040, in _create self.do_handshake() File "/usr/local/lib/python3.9/ssl.py", line 1309, in do_handshake self._sslobj.do_handshake() ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/requests/adapters.py", line 440, in send resp = conn.urlopen( File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 785, in urlopen retries = retries.increment( File "/usr/local/lib/python3.9/site-packages/urllib3/util/retry.py", line 592, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='kundenportal.berlin-recycling.de', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)'))) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py", line 116, in fetch entries = self._source.fetch() File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/berlin_recycling_de.py", line 48, in fetch r = session.get(SERVICE_URL, allow_redirects=False) File "/usr/local/lib/python3.9/site-packages/requests/sessions.py", line 542, in get return self.request('GET', url, **kwargs) File "/usr/local/lib/python3.9/site-packages/requests/sessions.py", line 529, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.9/site-packages/requests/sessions.py", line 645, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.9/site-packages/requests/adapters.py", line 517, in send raise SSLError(e, request=request) requests.exceptions.SSLError: HTTPSConnectionPool(host='kundenportal.berlin-recycling.de', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)')))
Seems the reason is https://kundenportal.berlin-recycling.de has a misconfigured HTTPS server [which is not sending the intermediate certificate](https://www.ssllabs.com/ssltest/analyze.html?d=kundenportal.berlin-recycling.de) ([Starfield Secure Certificate Authority - G2](https://certs.starfieldtech.com/repository/)). Unlike browsers, python [does not support](https://bugs.python.org/issue18617) automatic fetching of missing intermediates.
I think an option to avoid setting `verify = False` on the request would be pinning the CAs and shipping the certificates with the project.
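A minimal sketch of that pinning approach (the bundle file name is illustrative, not from the issue; the bundle would need to contain the Starfield intermediate and root certificates shipped with the project):

```
import requests

# requests accepts a path to a CA bundle via verify=, so the missing
# intermediate can be supplied locally instead of disabling verification.
CA_BUNDLE = "berlin_recycling_ca_chain.pem"  # illustrative file name

session = requests.session()
r = session.get(
    "https://kundenportal.berlin-recycling.de/",
    allow_redirects=False,
    verify=CA_BUNDLE,
)
```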
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/berlin_recycling_de.py`
Content:
```
1 import json
2 from datetime import datetime
3 from html.parser import HTMLParser
4
5 import requests
6 from waste_collection_schedule import Collection # type: ignore[attr-defined]
7
8 TITLE = "Berline Recycling"
9 DESCRIPTION = "Source for Berlin Recycling waste collection."
10 URL = "https://berlin-recycling.de"
11 TEST_CASES = {
12 "Germanenstrasse": {
13 "username": "!secret berlin_recycling_username",
14 "password": "!secret berlin_recycling_password",
15 },
16 }
17
18
19 # Parser for HTML input (hidden) text
20 class HiddenInputParser(HTMLParser):
21 def __init__(self):
22 super().__init__()
23 self._args = {}
24
25 @property
26 def args(self):
27 return self._args
28
29 def handle_starttag(self, tag, attrs):
30 if tag == "input":
31 d = dict(attrs)
32 if str(d["type"]).lower() == "hidden":
33 self._args[d["name"]] = d["value"] if "value" in d else ""
34
35
36 SERVICE_URL = "https://kundenportal.berlin-recycling.de/"
37
38
39 class Source:
40 def __init__(self, username, password):
41 self._username = username
42 self._password = password
43
44 def fetch(self):
45 session = requests.session()
46
47 # first get returns session specific url
48 r = session.get(SERVICE_URL, allow_redirects=False)
49
50 # get session id's
51 r = session.get(r.url)
52
53 parser = HiddenInputParser()
54 parser.feed(r.text)
55 args = parser.args
56 args["__EVENTTARGET"] = "btnLog"
57 args["__EVENTARGUMENT"] = None
58 args["Username"] = self._username
59 args["Password"] = self._password
60
61 # login
62 r = session.post(r.url, data=args)
63 serviceUrl = r.url
64
65 request_data = {"withhtml": "true"}
66 r = session.post(serviceUrl + "/GetDashboard", json=request_data)
67
68 request_data = {"datasettable": "ENWIS_ABFUHRKALENDER"}
69 r = session.post(serviceUrl + "/ChangeDatasetTable", json=request_data)
70
71 request_data = {
72 "datasettablecode": "ENWIS_ABFUHRKALENDER",
73 "startindex": 0,
74 "searchtext": "",
75 "rangefilter": "",
76 "ordername": "",
77 "orderdir": "",
78 "ClientParameters": "",
79 "headrecid": "",
80 }
81 r = session.post(serviceUrl + "/GetDatasetTableHead", json=request_data)
82
83 data = json.loads(r.text)
84 # load json again, because response is double coded
85 data = json.loads(data["d"])
86
87 entries = []
88 for d in data["data"]:
89 date = datetime.strptime(d["Task Date"], "%Y-%m-%d").date()
90 entries.append(Collection(date, d["Material Description"]))
91 return entries
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/berlin_recycling_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/berlin_recycling_de.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/berlin_recycling_de.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/berlin_recycling_de.py
@@ -45,7 +45,7 @@
session = requests.session()
# first get returns session specific url
- r = session.get(SERVICE_URL, allow_redirects=False)
+ r = session.get(SERVICE_URL, allow_redirects=False, verify=False)
# get session id's
r = session.get(r.url)
|
{"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/berlin_recycling_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/berlin_recycling_de.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/berlin_recycling_de.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/berlin_recycling_de.py\n@@ -45,7 +45,7 @@\n session = requests.session()\n \n # first get returns session specific url\n- r = session.get(SERVICE_URL, allow_redirects=False)\n+ r = session.get(SERVICE_URL, allow_redirects=False, verify=False)\n \n # get session id's\n r = session.get(r.url)\n", "issue": "Berlin Recycling broken due to SSL misconfiguration\nBerlin Recycling is currently broken and fails with an `SSLCertVerificationError`:\r\n\r\n> Logger: waste_collection_schedule.scraper\r\n> Source: custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py:118\r\n> Integration: waste_collection_schedule (documentation)\r\n> First occurred: 22. M\u00e4rz 2022, 16:39:19 (2 occurrences)\r\n> Last logged: 01:12:00\r\n> \r\n> fetch failed for source Berline Recycling: Traceback (most recent call last): File \"/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py\", line 703, in urlopen httplib_response = self._make_request( File \"/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py\", line 386, in _make_request self._validate_conn(conn) File \"/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py\", line 1040, in _validate_conn conn.connect() File \"/usr/local/lib/python3.9/site-packages/urllib3/connection.py\", line 414, in connect self.sock = ssl_wrap_socket( File \"/usr/local/lib/python3.9/site-packages/urllib3/util/ssl_.py\", line 449, in ssl_wrap_socket ssl_sock = _ssl_wrap_socket_impl( File \"/usr/local/lib/python3.9/site-packages/urllib3/util/ssl_.py\", line 493, in _ssl_wrap_socket_impl return ssl_context.wrap_socket(sock, server_hostname=server_hostname) File \"/usr/local/lib/python3.9/ssl.py\", line 500, in wrap_socket return self.sslsocket_class._create( File \"/usr/local/lib/python3.9/ssl.py\", line 1040, in _create self.do_handshake() File \"/usr/local/lib/python3.9/ssl.py\", line 1309, in do_handshake self._sslobj.do_handshake() ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129) During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"/usr/local/lib/python3.9/site-packages/requests/adapters.py\", line 440, in send resp = conn.urlopen( File \"/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py\", line 785, in urlopen retries = retries.increment( File \"/usr/local/lib/python3.9/site-packages/urllib3/util/retry.py\", line 592, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='kundenportal.berlin-recycling.de', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)'))) During handling of the above exception, another exception occurred: Traceback (most recent call last): File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/scraper.py\", line 116, in fetch entries = self._source.fetch() File 
\"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/berlin_recycling_de.py\", line 48, in fetch r = session.get(SERVICE_URL, allow_redirects=False) File \"/usr/local/lib/python3.9/site-packages/requests/sessions.py\", line 542, in get return self.request('GET', url, **kwargs) File \"/usr/local/lib/python3.9/site-packages/requests/sessions.py\", line 529, in request resp = self.send(prep, **send_kwargs) File \"/usr/local/lib/python3.9/site-packages/requests/sessions.py\", line 645, in send r = adapter.send(request, **kwargs) File \"/usr/local/lib/python3.9/site-packages/requests/adapters.py\", line 517, in send raise SSLError(e, request=request) requests.exceptions.SSLError: HTTPSConnectionPool(host='kundenportal.berlin-recycling.de', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)')))\r\n\r\nSeems the reason is https://kundenportal.berlin-recycling.de has a misconfigured HTTPS server [which is not sending the intermediate certificate](https://www.ssllabs.com/ssltest/analyze.html?d=kundenportal.berlin-recycling.de) ([Starfield Secure Certificate Authority - G2](https://certs.starfieldtech.com/repository/)). Unlike browsers, python [does not support](https://bugs.python.org/issue18617) automatic fetching of missing intermediates.\r\n\r\nI think an option to avoid setting `verify = False` on the request would be pinning the CAs and shipping the certificates with the project.\n", "before_files": [{"content": "import json\nfrom datetime import datetime\nfrom html.parser import HTMLParser\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Berline Recycling\"\nDESCRIPTION = \"Source for Berlin Recycling waste collection.\"\nURL = \"https://berlin-recycling.de\"\nTEST_CASES = {\n \"Germanenstrasse\": {\n \"username\": \"!secret berlin_recycling_username\",\n \"password\": \"!secret berlin_recycling_password\",\n },\n}\n\n\n# Parser for HTML input (hidden) text\nclass HiddenInputParser(HTMLParser):\n def __init__(self):\n super().__init__()\n self._args = {}\n\n @property\n def args(self):\n return self._args\n\n def handle_starttag(self, tag, attrs):\n if tag == \"input\":\n d = dict(attrs)\n if str(d[\"type\"]).lower() == \"hidden\":\n self._args[d[\"name\"]] = d[\"value\"] if \"value\" in d else \"\"\n\n\nSERVICE_URL = \"https://kundenportal.berlin-recycling.de/\"\n\n\nclass Source:\n def __init__(self, username, password):\n self._username = username\n self._password = password\n\n def fetch(self):\n session = requests.session()\n\n # first get returns session specific url\n r = session.get(SERVICE_URL, allow_redirects=False)\n\n # get session id's\n r = session.get(r.url)\n\n parser = HiddenInputParser()\n parser.feed(r.text)\n args = parser.args\n args[\"__EVENTTARGET\"] = \"btnLog\"\n args[\"__EVENTARGUMENT\"] = None\n args[\"Username\"] = self._username\n args[\"Password\"] = self._password\n\n # login\n r = session.post(r.url, data=args)\n serviceUrl = r.url\n\n request_data = {\"withhtml\": \"true\"}\n r = session.post(serviceUrl + \"/GetDashboard\", json=request_data)\n\n request_data = {\"datasettable\": \"ENWIS_ABFUHRKALENDER\"}\n r = session.post(serviceUrl + \"/ChangeDatasetTable\", json=request_data)\n\n request_data = {\n \"datasettablecode\": \"ENWIS_ABFUHRKALENDER\",\n \"startindex\": 0,\n \"searchtext\": \"\",\n \"rangefilter\": 
\"\",\n \"ordername\": \"\",\n \"orderdir\": \"\",\n \"ClientParameters\": \"\",\n \"headrecid\": \"\",\n }\n r = session.post(serviceUrl + \"/GetDatasetTableHead\", json=request_data)\n\n data = json.loads(r.text)\n # load json again, because response is double coded\n data = json.loads(data[\"d\"])\n\n entries = []\n for d in data[\"data\"]:\n date = datetime.strptime(d[\"Task Date\"], \"%Y-%m-%d\").date()\n entries.append(Collection(date, d[\"Material Description\"]))\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/berlin_recycling_de.py"}], "after_files": [{"content": "import json\nfrom datetime import datetime\nfrom html.parser import HTMLParser\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Berline Recycling\"\nDESCRIPTION = \"Source for Berlin Recycling waste collection.\"\nURL = \"https://berlin-recycling.de\"\nTEST_CASES = {\n \"Germanenstrasse\": {\n \"username\": \"!secret berlin_recycling_username\",\n \"password\": \"!secret berlin_recycling_password\",\n },\n}\n\n\n# Parser for HTML input (hidden) text\nclass HiddenInputParser(HTMLParser):\n def __init__(self):\n super().__init__()\n self._args = {}\n\n @property\n def args(self):\n return self._args\n\n def handle_starttag(self, tag, attrs):\n if tag == \"input\":\n d = dict(attrs)\n if str(d[\"type\"]).lower() == \"hidden\":\n self._args[d[\"name\"]] = d[\"value\"] if \"value\" in d else \"\"\n\n\nSERVICE_URL = \"https://kundenportal.berlin-recycling.de/\"\n\n\nclass Source:\n def __init__(self, username, password):\n self._username = username\n self._password = password\n\n def fetch(self):\n session = requests.session()\n\n # first get returns session specific url\n r = session.get(SERVICE_URL, allow_redirects=False, verify=False)\n\n # get session id's\n r = session.get(r.url)\n\n parser = HiddenInputParser()\n parser.feed(r.text)\n args = parser.args\n args[\"__EVENTTARGET\"] = \"btnLog\"\n args[\"__EVENTARGUMENT\"] = None\n args[\"Username\"] = self._username\n args[\"Password\"] = self._password\n\n # login\n r = session.post(r.url, data=args)\n serviceUrl = r.url\n\n request_data = {\"withhtml\": \"true\"}\n r = session.post(serviceUrl + \"/GetDashboard\", json=request_data)\n\n request_data = {\"datasettable\": \"ENWIS_ABFUHRKALENDER\"}\n r = session.post(serviceUrl + \"/ChangeDatasetTable\", json=request_data)\n\n request_data = {\n \"datasettablecode\": \"ENWIS_ABFUHRKALENDER\",\n \"startindex\": 0,\n \"searchtext\": \"\",\n \"rangefilter\": \"\",\n \"ordername\": \"\",\n \"orderdir\": \"\",\n \"ClientParameters\": \"\",\n \"headrecid\": \"\",\n }\n r = session.post(serviceUrl + \"/GetDatasetTableHead\", json=request_data)\n\n data = json.loads(r.text)\n # load json again, because response is double coded\n data = json.loads(data[\"d\"])\n\n entries = []\n for d in data[\"data\"]:\n date = datetime.strptime(d[\"Task Date\"], \"%Y-%m-%d\").date()\n entries.append(Collection(date, d[\"Material Description\"]))\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/berlin_recycling_de.py"}]}
| 2,204 | 161 |
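The accepted patch in the record above works around the broken certificate chain by disabling verification with `verify=False`. The issue text itself suggests pinning the CA chain and shipping it with the project as an alternative; the sketch below only illustrates that option, and the bundle path is a hypothetical placeholder that is not part of the dataset record.

```python
import requests

# Hypothetical location of a PEM file containing the Starfield intermediate and
# root certificates mentioned in the issue; not part of the actual repository.
CA_BUNDLE = "custom_components/waste_collection_schedule/certs/starfield_chain.pem"

SERVICE_URL = "https://kundenportal.berlin-recycling.de/"

session = requests.session()
# Passing a bundle path to `verify` keeps certificate checking enabled even
# though the server omits its intermediate certificate.
r = session.get(SERVICE_URL, allow_redirects=False, verify=CA_BUNDLE)
```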
gh_patches_debug_16342
|
rasdani/github-patches
|
git_diff
|
learningequality__kolibri-8880
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
User on SouD should not be able to edit their user profile
From @jamalex in bug bash:
The FacilityUser object is read-only for single-user certificates, so these changes will not be synced back. We had a spec doc that laid out "what should be allowed on a SoUD", I think this was documented in there — we should do an audit to make sure other cases are covered as well.
https://www.notion.so/learningequality/CRITICAL-FEATURE-BUG-Users-on-SoUD-should-not-be-allowed-to-edit-their-user-profile-e5765979e7ed4fcaa694cfe7932ac4df
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/plugins/user_profile/kolibri_plugin.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import print_function
3 from __future__ import unicode_literals
4
5 from kolibri.core.hooks import NavigationHook
6 from kolibri.core.webpack import hooks as webpack_hooks
7 from kolibri.plugins import KolibriPluginBase
8 from kolibri.plugins.hooks import register_hook
9
10
11 class UserProfile(KolibriPluginBase):
12 translated_view_urls = "urls"
13
14 @property
15 def url_slug(self):
16 return "profile"
17
18
19 @register_hook
20 class UserAuthAsset(webpack_hooks.WebpackBundleHook):
21 bundle_id = "app"
22
23
24 @register_hook
25 class ProfileNavAction(NavigationHook):
26 bundle_id = "user_profile_side_nav"
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kolibri/plugins/user_profile/kolibri_plugin.py b/kolibri/plugins/user_profile/kolibri_plugin.py
--- a/kolibri/plugins/user_profile/kolibri_plugin.py
+++ b/kolibri/plugins/user_profile/kolibri_plugin.py
@@ -2,6 +2,7 @@
from __future__ import print_function
from __future__ import unicode_literals
+from kolibri.core.device.utils import get_device_setting
from kolibri.core.hooks import NavigationHook
from kolibri.core.webpack import hooks as webpack_hooks
from kolibri.plugins import KolibriPluginBase
@@ -20,6 +21,10 @@
class UserAuthAsset(webpack_hooks.WebpackBundleHook):
bundle_id = "app"
+ @property
+ def plugin_data(self):
+ return {"isSubsetOfUsersDevice": get_device_setting("subset_of_users_device")}
+
@register_hook
class ProfileNavAction(NavigationHook):
|
{"golden_diff": "diff --git a/kolibri/plugins/user_profile/kolibri_plugin.py b/kolibri/plugins/user_profile/kolibri_plugin.py\n--- a/kolibri/plugins/user_profile/kolibri_plugin.py\n+++ b/kolibri/plugins/user_profile/kolibri_plugin.py\n@@ -2,6 +2,7 @@\n from __future__ import print_function\n from __future__ import unicode_literals\n \n+from kolibri.core.device.utils import get_device_setting\n from kolibri.core.hooks import NavigationHook\n from kolibri.core.webpack import hooks as webpack_hooks\n from kolibri.plugins import KolibriPluginBase\n@@ -20,6 +21,10 @@\n class UserAuthAsset(webpack_hooks.WebpackBundleHook):\n bundle_id = \"app\"\n \n+ @property\n+ def plugin_data(self):\n+ return {\"isSubsetOfUsersDevice\": get_device_setting(\"subset_of_users_device\")}\n+\n \n @register_hook\n class ProfileNavAction(NavigationHook):\n", "issue": "User on SouD should not be able to edit their user profile\nFrom @jamalex in bug bash: \r\nThe FacilityUser object is read-only for single-user certificates, so these changes will not be synced back. We had a spec doc that laid out \"what should be allowed on a SoUD\", I think this was documented in there \u2014 we should do an audit to make sure other cases are covered as well.\r\n\r\nhttps://www.notion.so/learningequality/CRITICAL-FEATURE-BUG-Users-on-SoUD-should-not-be-allowed-to-edit-their-user-profile-e5765979e7ed4fcaa694cfe7932ac4df\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nfrom kolibri.core.hooks import NavigationHook\nfrom kolibri.core.webpack import hooks as webpack_hooks\nfrom kolibri.plugins import KolibriPluginBase\nfrom kolibri.plugins.hooks import register_hook\n\n\nclass UserProfile(KolibriPluginBase):\n translated_view_urls = \"urls\"\n\n @property\n def url_slug(self):\n return \"profile\"\n\n\n@register_hook\nclass UserAuthAsset(webpack_hooks.WebpackBundleHook):\n bundle_id = \"app\"\n\n\n@register_hook\nclass ProfileNavAction(NavigationHook):\n bundle_id = \"user_profile_side_nav\"\n", "path": "kolibri/plugins/user_profile/kolibri_plugin.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nfrom kolibri.core.device.utils import get_device_setting\nfrom kolibri.core.hooks import NavigationHook\nfrom kolibri.core.webpack import hooks as webpack_hooks\nfrom kolibri.plugins import KolibriPluginBase\nfrom kolibri.plugins.hooks import register_hook\n\n\nclass UserProfile(KolibriPluginBase):\n translated_view_urls = \"urls\"\n\n @property\n def url_slug(self):\n return \"profile\"\n\n\n@register_hook\nclass UserAuthAsset(webpack_hooks.WebpackBundleHook):\n bundle_id = \"app\"\n\n @property\n def plugin_data(self):\n return {\"isSubsetOfUsersDevice\": get_device_setting(\"subset_of_users_device\")}\n\n\n@register_hook\nclass ProfileNavAction(NavigationHook):\n bundle_id = \"user_profile_side_nav\"\n", "path": "kolibri/plugins/user_profile/kolibri_plugin.py"}]}
| 605 | 204 |
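The patch in the record above only exposes the `subset_of_users_device` flag to the frontend bundle via `plugin_data`. As a rough sketch of how the same flag could gate edits elsewhere, assuming nothing about Kolibri's actual view layer (the helper below is hypothetical and only reuses the call shown in the diff):

```python
from kolibri.core.device.utils import get_device_setting

def profile_is_editable():
    # Hypothetical guard: on a subset-of-users device the FacilityUser object
    # is read-only for syncing purposes, so profile edits should be refused.
    return not get_device_setting("subset_of_users_device")
```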
gh_patches_debug_18363
|
rasdani/github-patches
|
git_diff
|
pytorch__pytorch-78948
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Functional API for FileLister
### 🚀 The feature, motivation and pitch
Similar to https://github.com/pytorch/data/issues/387
This allows for
```python
IterableWrapper([...]).list_file()
```
### Alternatives
_No response_
### Additional context
_No response_
cc @VitalyFedyunin @ejguan @NivekT
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torch/utils/data/datapipes/iter/filelister.py`
Content:
```
1 from typing import Iterator, List, Sequence, Union
2
3 from torch.utils.data.datapipes.datapipe import IterDataPipe
4 from torch.utils.data.datapipes.iter import IterableWrapper
5 from torch.utils.data.datapipes.utils.common import get_file_pathnames_from_root
6
7 __all__ = ["FileListerIterDataPipe", ]
8
9
10 class FileListerIterDataPipe(IterDataPipe[str]):
11 r"""
12 Given path(s) to the root directory, yields file pathname(s) (path + filename) of files within the root directory.
13 Multiple root directories can be provided.
14
15 Args:
16 root: Root directory or a sequence of root directories
17 masks: Unix style filter string or string list for filtering file name(s)
18 recursive: Whether to return pathname from nested directories or not
19 abspath: Whether to return relative pathname or absolute pathname
20 non_deterministic: Whether to return pathname in sorted order or not.
21 If ``False``, the results yielded from each root directory will be sorted
22 length: Nominal length of the datapipe
23
24 Example:
25 >>> from torchdata.datapipes.iter import FileLister
26 >>> dp = FileLister(root=".", recursive=True)
27 >>> list(dp)
28 ['example.py', './data/data.tar']
29 """
30
31 def __init__(
32 self,
33 root: Union[str, Sequence[str], IterDataPipe] = '.',
34 masks: Union[str, List[str]] = '',
35 *,
36 recursive: bool = False,
37 abspath: bool = False,
38 non_deterministic: bool = False,
39 length: int = -1
40 ) -> None:
41 super().__init__()
42 if isinstance(root, str):
43 root = [root, ]
44 if not isinstance(root, IterDataPipe):
45 root = IterableWrapper(root)
46 self.datapipe: IterDataPipe = root
47 self.masks: Union[str, List[str]] = masks
48 self.recursive: bool = recursive
49 self.abspath: bool = abspath
50 self.non_deterministic: bool = non_deterministic
51 self.length: int = length
52
53 def __iter__(self) -> Iterator[str] :
54 for path in self.datapipe:
55 yield from get_file_pathnames_from_root(path, self.masks, self.recursive, self.abspath, self.non_deterministic)
56
57 def __len__(self):
58 if self.length == -1:
59 raise TypeError("{} instance doesn't have valid length".format(type(self).__name__))
60 return self.length
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/torch/utils/data/datapipes/iter/filelister.py b/torch/utils/data/datapipes/iter/filelister.py
--- a/torch/utils/data/datapipes/iter/filelister.py
+++ b/torch/utils/data/datapipes/iter/filelister.py
@@ -1,5 +1,8 @@
from typing import Iterator, List, Sequence, Union
+
+from torch.utils.data.datapipes._decorator import functional_datapipe
+
from torch.utils.data.datapipes.datapipe import IterDataPipe
from torch.utils.data.datapipes.iter import IterableWrapper
from torch.utils.data.datapipes.utils.common import get_file_pathnames_from_root
@@ -7,6 +10,7 @@
__all__ = ["FileListerIterDataPipe", ]
+@functional_datapipe("list_files")
class FileListerIterDataPipe(IterDataPipe[str]):
r"""
Given path(s) to the root directory, yields file pathname(s) (path + filename) of files within the root directory.
|
{"golden_diff": "diff --git a/torch/utils/data/datapipes/iter/filelister.py b/torch/utils/data/datapipes/iter/filelister.py\n--- a/torch/utils/data/datapipes/iter/filelister.py\n+++ b/torch/utils/data/datapipes/iter/filelister.py\n@@ -1,5 +1,8 @@\n from typing import Iterator, List, Sequence, Union\n \n+\n+from torch.utils.data.datapipes._decorator import functional_datapipe\n+\n from torch.utils.data.datapipes.datapipe import IterDataPipe\n from torch.utils.data.datapipes.iter import IterableWrapper\n from torch.utils.data.datapipes.utils.common import get_file_pathnames_from_root\n@@ -7,6 +10,7 @@\n __all__ = [\"FileListerIterDataPipe\", ]\n \n \n+@functional_datapipe(\"list_files\")\n class FileListerIterDataPipe(IterDataPipe[str]):\n r\"\"\"\n Given path(s) to the root directory, yields file pathname(s) (path + filename) of files within the root directory.\n", "issue": "Functional API for FileLister\n### \ud83d\ude80 The feature, motivation and pitch\n\nSimilar to https://github.com/pytorch/data/issues/387\r\n\r\nThis allows for \r\n```python\r\nIterableWrapper([...]).list_file()\r\n```\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\ncc @VitalyFedyunin @ejguan @NivekT\n", "before_files": [{"content": "from typing import Iterator, List, Sequence, Union\n\nfrom torch.utils.data.datapipes.datapipe import IterDataPipe\nfrom torch.utils.data.datapipes.iter import IterableWrapper\nfrom torch.utils.data.datapipes.utils.common import get_file_pathnames_from_root\n\n__all__ = [\"FileListerIterDataPipe\", ]\n\n\nclass FileListerIterDataPipe(IterDataPipe[str]):\n r\"\"\"\n Given path(s) to the root directory, yields file pathname(s) (path + filename) of files within the root directory.\n Multiple root directories can be provided.\n\n Args:\n root: Root directory or a sequence of root directories\n masks: Unix style filter string or string list for filtering file name(s)\n recursive: Whether to return pathname from nested directories or not\n abspath: Whether to return relative pathname or absolute pathname\n non_deterministic: Whether to return pathname in sorted order or not.\n If ``False``, the results yielded from each root directory will be sorted\n length: Nominal length of the datapipe\n\n Example:\n >>> from torchdata.datapipes.iter import FileLister\n >>> dp = FileLister(root=\".\", recursive=True)\n >>> list(dp)\n ['example.py', './data/data.tar']\n \"\"\"\n\n def __init__(\n self,\n root: Union[str, Sequence[str], IterDataPipe] = '.',\n masks: Union[str, List[str]] = '',\n *,\n recursive: bool = False,\n abspath: bool = False,\n non_deterministic: bool = False,\n length: int = -1\n ) -> None:\n super().__init__()\n if isinstance(root, str):\n root = [root, ]\n if not isinstance(root, IterDataPipe):\n root = IterableWrapper(root)\n self.datapipe: IterDataPipe = root\n self.masks: Union[str, List[str]] = masks\n self.recursive: bool = recursive\n self.abspath: bool = abspath\n self.non_deterministic: bool = non_deterministic\n self.length: int = length\n\n def __iter__(self) -> Iterator[str] :\n for path in self.datapipe:\n yield from get_file_pathnames_from_root(path, self.masks, self.recursive, self.abspath, self.non_deterministic)\n\n def __len__(self):\n if self.length == -1:\n raise TypeError(\"{} instance doesn't have valid length\".format(type(self).__name__))\n return self.length\n", "path": "torch/utils/data/datapipes/iter/filelister.py"}], "after_files": [{"content": "from typing import Iterator, List, Sequence, Union\n\n\nfrom 
torch.utils.data.datapipes._decorator import functional_datapipe\n\nfrom torch.utils.data.datapipes.datapipe import IterDataPipe\nfrom torch.utils.data.datapipes.iter import IterableWrapper\nfrom torch.utils.data.datapipes.utils.common import get_file_pathnames_from_root\n\n__all__ = [\"FileListerIterDataPipe\", ]\n\n\n@functional_datapipe(\"list_files\")\nclass FileListerIterDataPipe(IterDataPipe[str]):\n r\"\"\"\n Given path(s) to the root directory, yields file pathname(s) (path + filename) of files within the root directory.\n Multiple root directories can be provided.\n\n Args:\n root: Root directory or a sequence of root directories\n masks: Unix style filter string or string list for filtering file name(s)\n recursive: Whether to return pathname from nested directories or not\n abspath: Whether to return relative pathname or absolute pathname\n non_deterministic: Whether to return pathname in sorted order or not.\n If ``False``, the results yielded from each root directory will be sorted\n length: Nominal length of the datapipe\n\n Example:\n >>> from torchdata.datapipes.iter import FileLister\n >>> dp = FileLister(root=\".\", recursive=True)\n >>> list(dp)\n ['example.py', './data/data.tar']\n \"\"\"\n\n def __init__(\n self,\n root: Union[str, Sequence[str], IterDataPipe] = '.',\n masks: Union[str, List[str]] = '',\n *,\n recursive: bool = False,\n abspath: bool = False,\n non_deterministic: bool = False,\n length: int = -1\n ) -> None:\n super().__init__()\n if isinstance(root, str):\n root = [root, ]\n if not isinstance(root, IterDataPipe):\n root = IterableWrapper(root)\n self.datapipe: IterDataPipe = root\n self.masks: Union[str, List[str]] = masks\n self.recursive: bool = recursive\n self.abspath: bool = abspath\n self.non_deterministic: bool = non_deterministic\n self.length: int = length\n\n def __iter__(self) -> Iterator[str] :\n for path in self.datapipe:\n yield from get_file_pathnames_from_root(path, self.masks, self.recursive, self.abspath, self.non_deterministic)\n\n def __len__(self):\n if self.length == -1:\n raise TypeError(\"{} instance doesn't have valid length\".format(type(self).__name__))\n return self.length\n", "path": "torch/utils/data/datapipes/iter/filelister.py"}]}
| 998 | 226 |
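With the `@functional_datapipe("list_files")` registration from the diff above, the chained form requested in the issue becomes available (note the registered name is `list_files`, not the `list_file` spelling used in the issue body). A small usage sketch, assuming the patched datapipe:

```python
from torch.utils.data.datapipes.iter import IterableWrapper

# Each root emitted by the wrapper is expanded into matching file paths.
dp = IterableWrapper(["./data", "./configs"]).list_files(masks="*.py", recursive=True)
for path in dp:
    print(path)
```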
gh_patches_debug_20988
|
rasdani/github-patches
|
git_diff
|
numba__numba-4623
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tuples containing bools do not work with `numba.typed.List` or `numba.typed.Dict`
On the latest release (v0.35.1) on Python 3.7, the following:
```
import numba
numba.typed.List.empty_list(numba.typeof(("hello", True)))
```
fails with:
<details>
<summary>Stack Trace</summary>
```
TypingError Traceback (most recent call last)
<ipython-input-45-d29ae2daf2f2> in <module>
----> 1 l = numba.typed.List.empty_list(numba.typeof((0.0, "hello", True)))
/opt/anaconda/envs/test/lib/python3.7/site-packages/numba/typed/typedlist.py in empty_list(cls, item_type)
149 of the list .
150 """
--> 151 return cls(lsttype=ListType(item_type))
152
153 def __init__(self, **kwargs):
/opt/anaconda/envs/test/lib/python3.7/site-packages/numba/typed/typedlist.py in __init__(self, **kwargs)
164 """
165 if kwargs:
--> 166 self._list_type, self._opaque = self._parse_arg(**kwargs)
167 else:
168 self._list_type = None
/opt/anaconda/envs/test/lib/python3.7/site-packages/numba/typed/typedlist.py in _parse_arg(self, lsttype, meminfo)
175 opaque = meminfo
176 else:
--> 177 opaque = _make_list(lsttype.item_type)
178 return lsttype, opaque
179
/opt/anaconda/envs/test/lib/python3.7/site-packages/numba/dispatcher.py in _compile_for_args(self, *args, **kws)
374 e.patch_message(msg)
375
--> 376 error_rewrite(e, 'typing')
377 except errors.UnsupportedError as e:
378 # Something unsupported is present in the user code, add help info
/opt/anaconda/envs/test/lib/python3.7/site-packages/numba/dispatcher.py in error_rewrite(e, issue_type)
341 raise e
342 else:
--> 343 reraise(type(e), e, None)
344
345 argtypes = []
/opt/anaconda/envs/test/lib/python3.7/site-packages/numba/six.py in reraise(tp, value, tb)
656 value = tp()
657 if value.__traceback__ is not tb:
--> 658 raise value.with_traceback(tb)
659 raise value
660
TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Invalid use of Function(<function new_list at 0x7f29d63d4510>) with argument(s) of type(s): (typeref[(unicode_type, bool)])
* parameterized
In definition 0:
LoweringError: Failed in nopython mode pipeline (step: nopython mode backend)
expecting {{i8*, i64, i32, i32, i64, i8*, i8*}, i1} but got {{i8*, i64, i32, i32, i64, i8*, i8*}, i8}
File "../../opt/anaconda/envs/test/lib/python3.7/site-packages/numba/listobject.py", line 347:
def imp(item):
<source elided>
lp = _list_new(itemty)
_list_set_method_table(lp, itemty)
^
[1] During: lowering "$0.7 = call $0.4(lp, $0.6, func=$0.4, args=[Var(lp, /opt/anaconda/envs/test/lib/python3.7/site-packages/numba/listobject.py (346)), Var($0.6, /opt/anaconda/envs/test/lib/python3.7/site-packages/numba/listobject.py (347))], kws=(), vararg=None)" at /opt/anaconda/envs/test/lib/python3.7/site-packages/numba/listobject.py (347)
raised from /opt/anaconda/envs/test/lib/python3.7/site-packages/numba/six.py:659
In definition 1:
LoweringError: Failed in nopython mode pipeline (step: nopython mode backend)
expecting {{i8*, i64, i32, i32, i64, i8*, i8*}, i1} but got {{i8*, i64, i32, i32, i64, i8*, i8*}, i8}
File "../../opt/anaconda/envs/test/lib/python3.7/site-packages/numba/listobject.py", line 347:
def imp(item):
<source elided>
lp = _list_new(itemty)
_list_set_method_table(lp, itemty)
^
[1] During: lowering "$0.7 = call $0.4(lp, $0.6, func=$0.4, args=[Var(lp, /opt/anaconda/envs/test/lib/python3.7/site-packages/numba/listobject.py (346)), Var($0.6, /opt/anaconda/envs/test/lib/python3.7/site-packages/numba/listobject.py (347))], kws=(), vararg=None)" at /opt/anaconda/envs/test/lib/python3.7/site-packages/numba/listobject.py (347)
raised from /opt/anaconda/envs/test/lib/python3.7/site-packages/numba/six.py:659
This error is usually caused by passing an argument of a type that is unsupported by the named function.
[1] During: resolving callee type: Function(<function new_list at 0x7f29d63d4510>)
[2] During: typing of call at /opt/anaconda/envs/test/lib/python3.7/site-packages/numba/typed/typedlist.py (29)
```
</details>
It looks like using a tuple of `(int32, bool)` works fine, however.
## Reporting a bug
<!--
Before submitting a bug report please ensure that you can check off these boxes:
-->
- [x] I am using the latest released version of Numba (most recent is visible in
the change log (https://github.com/numba/numba/blob/master/CHANGE_LOG).
- [x] I have included below a minimal working reproducer (if you are unsure how
to write one see http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports).
<!--
Please include details of the bug here, including, if applicable, what you
expected to happen!
-->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `numba/typedobjectutils.py`
Content:
```
1 """ Common compiler level utilities for typed dict and list. """
2
3 import operator
4 import warnings
5
6 from llvmlite import ir
7 from llvmlite.llvmpy.core import Builder
8
9 from numba import cgutils
10 from numba import types
11 from numba import typing
12 from numba.targets.registry import cpu_target
13 from numba.typeconv import Conversion
14 from numba.extending import intrinsic
15 from numba.errors import TypingError, NumbaTypeSafetyWarning
16
17
18 def _as_bytes(builder, ptr):
19 """Helper to do (void*)ptr
20 """
21 return builder.bitcast(ptr, cgutils.voidptr_t)
22
23
24 @intrinsic
25 def _cast(typingctx, val, typ):
26 """Cast *val* to *typ*
27 """
28 def codegen(context, builder, signature, args):
29 [val, typ] = args
30 context.nrt.incref(builder, signature.return_type, val)
31 return val
32 # Using implicit casting in argument types
33 casted = typ.instance_type
34 _sentry_safe_cast(val, casted)
35 sig = casted(casted, typ)
36 return sig, codegen
37
38
39 def _sentry_safe_cast(fromty, toty):
40 """Check and raise TypingError if *fromty* cannot be safely cast to *toty*
41 """
42 tyctxt = cpu_target.typing_context
43 fromty, toty = map(types.unliteral, (fromty, toty))
44 by = tyctxt.can_convert(fromty, toty)
45
46 def warn():
47 m = 'unsafe cast from {} to {}. Precision may be lost.'
48 warnings.warn(m.format(fromty, toty),
49 category=NumbaTypeSafetyWarning)
50
51 isint = lambda x: isinstance(x, types.Integer)
52 isflt = lambda x: isinstance(x, types.Float)
53 iscmplx = lambda x: isinstance(x, types.Complex)
54 # Only check against numeric types.
55 if by is None or by > Conversion.safe:
56 if isint(fromty) and isint(toty):
57 # Accept if both types are ints
58 warn()
59 elif isint(fromty) and isflt(toty):
60 # Accept if ints to floats
61 warn()
62 elif isflt(fromty) and isflt(toty):
63 # Accept if floats to floats
64 warn()
65 elif iscmplx(fromty) and iscmplx(toty):
66 # Accept if complex to complex
67 warn()
68 elif not isinstance(toty, types.Number):
69 # Non-numbers
70 warn()
71 else:
72 # Make it a hard error for numeric type that changes domain.
73 m = 'cannot safely cast {} to {}. Please cast explicitly.'
74 raise TypingError(m.format(fromty, toty))
75
76
77 def _sentry_safe_cast_default(default, valty):
78 """Similar to _sentry_safe_cast but handle default value.
79 """
80 # Handle default values
81 # TODO: simplify default values; too many possible way to spell None
82 if default is None:
83 return
84 if isinstance(default, (types.Omitted, types.NoneType)):
85 return
86 return _sentry_safe_cast(default, valty)
87
88
89 @intrinsic
90 def _nonoptional(typingctx, val):
91 """Typing trick to cast Optional[T] to T
92 """
93 if not isinstance(val, types.Optional):
94 raise TypeError('expected an optional')
95
96 def codegen(context, builder, sig, args):
97 context.nrt.incref(builder, sig.return_type, args[0])
98 return args[0]
99
100 casted = val.type
101 sig = casted(casted)
102 return sig, codegen
103
104
105 def _container_get_data(context, builder, container_ty, c):
106 """Helper to get the C list pointer in a numba containers.
107 """
108 ctor = cgutils.create_struct_proxy(container_ty)
109 conatainer_struct = ctor(context, builder, value=c)
110 return conatainer_struct.data
111
112
113 def _container_get_meminfo(context, builder, container_ty, c):
114 """Helper to get the meminfo for a container
115 """
116 ctor = cgutils.create_struct_proxy(container_ty)
117 conatainer_struct = ctor(context, builder, value=c)
118 return conatainer_struct.meminfo
119
120
121 def _get_incref_decref(context, module, datamodel, container_type):
122 assert datamodel.contains_nrt_meminfo()
123
124 fe_type = datamodel.fe_type
125 data_ptr_ty = datamodel.get_data_type().as_pointer()
126 refct_fnty = ir.FunctionType(ir.VoidType(), [data_ptr_ty])
127 incref_fn = module.get_or_insert_function(
128 refct_fnty,
129 name='.numba_{}_incref${}'.format(container_type, fe_type),
130 )
131 builder = ir.IRBuilder(incref_fn.append_basic_block())
132 context.nrt.incref(builder, fe_type, builder.load(incref_fn.args[0]))
133 builder.ret_void()
134
135 decref_fn = module.get_or_insert_function(
136 refct_fnty,
137 name='.numba_{}_decref${}'.format(container_type, fe_type),
138 )
139 builder = ir.IRBuilder(decref_fn.append_basic_block())
140 context.nrt.decref(builder, fe_type, builder.load(decref_fn.args[0]))
141 builder.ret_void()
142
143 return incref_fn, decref_fn
144
145
146 def _get_equal(context, module, datamodel, container_type):
147 assert datamodel.contains_nrt_meminfo()
148
149 fe_type = datamodel.fe_type
150 data_ptr_ty = datamodel.get_data_type().as_pointer()
151
152 wrapfnty = context.call_conv.get_function_type(types.int32, [fe_type, fe_type])
153 argtypes = [fe_type, fe_type]
154
155 def build_wrapper(fn):
156 builder = Builder(fn.append_basic_block())
157 args = context.call_conv.decode_arguments(builder, argtypes, fn)
158
159 sig = typing.signature(types.boolean, fe_type, fe_type)
160 op = operator.eq
161 fnop = context.typing_context.resolve_value_type(op)
162 fnop.get_call_type(context.typing_context, sig.args, {})
163 eqfn = context.get_function(fnop, sig)
164 res = eqfn(builder, args)
165 intres = context.cast(builder, res, types.boolean, types.int32)
166 context.call_conv.return_value(builder, intres)
167
168 wrapfn = module.get_or_insert_function(
169 wrapfnty,
170 name='.numba_{}_item_equal.wrap${}'.format(container_type, fe_type)
171 )
172 build_wrapper(wrapfn)
173
174 equal_fnty = ir.FunctionType(ir.IntType(32), [data_ptr_ty, data_ptr_ty])
175 equal_fn = module.get_or_insert_function(
176 equal_fnty,
177 name='.numba_{}_item_equal${}'.format(container_type, fe_type),
178 )
179 builder = Builder(equal_fn.append_basic_block())
180 lhs = datamodel.load_from_data_pointer(builder, equal_fn.args[0])
181 rhs = datamodel.load_from_data_pointer(builder, equal_fn.args[1])
182
183 status, retval = context.call_conv.call_function(
184 builder, wrapfn, types.boolean, argtypes, [lhs, rhs],
185 )
186 with builder.if_then(status.is_ok, likely=True):
187 with builder.if_then(status.is_none):
188 builder.ret(context.get_constant(types.int32, 0))
189 retval = context.cast(builder, retval, types.boolean, types.int32)
190 builder.ret(retval)
191 # Error out
192 builder.ret(context.get_constant(types.int32, -1))
193
194 return equal_fn
195
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/numba/typedobjectutils.py b/numba/typedobjectutils.py
--- a/numba/typedobjectutils.py
+++ b/numba/typedobjectutils.py
@@ -128,8 +128,12 @@
refct_fnty,
name='.numba_{}_incref${}'.format(container_type, fe_type),
)
+
builder = ir.IRBuilder(incref_fn.append_basic_block())
- context.nrt.incref(builder, fe_type, builder.load(incref_fn.args[0]))
+ context.nrt.incref(
+ builder, fe_type,
+ datamodel.load_from_data_pointer(builder, incref_fn.args[0]),
+ )
builder.ret_void()
decref_fn = module.get_or_insert_function(
@@ -137,7 +141,10 @@
name='.numba_{}_decref${}'.format(container_type, fe_type),
)
builder = ir.IRBuilder(decref_fn.append_basic_block())
- context.nrt.decref(builder, fe_type, builder.load(decref_fn.args[0]))
+ context.nrt.decref(
+ builder, fe_type,
+ datamodel.load_from_data_pointer(builder, decref_fn.args[0]),
+ )
builder.ret_void()
return incref_fn, decref_fn
|
{"golden_diff": "diff --git a/numba/typedobjectutils.py b/numba/typedobjectutils.py\n--- a/numba/typedobjectutils.py\n+++ b/numba/typedobjectutils.py\n@@ -128,8 +128,12 @@\n refct_fnty,\n name='.numba_{}_incref${}'.format(container_type, fe_type),\n )\n+\n builder = ir.IRBuilder(incref_fn.append_basic_block())\n- context.nrt.incref(builder, fe_type, builder.load(incref_fn.args[0]))\n+ context.nrt.incref(\n+ builder, fe_type,\n+ datamodel.load_from_data_pointer(builder, incref_fn.args[0]),\n+ )\n builder.ret_void()\n \n decref_fn = module.get_or_insert_function(\n@@ -137,7 +141,10 @@\n name='.numba_{}_decref${}'.format(container_type, fe_type),\n )\n builder = ir.IRBuilder(decref_fn.append_basic_block())\n- context.nrt.decref(builder, fe_type, builder.load(decref_fn.args[0]))\n+ context.nrt.decref(\n+ builder, fe_type,\n+ datamodel.load_from_data_pointer(builder, decref_fn.args[0]),\n+ )\n builder.ret_void()\n \n return incref_fn, decref_fn\n", "issue": "Tuples containing bools do not work with `numba.typed.List` or `numba.typed.Dict`\nOn the latest release (v0.35.1) on Python 3.7, the following:\r\n\r\n\r\n```\r\nimport numba\r\nnumba.typed.List.empty_list(numba.typeof((\"hello\", True)))\r\n```\r\n\r\nfails with:\r\n\r\n<details>\r\n<summary>Stack Trace</summary>\r\n\r\n```\r\nTypingError Traceback (most recent call last)\r\n<ipython-input-45-d29ae2daf2f2> in <module>\r\n----> 1 l = numba.typed.List.empty_list(numba.typeof((0.0, \"hello\", True)))\r\n\r\n/opt/anaconda/envs/test/lib/python3.7/site-packages/numba/typed/typedlist.py in empty_list(cls, item_type)\r\n 149 of the list .\r\n 150 \"\"\"\r\n--> 151 return cls(lsttype=ListType(item_type))\r\n 152 \r\n 153 def __init__(self, **kwargs):\r\n\r\n/opt/anaconda/envs/test/lib/python3.7/site-packages/numba/typed/typedlist.py in __init__(self, **kwargs)\r\n 164 \"\"\"\r\n 165 if kwargs:\r\n--> 166 self._list_type, self._opaque = self._parse_arg(**kwargs)\r\n 167 else:\r\n 168 self._list_type = None\r\n\r\n/opt/anaconda/envs/test/lib/python3.7/site-packages/numba/typed/typedlist.py in _parse_arg(self, lsttype, meminfo)\r\n 175 opaque = meminfo\r\n 176 else:\r\n--> 177 opaque = _make_list(lsttype.item_type)\r\n 178 return lsttype, opaque\r\n 179 \r\n\r\n/opt/anaconda/envs/test/lib/python3.7/site-packages/numba/dispatcher.py in _compile_for_args(self, *args, **kws)\r\n 374 e.patch_message(msg)\r\n 375 \r\n--> 376 error_rewrite(e, 'typing')\r\n 377 except errors.UnsupportedError as e:\r\n 378 # Something unsupported is present in the user code, add help info\r\n\r\n/opt/anaconda/envs/test/lib/python3.7/site-packages/numba/dispatcher.py in error_rewrite(e, issue_type)\r\n 341 raise e\r\n 342 else:\r\n--> 343 reraise(type(e), e, None)\r\n 344 \r\n 345 argtypes = []\r\n\r\n/opt/anaconda/envs/test/lib/python3.7/site-packages/numba/six.py in reraise(tp, value, tb)\r\n 656 value = tp()\r\n 657 if value.__traceback__ is not tb:\r\n--> 658 raise value.with_traceback(tb)\r\n 659 raise value\r\n 660 \r\n\r\n\r\nTypingError: Failed in nopython mode pipeline (step: nopython frontend)\r\nInvalid use of Function(<function new_list at 0x7f29d63d4510>) with argument(s) of type(s): (typeref[(unicode_type, bool)])\r\n * parameterized\r\nIn definition 0:\r\n LoweringError: Failed in nopython mode pipeline (step: nopython mode backend)\r\nexpecting {{i8*, i64, i32, i32, i64, i8*, i8*}, i1} but got {{i8*, i64, i32, i32, i64, i8*, i8*}, i8}\r\n\r\nFile \"../../opt/anaconda/envs/test/lib/python3.7/site-packages/numba/listobject.py\", line 347:\r\n def imp(item):\r\n 
<source elided>\r\n lp = _list_new(itemty)\r\n _list_set_method_table(lp, itemty)\r\n ^\r\n\r\n[1] During: lowering \"$0.7 = call $0.4(lp, $0.6, func=$0.4, args=[Var(lp, /opt/anaconda/envs/test/lib/python3.7/site-packages/numba/listobject.py (346)), Var($0.6, /opt/anaconda/envs/test/lib/python3.7/site-packages/numba/listobject.py (347))], kws=(), vararg=None)\" at /opt/anaconda/envs/test/lib/python3.7/site-packages/numba/listobject.py (347)\r\n raised from /opt/anaconda/envs/test/lib/python3.7/site-packages/numba/six.py:659\r\nIn definition 1:\r\n LoweringError: Failed in nopython mode pipeline (step: nopython mode backend)\r\nexpecting {{i8*, i64, i32, i32, i64, i8*, i8*}, i1} but got {{i8*, i64, i32, i32, i64, i8*, i8*}, i8}\r\n\r\nFile \"../../opt/anaconda/envs/test/lib/python3.7/site-packages/numba/listobject.py\", line 347:\r\n def imp(item):\r\n <source elided>\r\n lp = _list_new(itemty)\r\n _list_set_method_table(lp, itemty)\r\n ^\r\n\r\n[1] During: lowering \"$0.7 = call $0.4(lp, $0.6, func=$0.4, args=[Var(lp, /opt/anaconda/envs/test/lib/python3.7/site-packages/numba/listobject.py (346)), Var($0.6, /opt/anaconda/envs/test/lib/python3.7/site-packages/numba/listobject.py (347))], kws=(), vararg=None)\" at /opt/anaconda/envs/test/lib/python3.7/site-packages/numba/listobject.py (347)\r\n raised from /opt/anaconda/envs/test/lib/python3.7/site-packages/numba/six.py:659\r\nThis error is usually caused by passing an argument of a type that is unsupported by the named function.\r\n[1] During: resolving callee type: Function(<function new_list at 0x7f29d63d4510>)\r\n[2] During: typing of call at /opt/anaconda/envs/test/lib/python3.7/site-packages/numba/typed/typedlist.py (29)\r\n\r\n```\r\n\r\n</details>\r\n\r\n\r\nIt looks like using a tuple of `(int32, bool)` works fine, however.\r\n\r\n\r\n## Reporting a bug\r\n\r\n<!--\r\n\r\nBefore submitting a bug report please ensure that you can check off these boxes:\r\n\r\n-->\r\n\r\n- [x] I am using the latest released version of Numba (most recent is visible in\r\n the change log (https://github.com/numba/numba/blob/master/CHANGE_LOG).\r\n- [x] I have included below a minimal working reproducer (if you are unsure how\r\n to write one see http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports).\r\n\r\n<!--\r\n\r\nPlease include details of the bug here, including, if applicable, what you\r\nexpected to happen!\r\n\r\n-->\r\n\n", "before_files": [{"content": "\"\"\" Common compiler level utilities for typed dict and list. 
\"\"\"\n\nimport operator\nimport warnings\n\nfrom llvmlite import ir\nfrom llvmlite.llvmpy.core import Builder\n\nfrom numba import cgutils\nfrom numba import types\nfrom numba import typing\nfrom numba.targets.registry import cpu_target\nfrom numba.typeconv import Conversion\nfrom numba.extending import intrinsic\nfrom numba.errors import TypingError, NumbaTypeSafetyWarning\n\n\ndef _as_bytes(builder, ptr):\n \"\"\"Helper to do (void*)ptr\n \"\"\"\n return builder.bitcast(ptr, cgutils.voidptr_t)\n\n\n@intrinsic\ndef _cast(typingctx, val, typ):\n \"\"\"Cast *val* to *typ*\n \"\"\"\n def codegen(context, builder, signature, args):\n [val, typ] = args\n context.nrt.incref(builder, signature.return_type, val)\n return val\n # Using implicit casting in argument types\n casted = typ.instance_type\n _sentry_safe_cast(val, casted)\n sig = casted(casted, typ)\n return sig, codegen\n\n\ndef _sentry_safe_cast(fromty, toty):\n \"\"\"Check and raise TypingError if *fromty* cannot be safely cast to *toty*\n \"\"\"\n tyctxt = cpu_target.typing_context\n fromty, toty = map(types.unliteral, (fromty, toty))\n by = tyctxt.can_convert(fromty, toty)\n\n def warn():\n m = 'unsafe cast from {} to {}. Precision may be lost.'\n warnings.warn(m.format(fromty, toty),\n category=NumbaTypeSafetyWarning)\n\n isint = lambda x: isinstance(x, types.Integer)\n isflt = lambda x: isinstance(x, types.Float)\n iscmplx = lambda x: isinstance(x, types.Complex)\n # Only check against numeric types.\n if by is None or by > Conversion.safe:\n if isint(fromty) and isint(toty):\n # Accept if both types are ints\n warn()\n elif isint(fromty) and isflt(toty):\n # Accept if ints to floats\n warn()\n elif isflt(fromty) and isflt(toty):\n # Accept if floats to floats\n warn()\n elif iscmplx(fromty) and iscmplx(toty):\n # Accept if complex to complex\n warn()\n elif not isinstance(toty, types.Number):\n # Non-numbers\n warn()\n else:\n # Make it a hard error for numeric type that changes domain.\n m = 'cannot safely cast {} to {}. 
Please cast explicitly.'\n raise TypingError(m.format(fromty, toty))\n\n\ndef _sentry_safe_cast_default(default, valty):\n \"\"\"Similar to _sentry_safe_cast but handle default value.\n \"\"\"\n # Handle default values\n # TODO: simplify default values; too many possible way to spell None\n if default is None:\n return\n if isinstance(default, (types.Omitted, types.NoneType)):\n return\n return _sentry_safe_cast(default, valty)\n\n\n@intrinsic\ndef _nonoptional(typingctx, val):\n \"\"\"Typing trick to cast Optional[T] to T\n \"\"\"\n if not isinstance(val, types.Optional):\n raise TypeError('expected an optional')\n\n def codegen(context, builder, sig, args):\n context.nrt.incref(builder, sig.return_type, args[0])\n return args[0]\n\n casted = val.type\n sig = casted(casted)\n return sig, codegen\n\n\ndef _container_get_data(context, builder, container_ty, c):\n \"\"\"Helper to get the C list pointer in a numba containers.\n \"\"\"\n ctor = cgutils.create_struct_proxy(container_ty)\n conatainer_struct = ctor(context, builder, value=c)\n return conatainer_struct.data\n\n\ndef _container_get_meminfo(context, builder, container_ty, c):\n \"\"\"Helper to get the meminfo for a container\n \"\"\"\n ctor = cgutils.create_struct_proxy(container_ty)\n conatainer_struct = ctor(context, builder, value=c)\n return conatainer_struct.meminfo\n\n\ndef _get_incref_decref(context, module, datamodel, container_type):\n assert datamodel.contains_nrt_meminfo()\n\n fe_type = datamodel.fe_type\n data_ptr_ty = datamodel.get_data_type().as_pointer()\n refct_fnty = ir.FunctionType(ir.VoidType(), [data_ptr_ty])\n incref_fn = module.get_or_insert_function(\n refct_fnty,\n name='.numba_{}_incref${}'.format(container_type, fe_type),\n )\n builder = ir.IRBuilder(incref_fn.append_basic_block())\n context.nrt.incref(builder, fe_type, builder.load(incref_fn.args[0]))\n builder.ret_void()\n\n decref_fn = module.get_or_insert_function(\n refct_fnty,\n name='.numba_{}_decref${}'.format(container_type, fe_type),\n )\n builder = ir.IRBuilder(decref_fn.append_basic_block())\n context.nrt.decref(builder, fe_type, builder.load(decref_fn.args[0]))\n builder.ret_void()\n\n return incref_fn, decref_fn\n\n\ndef _get_equal(context, module, datamodel, container_type):\n assert datamodel.contains_nrt_meminfo()\n\n fe_type = datamodel.fe_type\n data_ptr_ty = datamodel.get_data_type().as_pointer()\n\n wrapfnty = context.call_conv.get_function_type(types.int32, [fe_type, fe_type])\n argtypes = [fe_type, fe_type]\n\n def build_wrapper(fn):\n builder = Builder(fn.append_basic_block())\n args = context.call_conv.decode_arguments(builder, argtypes, fn)\n\n sig = typing.signature(types.boolean, fe_type, fe_type)\n op = operator.eq\n fnop = context.typing_context.resolve_value_type(op)\n fnop.get_call_type(context.typing_context, sig.args, {})\n eqfn = context.get_function(fnop, sig)\n res = eqfn(builder, args)\n intres = context.cast(builder, res, types.boolean, types.int32)\n context.call_conv.return_value(builder, intres)\n\n wrapfn = module.get_or_insert_function(\n wrapfnty,\n name='.numba_{}_item_equal.wrap${}'.format(container_type, fe_type)\n )\n build_wrapper(wrapfn)\n\n equal_fnty = ir.FunctionType(ir.IntType(32), [data_ptr_ty, data_ptr_ty])\n equal_fn = module.get_or_insert_function(\n equal_fnty,\n name='.numba_{}_item_equal${}'.format(container_type, fe_type),\n )\n builder = Builder(equal_fn.append_basic_block())\n lhs = datamodel.load_from_data_pointer(builder, equal_fn.args[0])\n rhs = datamodel.load_from_data_pointer(builder, 
equal_fn.args[1])\n\n status, retval = context.call_conv.call_function(\n builder, wrapfn, types.boolean, argtypes, [lhs, rhs],\n )\n with builder.if_then(status.is_ok, likely=True):\n with builder.if_then(status.is_none):\n builder.ret(context.get_constant(types.int32, 0))\n retval = context.cast(builder, retval, types.boolean, types.int32)\n builder.ret(retval)\n # Error out\n builder.ret(context.get_constant(types.int32, -1))\n\n return equal_fn\n", "path": "numba/typedobjectutils.py"}], "after_files": [{"content": "\"\"\" Common compiler level utilities for typed dict and list. \"\"\"\n\nimport operator\nimport warnings\n\nfrom llvmlite import ir\nfrom llvmlite.llvmpy.core import Builder\n\nfrom numba import cgutils\nfrom numba import types\nfrom numba import typing\nfrom numba.targets.registry import cpu_target\nfrom numba.typeconv import Conversion\nfrom numba.extending import intrinsic\nfrom numba.errors import TypingError, NumbaTypeSafetyWarning\n\n\ndef _as_bytes(builder, ptr):\n \"\"\"Helper to do (void*)ptr\n \"\"\"\n return builder.bitcast(ptr, cgutils.voidptr_t)\n\n\n@intrinsic\ndef _cast(typingctx, val, typ):\n \"\"\"Cast *val* to *typ*\n \"\"\"\n def codegen(context, builder, signature, args):\n [val, typ] = args\n context.nrt.incref(builder, signature.return_type, val)\n return val\n # Using implicit casting in argument types\n casted = typ.instance_type\n _sentry_safe_cast(val, casted)\n sig = casted(casted, typ)\n return sig, codegen\n\n\ndef _sentry_safe_cast(fromty, toty):\n \"\"\"Check and raise TypingError if *fromty* cannot be safely cast to *toty*\n \"\"\"\n tyctxt = cpu_target.typing_context\n fromty, toty = map(types.unliteral, (fromty, toty))\n by = tyctxt.can_convert(fromty, toty)\n\n def warn():\n m = 'unsafe cast from {} to {}. Precision may be lost.'\n warnings.warn(m.format(fromty, toty),\n category=NumbaTypeSafetyWarning)\n\n isint = lambda x: isinstance(x, types.Integer)\n isflt = lambda x: isinstance(x, types.Float)\n iscmplx = lambda x: isinstance(x, types.Complex)\n # Only check against numeric types.\n if by is None or by > Conversion.safe:\n if isint(fromty) and isint(toty):\n # Accept if both types are ints\n warn()\n elif isint(fromty) and isflt(toty):\n # Accept if ints to floats\n warn()\n elif isflt(fromty) and isflt(toty):\n # Accept if floats to floats\n warn()\n elif iscmplx(fromty) and iscmplx(toty):\n # Accept if complex to complex\n warn()\n elif not isinstance(toty, types.Number):\n # Non-numbers\n warn()\n else:\n # Make it a hard error for numeric type that changes domain.\n m = 'cannot safely cast {} to {}. 
Please cast explicitly.'\n raise TypingError(m.format(fromty, toty))\n\n\ndef _sentry_safe_cast_default(default, valty):\n \"\"\"Similar to _sentry_safe_cast but handle default value.\n \"\"\"\n # Handle default values\n # TODO: simplify default values; too many possible way to spell None\n if default is None:\n return\n if isinstance(default, (types.Omitted, types.NoneType)):\n return\n return _sentry_safe_cast(default, valty)\n\n\n@intrinsic\ndef _nonoptional(typingctx, val):\n \"\"\"Typing trick to cast Optional[T] to T\n \"\"\"\n if not isinstance(val, types.Optional):\n raise TypeError('expected an optional')\n\n def codegen(context, builder, sig, args):\n context.nrt.incref(builder, sig.return_type, args[0])\n return args[0]\n\n casted = val.type\n sig = casted(casted)\n return sig, codegen\n\n\ndef _container_get_data(context, builder, container_ty, c):\n \"\"\"Helper to get the C list pointer in a numba containers.\n \"\"\"\n ctor = cgutils.create_struct_proxy(container_ty)\n conatainer_struct = ctor(context, builder, value=c)\n return conatainer_struct.data\n\n\ndef _container_get_meminfo(context, builder, container_ty, c):\n \"\"\"Helper to get the meminfo for a container\n \"\"\"\n ctor = cgutils.create_struct_proxy(container_ty)\n conatainer_struct = ctor(context, builder, value=c)\n return conatainer_struct.meminfo\n\n\ndef _get_incref_decref(context, module, datamodel, container_type):\n assert datamodel.contains_nrt_meminfo()\n\n fe_type = datamodel.fe_type\n data_ptr_ty = datamodel.get_data_type().as_pointer()\n refct_fnty = ir.FunctionType(ir.VoidType(), [data_ptr_ty])\n incref_fn = module.get_or_insert_function(\n refct_fnty,\n name='.numba_{}_incref${}'.format(container_type, fe_type),\n )\n\n builder = ir.IRBuilder(incref_fn.append_basic_block())\n context.nrt.incref(\n builder, fe_type,\n datamodel.load_from_data_pointer(builder, incref_fn.args[0]),\n )\n builder.ret_void()\n\n decref_fn = module.get_or_insert_function(\n refct_fnty,\n name='.numba_{}_decref${}'.format(container_type, fe_type),\n )\n builder = ir.IRBuilder(decref_fn.append_basic_block())\n context.nrt.decref(\n builder, fe_type,\n datamodel.load_from_data_pointer(builder, decref_fn.args[0]),\n )\n builder.ret_void()\n\n return incref_fn, decref_fn\n\n\ndef _get_equal(context, module, datamodel, container_type):\n assert datamodel.contains_nrt_meminfo()\n\n fe_type = datamodel.fe_type\n data_ptr_ty = datamodel.get_data_type().as_pointer()\n\n wrapfnty = context.call_conv.get_function_type(types.int32, [fe_type, fe_type])\n argtypes = [fe_type, fe_type]\n\n def build_wrapper(fn):\n builder = Builder(fn.append_basic_block())\n args = context.call_conv.decode_arguments(builder, argtypes, fn)\n\n sig = typing.signature(types.boolean, fe_type, fe_type)\n op = operator.eq\n fnop = context.typing_context.resolve_value_type(op)\n fnop.get_call_type(context.typing_context, sig.args, {})\n eqfn = context.get_function(fnop, sig)\n res = eqfn(builder, args)\n intres = context.cast(builder, res, types.boolean, types.int32)\n context.call_conv.return_value(builder, intres)\n\n wrapfn = module.get_or_insert_function(\n wrapfnty,\n name='.numba_{}_item_equal.wrap${}'.format(container_type, fe_type)\n )\n build_wrapper(wrapfn)\n\n equal_fnty = ir.FunctionType(ir.IntType(32), [data_ptr_ty, data_ptr_ty])\n equal_fn = module.get_or_insert_function(\n equal_fnty,\n name='.numba_{}_item_equal${}'.format(container_type, fe_type),\n )\n builder = Builder(equal_fn.append_basic_block())\n lhs = 
datamodel.load_from_data_pointer(builder, equal_fn.args[0])\n rhs = datamodel.load_from_data_pointer(builder, equal_fn.args[1])\n\n status, retval = context.call_conv.call_function(\n builder, wrapfn, types.boolean, argtypes, [lhs, rhs],\n )\n with builder.if_then(status.is_ok, likely=True):\n with builder.if_then(status.is_none):\n builder.ret(context.get_constant(types.int32, 0))\n retval = context.cast(builder, retval, types.boolean, types.int32)\n builder.ret(retval)\n # Error out\n builder.ret(context.get_constant(types.int32, -1))\n\n return equal_fn\n", "path": "numba/typedobjectutils.py"}]}
| 4,001 | 303 |
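The fix in the record above loads members through `datamodel.load_from_data_pointer` in the incref/decref helpers, so the `i1`/`i8` data-model mismatch from the traceback no longer occurs for tuples containing bools. A short sketch extending the issue's reproducer into actual use, assuming the patched build:

```python
from numba import njit, typeof
from numba.typed import List

lst = List.empty_list(typeof(("hello", True)))
lst.append(("hello", True))
lst.append(("bye", False))

@njit
def count_true(items):
    # Literal index 1 picks the bool member of each (unicode, bool) tuple.
    n = 0
    for i in range(len(items)):
        if items[i][1]:
            n += 1
    return n

print(count_true(lst))  # expected: 1
```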
gh_patches_debug_59556
|
rasdani/github-patches
|
git_diff
|
elastic__apm-agent-python-1076
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FastAPI: Calls to undefined routes result in 500
**Description**
When using the apm agent with FastAPI, calls to any undefined route result in a 500 internal server error. Without apm, 404 not found is returned correctly.
**To Reproduce**
Run the following simple FastAPI app:
```python
import uvicorn
from elasticapm.contrib.starlette import ElasticAPM, make_apm_client
from fastapi import FastAPI
app = FastAPI()
apm = make_apm_client(
{
"SERVICE_NAME": "undefined-route",
}
)
app.add_middleware(ElasticAPM, client=apm)
@app.get("/hi")
async def hi():
return "hi"
if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0", port=80)
```
**Observed behavior**
Calling `http://localhost/bye` results in a 500 internal server error.
**Expected behavior**
Calling `http://localhost/bye` results in a 404 not found error.
**Environment**
- OS:
- Client: Windows
- Server: Ubuntu
- Python version: `3.7.3`
- Framework and version: `fastapi==0.61.2`
- APM Server version: docker image `elasticsearch/elasticsearch:7.10.2`
- Agent version: `elastic-apm==master`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticapm/contrib/starlette/__init__.py`
Content:
```
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2012, the Sentry Team, see AUTHORS for more details
4 # Copyright (c) 2019, Elasticsearch BV
5 # All rights reserved.
6 #
7 # Redistribution and use in source and binary forms, with or without
8 # modification, are permitted provided that the following conditions are met:
9 #
10 # * Redistributions of source code must retain the above copyright notice, this
11 # list of conditions and the following disclaimer.
12 #
13 # * Redistributions in binary form must reproduce the above copyright notice,
14 # this list of conditions and the following disclaimer in the documentation
15 # and/or other materials provided with the distribution.
16 #
17 # * Neither the name of the copyright holder nor the names of its
18 # contributors may be used to endorse or promote products derived from
19 # this software without specific prior written permission.
20 #
21 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
22 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
23 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
24 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
25 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
26 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
27 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
28 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
29 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
30
31
32 from __future__ import absolute_import
33
34 import starlette
35 from starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint
36 from starlette.requests import Request
37 from starlette.responses import Response
38 from starlette.routing import Match, Mount
39 from starlette.types import ASGIApp
40
41 import elasticapm
42 import elasticapm.instrumentation.control
43 from elasticapm.base import Client
44 from elasticapm.conf import constants
45 from elasticapm.contrib.asyncio.traces import set_context
46 from elasticapm.contrib.starlette.utils import get_body, get_data_from_request, get_data_from_response
47 from elasticapm.utils.disttracing import TraceParent
48 from elasticapm.utils.logging import get_logger
49
50 logger = get_logger("elasticapm.errors.client")
51
52
53 def make_apm_client(config: dict, client_cls=Client, **defaults) -> Client:
54 """Builds ElasticAPM client.
55
56 Args:
57 config (dict): Dictionary of Client configuration. All keys must be uppercase. See `elasticapm.conf.Config`.
58 client_cls (Client): Must be Client or its child.
59 **defaults: Additional parameters for Client. See `elasticapm.base.Client`
60
61 Returns:
62 Client
63 """
64 if "framework_name" not in defaults:
65 defaults["framework_name"] = "starlette"
66 defaults["framework_version"] = starlette.__version__
67
68 return client_cls(config, **defaults)
69
70
71 class ElasticAPM(BaseHTTPMiddleware):
72 """
73 Starlette / FastAPI middleware for Elastic APM capturing.
74
75 >>> elasticapm = make_apm_client({
76 >>> 'SERVICE_NAME': 'myapp',
77 >>> 'DEBUG': True,
78 >>> 'SERVER_URL': 'http://localhost:8200',
79 >>> 'CAPTURE_HEADERS': True,
80 >>> 'CAPTURE_BODY': 'all'
81 >>> })
82
83 >>> app.add_middleware(ElasticAPM, client=elasticapm)
84
85 Pass an arbitrary APP_NAME and SECRET_TOKEN::
86
87 >>> elasticapm = ElasticAPM(app, service_name='myapp', secret_token='asdasdasd')
88
89 Pass an explicit client::
90
91 >>> elasticapm = ElasticAPM(app, client=client)
92
93 Automatically configure logging::
94
95 >>> elasticapm = ElasticAPM(app, logging=True)
96
97 Capture an exception::
98
99 >>> try:
100 >>> 1 / 0
101 >>> except ZeroDivisionError:
102 >>> elasticapm.capture_exception()
103
104 Capture a message::
105
106 >>> elasticapm.capture_message('hello, world!')
107 """
108
109 def __init__(self, app: ASGIApp, client: Client):
110 """
111
112 Args:
113 app (ASGIApp): Starlette app
114 client (Client): ElasticAPM Client
115 """
116 self.client = client
117
118 if self.client.config.instrument and self.client.config.enabled:
119 elasticapm.instrumentation.control.instrument()
120
121 super().__init__(app)
122
123 async def dispatch(self, request: Request, call_next: RequestResponseEndpoint) -> Response:
124 """Processes the whole request APM capturing.
125
126 Args:
127 request (Request)
128 call_next (RequestResponseEndpoint): Next request process in Starlette.
129
130 Returns:
131 Response
132 """
133 await self._request_started(request)
134
135 try:
136 response = await call_next(request)
137 elasticapm.set_transaction_outcome(constants.OUTCOME.SUCCESS, override=False)
138 except Exception:
139 await self.capture_exception(
140 context={"request": await get_data_from_request(request, self.client.config, constants.ERROR)}
141 )
142 elasticapm.set_transaction_result("HTTP 5xx", override=False)
143 elasticapm.set_transaction_outcome(constants.OUTCOME.FAILURE, override=False)
144 elasticapm.set_context({"status_code": 500}, "response")
145
146 raise
147 else:
148 await self._request_finished(response)
149 finally:
150 self.client.end_transaction()
151
152 return response
153
154 async def capture_exception(self, *args, **kwargs):
155 """Captures your exception.
156
157 Args:
158 *args:
159 **kwargs:
160 """
161 self.client.capture_exception(*args, **kwargs)
162
163 async def capture_message(self, *args, **kwargs):
164 """Captures your message.
165
166 Args:
167 *args: Whatever
168 **kwargs: Whatever
169 """
170 self.client.capture_message(*args, **kwargs)
171
172 async def _request_started(self, request: Request):
173 """Captures the begin of the request processing to APM.
174
175 Args:
176 request (Request)
177 """
178 # When we consume the body, we replace the streaming mechanism with
179 # a mocked version -- this workaround came from
180 # https://github.com/encode/starlette/issues/495#issuecomment-513138055
181 # and we call the workaround here to make sure that regardless of
182 # `capture_body` settings, we will have access to the body if we need it.
183 if self.client.config.capture_body != "off":
184 await get_body(request)
185
186 if not self.client.should_ignore_url(request.url.path):
187 trace_parent = TraceParent.from_headers(dict(request.headers))
188 self.client.begin_transaction("request", trace_parent=trace_parent)
189
190 await set_context(
191 lambda: get_data_from_request(request, self.client.config, constants.TRANSACTION), "request"
192 )
193 transaction_name = self.get_route_name(request) or request.url.path
194 elasticapm.set_transaction_name("{} {}".format(request.method, transaction_name), override=False)
195
196 async def _request_finished(self, response: Response):
197 """Captures the end of the request processing to APM.
198
199 Args:
200 response (Response)
201 """
202 await set_context(
203 lambda: get_data_from_response(response, self.client.config, constants.TRANSACTION), "response"
204 )
205
206 result = "HTTP {}xx".format(response.status_code // 100)
207 elasticapm.set_transaction_result(result, override=False)
208
209 def get_route_name(self, request: Request) -> str:
210 app = request.app
211 scope = request.scope
212 routes = app.routes
213 route_name = self._get_route_name(scope, routes)
214
215 # Starlette magically redirects requests if the path matches a route name with a trailing slash
216 # appended or removed. To not spam the transaction names list, we do the same here and put these
217 # redirects all in the same "redirect trailing slashes" transaction name
218 if not route_name and app.router.redirect_slashes and scope["path"] != "/":
219 redirect_scope = dict(scope)
220 if scope["path"].endswith("/"):
221 redirect_scope["path"] = scope["path"][:-1]
222 trim = True
223 else:
224 redirect_scope["path"] = scope["path"] + "/"
225 trim = False
226
227 route_name = self._get_route_name(redirect_scope, routes)
228 route_name = route_name + "/" if trim else route_name[:-1]
229 return route_name
230
231 def _get_route_name(self, scope, routes, route_name=None):
232 for route in routes:
233 match, child_scope = route.matches(scope)
234 if match == Match.FULL:
235 route_name = route.path
236 child_scope = {**scope, **child_scope}
237 if isinstance(route, Mount):
238 child_route_name = self._get_route_name(child_scope, route.routes, route_name)
239 if child_route_name is None:
240 route_name = None
241 else:
242 route_name += child_route_name
243 return route_name
244 elif match == Match.PARTIAL and route_name is None:
245 route_name = route.path
246
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/elasticapm/contrib/starlette/__init__.py b/elasticapm/contrib/starlette/__init__.py
--- a/elasticapm/contrib/starlette/__init__.py
+++ b/elasticapm/contrib/starlette/__init__.py
@@ -225,7 +225,8 @@
trim = False
route_name = self._get_route_name(redirect_scope, routes)
- route_name = route_name + "/" if trim else route_name[:-1]
+ if route_name is not None:
+ route_name = route_name + "/" if trim else route_name[:-1]
return route_name
def _get_route_name(self, scope, routes, route_name=None):
|
{"golden_diff": "diff --git a/elasticapm/contrib/starlette/__init__.py b/elasticapm/contrib/starlette/__init__.py\n--- a/elasticapm/contrib/starlette/__init__.py\n+++ b/elasticapm/contrib/starlette/__init__.py\n@@ -225,7 +225,8 @@\n trim = False\n \n route_name = self._get_route_name(redirect_scope, routes)\n- route_name = route_name + \"/\" if trim else route_name[:-1]\n+ if route_name is not None:\n+ route_name = route_name + \"/\" if trim else route_name[:-1]\n return route_name\n \n def _get_route_name(self, scope, routes, route_name=None):\n", "issue": "FastAPI: Calls to undefined routes result in 500\n**Description**\r\nWhen using the apm agent with FastAPI, calls to any undefined route results in a 500 internal server error. Without apm, 404 not found is returned correctly.\r\n\r\n**To Reproduce**\r\n\r\nRun the following simple FastAPI app:\r\n```python\r\nimport uvicorn\r\nfrom elasticapm.contrib.starlette import ElasticAPM, make_apm_client\r\nfrom fastapi import FastAPI\r\n\r\napp = FastAPI()\r\n\r\napm = make_apm_client(\r\n {\r\n \"SERVICE_NAME\": \"undefined-route\",\r\n }\r\n)\r\n\r\napp.add_middleware(ElasticAPM, client=apm)\r\n\r\[email protected](\"/hi\")\r\nasync def hi():\r\n return \"hi\"\r\n\r\nif __name__ == \"__main__\":\r\n uvicorn.run(app, host=\"0.0.0.0\", port=80)\r\n```\r\n\r\n**Observed behavior**\r\nCalling `http://localhost/bye` results in a 500 internal server error.\r\n\r\n**Expected behavior**\r\nCalling `http://localhost/bye` results in a 404 not found error.\r\n\r\n**Environment**\r\n\r\n- OS:\r\n - Client: Windows\r\n - Server: Ubuntu\r\n- Python version: `3.7.3`\r\n- Framework and version: `fastapi==0.61.2`\r\n- APM Server version: docker image `elasticsearch/elasticsearch:7.10.2`\r\n- Agent version: `elastic-apm==master`\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2012, the Sentry Team, see AUTHORS for more details\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n\n\nfrom __future__ import absolute_import\n\nimport starlette\nfrom starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint\nfrom starlette.requests import Request\nfrom starlette.responses import Response\nfrom starlette.routing import Match, Mount\nfrom starlette.types import ASGIApp\n\nimport elasticapm\nimport elasticapm.instrumentation.control\nfrom elasticapm.base import Client\nfrom elasticapm.conf import constants\nfrom elasticapm.contrib.asyncio.traces import set_context\nfrom elasticapm.contrib.starlette.utils import get_body, get_data_from_request, get_data_from_response\nfrom elasticapm.utils.disttracing import TraceParent\nfrom elasticapm.utils.logging import get_logger\n\nlogger = get_logger(\"elasticapm.errors.client\")\n\n\ndef make_apm_client(config: dict, client_cls=Client, **defaults) -> Client:\n \"\"\"Builds ElasticAPM client.\n\n Args:\n config (dict): Dictionary of Client configuration. All keys must be uppercase. See `elasticapm.conf.Config`.\n client_cls (Client): Must be Client or its child.\n **defaults: Additional parameters for Client. See `elasticapm.base.Client`\n\n Returns:\n Client\n \"\"\"\n if \"framework_name\" not in defaults:\n defaults[\"framework_name\"] = \"starlette\"\n defaults[\"framework_version\"] = starlette.__version__\n\n return client_cls(config, **defaults)\n\n\nclass ElasticAPM(BaseHTTPMiddleware):\n \"\"\"\n Starlette / FastAPI middleware for Elastic APM capturing.\n\n >>> elasticapm = make_apm_client({\n >>> 'SERVICE_NAME': 'myapp',\n >>> 'DEBUG': True,\n >>> 'SERVER_URL': 'http://localhost:8200',\n >>> 'CAPTURE_HEADERS': True,\n >>> 'CAPTURE_BODY': 'all'\n >>> })\n\n >>> app.add_middleware(ElasticAPM, client=elasticapm)\n\n Pass an arbitrary APP_NAME and SECRET_TOKEN::\n\n >>> elasticapm = ElasticAPM(app, service_name='myapp', secret_token='asdasdasd')\n\n Pass an explicit client::\n\n >>> elasticapm = ElasticAPM(app, client=client)\n\n Automatically configure logging::\n\n >>> elasticapm = ElasticAPM(app, logging=True)\n\n Capture an exception::\n\n >>> try:\n >>> 1 / 0\n >>> except ZeroDivisionError:\n >>> elasticapm.capture_exception()\n\n Capture a message::\n\n >>> elasticapm.capture_message('hello, world!')\n \"\"\"\n\n def __init__(self, app: ASGIApp, client: Client):\n \"\"\"\n\n Args:\n app (ASGIApp): Starlette app\n client (Client): ElasticAPM Client\n \"\"\"\n self.client = client\n\n if self.client.config.instrument and self.client.config.enabled:\n elasticapm.instrumentation.control.instrument()\n\n super().__init__(app)\n\n async def dispatch(self, request: Request, call_next: RequestResponseEndpoint) -> Response:\n \"\"\"Processes the whole request APM capturing.\n\n Args:\n request (Request)\n call_next (RequestResponseEndpoint): Next request process in Starlette.\n\n Returns:\n Response\n \"\"\"\n await self._request_started(request)\n\n try:\n response = await call_next(request)\n elasticapm.set_transaction_outcome(constants.OUTCOME.SUCCESS, override=False)\n except Exception:\n await self.capture_exception(\n context={\"request\": await 
get_data_from_request(request, self.client.config, constants.ERROR)}\n )\n elasticapm.set_transaction_result(\"HTTP 5xx\", override=False)\n elasticapm.set_transaction_outcome(constants.OUTCOME.FAILURE, override=False)\n elasticapm.set_context({\"status_code\": 500}, \"response\")\n\n raise\n else:\n await self._request_finished(response)\n finally:\n self.client.end_transaction()\n\n return response\n\n async def capture_exception(self, *args, **kwargs):\n \"\"\"Captures your exception.\n\n Args:\n *args:\n **kwargs:\n \"\"\"\n self.client.capture_exception(*args, **kwargs)\n\n async def capture_message(self, *args, **kwargs):\n \"\"\"Captures your message.\n\n Args:\n *args: Whatever\n **kwargs: Whatever\n \"\"\"\n self.client.capture_message(*args, **kwargs)\n\n async def _request_started(self, request: Request):\n \"\"\"Captures the begin of the request processing to APM.\n\n Args:\n request (Request)\n \"\"\"\n # When we consume the body, we replace the streaming mechanism with\n # a mocked version -- this workaround came from\n # https://github.com/encode/starlette/issues/495#issuecomment-513138055\n # and we call the workaround here to make sure that regardless of\n # `capture_body` settings, we will have access to the body if we need it.\n if self.client.config.capture_body != \"off\":\n await get_body(request)\n\n if not self.client.should_ignore_url(request.url.path):\n trace_parent = TraceParent.from_headers(dict(request.headers))\n self.client.begin_transaction(\"request\", trace_parent=trace_parent)\n\n await set_context(\n lambda: get_data_from_request(request, self.client.config, constants.TRANSACTION), \"request\"\n )\n transaction_name = self.get_route_name(request) or request.url.path\n elasticapm.set_transaction_name(\"{} {}\".format(request.method, transaction_name), override=False)\n\n async def _request_finished(self, response: Response):\n \"\"\"Captures the end of the request processing to APM.\n\n Args:\n response (Response)\n \"\"\"\n await set_context(\n lambda: get_data_from_response(response, self.client.config, constants.TRANSACTION), \"response\"\n )\n\n result = \"HTTP {}xx\".format(response.status_code // 100)\n elasticapm.set_transaction_result(result, override=False)\n\n def get_route_name(self, request: Request) -> str:\n app = request.app\n scope = request.scope\n routes = app.routes\n route_name = self._get_route_name(scope, routes)\n\n # Starlette magically redirects requests if the path matches a route name with a trailing slash\n # appended or removed. 
To not spam the transaction names list, we do the same here and put these\n # redirects all in the same \"redirect trailing slashes\" transaction name\n if not route_name and app.router.redirect_slashes and scope[\"path\"] != \"/\":\n redirect_scope = dict(scope)\n if scope[\"path\"].endswith(\"/\"):\n redirect_scope[\"path\"] = scope[\"path\"][:-1]\n trim = True\n else:\n redirect_scope[\"path\"] = scope[\"path\"] + \"/\"\n trim = False\n\n route_name = self._get_route_name(redirect_scope, routes)\n route_name = route_name + \"/\" if trim else route_name[:-1]\n return route_name\n\n def _get_route_name(self, scope, routes, route_name=None):\n for route in routes:\n match, child_scope = route.matches(scope)\n if match == Match.FULL:\n route_name = route.path\n child_scope = {**scope, **child_scope}\n if isinstance(route, Mount):\n child_route_name = self._get_route_name(child_scope, route.routes, route_name)\n if child_route_name is None:\n route_name = None\n else:\n route_name += child_route_name\n return route_name\n elif match == Match.PARTIAL and route_name is None:\n route_name = route.path\n", "path": "elasticapm/contrib/starlette/__init__.py"}], "after_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2012, the Sentry Team, see AUTHORS for more details\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n\n\nfrom __future__ import absolute_import\n\nimport starlette\nfrom starlette.middleware.base import BaseHTTPMiddleware, RequestResponseEndpoint\nfrom starlette.requests import Request\nfrom starlette.responses import Response\nfrom starlette.routing import Match, Mount\nfrom starlette.types import ASGIApp\n\nimport elasticapm\nimport elasticapm.instrumentation.control\nfrom elasticapm.base import Client\nfrom elasticapm.conf import constants\nfrom elasticapm.contrib.asyncio.traces import set_context\nfrom elasticapm.contrib.starlette.utils import get_body, get_data_from_request, get_data_from_response\nfrom elasticapm.utils.disttracing import TraceParent\nfrom elasticapm.utils.logging import get_logger\n\nlogger = get_logger(\"elasticapm.errors.client\")\n\n\ndef make_apm_client(config: dict, client_cls=Client, **defaults) -> Client:\n \"\"\"Builds ElasticAPM client.\n\n Args:\n config (dict): Dictionary of Client configuration. All keys must be uppercase. See `elasticapm.conf.Config`.\n client_cls (Client): Must be Client or its child.\n **defaults: Additional parameters for Client. See `elasticapm.base.Client`\n\n Returns:\n Client\n \"\"\"\n if \"framework_name\" not in defaults:\n defaults[\"framework_name\"] = \"starlette\"\n defaults[\"framework_version\"] = starlette.__version__\n\n return client_cls(config, **defaults)\n\n\nclass ElasticAPM(BaseHTTPMiddleware):\n \"\"\"\n Starlette / FastAPI middleware for Elastic APM capturing.\n\n >>> elasticapm = make_apm_client({\n >>> 'SERVICE_NAME': 'myapp',\n >>> 'DEBUG': True,\n >>> 'SERVER_URL': 'http://localhost:8200',\n >>> 'CAPTURE_HEADERS': True,\n >>> 'CAPTURE_BODY': 'all'\n >>> })\n\n >>> app.add_middleware(ElasticAPM, client=elasticapm)\n\n Pass an arbitrary APP_NAME and SECRET_TOKEN::\n\n >>> elasticapm = ElasticAPM(app, service_name='myapp', secret_token='asdasdasd')\n\n Pass an explicit client::\n\n >>> elasticapm = ElasticAPM(app, client=client)\n\n Automatically configure logging::\n\n >>> elasticapm = ElasticAPM(app, logging=True)\n\n Capture an exception::\n\n >>> try:\n >>> 1 / 0\n >>> except ZeroDivisionError:\n >>> elasticapm.capture_exception()\n\n Capture a message::\n\n >>> elasticapm.capture_message('hello, world!')\n \"\"\"\n\n def __init__(self, app: ASGIApp, client: Client):\n \"\"\"\n\n Args:\n app (ASGIApp): Starlette app\n client (Client): ElasticAPM Client\n \"\"\"\n self.client = client\n\n if self.client.config.instrument and self.client.config.enabled:\n elasticapm.instrumentation.control.instrument()\n\n super().__init__(app)\n\n async def dispatch(self, request: Request, call_next: RequestResponseEndpoint) -> Response:\n \"\"\"Processes the whole request APM capturing.\n\n Args:\n request (Request)\n call_next (RequestResponseEndpoint): Next request process in Starlette.\n\n Returns:\n Response\n \"\"\"\n await self._request_started(request)\n\n try:\n response = await call_next(request)\n elasticapm.set_transaction_outcome(constants.OUTCOME.SUCCESS, override=False)\n except Exception:\n await self.capture_exception(\n context={\"request\": await 
get_data_from_request(request, self.client.config, constants.ERROR)}\n )\n elasticapm.set_transaction_result(\"HTTP 5xx\", override=False)\n elasticapm.set_transaction_outcome(constants.OUTCOME.FAILURE, override=False)\n elasticapm.set_context({\"status_code\": 500}, \"response\")\n\n raise\n else:\n await self._request_finished(response)\n finally:\n self.client.end_transaction()\n\n return response\n\n async def capture_exception(self, *args, **kwargs):\n \"\"\"Captures your exception.\n\n Args:\n *args:\n **kwargs:\n \"\"\"\n self.client.capture_exception(*args, **kwargs)\n\n async def capture_message(self, *args, **kwargs):\n \"\"\"Captures your message.\n\n Args:\n *args: Whatever\n **kwargs: Whatever\n \"\"\"\n self.client.capture_message(*args, **kwargs)\n\n async def _request_started(self, request: Request):\n \"\"\"Captures the begin of the request processing to APM.\n\n Args:\n request (Request)\n \"\"\"\n # When we consume the body, we replace the streaming mechanism with\n # a mocked version -- this workaround came from\n # https://github.com/encode/starlette/issues/495#issuecomment-513138055\n # and we call the workaround here to make sure that regardless of\n # `capture_body` settings, we will have access to the body if we need it.\n if self.client.config.capture_body != \"off\":\n await get_body(request)\n\n if not self.client.should_ignore_url(request.url.path):\n trace_parent = TraceParent.from_headers(dict(request.headers))\n self.client.begin_transaction(\"request\", trace_parent=trace_parent)\n\n await set_context(\n lambda: get_data_from_request(request, self.client.config, constants.TRANSACTION), \"request\"\n )\n transaction_name = self.get_route_name(request) or request.url.path\n elasticapm.set_transaction_name(\"{} {}\".format(request.method, transaction_name), override=False)\n\n async def _request_finished(self, response: Response):\n \"\"\"Captures the end of the request processing to APM.\n\n Args:\n response (Response)\n \"\"\"\n await set_context(\n lambda: get_data_from_response(response, self.client.config, constants.TRANSACTION), \"response\"\n )\n\n result = \"HTTP {}xx\".format(response.status_code // 100)\n elasticapm.set_transaction_result(result, override=False)\n\n def get_route_name(self, request: Request) -> str:\n app = request.app\n scope = request.scope\n routes = app.routes\n route_name = self._get_route_name(scope, routes)\n\n # Starlette magically redirects requests if the path matches a route name with a trailing slash\n # appended or removed. 
To not spam the transaction names list, we do the same here and put these\n # redirects all in the same \"redirect trailing slashes\" transaction name\n if not route_name and app.router.redirect_slashes and scope[\"path\"] != \"/\":\n redirect_scope = dict(scope)\n if scope[\"path\"].endswith(\"/\"):\n redirect_scope[\"path\"] = scope[\"path\"][:-1]\n trim = True\n else:\n redirect_scope[\"path\"] = scope[\"path\"] + \"/\"\n trim = False\n\n route_name = self._get_route_name(redirect_scope, routes)\n if route_name is not None:\n route_name = route_name + \"/\" if trim else route_name[:-1]\n return route_name\n\n def _get_route_name(self, scope, routes, route_name=None):\n for route in routes:\n match, child_scope = route.matches(scope)\n if match == Match.FULL:\n route_name = route.path\n child_scope = {**scope, **child_scope}\n if isinstance(route, Mount):\n child_route_name = self._get_route_name(child_scope, route.routes, route_name)\n if child_route_name is None:\n route_name = None\n else:\n route_name += child_route_name\n return route_name\n elif match == Match.PARTIAL and route_name is None:\n route_name = route.path\n", "path": "elasticapm/contrib/starlette/__init__.py"}]}
| 3,189 | 158 |
gh_patches_debug_13893
|
rasdani/github-patches
|
git_diff
|
litestar-org__litestar-1370
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: msgspec version needs updating
### Description
Starlite currently needs msgspec >= 0.11.0 from here https://github.com/starlite-api/starlite/blob/8a49aff754d5b409397183eb395cac5a44193d72/pyproject.toml#L52.
Starlite uses msgspec.inspect:
https://github.com/starlite-api/starlite/blob/41089a2d5dff696a25c78a58322d3a8a08ebcac3/starlite/constants.py#L3
msgspec.inspect wasn't introduced in msgspec until 0.12.0:
https://github.com/jcrist/msgspec/releases/tag/0.12.0
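A minimal sketch of the failure (the import path is taken from `starlite/constants.py` quoted below; the exact error is an assumption based on the msgspec changelog):

```python
# Under msgspec 0.11.0 the msgspec.inspect module does not exist yet, so the
# import that starlite performs at import time is expected to fail immediately
# with ModuleNotFoundError ("No module named 'msgspec.inspect'").
from msgspec.inspect import UNSET
```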
### URL to code causing the issue
_No response_
### MCVE
_No response_
### Steps to reproduce
```bash
1. In a new environment, pip install starlite msgspec==0.11.0
2. Try to import starlite
```
### Screenshots
_No response_
### Logs
_No response_
### Starlite Version
1.51.7
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlite/constants.py`
Content:
```
1 from inspect import Signature
2
3 from msgspec.inspect import UNSET
4 from pydantic.fields import Undefined
5
6 from starlite.types import Empty
7
8 DEFAULT_ALLOWED_CORS_HEADERS = {"Accept", "Accept-Language", "Content-Language", "Content-Type"}
9 DEFAULT_CHUNK_SIZE = 1024 * 128 # 128KB
10 HTTP_RESPONSE_BODY = "http.response.body"
11 HTTP_RESPONSE_START = "http.response.start"
12 ONE_MEGABYTE = 1024 * 1024
13 REDIRECT_STATUS_CODES = {301, 302, 303, 307, 308}
14 RESERVED_KWARGS = {"state", "headers", "cookies", "request", "socket", "data", "query", "scope", "body"}
15 SCOPE_STATE_DEPENDENCY_CACHE = "dependency_cache"
16 SCOPE_STATE_NAMESPACE = "__starlite__"
17 SCOPE_STATE_RESPONSE_COMPRESSED = "response_compressed"
18 UNDEFINED_SENTINELS = {Undefined, Signature.empty, UNSET, Empty, Ellipsis}
19 SKIP_VALIDATION_NAMES = {"request", "socket", "scope", "receive", "send"}
20
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/starlite/constants.py b/starlite/constants.py
--- a/starlite/constants.py
+++ b/starlite/constants.py
@@ -1,6 +1,5 @@
from inspect import Signature
-from msgspec.inspect import UNSET
from pydantic.fields import Undefined
from starlite.types import Empty
@@ -15,5 +14,5 @@
SCOPE_STATE_DEPENDENCY_CACHE = "dependency_cache"
SCOPE_STATE_NAMESPACE = "__starlite__"
SCOPE_STATE_RESPONSE_COMPRESSED = "response_compressed"
-UNDEFINED_SENTINELS = {Undefined, Signature.empty, UNSET, Empty, Ellipsis}
+UNDEFINED_SENTINELS = {Undefined, Signature.empty, Empty, Ellipsis}
SKIP_VALIDATION_NAMES = {"request", "socket", "scope", "receive", "send"}
|
{"golden_diff": "diff --git a/starlite/constants.py b/starlite/constants.py\n--- a/starlite/constants.py\n+++ b/starlite/constants.py\n@@ -1,6 +1,5 @@\n from inspect import Signature\n \n-from msgspec.inspect import UNSET\n from pydantic.fields import Undefined\n \n from starlite.types import Empty\n@@ -15,5 +14,5 @@\n SCOPE_STATE_DEPENDENCY_CACHE = \"dependency_cache\"\n SCOPE_STATE_NAMESPACE = \"__starlite__\"\n SCOPE_STATE_RESPONSE_COMPRESSED = \"response_compressed\"\n-UNDEFINED_SENTINELS = {Undefined, Signature.empty, UNSET, Empty, Ellipsis}\n+UNDEFINED_SENTINELS = {Undefined, Signature.empty, Empty, Ellipsis}\n SKIP_VALIDATION_NAMES = {\"request\", \"socket\", \"scope\", \"receive\", \"send\"}\n", "issue": "Bug: msgspec version needs updating\n### Description\n\nStarlite currently needs msgspec >= 0.11.0 from here https://github.com/starlite-api/starlite/blob/8a49aff754d5b409397183eb395cac5a44193d72/pyproject.toml#L52.\r\n\r\nStarlite uses msgspec.inspect:\r\nhttps://github.com/starlite-api/starlite/blob/41089a2d5dff696a25c78a58322d3a8a08ebcac3/starlite/constants.py#L3\r\n\r\nmsgspec.inspect wasn't introduced in msgspec until 0.12.0:\r\nhttps://github.com/jcrist/msgspec/releases/tag/0.12.0\n\n### URL to code causing the issue\n\n_No response_\n\n### MCVE\n\n_No response_\n\n### Steps to reproduce\n\n```bash\n1. In a new environment, pip install starlite msgspec==0.11.0\r\n2. Try to import starlite\n```\n\n\n### Screenshots\n\n_No response_\n\n### Logs\n\n_No response_\n\n### Starlite Version\n\n1.51.7\n\n### Platform\n\n- [ ] Linux\n- [ ] Mac\n- [ ] Windows\n- [ ] Other (Please specify in the description above)\n", "before_files": [{"content": "from inspect import Signature\n\nfrom msgspec.inspect import UNSET\nfrom pydantic.fields import Undefined\n\nfrom starlite.types import Empty\n\nDEFAULT_ALLOWED_CORS_HEADERS = {\"Accept\", \"Accept-Language\", \"Content-Language\", \"Content-Type\"}\nDEFAULT_CHUNK_SIZE = 1024 * 128 # 128KB\nHTTP_RESPONSE_BODY = \"http.response.body\"\nHTTP_RESPONSE_START = \"http.response.start\"\nONE_MEGABYTE = 1024 * 1024\nREDIRECT_STATUS_CODES = {301, 302, 303, 307, 308}\nRESERVED_KWARGS = {\"state\", \"headers\", \"cookies\", \"request\", \"socket\", \"data\", \"query\", \"scope\", \"body\"}\nSCOPE_STATE_DEPENDENCY_CACHE = \"dependency_cache\"\nSCOPE_STATE_NAMESPACE = \"__starlite__\"\nSCOPE_STATE_RESPONSE_COMPRESSED = \"response_compressed\"\nUNDEFINED_SENTINELS = {Undefined, Signature.empty, UNSET, Empty, Ellipsis}\nSKIP_VALIDATION_NAMES = {\"request\", \"socket\", \"scope\", \"receive\", \"send\"}\n", "path": "starlite/constants.py"}], "after_files": [{"content": "from inspect import Signature\n\nfrom pydantic.fields import Undefined\n\nfrom starlite.types import Empty\n\nDEFAULT_ALLOWED_CORS_HEADERS = {\"Accept\", \"Accept-Language\", \"Content-Language\", \"Content-Type\"}\nDEFAULT_CHUNK_SIZE = 1024 * 128 # 128KB\nHTTP_RESPONSE_BODY = \"http.response.body\"\nHTTP_RESPONSE_START = \"http.response.start\"\nONE_MEGABYTE = 1024 * 1024\nREDIRECT_STATUS_CODES = {301, 302, 303, 307, 308}\nRESERVED_KWARGS = {\"state\", \"headers\", \"cookies\", \"request\", \"socket\", \"data\", \"query\", \"scope\", \"body\"}\nSCOPE_STATE_DEPENDENCY_CACHE = \"dependency_cache\"\nSCOPE_STATE_NAMESPACE = \"__starlite__\"\nSCOPE_STATE_RESPONSE_COMPRESSED = \"response_compressed\"\nUNDEFINED_SENTINELS = {Undefined, Signature.empty, Empty, Ellipsis}\nSKIP_VALIDATION_NAMES = {\"request\", \"socket\", \"scope\", \"receive\", \"send\"}\n", "path": "starlite/constants.py"}]}
| 826 | 170 |
gh_patches_debug_7811
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmpose-970
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] Possible bug in `mmpose/apis/train.py`
https://github.com/open-mmlab/mmpose/blob/8d668a578f1580373d90a1eea88f23f6816df60e/mmpose/apis/train.py#L140
Here is the code that may have the problem. I think the default value for `workers_per_gpu` should be an integer, not a dict.
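A short, hypothetical illustration of the type mismatch (the plain dict stands in for `cfg.data` without the key set; the integer fallback of 1 is an assumption, not taken from any config):

```python
data_cfg = {}  # stands in for cfg.data when 'workers_per_gpu' is not configured

workers_per_gpu = data_cfg.get('workers_per_gpu', {})  # current fallback: a dict
workers_per_gpu = data_cfg.get('workers_per_gpu', 1)   # integer fallback keeps the type sensible
```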
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmpose/apis/train.py`
Content:
```
1 # Copyright (c) OpenMMLab. All rights reserved.
2 import warnings
3
4 import torch
5 from mmcv.parallel import MMDataParallel, MMDistributedDataParallel
6 from mmcv.runner import DistSamplerSeedHook, EpochBasedRunner, OptimizerHook
7
8 from mmpose.core import DistEvalHook, EvalHook, build_optimizers
9 from mmpose.core.distributed_wrapper import DistributedDataParallelWrapper
10 from mmpose.datasets import build_dataloader, build_dataset
11 from mmpose.utils import get_root_logger
12
13 try:
14 from mmcv.runner import Fp16OptimizerHook
15 except ImportError:
16 warnings.warn(
17 'Fp16OptimizerHook from mmpose will be deprecated from '
18 'v0.15.0. Please install mmcv>=1.1.4', DeprecationWarning)
19 from mmpose.core import Fp16OptimizerHook
20
21
22 def train_model(model,
23 dataset,
24 cfg,
25 distributed=False,
26 validate=False,
27 timestamp=None,
28 meta=None):
29 """Train model entry function.
30
31 Args:
32 model (nn.Module): The model to be trained.
33 dataset (Dataset): Train dataset.
34 cfg (dict): The config dict for training.
35 distributed (bool): Whether to use distributed training.
36 Default: False.
37 validate (bool): Whether to do evaluation. Default: False.
38 timestamp (str | None): Local time for runner. Default: None.
39 meta (dict | None): Meta dict to record some important information.
40 Default: None
41 """
42 logger = get_root_logger(cfg.log_level)
43
44 # prepare data loaders
45 dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]
46 # step 1: give default values and override (if exist) from cfg.data
47 loader_cfg = {
48 **dict(
49 seed=cfg.get('seed'),
50 drop_last=False,
51 dist=distributed,
52 num_gpus=len(cfg.gpu_ids)),
53 **({} if torch.__version__ != 'parrots' else dict(
54 prefetch_num=2,
55 pin_memory=False,
56 )),
57 **dict((k, cfg.data[k]) for k in [
58 'samples_per_gpu',
59 'workers_per_gpu',
60 'shuffle',
61 'seed',
62 'drop_last',
63 'prefetch_num',
64 'pin_memory',
65 'persistent_workers',
66 ] if k in cfg.data)
67 }
68
69 # step 2: cfg.data.train_dataloader has highest priority
70 train_loader_cfg = dict(loader_cfg, **cfg.data.get('train_dataloader', {}))
71
72 data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset]
73
74 # determine wether use adversarial training precess or not
75 use_adverserial_train = cfg.get('use_adversarial_train', False)
76
77 # put model on gpus
78 if distributed:
79 find_unused_parameters = cfg.get('find_unused_parameters', True)
80 # Sets the `find_unused_parameters` parameter in
81 # torch.nn.parallel.DistributedDataParallel
82
83 if use_adverserial_train:
84 # Use DistributedDataParallelWrapper for adversarial training
85 model = DistributedDataParallelWrapper(
86 model,
87 device_ids=[torch.cuda.current_device()],
88 broadcast_buffers=False,
89 find_unused_parameters=find_unused_parameters)
90 else:
91 model = MMDistributedDataParallel(
92 model.cuda(),
93 device_ids=[torch.cuda.current_device()],
94 broadcast_buffers=False,
95 find_unused_parameters=find_unused_parameters)
96 else:
97 model = MMDataParallel(
98 model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)
99
100 # build runner
101 optimizer = build_optimizers(model, cfg.optimizer)
102
103 runner = EpochBasedRunner(
104 model,
105 optimizer=optimizer,
106 work_dir=cfg.work_dir,
107 logger=logger,
108 meta=meta)
109 # an ugly workaround to make .log and .log.json filenames the same
110 runner.timestamp = timestamp
111
112 if use_adverserial_train:
113 # The optimizer step process is included in the train_step function
114 # of the model, so the runner should NOT include optimizer hook.
115 optimizer_config = None
116 else:
117 # fp16 setting
118 fp16_cfg = cfg.get('fp16', None)
119 if fp16_cfg is not None:
120 optimizer_config = Fp16OptimizerHook(
121 **cfg.optimizer_config, **fp16_cfg, distributed=distributed)
122 elif distributed and 'type' not in cfg.optimizer_config:
123 optimizer_config = OptimizerHook(**cfg.optimizer_config)
124 else:
125 optimizer_config = cfg.optimizer_config
126
127 # register hooks
128 runner.register_training_hooks(cfg.lr_config, optimizer_config,
129 cfg.checkpoint_config, cfg.log_config,
130 cfg.get('momentum_config', None))
131 if distributed:
132 runner.register_hook(DistSamplerSeedHook())
133
134 # register eval hooks
135 if validate:
136 eval_cfg = cfg.get('evaluation', {})
137 val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))
138 dataloader_setting = dict(
139 samples_per_gpu=1,
140 workers_per_gpu=cfg.data.get('workers_per_gpu', {}),
141 # cfg.gpus will be ignored if distributed
142 num_gpus=len(cfg.gpu_ids),
143 dist=distributed,
144 drop_last=False,
145 shuffle=False)
146 dataloader_setting = dict(dataloader_setting,
147 **cfg.data.get('val_dataloader', {}))
148 val_dataloader = build_dataloader(val_dataset, **dataloader_setting)
149 eval_hook = DistEvalHook if distributed else EvalHook
150 runner.register_hook(eval_hook(val_dataloader, **eval_cfg))
151
152 if cfg.resume_from:
153 runner.resume(cfg.resume_from)
154 elif cfg.load_from:
155 runner.load_checkpoint(cfg.load_from)
156 runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
157
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mmpose/apis/train.py b/mmpose/apis/train.py
--- a/mmpose/apis/train.py
+++ b/mmpose/apis/train.py
@@ -137,7 +137,7 @@
val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))
dataloader_setting = dict(
samples_per_gpu=1,
- workers_per_gpu=cfg.data.get('workers_per_gpu', {}),
+ workers_per_gpu=cfg.data.get('workers_per_gpu', 1),
# cfg.gpus will be ignored if distributed
num_gpus=len(cfg.gpu_ids),
dist=distributed,
|
{"golden_diff": "diff --git a/mmpose/apis/train.py b/mmpose/apis/train.py\n--- a/mmpose/apis/train.py\n+++ b/mmpose/apis/train.py\n@@ -137,7 +137,7 @@\n val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))\n dataloader_setting = dict(\n samples_per_gpu=1,\n- workers_per_gpu=cfg.data.get('workers_per_gpu', {}),\n+ workers_per_gpu=cfg.data.get('workers_per_gpu', 1),\n # cfg.gpus will be ignored if distributed\n num_gpus=len(cfg.gpu_ids),\n dist=distributed,\n", "issue": "[Bug] Possible bug in `mmpose/apis/train.py`\nhttps://github.com/open-mmlab/mmpose/blob/8d668a578f1580373d90a1eea88f23f6816df60e/mmpose/apis/train.py#L140\r\n\r\nHere is the code that may have problem. And I think the default value for `workers_per_gpu` should be an integer but not a dict. \n", "before_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport warnings\n\nimport torch\nfrom mmcv.parallel import MMDataParallel, MMDistributedDataParallel\nfrom mmcv.runner import DistSamplerSeedHook, EpochBasedRunner, OptimizerHook\n\nfrom mmpose.core import DistEvalHook, EvalHook, build_optimizers\nfrom mmpose.core.distributed_wrapper import DistributedDataParallelWrapper\nfrom mmpose.datasets import build_dataloader, build_dataset\nfrom mmpose.utils import get_root_logger\n\ntry:\n from mmcv.runner import Fp16OptimizerHook\nexcept ImportError:\n warnings.warn(\n 'Fp16OptimizerHook from mmpose will be deprecated from '\n 'v0.15.0. Please install mmcv>=1.1.4', DeprecationWarning)\n from mmpose.core import Fp16OptimizerHook\n\n\ndef train_model(model,\n dataset,\n cfg,\n distributed=False,\n validate=False,\n timestamp=None,\n meta=None):\n \"\"\"Train model entry function.\n\n Args:\n model (nn.Module): The model to be trained.\n dataset (Dataset): Train dataset.\n cfg (dict): The config dict for training.\n distributed (bool): Whether to use distributed training.\n Default: False.\n validate (bool): Whether to do evaluation. Default: False.\n timestamp (str | None): Local time for runner. 
Default: None.\n meta (dict | None): Meta dict to record some important information.\n Default: None\n \"\"\"\n logger = get_root_logger(cfg.log_level)\n\n # prepare data loaders\n dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]\n # step 1: give default values and override (if exist) from cfg.data\n loader_cfg = {\n **dict(\n seed=cfg.get('seed'),\n drop_last=False,\n dist=distributed,\n num_gpus=len(cfg.gpu_ids)),\n **({} if torch.__version__ != 'parrots' else dict(\n prefetch_num=2,\n pin_memory=False,\n )),\n **dict((k, cfg.data[k]) for k in [\n 'samples_per_gpu',\n 'workers_per_gpu',\n 'shuffle',\n 'seed',\n 'drop_last',\n 'prefetch_num',\n 'pin_memory',\n 'persistent_workers',\n ] if k in cfg.data)\n }\n\n # step 2: cfg.data.train_dataloader has highest priority\n train_loader_cfg = dict(loader_cfg, **cfg.data.get('train_dataloader', {}))\n\n data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset]\n\n # determine wether use adversarial training precess or not\n use_adverserial_train = cfg.get('use_adversarial_train', False)\n\n # put model on gpus\n if distributed:\n find_unused_parameters = cfg.get('find_unused_parameters', True)\n # Sets the `find_unused_parameters` parameter in\n # torch.nn.parallel.DistributedDataParallel\n\n if use_adverserial_train:\n # Use DistributedDataParallelWrapper for adversarial training\n model = DistributedDataParallelWrapper(\n model,\n device_ids=[torch.cuda.current_device()],\n broadcast_buffers=False,\n find_unused_parameters=find_unused_parameters)\n else:\n model = MMDistributedDataParallel(\n model.cuda(),\n device_ids=[torch.cuda.current_device()],\n broadcast_buffers=False,\n find_unused_parameters=find_unused_parameters)\n else:\n model = MMDataParallel(\n model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)\n\n # build runner\n optimizer = build_optimizers(model, cfg.optimizer)\n\n runner = EpochBasedRunner(\n model,\n optimizer=optimizer,\n work_dir=cfg.work_dir,\n logger=logger,\n meta=meta)\n # an ugly workaround to make .log and .log.json filenames the same\n runner.timestamp = timestamp\n\n if use_adverserial_train:\n # The optimizer step process is included in the train_step function\n # of the model, so the runner should NOT include optimizer hook.\n optimizer_config = None\n else:\n # fp16 setting\n fp16_cfg = cfg.get('fp16', None)\n if fp16_cfg is not None:\n optimizer_config = Fp16OptimizerHook(\n **cfg.optimizer_config, **fp16_cfg, distributed=distributed)\n elif distributed and 'type' not in cfg.optimizer_config:\n optimizer_config = OptimizerHook(**cfg.optimizer_config)\n else:\n optimizer_config = cfg.optimizer_config\n\n # register hooks\n runner.register_training_hooks(cfg.lr_config, optimizer_config,\n cfg.checkpoint_config, cfg.log_config,\n cfg.get('momentum_config', None))\n if distributed:\n runner.register_hook(DistSamplerSeedHook())\n\n # register eval hooks\n if validate:\n eval_cfg = cfg.get('evaluation', {})\n val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))\n dataloader_setting = dict(\n samples_per_gpu=1,\n workers_per_gpu=cfg.data.get('workers_per_gpu', {}),\n # cfg.gpus will be ignored if distributed\n num_gpus=len(cfg.gpu_ids),\n dist=distributed,\n drop_last=False,\n shuffle=False)\n dataloader_setting = dict(dataloader_setting,\n **cfg.data.get('val_dataloader', {}))\n val_dataloader = build_dataloader(val_dataset, **dataloader_setting)\n eval_hook = DistEvalHook if distributed else EvalHook\n runner.register_hook(eval_hook(val_dataloader, 
**eval_cfg))\n\n if cfg.resume_from:\n runner.resume(cfg.resume_from)\n elif cfg.load_from:\n runner.load_checkpoint(cfg.load_from)\n runner.run(data_loaders, cfg.workflow, cfg.total_epochs)\n", "path": "mmpose/apis/train.py"}], "after_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport warnings\n\nimport torch\nfrom mmcv.parallel import MMDataParallel, MMDistributedDataParallel\nfrom mmcv.runner import DistSamplerSeedHook, EpochBasedRunner, OptimizerHook\n\nfrom mmpose.core import DistEvalHook, EvalHook, build_optimizers\nfrom mmpose.core.distributed_wrapper import DistributedDataParallelWrapper\nfrom mmpose.datasets import build_dataloader, build_dataset\nfrom mmpose.utils import get_root_logger\n\ntry:\n from mmcv.runner import Fp16OptimizerHook\nexcept ImportError:\n warnings.warn(\n 'Fp16OptimizerHook from mmpose will be deprecated from '\n 'v0.15.0. Please install mmcv>=1.1.4', DeprecationWarning)\n from mmpose.core import Fp16OptimizerHook\n\n\ndef train_model(model,\n dataset,\n cfg,\n distributed=False,\n validate=False,\n timestamp=None,\n meta=None):\n \"\"\"Train model entry function.\n\n Args:\n model (nn.Module): The model to be trained.\n dataset (Dataset): Train dataset.\n cfg (dict): The config dict for training.\n distributed (bool): Whether to use distributed training.\n Default: False.\n validate (bool): Whether to do evaluation. Default: False.\n timestamp (str | None): Local time for runner. Default: None.\n meta (dict | None): Meta dict to record some important information.\n Default: None\n \"\"\"\n logger = get_root_logger(cfg.log_level)\n\n # prepare data loaders\n dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]\n # step 1: give default values and override (if exist) from cfg.data\n loader_cfg = {\n **dict(\n seed=cfg.get('seed'),\n drop_last=False,\n dist=distributed,\n num_gpus=len(cfg.gpu_ids)),\n **({} if torch.__version__ != 'parrots' else dict(\n prefetch_num=2,\n pin_memory=False,\n )),\n **dict((k, cfg.data[k]) for k in [\n 'samples_per_gpu',\n 'workers_per_gpu',\n 'shuffle',\n 'seed',\n 'drop_last',\n 'prefetch_num',\n 'pin_memory',\n 'persistent_workers',\n ] if k in cfg.data)\n }\n\n # step 2: cfg.data.train_dataloader has highest priority\n train_loader_cfg = dict(loader_cfg, **cfg.data.get('train_dataloader', {}))\n\n data_loaders = [build_dataloader(ds, **train_loader_cfg) for ds in dataset]\n\n # determine wether use adversarial training precess or not\n use_adverserial_train = cfg.get('use_adversarial_train', False)\n\n # put model on gpus\n if distributed:\n find_unused_parameters = cfg.get('find_unused_parameters', True)\n # Sets the `find_unused_parameters` parameter in\n # torch.nn.parallel.DistributedDataParallel\n\n if use_adverserial_train:\n # Use DistributedDataParallelWrapper for adversarial training\n model = DistributedDataParallelWrapper(\n model,\n device_ids=[torch.cuda.current_device()],\n broadcast_buffers=False,\n find_unused_parameters=find_unused_parameters)\n else:\n model = MMDistributedDataParallel(\n model.cuda(),\n device_ids=[torch.cuda.current_device()],\n broadcast_buffers=False,\n find_unused_parameters=find_unused_parameters)\n else:\n model = MMDataParallel(\n model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids)\n\n # build runner\n optimizer = build_optimizers(model, cfg.optimizer)\n\n runner = EpochBasedRunner(\n model,\n optimizer=optimizer,\n work_dir=cfg.work_dir,\n logger=logger,\n meta=meta)\n # an ugly workaround to make .log and .log.json filenames the same\n 
runner.timestamp = timestamp\n\n if use_adverserial_train:\n # The optimizer step process is included in the train_step function\n # of the model, so the runner should NOT include optimizer hook.\n optimizer_config = None\n else:\n # fp16 setting\n fp16_cfg = cfg.get('fp16', None)\n if fp16_cfg is not None:\n optimizer_config = Fp16OptimizerHook(\n **cfg.optimizer_config, **fp16_cfg, distributed=distributed)\n elif distributed and 'type' not in cfg.optimizer_config:\n optimizer_config = OptimizerHook(**cfg.optimizer_config)\n else:\n optimizer_config = cfg.optimizer_config\n\n # register hooks\n runner.register_training_hooks(cfg.lr_config, optimizer_config,\n cfg.checkpoint_config, cfg.log_config,\n cfg.get('momentum_config', None))\n if distributed:\n runner.register_hook(DistSamplerSeedHook())\n\n # register eval hooks\n if validate:\n eval_cfg = cfg.get('evaluation', {})\n val_dataset = build_dataset(cfg.data.val, dict(test_mode=True))\n dataloader_setting = dict(\n samples_per_gpu=1,\n workers_per_gpu=cfg.data.get('workers_per_gpu', 1),\n # cfg.gpus will be ignored if distributed\n num_gpus=len(cfg.gpu_ids),\n dist=distributed,\n drop_last=False,\n shuffle=False)\n dataloader_setting = dict(dataloader_setting,\n **cfg.data.get('val_dataloader', {}))\n val_dataloader = build_dataloader(val_dataset, **dataloader_setting)\n eval_hook = DistEvalHook if distributed else EvalHook\n runner.register_hook(eval_hook(val_dataloader, **eval_cfg))\n\n if cfg.resume_from:\n runner.resume(cfg.resume_from)\n elif cfg.load_from:\n runner.load_checkpoint(cfg.load_from)\n runner.run(data_loaders, cfg.workflow, cfg.total_epochs)\n", "path": "mmpose/apis/train.py"}]}
| 1,977 | 137 |
gh_patches_debug_6437
|
rasdani/github-patches
|
git_diff
|
liqd__a4-meinberlin-532
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Participated projects visible for all on user profile
If I write a comment on platform meinBerlin and take part in a participation process, everyone can see my profile and in which participation process I took part, even if I'm not logged in. It's not anonymous enough.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/users/views.py`
Content:
```
1 from django.views.generic.detail import DetailView
2
3 from adhocracy4.projects.models import Project
4
5 from . import models
6
7
8 class ProfileView(DetailView):
9 model = models.User
10 slug_field = 'username'
11
12 @property
13 def get_participated_projects(self):
14 user = self.object
15
16 qs = Project.objects.filter(
17 action__actor=user
18 ).distinct()
19
20 return qs
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/apps/users/views.py b/apps/users/views.py
--- a/apps/users/views.py
+++ b/apps/users/views.py
@@ -1,20 +1,8 @@
from django.views.generic.detail import DetailView
-from adhocracy4.projects.models import Project
-
from . import models
class ProfileView(DetailView):
model = models.User
slug_field = 'username'
-
- @property
- def get_participated_projects(self):
- user = self.object
-
- qs = Project.objects.filter(
- action__actor=user
- ).distinct()
-
- return qs
|
{"golden_diff": "diff --git a/apps/users/views.py b/apps/users/views.py\n--- a/apps/users/views.py\n+++ b/apps/users/views.py\n@@ -1,20 +1,8 @@\n from django.views.generic.detail import DetailView\n \n-from adhocracy4.projects.models import Project\n-\n from . import models\n \n \n class ProfileView(DetailView):\n model = models.User\n slug_field = 'username'\n-\n- @property\n- def get_participated_projects(self):\n- user = self.object\n-\n- qs = Project.objects.filter(\n- action__actor=user\n- ).distinct()\n-\n- return qs\n", "issue": "Participated projects visible for all on user profile\nIf I write a comment on platform meinBerlin and take part in a participation process, everyone can see my profile and in which participation process I took part, even I'm not logged in. It's not anonymous enough.\n", "before_files": [{"content": "from django.views.generic.detail import DetailView\n\nfrom adhocracy4.projects.models import Project\n\nfrom . import models\n\n\nclass ProfileView(DetailView):\n model = models.User\n slug_field = 'username'\n\n @property\n def get_participated_projects(self):\n user = self.object\n\n qs = Project.objects.filter(\n action__actor=user\n ).distinct()\n\n return qs\n", "path": "apps/users/views.py"}], "after_files": [{"content": "from django.views.generic.detail import DetailView\n\nfrom . import models\n\n\nclass ProfileView(DetailView):\n model = models.User\n slug_field = 'username'\n", "path": "apps/users/views.py"}]}
| 437 | 135 |
gh_patches_debug_32831
|
rasdani/github-patches
|
git_diff
|
pymeasure__pymeasure-866
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`TypeError` on SerialAdapter `read_bytes(-1)`
When I use `self.read_bytes(-1)` from my instrument using a `SerialAdapter`, I get the error:
> C:\Program Files\Python310\lib\ctypes\__init__.py:66: TypeError
With the traceback:
```
..\..\..\pymeasure\instruments\velleman\velleman_k8090.py:247: in read
response = self.read_bytes(-1)
..\..\..\pymeasure\instruments\instrument.py:153: in read_bytes
return self.adapter.read_bytes(count, **kwargs)
..\..\..\pymeasure\adapters\adapter.py:123: in read_bytes
read = self._read_bytes(count, break_on_termchar, **kwargs)
..\..\..\pymeasure\adapters\serial.py:108: in _read_bytes
return self.connection.read(1e99 if count == -1 else count, **kwargs)
..\..\..\venv\lib\site-packages\serial\serialwin32.py:278: in read
buf = ctypes.create_string_buffer(n)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
init = 1e+99, size = None
def create_string_buffer(init, size=None):
"""create_string_buffer(aBytes) -> character array
create_string_buffer(anInteger) -> character array
create_string_buffer(aBytes, anInteger) -> character array
"""
if isinstance(init, bytes):
if size is None:
size = len(init)+1
_sys.audit("ctypes.create_string_buffer", init, size)
buftype = c_char * size
buf = buftype()
buf.value = init
return buf
elif isinstance(init, int):
_sys.audit("ctypes.create_string_buffer", None, init)
buftype = c_char * init
buf = buftype()
return buf
> raise TypeError(init)
E TypeError: 1e+99
C:\Program Files\Python310\lib\ctypes\__init__.py:66: TypeError
```
It looks like requesting a read of 1e99 bytes no longer works (if it ever did).
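The last frame explains it: `ctypes.create_string_buffer` only accepts `bytes` or `int`, so the float `1e99` passed down from `SerialAdapter._read_bytes` is rejected. A minimal sketch, independent of pyserial:

```python
import ctypes

ctypes.create_string_buffer(16)  # fine: integer size

try:
    ctypes.create_string_buffer(1e99)  # float size, as requested by the adapter
except TypeError as exc:
    print(exc)  # 1e+99, matching the traceback above
```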
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pymeasure/adapters/serial.py`
Content:
```
1 #
2 # This file is part of the PyMeasure package.
3 #
4 # Copyright (c) 2013-2023 PyMeasure Developers
5 #
6 # Permission is hereby granted, free of charge, to any person obtaining a copy
7 # of this software and associated documentation files (the "Software"), to deal
8 # in the Software without restriction, including without limitation the rights
9 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
10 # copies of the Software, and to permit persons to whom the Software is
11 # furnished to do so, subject to the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be included in
14 # all copies or substantial portions of the Software.
15 #
16 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
17 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
18 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
19 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
20 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
21 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
22 # THE SOFTWARE.
23 #
24
25 import logging
26
27 import serial
28 from .adapter import Adapter
29
30 log = logging.getLogger(__name__)
31 log.addHandler(logging.NullHandler())
32
33
34 class SerialAdapter(Adapter):
35 """ Adapter class for using the Python Serial package to allow
36 serial communication to instrument
37
38 :param port: Serial port
39 :param preprocess_reply: An optional callable used to preprocess strings
40 received from the instrument. The callable returns the processed string.
41
42 .. deprecated:: 0.11
43 Implement it in the instrument's `read` method instead.
44
45 :param write_termination: String appended to messages before writing them.
46 :param read_termination: String expected at end of read message and removed.
47 :param \\**kwargs: Any valid key-word argument for serial.Serial
48 """
49
50 def __init__(self, port, preprocess_reply=None,
51 write_termination="", read_termination="",
52 **kwargs):
53 super().__init__(preprocess_reply=preprocess_reply)
54 if isinstance(port, serial.SerialBase):
55 self.connection = port
56 else:
57 self.connection = serial.Serial(port, **kwargs)
58 self.write_termination = write_termination
59 self.read_termination = read_termination
60
61 def _write(self, command, **kwargs):
62 """Write a string command to the instrument appending `write_termination`.
63
64 :param str command: Command string to be sent to the instrument
65 (without termination).
66 :param \\**kwargs: Keyword arguments for the connection itself.
67 """
68 command += self.write_termination
69 self._write_bytes(command.encode(), **kwargs)
70
71 def _write_bytes(self, content, **kwargs):
72 """Write the bytes `content` to the instrument.
73
74 :param bytes content: The bytes to write to the instrument.
75 :param \\**kwargs: Keyword arguments for the connection itself.
76 """
77 self.connection.write(content, **kwargs)
78
79 def _read(self, **kwargs):
80 """Read up to (excluding) `read_termination` or the whole read buffer.
81
82 :param \\**kwargs: Keyword arguments for the connection itself.
83 :returns str: ASCII response of the instrument (read_termination is removed first).
84 """
85 read = self._read_bytes(-1, break_on_termchar=True, **kwargs).decode()
86 # Python>3.8 this shorter form is possible:
87 # self._read_bytes(-1).decode().removesuffix(self.read_termination)
88 if self.read_termination:
89 return read.split(self.read_termination)[0]
90 else:
91 return read
92
93 def _read_bytes(self, count, break_on_termchar, **kwargs):
94 """Read a certain number of bytes from the instrument.
95
96 :param int count: Number of bytes to read. A value of -1 indicates to
97 read from the whole read buffer.
98 :param bool break_on_termchar: Stop reading at a termination character.
99 :param \\**kwargs: Keyword arguments for the connection itself.
100 :returns bytes: Bytes response of the instrument (including termination).
101 """
102 if break_on_termchar and self.read_termination:
103 return self.connection.read_until(self.read_termination.encode(),
104 count if count > 0 else None,
105 **kwargs)
106 else:
107 # At -1 we read a very large number of bytes, which can be considered the whole buffer.
108 return self.connection.read(1e99 if count == -1 else count, **kwargs)
109
110 def flush_read_buffer(self):
111 """Flush and discard the input buffer."""
112 self.connection.reset_input_buffer()
113
114 def __repr__(self):
115 return "<SerialAdapter(port='%s')>" % self.connection.port
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pymeasure/adapters/serial.py b/pymeasure/adapters/serial.py
--- a/pymeasure/adapters/serial.py
+++ b/pymeasure/adapters/serial.py
@@ -94,7 +94,7 @@
"""Read a certain number of bytes from the instrument.
:param int count: Number of bytes to read. A value of -1 indicates to
- read from the whole read buffer.
+ read from the whole read buffer (waits for timeout).
:param bool break_on_termchar: Stop reading at a termination character.
:param \\**kwargs: Keyword arguments for the connection itself.
:returns bytes: Bytes response of the instrument (including termination).
@@ -103,9 +103,25 @@
return self.connection.read_until(self.read_termination.encode(),
count if count > 0 else None,
**kwargs)
+ elif count >= 0:
+ return self.connection.read(count, **kwargs)
else:
- # At -1 we read a very large number of bytes, which can be considered the whole buffer.
- return self.connection.read(1e99 if count == -1 else count, **kwargs)
+ # For -1 we empty the buffer completely
+ return self._read_bytes_until_timeout()
+
+ def _read_bytes_until_timeout(self, chunk_size=256, **kwargs):
+ """Read from the serial until a timeout occurs, regardless of the number of bytes.
+
+ :chunk_size: The number of bytes attempted to in a single transaction.
+ Multiple of these transactions will occur.
+ """
+ # `Serial.readlines()` has an unpredictable timeout, see PR #866
+ data = bytes()
+ while True:
+ chunk = self.connection.read(chunk_size, **kwargs)
+ data += chunk
+ if len(chunk) < chunk_size: # If fewer bytes got returned, we had a timeout
+ return data
def flush_read_buffer(self):
"""Flush and discard the input buffer."""
|
{"golden_diff": "diff --git a/pymeasure/adapters/serial.py b/pymeasure/adapters/serial.py\n--- a/pymeasure/adapters/serial.py\n+++ b/pymeasure/adapters/serial.py\n@@ -94,7 +94,7 @@\n \"\"\"Read a certain number of bytes from the instrument.\n \n :param int count: Number of bytes to read. A value of -1 indicates to\n- read from the whole read buffer.\n+ read from the whole read buffer (waits for timeout).\n :param bool break_on_termchar: Stop reading at a termination character.\n :param \\\\**kwargs: Keyword arguments for the connection itself.\n :returns bytes: Bytes response of the instrument (including termination).\n@@ -103,9 +103,25 @@\n return self.connection.read_until(self.read_termination.encode(),\n count if count > 0 else None,\n **kwargs)\n+ elif count >= 0:\n+ return self.connection.read(count, **kwargs)\n else:\n- # At -1 we read a very large number of bytes, which can be considered the whole buffer.\n- return self.connection.read(1e99 if count == -1 else count, **kwargs)\n+ # For -1 we empty the buffer completely\n+ return self._read_bytes_until_timeout()\n+\n+ def _read_bytes_until_timeout(self, chunk_size=256, **kwargs):\n+ \"\"\"Read from the serial until a timeout occurs, regardless of the number of bytes.\n+\n+ :chunk_size: The number of bytes attempted to in a single transaction.\n+ Multiple of these transactions will occur.\n+ \"\"\"\n+ # `Serial.readlines()` has an unpredictable timeout, see PR #866\n+ data = bytes()\n+ while True:\n+ chunk = self.connection.read(chunk_size, **kwargs)\n+ data += chunk\n+ if len(chunk) < chunk_size: # If fewer bytes got returned, we had a timeout\n+ return data\n \n def flush_read_buffer(self):\n \"\"\"Flush and discard the input buffer.\"\"\"\n", "issue": "`TypeError` on SerialAdapter `read_bytes(-1)`\nWhen I use `self.read_bytes(-1)` from my instrument using a `SerialAdapter`, I get the error:\r\n\r\n> C:\\Program Files\\Python310\\lib\\ctypes\\__init__.py:66: TypeError\r\n\r\nWith the traceback:\r\n\r\n```\r\n..\\..\\..\\pymeasure\\instruments\\velleman\\velleman_k8090.py:247: in read\r\n response = self.read_bytes(-1)\r\n..\\..\\..\\pymeasure\\instruments\\instrument.py:153: in read_bytes\r\n return self.adapter.read_bytes(count, **kwargs)\r\n..\\..\\..\\pymeasure\\adapters\\adapter.py:123: in read_bytes\r\n read = self._read_bytes(count, break_on_termchar, **kwargs)\r\n..\\..\\..\\pymeasure\\adapters\\serial.py:108: in _read_bytes\r\n return self.connection.read(1e99 if count == -1 else count, **kwargs)\r\n..\\..\\..\\venv\\lib\\site-packages\\serial\\serialwin32.py:278: in read\r\n buf = ctypes.create_string_buffer(n)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\ninit = 1e+99, size = None\r\n\r\n def create_string_buffer(init, size=None):\r\n \"\"\"create_string_buffer(aBytes) -> character array\r\n create_string_buffer(anInteger) -> character array\r\n create_string_buffer(aBytes, anInteger) -> character array\r\n \"\"\"\r\n if isinstance(init, bytes):\r\n if size is None:\r\n size = len(init)+1\r\n _sys.audit(\"ctypes.create_string_buffer\", init, size)\r\n buftype = c_char * size\r\n buf = buftype()\r\n buf.value = init\r\n return buf\r\n elif isinstance(init, int):\r\n _sys.audit(\"ctypes.create_string_buffer\", None, init)\r\n buftype = c_char * init\r\n buf = buftype()\r\n return buf\r\n> raise TypeError(init)\r\nE TypeError: 1e+99\r\n\r\nC:\\Program Files\\Python310\\lib\\ctypes\\__init__.py:66: TypeError\r\n\r\n```\r\n\r\nIt looks like reading 1e99 bytes no (longer?) 
works.\n", "before_files": [{"content": "#\n# This file is part of the PyMeasure package.\n#\n# Copyright (c) 2013-2023 PyMeasure Developers\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n\nimport logging\n\nimport serial\nfrom .adapter import Adapter\n\nlog = logging.getLogger(__name__)\nlog.addHandler(logging.NullHandler())\n\n\nclass SerialAdapter(Adapter):\n \"\"\" Adapter class for using the Python Serial package to allow\n serial communication to instrument\n\n :param port: Serial port\n :param preprocess_reply: An optional callable used to preprocess strings\n received from the instrument. The callable returns the processed string.\n\n .. deprecated:: 0.11\n Implement it in the instrument's `read` method instead.\n\n :param write_termination: String appended to messages before writing them.\n :param read_termination: String expected at end of read message and removed.\n :param \\\\**kwargs: Any valid key-word argument for serial.Serial\n \"\"\"\n\n def __init__(self, port, preprocess_reply=None,\n write_termination=\"\", read_termination=\"\",\n **kwargs):\n super().__init__(preprocess_reply=preprocess_reply)\n if isinstance(port, serial.SerialBase):\n self.connection = port\n else:\n self.connection = serial.Serial(port, **kwargs)\n self.write_termination = write_termination\n self.read_termination = read_termination\n\n def _write(self, command, **kwargs):\n \"\"\"Write a string command to the instrument appending `write_termination`.\n\n :param str command: Command string to be sent to the instrument\n (without termination).\n :param \\\\**kwargs: Keyword arguments for the connection itself.\n \"\"\"\n command += self.write_termination\n self._write_bytes(command.encode(), **kwargs)\n\n def _write_bytes(self, content, **kwargs):\n \"\"\"Write the bytes `content` to the instrument.\n\n :param bytes content: The bytes to write to the instrument.\n :param \\\\**kwargs: Keyword arguments for the connection itself.\n \"\"\"\n self.connection.write(content, **kwargs)\n\n def _read(self, **kwargs):\n \"\"\"Read up to (excluding) `read_termination` or the whole read buffer.\n\n :param \\\\**kwargs: Keyword arguments for the connection itself.\n :returns str: ASCII response of the instrument (read_termination is removed first).\n \"\"\"\n read = self._read_bytes(-1, break_on_termchar=True, **kwargs).decode()\n # Python>3.8 this shorter form is possible:\n # self._read_bytes(-1).decode().removesuffix(self.read_termination)\n if self.read_termination:\n return 
read.split(self.read_termination)[0]\n else:\n return read\n\n def _read_bytes(self, count, break_on_termchar, **kwargs):\n \"\"\"Read a certain number of bytes from the instrument.\n\n :param int count: Number of bytes to read. A value of -1 indicates to\n read from the whole read buffer.\n :param bool break_on_termchar: Stop reading at a termination character.\n :param \\\\**kwargs: Keyword arguments for the connection itself.\n :returns bytes: Bytes response of the instrument (including termination).\n \"\"\"\n if break_on_termchar and self.read_termination:\n return self.connection.read_until(self.read_termination.encode(),\n count if count > 0 else None,\n **kwargs)\n else:\n # At -1 we read a very large number of bytes, which can be considered the whole buffer.\n return self.connection.read(1e99 if count == -1 else count, **kwargs)\n\n def flush_read_buffer(self):\n \"\"\"Flush and discard the input buffer.\"\"\"\n self.connection.reset_input_buffer()\n\n def __repr__(self):\n return \"<SerialAdapter(port='%s')>\" % self.connection.port\n", "path": "pymeasure/adapters/serial.py"}], "after_files": [{"content": "#\n# This file is part of the PyMeasure package.\n#\n# Copyright (c) 2013-2023 PyMeasure Developers\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n\nimport logging\n\nimport serial\nfrom .adapter import Adapter\n\nlog = logging.getLogger(__name__)\nlog.addHandler(logging.NullHandler())\n\n\nclass SerialAdapter(Adapter):\n \"\"\" Adapter class for using the Python Serial package to allow\n serial communication to instrument\n\n :param port: Serial port\n :param preprocess_reply: An optional callable used to preprocess strings\n received from the instrument. The callable returns the processed string.\n\n .. 
deprecated:: 0.11\n Implement it in the instrument's `read` method instead.\n\n :param write_termination: String appended to messages before writing them.\n :param read_termination: String expected at end of read message and removed.\n :param \\\\**kwargs: Any valid key-word argument for serial.Serial\n \"\"\"\n\n def __init__(self, port, preprocess_reply=None,\n write_termination=\"\", read_termination=\"\",\n **kwargs):\n super().__init__(preprocess_reply=preprocess_reply)\n if isinstance(port, serial.SerialBase):\n self.connection = port\n else:\n self.connection = serial.Serial(port, **kwargs)\n self.write_termination = write_termination\n self.read_termination = read_termination\n\n def _write(self, command, **kwargs):\n \"\"\"Write a string command to the instrument appending `write_termination`.\n\n :param str command: Command string to be sent to the instrument\n (without termination).\n :param \\\\**kwargs: Keyword arguments for the connection itself.\n \"\"\"\n command += self.write_termination\n self._write_bytes(command.encode(), **kwargs)\n\n def _write_bytes(self, content, **kwargs):\n \"\"\"Write the bytes `content` to the instrument.\n\n :param bytes content: The bytes to write to the instrument.\n :param \\\\**kwargs: Keyword arguments for the connection itself.\n \"\"\"\n self.connection.write(content, **kwargs)\n\n def _read(self, **kwargs):\n \"\"\"Read up to (excluding) `read_termination` or the whole read buffer.\n\n :param \\\\**kwargs: Keyword arguments for the connection itself.\n :returns str: ASCII response of the instrument (read_termination is removed first).\n \"\"\"\n read = self._read_bytes(-1, break_on_termchar=True, **kwargs).decode()\n # Python>3.8 this shorter form is possible:\n # self._read_bytes(-1).decode().removesuffix(self.read_termination)\n if self.read_termination:\n return read.split(self.read_termination)[0]\n else:\n return read\n\n def _read_bytes(self, count, break_on_termchar, **kwargs):\n \"\"\"Read a certain number of bytes from the instrument.\n\n :param int count: Number of bytes to read. A value of -1 indicates to\n read from the whole read buffer (waits for timeout).\n :param bool break_on_termchar: Stop reading at a termination character.\n :param \\\\**kwargs: Keyword arguments for the connection itself.\n :returns bytes: Bytes response of the instrument (including termination).\n \"\"\"\n if break_on_termchar and self.read_termination:\n return self.connection.read_until(self.read_termination.encode(),\n count if count > 0 else None,\n **kwargs)\n elif count >= 0:\n return self.connection.read(count, **kwargs)\n else:\n # For -1 we empty the buffer completely\n return self._read_bytes_until_timeout()\n\n def _read_bytes_until_timeout(self, chunk_size=256, **kwargs):\n \"\"\"Read from the serial until a timeout occurs, regardless of the number of bytes.\n\n :chunk_size: The number of bytes attempted to in a single transaction.\n Multiple of these transactions will occur.\n \"\"\"\n # `Serial.readlines()` has an unpredictable timeout, see PR #866\n data = bytes()\n while True:\n chunk = self.connection.read(chunk_size, **kwargs)\n data += chunk\n if len(chunk) < chunk_size: # If fewer bytes got returned, we had a timeout\n return data\n\n def flush_read_buffer(self):\n \"\"\"Flush and discard the input buffer.\"\"\"\n self.connection.reset_input_buffer()\n\n def __repr__(self):\n return \"<SerialAdapter(port='%s')>\" % self.connection.port\n", "path": "pymeasure/adapters/serial.py"}]}
| 2,058 | 449 |
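A minimal sketch of the read-until-timeout idea used in the patch above, assuming a pyserial-style connection whose `read(size)` returns fewer bytes than requested once its timeout expires. The function name, chunk size, and the commented usage (port name, timeout value) are illustrative placeholders, not part of PyMeasure's API.

```python
# Chunked "read until timeout" drain of a serial read buffer.
# Assumes pyserial semantics: read(size) blocks at most `timeout` seconds
# and may return fewer bytes than requested when the timeout expires.
import serial


def read_all_until_timeout(connection: serial.Serial, chunk_size: int = 256) -> bytes:
    """Read fixed-size chunks until a short read signals the buffer is drained."""
    data = bytes()
    while True:
        chunk = connection.read(chunk_size)
        data += chunk
        if len(chunk) < chunk_size:  # short read => timeout, nothing left to read
            return data


# Hypothetical usage (port name and timeout are placeholders):
# conn = serial.Serial("COM3", timeout=0.5)
# payload = read_all_until_timeout(conn)
```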
gh_patches_debug_35950 | rasdani/github-patches | git_diff | dask__dask-1325 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NotImplementedError: Unknown protocol hdfs
After running into issue #1232, I upgraded to the source version. hdfs3 is installed and working, but
```
import hdfs3
from dask import dataframe as dd
namenode = 'mynode.com'
port = 8020
hdfs = hdfs3.HDFileSystem(namenode, port)
hdfs.list('/path/to/file.csv')
# returns file node information
data = df.read_csv("hdfs://{0}/path/to/file.csv".format(namenode))
```
This produces the following error
```
NotImplementedErrorTraceback (most recent call last)
<ipython-input-30-2d58fa3887f0> in <module>()
----> 1 data = dd.read_csv('hdfs://{0}/path.to/file.csv'.format(namenode))
/usr/local/share/virtualenvs/cartman/lib/python2.7/site-packages/dask/dataframe/csv.pyc in read_csv(filename, blocksize, chunkbytes, collection, lineterminator, compression, sample, enforce, storage_options, **kwargs)
/usr/local/share/virtualenvs/cartman/lib/python2.7/site-packages/dask/bytes/core.pyc in read_bytes(path, delimiter, not_zero, blocksize, sample, compression, **kwargs)
NotImplementedError: Unknown protocol hdfs://mynode.com/path/to/file.csv
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dask/bytes/core.py`
Content:
```
1 from __future__ import print_function, division, absolute_import
2
3 import io
4
5 from toolz import merge
6
7 from .compression import seekable_files, files as compress_files
8 from .utils import SeekableFile
9 from ..compatibility import PY2
10 from ..delayed import delayed
11 from ..utils import infer_storage_options, system_encoding
12
13 delayed = delayed(pure=True)
14
15 # Global registration dictionaries for backend storage functions
16 # See docstrings to functions below for more information
17 _read_bytes = dict()
18 _open_files = dict()
19 _open_text_files = dict()
20
21
22 def read_bytes(urlpath, delimiter=None, not_zero=False, blocksize=2**27,
23 sample=True, compression=None, **kwargs):
24 """ Convert path to a list of delayed values
25
26 The path may be a filename like ``'2015-01-01.csv'`` or a globstring
27 like ``'2015-*-*.csv'``.
28
29 The path may be preceded by a protocol, like ``s3://`` or ``hdfs://`` if
30 those libraries are installed.
31
32 This cleanly breaks data by a delimiter if given, so that block boundaries
33 start directly after a delimiter and end on the delimiter.
34
35 Parameters
36 ----------
37 urlpath: string
38 Absolute or relative filepath, URL (may include protocols like
39 ``s3://``), or globstring pointing to data.
40 delimiter: bytes
41 An optional delimiter, like ``b'\n'`` on which to split blocks of bytes
42 not_zero: force seek of start-of-file delimiter, discarding header
43 blocksize: int (=128MB)
44 Chunk size
45 compression: string or None
46 String like 'gzip' or 'xz'. Must support efficient random access.
47 sample: bool, int
48 Whether or not to return a sample from the first 10k bytes
49 **kwargs: dict
50 Extra options that make sense to a particular storage connection, e.g.
51 host, port, username, password, etc.
52
53 Examples
54 --------
55 >>> sample, blocks = read_bytes('2015-*-*.csv', delimiter=b'\\n') # doctest: +SKIP
56 >>> sample, blocks = read_bytes('s3://bucket/2015-*-*.csv', delimiter=b'\\n') # doctest: +SKIP
57
58 Returns
59 -------
60 10kB sample header and list of ``dask.Delayed`` objects or list of lists of
61 delayed objects if ``fn`` is a globstring.
62 """
63 if compression is not None and compression not in compress_files:
64 raise ValueError("Compression type %s not supported" % compression)
65
66 storage_options = infer_storage_options(urlpath,
67 inherit_storage_options=kwargs)
68 protocol = storage_options.pop('protocol')
69 try:
70 read_bytes = _read_bytes[protocol]
71 except KeyError:
72 raise NotImplementedError("Unknown protocol %s (%s)" %
73 (protocol, urlpath))
74
75 return read_bytes(storage_options.pop('path'), delimiter=delimiter,
76 not_zero=not_zero, blocksize=blocksize, sample=sample,
77 compression=compression, **storage_options)
78
79
80 def open_files_by(open_files_backend, path, compression=None, **kwargs):
81 """ Given open files backend and path return dask.delayed file-like objects
82
83 NOTE: This is an internal helper function, please refer to
84 :func:`open_files` documentation for more details.
85
86 Parameters
87 ----------
88 path: string
89 Filepath or globstring
90 compression: string
91 Compression to use. See ``dask.bytes.compression.files`` for options.
92 **kwargs: dict
93 Extra options that make sense to a particular storage connection, e.g.
94 host, port, username, password, etc.
95
96 Returns
97 -------
98 List of ``dask.delayed`` objects that compute to file-like objects
99 """
100 files = open_files_backend(path, **kwargs)
101
102 if compression:
103 decompress = merge(seekable_files, compress_files)[compression]
104 if PY2:
105 files = [delayed(SeekableFile)(file) for file in files]
106 files = [delayed(decompress)(file) for file in files]
107
108 return files
109
110
111 def open_files(urlpath, compression=None, **kwargs):
112 """ Given path return dask.delayed file-like objects
113
114 Parameters
115 ----------
116 urlpath: string
117 Absolute or relative filepath, URL (may include protocols like
118 ``s3://``), or globstring pointing to data.
119 compression: string
120 Compression to use. See ``dask.bytes.compression.files`` for options.
121 **kwargs: dict
122 Extra options that make sense to a particular storage connection, e.g.
123 host, port, username, password, etc.
124
125 Examples
126 --------
127 >>> files = open_files('2015-*-*.csv') # doctest: +SKIP
128 >>> files = open_files('s3://bucket/2015-*-*.csv.gz', compression='gzip') # doctest: +SKIP
129
130 Returns
131 -------
132 List of ``dask.delayed`` objects that compute to file-like objects
133 """
134 if compression is not None and compression not in compress_files:
135 raise ValueError("Compression type %s not supported" % compression)
136
137 storage_options = infer_storage_options(urlpath,
138 inherit_storage_options=kwargs)
139 protocol = storage_options.pop('protocol')
140 try:
141 open_files_backend = _open_files[protocol]
142 except KeyError:
143 raise NotImplementedError("Unknown protocol %s (%s)" %
144 (protocol, urlpath))
145
146 return open_files_by(open_files_backend, storage_options.pop('path'),
147 compression=compression, **storage_options)
148
149
150 def open_text_files(urlpath, encoding=system_encoding, errors='strict',
151 compression=None, **kwargs):
152 """ Given path return dask.delayed file-like objects in text mode
153
154 Parameters
155 ----------
156 urlpath: string
157 Absolute or relative filepath, URL (may include protocols like
158 ``s3://``), or globstring pointing to data.
159 encoding: string
160 errors: string
161 compression: string
162 Compression to use. See ``dask.bytes.compression.files`` for options.
163 **kwargs: dict
164 Extra options that make sense to a particular storage connection, e.g.
165 host, port, username, password, etc.
166
167 Examples
168 --------
169 >>> files = open_text_files('2015-*-*.csv', encoding='utf-8') # doctest: +SKIP
170 >>> files = open_text_files('s3://bucket/2015-*-*.csv') # doctest: +SKIP
171
172 Returns
173 -------
174 List of ``dask.delayed`` objects that compute to text file-like objects
175 """
176 if compression is not None and compression not in compress_files:
177 raise ValueError("Compression type %s not supported" % compression)
178
179 storage_options = infer_storage_options(urlpath,
180 inherit_storage_options=kwargs)
181 path = storage_options.pop('path')
182 protocol = storage_options.pop('protocol')
183 if protocol in _open_text_files and compression is None:
184 return _open_text_files[protocol](path,
185 encoding=encoding,
186 errors=errors,
187 **storage_options)
188 elif protocol in _open_files:
189 files = open_files_by(_open_files[protocol],
190 path,
191 compression=compression,
192 **storage_options)
193 if PY2:
194 files = [delayed(SeekableFile)(file) for file in files]
195 return [delayed(io.TextIOWrapper)(file, encoding=encoding,
196 errors=errors) for file in files]
197 else:
198 raise NotImplementedError("Unknown protocol %s (%s)" %
199 (protocol, urlpath))
200
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/dask/bytes/core.py b/dask/bytes/core.py
--- a/dask/bytes/core.py
+++ b/dask/bytes/core.py
@@ -66,6 +66,7 @@
storage_options = infer_storage_options(urlpath,
inherit_storage_options=kwargs)
protocol = storage_options.pop('protocol')
+ ensure_protocol(protocol)
try:
read_bytes = _read_bytes[protocol]
except KeyError:
@@ -137,6 +138,7 @@
storage_options = infer_storage_options(urlpath,
inherit_storage_options=kwargs)
protocol = storage_options.pop('protocol')
+ ensure_protocol(protocol)
try:
open_files_backend = _open_files[protocol]
except KeyError:
@@ -180,6 +182,7 @@
inherit_storage_options=kwargs)
path = storage_options.pop('path')
protocol = storage_options.pop('protocol')
+ ensure_protocol(protocol)
if protocol in _open_text_files and compression is None:
return _open_text_files[protocol](path,
encoding=encoding,
@@ -197,3 +200,28 @@
else:
raise NotImplementedError("Unknown protocol %s (%s)" %
(protocol, urlpath))
+
+
+def ensure_protocol(protocol):
+ if protocol in _read_bytes or protocol in _open_files:
+ return
+
+ if protocol == 's3':
+ try:
+ import dask.s3
+ except ImportError:
+ raise ImportError("Need to install `s3fs` library for s3 support\n"
+ " conda install s3fs -c conda-forge\n"
+ " or\n"
+ " pip install s3fs")
+
+ elif protocol == 'hdfs':
+ try:
+ import distributed.hdfs
+ except ImportError:
+ raise ImportError("Need to install `distributed` and `hdfs3` "
+ "for HDFS support\n"
+ " conda install distributed hdfs3 -c conda-forge")
+
+ else:
+ raise ValueError("Unknown protocol %s" % protocol)
|
{"golden_diff": "diff --git a/dask/bytes/core.py b/dask/bytes/core.py\n--- a/dask/bytes/core.py\n+++ b/dask/bytes/core.py\n@@ -66,6 +66,7 @@\n storage_options = infer_storage_options(urlpath,\n inherit_storage_options=kwargs)\n protocol = storage_options.pop('protocol')\n+ ensure_protocol(protocol)\n try:\n read_bytes = _read_bytes[protocol]\n except KeyError:\n@@ -137,6 +138,7 @@\n storage_options = infer_storage_options(urlpath,\n inherit_storage_options=kwargs)\n protocol = storage_options.pop('protocol')\n+ ensure_protocol(protocol)\n try:\n open_files_backend = _open_files[protocol]\n except KeyError:\n@@ -180,6 +182,7 @@\n inherit_storage_options=kwargs)\n path = storage_options.pop('path')\n protocol = storage_options.pop('protocol')\n+ ensure_protocol(protocol)\n if protocol in _open_text_files and compression is None:\n return _open_text_files[protocol](path,\n encoding=encoding,\n@@ -197,3 +200,28 @@\n else:\n raise NotImplementedError(\"Unknown protocol %s (%s)\" %\n (protocol, urlpath))\n+\n+\n+def ensure_protocol(protocol):\n+ if protocol in _read_bytes or protocol in _open_files:\n+ return\n+\n+ if protocol == 's3':\n+ try:\n+ import dask.s3\n+ except ImportError:\n+ raise ImportError(\"Need to install `s3fs` library for s3 support\\n\"\n+ \" conda install s3fs -c conda-forge\\n\"\n+ \" or\\n\"\n+ \" pip install s3fs\")\n+\n+ elif protocol == 'hdfs':\n+ try:\n+ import distributed.hdfs\n+ except ImportError:\n+ raise ImportError(\"Need to install `distributed` and `hdfs3` \"\n+ \"for HDFS support\\n\"\n+ \" conda install distributed hdfs3 -c conda-forge\")\n+\n+ else:\n+ raise ValueError(\"Unknown protocol %s\" % protocol)\n", "issue": "NotImplementedError: Unknown protocol hdfs\nAfter running into issue #1232, I upgraded to the the source version. 
hdfs3 is installed and working, but\n\n```\nimport hdfs3\nfrom dask import dataframe as dd\n\nnamenode = 'mynode.com'\nport = 8020\nhdfs = hdfs3.HDFileSystem(namenode, port)\nhdfs.list('/path/to/file.csv')\n# returns file node information\ndata = df.read_csv(\"hdfs://{0}/path/to/file.csv\".format(namenode))\n```\n\nThis produces the following error\n\n```\nNotImplementedErrorTraceback (most recent call last)\n<ipython-input-30-2d58fa3887f0> in <module>()\n----> 1 data = dd.read_csv('hdfs://{0}/path.to/file.csv'.format(namenode))\n\n/usr/local/share/virtualenvs/cartman/lib/python2.7/site-packages/dask/dataframe/csv.pyc in read_csv(filename, blocksize, chunkbytes, collection, lineterminator, compression, sample, enforce, storage_options, **kwargs)\n\n/usr/local/share/virtualenvs/cartman/lib/python2.7/site-packages/dask/bytes/core.pyc in read_bytes(path, delimiter, not_zero, blocksize, sample, compression, **kwargs)\n\nNotImplementedError: Unknown protocol hdfs://mynode.com/path/to/file.csv\n```\n\n", "before_files": [{"content": "from __future__ import print_function, division, absolute_import\n\nimport io\n\nfrom toolz import merge\n\nfrom .compression import seekable_files, files as compress_files\nfrom .utils import SeekableFile\nfrom ..compatibility import PY2\nfrom ..delayed import delayed\nfrom ..utils import infer_storage_options, system_encoding\n\ndelayed = delayed(pure=True)\n\n# Global registration dictionaries for backend storage functions\n# See docstrings to functions below for more information\n_read_bytes = dict()\n_open_files = dict()\n_open_text_files = dict()\n\n\ndef read_bytes(urlpath, delimiter=None, not_zero=False, blocksize=2**27,\n sample=True, compression=None, **kwargs):\n \"\"\" Convert path to a list of delayed values\n\n The path may be a filename like ``'2015-01-01.csv'`` or a globstring\n like ``'2015-*-*.csv'``.\n\n The path may be preceded by a protocol, like ``s3://`` or ``hdfs://`` if\n those libraries are installed.\n\n This cleanly breaks data by a delimiter if given, so that block boundaries\n start directly after a delimiter and end on the delimiter.\n\n Parameters\n ----------\n urlpath: string\n Absolute or relative filepath, URL (may include protocols like\n ``s3://``), or globstring pointing to data.\n delimiter: bytes\n An optional delimiter, like ``b'\\n'`` on which to split blocks of bytes\n not_zero: force seek of start-of-file delimiter, discarding header\n blocksize: int (=128MB)\n Chunk size\n compression: string or None\n String like 'gzip' or 'xz'. 
Must support efficient random access.\n sample: bool, int\n Whether or not to return a sample from the first 10k bytes\n **kwargs: dict\n Extra options that make sense to a particular storage connection, e.g.\n host, port, username, password, etc.\n\n Examples\n --------\n >>> sample, blocks = read_bytes('2015-*-*.csv', delimiter=b'\\\\n') # doctest: +SKIP\n >>> sample, blocks = read_bytes('s3://bucket/2015-*-*.csv', delimiter=b'\\\\n') # doctest: +SKIP\n\n Returns\n -------\n 10kB sample header and list of ``dask.Delayed`` objects or list of lists of\n delayed objects if ``fn`` is a globstring.\n \"\"\"\n if compression is not None and compression not in compress_files:\n raise ValueError(\"Compression type %s not supported\" % compression)\n\n storage_options = infer_storage_options(urlpath,\n inherit_storage_options=kwargs)\n protocol = storage_options.pop('protocol')\n try:\n read_bytes = _read_bytes[protocol]\n except KeyError:\n raise NotImplementedError(\"Unknown protocol %s (%s)\" %\n (protocol, urlpath))\n\n return read_bytes(storage_options.pop('path'), delimiter=delimiter,\n not_zero=not_zero, blocksize=blocksize, sample=sample,\n compression=compression, **storage_options)\n\n\ndef open_files_by(open_files_backend, path, compression=None, **kwargs):\n \"\"\" Given open files backend and path return dask.delayed file-like objects\n\n NOTE: This is an internal helper function, please refer to\n :func:`open_files` documentation for more details.\n\n Parameters\n ----------\n path: string\n Filepath or globstring\n compression: string\n Compression to use. See ``dask.bytes.compression.files`` for options.\n **kwargs: dict\n Extra options that make sense to a particular storage connection, e.g.\n host, port, username, password, etc.\n\n Returns\n -------\n List of ``dask.delayed`` objects that compute to file-like objects\n \"\"\"\n files = open_files_backend(path, **kwargs)\n\n if compression:\n decompress = merge(seekable_files, compress_files)[compression]\n if PY2:\n files = [delayed(SeekableFile)(file) for file in files]\n files = [delayed(decompress)(file) for file in files]\n\n return files\n\n\ndef open_files(urlpath, compression=None, **kwargs):\n \"\"\" Given path return dask.delayed file-like objects\n\n Parameters\n ----------\n urlpath: string\n Absolute or relative filepath, URL (may include protocols like\n ``s3://``), or globstring pointing to data.\n compression: string\n Compression to use. 
See ``dask.bytes.compression.files`` for options.\n **kwargs: dict\n Extra options that make sense to a particular storage connection, e.g.\n host, port, username, password, etc.\n\n Examples\n --------\n >>> files = open_files('2015-*-*.csv') # doctest: +SKIP\n >>> files = open_files('s3://bucket/2015-*-*.csv.gz', compression='gzip') # doctest: +SKIP\n\n Returns\n -------\n List of ``dask.delayed`` objects that compute to file-like objects\n \"\"\"\n if compression is not None and compression not in compress_files:\n raise ValueError(\"Compression type %s not supported\" % compression)\n\n storage_options = infer_storage_options(urlpath,\n inherit_storage_options=kwargs)\n protocol = storage_options.pop('protocol')\n try:\n open_files_backend = _open_files[protocol]\n except KeyError:\n raise NotImplementedError(\"Unknown protocol %s (%s)\" %\n (protocol, urlpath))\n\n return open_files_by(open_files_backend, storage_options.pop('path'),\n compression=compression, **storage_options)\n\n\ndef open_text_files(urlpath, encoding=system_encoding, errors='strict',\n compression=None, **kwargs):\n \"\"\" Given path return dask.delayed file-like objects in text mode\n\n Parameters\n ----------\n urlpath: string\n Absolute or relative filepath, URL (may include protocols like\n ``s3://``), or globstring pointing to data.\n encoding: string\n errors: string\n compression: string\n Compression to use. See ``dask.bytes.compression.files`` for options.\n **kwargs: dict\n Extra options that make sense to a particular storage connection, e.g.\n host, port, username, password, etc.\n\n Examples\n --------\n >>> files = open_text_files('2015-*-*.csv', encoding='utf-8') # doctest: +SKIP\n >>> files = open_text_files('s3://bucket/2015-*-*.csv') # doctest: +SKIP\n\n Returns\n -------\n List of ``dask.delayed`` objects that compute to text file-like objects\n \"\"\"\n if compression is not None and compression not in compress_files:\n raise ValueError(\"Compression type %s not supported\" % compression)\n\n storage_options = infer_storage_options(urlpath,\n inherit_storage_options=kwargs)\n path = storage_options.pop('path')\n protocol = storage_options.pop('protocol')\n if protocol in _open_text_files and compression is None:\n return _open_text_files[protocol](path,\n encoding=encoding,\n errors=errors,\n **storage_options)\n elif protocol in _open_files:\n files = open_files_by(_open_files[protocol],\n path,\n compression=compression,\n **storage_options)\n if PY2:\n files = [delayed(SeekableFile)(file) for file in files]\n return [delayed(io.TextIOWrapper)(file, encoding=encoding,\n errors=errors) for file in files]\n else:\n raise NotImplementedError(\"Unknown protocol %s (%s)\" %\n (protocol, urlpath))\n", "path": "dask/bytes/core.py"}], "after_files": [{"content": "from __future__ import print_function, division, absolute_import\n\nimport io\n\nfrom toolz import merge\n\nfrom .compression import seekable_files, files as compress_files\nfrom .utils import SeekableFile\nfrom ..compatibility import PY2\nfrom ..delayed import delayed\nfrom ..utils import infer_storage_options, system_encoding\n\ndelayed = delayed(pure=True)\n\n# Global registration dictionaries for backend storage functions\n# See docstrings to functions below for more information\n_read_bytes = dict()\n_open_files = dict()\n_open_text_files = dict()\n\n\ndef read_bytes(urlpath, delimiter=None, not_zero=False, blocksize=2**27,\n sample=True, compression=None, **kwargs):\n \"\"\" Convert path to a list of delayed values\n\n The path may be 
a filename like ``'2015-01-01.csv'`` or a globstring\n like ``'2015-*-*.csv'``.\n\n The path may be preceded by a protocol, like ``s3://`` or ``hdfs://`` if\n those libraries are installed.\n\n This cleanly breaks data by a delimiter if given, so that block boundaries\n start directly after a delimiter and end on the delimiter.\n\n Parameters\n ----------\n urlpath: string\n Absolute or relative filepath, URL (may include protocols like\n ``s3://``), or globstring pointing to data.\n delimiter: bytes\n An optional delimiter, like ``b'\\n'`` on which to split blocks of bytes\n not_zero: force seek of start-of-file delimiter, discarding header\n blocksize: int (=128MB)\n Chunk size\n compression: string or None\n String like 'gzip' or 'xz'. Must support efficient random access.\n sample: bool, int\n Whether or not to return a sample from the first 10k bytes\n **kwargs: dict\n Extra options that make sense to a particular storage connection, e.g.\n host, port, username, password, etc.\n\n Examples\n --------\n >>> sample, blocks = read_bytes('2015-*-*.csv', delimiter=b'\\\\n') # doctest: +SKIP\n >>> sample, blocks = read_bytes('s3://bucket/2015-*-*.csv', delimiter=b'\\\\n') # doctest: +SKIP\n\n Returns\n -------\n 10kB sample header and list of ``dask.Delayed`` objects or list of lists of\n delayed objects if ``fn`` is a globstring.\n \"\"\"\n if compression is not None and compression not in compress_files:\n raise ValueError(\"Compression type %s not supported\" % compression)\n\n storage_options = infer_storage_options(urlpath,\n inherit_storage_options=kwargs)\n protocol = storage_options.pop('protocol')\n ensure_protocol(protocol)\n try:\n read_bytes = _read_bytes[protocol]\n except KeyError:\n raise NotImplementedError(\"Unknown protocol %s (%s)\" %\n (protocol, urlpath))\n\n return read_bytes(storage_options.pop('path'), delimiter=delimiter,\n not_zero=not_zero, blocksize=blocksize, sample=sample,\n compression=compression, **storage_options)\n\n\ndef open_files_by(open_files_backend, path, compression=None, **kwargs):\n \"\"\" Given open files backend and path return dask.delayed file-like objects\n\n NOTE: This is an internal helper function, please refer to\n :func:`open_files` documentation for more details.\n\n Parameters\n ----------\n path: string\n Filepath or globstring\n compression: string\n Compression to use. See ``dask.bytes.compression.files`` for options.\n **kwargs: dict\n Extra options that make sense to a particular storage connection, e.g.\n host, port, username, password, etc.\n\n Returns\n -------\n List of ``dask.delayed`` objects that compute to file-like objects\n \"\"\"\n files = open_files_backend(path, **kwargs)\n\n if compression:\n decompress = merge(seekable_files, compress_files)[compression]\n if PY2:\n files = [delayed(SeekableFile)(file) for file in files]\n files = [delayed(decompress)(file) for file in files]\n\n return files\n\n\ndef open_files(urlpath, compression=None, **kwargs):\n \"\"\" Given path return dask.delayed file-like objects\n\n Parameters\n ----------\n urlpath: string\n Absolute or relative filepath, URL (may include protocols like\n ``s3://``), or globstring pointing to data.\n compression: string\n Compression to use. 
See ``dask.bytes.compression.files`` for options.\n **kwargs: dict\n Extra options that make sense to a particular storage connection, e.g.\n host, port, username, password, etc.\n\n Examples\n --------\n >>> files = open_files('2015-*-*.csv') # doctest: +SKIP\n >>> files = open_files('s3://bucket/2015-*-*.csv.gz', compression='gzip') # doctest: +SKIP\n\n Returns\n -------\n List of ``dask.delayed`` objects that compute to file-like objects\n \"\"\"\n if compression is not None and compression not in compress_files:\n raise ValueError(\"Compression type %s not supported\" % compression)\n\n storage_options = infer_storage_options(urlpath,\n inherit_storage_options=kwargs)\n protocol = storage_options.pop('protocol')\n ensure_protocol(protocol)\n try:\n open_files_backend = _open_files[protocol]\n except KeyError:\n raise NotImplementedError(\"Unknown protocol %s (%s)\" %\n (protocol, urlpath))\n\n return open_files_by(open_files_backend, storage_options.pop('path'),\n compression=compression, **storage_options)\n\n\ndef open_text_files(urlpath, encoding=system_encoding, errors='strict',\n compression=None, **kwargs):\n \"\"\" Given path return dask.delayed file-like objects in text mode\n\n Parameters\n ----------\n urlpath: string\n Absolute or relative filepath, URL (may include protocols like\n ``s3://``), or globstring pointing to data.\n encoding: string\n errors: string\n compression: string\n Compression to use. See ``dask.bytes.compression.files`` for options.\n **kwargs: dict\n Extra options that make sense to a particular storage connection, e.g.\n host, port, username, password, etc.\n\n Examples\n --------\n >>> files = open_text_files('2015-*-*.csv', encoding='utf-8') # doctest: +SKIP\n >>> files = open_text_files('s3://bucket/2015-*-*.csv') # doctest: +SKIP\n\n Returns\n -------\n List of ``dask.delayed`` objects that compute to text file-like objects\n \"\"\"\n if compression is not None and compression not in compress_files:\n raise ValueError(\"Compression type %s not supported\" % compression)\n\n storage_options = infer_storage_options(urlpath,\n inherit_storage_options=kwargs)\n path = storage_options.pop('path')\n protocol = storage_options.pop('protocol')\n ensure_protocol(protocol)\n if protocol in _open_text_files and compression is None:\n return _open_text_files[protocol](path,\n encoding=encoding,\n errors=errors,\n **storage_options)\n elif protocol in _open_files:\n files = open_files_by(_open_files[protocol],\n path,\n compression=compression,\n **storage_options)\n if PY2:\n files = [delayed(SeekableFile)(file) for file in files]\n return [delayed(io.TextIOWrapper)(file, encoding=encoding,\n errors=errors) for file in files]\n else:\n raise NotImplementedError(\"Unknown protocol %s (%s)\" %\n (protocol, urlpath))\n\n\ndef ensure_protocol(protocol):\n if protocol in _read_bytes or protocol in _open_files:\n return\n\n if protocol == 's3':\n try:\n import dask.s3\n except ImportError:\n raise ImportError(\"Need to install `s3fs` library for s3 support\\n\"\n \" conda install s3fs -c conda-forge\\n\"\n \" or\\n\"\n \" pip install s3fs\")\n\n elif protocol == 'hdfs':\n try:\n import distributed.hdfs\n except ImportError:\n raise ImportError(\"Need to install `distributed` and `hdfs3` \"\n \"for HDFS support\\n\"\n \" conda install distributed hdfs3 -c conda-forge\")\n\n else:\n raise ValueError(\"Unknown protocol %s\" % protocol)\n", "path": "dask/bytes/core.py"}]}
| 2,753 | 470 |
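A rough sketch of the protocol-validation idea the patch above introduces: consult a backend registry first, and turn a known-but-uninstalled protocol into an actionable ImportError instead of a bare NotImplementedError. The registry contents and error messages below are illustrative stand-ins, not dask's actual internals.

```python
# Protocol check with install hints for known-but-missing backends.
# `_registered` stands in for dask's _read_bytes/_open_files registries.
_registered = {"file"}


def ensure_protocol(protocol: str) -> None:
    if protocol in _registered:
        return
    if protocol == "s3":
        raise ImportError("s3 support requires the `s3fs` library (pip install s3fs)")
    if protocol == "hdfs":
        raise ImportError("hdfs support requires `distributed` and `hdfs3` "
                          "(conda install distributed hdfs3 -c conda-forge)")
    raise ValueError("Unknown protocol %s" % protocol)


# ensure_protocol("hdfs")  # -> ImportError with an install hint rather than
#                          #    the opaque "Unknown protocol hdfs://..." error
```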
gh_patches_debug_7817 | rasdani/github-patches | git_diff | lhotse-speech__lhotse-1197 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The 'speaker' field in the stcmds dataset is incorrect
The file 20170001P00001I0001.wav is spoken by a female,
and the file 20170001P00001A0001.wav is spoken by a male.
But they have the same speaker ID P00001
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lhotse/recipes/stcmds.py`
Content:
```
1 """
2 Stcmds is an open-source Chinese Mandarin corpus by Surfingtech (www.surfing.ai), containing utterances from 855 speakers, 102600 utterances;
3 Publicly available on https://www.openslr.org/resources/38
4 ST-CMDS (110 hours)
5
6 """
7 import logging
8 import os
9 import shutil
10 import tarfile
11 from collections import defaultdict
12 from pathlib import Path
13 from typing import Dict, Optional, Union
14
15 from tqdm.auto import tqdm
16
17 from lhotse import fix_manifests, validate_recordings_and_supervisions
18 from lhotse.audio import Recording, RecordingSet
19 from lhotse.supervision import SupervisionSegment, SupervisionSet
20 from lhotse.utils import Pathlike, resumable_download, safe_extract
21
22
23 def text_normalize(line: str):
24 """
25 Modified from https://github.com/wenet-e2e/wenet/blob/main/examples/multi_cn/s0/local/stcmds_data_prep.sh#L42
26 paste -d' ' $data/utt.list $data/text.list |\
27 sed 's/，//g' |\
28 tr '[a-z]' '[A-Z]' |\
29 awk '{if (NF > 1) print $0;}' > $data/train/text
30 """
31 line = line.replace("，", "")
32 line = line.upper()
33 return line
34
35
36 def download_stcmds(
37 target_dir: Pathlike = ".",
38 force_download: bool = False,
39 base_url: str = "http://www.openslr.org/resources",
40 ) -> Path:
41 """
42 Downdload and untar the dataset
43 :param target_dir: Pathlike, the path of the dir to storage the dataset.
44 :param force_download: Bool, if True, download the tars no matter if the tars exist.
45 :param base_url: str, the url of the OpenSLR resources.
46 :return: the path to downloaded and extracted directory with data.
47 """
48 url = f"{base_url}/38"
49 target_dir = Path(target_dir)
50 target_dir.mkdir(parents=True, exist_ok=True)
51 corpus_dir = target_dir / "stcmds"
52 dataset_tar_name = "ST-CMDS-20170001_1-OS.tar.gz"
53 for tar_name in [dataset_tar_name]:
54 tar_path = target_dir / tar_name
55 extracted_dir = corpus_dir / tar_name[:-7]
56 completed_detector = extracted_dir / ".completed"
57 if completed_detector.is_file():
58 logging.info(f"Skipping download of because {completed_detector} exists.")
59 continue
60 resumable_download(
61 f"{url}/{tar_name}", filename=tar_path, force_download=force_download
62 )
63 shutil.rmtree(extracted_dir, ignore_errors=True)
64 with tarfile.open(tar_path) as tar:
65 safe_extract(tar, path=corpus_dir)
66 completed_detector.touch()
67
68 return corpus_dir
69
70
71 def prepare_stcmds(
72 corpus_dir: Pathlike, output_dir: Optional[Pathlike] = None
73 ) -> Dict[str, Dict[str, Union[RecordingSet, SupervisionSet]]]:
74 """
75 Returns the manifests which consist of the Recordings and Supervisions
76 :param corpus_dir: Pathlike, the path of the data dir.
77 :param output_dir: Pathlike, the path where to write the manifests.
78 :return: a Dict whose key is the dataset part, and the value is Dicts with the keys 'recordings' and 'supervisions'.
79 """
80 corpus_dir = Path(corpus_dir)
81 assert corpus_dir.is_dir(), f"No such directory: {corpus_dir}"
82 if output_dir is not None:
83 output_dir = Path(output_dir)
84 output_dir.mkdir(parents=True, exist_ok=True)
85
86 path = corpus_dir / "ST-CMDS-20170001_1-OS"
87 transcript_dict = {}
88 for text_path in path.rglob("**/*.txt"):
89 idx = text_path.stem
90 logging.info(f"processing stcmds transcript {text_path}")
91 with open(text_path, "r", encoding="utf-8") as f:
92 for line in f:
93 line = text_normalize(line)
94 transcript_dict[idx] = line
95
96 manifests = defaultdict(dict)
97 dataset_parts = ["train"]
98 for part in tqdm(
99 dataset_parts,
100 desc="process stcmds audio, it needs waste about 2169 seconds time.",
101 ):
102 logging.info(f"Processing stcmds {part}")
103 recordings = []
104 supervisions = []
105 for audio_path in path.rglob("**/*.wav"):
106 idx = audio_path.stem
107 speaker = "".join(list(idx)[8:14])
108 if idx not in transcript_dict:
109 logging.warning(f"No transcript: {idx}")
110 logging.warning(f"{audio_path} has no transcript")
111 continue
112 text = transcript_dict[idx]
113 if not audio_path.is_file():
114 logging.warning(f"No such file: {audio_path}")
115 continue
116 recording = Recording.from_file(audio_path)
117 recordings.append(recording)
118 segment = SupervisionSegment(
119 id=idx,
120 recording_id=idx,
121 start=0.0,
122 duration=recording.duration,
123 channel=0,
124 language="Chinese",
125 speaker=speaker,
126 text=text.strip(),
127 )
128 supervisions.append(segment)
129
130 recording_set = RecordingSet.from_recordings(recordings)
131 supervision_set = SupervisionSet.from_segments(supervisions)
132
133 recording_set, supervision_set = fix_manifests(recording_set, supervision_set)
134 validate_recordings_and_supervisions(recording_set, supervision_set)
135
136 if output_dir is not None:
137 supervision_set.to_file(output_dir / f"stcmds_supervisions_{part}.jsonl.gz")
138 recording_set.to_file(output_dir / f"stcmds_recordings_{part}.jsonl.gz")
139
140 manifests[part] = {"recordings": recording_set, "supervisions": supervision_set}
141
142 return manifests
143
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lhotse/recipes/stcmds.py b/lhotse/recipes/stcmds.py
--- a/lhotse/recipes/stcmds.py
+++ b/lhotse/recipes/stcmds.py
@@ -104,7 +104,7 @@
supervisions = []
for audio_path in path.rglob("**/*.wav"):
idx = audio_path.stem
- speaker = "".join(list(idx)[8:14])
+ speaker = "".join(list(idx)[8:15])
if idx not in transcript_dict:
logging.warning(f"No transcript: {idx}")
logging.warning(f"{audio_path} has no transcript")
|
{"golden_diff": "diff --git a/lhotse/recipes/stcmds.py b/lhotse/recipes/stcmds.py\n--- a/lhotse/recipes/stcmds.py\n+++ b/lhotse/recipes/stcmds.py\n@@ -104,7 +104,7 @@\n supervisions = []\n for audio_path in path.rglob(\"**/*.wav\"):\n idx = audio_path.stem\n- speaker = \"\".join(list(idx)[8:14])\n+ speaker = \"\".join(list(idx)[8:15])\n if idx not in transcript_dict:\n logging.warning(f\"No transcript: {idx}\")\n logging.warning(f\"{audio_path} has no transcript\")\n", "issue": "The 'speaker' field in the stcmds dataset is incorrect\nThe file 20170001P00001I0001.wav is spoken by a female, \r\nand the file 20170001P00001A0001.wav is spoken by a male.\r\nBut they have the same speaker ID P00001\n", "before_files": [{"content": "\"\"\"\nStcmds is an open-source Chinese Mandarin corpus by Surfingtech (www.surfing.ai), containing utterances from 855 speakers, 102600 utterances;\nPublicly available on https://www.openslr.org/resources/38\nST-CMDS (110 hours)\n\n\"\"\"\nimport logging\nimport os\nimport shutil\nimport tarfile\nfrom collections import defaultdict\nfrom pathlib import Path\nfrom typing import Dict, Optional, Union\n\nfrom tqdm.auto import tqdm\n\nfrom lhotse import fix_manifests, validate_recordings_and_supervisions\nfrom lhotse.audio import Recording, RecordingSet\nfrom lhotse.supervision import SupervisionSegment, SupervisionSet\nfrom lhotse.utils import Pathlike, resumable_download, safe_extract\n\n\ndef text_normalize(line: str):\n \"\"\"\n Modified from https://github.com/wenet-e2e/wenet/blob/main/examples/multi_cn/s0/local/stcmds_data_prep.sh#L42\n paste -d' ' $data/utt.list $data/text.list |\\\n sed 's/\uff0c//g' |\\\n tr '[a-z]' '[A-Z]' |\\\n awk '{if (NF > 1) print $0;}' > $data/train/text\n \"\"\"\n line = line.replace(\"\uff0c\", \"\")\n line = line.upper()\n return line\n\n\ndef download_stcmds(\n target_dir: Pathlike = \".\",\n force_download: bool = False,\n base_url: str = \"http://www.openslr.org/resources\",\n) -> Path:\n \"\"\"\n Downdload and untar the dataset\n :param target_dir: Pathlike, the path of the dir to storage the dataset.\n :param force_download: Bool, if True, download the tars no matter if the tars exist.\n :param base_url: str, the url of the OpenSLR resources.\n :return: the path to downloaded and extracted directory with data.\n \"\"\"\n url = f\"{base_url}/38\"\n target_dir = Path(target_dir)\n target_dir.mkdir(parents=True, exist_ok=True)\n corpus_dir = target_dir / \"stcmds\"\n dataset_tar_name = \"ST-CMDS-20170001_1-OS.tar.gz\"\n for tar_name in [dataset_tar_name]:\n tar_path = target_dir / tar_name\n extracted_dir = corpus_dir / tar_name[:-7]\n completed_detector = extracted_dir / \".completed\"\n if completed_detector.is_file():\n logging.info(f\"Skipping download of because {completed_detector} exists.\")\n continue\n resumable_download(\n f\"{url}/{tar_name}\", filename=tar_path, force_download=force_download\n )\n shutil.rmtree(extracted_dir, ignore_errors=True)\n with tarfile.open(tar_path) as tar:\n safe_extract(tar, path=corpus_dir)\n completed_detector.touch()\n\n return corpus_dir\n\n\ndef prepare_stcmds(\n corpus_dir: Pathlike, output_dir: Optional[Pathlike] = None\n) -> Dict[str, Dict[str, Union[RecordingSet, SupervisionSet]]]:\n \"\"\"\n Returns the manifests which consist of the Recordings and Supervisions\n :param corpus_dir: Pathlike, the path of the data dir.\n :param output_dir: Pathlike, the path where to write the manifests.\n :return: a Dict whose key is the dataset part, and the value is Dicts with the keys 'recordings' 
and 'supervisions'.\n \"\"\"\n corpus_dir = Path(corpus_dir)\n assert corpus_dir.is_dir(), f\"No such directory: {corpus_dir}\"\n if output_dir is not None:\n output_dir = Path(output_dir)\n output_dir.mkdir(parents=True, exist_ok=True)\n\n path = corpus_dir / \"ST-CMDS-20170001_1-OS\"\n transcript_dict = {}\n for text_path in path.rglob(\"**/*.txt\"):\n idx = text_path.stem\n logging.info(f\"processing stcmds transcript {text_path}\")\n with open(text_path, \"r\", encoding=\"utf-8\") as f:\n for line in f:\n line = text_normalize(line)\n transcript_dict[idx] = line\n\n manifests = defaultdict(dict)\n dataset_parts = [\"train\"]\n for part in tqdm(\n dataset_parts,\n desc=\"process stcmds audio, it needs waste about 2169 seconds time.\",\n ):\n logging.info(f\"Processing stcmds {part}\")\n recordings = []\n supervisions = []\n for audio_path in path.rglob(\"**/*.wav\"):\n idx = audio_path.stem\n speaker = \"\".join(list(idx)[8:14])\n if idx not in transcript_dict:\n logging.warning(f\"No transcript: {idx}\")\n logging.warning(f\"{audio_path} has no transcript\")\n continue\n text = transcript_dict[idx]\n if not audio_path.is_file():\n logging.warning(f\"No such file: {audio_path}\")\n continue\n recording = Recording.from_file(audio_path)\n recordings.append(recording)\n segment = SupervisionSegment(\n id=idx,\n recording_id=idx,\n start=0.0,\n duration=recording.duration,\n channel=0,\n language=\"Chinese\",\n speaker=speaker,\n text=text.strip(),\n )\n supervisions.append(segment)\n\n recording_set = RecordingSet.from_recordings(recordings)\n supervision_set = SupervisionSet.from_segments(supervisions)\n\n recording_set, supervision_set = fix_manifests(recording_set, supervision_set)\n validate_recordings_and_supervisions(recording_set, supervision_set)\n\n if output_dir is not None:\n supervision_set.to_file(output_dir / f\"stcmds_supervisions_{part}.jsonl.gz\")\n recording_set.to_file(output_dir / f\"stcmds_recordings_{part}.jsonl.gz\")\n\n manifests[part] = {\"recordings\": recording_set, \"supervisions\": supervision_set}\n\n return manifests\n", "path": "lhotse/recipes/stcmds.py"}], "after_files": [{"content": "\"\"\"\nStcmds is an open-source Chinese Mandarin corpus by Surfingtech (www.surfing.ai), containing utterances from 855 speakers, 102600 utterances;\nPublicly available on https://www.openslr.org/resources/38\nST-CMDS (110 hours)\n\n\"\"\"\nimport logging\nimport os\nimport shutil\nimport tarfile\nfrom collections import defaultdict\nfrom pathlib import Path\nfrom typing import Dict, Optional, Union\n\nfrom tqdm.auto import tqdm\n\nfrom lhotse import fix_manifests, validate_recordings_and_supervisions\nfrom lhotse.audio import Recording, RecordingSet\nfrom lhotse.supervision import SupervisionSegment, SupervisionSet\nfrom lhotse.utils import Pathlike, resumable_download, safe_extract\n\n\ndef text_normalize(line: str):\n \"\"\"\n Modified from https://github.com/wenet-e2e/wenet/blob/main/examples/multi_cn/s0/local/stcmds_data_prep.sh#L42\n paste -d' ' $data/utt.list $data/text.list |\\\n sed 's/\uff0c//g' |\\\n tr '[a-z]' '[A-Z]' |\\\n awk '{if (NF > 1) print $0;}' > $data/train/text\n \"\"\"\n line = line.replace(\"\uff0c\", \"\")\n line = line.upper()\n return line\n\n\ndef download_stcmds(\n target_dir: Pathlike = \".\",\n force_download: bool = False,\n base_url: str = \"http://www.openslr.org/resources\",\n) -> Path:\n \"\"\"\n Downdload and untar the dataset\n :param target_dir: Pathlike, the path of the dir to storage the dataset.\n :param force_download: Bool, if 
True, download the tars no matter if the tars exist.\n :param base_url: str, the url of the OpenSLR resources.\n :return: the path to downloaded and extracted directory with data.\n \"\"\"\n url = f\"{base_url}/38\"\n target_dir = Path(target_dir)\n target_dir.mkdir(parents=True, exist_ok=True)\n corpus_dir = target_dir / \"stcmds\"\n dataset_tar_name = \"ST-CMDS-20170001_1-OS.tar.gz\"\n for tar_name in [dataset_tar_name]:\n tar_path = target_dir / tar_name\n extracted_dir = corpus_dir / tar_name[:-7]\n completed_detector = extracted_dir / \".completed\"\n if completed_detector.is_file():\n logging.info(f\"Skipping download of because {completed_detector} exists.\")\n continue\n resumable_download(\n f\"{url}/{tar_name}\", filename=tar_path, force_download=force_download\n )\n shutil.rmtree(extracted_dir, ignore_errors=True)\n with tarfile.open(tar_path) as tar:\n safe_extract(tar, path=corpus_dir)\n completed_detector.touch()\n\n return corpus_dir\n\n\ndef prepare_stcmds(\n corpus_dir: Pathlike, output_dir: Optional[Pathlike] = None\n) -> Dict[str, Dict[str, Union[RecordingSet, SupervisionSet]]]:\n \"\"\"\n Returns the manifests which consist of the Recordings and Supervisions\n :param corpus_dir: Pathlike, the path of the data dir.\n :param output_dir: Pathlike, the path where to write the manifests.\n :return: a Dict whose key is the dataset part, and the value is Dicts with the keys 'recordings' and 'supervisions'.\n \"\"\"\n corpus_dir = Path(corpus_dir)\n assert corpus_dir.is_dir(), f\"No such directory: {corpus_dir}\"\n if output_dir is not None:\n output_dir = Path(output_dir)\n output_dir.mkdir(parents=True, exist_ok=True)\n\n path = corpus_dir / \"ST-CMDS-20170001_1-OS\"\n transcript_dict = {}\n for text_path in path.rglob(\"**/*.txt\"):\n idx = text_path.stem\n logging.info(f\"processing stcmds transcript {text_path}\")\n with open(text_path, \"r\", encoding=\"utf-8\") as f:\n for line in f:\n line = text_normalize(line)\n transcript_dict[idx] = line\n\n manifests = defaultdict(dict)\n dataset_parts = [\"train\"]\n for part in tqdm(\n dataset_parts,\n desc=\"process stcmds audio, it needs waste about 2169 seconds time.\",\n ):\n logging.info(f\"Processing stcmds {part}\")\n recordings = []\n supervisions = []\n for audio_path in path.rglob(\"**/*.wav\"):\n idx = audio_path.stem\n speaker = \"\".join(list(idx)[8:15])\n if idx not in transcript_dict:\n logging.warning(f\"No transcript: {idx}\")\n logging.warning(f\"{audio_path} has no transcript\")\n continue\n text = transcript_dict[idx]\n if not audio_path.is_file():\n logging.warning(f\"No such file: {audio_path}\")\n continue\n recording = Recording.from_file(audio_path)\n recordings.append(recording)\n segment = SupervisionSegment(\n id=idx,\n recording_id=idx,\n start=0.0,\n duration=recording.duration,\n channel=0,\n language=\"Chinese\",\n speaker=speaker,\n text=text.strip(),\n )\n supervisions.append(segment)\n\n recording_set = RecordingSet.from_recordings(recordings)\n supervision_set = SupervisionSet.from_segments(supervisions)\n\n recording_set, supervision_set = fix_manifests(recording_set, supervision_set)\n validate_recordings_and_supervisions(recording_set, supervision_set)\n\n if output_dir is not None:\n supervision_set.to_file(output_dir / f\"stcmds_supervisions_{part}.jsonl.gz\")\n recording_set.to_file(output_dir / f\"stcmds_recordings_{part}.jsonl.gz\")\n\n manifests[part] = {\"recordings\": recording_set, \"supervisions\": supervision_set}\n\n return manifests\n", "path": "lhotse/recipes/stcmds.py"}]}
| 1,962 | 140 |
gh_patches_debug_38811 | rasdani/github-patches | git_diff | xonsh__xonsh-4673 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
make promp.env.env_name venv --prompt aware (set in pyvenv.cfg)
## xonfig
<details>
```
> xonfig
<xonsh-code>:1:0 - xonfig
<xonsh-code>:1:0 + ![xonfig]
+------------------+----------------------+
| xonsh | 0.11.0 |
| Git SHA | adfa60ea |
| Commit Date | Feb 11 14:53:00 2022 |
| Python | 3.9.7 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | None |
| shell type | readline |
| history backend | json |
| pygments | 2.11.2 |
| on posix | True |
| on linux | True |
| distro | unknown |
| on wsl | False |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| on jupyter | False |
| jupyter kernel | None |
| xontrib 1 | coreutils |
| xontrib 2 | vox |
| xontrib 3 | voxapi |
| RC file 1 | /home/ob/.xonshrc |
+------------------+----------------------+
```
</details>
## Expected Behavior
When activating a venv via `vox activate` the name should be set to the value of `prompt` key in `pyvenv.cfg` if present (file and key) - see https://docs.python.org/3/library/venv.html.
## Current Behavior
The prompt is always set to name of venv directory independent of prompt settings.
## Steps to Reproduce
```shell
$ python -m venv --prompt "MY SPECIAL PROMPT" .venv
$ cat .venv/pyvenv.cfg | grep prompt
prompt = 'MY SPECIAL PROMPT'
vox activate ./.venv
```
new prompt looks like this:
`[17:58:10] (.venv) ob@ob1 ~/oss/xonsh dropthedot|✓`
but should look like this:
`[17:58:10] (MY SPECIAL PROMPT) ob@ob1 ~/oss/xonsh dropthedot|✓`
## Also add `--prompt` to `vox new`?
If this is done, maybe a good idea would be to also add `--prompt` to `vox new` to make this possible on creation (also in the interest of symmetry), but it seems like the common pattern for many xonsh users is to have virtualenvs in `~/.virtualenv` with the venv folder name being the same as the project, therefore automatically having a meaningful prompt, so this does not seem to be an urgently wanted feature for most.
I want to prepare a PR for the "passive" part of the functionality, but could also have a stab at adding `--prompt` to `vox new` at a later date. My main itch would be to respect `prompt` from already existing venvs first.
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `xonsh/prompt/env.py`
Content:
```
1 """Prompt formatter for virtualenv and others"""
2
3 import os
4
5 from xonsh.built_ins import XSH
6
7
8 def find_env_name():
9 """Finds the current environment name from $VIRTUAL_ENV or
10 $CONDA_DEFAULT_ENV if that is set.
11 """
12 env_path = XSH.env.get("VIRTUAL_ENV", "")
13 if env_path:
14 env_name = os.path.basename(env_path)
15 else:
16 env_name = XSH.env.get("CONDA_DEFAULT_ENV", "")
17 return env_name
18
19
20 def env_name():
21 """Returns the current env_name if it non-empty, surrounded by the
22 ``{env_prefix}`` and ``{env_postfix}`` fields.
23 """
24 env_name = find_env_name()
25 if XSH.env.get("VIRTUAL_ENV_DISABLE_PROMPT") or not env_name:
26 # env name prompt printing disabled, or no environment; just return
27 return
28
29 venv_prompt = XSH.env.get("VIRTUAL_ENV_PROMPT")
30 if venv_prompt is not None:
31 return venv_prompt
32 else:
33 pf = XSH.shell.prompt_formatter
34 pre = pf._get_field_value("env_prefix")
35 post = pf._get_field_value("env_postfix")
36 return pre + env_name + post
37
38
39 def vte_new_tab_cwd():
40 """This prints an escape sequence that tells VTE terminals the hostname
41 and pwd. This should not be needed in most cases, but sometimes is for
42 certain Linux terminals that do not read the PWD from the environment
43 on startup. Note that this does not return a string, it simply prints
44 and flushes the escape sequence to stdout directly.
45 """
46 env = XSH.env
47 t = "\033]7;file://{}{}\007"
48 s = t.format(env.get("HOSTNAME"), env.get("PWD"))
49 print(s, end="", flush=True)
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/xonsh/prompt/env.py b/xonsh/prompt/env.py
--- a/xonsh/prompt/env.py
+++ b/xonsh/prompt/env.py
@@ -1,42 +1,70 @@
"""Prompt formatter for virtualenv and others"""
-
-import os
+import functools
+import re
+from pathlib import Path
+from typing import Optional
from xonsh.built_ins import XSH
-def find_env_name():
- """Finds the current environment name from $VIRTUAL_ENV or
- $CONDA_DEFAULT_ENV if that is set.
+def find_env_name() -> Optional[str]:
+ """Find current environment name from available sources.
+
+ If ``$VIRTUAL_ENV`` is set, it is determined from the prompt setting in
+ ``<venv>/pyvenv.cfg`` or from the folder name of the environment.
+
+ Otherwise - if it is set - from ``$CONDA_DEFAULT_ENV``.
+ """
+ virtual_env = XSH.env.get("VIRTUAL_ENV")
+ if virtual_env:
+ name = _determine_env_name(virtual_env)
+ if name:
+ return name
+ conda_default_env = XSH.env.get("CONDA_DEFAULT_ENV")
+ if conda_default_env:
+ return conda_default_env
+
+
+def env_name() -> str:
+ """Build env_name based on different sources. Respect order of precedence.
+
+ Name from VIRTUAL_ENV_PROMPT will be used as-is.
+ Names from other sources are surrounded with ``{env_prefix}`` and
+ ``{env_postfix}`` fields.
"""
- env_path = XSH.env.get("VIRTUAL_ENV", "")
- if env_path:
- env_name = os.path.basename(env_path)
- else:
- env_name = XSH.env.get("CONDA_DEFAULT_ENV", "")
- return env_name
+ if XSH.env.get("VIRTUAL_ENV_DISABLE_PROMPT"):
+ return ""
+ virtual_env_prompt = XSH.env.get("VIRTUAL_ENV_PROMPT")
+ if virtual_env_prompt:
+ return virtual_env_prompt
+ found_envname = find_env_name()
+ return _surround_env_name(found_envname) if found_envname else ""
+
[email protected]_cache(maxsize=5)
+def _determine_env_name(virtual_env: str) -> str:
+ """Use prompt setting from pyvenv.cfg or basename of virtual_env.
-def env_name():
- """Returns the current env_name if it non-empty, surrounded by the
- ``{env_prefix}`` and ``{env_postfix}`` fields.
+ Tries to be resilient to subtle changes in whitespace and quoting in the
+ configuration file format as it adheres to no clear standard.
"""
- env_name = find_env_name()
- if XSH.env.get("VIRTUAL_ENV_DISABLE_PROMPT") or not env_name:
- # env name prompt printing disabled, or no environment; just return
- return
-
- venv_prompt = XSH.env.get("VIRTUAL_ENV_PROMPT")
- if venv_prompt is not None:
- return venv_prompt
- else:
- pf = XSH.shell.prompt_formatter
- pre = pf._get_field_value("env_prefix")
- post = pf._get_field_value("env_postfix")
- return pre + env_name + post
-
-
-def vte_new_tab_cwd():
+ venv_path = Path(virtual_env)
+ pyvenv_cfg = venv_path / "pyvenv.cfg"
+ if pyvenv_cfg.is_file():
+ match = re.search(r"prompt\s*=\s*(.*)", pyvenv_cfg.read_text())
+ if match:
+ return match.group(1).strip().lstrip("'\"").rstrip("'\"")
+ return venv_path.name
+
+
+def _surround_env_name(name: str) -> str:
+ pf = XSH.shell.prompt_formatter
+ pre = pf._get_field_value("env_prefix")
+ post = pf._get_field_value("env_postfix")
+ return f"{pre}{name}{post}"
+
+
+def vte_new_tab_cwd() -> None:
"""This prints an escape sequence that tells VTE terminals the hostname
and pwd. This should not be needed in most cases, but sometimes is for
certain Linux terminals that do not read the PWD from the environment
|
{"golden_diff": "diff --git a/xonsh/prompt/env.py b/xonsh/prompt/env.py\n--- a/xonsh/prompt/env.py\n+++ b/xonsh/prompt/env.py\n@@ -1,42 +1,70 @@\n \"\"\"Prompt formatter for virtualenv and others\"\"\"\n-\n-import os\n+import functools\n+import re\n+from pathlib import Path\n+from typing import Optional\n \n from xonsh.built_ins import XSH\n \n \n-def find_env_name():\n- \"\"\"Finds the current environment name from $VIRTUAL_ENV or\n- $CONDA_DEFAULT_ENV if that is set.\n+def find_env_name() -> Optional[str]:\n+ \"\"\"Find current environment name from available sources.\n+\n+ If ``$VIRTUAL_ENV`` is set, it is determined from the prompt setting in\n+ ``<venv>/pyvenv.cfg`` or from the folder name of the environment.\n+\n+ Otherwise - if it is set - from ``$CONDA_DEFAULT_ENV``.\n+ \"\"\"\n+ virtual_env = XSH.env.get(\"VIRTUAL_ENV\")\n+ if virtual_env:\n+ name = _determine_env_name(virtual_env)\n+ if name:\n+ return name\n+ conda_default_env = XSH.env.get(\"CONDA_DEFAULT_ENV\")\n+ if conda_default_env:\n+ return conda_default_env\n+\n+\n+def env_name() -> str:\n+ \"\"\"Build env_name based on different sources. Respect order of precedence.\n+\n+ Name from VIRTUAL_ENV_PROMPT will be used as-is.\n+ Names from other sources are surrounded with ``{env_prefix}`` and\n+ ``{env_postfix}`` fields.\n \"\"\"\n- env_path = XSH.env.get(\"VIRTUAL_ENV\", \"\")\n- if env_path:\n- env_name = os.path.basename(env_path)\n- else:\n- env_name = XSH.env.get(\"CONDA_DEFAULT_ENV\", \"\")\n- return env_name\n+ if XSH.env.get(\"VIRTUAL_ENV_DISABLE_PROMPT\"):\n+ return \"\"\n+ virtual_env_prompt = XSH.env.get(\"VIRTUAL_ENV_PROMPT\")\n+ if virtual_env_prompt:\n+ return virtual_env_prompt\n+ found_envname = find_env_name()\n+ return _surround_env_name(found_envname) if found_envname else \"\"\n+\n \[email protected]_cache(maxsize=5)\n+def _determine_env_name(virtual_env: str) -> str:\n+ \"\"\"Use prompt setting from pyvenv.cfg or basename of virtual_env.\n \n-def env_name():\n- \"\"\"Returns the current env_name if it non-empty, surrounded by the\n- ``{env_prefix}`` and ``{env_postfix}`` fields.\n+ Tries to be resilient to subtle changes in whitespace and quoting in the\n+ configuration file format as it adheres to no clear standard.\n \"\"\"\n- env_name = find_env_name()\n- if XSH.env.get(\"VIRTUAL_ENV_DISABLE_PROMPT\") or not env_name:\n- # env name prompt printing disabled, or no environment; just return\n- return\n-\n- venv_prompt = XSH.env.get(\"VIRTUAL_ENV_PROMPT\")\n- if venv_prompt is not None:\n- return venv_prompt\n- else:\n- pf = XSH.shell.prompt_formatter\n- pre = pf._get_field_value(\"env_prefix\")\n- post = pf._get_field_value(\"env_postfix\")\n- return pre + env_name + post\n-\n-\n-def vte_new_tab_cwd():\n+ venv_path = Path(virtual_env)\n+ pyvenv_cfg = venv_path / \"pyvenv.cfg\"\n+ if pyvenv_cfg.is_file():\n+ match = re.search(r\"prompt\\s*=\\s*(.*)\", pyvenv_cfg.read_text())\n+ if match:\n+ return match.group(1).strip().lstrip(\"'\\\"\").rstrip(\"'\\\"\")\n+ return venv_path.name\n+\n+\n+def _surround_env_name(name: str) -> str:\n+ pf = XSH.shell.prompt_formatter\n+ pre = pf._get_field_value(\"env_prefix\")\n+ post = pf._get_field_value(\"env_postfix\")\n+ return f\"{pre}{name}{post}\"\n+\n+\n+def vte_new_tab_cwd() -> None:\n \"\"\"This prints an escape sequence that tells VTE terminals the hostname\n and pwd. 
This should not be needed in most cases, but sometimes is for\n certain Linux terminals that do not read the PWD from the environment\n", "issue": "make promp.env.env_name venv --prompt aware (set in pyvenv.cfg)\n## xonfig\r\n\r\n<details>\r\n\r\n```\r\n> xonfig\r\n<xonsh-code>:1:0 - xonfig\r\n<xonsh-code>:1:0 + ![xonfig]\r\n+------------------+----------------------+\r\n| xonsh | 0.11.0 |\r\n| Git SHA | adfa60ea |\r\n| Commit Date | Feb 11 14:53:00 2022 |\r\n| Python | 3.9.7 |\r\n| PLY | 3.11 |\r\n| have readline | True |\r\n| prompt toolkit | None |\r\n| shell type | readline |\r\n| history backend | json |\r\n| pygments | 2.11.2 |\r\n| on posix | True |\r\n| on linux | True |\r\n| distro | unknown |\r\n| on wsl | False |\r\n| on darwin | False |\r\n| on windows | False |\r\n| on cygwin | False |\r\n| on msys2 | False |\r\n| is superuser | False |\r\n| default encoding | utf-8 |\r\n| xonsh encoding | utf-8 |\r\n| encoding errors | surrogateescape |\r\n| on jupyter | False |\r\n| jupyter kernel | None |\r\n| xontrib 1 | coreutils |\r\n| xontrib 2 | vox |\r\n| xontrib 3 | voxapi |\r\n| RC file 1 | /home/ob/.xonshrc |\r\n+------------------+----------------------+\r\n```\r\n\r\n</details>\r\n\r\n## Expected Behavior\r\n\r\nWhen activating a venv via `vox activate` the name should be set to the value of `prompt` key in `pyvenv.cfg` if present (file and key) - see https://docs.python.org/3/library/venv.html.\r\n\r\n## Current Behavior\r\n\r\nThe prompt is always set to name of venv directory independent of prompt settings.\r\n\r\n## Steps to Reproduce\r\n\r\n```shell\r\n$ python -m venv --prompt \"MY SPECIAL PROMPT\" .venv\r\n$ cat .venv/pyvenv.cfg | grep prompt\r\nprompt = 'MY SPECIAL PROMPT'\r\nvox activate ./.venv\r\n```\r\n\r\nnew prompt looks like this:\r\n`[17:58:10] (.venv) ob@ob1 ~/oss/xonsh dropthedot|\u2713`\r\n\r\nbut should look like this:\r\n`[17:58:10] (MY SPECIAL PROMPT) ob@ob1 ~/oss/xonsh dropthedot|\u2713`\r\n\r\n## Also add `--prompt` to `vox new`?\r\nIf this is done, maybe a good idea would be to also add `--prompt` to `vox new` to make this possible on creation (also in the interest of symmetry), but it seems like the common pattern for many xonsh users is to have virtualenvs in `~/.virtualenv` with the venv folder name being the same as the project, therefore automatically having a meaningful prompt, so this does not seem to be an urgently wanted feature for most.\r\n\r\nI want to prepare a PR for the \"passive\" part of the functionality, but could also have a stab at adding `--prompt` to `vox new` at a later date. 
My main itch would be to respect `prompt` from already existing venvs first.\r\n\r\n## For community\r\n\u2b07\ufe0f **Please click the \ud83d\udc4d reaction instead of leaving a `+1` or \ud83d\udc4d comment**\r\n\n", "before_files": [{"content": "\"\"\"Prompt formatter for virtualenv and others\"\"\"\n\nimport os\n\nfrom xonsh.built_ins import XSH\n\n\ndef find_env_name():\n \"\"\"Finds the current environment name from $VIRTUAL_ENV or\n $CONDA_DEFAULT_ENV if that is set.\n \"\"\"\n env_path = XSH.env.get(\"VIRTUAL_ENV\", \"\")\n if env_path:\n env_name = os.path.basename(env_path)\n else:\n env_name = XSH.env.get(\"CONDA_DEFAULT_ENV\", \"\")\n return env_name\n\n\ndef env_name():\n \"\"\"Returns the current env_name if it non-empty, surrounded by the\n ``{env_prefix}`` and ``{env_postfix}`` fields.\n \"\"\"\n env_name = find_env_name()\n if XSH.env.get(\"VIRTUAL_ENV_DISABLE_PROMPT\") or not env_name:\n # env name prompt printing disabled, or no environment; just return\n return\n\n venv_prompt = XSH.env.get(\"VIRTUAL_ENV_PROMPT\")\n if venv_prompt is not None:\n return venv_prompt\n else:\n pf = XSH.shell.prompt_formatter\n pre = pf._get_field_value(\"env_prefix\")\n post = pf._get_field_value(\"env_postfix\")\n return pre + env_name + post\n\n\ndef vte_new_tab_cwd():\n \"\"\"This prints an escape sequence that tells VTE terminals the hostname\n and pwd. This should not be needed in most cases, but sometimes is for\n certain Linux terminals that do not read the PWD from the environment\n on startup. Note that this does not return a string, it simply prints\n and flushes the escape sequence to stdout directly.\n \"\"\"\n env = XSH.env\n t = \"\\033]7;file://{}{}\\007\"\n s = t.format(env.get(\"HOSTNAME\"), env.get(\"PWD\"))\n print(s, end=\"\", flush=True)\n", "path": "xonsh/prompt/env.py"}], "after_files": [{"content": "\"\"\"Prompt formatter for virtualenv and others\"\"\"\nimport functools\nimport re\nfrom pathlib import Path\nfrom typing import Optional\n\nfrom xonsh.built_ins import XSH\n\n\ndef find_env_name() -> Optional[str]:\n \"\"\"Find current environment name from available sources.\n\n If ``$VIRTUAL_ENV`` is set, it is determined from the prompt setting in\n ``<venv>/pyvenv.cfg`` or from the folder name of the environment.\n\n Otherwise - if it is set - from ``$CONDA_DEFAULT_ENV``.\n \"\"\"\n virtual_env = XSH.env.get(\"VIRTUAL_ENV\")\n if virtual_env:\n name = _determine_env_name(virtual_env)\n if name:\n return name\n conda_default_env = XSH.env.get(\"CONDA_DEFAULT_ENV\")\n if conda_default_env:\n return conda_default_env\n\n\ndef env_name() -> str:\n \"\"\"Build env_name based on different sources. 
Respect order of precedence.\n\n Name from VIRTUAL_ENV_PROMPT will be used as-is.\n Names from other sources are surrounded with ``{env_prefix}`` and\n ``{env_postfix}`` fields.\n \"\"\"\n if XSH.env.get(\"VIRTUAL_ENV_DISABLE_PROMPT\"):\n return \"\"\n virtual_env_prompt = XSH.env.get(\"VIRTUAL_ENV_PROMPT\")\n if virtual_env_prompt:\n return virtual_env_prompt\n found_envname = find_env_name()\n return _surround_env_name(found_envname) if found_envname else \"\"\n\n\[email protected]_cache(maxsize=5)\ndef _determine_env_name(virtual_env: str) -> str:\n \"\"\"Use prompt setting from pyvenv.cfg or basename of virtual_env.\n\n Tries to be resilient to subtle changes in whitespace and quoting in the\n configuration file format as it adheres to no clear standard.\n \"\"\"\n venv_path = Path(virtual_env)\n pyvenv_cfg = venv_path / \"pyvenv.cfg\"\n if pyvenv_cfg.is_file():\n match = re.search(r\"prompt\\s*=\\s*(.*)\", pyvenv_cfg.read_text())\n if match:\n return match.group(1).strip().lstrip(\"'\\\"\").rstrip(\"'\\\"\")\n return venv_path.name\n\n\ndef _surround_env_name(name: str) -> str:\n pf = XSH.shell.prompt_formatter\n pre = pf._get_field_value(\"env_prefix\")\n post = pf._get_field_value(\"env_postfix\")\n return f\"{pre}{name}{post}\"\n\n\ndef vte_new_tab_cwd() -> None:\n \"\"\"This prints an escape sequence that tells VTE terminals the hostname\n and pwd. This should not be needed in most cases, but sometimes is for\n certain Linux terminals that do not read the PWD from the environment\n on startup. Note that this does not return a string, it simply prints\n and flushes the escape sequence to stdout directly.\n \"\"\"\n env = XSH.env\n t = \"\\033]7;file://{}{}\\007\"\n s = t.format(env.get(\"HOSTNAME\"), env.get(\"PWD\"))\n print(s, end=\"\", flush=True)\n", "path": "xonsh/prompt/env.py"}]}
| 1,547 | 977 |
gh_patches_debug_29805 | rasdani/github-patches | git_diff | easybuilders__easybuild-easyblocks-2485 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
--module-only does not respect altroot
In `CUDA-11.1.1-GCC-10.2.0.eb` we have `altroot = 'CUDAcore'`. When I build the module (`eb CUDA-11.1.1-GCC-10.2.0.eb -Tr --rebuild`) I get:
```
setenv("EBROOTCUDA", "/rds/bear-apps/devel/eb-sjb-up/EL8/EL8-has/software/CUDAcore/11.1.1")
```
but when I add ` --module-only` this is replaced with:
```
setenv("EBROOTCUDA", root)
```
and `root` is set to `/rds/bear-apps/devel/eb-sjb-up/EL8/EL8-has/software/CUDA/11.1.1-GCC-10.2.0`.
This then breaks any software that uses `get_software_root('CUDA')` to determine the location of the CUDA files. For example, TensorFlow which fails to find `cublas_api.h` and so aborts the install.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `easybuild/easyblocks/generic/bundle.py`
Content:
```
1 ##
2 # Copyright 2009-2021 Ghent University
3 #
4 # This file is part of EasyBuild,
5 # originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),
6 # with support of Ghent University (http://ugent.be/hpc),
7 # the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),
8 # Flemish Research Foundation (FWO) (http://www.fwo.be/en)
9 # and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).
10 #
11 # https://github.com/easybuilders/easybuild
12 #
13 # EasyBuild is free software: you can redistribute it and/or modify
14 # it under the terms of the GNU General Public License as published by
15 # the Free Software Foundation v2.
16 #
17 # EasyBuild is distributed in the hope that it will be useful,
18 # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 # GNU General Public License for more details.
21 #
22 # You should have received a copy of the GNU General Public License
23 # along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.
24 ##
25 """
26 EasyBuild support for installing a bundle of modules, implemented as a generic easyblock
27
28 @author: Stijn De Weirdt (Ghent University)
29 @author: Dries Verdegem (Ghent University)
30 @author: Kenneth Hoste (Ghent University)
31 @author: Pieter De Baets (Ghent University)
32 @author: Jens Timmerman (Ghent University)
33 """
34 import copy
35 import os
36
37 import easybuild.tools.environment as env
38 from easybuild.framework.easyblock import EasyBlock
39 from easybuild.framework.easyconfig import CUSTOM
40 from easybuild.framework.easyconfig.easyconfig import get_easyblock_class
41 from easybuild.tools.build_log import EasyBuildError, print_msg
42 from easybuild.tools.modules import get_software_root, get_software_version
43 from easybuild.tools.py2vs3 import string_type
44
45
46 class Bundle(EasyBlock):
47 """
48 Bundle of modules: only generate module files, nothing to build/install
49 """
50
51 @staticmethod
52 def extra_options(extra_vars=None):
53 """Easyconfig parameters specific to bundles."""
54 if extra_vars is None:
55 extra_vars = {}
56 extra_vars.update({
57 'altroot': [None, "Software name of dependency to use to define $EBROOT for this bundle", CUSTOM],
58 'altversion': [None, "Software name of dependency to use to define $EBVERSION for this bundle", CUSTOM],
59 'default_component_specs': [{}, "Default specs to use for every component", CUSTOM],
60 'components': [(), "List of components to install: tuples w/ name, version and easyblock to use", CUSTOM],
61 'default_easyblock': [None, "Default easyblock to use for components", CUSTOM],
62 })
63 return EasyBlock.extra_options(extra_vars)
64
65 def __init__(self, *args, **kwargs):
66 """Initialize easyblock."""
67 super(Bundle, self).__init__(*args, **kwargs)
68 self.altroot = None
69 self.altversion = None
70
71 # list of EasyConfig instances for components
72 self.comp_cfgs = []
73
74 # list of sources for bundle itself *must* be empty
75 if self.cfg['sources']:
76 raise EasyBuildError("List of sources for bundle itself must be empty, found %s", self.cfg['sources'])
77 if self.cfg['patches']:
78 raise EasyBuildError("List of patches for bundle itself must be empty, found %s", self.cfg['patches'])
79
80 # disable templating to avoid premature resolving of template values
81 self.cfg.enable_templating = False
82
83 # list of checksums for patches (must be included after checksums for sources)
84 checksums_patches = []
85
86 for comp in self.cfg['components']:
87 comp_name, comp_version, comp_specs = comp[0], comp[1], {}
88 if len(comp) == 3:
89 comp_specs = comp[2]
90
91 comp_cfg = self.cfg.copy()
92
93 comp_cfg['name'] = comp_name
94 comp_cfg['version'] = comp_version
95
96 easyblock = comp_specs.get('easyblock') or self.cfg['default_easyblock']
97 if easyblock is None:
98 raise EasyBuildError("No easyblock specified for component %s v%s", comp_cfg['name'],
99 comp_cfg['version'])
100 elif easyblock == 'Bundle':
101 raise EasyBuildError("The Bundle easyblock can not be used to install components in a bundle")
102
103 comp_cfg.easyblock = get_easyblock_class(easyblock, name=comp_cfg['name'])
104
105 # make sure that extra easyconfig parameters are known, so they can be set
106 extra_opts = comp_cfg.easyblock.extra_options()
107 comp_cfg.extend_params(copy.deepcopy(extra_opts))
108
109 comp_cfg.generate_template_values()
110
111 # do not inherit easyblock to use from parent (since that would result in an infinite loop in install_step)
112 comp_cfg['easyblock'] = None
113
114 # reset list of sources/source_urls/checksums
115 comp_cfg['sources'] = comp_cfg['source_urls'] = comp_cfg['checksums'] = comp_cfg['patches'] = []
116
117 for key in self.cfg['default_component_specs']:
118 comp_cfg[key] = self.cfg['default_component_specs'][key]
119
120 for key in comp_specs:
121 comp_cfg[key] = comp_specs[key]
122
123 # enable resolving of templates for component-specific EasyConfig instance
124 comp_cfg.enable_templating = True
125
126 # 'sources' is strictly required
127 if comp_cfg['sources']:
128 # If per-component source URLs are provided, attach them directly to the relevant sources
129 if comp_cfg['source_urls']:
130 for source in comp_cfg['sources']:
131 if isinstance(source, string_type):
132 self.cfg.update('sources', [{'filename': source, 'source_urls': comp_cfg['source_urls']}])
133 elif isinstance(source, dict):
134 # Update source_urls in the 'source' dict to use the one for the components
135 # (if it doesn't already exist)
136 if 'source_urls' not in source:
137 source['source_urls'] = comp_cfg['source_urls']
138 self.cfg.update('sources', [source])
139 else:
140 raise EasyBuildError("Source %s for component %s is neither a string nor a dict, cannot "
141 "process it.", source, comp_cfg['name'])
142 else:
143 # add component sources to list of sources
144 self.cfg.update('sources', comp_cfg['sources'])
145 else:
146 raise EasyBuildError("No sources specification for component %s v%s", comp_name, comp_version)
147
148 if comp_cfg['checksums']:
149 src_cnt = len(comp_cfg['sources'])
150
151 # add per-component checksums for sources to list of checksums
152 self.cfg.update('checksums', comp_cfg['checksums'][:src_cnt])
153
154 # add per-component checksums for patches to list of checksums for patches
155 checksums_patches.extend(comp_cfg['checksums'][src_cnt:])
156
157 if comp_cfg['patches']:
158 self.cfg.update('patches', comp_cfg['patches'])
159
160 self.comp_cfgs.append(comp_cfg)
161
162 self.cfg.update('checksums', checksums_patches)
163
164 self.cfg.enable_templating = True
165
166 def check_checksums(self):
167 """
168 Check whether a SHA256 checksum is available for all sources & patches (incl. extensions).
169
170 :return: list of strings describing checksum issues (missing checksums, wrong checksum type, etc.)
171 """
172 checksum_issues = super(Bundle, self).check_checksums()
173
174 for comp in self.comp_cfgs:
175 checksum_issues.extend(self.check_checksums_for(comp, sub="of component %s" % comp['name']))
176
177 return checksum_issues
178
179 def patch_step(self):
180 """Patch step must be a no-op for bundle, since there are no top-level sources/patches."""
181 pass
182
183 def configure_step(self):
184 """Collect altroot/altversion info."""
185 # pick up altroot/altversion, if they are defined
186 self.altroot = None
187 if self.cfg['altroot']:
188 self.altroot = get_software_root(self.cfg['altroot'])
189 self.altversion = None
190 if self.cfg['altversion']:
191 self.altversion = get_software_version(self.cfg['altversion'])
192
193 def build_step(self):
194 """Do nothing."""
195 pass
196
197 def install_step(self):
198 """Install components, if specified."""
199 comp_cnt = len(self.cfg['components'])
200 for idx, cfg in enumerate(self.comp_cfgs):
201
202 print_msg("installing bundle component %s v%s (%d/%d)..." %
203 (cfg['name'], cfg['version'], idx + 1, comp_cnt))
204 self.log.info("Installing component %s v%s using easyblock %s", cfg['name'], cfg['version'], cfg.easyblock)
205
206 comp = cfg.easyblock(cfg)
207
208 # correct build/install dirs
209 comp.builddir = self.builddir
210 comp.install_subdir, comp.installdir = self.install_subdir, self.installdir
211
212 # make sure we can build in parallel
213 comp.set_parallel()
214
215 # figure out correct start directory
216 comp.guess_start_dir()
217
218 # need to run fetch_patches to ensure per-component patches are applied
219 comp.fetch_patches()
220
221 comp.src = []
222
223 # find match entries in self.src for this component
224 for source in comp.cfg['sources']:
225 if isinstance(source, string_type):
226 comp_src_fn = source
227 elif isinstance(source, dict):
228 if 'filename' in source:
229 comp_src_fn = source['filename']
230 else:
231 raise EasyBuildError("Encountered source file specified as dict without 'filename': %s", source)
232 else:
233 raise EasyBuildError("Specification of unknown type for source file: %s", source)
234
235 found = False
236 for src in self.src:
237 if src['name'] == comp_src_fn:
238 self.log.info("Found spec for source %s for component %s: %s", comp_src_fn, comp.name, src)
239 comp.src.append(src)
240 found = True
241 break
242 if not found:
243 raise EasyBuildError("Failed to find spec for source %s for component %s", comp_src_fn, comp.name)
244
245 # location of first unpacked source is used to determine where to apply patch(es)
246 comp.src[-1]['finalpath'] = comp.cfg['start_dir']
247
248 # run relevant steps
249 for step_name in ['patch', 'configure', 'build', 'install']:
250 if step_name in cfg['skipsteps']:
251 comp.log.info("Skipping '%s' step for component %s v%s", step_name, cfg['name'], cfg['version'])
252 else:
253 comp.run_step(step_name, [lambda x: getattr(x, '%s_step' % step_name)])
254
255 # update environment to ensure stuff provided by former components can be picked up by latter components
256 # once the installation is finalised, this is handled by the generated module
257 reqs = comp.make_module_req_guess()
258 for envvar in reqs:
259 curr_val = os.getenv(envvar, '')
260 curr_paths = curr_val.split(os.pathsep)
261 for subdir in reqs[envvar]:
262 path = os.path.join(self.installdir, subdir)
263 if path not in curr_paths:
264 if curr_val:
265 new_val = '%s:%s' % (path, curr_val)
266 else:
267 new_val = path
268 env.setvar(envvar, new_val)
269
270 def make_module_extra(self, *args, **kwargs):
271 """Set extra stuff in module file, e.g. $EBROOT*, $EBVERSION*, etc."""
272 if 'altroot' not in kwargs:
273 kwargs['altroot'] = self.altroot
274 if 'altversion' not in kwargs:
275 kwargs['altversion'] = self.altversion
276 return super(Bundle, self).make_module_extra(*args, **kwargs)
277
278 def sanity_check_step(self, *args, **kwargs):
279 """
280 Nothing is being installed, so just being able to load the (fake) module is sufficient
281 """
282 if self.cfg['exts_list'] or self.cfg['sanity_check_paths'] or self.cfg['sanity_check_commands']:
283 super(Bundle, self).sanity_check_step(*args, **kwargs)
284 else:
285 self.log.info("Testing loading of module '%s' by means of sanity check" % self.full_mod_name)
286 fake_mod_data = self.load_fake_module(purge=True)
287 self.log.debug("Cleaning up after testing loading of module")
288 self.clean_up_fake_module(fake_mod_data)
289
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/easybuild/easyblocks/generic/bundle.py b/easybuild/easyblocks/generic/bundle.py
--- a/easybuild/easyblocks/generic/bundle.py
+++ b/easybuild/easyblocks/generic/bundle.py
@@ -180,15 +180,19 @@
"""Patch step must be a no-op for bundle, since there are no top-level sources/patches."""
pass
- def configure_step(self):
- """Collect altroot/altversion info."""
- # pick up altroot/altversion, if they are defined
- self.altroot = None
+ def get_altroot_and_altversion(self):
+ """Get altroot and altversion, if they are defined"""
+ altroot = None
if self.cfg['altroot']:
- self.altroot = get_software_root(self.cfg['altroot'])
- self.altversion = None
+ altroot = get_software_root(self.cfg['altroot'])
+ altversion = None
if self.cfg['altversion']:
- self.altversion = get_software_version(self.cfg['altversion'])
+ altversion = get_software_version(self.cfg['altversion'])
+ return altroot, altversion
+
+ def configure_step(self):
+ """Collect altroot/altversion info."""
+ self.altroot, self.altversion = self.get_altroot_and_altversion()
def build_step(self):
"""Do nothing."""
@@ -269,6 +273,9 @@
def make_module_extra(self, *args, **kwargs):
"""Set extra stuff in module file, e.g. $EBROOT*, $EBVERSION*, etc."""
+ if not self.altroot and not self.altversion:
+ # check for altroot and altversion (needed here for a module only build)
+ self.altroot, self.altversion = self.get_altroot_and_altversion()
if 'altroot' not in kwargs:
kwargs['altroot'] = self.altroot
if 'altversion' not in kwargs:
|
{"golden_diff": "diff --git a/easybuild/easyblocks/generic/bundle.py b/easybuild/easyblocks/generic/bundle.py\n--- a/easybuild/easyblocks/generic/bundle.py\n+++ b/easybuild/easyblocks/generic/bundle.py\n@@ -180,15 +180,19 @@\n \"\"\"Patch step must be a no-op for bundle, since there are no top-level sources/patches.\"\"\"\n pass\n \n- def configure_step(self):\n- \"\"\"Collect altroot/altversion info.\"\"\"\n- # pick up altroot/altversion, if they are defined\n- self.altroot = None\n+ def get_altroot_and_altversion(self):\n+ \"\"\"Get altroot and altversion, if they are defined\"\"\"\n+ altroot = None\n if self.cfg['altroot']:\n- self.altroot = get_software_root(self.cfg['altroot'])\n- self.altversion = None\n+ altroot = get_software_root(self.cfg['altroot'])\n+ altversion = None\n if self.cfg['altversion']:\n- self.altversion = get_software_version(self.cfg['altversion'])\n+ altversion = get_software_version(self.cfg['altversion'])\n+ return altroot, altversion\n+\n+ def configure_step(self):\n+ \"\"\"Collect altroot/altversion info.\"\"\"\n+ self.altroot, self.altversion = self.get_altroot_and_altversion()\n \n def build_step(self):\n \"\"\"Do nothing.\"\"\"\n@@ -269,6 +273,9 @@\n \n def make_module_extra(self, *args, **kwargs):\n \"\"\"Set extra stuff in module file, e.g. $EBROOT*, $EBVERSION*, etc.\"\"\"\n+ if not self.altroot and not self.altversion:\n+ # check for altroot and altversion (needed here for a module only build)\n+ self.altroot, self.altversion = self.get_altroot_and_altversion()\n if 'altroot' not in kwargs:\n kwargs['altroot'] = self.altroot\n if 'altversion' not in kwargs:\n", "issue": "--module-only does not respect altroot\nIn `CUDA-11.1.1-GCC-10.2.0.eb` we have `altroot = 'CUDAcore'`. When I build the module (`eb CUDA-11.1.1-GCC-10.2.0.eb -Tr --rebuild`) I get:\r\n```\r\nsetenv(\"EBROOTCUDA\", \"/rds/bear-apps/devel/eb-sjb-up/EL8/EL8-has/software/CUDAcore/11.1.1\")\r\n```\r\nbut when I add ` --module-only` this is replaced with:\r\n```\r\nsetenv(\"EBROOTCUDA\", root)\r\n```\r\nand `root` is set to `/rds/bear-apps/devel/eb-sjb-up/EL8/EL8-has/software/CUDA/11.1.1-GCC-10.2.0`.\r\n\r\nThis then breaks any software that uses `get_software_root('CUDA')` to determine the location of the CUDA files. For example, TensorFlow which fails to find `cublas_api.h` and so aborts the install.\n", "before_files": [{"content": "##\n# Copyright 2009-2021 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. 
If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nEasyBuild support for installing a bundle of modules, implemented as a generic easyblock\n\n@author: Stijn De Weirdt (Ghent University)\n@author: Dries Verdegem (Ghent University)\n@author: Kenneth Hoste (Ghent University)\n@author: Pieter De Baets (Ghent University)\n@author: Jens Timmerman (Ghent University)\n\"\"\"\nimport copy\nimport os\n\nimport easybuild.tools.environment as env\nfrom easybuild.framework.easyblock import EasyBlock\nfrom easybuild.framework.easyconfig import CUSTOM\nfrom easybuild.framework.easyconfig.easyconfig import get_easyblock_class\nfrom easybuild.tools.build_log import EasyBuildError, print_msg\nfrom easybuild.tools.modules import get_software_root, get_software_version\nfrom easybuild.tools.py2vs3 import string_type\n\n\nclass Bundle(EasyBlock):\n \"\"\"\n Bundle of modules: only generate module files, nothing to build/install\n \"\"\"\n\n @staticmethod\n def extra_options(extra_vars=None):\n \"\"\"Easyconfig parameters specific to bundles.\"\"\"\n if extra_vars is None:\n extra_vars = {}\n extra_vars.update({\n 'altroot': [None, \"Software name of dependency to use to define $EBROOT for this bundle\", CUSTOM],\n 'altversion': [None, \"Software name of dependency to use to define $EBVERSION for this bundle\", CUSTOM],\n 'default_component_specs': [{}, \"Default specs to use for every component\", CUSTOM],\n 'components': [(), \"List of components to install: tuples w/ name, version and easyblock to use\", CUSTOM],\n 'default_easyblock': [None, \"Default easyblock to use for components\", CUSTOM],\n })\n return EasyBlock.extra_options(extra_vars)\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initialize easyblock.\"\"\"\n super(Bundle, self).__init__(*args, **kwargs)\n self.altroot = None\n self.altversion = None\n\n # list of EasyConfig instances for components\n self.comp_cfgs = []\n\n # list of sources for bundle itself *must* be empty\n if self.cfg['sources']:\n raise EasyBuildError(\"List of sources for bundle itself must be empty, found %s\", self.cfg['sources'])\n if self.cfg['patches']:\n raise EasyBuildError(\"List of patches for bundle itself must be empty, found %s\", self.cfg['patches'])\n\n # disable templating to avoid premature resolving of template values\n self.cfg.enable_templating = False\n\n # list of checksums for patches (must be included after checksums for sources)\n checksums_patches = []\n\n for comp in self.cfg['components']:\n comp_name, comp_version, comp_specs = comp[0], comp[1], {}\n if len(comp) == 3:\n comp_specs = comp[2]\n\n comp_cfg = self.cfg.copy()\n\n comp_cfg['name'] = comp_name\n comp_cfg['version'] = comp_version\n\n easyblock = comp_specs.get('easyblock') or self.cfg['default_easyblock']\n if easyblock is None:\n raise EasyBuildError(\"No easyblock specified for component %s v%s\", comp_cfg['name'],\n comp_cfg['version'])\n elif easyblock == 'Bundle':\n raise EasyBuildError(\"The Bundle easyblock can not be used to install components in a bundle\")\n\n comp_cfg.easyblock = get_easyblock_class(easyblock, name=comp_cfg['name'])\n\n # make sure that extra easyconfig parameters are known, so they can be set\n extra_opts = comp_cfg.easyblock.extra_options()\n comp_cfg.extend_params(copy.deepcopy(extra_opts))\n\n comp_cfg.generate_template_values()\n\n # do not inherit easyblock to use from parent (since that would result in an infinite loop in install_step)\n comp_cfg['easyblock'] = None\n\n # reset list of sources/source_urls/checksums\n comp_cfg['sources'] = 
comp_cfg['source_urls'] = comp_cfg['checksums'] = comp_cfg['patches'] = []\n\n for key in self.cfg['default_component_specs']:\n comp_cfg[key] = self.cfg['default_component_specs'][key]\n\n for key in comp_specs:\n comp_cfg[key] = comp_specs[key]\n\n # enable resolving of templates for component-specific EasyConfig instance\n comp_cfg.enable_templating = True\n\n # 'sources' is strictly required\n if comp_cfg['sources']:\n # If per-component source URLs are provided, attach them directly to the relevant sources\n if comp_cfg['source_urls']:\n for source in comp_cfg['sources']:\n if isinstance(source, string_type):\n self.cfg.update('sources', [{'filename': source, 'source_urls': comp_cfg['source_urls']}])\n elif isinstance(source, dict):\n # Update source_urls in the 'source' dict to use the one for the components\n # (if it doesn't already exist)\n if 'source_urls' not in source:\n source['source_urls'] = comp_cfg['source_urls']\n self.cfg.update('sources', [source])\n else:\n raise EasyBuildError(\"Source %s for component %s is neither a string nor a dict, cannot \"\n \"process it.\", source, comp_cfg['name'])\n else:\n # add component sources to list of sources\n self.cfg.update('sources', comp_cfg['sources'])\n else:\n raise EasyBuildError(\"No sources specification for component %s v%s\", comp_name, comp_version)\n\n if comp_cfg['checksums']:\n src_cnt = len(comp_cfg['sources'])\n\n # add per-component checksums for sources to list of checksums\n self.cfg.update('checksums', comp_cfg['checksums'][:src_cnt])\n\n # add per-component checksums for patches to list of checksums for patches\n checksums_patches.extend(comp_cfg['checksums'][src_cnt:])\n\n if comp_cfg['patches']:\n self.cfg.update('patches', comp_cfg['patches'])\n\n self.comp_cfgs.append(comp_cfg)\n\n self.cfg.update('checksums', checksums_patches)\n\n self.cfg.enable_templating = True\n\n def check_checksums(self):\n \"\"\"\n Check whether a SHA256 checksum is available for all sources & patches (incl. 
extensions).\n\n :return: list of strings describing checksum issues (missing checksums, wrong checksum type, etc.)\n \"\"\"\n checksum_issues = super(Bundle, self).check_checksums()\n\n for comp in self.comp_cfgs:\n checksum_issues.extend(self.check_checksums_for(comp, sub=\"of component %s\" % comp['name']))\n\n return checksum_issues\n\n def patch_step(self):\n \"\"\"Patch step must be a no-op for bundle, since there are no top-level sources/patches.\"\"\"\n pass\n\n def configure_step(self):\n \"\"\"Collect altroot/altversion info.\"\"\"\n # pick up altroot/altversion, if they are defined\n self.altroot = None\n if self.cfg['altroot']:\n self.altroot = get_software_root(self.cfg['altroot'])\n self.altversion = None\n if self.cfg['altversion']:\n self.altversion = get_software_version(self.cfg['altversion'])\n\n def build_step(self):\n \"\"\"Do nothing.\"\"\"\n pass\n\n def install_step(self):\n \"\"\"Install components, if specified.\"\"\"\n comp_cnt = len(self.cfg['components'])\n for idx, cfg in enumerate(self.comp_cfgs):\n\n print_msg(\"installing bundle component %s v%s (%d/%d)...\" %\n (cfg['name'], cfg['version'], idx + 1, comp_cnt))\n self.log.info(\"Installing component %s v%s using easyblock %s\", cfg['name'], cfg['version'], cfg.easyblock)\n\n comp = cfg.easyblock(cfg)\n\n # correct build/install dirs\n comp.builddir = self.builddir\n comp.install_subdir, comp.installdir = self.install_subdir, self.installdir\n\n # make sure we can build in parallel\n comp.set_parallel()\n\n # figure out correct start directory\n comp.guess_start_dir()\n\n # need to run fetch_patches to ensure per-component patches are applied\n comp.fetch_patches()\n\n comp.src = []\n\n # find match entries in self.src for this component\n for source in comp.cfg['sources']:\n if isinstance(source, string_type):\n comp_src_fn = source\n elif isinstance(source, dict):\n if 'filename' in source:\n comp_src_fn = source['filename']\n else:\n raise EasyBuildError(\"Encountered source file specified as dict without 'filename': %s\", source)\n else:\n raise EasyBuildError(\"Specification of unknown type for source file: %s\", source)\n\n found = False\n for src in self.src:\n if src['name'] == comp_src_fn:\n self.log.info(\"Found spec for source %s for component %s: %s\", comp_src_fn, comp.name, src)\n comp.src.append(src)\n found = True\n break\n if not found:\n raise EasyBuildError(\"Failed to find spec for source %s for component %s\", comp_src_fn, comp.name)\n\n # location of first unpacked source is used to determine where to apply patch(es)\n comp.src[-1]['finalpath'] = comp.cfg['start_dir']\n\n # run relevant steps\n for step_name in ['patch', 'configure', 'build', 'install']:\n if step_name in cfg['skipsteps']:\n comp.log.info(\"Skipping '%s' step for component %s v%s\", step_name, cfg['name'], cfg['version'])\n else:\n comp.run_step(step_name, [lambda x: getattr(x, '%s_step' % step_name)])\n\n # update environment to ensure stuff provided by former components can be picked up by latter components\n # once the installation is finalised, this is handled by the generated module\n reqs = comp.make_module_req_guess()\n for envvar in reqs:\n curr_val = os.getenv(envvar, '')\n curr_paths = curr_val.split(os.pathsep)\n for subdir in reqs[envvar]:\n path = os.path.join(self.installdir, subdir)\n if path not in curr_paths:\n if curr_val:\n new_val = '%s:%s' % (path, curr_val)\n else:\n new_val = path\n env.setvar(envvar, new_val)\n\n def make_module_extra(self, *args, **kwargs):\n \"\"\"Set extra stuff in module 
file, e.g. $EBROOT*, $EBVERSION*, etc.\"\"\"\n if 'altroot' not in kwargs:\n kwargs['altroot'] = self.altroot\n if 'altversion' not in kwargs:\n kwargs['altversion'] = self.altversion\n return super(Bundle, self).make_module_extra(*args, **kwargs)\n\n def sanity_check_step(self, *args, **kwargs):\n \"\"\"\n Nothing is being installed, so just being able to load the (fake) module is sufficient\n \"\"\"\n if self.cfg['exts_list'] or self.cfg['sanity_check_paths'] or self.cfg['sanity_check_commands']:\n super(Bundle, self).sanity_check_step(*args, **kwargs)\n else:\n self.log.info(\"Testing loading of module '%s' by means of sanity check\" % self.full_mod_name)\n fake_mod_data = self.load_fake_module(purge=True)\n self.log.debug(\"Cleaning up after testing loading of module\")\n self.clean_up_fake_module(fake_mod_data)\n", "path": "easybuild/easyblocks/generic/bundle.py"}], "after_files": [{"content": "##\n# Copyright 2009-2021 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. 
If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nEasyBuild support for installing a bundle of modules, implemented as a generic easyblock\n\n@author: Stijn De Weirdt (Ghent University)\n@author: Dries Verdegem (Ghent University)\n@author: Kenneth Hoste (Ghent University)\n@author: Pieter De Baets (Ghent University)\n@author: Jens Timmerman (Ghent University)\n\"\"\"\nimport copy\nimport os\n\nimport easybuild.tools.environment as env\nfrom easybuild.framework.easyblock import EasyBlock\nfrom easybuild.framework.easyconfig import CUSTOM\nfrom easybuild.framework.easyconfig.easyconfig import get_easyblock_class\nfrom easybuild.tools.build_log import EasyBuildError, print_msg\nfrom easybuild.tools.modules import get_software_root, get_software_version\nfrom easybuild.tools.py2vs3 import string_type\n\n\nclass Bundle(EasyBlock):\n \"\"\"\n Bundle of modules: only generate module files, nothing to build/install\n \"\"\"\n\n @staticmethod\n def extra_options(extra_vars=None):\n \"\"\"Easyconfig parameters specific to bundles.\"\"\"\n if extra_vars is None:\n extra_vars = {}\n extra_vars.update({\n 'altroot': [None, \"Software name of dependency to use to define $EBROOT for this bundle\", CUSTOM],\n 'altversion': [None, \"Software name of dependency to use to define $EBVERSION for this bundle\", CUSTOM],\n 'default_component_specs': [{}, \"Default specs to use for every component\", CUSTOM],\n 'components': [(), \"List of components to install: tuples w/ name, version and easyblock to use\", CUSTOM],\n 'default_easyblock': [None, \"Default easyblock to use for components\", CUSTOM],\n })\n return EasyBlock.extra_options(extra_vars)\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initialize easyblock.\"\"\"\n super(Bundle, self).__init__(*args, **kwargs)\n self.altroot = None\n self.altversion = None\n\n # list of EasyConfig instances for components\n self.comp_cfgs = []\n\n # list of sources for bundle itself *must* be empty\n if self.cfg['sources']:\n raise EasyBuildError(\"List of sources for bundle itself must be empty, found %s\", self.cfg['sources'])\n if self.cfg['patches']:\n raise EasyBuildError(\"List of patches for bundle itself must be empty, found %s\", self.cfg['patches'])\n\n # disable templating to avoid premature resolving of template values\n self.cfg.enable_templating = False\n\n # list of checksums for patches (must be included after checksums for sources)\n checksums_patches = []\n\n for comp in self.cfg['components']:\n comp_name, comp_version, comp_specs = comp[0], comp[1], {}\n if len(comp) == 3:\n comp_specs = comp[2]\n\n comp_cfg = self.cfg.copy()\n\n comp_cfg['name'] = comp_name\n comp_cfg['version'] = comp_version\n\n easyblock = comp_specs.get('easyblock') or self.cfg['default_easyblock']\n if easyblock is None:\n raise EasyBuildError(\"No easyblock specified for component %s v%s\", comp_cfg['name'],\n comp_cfg['version'])\n elif easyblock == 'Bundle':\n raise EasyBuildError(\"The Bundle easyblock can not be used to install components in a bundle\")\n\n comp_cfg.easyblock = get_easyblock_class(easyblock, name=comp_cfg['name'])\n\n # make sure that extra easyconfig parameters are known, so they can be set\n extra_opts = comp_cfg.easyblock.extra_options()\n comp_cfg.extend_params(copy.deepcopy(extra_opts))\n\n comp_cfg.generate_template_values()\n\n # do not inherit easyblock to use from parent (since that would result in an infinite loop in install_step)\n comp_cfg['easyblock'] = None\n\n # reset list of sources/source_urls/checksums\n comp_cfg['sources'] = 
comp_cfg['source_urls'] = comp_cfg['checksums'] = comp_cfg['patches'] = []\n\n for key in self.cfg['default_component_specs']:\n comp_cfg[key] = self.cfg['default_component_specs'][key]\n\n for key in comp_specs:\n comp_cfg[key] = comp_specs[key]\n\n # enable resolving of templates for component-specific EasyConfig instance\n comp_cfg.enable_templating = True\n\n # 'sources' is strictly required\n if comp_cfg['sources']:\n # If per-component source URLs are provided, attach them directly to the relevant sources\n if comp_cfg['source_urls']:\n for source in comp_cfg['sources']:\n if isinstance(source, string_type):\n self.cfg.update('sources', [{'filename': source, 'source_urls': comp_cfg['source_urls']}])\n elif isinstance(source, dict):\n # Update source_urls in the 'source' dict to use the one for the components\n # (if it doesn't already exist)\n if 'source_urls' not in source:\n source['source_urls'] = comp_cfg['source_urls']\n self.cfg.update('sources', [source])\n else:\n raise EasyBuildError(\"Source %s for component %s is neither a string nor a dict, cannot \"\n \"process it.\", source, comp_cfg['name'])\n else:\n # add component sources to list of sources\n self.cfg.update('sources', comp_cfg['sources'])\n else:\n raise EasyBuildError(\"No sources specification for component %s v%s\", comp_name, comp_version)\n\n if comp_cfg['checksums']:\n src_cnt = len(comp_cfg['sources'])\n\n # add per-component checksums for sources to list of checksums\n self.cfg.update('checksums', comp_cfg['checksums'][:src_cnt])\n\n # add per-component checksums for patches to list of checksums for patches\n checksums_patches.extend(comp_cfg['checksums'][src_cnt:])\n\n if comp_cfg['patches']:\n self.cfg.update('patches', comp_cfg['patches'])\n\n self.comp_cfgs.append(comp_cfg)\n\n self.cfg.update('checksums', checksums_patches)\n\n self.cfg.enable_templating = True\n\n def check_checksums(self):\n \"\"\"\n Check whether a SHA256 checksum is available for all sources & patches (incl. 
extensions).\n\n :return: list of strings describing checksum issues (missing checksums, wrong checksum type, etc.)\n \"\"\"\n checksum_issues = super(Bundle, self).check_checksums()\n\n for comp in self.comp_cfgs:\n checksum_issues.extend(self.check_checksums_for(comp, sub=\"of component %s\" % comp['name']))\n\n return checksum_issues\n\n def patch_step(self):\n \"\"\"Patch step must be a no-op for bundle, since there are no top-level sources/patches.\"\"\"\n pass\n\n def get_altroot_and_altversion(self):\n \"\"\"Get altroot and altversion, if they are defined\"\"\"\n altroot = None\n if self.cfg['altroot']:\n altroot = get_software_root(self.cfg['altroot'])\n altversion = None\n if self.cfg['altversion']:\n altversion = get_software_version(self.cfg['altversion'])\n return altroot, altversion\n\n def configure_step(self):\n \"\"\"Collect altroot/altversion info.\"\"\"\n self.altroot, self.altversion = self.get_altroot_and_altversion()\n\n def build_step(self):\n \"\"\"Do nothing.\"\"\"\n pass\n\n def install_step(self):\n \"\"\"Install components, if specified.\"\"\"\n comp_cnt = len(self.cfg['components'])\n for idx, cfg in enumerate(self.comp_cfgs):\n\n print_msg(\"installing bundle component %s v%s (%d/%d)...\" %\n (cfg['name'], cfg['version'], idx + 1, comp_cnt))\n self.log.info(\"Installing component %s v%s using easyblock %s\", cfg['name'], cfg['version'], cfg.easyblock)\n\n comp = cfg.easyblock(cfg)\n\n # correct build/install dirs\n comp.builddir = self.builddir\n comp.install_subdir, comp.installdir = self.install_subdir, self.installdir\n\n # make sure we can build in parallel\n comp.set_parallel()\n\n # figure out correct start directory\n comp.guess_start_dir()\n\n # need to run fetch_patches to ensure per-component patches are applied\n comp.fetch_patches()\n\n comp.src = []\n\n # find match entries in self.src for this component\n for source in comp.cfg['sources']:\n if isinstance(source, string_type):\n comp_src_fn = source\n elif isinstance(source, dict):\n if 'filename' in source:\n comp_src_fn = source['filename']\n else:\n raise EasyBuildError(\"Encountered source file specified as dict without 'filename': %s\", source)\n else:\n raise EasyBuildError(\"Specification of unknown type for source file: %s\", source)\n\n found = False\n for src in self.src:\n if src['name'] == comp_src_fn:\n self.log.info(\"Found spec for source %s for component %s: %s\", comp_src_fn, comp.name, src)\n comp.src.append(src)\n found = True\n break\n if not found:\n raise EasyBuildError(\"Failed to find spec for source %s for component %s\", comp_src_fn, comp.name)\n\n # location of first unpacked source is used to determine where to apply patch(es)\n comp.src[-1]['finalpath'] = comp.cfg['start_dir']\n\n # run relevant steps\n for step_name in ['patch', 'configure', 'build', 'install']:\n if step_name in cfg['skipsteps']:\n comp.log.info(\"Skipping '%s' step for component %s v%s\", step_name, cfg['name'], cfg['version'])\n else:\n comp.run_step(step_name, [lambda x: getattr(x, '%s_step' % step_name)])\n\n # update environment to ensure stuff provided by former components can be picked up by latter components\n # once the installation is finalised, this is handled by the generated module\n reqs = comp.make_module_req_guess()\n for envvar in reqs:\n curr_val = os.getenv(envvar, '')\n curr_paths = curr_val.split(os.pathsep)\n for subdir in reqs[envvar]:\n path = os.path.join(self.installdir, subdir)\n if path not in curr_paths:\n if curr_val:\n new_val = '%s:%s' % (path, curr_val)\n else:\n 
new_val = path\n env.setvar(envvar, new_val)\n\n def make_module_extra(self, *args, **kwargs):\n \"\"\"Set extra stuff in module file, e.g. $EBROOT*, $EBVERSION*, etc.\"\"\"\n if not self.altroot and not self.altversion:\n # check for altroot and altversion (needed here for a module only build)\n self.altroot, self.altversion = self.get_altroot_and_altversion()\n if 'altroot' not in kwargs:\n kwargs['altroot'] = self.altroot\n if 'altversion' not in kwargs:\n kwargs['altversion'] = self.altversion\n return super(Bundle, self).make_module_extra(*args, **kwargs)\n\n def sanity_check_step(self, *args, **kwargs):\n \"\"\"\n Nothing is being installed, so just being able to load the (fake) module is sufficient\n \"\"\"\n if self.cfg['exts_list'] or self.cfg['sanity_check_paths'] or self.cfg['sanity_check_commands']:\n super(Bundle, self).sanity_check_step(*args, **kwargs)\n else:\n self.log.info(\"Testing loading of module '%s' by means of sanity check\" % self.full_mod_name)\n fake_mod_data = self.load_fake_module(purge=True)\n self.log.debug(\"Cleaning up after testing loading of module\")\n self.clean_up_fake_module(fake_mod_data)\n", "path": "easybuild/easyblocks/generic/bundle.py"}]}
| 4,030 | 454 |
gh_patches_debug_50236
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-577
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 1.4.8
User facing issues on the docket:
+ non-zip_safe pex extraction has a race on renaming #557
+ Execute pex archive at runtime with -m parameter #547
+ OSX's python 2.7.10 interpreter reports a bad local platform, bdists fail to resolve #523
+ `pex.resolver.resolve` does the wrong thing when given an `interpreter` and no `platform` #511
+ [Errno 13] Permission denied when executing with bdist_pex #570
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = '1.4.7'
5
6 # Versions 34.0.0 through 35.0.2 (last pre-36.0.0) de-vendored dependencies which causes problems
7 # for pex code so we exclude that range.
8 SETUPTOOLS_REQUIREMENT = 'setuptools>=20.3,<41,!=34.*,!=35.*'
9
10 WHEEL_REQUIREMENT = 'wheel>=0.26.0,<0.32'
11
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,7 +1,7 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = '1.4.7'
+__version__ = '1.4.8'
# Versions 34.0.0 through 35.0.2 (last pre-36.0.0) de-vendored dependencies which causes problems
# for pex code so we exclude that range.
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,7 +1,7 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = '1.4.7'\n+__version__ = '1.4.8'\n \n # Versions 34.0.0 through 35.0.2 (last pre-36.0.0) de-vendored dependencies which causes problems\n # for pex code so we exclude that range.\n", "issue": "Release 1.4.8\nUser facing issues on the docket:\r\n+ non-zip_safe pex extraction has a race on renaming #557\r\n+ Execute pex archive at runtime with -m parameter #547\r\n+ OSX's python 2.7.10 interpreter reports a bad local platform, bdists fail to resolve #523 \r\n+ `pex.resolver.resolve` does the wrong thing when given an `interpreter` and no `platform` #511\r\n+ [Errno 13] Permission denied when executing with bdist_pex #570\r\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '1.4.7'\n\n# Versions 34.0.0 through 35.0.2 (last pre-36.0.0) de-vendored dependencies which causes problems\n# for pex code so we exclude that range.\nSETUPTOOLS_REQUIREMENT = 'setuptools>=20.3,<41,!=34.*,!=35.*'\n\nWHEEL_REQUIREMENT = 'wheel>=0.26.0,<0.32'\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '1.4.8'\n\n# Versions 34.0.0 through 35.0.2 (last pre-36.0.0) de-vendored dependencies which causes problems\n# for pex code so we exclude that range.\nSETUPTOOLS_REQUIREMENT = 'setuptools>=20.3,<41,!=34.*,!=35.*'\n\nWHEEL_REQUIREMENT = 'wheel>=0.26.0,<0.32'\n", "path": "pex/version.py"}]}
| 542 | 143 |
gh_patches_debug_35556
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-5279
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.nbcnews: not live
### Checklist
- [X] This is a plugin issue and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest build from the master branch
### Description
Every time it starts the same video, but on the site it is live. Something was changed on the site a few days ago.
### Debug log
```text
[cli][debug] OS: Linux-6.1.0-7-amd64-x86_64-with-glibc2.36
[cli][debug] Python: 3.11.2
[cli][debug] Streamlink: 5.3.1+84.g7978e4ab
[cli][debug] Dependencies:
[cli][debug] certifi: 2022.9.24
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.2
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.16.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.28.1
[cli][debug] urllib3: 1.26.12
[cli][debug] websocket-client: 1.2.3
[cli][debug] esprima: 4.0.1
[cli][debug] importlib-metadata: 4.12.0
[cli][debug] Arguments:
[cli][debug] url=https://www.nbcnews.com/now/
[cli][debug] --loglevel=debug
[cli][debug] --player=mpv
[cli][debug] --default-stream=['best']
[cli][info] Found matching plugin nbcnews for URL https://www.nbcnews.com/now/
[plugins.nbcnews][debug] API ID: 2007524
[utils.l10n][debug] Language code: ru_RU
[cli][info] Available streams: 144p_alt (worst), 144p, 216p_alt, 216p, 288p_alt, 288p, 360p_alt, 360p, 504p_alt, 504p, 576p_alt, 576p, 720p_alt, 720p, 1080p_alt, 1080p (best)
[cli][info] Opening stream: 1080p (hls)
[cli][info] Starting player: mpv
[stream.hls][debug] Reloading playlist
[cli][debug] Pre-buffering 8192 bytes
[stream.hls][debug] First Sequence: 1786423; Last Sequence: 1786722
[stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 1786423; End Sequence: 1786722
[stream.hls][debug] Adding segment 1786423 to queue
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/nbcnews.py`
Content:
```
1 """
2 $description 24-hour world, US and local news channel, based in the United States of America.
3 $url nbcnews.com
4 $type live
5 """
6
7 import logging
8 import re
9
10 from streamlink.plugin import Plugin, pluginmatcher
11 from streamlink.plugin.api import validate
12 from streamlink.stream.hls import HLSStream
13
14
15 log = logging.getLogger(__name__)
16
17
18 @pluginmatcher(re.compile(
19 r"https?://(?:www\.)?nbcnews\.com/now",
20 ))
21 class NBCNews(Plugin):
22 URL_API = "https://api-leap.nbcsports.com/feeds/assets/{}?application=NBCNews&format=nbc-player&platform=desktop"
23 URL_TOKEN = "https://tokens.playmakerservices.com/"
24
25 title = "NBC News Now"
26
27 def _get_streams(self):
28 self.id = self.session.http.get(
29 self.url,
30 schema=validate.Schema(
31 validate.parse_html(),
32 validate.xml_xpath_string(".//script[@type='application/ld+json'][1]/text()"),
33 validate.none_or_all(
34 validate.parse_json(),
35 {"embedUrl": validate.url()},
36 validate.get("embedUrl"),
37 validate.transform(lambda embed_url: embed_url.split("/")[-1]),
38 ),
39 ),
40 )
41 if self.id is None:
42 return
43 log.debug(f"API ID: {self.id}")
44
45 stream = self.session.http.get(
46 self.URL_API.format(self.id),
47 schema=validate.Schema(
48 validate.parse_json(),
49 {
50 "videoSources": [{
51 "cdnSources": {
52 "primary": [{
53 "sourceUrl": validate.url(path=validate.endswith(".m3u8")),
54 }],
55 },
56 }],
57 },
58 validate.get(("videoSources", 0, "cdnSources", "primary", 0, "sourceUrl")),
59 ),
60 )
61
62 url = self.session.http.post(
63 self.URL_TOKEN,
64 json={
65 "requestorId": "nbcnews",
66 "pid": self.id,
67 "application": "NBCSports",
68 "version": "v1",
69 "platform": "desktop",
70 "token": "",
71 "resourceId": "",
72 "inPath": "false",
73 "authenticationType": "unauth",
74 "cdn": "akamai",
75 "url": stream,
76 },
77 schema=validate.Schema(
78 validate.parse_json(),
79 {
80 "akamai": [{
81 "tokenizedUrl": validate.url(),
82 }],
83 },
84 validate.get(("akamai", 0, "tokenizedUrl")),
85 ),
86 )
87 return HLSStream.parse_variant_playlist(self.session, url)
88
89
90 __plugin__ = NBCNews
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/streamlink/plugins/nbcnews.py b/src/streamlink/plugins/nbcnews.py
deleted file mode 100644
--- a/src/streamlink/plugins/nbcnews.py
+++ /dev/null
@@ -1,90 +0,0 @@
-"""
-$description 24-hour world, US and local news channel, based in the United States of America.
-$url nbcnews.com
-$type live
-"""
-
-import logging
-import re
-
-from streamlink.plugin import Plugin, pluginmatcher
-from streamlink.plugin.api import validate
-from streamlink.stream.hls import HLSStream
-
-
-log = logging.getLogger(__name__)
-
-
-@pluginmatcher(re.compile(
- r"https?://(?:www\.)?nbcnews\.com/now",
-))
-class NBCNews(Plugin):
- URL_API = "https://api-leap.nbcsports.com/feeds/assets/{}?application=NBCNews&format=nbc-player&platform=desktop"
- URL_TOKEN = "https://tokens.playmakerservices.com/"
-
- title = "NBC News Now"
-
- def _get_streams(self):
- self.id = self.session.http.get(
- self.url,
- schema=validate.Schema(
- validate.parse_html(),
- validate.xml_xpath_string(".//script[@type='application/ld+json'][1]/text()"),
- validate.none_or_all(
- validate.parse_json(),
- {"embedUrl": validate.url()},
- validate.get("embedUrl"),
- validate.transform(lambda embed_url: embed_url.split("/")[-1]),
- ),
- ),
- )
- if self.id is None:
- return
- log.debug(f"API ID: {self.id}")
-
- stream = self.session.http.get(
- self.URL_API.format(self.id),
- schema=validate.Schema(
- validate.parse_json(),
- {
- "videoSources": [{
- "cdnSources": {
- "primary": [{
- "sourceUrl": validate.url(path=validate.endswith(".m3u8")),
- }],
- },
- }],
- },
- validate.get(("videoSources", 0, "cdnSources", "primary", 0, "sourceUrl")),
- ),
- )
-
- url = self.session.http.post(
- self.URL_TOKEN,
- json={
- "requestorId": "nbcnews",
- "pid": self.id,
- "application": "NBCSports",
- "version": "v1",
- "platform": "desktop",
- "token": "",
- "resourceId": "",
- "inPath": "false",
- "authenticationType": "unauth",
- "cdn": "akamai",
- "url": stream,
- },
- schema=validate.Schema(
- validate.parse_json(),
- {
- "akamai": [{
- "tokenizedUrl": validate.url(),
- }],
- },
- validate.get(("akamai", 0, "tokenizedUrl")),
- ),
- )
- return HLSStream.parse_variant_playlist(self.session, url)
-
-
-__plugin__ = NBCNews
|
{"golden_diff": "diff --git a/src/streamlink/plugins/nbcnews.py b/src/streamlink/plugins/nbcnews.py\ndeleted file mode 100644\n--- a/src/streamlink/plugins/nbcnews.py\n+++ /dev/null\n@@ -1,90 +0,0 @@\n-\"\"\"\n-$description 24-hour world, US and local news channel, based in the United States of America.\n-$url nbcnews.com\n-$type live\n-\"\"\"\n-\n-import logging\n-import re\n-\n-from streamlink.plugin import Plugin, pluginmatcher\n-from streamlink.plugin.api import validate\n-from streamlink.stream.hls import HLSStream\n-\n-\n-log = logging.getLogger(__name__)\n-\n-\n-@pluginmatcher(re.compile(\n- r\"https?://(?:www\\.)?nbcnews\\.com/now\",\n-))\n-class NBCNews(Plugin):\n- URL_API = \"https://api-leap.nbcsports.com/feeds/assets/{}?application=NBCNews&format=nbc-player&platform=desktop\"\n- URL_TOKEN = \"https://tokens.playmakerservices.com/\"\n-\n- title = \"NBC News Now\"\n-\n- def _get_streams(self):\n- self.id = self.session.http.get(\n- self.url,\n- schema=validate.Schema(\n- validate.parse_html(),\n- validate.xml_xpath_string(\".//script[@type='application/ld+json'][1]/text()\"),\n- validate.none_or_all(\n- validate.parse_json(),\n- {\"embedUrl\": validate.url()},\n- validate.get(\"embedUrl\"),\n- validate.transform(lambda embed_url: embed_url.split(\"/\")[-1]),\n- ),\n- ),\n- )\n- if self.id is None:\n- return\n- log.debug(f\"API ID: {self.id}\")\n-\n- stream = self.session.http.get(\n- self.URL_API.format(self.id),\n- schema=validate.Schema(\n- validate.parse_json(),\n- {\n- \"videoSources\": [{\n- \"cdnSources\": {\n- \"primary\": [{\n- \"sourceUrl\": validate.url(path=validate.endswith(\".m3u8\")),\n- }],\n- },\n- }],\n- },\n- validate.get((\"videoSources\", 0, \"cdnSources\", \"primary\", 0, \"sourceUrl\")),\n- ),\n- )\n-\n- url = self.session.http.post(\n- self.URL_TOKEN,\n- json={\n- \"requestorId\": \"nbcnews\",\n- \"pid\": self.id,\n- \"application\": \"NBCSports\",\n- \"version\": \"v1\",\n- \"platform\": \"desktop\",\n- \"token\": \"\",\n- \"resourceId\": \"\",\n- \"inPath\": \"false\",\n- \"authenticationType\": \"unauth\",\n- \"cdn\": \"akamai\",\n- \"url\": stream,\n- },\n- schema=validate.Schema(\n- validate.parse_json(),\n- {\n- \"akamai\": [{\n- \"tokenizedUrl\": validate.url(),\n- }],\n- },\n- validate.get((\"akamai\", 0, \"tokenizedUrl\")),\n- ),\n- )\n- return HLSStream.parse_variant_playlist(self.session, url)\n-\n-\n-__plugin__ = NBCNews\n", "issue": "plugins.nbcnews: not live\n### Checklist\n\n- [X] This is a plugin issue and not a different kind of issue\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nLatest build from the master branch\n\n### Description\n\nEvery time it start the same video but on the site it is live. 
Something was changed on the site few days ago.\n\n### Debug log\n\n```text\n[cli][debug] OS: Linux-6.1.0-7-amd64-x86_64-with-glibc2.36\r\n[cli][debug] Python: 3.11.2\r\n[cli][debug] Streamlink: 5.3.1+84.g7978e4ab\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2022.9.24\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.2\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.16.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.28.1\r\n[cli][debug] urllib3: 1.26.12\r\n[cli][debug] websocket-client: 1.2.3\r\n[cli][debug] esprima: 4.0.1\r\n[cli][debug] importlib-metadata: 4.12.0\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://www.nbcnews.com/now/\r\n[cli][debug] --loglevel=debug\r\n[cli][debug] --player=mpv\r\n[cli][debug] --default-stream=['best']\r\n[cli][info] Found matching plugin nbcnews for URL https://www.nbcnews.com/now/\r\n[plugins.nbcnews][debug] API ID: 2007524\r\n[utils.l10n][debug] Language code: ru_RU\r\n[cli][info] Available streams: 144p_alt (worst), 144p, 216p_alt, 216p, 288p_alt, 288p, 360p_alt, 360p, 504p_alt, 504p, 576p_alt, 576p, 720p_alt, 720p, 1080p_alt, 1080p (best)\r\n[cli][info] Opening stream: 1080p (hls)\r\n[cli][info] Starting player: mpv\r\n[stream.hls][debug] Reloading playlist\r\n[cli][debug] Pre-buffering 8192 bytes\r\n[stream.hls][debug] First Sequence: 1786423; Last Sequence: 1786722\r\n[stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 1786423; End Sequence: 1786722\r\n[stream.hls][debug] Adding segment 1786423 to queue\n```\n\n", "before_files": [{"content": "\"\"\"\n$description 24-hour world, US and local news channel, based in the United States of America.\n$url nbcnews.com\n$type live\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?nbcnews\\.com/now\",\n))\nclass NBCNews(Plugin):\n URL_API = \"https://api-leap.nbcsports.com/feeds/assets/{}?application=NBCNews&format=nbc-player&platform=desktop\"\n URL_TOKEN = \"https://tokens.playmakerservices.com/\"\n\n title = \"NBC News Now\"\n\n def _get_streams(self):\n self.id = self.session.http.get(\n self.url,\n schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//script[@type='application/ld+json'][1]/text()\"),\n validate.none_or_all(\n validate.parse_json(),\n {\"embedUrl\": validate.url()},\n validate.get(\"embedUrl\"),\n validate.transform(lambda embed_url: embed_url.split(\"/\")[-1]),\n ),\n ),\n )\n if self.id is None:\n return\n log.debug(f\"API ID: {self.id}\")\n\n stream = self.session.http.get(\n self.URL_API.format(self.id),\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"videoSources\": [{\n \"cdnSources\": {\n \"primary\": [{\n \"sourceUrl\": validate.url(path=validate.endswith(\".m3u8\")),\n }],\n },\n }],\n },\n validate.get((\"videoSources\", 0, \"cdnSources\", \"primary\", 0, \"sourceUrl\")),\n ),\n )\n\n url = self.session.http.post(\n self.URL_TOKEN,\n json={\n \"requestorId\": \"nbcnews\",\n \"pid\": self.id,\n \"application\": \"NBCSports\",\n \"version\": \"v1\",\n \"platform\": \"desktop\",\n \"token\": \"\",\n \"resourceId\": \"\",\n \"inPath\": \"false\",\n \"authenticationType\": \"unauth\",\n \"cdn\": \"akamai\",\n \"url\": stream,\n },\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"akamai\": [{\n \"tokenizedUrl\": validate.url(),\n }],\n },\n 
validate.get((\"akamai\", 0, \"tokenizedUrl\")),\n ),\n )\n return HLSStream.parse_variant_playlist(self.session, url)\n\n\n__plugin__ = NBCNews\n", "path": "src/streamlink/plugins/nbcnews.py"}], "after_files": [{"content": null, "path": "src/streamlink/plugins/nbcnews.py"}]}
| 1,853 | 696 |
gh_patches_debug_21241
|
rasdani/github-patches
|
git_diff
|
liqd__a4-meinberlin-2280
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
events not editable for groupmembers
group member CAN edit a project but CANNOT edit an event related to the project. 403 error.
--- END ISSUE ---
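For context, permissions in this codebase are assembled from django-rules predicates that are combined with `|` (logical OR). The sketch below is illustrative only — the predicate body and permission name are hypothetical and not taken from the project — and simply shows how OR-ing in an additional predicate (for example, a project-group membership check) widens who passes a permission:

```python
# Minimal django-rules sketch (illustrative names, not the project's own code).
import rules
from rules.predicates import is_superuser


@rules.predicate
def is_group_member(user, obj):
    # Hypothetical check: does the user belong to the group attached to the
    # object's project? A real project would query its own models here.
    return obj is not None and user.groups.filter(pk=obj.project.group_id).exists()


# The permission passes if ANY of the OR-ed predicates returns True for (user, obj).
rules.add_perm(
    'example_app.change_item',
    is_superuser | is_group_member,
)
```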
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/offlineevents/rules.py`
Content:
```
1 import rules
2 from rules.predicates import is_superuser
3
4 from adhocracy4.modules import predicates as module_predicates
5 from adhocracy4.organisations.predicates import is_initiator
6 from adhocracy4.projects.predicates import is_moderator
7
8 rules.add_perm(
9 'meinberlin_offlineevents.list_offlineevent',
10 is_superuser | is_initiator | is_moderator
11 )
12
13 rules.add_perm(
14 'meinberlin_offlineevents.view_offlineevent',
15 module_predicates.is_allowed_view_item
16 )
17
18 rules.add_perm(
19 'meinberlin_offlineevents.add_offlineevent',
20 is_superuser | is_initiator | is_moderator
21 )
22
23 rules.add_perm(
24 'meinberlin_offlineevents.change_offlineevent',
25 is_superuser | is_initiator | is_moderator
26 )
27
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/meinberlin/apps/offlineevents/rules.py b/meinberlin/apps/offlineevents/rules.py
--- a/meinberlin/apps/offlineevents/rules.py
+++ b/meinberlin/apps/offlineevents/rules.py
@@ -4,10 +4,11 @@
from adhocracy4.modules import predicates as module_predicates
from adhocracy4.organisations.predicates import is_initiator
from adhocracy4.projects.predicates import is_moderator
+from adhocracy4.projects.predicates import is_prj_group_member
rules.add_perm(
'meinberlin_offlineevents.list_offlineevent',
- is_superuser | is_initiator | is_moderator
+ is_superuser | is_initiator | is_moderator | is_prj_group_member
)
rules.add_perm(
@@ -17,10 +18,10 @@
rules.add_perm(
'meinberlin_offlineevents.add_offlineevent',
- is_superuser | is_initiator | is_moderator
+ is_superuser | is_initiator | is_moderator | is_prj_group_member
)
rules.add_perm(
'meinberlin_offlineevents.change_offlineevent',
- is_superuser | is_initiator | is_moderator
+ is_superuser | is_initiator | is_moderator | is_prj_group_member
)
|
{"golden_diff": "diff --git a/meinberlin/apps/offlineevents/rules.py b/meinberlin/apps/offlineevents/rules.py\n--- a/meinberlin/apps/offlineevents/rules.py\n+++ b/meinberlin/apps/offlineevents/rules.py\n@@ -4,10 +4,11 @@\n from adhocracy4.modules import predicates as module_predicates\n from adhocracy4.organisations.predicates import is_initiator\n from adhocracy4.projects.predicates import is_moderator\n+from adhocracy4.projects.predicates import is_prj_group_member\n \n rules.add_perm(\n 'meinberlin_offlineevents.list_offlineevent',\n- is_superuser | is_initiator | is_moderator\n+ is_superuser | is_initiator | is_moderator | is_prj_group_member\n )\n \n rules.add_perm(\n@@ -17,10 +18,10 @@\n \n rules.add_perm(\n 'meinberlin_offlineevents.add_offlineevent',\n- is_superuser | is_initiator | is_moderator\n+ is_superuser | is_initiator | is_moderator | is_prj_group_member\n )\n \n rules.add_perm(\n 'meinberlin_offlineevents.change_offlineevent',\n- is_superuser | is_initiator | is_moderator\n+ is_superuser | is_initiator | is_moderator | is_prj_group_member\n )\n", "issue": "events not editable for groupmembers\ngroup member CAN edit a project but CANNOT an event related to the project. 403 error.\n", "before_files": [{"content": "import rules\nfrom rules.predicates import is_superuser\n\nfrom adhocracy4.modules import predicates as module_predicates\nfrom adhocracy4.organisations.predicates import is_initiator\nfrom adhocracy4.projects.predicates import is_moderator\n\nrules.add_perm(\n 'meinberlin_offlineevents.list_offlineevent',\n is_superuser | is_initiator | is_moderator\n)\n\nrules.add_perm(\n 'meinberlin_offlineevents.view_offlineevent',\n module_predicates.is_allowed_view_item\n)\n\nrules.add_perm(\n 'meinberlin_offlineevents.add_offlineevent',\n is_superuser | is_initiator | is_moderator\n)\n\nrules.add_perm(\n 'meinberlin_offlineevents.change_offlineevent',\n is_superuser | is_initiator | is_moderator\n)\n", "path": "meinberlin/apps/offlineevents/rules.py"}], "after_files": [{"content": "import rules\nfrom rules.predicates import is_superuser\n\nfrom adhocracy4.modules import predicates as module_predicates\nfrom adhocracy4.organisations.predicates import is_initiator\nfrom adhocracy4.projects.predicates import is_moderator\nfrom adhocracy4.projects.predicates import is_prj_group_member\n\nrules.add_perm(\n 'meinberlin_offlineevents.list_offlineevent',\n is_superuser | is_initiator | is_moderator | is_prj_group_member\n)\n\nrules.add_perm(\n 'meinberlin_offlineevents.view_offlineevent',\n module_predicates.is_allowed_view_item\n)\n\nrules.add_perm(\n 'meinberlin_offlineevents.add_offlineevent',\n is_superuser | is_initiator | is_moderator | is_prj_group_member\n)\n\nrules.add_perm(\n 'meinberlin_offlineevents.change_offlineevent',\n is_superuser | is_initiator | is_moderator | is_prj_group_member\n)\n", "path": "meinberlin/apps/offlineevents/rules.py"}]}
| 519 | 301 |
gh_patches_debug_1221
|
rasdani/github-patches
|
git_diff
|
google__mobly-524
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix pytest warnings in Python 3
`pytest` currently produces the following warnings:
mobly/mobly/test_runner.py:181: PytestWarning: cannot collect test class 'TestRunner' because it has a __init__ constructor
class TestRunner(object):
mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.
expected_completed_and_passed)
mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.
expected_completed_and_passed)
mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.begin_time, expected_begin_time)
mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.end_time, expected_end_time)
mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.
expected_completed_and_passed)
mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.begin_time, expected_begin_time)
mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.end_time, expected_end_time)
mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.begin_time, expected_begin_time)
mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.end_time, expected_end_time)
mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.
expected_completed_and_passed)
mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.
expected_completed_and_passed)
mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.
expected_completed_and_passed)
mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.begin_time, expected_begin_time)
mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.end_time, expected_end_time)
mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.begin_time, expected_begin_time)
mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.end_time, expected_end_time)
mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.
expected_completed_and_passed)
mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.
expected_completed_and_passed)
mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.begin_time, expected_begin_time)
mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(actual_test.end_time, expected_end_time)
mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.
expected_completed_and_passed)
mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.
expected_completed_and_passed)
mobly/tests/mobly/output_test.py:171: DeprecationWarning: Please use assertNotEqual instead.
self.assertNotEquals(output_dir1, output_dir2)
mobly/tests/mobly/output_test.py:205: DeprecationWarning: Please use assertNotEqual instead.
self.assertNotEquals(output_dir1, output_dir2)
-- Docs: https://docs.pytest.org/en/latest/warnings.html
--- END ISSUE ---
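The warnings above fall into two groups: `DeprecationWarning`s from the old `assertEquals`/`assertNotEquals` aliases, and a `PytestWarning` raised because pytest tries to collect the non-test `TestRunner` class (the module and class names match pytest's default collection patterns, but the class defines an `__init__` constructor). As a purely illustrative reference — not code from Mobly — the preferred assertion spellings look like this:

```python
# Illustrative only: deprecated unittest aliases vs. their preferred names.
import unittest


class ExampleTest(unittest.TestCase):
    def test_preferred_assertions(self):
        # Deprecated aliases (emit DeprecationWarning under Python 3):
        #   self.assertEquals(a, b)
        #   self.assertNotEquals(a, b)
        # Preferred spellings:
        self.assertEqual(1 + 1, 2)
        self.assertNotEqual("a", "b")


if __name__ == "__main__":
    unittest.main()
```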
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2016 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import platform
16 import setuptools
17 from setuptools.command import test
18 import sys
19
20 install_requires = [
21 'future', 'portpicker', 'psutil>=5.4.4', 'pyserial', 'pyyaml',
22 'timeout_decorator'
23 ]
24
25 if sys.version_info < (3, ):
26 install_requires.extend([
27 'enum34',
28 # "futures" is needed for py2 compatibility and it only works in 2.7
29 'futures',
30 ])
31
32 if platform.system() == 'Windows':
33 install_requires.append('pywin32')
34
35
36 class PyTest(test.test):
37 """Class used to execute unit tests using PyTest. This allows us to execute
38 unit tests without having to install the package.
39 """
40
41 def finalize_options(self):
42 test.test.finalize_options(self)
43 self.test_args = ['-x', "tests"]
44 self.test_suite = True
45
46 def run_tests(self):
47 import pytest
48 errno = pytest.main(self.test_args)
49 sys.exit(errno)
50
51
52 def main():
53 setuptools.setup(
54 name='mobly',
55 version='1.7.5',
56 maintainer='Ang Li',
57 maintainer_email='[email protected]',
58 description='Automation framework for special end-to-end test cases',
59 license='Apache2.0',
60 url='https://github.com/google/mobly',
61 download_url='https://github.com/google/mobly/tarball/1.7.5',
62 packages=setuptools.find_packages(),
63 include_package_data=False,
64 scripts=['tools/sl4a_shell.py', 'tools/snippet_shell.py'],
65 tests_require=[
66 'mock',
67 'pytest',
68 'pytz',
69 ],
70 install_requires=install_requires,
71 cmdclass={'test': PyTest},
72 )
73
74
75 if __name__ == '__main__':
76 main()
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -40,7 +40,7 @@
def finalize_options(self):
test.test.finalize_options(self)
- self.test_args = ['-x', "tests"]
+ self.test_args = ['-x', "tests/mobly"]
self.test_suite = True
def run_tests(self):
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -40,7 +40,7 @@\n \n def finalize_options(self):\n test.test.finalize_options(self)\n- self.test_args = ['-x', \"tests\"]\n+ self.test_args = ['-x', \"tests/mobly\"]\n self.test_suite = True\n \n def run_tests(self):\n", "issue": "Fix pytest warnings in Python 3\n`pytests` currently produces the following warnings:\r\n\r\n mobly/mobly/test_runner.py:181: PytestWarning: cannot collect test class 'TestRunner' because it has a __init__ constructor\r\n class TestRunner(object):\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.\r\n expected_completed_and_passed)\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.\r\n expected_completed_and_passed)\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.begin_time, expected_begin_time)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.end_time, expected_end_time)\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.\r\n expected_completed_and_passed)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.begin_time, expected_begin_time)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.end_time, expected_end_time)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.begin_time, expected_begin_time)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.end_time, expected_end_time)\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.\r\n expected_completed_and_passed)\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.\r\n expected_completed_and_passed)\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.\r\n expected_completed_and_passed)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.begin_time, expected_begin_time)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.end_time, expected_end_time)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.begin_time, expected_begin_time)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.end_time, expected_end_time)\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.\r\n expected_completed_and_passed)\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.\r\n 
expected_completed_and_passed)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:192: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.begin_time, expected_begin_time)\r\n mobly/tests/mobly/base_instrumentation_test_test.py:193: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(actual_test.end_time, expected_end_time)\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.\r\n expected_completed_and_passed)\r\n\r\n mobly/tests/mobly/base_instrumentation_test_test.py:179: DeprecationWarning: Please use assertEqual instead.\r\n expected_completed_and_passed)\r\n\r\n mobly/tests/mobly/output_test.py:171: DeprecationWarning: Please use assertNotEqual instead.\r\n self.assertNotEquals(output_dir1, output_dir2)\r\n\r\n mobly/tests/mobly/output_test.py:205: DeprecationWarning: Please use assertNotEqual instead.\r\n self.assertNotEquals(output_dir1, output_dir2)\r\n\r\n -- Docs: https://docs.pytest.org/en/latest/warnings.html\n", "before_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport platform\nimport setuptools\nfrom setuptools.command import test\nimport sys\n\ninstall_requires = [\n 'future', 'portpicker', 'psutil>=5.4.4', 'pyserial', 'pyyaml',\n 'timeout_decorator'\n]\n\nif sys.version_info < (3, ):\n install_requires.extend([\n 'enum34',\n # \"futures\" is needed for py2 compatibility and it only works in 2.7\n 'futures',\n ])\n\nif platform.system() == 'Windows':\n install_requires.append('pywin32')\n\n\nclass PyTest(test.test):\n \"\"\"Class used to execute unit tests using PyTest. 
This allows us to execute\n unit tests without having to install the package.\n \"\"\"\n\n def finalize_options(self):\n test.test.finalize_options(self)\n self.test_args = ['-x', \"tests\"]\n self.test_suite = True\n\n def run_tests(self):\n import pytest\n errno = pytest.main(self.test_args)\n sys.exit(errno)\n\n\ndef main():\n setuptools.setup(\n name='mobly',\n version='1.7.5',\n maintainer='Ang Li',\n maintainer_email='[email protected]',\n description='Automation framework for special end-to-end test cases',\n license='Apache2.0',\n url='https://github.com/google/mobly',\n download_url='https://github.com/google/mobly/tarball/1.7.5',\n packages=setuptools.find_packages(),\n include_package_data=False,\n scripts=['tools/sl4a_shell.py', 'tools/snippet_shell.py'],\n tests_require=[\n 'mock',\n 'pytest',\n 'pytz',\n ],\n install_requires=install_requires,\n cmdclass={'test': PyTest},\n )\n\n\nif __name__ == '__main__':\n main()\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport platform\nimport setuptools\nfrom setuptools.command import test\nimport sys\n\ninstall_requires = [\n 'future', 'portpicker', 'psutil>=5.4.4', 'pyserial', 'pyyaml',\n 'timeout_decorator'\n]\n\nif sys.version_info < (3, ):\n install_requires.extend([\n 'enum34',\n # \"futures\" is needed for py2 compatibility and it only works in 2.7\n 'futures',\n ])\n\nif platform.system() == 'Windows':\n install_requires.append('pywin32')\n\n\nclass PyTest(test.test):\n \"\"\"Class used to execute unit tests using PyTest. This allows us to execute\n unit tests without having to install the package.\n \"\"\"\n\n def finalize_options(self):\n test.test.finalize_options(self)\n self.test_args = ['-x', \"tests/mobly\"]\n self.test_suite = True\n\n def run_tests(self):\n import pytest\n errno = pytest.main(self.test_args)\n sys.exit(errno)\n\n\ndef main():\n setuptools.setup(\n name='mobly',\n version='1.7.5',\n maintainer='Ang Li',\n maintainer_email='[email protected]',\n description='Automation framework for special end-to-end test cases',\n license='Apache2.0',\n url='https://github.com/google/mobly',\n download_url='https://github.com/google/mobly/tarball/1.7.5',\n packages=setuptools.find_packages(),\n include_package_data=False,\n scripts=['tools/sl4a_shell.py', 'tools/snippet_shell.py'],\n tests_require=[\n 'mock',\n 'pytest',\n 'pytz',\n ],\n install_requires=install_requires,\n cmdclass={'test': PyTest},\n )\n\n\nif __name__ == '__main__':\n main()\n", "path": "setup.py"}]}
| 1,947 | 87 |
gh_patches_debug_26726
|
rasdani/github-patches
|
git_diff
|
ivy-llc__ivy-15478
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
loggamma
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/jax/random.py`
Content:
```
1 # local
2 import ivy
3 from ivy.func_wrapper import with_unsupported_dtypes
4 from ivy.functional.frontends.jax.func_wrapper import (
5 to_ivy_arrays_and_back,
6 handle_jax_dtype,
7 )
8
9
10 @to_ivy_arrays_and_back
11 def PRNGKey(seed):
12 return ivy.array([0, seed % 4294967295 - (seed // 4294967295)], dtype=ivy.int64)
13
14
15 @handle_jax_dtype
16 @to_ivy_arrays_and_back
17 def uniform(key, shape=(), dtype=None, minval=0.0, maxval=1.0):
18 return ivy.random_uniform(
19 low=minval, high=maxval, shape=shape, dtype=dtype, seed=ivy.to_scalar(key[1])
20 )
21
22
23 @handle_jax_dtype
24 @to_ivy_arrays_and_back
25 def normal(key, shape=(), dtype=None):
26 return ivy.random_normal(shape=shape, dtype=dtype, seed=ivy.to_scalar(key[1]))
27
28
29 def _get_seed(key):
30 key1, key2 = int(key[0]), int(key[1])
31 return ivy.to_scalar(int("".join(map(str, [key1, key2]))))
32
33
34 @handle_jax_dtype
35 @to_ivy_arrays_and_back
36 @with_unsupported_dtypes(
37 {
38 "0.3.14 and below": (
39 "float16",
40 "bfloat16",
41 )
42 },
43 "jax",
44 )
45 def beta(key, a, b, shape=None, dtype=None):
46 seed = _get_seed(key)
47 return ivy.beta(a, b, shape=shape, dtype=dtype, seed=seed)
48
49
50 @handle_jax_dtype
51 @to_ivy_arrays_and_back
52 @with_unsupported_dtypes(
53 {
54 "0.3.14 and below": (
55 "float16",
56 "bfloat16",
57 )
58 },
59 "jax",
60 )
61 def dirichlet(key, alpha, shape=None, dtype="float32"):
62 seed = _get_seed(key)
63 alpha = ivy.astype(alpha, dtype)
64 return ivy.dirichlet(alpha, size=shape, dtype=dtype, seed=seed)
65
66
67 @handle_jax_dtype
68 @to_ivy_arrays_and_back
69 def cauchy(key, shape=(), dtype="float64"):
70 seed = _get_seed(key)
71 u = ivy.random_uniform(low=0.0, high=1.0, shape=shape, dtype=dtype, seed=seed)
72 return ivy.tan(ivy.pi * (u - 0.5))
73
74
75 @handle_jax_dtype
76 @to_ivy_arrays_and_back
77 @with_unsupported_dtypes(
78 {"0.3.14 and below": ("unsigned", "int8", "int16")},
79 "jax",
80 )
81 def poisson(key, lam, shape=None, dtype=None):
82 seed = _get_seed(key)
83 return ivy.poisson(lam, shape=shape, dtype=dtype, seed=seed)
84
85
86 @handle_jax_dtype
87 @to_ivy_arrays_and_back
88 @with_unsupported_dtypes(
89 {
90 "0.3.14 and below": (
91 "float16",
92 "bfloat16",
93 )
94 },
95 "jax",
96 )
97 def gamma(key, a, shape=None, dtype="float64"):
98 seed = _get_seed(key)
99 return ivy.gamma(a, 1.0, shape=shape, dtype=dtype, seed=seed)
100
101
102 @handle_jax_dtype
103 @to_ivy_arrays_and_back
104 @with_unsupported_dtypes(
105 {
106 "0.3.14 and below": (
107 "float16",
108 "bfloat16",
109 )
110 },
111 "jax",
112 )
113 def gumbel(key, shape=(), dtype="float64"):
114 seed = _get_seed(key)
115 uniform_x = ivy.random_uniform(
116 low=0.0,
117 high=1.0,
118 shape=shape,
119 dtype=dtype,
120 seed=seed,
121 )
122 return -ivy.log(-ivy.log(uniform_x))
123
124
125 @handle_jax_dtype
126 @to_ivy_arrays_and_back
127 @with_unsupported_dtypes(
128 {"0.3.14 and below": ("unsigned", "int8", "int16")},
129 "jax",
130 )
131 def rademacher(key, shape, dtype="int64"):
132 seed = _get_seed(key)
133 b = ivy.bernoulli(ivy.array([0.5]), shape=shape, dtype="float32", seed=seed)
134 b = ivy.astype(b, dtype)
135 return 2 * b - 1
136
137
138 @handle_jax_dtype
139 @to_ivy_arrays_and_back
140 @with_unsupported_dtypes(
141 {
142 "0.3.14 and below": (
143 "float16",
144 "bfloat16",
145 )
146 },
147 "jax",
148 )
149 def generalized_normal(key, p, shape=(), dtype="float64"):
150 seed = _get_seed(key)
151 g = ivy.gamma(1 / p, 1.0, shape=shape, dtype=dtype, seed=seed)
152 b = ivy.bernoulli(ivy.array([0.5]), shape=shape, dtype=dtype, seed=seed)
153 r = 2 * b - 1
154 return r * g ** (1 / p)
155
156
157 def t(key, df, shape=(), dtype="float64"):
158 seed = _get_seed(key)
159 n = ivy.random_normal(shape=shape, dtype=dtype, seed=seed)
160 half_df = df / 2.0
161 g = ivy.gamma(half_df, 1.0, shape=shape, dtype=dtype, seed=seed)
162 return n * ivy.sqrt(ivy.divide(half_df, g))
163
164
165 @handle_jax_dtype
166 @to_ivy_arrays_and_back
167 @with_unsupported_dtypes(
168 {"0.3.14 and below": ("unsigned", "int8", "int16")},
169 "jax",
170 )
171 def randint(key, shape, minval, maxval, dtype="int64"):
172 seed = _get_seed(key)
173 return ivy.randint(minval, maxval, shape=shape, dtype=dtype, seed=seed)
174
175 @to_ivy_arrays_and_back
176 def bernoulli(key, p=0.5, shape=None):
177 seed = _get_seed(key)
178 return ivy.bernoulli(p, shape=shape, seed=seed)
179
180 @to_ivy_arrays_and_back
181 def fold_in(key, data):
182 s = ivy.bitwise_left_shift(
183 ivy.asarray(data, dtype=ivy.uint32), ivy.array(32, dtype=ivy.uint32)
184 )
185 return ivy.bitwise_xor(key, s)
186
187 @to_ivy_arrays_and_back
188 def permutation(key, x, axis=0, independent=False):
189 x = ivy.array(x)
190 seed = _get_seed(key)
191 if not ivy.get_num_dims(x):
192 r = int(x)
193 return ivy.shuffle(ivy.arange(r), axis, seed=seed)
194 if independent:
195 return ivy.shuffle(x, axis, seed=seed)
196 rand = ivy.arange(x.shape[axis])
197 ind = ivy.shuffle(rand, 0, seed=seed)
198 return ivy.gather(x, ind, axis=axis)
199
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ivy/functional/frontends/jax/random.py b/ivy/functional/frontends/jax/random.py
--- a/ivy/functional/frontends/jax/random.py
+++ b/ivy/functional/frontends/jax/random.py
@@ -172,11 +172,13 @@
seed = _get_seed(key)
return ivy.randint(minval, maxval, shape=shape, dtype=dtype, seed=seed)
+
@to_ivy_arrays_and_back
def bernoulli(key, p=0.5, shape=None):
seed = _get_seed(key)
return ivy.bernoulli(p, shape=shape, seed=seed)
+
@to_ivy_arrays_and_back
def fold_in(key, data):
s = ivy.bitwise_left_shift(
@@ -184,6 +186,7 @@
)
return ivy.bitwise_xor(key, s)
+
@to_ivy_arrays_and_back
def permutation(key, x, axis=0, independent=False):
x = ivy.array(x)
@@ -196,3 +199,21 @@
rand = ivy.arange(x.shape[axis])
ind = ivy.shuffle(rand, 0, seed=seed)
return ivy.gather(x, ind, axis=axis)
+
+
+# loggamma
+@to_ivy_arrays_and_back
+@handle_jax_dtype
+@to_ivy_arrays_and_back
+@with_unsupported_dtypes(
+ {
+ "0.3.14 and below": (
+ "float16",
+ "bfloat16",
+ )
+ },
+ "jax",
+)
+def loggamma(key, a, shape=None, dtype="float64"):
+ seed = _get_seed(key)
+ return ivy.log(ivy.gamma(a, 1.0, shape=shape, dtype=dtype, seed=seed))
|
{"golden_diff": "diff --git a/ivy/functional/frontends/jax/random.py b/ivy/functional/frontends/jax/random.py\n--- a/ivy/functional/frontends/jax/random.py\n+++ b/ivy/functional/frontends/jax/random.py\n@@ -172,11 +172,13 @@\n seed = _get_seed(key)\n return ivy.randint(minval, maxval, shape=shape, dtype=dtype, seed=seed)\n \n+\n @to_ivy_arrays_and_back\n def bernoulli(key, p=0.5, shape=None):\n seed = _get_seed(key)\n return ivy.bernoulli(p, shape=shape, seed=seed)\n \n+\n @to_ivy_arrays_and_back\n def fold_in(key, data):\n s = ivy.bitwise_left_shift(\n@@ -184,6 +186,7 @@\n )\n return ivy.bitwise_xor(key, s)\n \n+\n @to_ivy_arrays_and_back\n def permutation(key, x, axis=0, independent=False):\n x = ivy.array(x)\n@@ -196,3 +199,21 @@\n rand = ivy.arange(x.shape[axis])\n ind = ivy.shuffle(rand, 0, seed=seed)\n return ivy.gather(x, ind, axis=axis)\n+\n+\n+# loggamma\n+@to_ivy_arrays_and_back\n+@handle_jax_dtype\n+@to_ivy_arrays_and_back\n+@with_unsupported_dtypes(\n+ {\n+ \"0.3.14 and below\": (\n+ \"float16\",\n+ \"bfloat16\",\n+ )\n+ },\n+ \"jax\",\n+)\n+def loggamma(key, a, shape=None, dtype=\"float64\"):\n+ seed = _get_seed(key)\n+ return ivy.log(ivy.gamma(a, 1.0, shape=shape, dtype=dtype, seed=seed))\n", "issue": "loggamma\n\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes\nfrom ivy.functional.frontends.jax.func_wrapper import (\n to_ivy_arrays_and_back,\n handle_jax_dtype,\n)\n\n\n@to_ivy_arrays_and_back\ndef PRNGKey(seed):\n return ivy.array([0, seed % 4294967295 - (seed // 4294967295)], dtype=ivy.int64)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef uniform(key, shape=(), dtype=None, minval=0.0, maxval=1.0):\n return ivy.random_uniform(\n low=minval, high=maxval, shape=shape, dtype=dtype, seed=ivy.to_scalar(key[1])\n )\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef normal(key, shape=(), dtype=None):\n return ivy.random_normal(shape=shape, dtype=dtype, seed=ivy.to_scalar(key[1]))\n\n\ndef _get_seed(key):\n key1, key2 = int(key[0]), int(key[1])\n return ivy.to_scalar(int(\"\".join(map(str, [key1, key2]))))\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\n \"0.3.14 and below\": (\n \"float16\",\n \"bfloat16\",\n )\n },\n \"jax\",\n)\ndef beta(key, a, b, shape=None, dtype=None):\n seed = _get_seed(key)\n return ivy.beta(a, b, shape=shape, dtype=dtype, seed=seed)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\n \"0.3.14 and below\": (\n \"float16\",\n \"bfloat16\",\n )\n },\n \"jax\",\n)\ndef dirichlet(key, alpha, shape=None, dtype=\"float32\"):\n seed = _get_seed(key)\n alpha = ivy.astype(alpha, dtype)\n return ivy.dirichlet(alpha, size=shape, dtype=dtype, seed=seed)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef cauchy(key, shape=(), dtype=\"float64\"):\n seed = _get_seed(key)\n u = ivy.random_uniform(low=0.0, high=1.0, shape=shape, dtype=dtype, seed=seed)\n return ivy.tan(ivy.pi * (u - 0.5))\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\"0.3.14 and below\": (\"unsigned\", \"int8\", \"int16\")},\n \"jax\",\n)\ndef poisson(key, lam, shape=None, dtype=None):\n seed = _get_seed(key)\n return ivy.poisson(lam, shape=shape, dtype=dtype, seed=seed)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\n \"0.3.14 and below\": (\n \"float16\",\n \"bfloat16\",\n )\n },\n \"jax\",\n)\ndef gamma(key, a, shape=None, dtype=\"float64\"):\n seed = _get_seed(key)\n return ivy.gamma(a, 1.0, shape=shape, 
dtype=dtype, seed=seed)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\n \"0.3.14 and below\": (\n \"float16\",\n \"bfloat16\",\n )\n },\n \"jax\",\n)\ndef gumbel(key, shape=(), dtype=\"float64\"):\n seed = _get_seed(key)\n uniform_x = ivy.random_uniform(\n low=0.0,\n high=1.0,\n shape=shape,\n dtype=dtype,\n seed=seed,\n )\n return -ivy.log(-ivy.log(uniform_x))\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\"0.3.14 and below\": (\"unsigned\", \"int8\", \"int16\")},\n \"jax\",\n)\ndef rademacher(key, shape, dtype=\"int64\"):\n seed = _get_seed(key)\n b = ivy.bernoulli(ivy.array([0.5]), shape=shape, dtype=\"float32\", seed=seed)\n b = ivy.astype(b, dtype)\n return 2 * b - 1\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\n \"0.3.14 and below\": (\n \"float16\",\n \"bfloat16\",\n )\n },\n \"jax\",\n)\ndef generalized_normal(key, p, shape=(), dtype=\"float64\"):\n seed = _get_seed(key)\n g = ivy.gamma(1 / p, 1.0, shape=shape, dtype=dtype, seed=seed)\n b = ivy.bernoulli(ivy.array([0.5]), shape=shape, dtype=dtype, seed=seed)\n r = 2 * b - 1\n return r * g ** (1 / p)\n\n\ndef t(key, df, shape=(), dtype=\"float64\"):\n seed = _get_seed(key)\n n = ivy.random_normal(shape=shape, dtype=dtype, seed=seed)\n half_df = df / 2.0\n g = ivy.gamma(half_df, 1.0, shape=shape, dtype=dtype, seed=seed)\n return n * ivy.sqrt(ivy.divide(half_df, g))\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\"0.3.14 and below\": (\"unsigned\", \"int8\", \"int16\")},\n \"jax\",\n)\ndef randint(key, shape, minval, maxval, dtype=\"int64\"):\n seed = _get_seed(key)\n return ivy.randint(minval, maxval, shape=shape, dtype=dtype, seed=seed)\n\n@to_ivy_arrays_and_back\ndef bernoulli(key, p=0.5, shape=None):\n seed = _get_seed(key)\n return ivy.bernoulli(p, shape=shape, seed=seed)\n\n@to_ivy_arrays_and_back\ndef fold_in(key, data):\n s = ivy.bitwise_left_shift(\n ivy.asarray(data, dtype=ivy.uint32), ivy.array(32, dtype=ivy.uint32)\n )\n return ivy.bitwise_xor(key, s)\n\n@to_ivy_arrays_and_back\ndef permutation(key, x, axis=0, independent=False):\n x = ivy.array(x)\n seed = _get_seed(key)\n if not ivy.get_num_dims(x):\n r = int(x)\n return ivy.shuffle(ivy.arange(r), axis, seed=seed)\n if independent:\n return ivy.shuffle(x, axis, seed=seed)\n rand = ivy.arange(x.shape[axis])\n ind = ivy.shuffle(rand, 0, seed=seed)\n return ivy.gather(x, ind, axis=axis)\n", "path": "ivy/functional/frontends/jax/random.py"}], "after_files": [{"content": "# local\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes\nfrom ivy.functional.frontends.jax.func_wrapper import (\n to_ivy_arrays_and_back,\n handle_jax_dtype,\n)\n\n\n@to_ivy_arrays_and_back\ndef PRNGKey(seed):\n return ivy.array([0, seed % 4294967295 - (seed // 4294967295)], dtype=ivy.int64)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef uniform(key, shape=(), dtype=None, minval=0.0, maxval=1.0):\n return ivy.random_uniform(\n low=minval, high=maxval, shape=shape, dtype=dtype, seed=ivy.to_scalar(key[1])\n )\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef normal(key, shape=(), dtype=None):\n return ivy.random_normal(shape=shape, dtype=dtype, seed=ivy.to_scalar(key[1]))\n\n\ndef _get_seed(key):\n key1, key2 = int(key[0]), int(key[1])\n return ivy.to_scalar(int(\"\".join(map(str, [key1, key2]))))\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\n \"0.3.14 and below\": (\n \"float16\",\n \"bfloat16\",\n )\n },\n 
\"jax\",\n)\ndef beta(key, a, b, shape=None, dtype=None):\n seed = _get_seed(key)\n return ivy.beta(a, b, shape=shape, dtype=dtype, seed=seed)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\n \"0.3.14 and below\": (\n \"float16\",\n \"bfloat16\",\n )\n },\n \"jax\",\n)\ndef dirichlet(key, alpha, shape=None, dtype=\"float32\"):\n seed = _get_seed(key)\n alpha = ivy.astype(alpha, dtype)\n return ivy.dirichlet(alpha, size=shape, dtype=dtype, seed=seed)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef cauchy(key, shape=(), dtype=\"float64\"):\n seed = _get_seed(key)\n u = ivy.random_uniform(low=0.0, high=1.0, shape=shape, dtype=dtype, seed=seed)\n return ivy.tan(ivy.pi * (u - 0.5))\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\"0.3.14 and below\": (\"unsigned\", \"int8\", \"int16\")},\n \"jax\",\n)\ndef poisson(key, lam, shape=None, dtype=None):\n seed = _get_seed(key)\n return ivy.poisson(lam, shape=shape, dtype=dtype, seed=seed)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\n \"0.3.14 and below\": (\n \"float16\",\n \"bfloat16\",\n )\n },\n \"jax\",\n)\ndef gamma(key, a, shape=None, dtype=\"float64\"):\n seed = _get_seed(key)\n return ivy.gamma(a, 1.0, shape=shape, dtype=dtype, seed=seed)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\n \"0.3.14 and below\": (\n \"float16\",\n \"bfloat16\",\n )\n },\n \"jax\",\n)\ndef gumbel(key, shape=(), dtype=\"float64\"):\n seed = _get_seed(key)\n uniform_x = ivy.random_uniform(\n low=0.0,\n high=1.0,\n shape=shape,\n dtype=dtype,\n seed=seed,\n )\n return -ivy.log(-ivy.log(uniform_x))\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\"0.3.14 and below\": (\"unsigned\", \"int8\", \"int16\")},\n \"jax\",\n)\ndef rademacher(key, shape, dtype=\"int64\"):\n seed = _get_seed(key)\n b = ivy.bernoulli(ivy.array([0.5]), shape=shape, dtype=\"float32\", seed=seed)\n b = ivy.astype(b, dtype)\n return 2 * b - 1\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\n \"0.3.14 and below\": (\n \"float16\",\n \"bfloat16\",\n )\n },\n \"jax\",\n)\ndef generalized_normal(key, p, shape=(), dtype=\"float64\"):\n seed = _get_seed(key)\n g = ivy.gamma(1 / p, 1.0, shape=shape, dtype=dtype, seed=seed)\n b = ivy.bernoulli(ivy.array([0.5]), shape=shape, dtype=dtype, seed=seed)\n r = 2 * b - 1\n return r * g ** (1 / p)\n\n\ndef t(key, df, shape=(), dtype=\"float64\"):\n seed = _get_seed(key)\n n = ivy.random_normal(shape=shape, dtype=dtype, seed=seed)\n half_df = df / 2.0\n g = ivy.gamma(half_df, 1.0, shape=shape, dtype=dtype, seed=seed)\n return n * ivy.sqrt(ivy.divide(half_df, g))\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\"0.3.14 and below\": (\"unsigned\", \"int8\", \"int16\")},\n \"jax\",\n)\ndef randint(key, shape, minval, maxval, dtype=\"int64\"):\n seed = _get_seed(key)\n return ivy.randint(minval, maxval, shape=shape, dtype=dtype, seed=seed)\n\n\n@to_ivy_arrays_and_back\ndef bernoulli(key, p=0.5, shape=None):\n seed = _get_seed(key)\n return ivy.bernoulli(p, shape=shape, seed=seed)\n\n\n@to_ivy_arrays_and_back\ndef fold_in(key, data):\n s = ivy.bitwise_left_shift(\n ivy.asarray(data, dtype=ivy.uint32), ivy.array(32, dtype=ivy.uint32)\n )\n return ivy.bitwise_xor(key, s)\n\n\n@to_ivy_arrays_and_back\ndef permutation(key, x, axis=0, independent=False):\n x = ivy.array(x)\n seed = _get_seed(key)\n if not ivy.get_num_dims(x):\n r = int(x)\n return 
ivy.shuffle(ivy.arange(r), axis, seed=seed)\n if independent:\n return ivy.shuffle(x, axis, seed=seed)\n rand = ivy.arange(x.shape[axis])\n ind = ivy.shuffle(rand, 0, seed=seed)\n return ivy.gather(x, ind, axis=axis)\n\n\n# loggamma\n@to_ivy_arrays_and_back\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\n \"0.3.14 and below\": (\n \"float16\",\n \"bfloat16\",\n )\n },\n \"jax\",\n)\ndef loggamma(key, a, shape=None, dtype=\"float64\"):\n seed = _get_seed(key)\n return ivy.log(ivy.gamma(a, 1.0, shape=shape, dtype=dtype, seed=seed))\n", "path": "ivy/functional/frontends/jax/random.py"}]}
| 2,396 | 426 |
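For context on the record above: the accepted patch implements `loggamma` by drawing standard-gamma samples and taking their logarithm. A minimal illustrative sketch of that equivalence, using NumPy rather than the ivy API (the helper name and seeding are assumptions for illustration, not code from the repository):

```python
import numpy as np

def loggamma_sample(a, size=None, seed=None):
    """Draw log-gamma(a) variates as the log of standard-gamma(a) draws."""
    rng = np.random.default_rng(seed)
    # ivy.log(ivy.gamma(a, 1.0, ...)) in the patch corresponds to this:
    return np.log(rng.gamma(shape=a, scale=1.0, size=size))

print(loggamma_sample(3.0, size=4, seed=0))
```

The log-of-gamma construction can lose precision when `a` is very small, which is why dedicated log-gamma samplers exist; the sketch only mirrors what the patch above does.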
gh_patches_debug_8409
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-478
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`docker build` argument order is invalid on old versions of Docker
We do: `docker build . --tag thing --label thing`
But this produces an error on Docker 1.11.2:
```
ckuehl@dev4-uswest1cdevc:~/proj/pre-commit$ docker build . --tag thing --label thing
docker: "build" requires 1 argument.
See 'docker build --help'.
Usage: docker build [OPTIONS] PATH | URL | -
Build an image from a Dockerfile
```
The path needs to go at the end on 1.11.2, but it works on 1.13.0 as-is. We should probably just change the order of the arguments to make every version happy.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/languages/docker.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import unicode_literals
3
4 import hashlib
5 import os
6
7 from pre_commit import five
8 from pre_commit.languages import helpers
9 from pre_commit.util import CalledProcessError
10 from pre_commit.util import clean_path_on_failure
11 from pre_commit.util import cmd_output
12 from pre_commit.xargs import xargs
13
14
15 ENVIRONMENT_DIR = 'docker'
16 PRE_COMMIT_LABEL = 'PRE_COMMIT'
17
18
19 def md5(s): # pragma: windows no cover
20 return hashlib.md5(five.to_bytes(s)).hexdigest()
21
22
23 def docker_tag(repo_cmd_runner): # pragma: windows no cover
24 return 'pre-commit-{}'.format(
25 md5(os.path.basename(repo_cmd_runner.path()))
26 ).lower()
27
28
29 def docker_is_running(): # pragma: windows no cover
30 try:
31 return cmd_output('docker', 'ps')[0] == 0
32 except CalledProcessError:
33 return False
34
35
36 def assert_docker_available(): # pragma: windows no cover
37 assert docker_is_running(), (
38 'Docker is either not running or not configured in this environment'
39 )
40
41
42 def build_docker_image(repo_cmd_runner, **kwargs): # pragma: windows no cover
43 pull = kwargs.pop('pull')
44 assert not kwargs, kwargs
45 cmd = (
46 'docker', 'build', '.',
47 '--tag', docker_tag(repo_cmd_runner),
48 '--label', PRE_COMMIT_LABEL,
49 )
50 if pull:
51 cmd += ('--pull',)
52 helpers.run_setup_cmd(repo_cmd_runner, cmd)
53
54
55 def install_environment(
56 repo_cmd_runner,
57 version='default',
58 additional_dependencies=(),
59 ): # pragma: windows no cover
60 assert repo_cmd_runner.exists('Dockerfile'), (
61 'No Dockerfile was found in the hook repository'
62 )
63 helpers.assert_version_default('docker', version)
64 helpers.assert_no_additional_deps('docker', additional_dependencies)
65 assert_docker_available()
66
67 directory = repo_cmd_runner.path(
68 helpers.environment_dir(ENVIRONMENT_DIR, 'default'),
69 )
70
71 # Docker doesn't really have relevant disk environment, but pre-commit
72 # still needs to cleanup it's state files on failure
73 with clean_path_on_failure(directory):
74 build_docker_image(repo_cmd_runner, pull=True)
75 os.mkdir(directory)
76
77
78 def run_hook(repo_cmd_runner, hook, file_args): # pragma: windows no cover
79 assert_docker_available()
80 # Rebuild the docker image in case it has gone missing, as many people do
81 # automated cleanup of docker images.
82 build_docker_image(repo_cmd_runner, pull=False)
83
84 hook_cmd = helpers.to_cmd(hook)
85 entry_executable, cmd_rest = hook_cmd[0], hook_cmd[1:]
86
87 cmd = (
88 'docker', 'run',
89 '--rm',
90 '-u', '{}:{}'.format(os.getuid(), os.getgid()),
91 '-v', '{}:/src:rw'.format(os.getcwd()),
92 '--workdir', '/src',
93 '--entrypoint', entry_executable,
94 docker_tag(repo_cmd_runner)
95 ) + cmd_rest
96
97 return xargs(cmd, file_args)
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pre_commit/languages/docker.py b/pre_commit/languages/docker.py
--- a/pre_commit/languages/docker.py
+++ b/pre_commit/languages/docker.py
@@ -43,12 +43,14 @@
pull = kwargs.pop('pull')
assert not kwargs, kwargs
cmd = (
- 'docker', 'build', '.',
+ 'docker', 'build',
'--tag', docker_tag(repo_cmd_runner),
'--label', PRE_COMMIT_LABEL,
)
if pull:
cmd += ('--pull',)
+ # This must come last for old versions of docker. See #477
+ cmd += ('.',)
helpers.run_setup_cmd(repo_cmd_runner, cmd)
|
{"golden_diff": "diff --git a/pre_commit/languages/docker.py b/pre_commit/languages/docker.py\n--- a/pre_commit/languages/docker.py\n+++ b/pre_commit/languages/docker.py\n@@ -43,12 +43,14 @@\n pull = kwargs.pop('pull')\n assert not kwargs, kwargs\n cmd = (\n- 'docker', 'build', '.',\n+ 'docker', 'build',\n '--tag', docker_tag(repo_cmd_runner),\n '--label', PRE_COMMIT_LABEL,\n )\n if pull:\n cmd += ('--pull',)\n+ # This must come last for old versions of docker. See #477\n+ cmd += ('.',)\n helpers.run_setup_cmd(repo_cmd_runner, cmd)\n", "issue": "`docker build` argument order is invalid on old versions of Docker\nWe do: `docker build . --tag thing --label thing`\r\n\r\nBut this produces an error on Docker 1.11.2:\r\n```\r\nckuehl@dev4-uswest1cdevc:~/proj/pre-commit$ docker build . --tag thing --label thing\r\ndocker: \"build\" requires 1 argument.\r\nSee 'docker build --help'.\r\n\r\nUsage: docker build [OPTIONS] PATH | URL | -\r\n\r\nBuild an image from a Dockerfile\r\n```\r\n\r\nThe path needs to go at the end on 1.11.2, but it works on 1.13.0 as-is. We should probably just change the order of the arguments to make every version happy.\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import unicode_literals\n\nimport hashlib\nimport os\n\nfrom pre_commit import five\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\nfrom pre_commit.xargs import xargs\n\n\nENVIRONMENT_DIR = 'docker'\nPRE_COMMIT_LABEL = 'PRE_COMMIT'\n\n\ndef md5(s): # pragma: windows no cover\n return hashlib.md5(five.to_bytes(s)).hexdigest()\n\n\ndef docker_tag(repo_cmd_runner): # pragma: windows no cover\n return 'pre-commit-{}'.format(\n md5(os.path.basename(repo_cmd_runner.path()))\n ).lower()\n\n\ndef docker_is_running(): # pragma: windows no cover\n try:\n return cmd_output('docker', 'ps')[0] == 0\n except CalledProcessError:\n return False\n\n\ndef assert_docker_available(): # pragma: windows no cover\n assert docker_is_running(), (\n 'Docker is either not running or not configured in this environment'\n )\n\n\ndef build_docker_image(repo_cmd_runner, **kwargs): # pragma: windows no cover\n pull = kwargs.pop('pull')\n assert not kwargs, kwargs\n cmd = (\n 'docker', 'build', '.',\n '--tag', docker_tag(repo_cmd_runner),\n '--label', PRE_COMMIT_LABEL,\n )\n if pull:\n cmd += ('--pull',)\n helpers.run_setup_cmd(repo_cmd_runner, cmd)\n\n\ndef install_environment(\n repo_cmd_runner,\n version='default',\n additional_dependencies=(),\n): # pragma: windows no cover\n assert repo_cmd_runner.exists('Dockerfile'), (\n 'No Dockerfile was found in the hook repository'\n )\n helpers.assert_version_default('docker', version)\n helpers.assert_no_additional_deps('docker', additional_dependencies)\n assert_docker_available()\n\n directory = repo_cmd_runner.path(\n helpers.environment_dir(ENVIRONMENT_DIR, 'default'),\n )\n\n # Docker doesn't really have relevant disk environment, but pre-commit\n # still needs to cleanup it's state files on failure\n with clean_path_on_failure(directory):\n build_docker_image(repo_cmd_runner, pull=True)\n os.mkdir(directory)\n\n\ndef run_hook(repo_cmd_runner, hook, file_args): # pragma: windows no cover\n assert_docker_available()\n # Rebuild the docker image in case it has gone missing, as many people do\n # automated cleanup of docker images.\n build_docker_image(repo_cmd_runner, pull=False)\n\n hook_cmd = helpers.to_cmd(hook)\n 
entry_executable, cmd_rest = hook_cmd[0], hook_cmd[1:]\n\n cmd = (\n 'docker', 'run',\n '--rm',\n '-u', '{}:{}'.format(os.getuid(), os.getgid()),\n '-v', '{}:/src:rw'.format(os.getcwd()),\n '--workdir', '/src',\n '--entrypoint', entry_executable,\n docker_tag(repo_cmd_runner)\n ) + cmd_rest\n\n return xargs(cmd, file_args)\n", "path": "pre_commit/languages/docker.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import unicode_literals\n\nimport hashlib\nimport os\n\nfrom pre_commit import five\nfrom pre_commit.languages import helpers\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\nfrom pre_commit.xargs import xargs\n\n\nENVIRONMENT_DIR = 'docker'\nPRE_COMMIT_LABEL = 'PRE_COMMIT'\n\n\ndef md5(s): # pragma: windows no cover\n return hashlib.md5(five.to_bytes(s)).hexdigest()\n\n\ndef docker_tag(repo_cmd_runner): # pragma: windows no cover\n return 'pre-commit-{}'.format(\n md5(os.path.basename(repo_cmd_runner.path()))\n ).lower()\n\n\ndef docker_is_running(): # pragma: windows no cover\n try:\n return cmd_output('docker', 'ps')[0] == 0\n except CalledProcessError:\n return False\n\n\ndef assert_docker_available(): # pragma: windows no cover\n assert docker_is_running(), (\n 'Docker is either not running or not configured in this environment'\n )\n\n\ndef build_docker_image(repo_cmd_runner, **kwargs): # pragma: windows no cover\n pull = kwargs.pop('pull')\n assert not kwargs, kwargs\n cmd = (\n 'docker', 'build',\n '--tag', docker_tag(repo_cmd_runner),\n '--label', PRE_COMMIT_LABEL,\n )\n if pull:\n cmd += ('--pull',)\n # This must come last for old versions of docker. See #477\n cmd += ('.',)\n helpers.run_setup_cmd(repo_cmd_runner, cmd)\n\n\ndef install_environment(\n repo_cmd_runner,\n version='default',\n additional_dependencies=(),\n): # pragma: windows no cover\n assert repo_cmd_runner.exists('Dockerfile'), (\n 'No Dockerfile was found in the hook repository'\n )\n helpers.assert_version_default('docker', version)\n helpers.assert_no_additional_deps('docker', additional_dependencies)\n assert_docker_available()\n\n directory = repo_cmd_runner.path(\n helpers.environment_dir(ENVIRONMENT_DIR, 'default'),\n )\n\n # Docker doesn't really have relevant disk environment, but pre-commit\n # still needs to cleanup it's state files on failure\n with clean_path_on_failure(directory):\n build_docker_image(repo_cmd_runner, pull=True)\n os.mkdir(directory)\n\n\ndef run_hook(repo_cmd_runner, hook, file_args): # pragma: windows no cover\n assert_docker_available()\n # Rebuild the docker image in case it has gone missing, as many people do\n # automated cleanup of docker images.\n build_docker_image(repo_cmd_runner, pull=False)\n\n hook_cmd = helpers.to_cmd(hook)\n entry_executable, cmd_rest = hook_cmd[0], hook_cmd[1:]\n\n cmd = (\n 'docker', 'run',\n '--rm',\n '-u', '{}:{}'.format(os.getuid(), os.getgid()),\n '-v', '{}:/src:rw'.format(os.getcwd()),\n '--workdir', '/src',\n '--entrypoint', entry_executable,\n docker_tag(repo_cmd_runner)\n ) + cmd_rest\n\n return xargs(cmd, file_args)\n", "path": "pre_commit/languages/docker.py"}]}
| 1,288 | 156 |
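For context on the record above: the fix keeps every `docker build` option before the positional build context, since Docker 1.11.x rejects options placed after the path while newer clients accept either order. A small illustrative sketch of that ordering (a hypothetical helper, not code from pre-commit):

```python
def docker_build_cmd(tag, label, pull=False, context="."):
    """Assemble a docker build command with the context path last."""
    cmd = ("docker", "build", "--tag", tag, "--label", label)
    if pull:
        cmd += ("--pull",)
    # Old Docker clients require the positional PATH argument at the end;
    # newer clients accept it anywhere, so placing it last works for both.
    return cmd + (context,)

print(" ".join(docker_build_cmd("thing", "PRE_COMMIT", pull=True)))
# docker build --tag thing --label PRE_COMMIT --pull .
```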
gh_patches_debug_60679
|
rasdani/github-patches
|
git_diff
|
OCHA-DAP__hdx-ckan-1655
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Main Nav changes to accomodate "Feedback" button
Updating the action based on discussions:
1. Remove `Contact` from main nav.
2. Between `About` and `Submit Data`, add a button `Feedback`.
3. While you are there, change `Submit Data` to `Share Data` (there's a later issue for that which this will close)
Button style should be the same as the "Follow Us" button here, except gray, not blue: http://docs.hdx.rwlabs.org/get-involved/
Note that the megaphone icon shown below will not be used. No icon on the button.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext-hdx_theme/ckanext/hdx_theme/version.py`
Content:
```
1 hdx_version = 'v0.4.5'
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py
+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
@@ -1 +1 @@
-hdx_version = 'v0.4.5'
+hdx_version = 'v0.4.6'
|
{"golden_diff": "diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n@@ -1 +1 @@\n-hdx_version = 'v0.4.5'\n+hdx_version = 'v0.4.6'\n", "issue": "Main Nav changes to accomodate \"Feedback\" button\nUpdating the action based on discussions:\n1. Remove `Contact` from main nav. \n2. Between `About` and `Submit Data`, add a button `Feedback`. \n3. While you are there, change `Submit Data` to `Share Data` (there's a later issue for that which this will close)\n\nButton style should be the same as the \"Follow Us\" button here, except gray, not blue: http://docs.hdx.rwlabs.org/get-involved/\n\nNote that the megaphone icon shown below will not be used. No icon on the button.\n\n", "before_files": [{"content": "hdx_version = 'v0.4.5'\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}], "after_files": [{"content": "hdx_version = 'v0.4.6'\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}]}
| 415 | 106 |
gh_patches_debug_17445
|
rasdani/github-patches
|
git_diff
|
facebookresearch__fairscale-78
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Async broadcast request in OSS
## 🚀 Feature
Make sure that the broadcast request to sync shards are async, and not blocking the main thread
## Motivation
OSS is currently too slow
## Pitch
Easy fix, no compromise
## Alternatives
Stay slow for no reason
## Additional context
Clear bottleneck when tracing execution
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `fairscale/optim/oss.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
2 #
3 # This source code is licensed under the BSD license found in the
4 # LICENSE file in the root directory of this source tree.
5
6 import copy
7 from itertools import chain
8 import logging
9 from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Tuple, Type
10
11 import torch
12 import torch.distributed as dist
13 from torch.optim import SGD, Optimizer
14
15 from .utils import broadcast_object, recursive_copy_to_device
16
17 if TYPE_CHECKING: # pragma: no cover
18 from torch.optim.optimizer import _params_t
19 else:
20 _params_t = Any
21
22
23 class OSS(Optimizer):
24 """Wraps an arbitrary :class:`optim.Optimizer <torch.optim.Optimizer>`
25 optimizer and shards its state as described by ZeRO_.
26 ::
27 opt = OSS(params, optim=torch.optim.Adam, lr=0.01)
28
29 .. _ZeRO: https://arxiv.org/abs/1910.02054
30
31 We use a greedy algorithm to pack a number of parameters
32 at each rank. Each parameter belongs to a single rank and
33 is not divided among rank.
34
35 After each rank completed their parameter update, they broadcast
36 the new version of the parameters to all other ranks to synchronize
37 the parameters for next round forward/backward computation.
38
39 Args:
40 params (list of tensors):
41 parameters to be optimized
42 Keyword Args:
43 optim (torch.nn.Optimizer):
44 optimizer to shard (default: SGD)
45 group (group):
46 torch.distributed group (default: group.WORLD)
47 """
48
49 optim: Optimizer
50 in_super_constructor: bool
51
52 def __init__(self, params: _params_t, optim: Type[Optimizer] = SGD, group: Any = dist.group.WORLD, **defaults: Any):
53 # Hold all the model params in the root .param_groups
54 self.in_super_constructor = True
55 super().__init__(params, defaults)
56 self.in_super_constructor = False
57
58 # Build the wrapped optimizer, responsible for a shard of the params
59 self.group = group
60 self.rank = dist.get_rank(group)
61 split_param_groups = self.partition_parameters()
62 self.optim = optim(split_param_groups[self.rank], **defaults)
63
64 # Optional consolidated optimizer state
65 self._all_states: List[Dict[str, Any]] = []
66
67 # Current device is set by the parameters allocated to this rank
68 self._device = split_param_groups[self.rank][0]["params"][0].device
69
70 # Sync local and global param_groups keys
71 for global_group, local_group in zip(self.param_groups, self.optim.param_groups):
72 for k, v in local_group.items():
73 if k != "params":
74 global_group[k] = v
75
76 def partition_parameters(self) -> List[List[dict]]:
77 """Partitions parameters across distributed ranks.
78
79 Returns a list of param_groups (which is a list of dict) where each
80 element of the list contains the param_groups for a rank. Element 0
81 corresponds to rank 0, etc. We need all the ranks for the broadcast
82 inside step().
83 """
84 world_size = dist.get_world_size(self.group)
85 param_groups: List[List] = [list() for _ in range(world_size)]
86 sizes = [0] * world_size
87 for param_group in self.param_groups:
88 param_lists: List[List] = [list() for _ in range(world_size)]
89 for param in param_group["params"]:
90 # Add this param to rank with smallest size.
91 rank = sizes.index(min(sizes))
92 param_lists[rank].append(param)
93 sizes[rank] += param.numel()
94 for rank, params in enumerate(param_lists):
95 param_group_rank = copy.copy(param_group)
96 param_group_rank["params"] = params
97 param_groups[rank].append(param_group_rank)
98 return param_groups
99
100 # NOTE(msb) We add a kwargs in order to support Optimizer sub-classes that support extra kwargs.
101 # For example, the apex library contains fused optimizers with a step that supports extra kwargs.
102 def step(self, closure: Optional[Callable[[], float]] = None, **kwargs: Any) -> Optional[float]:
103 # Sync oss param_groups attributes in case they've been updated by a scheduler.
104 self._sync_param_groups()
105
106 # Run the optimizer step on this shard only
107 loss = self.optim.step(closure=closure, **kwargs) # type: ignore
108
109 # Sync all the states
110 for rank, param_groups in enumerate(self.partition_parameters()):
111 for param_group in param_groups:
112 for param in param_group["params"]:
113 dist.broadcast(tensor=param, src=rank, group=self.group)
114 return loss
115
116 def local_state_dict(self) -> dict:
117 """ Gets this rank's state_dict. """
118 return self.optim.state_dict()
119
120 def consolidate_state_dict(self, recipient_rank: int = 0) -> None:
121 """ Update the consolidated state_dict list, one per rank.
122
123 This needs to be called on all replicas """
124
125 # Sync lr and other attributes in case its been updated
126 self._sync_param_groups()
127
128 if self.rank == recipient_rank:
129 # Pull the sharded state from all the other replicas
130 # Store all the states in order, rank by rank
131 logging.debug("Pulling the sharded optimizer state from all replicas")
132 self._all_states = self._collect_sharded_states()
133 else:
134 # Acknowledge broadcasts, and send this rank's shard when needed
135 self._broadcast_state_dict()
136
137 def state_dict(self) -> Dict[str, Any]:
138 """
139 Return the last known global optimizer state, which consist of a list of the shards.
140
141 NOTE: This is limited to the replica which was responsible for the consolidation.
142 The state may also not be up to date, depending on when `consolidate_state_dict` was last called.
143 """
144
145 assert (
146 len(self._all_states) > 0
147 ), "The optimizer state is not materialized, please call consolidate_state_dict on every replica beforehand"
148
149 # Flatten the param_groups, save the partition which logs the rank <> shard correspondence
150 partition: List[Tuple[int, int]] = []
151 param_groups: List[Dict[Any, Any]] = []
152
153 start = 0
154 for i, s in enumerate(self._all_states):
155 param_groups.extend(s["param_groups"])
156 end = start + len(s["param_groups"])
157 partition.append((start, end))
158 start = end
159
160 return {
161 "state": [s["state"] for s in self._all_states],
162 "param_groups": param_groups,
163 "partition": partition,
164 }
165
166 def load_local_state_dict(self, state_dict: dict) -> None:
167 """ Loads this rank's state_dict. """
168
169 self.optim.load_state_dict(state_dict)
170
171 # Workaround PyTorch bug that casts state (https://github.com/pytorch/pytorch/issues/43706)
172 # Copied from https://github.com/pytorch/fairseq/blob/v0.9.0/fairseq/optim/fp16_optimizer.py#L251-L268
173 groups = self.optim.param_groups
174 saved_groups = state_dict["param_groups"]
175 id_map = {
176 old_id: p
177 for old_id, p in zip(chain(*(g["params"] for g in saved_groups)), chain(*(g["params"] for g in groups)))
178 }
179 for k, v in state_dict["state"].items():
180 if k in id_map:
181 param = id_map[k]
182 self.optim.state[param] = recursive_copy_to_device(v, non_blocking=True, device=param.device)
183
184 # Restore the global param_groups (the params themselves are already correct)
185 for global_group, local_group in zip(self.param_groups, groups):
186 for k, v in local_group.items():
187 if k != "params":
188 global_group[k] = v
189
190 def load_state_dict(self, state_dict: Dict[str, Any]) -> None:
191 """ Restore the global parameter groups as well as the shard """
192
193 # Get this optimizer's param_groups shard
194 param_groups = state_dict["param_groups"][
195 state_dict["partition"][self.rank][0] : state_dict["partition"][self.rank][1]
196 ]
197 # Dispatch this rank's state dictionary to the wrapped shard optimizer
198 self.load_local_state_dict({"state": state_dict["state"][self.rank], "param_groups": param_groups})
199
200 def add_param_group(self, param_group: dict) -> None:
201 super().add_param_group(param_group)
202 if not self.in_super_constructor:
203 param_groups = self.partition_parameters()[self.rank]
204 if len(param_groups) == len(self.optim.param_groups) + 1:
205 self.optim.add_param_group(param_groups[-1])
206
207 def _sync_param_groups(self) -> None:
208 """Sync learning rate and other optimizer attributes (needed to support schedulers)."""
209 for global_group, local_group in zip(self.param_groups, self.optim.param_groups):
210 for k in local_group.keys():
211 if k != "params":
212 # Params have been sharded and should not be synced here
213 local_group[k] = global_group[k]
214
215 def _collect_sharded_states(self) -> List[Dict[str, Any]]:
216 """
217 Collect all the state shards, in CPU memory.
218 """
219 empty_buffer = torch.tensor([0], dtype=torch.uint8, device=self._device)
220 all_states: List[Dict[str, Any]] = []
221
222 for rank in range(dist.get_world_size(group=self.group)):
223 if rank == self.rank:
224 logging.debug("Saving self state")
225 all_states.append(
226 recursive_copy_to_device(self.local_state_dict(), non_blocking=True, device=torch.device("cpu"))
227 )
228
229 # Sync with other replicas
230 broadcast_object(empty_buffer, src_rank=rank, group=self.group, dist_device=self._device)
231 else:
232 # Fetch the optim state from the other replicas
233 logging.debug("Receiving state from rank %s ", rank)
234 replica_state = broadcast_object(
235 empty_buffer, src_rank=rank, group=self.group, dist_device=self._device
236 )
237
238 all_states.append(
239 recursive_copy_to_device(replica_state, non_blocking=True, device=torch.device("cpu"))
240 )
241
242 logging.debug("State from rank %s received", rank)
243
244 return all_states
245
246 def _broadcast_state_dict(self) -> None:
247 """
248 Broadcast this rank's state shard, discard others
249 """
250 empty_buffer = torch.tensor([0], dtype=torch.uint8, device=self._device)
251
252 for rank in range(dist.get_world_size(group=self.group)):
253 if rank == self.rank:
254 # Send the state to the reference replica
255 logging.debug(
256 "Sending the sharded optimizer state to the reference replica from rank %s", rank,
257 )
258 broadcast_object(self.local_state_dict(), src_rank=rank, group=self.group, dist_device=self._device)
259 else:
260 # Discard this tensor/rank, broadcast necessary for syncing
261 logging.debug("Discarding broadcast from rank %s", rank)
262 broadcast_object(empty_buffer, src_rank=rank, group=self.group, dist_device=self._device)
263
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/fairscale/optim/oss.py b/fairscale/optim/oss.py
--- a/fairscale/optim/oss.py
+++ b/fairscale/optim/oss.py
@@ -106,11 +106,14 @@
# Run the optimizer step on this shard only
loss = self.optim.step(closure=closure, **kwargs) # type: ignore
- # Sync all the states
+ # Sync all the states. Broadcast requests are issued async, we check completeness before moving on
+ requests = []
for rank, param_groups in enumerate(self.partition_parameters()):
for param_group in param_groups:
for param in param_group["params"]:
- dist.broadcast(tensor=param, src=rank, group=self.group)
+ requests.append(dist.broadcast(tensor=param, src=rank, group=self.group, async_op=True))
+
+ _ = list(map(lambda x: x.wait(), requests))
return loss
def local_state_dict(self) -> dict:
|
{"golden_diff": "diff --git a/fairscale/optim/oss.py b/fairscale/optim/oss.py\n--- a/fairscale/optim/oss.py\n+++ b/fairscale/optim/oss.py\n@@ -106,11 +106,14 @@\n # Run the optimizer step on this shard only\n loss = self.optim.step(closure=closure, **kwargs) # type: ignore\n \n- # Sync all the states\n+ # Sync all the states. Broadcast requests are issued async, we check completeness before moving on\n+ requests = []\n for rank, param_groups in enumerate(self.partition_parameters()):\n for param_group in param_groups:\n for param in param_group[\"params\"]:\n- dist.broadcast(tensor=param, src=rank, group=self.group)\n+ requests.append(dist.broadcast(tensor=param, src=rank, group=self.group, async_op=True))\n+\n+ _ = list(map(lambda x: x.wait(), requests))\n return loss\n \n def local_state_dict(self) -> dict:\n", "issue": "Async broadcast request in OSS\n## \ud83d\ude80 Feature\r\nMake sure that the broadcast request to sync shards are async, and not blocking the main thread\r\n\r\n## Motivation\r\nOSS is currently too slow\r\n\r\n## Pitch\r\nEasy fix, no compromise\r\n\r\n## Alternatives\r\nStay slow for no reason\r\n\r\n## Additional context\r\nClear bottleneck when tracing execution\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\nimport copy\nfrom itertools import chain\nimport logging\nfrom typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Tuple, Type\n\nimport torch\nimport torch.distributed as dist\nfrom torch.optim import SGD, Optimizer\n\nfrom .utils import broadcast_object, recursive_copy_to_device\n\nif TYPE_CHECKING: # pragma: no cover\n from torch.optim.optimizer import _params_t\nelse:\n _params_t = Any\n\n\nclass OSS(Optimizer):\n \"\"\"Wraps an arbitrary :class:`optim.Optimizer <torch.optim.Optimizer>`\n optimizer and shards its state as described by ZeRO_.\n ::\n opt = OSS(params, optim=torch.optim.Adam, lr=0.01)\n\n .. _ZeRO: https://arxiv.org/abs/1910.02054\n\n We use a greedy algorithm to pack a number of parameters\n at each rank. 
Each parameter belongs to a single rank and\n is not divided among rank.\n\n After each rank completed their parameter update, they broadcast\n the new version of the parameters to all other ranks to synchronize\n the parameters for next round forward/backward computation.\n\n Args:\n params (list of tensors):\n parameters to be optimized\n Keyword Args:\n optim (torch.nn.Optimizer):\n optimizer to shard (default: SGD)\n group (group):\n torch.distributed group (default: group.WORLD)\n \"\"\"\n\n optim: Optimizer\n in_super_constructor: bool\n\n def __init__(self, params: _params_t, optim: Type[Optimizer] = SGD, group: Any = dist.group.WORLD, **defaults: Any):\n # Hold all the model params in the root .param_groups\n self.in_super_constructor = True\n super().__init__(params, defaults)\n self.in_super_constructor = False\n\n # Build the wrapped optimizer, responsible for a shard of the params\n self.group = group\n self.rank = dist.get_rank(group)\n split_param_groups = self.partition_parameters()\n self.optim = optim(split_param_groups[self.rank], **defaults)\n\n # Optional consolidated optimizer state\n self._all_states: List[Dict[str, Any]] = []\n\n # Current device is set by the parameters allocated to this rank\n self._device = split_param_groups[self.rank][0][\"params\"][0].device\n\n # Sync local and global param_groups keys\n for global_group, local_group in zip(self.param_groups, self.optim.param_groups):\n for k, v in local_group.items():\n if k != \"params\":\n global_group[k] = v\n\n def partition_parameters(self) -> List[List[dict]]:\n \"\"\"Partitions parameters across distributed ranks.\n\n Returns a list of param_groups (which is a list of dict) where each\n element of the list contains the param_groups for a rank. Element 0\n corresponds to rank 0, etc. We need all the ranks for the broadcast\n inside step().\n \"\"\"\n world_size = dist.get_world_size(self.group)\n param_groups: List[List] = [list() for _ in range(world_size)]\n sizes = [0] * world_size\n for param_group in self.param_groups:\n param_lists: List[List] = [list() for _ in range(world_size)]\n for param in param_group[\"params\"]:\n # Add this param to rank with smallest size.\n rank = sizes.index(min(sizes))\n param_lists[rank].append(param)\n sizes[rank] += param.numel()\n for rank, params in enumerate(param_lists):\n param_group_rank = copy.copy(param_group)\n param_group_rank[\"params\"] = params\n param_groups[rank].append(param_group_rank)\n return param_groups\n\n # NOTE(msb) We add a kwargs in order to support Optimizer sub-classes that support extra kwargs.\n # For example, the apex library contains fused optimizers with a step that supports extra kwargs.\n def step(self, closure: Optional[Callable[[], float]] = None, **kwargs: Any) -> Optional[float]:\n # Sync oss param_groups attributes in case they've been updated by a scheduler.\n self._sync_param_groups()\n\n # Run the optimizer step on this shard only\n loss = self.optim.step(closure=closure, **kwargs) # type: ignore\n\n # Sync all the states\n for rank, param_groups in enumerate(self.partition_parameters()):\n for param_group in param_groups:\n for param in param_group[\"params\"]:\n dist.broadcast(tensor=param, src=rank, group=self.group)\n return loss\n\n def local_state_dict(self) -> dict:\n \"\"\" Gets this rank's state_dict. 
\"\"\"\n return self.optim.state_dict()\n\n def consolidate_state_dict(self, recipient_rank: int = 0) -> None:\n \"\"\" Update the consolidated state_dict list, one per rank.\n\n This needs to be called on all replicas \"\"\"\n\n # Sync lr and other attributes in case its been updated\n self._sync_param_groups()\n\n if self.rank == recipient_rank:\n # Pull the sharded state from all the other replicas\n # Store all the states in order, rank by rank\n logging.debug(\"Pulling the sharded optimizer state from all replicas\")\n self._all_states = self._collect_sharded_states()\n else:\n # Acknowledge broadcasts, and send this rank's shard when needed\n self._broadcast_state_dict()\n\n def state_dict(self) -> Dict[str, Any]:\n \"\"\"\n Return the last known global optimizer state, which consist of a list of the shards.\n\n NOTE: This is limited to the replica which was responsible for the consolidation.\n The state may also not be up to date, depending on when `consolidate_state_dict` was last called.\n \"\"\"\n\n assert (\n len(self._all_states) > 0\n ), \"The optimizer state is not materialized, please call consolidate_state_dict on every replica beforehand\"\n\n # Flatten the param_groups, save the partition which logs the rank <> shard correspondence\n partition: List[Tuple[int, int]] = []\n param_groups: List[Dict[Any, Any]] = []\n\n start = 0\n for i, s in enumerate(self._all_states):\n param_groups.extend(s[\"param_groups\"])\n end = start + len(s[\"param_groups\"])\n partition.append((start, end))\n start = end\n\n return {\n \"state\": [s[\"state\"] for s in self._all_states],\n \"param_groups\": param_groups,\n \"partition\": partition,\n }\n\n def load_local_state_dict(self, state_dict: dict) -> None:\n \"\"\" Loads this rank's state_dict. \"\"\"\n\n self.optim.load_state_dict(state_dict)\n\n # Workaround PyTorch bug that casts state (https://github.com/pytorch/pytorch/issues/43706)\n # Copied from https://github.com/pytorch/fairseq/blob/v0.9.0/fairseq/optim/fp16_optimizer.py#L251-L268\n groups = self.optim.param_groups\n saved_groups = state_dict[\"param_groups\"]\n id_map = {\n old_id: p\n for old_id, p in zip(chain(*(g[\"params\"] for g in saved_groups)), chain(*(g[\"params\"] for g in groups)))\n }\n for k, v in state_dict[\"state\"].items():\n if k in id_map:\n param = id_map[k]\n self.optim.state[param] = recursive_copy_to_device(v, non_blocking=True, device=param.device)\n\n # Restore the global param_groups (the params themselves are already correct)\n for global_group, local_group in zip(self.param_groups, groups):\n for k, v in local_group.items():\n if k != \"params\":\n global_group[k] = v\n\n def load_state_dict(self, state_dict: Dict[str, Any]) -> None:\n \"\"\" Restore the global parameter groups as well as the shard \"\"\"\n\n # Get this optimizer's param_groups shard\n param_groups = state_dict[\"param_groups\"][\n state_dict[\"partition\"][self.rank][0] : state_dict[\"partition\"][self.rank][1]\n ]\n # Dispatch this rank's state dictionary to the wrapped shard optimizer\n self.load_local_state_dict({\"state\": state_dict[\"state\"][self.rank], \"param_groups\": param_groups})\n\n def add_param_group(self, param_group: dict) -> None:\n super().add_param_group(param_group)\n if not self.in_super_constructor:\n param_groups = self.partition_parameters()[self.rank]\n if len(param_groups) == len(self.optim.param_groups) + 1:\n self.optim.add_param_group(param_groups[-1])\n\n def _sync_param_groups(self) -> None:\n \"\"\"Sync learning rate and other optimizer attributes 
(needed to support schedulers).\"\"\"\n for global_group, local_group in zip(self.param_groups, self.optim.param_groups):\n for k in local_group.keys():\n if k != \"params\":\n # Params have been sharded and should not be synced here\n local_group[k] = global_group[k]\n\n def _collect_sharded_states(self) -> List[Dict[str, Any]]:\n \"\"\"\n Collect all the state shards, in CPU memory.\n \"\"\"\n empty_buffer = torch.tensor([0], dtype=torch.uint8, device=self._device)\n all_states: List[Dict[str, Any]] = []\n\n for rank in range(dist.get_world_size(group=self.group)):\n if rank == self.rank:\n logging.debug(\"Saving self state\")\n all_states.append(\n recursive_copy_to_device(self.local_state_dict(), non_blocking=True, device=torch.device(\"cpu\"))\n )\n\n # Sync with other replicas\n broadcast_object(empty_buffer, src_rank=rank, group=self.group, dist_device=self._device)\n else:\n # Fetch the optim state from the other replicas\n logging.debug(\"Receiving state from rank %s \", rank)\n replica_state = broadcast_object(\n empty_buffer, src_rank=rank, group=self.group, dist_device=self._device\n )\n\n all_states.append(\n recursive_copy_to_device(replica_state, non_blocking=True, device=torch.device(\"cpu\"))\n )\n\n logging.debug(\"State from rank %s received\", rank)\n\n return all_states\n\n def _broadcast_state_dict(self) -> None:\n \"\"\"\n Broadcast this rank's state shard, discard others\n \"\"\"\n empty_buffer = torch.tensor([0], dtype=torch.uint8, device=self._device)\n\n for rank in range(dist.get_world_size(group=self.group)):\n if rank == self.rank:\n # Send the state to the reference replica\n logging.debug(\n \"Sending the sharded optimizer state to the reference replica from rank %s\", rank,\n )\n broadcast_object(self.local_state_dict(), src_rank=rank, group=self.group, dist_device=self._device)\n else:\n # Discard this tensor/rank, broadcast necessary for syncing\n logging.debug(\"Discarding broadcast from rank %s\", rank)\n broadcast_object(empty_buffer, src_rank=rank, group=self.group, dist_device=self._device)\n", "path": "fairscale/optim/oss.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\nimport copy\nfrom itertools import chain\nimport logging\nfrom typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Tuple, Type\n\nimport torch\nimport torch.distributed as dist\nfrom torch.optim import SGD, Optimizer\n\nfrom .utils import broadcast_object, recursive_copy_to_device\n\nif TYPE_CHECKING: # pragma: no cover\n from torch.optim.optimizer import _params_t\nelse:\n _params_t = Any\n\n\nclass OSS(Optimizer):\n \"\"\"Wraps an arbitrary :class:`optim.Optimizer <torch.optim.Optimizer>`\n optimizer and shards its state as described by ZeRO_.\n ::\n opt = OSS(params, optim=torch.optim.Adam, lr=0.01)\n\n .. _ZeRO: https://arxiv.org/abs/1910.02054\n\n We use a greedy algorithm to pack a number of parameters\n at each rank. 
Each parameter belongs to a single rank and\n is not divided among rank.\n\n After each rank completed their parameter update, they broadcast\n the new version of the parameters to all other ranks to synchronize\n the parameters for next round forward/backward computation.\n\n Args:\n params (list of tensors):\n parameters to be optimized\n Keyword Args:\n optim (torch.nn.Optimizer):\n optimizer to shard (default: SGD)\n group (group):\n torch.distributed group (default: group.WORLD)\n \"\"\"\n\n optim: Optimizer\n in_super_constructor: bool\n\n def __init__(self, params: _params_t, optim: Type[Optimizer] = SGD, group: Any = dist.group.WORLD, **defaults: Any):\n # Hold all the model params in the root .param_groups\n self.in_super_constructor = True\n super().__init__(params, defaults)\n self.in_super_constructor = False\n\n # Build the wrapped optimizer, responsible for a shard of the params\n self.group = group\n self.rank = dist.get_rank(group)\n split_param_groups = self.partition_parameters()\n self.optim = optim(split_param_groups[self.rank], **defaults)\n\n # Optional consolidated optimizer state\n self._all_states: List[Dict[str, Any]] = []\n\n # Current device is set by the parameters allocated to this rank\n self._device = split_param_groups[self.rank][0][\"params\"][0].device\n\n # Sync local and global param_groups keys\n for global_group, local_group in zip(self.param_groups, self.optim.param_groups):\n for k, v in local_group.items():\n if k != \"params\":\n global_group[k] = v\n\n def partition_parameters(self) -> List[List[dict]]:\n \"\"\"Partitions parameters across distributed ranks.\n\n Returns a list of param_groups (which is a list of dict) where each\n element of the list contains the param_groups for a rank. Element 0\n corresponds to rank 0, etc. We need all the ranks for the broadcast\n inside step().\n \"\"\"\n world_size = dist.get_world_size(self.group)\n param_groups: List[List] = [list() for _ in range(world_size)]\n sizes = [0] * world_size\n for param_group in self.param_groups:\n param_lists: List[List] = [list() for _ in range(world_size)]\n for param in param_group[\"params\"]:\n # Add this param to rank with smallest size.\n rank = sizes.index(min(sizes))\n param_lists[rank].append(param)\n sizes[rank] += param.numel()\n for rank, params in enumerate(param_lists):\n param_group_rank = copy.copy(param_group)\n param_group_rank[\"params\"] = params\n param_groups[rank].append(param_group_rank)\n return param_groups\n\n # NOTE(msb) We add a kwargs in order to support Optimizer sub-classes that support extra kwargs.\n # For example, the apex library contains fused optimizers with a step that supports extra kwargs.\n def step(self, closure: Optional[Callable[[], float]] = None, **kwargs: Any) -> Optional[float]:\n # Sync oss param_groups attributes in case they've been updated by a scheduler.\n self._sync_param_groups()\n\n # Run the optimizer step on this shard only\n loss = self.optim.step(closure=closure, **kwargs) # type: ignore\n\n # Sync all the states. Broadcast requests are issued async, we check completeness before moving on\n requests = []\n for rank, param_groups in enumerate(self.partition_parameters()):\n for param_group in param_groups:\n for param in param_group[\"params\"]:\n requests.append(dist.broadcast(tensor=param, src=rank, group=self.group, async_op=True))\n\n _ = list(map(lambda x: x.wait(), requests))\n return loss\n\n def local_state_dict(self) -> dict:\n \"\"\" Gets this rank's state_dict. 
\"\"\"\n return self.optim.state_dict()\n\n def consolidate_state_dict(self, recipient_rank: int = 0) -> None:\n \"\"\" Update the consolidated state_dict list, one per rank.\n\n This needs to be called on all replicas \"\"\"\n\n # Sync lr and other attributes in case its been updated\n self._sync_param_groups()\n\n if self.rank == recipient_rank:\n # Pull the sharded state from all the other replicas\n # Store all the states in order, rank by rank\n logging.debug(\"Pulling the sharded optimizer state from all replicas\")\n self._all_states = self._collect_sharded_states()\n else:\n # Acknowledge broadcasts, and send this rank's shard when needed\n self._broadcast_state_dict()\n\n def state_dict(self) -> Dict[str, Any]:\n \"\"\"\n Return the last known global optimizer state, which consist of a list of the shards.\n\n NOTE: This is limited to the replica which was responsible for the consolidation.\n The state may also not be up to date, depending on when `consolidate_state_dict` was last called.\n \"\"\"\n\n assert (\n len(self._all_states) > 0\n ), \"The optimizer state is not materialized, please call consolidate_state_dict on every replica beforehand\"\n\n # Flatten the param_groups, save the partition which logs the rank <> shard correspondence\n partition: List[Tuple[int, int]] = []\n param_groups: List[Dict[Any, Any]] = []\n\n start = 0\n for i, s in enumerate(self._all_states):\n param_groups.extend(s[\"param_groups\"])\n end = start + len(s[\"param_groups\"])\n partition.append((start, end))\n start = end\n\n return {\n \"state\": [s[\"state\"] for s in self._all_states],\n \"param_groups\": param_groups,\n \"partition\": partition,\n }\n\n def load_local_state_dict(self, state_dict: dict) -> None:\n \"\"\" Loads this rank's state_dict. \"\"\"\n\n self.optim.load_state_dict(state_dict)\n\n # Workaround PyTorch bug that casts state (https://github.com/pytorch/pytorch/issues/43706)\n # Copied from https://github.com/pytorch/fairseq/blob/v0.9.0/fairseq/optim/fp16_optimizer.py#L251-L268\n groups = self.optim.param_groups\n saved_groups = state_dict[\"param_groups\"]\n id_map = {\n old_id: p\n for old_id, p in zip(chain(*(g[\"params\"] for g in saved_groups)), chain(*(g[\"params\"] for g in groups)))\n }\n for k, v in state_dict[\"state\"].items():\n if k in id_map:\n param = id_map[k]\n self.optim.state[param] = recursive_copy_to_device(v, non_blocking=True, device=param.device)\n\n # Restore the global param_groups (the params themselves are already correct)\n for global_group, local_group in zip(self.param_groups, groups):\n for k, v in local_group.items():\n if k != \"params\":\n global_group[k] = v\n\n def load_state_dict(self, state_dict: Dict[str, Any]) -> None:\n \"\"\" Restore the global parameter groups as well as the shard \"\"\"\n\n # Get this optimizer's param_groups shard\n param_groups = state_dict[\"param_groups\"][\n state_dict[\"partition\"][self.rank][0] : state_dict[\"partition\"][self.rank][1]\n ]\n # Dispatch this rank's state dictionary to the wrapped shard optimizer\n self.load_local_state_dict({\"state\": state_dict[\"state\"][self.rank], \"param_groups\": param_groups})\n\n def add_param_group(self, param_group: dict) -> None:\n super().add_param_group(param_group)\n if not self.in_super_constructor:\n param_groups = self.partition_parameters()[self.rank]\n if len(param_groups) == len(self.optim.param_groups) + 1:\n self.optim.add_param_group(param_groups[-1])\n\n def _sync_param_groups(self) -> None:\n \"\"\"Sync learning rate and other optimizer attributes 
(needed to support schedulers).\"\"\"\n for global_group, local_group in zip(self.param_groups, self.optim.param_groups):\n for k in local_group.keys():\n if k != \"params\":\n # Params have been sharded and should not be synced here\n local_group[k] = global_group[k]\n\n def _collect_sharded_states(self) -> List[Dict[str, Any]]:\n \"\"\"\n Collect all the state shards, in CPU memory.\n \"\"\"\n empty_buffer = torch.tensor([0], dtype=torch.uint8, device=self._device)\n all_states: List[Dict[str, Any]] = []\n\n for rank in range(dist.get_world_size(group=self.group)):\n if rank == self.rank:\n logging.debug(\"Saving self state\")\n all_states.append(\n recursive_copy_to_device(self.local_state_dict(), non_blocking=True, device=torch.device(\"cpu\"))\n )\n\n # Sync with other replicas\n broadcast_object(empty_buffer, src_rank=rank, group=self.group, dist_device=self._device)\n else:\n # Fetch the optim state from the other replicas\n logging.debug(\"Receiving state from rank %s \", rank)\n replica_state = broadcast_object(\n empty_buffer, src_rank=rank, group=self.group, dist_device=self._device\n )\n\n all_states.append(\n recursive_copy_to_device(replica_state, non_blocking=True, device=torch.device(\"cpu\"))\n )\n\n logging.debug(\"State from rank %s received\", rank)\n\n return all_states\n\n def _broadcast_state_dict(self) -> None:\n \"\"\"\n Broadcast this rank's state shard, discard others\n \"\"\"\n empty_buffer = torch.tensor([0], dtype=torch.uint8, device=self._device)\n\n for rank in range(dist.get_world_size(group=self.group)):\n if rank == self.rank:\n # Send the state to the reference replica\n logging.debug(\n \"Sending the sharded optimizer state to the reference replica from rank %s\", rank,\n )\n broadcast_object(self.local_state_dict(), src_rank=rank, group=self.group, dist_device=self._device)\n else:\n # Discard this tensor/rank, broadcast necessary for syncing\n logging.debug(\"Discarding broadcast from rank %s\", rank)\n broadcast_object(empty_buffer, src_rank=rank, group=self.group, dist_device=self._device)\n", "path": "fairscale/optim/oss.py"}]}
| 3,459 | 221 |
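For context on the record above: the patch replaces blocking parameter broadcasts with asynchronous ones and waits on the returned work handles only after they are all in flight. A minimal runnable sketch of that pattern (assuming a single-process "gloo" group purely so the example can run on its own; real OSS code reuses the training process group):

```python
import torch
import torch.distributed as dist

# Single-process group, only so the snippet runs outside a real training job.
dist.init_process_group("gloo", init_method="tcp://127.0.0.1:29500",
                        rank=0, world_size=1)

params = [torch.randn(4), torch.randn(2, 3)]

# Issue every broadcast without blocking the main thread...
requests = [
    dist.broadcast(tensor=p, src=0, group=dist.group.WORLD, async_op=True)
    for p in params
]

# ...then synchronize once, after all requests have been queued.
for req in requests:
    req.wait()

dist.destroy_process_group()
```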
gh_patches_debug_10580
|
rasdani/github-patches
|
git_diff
|
plotly__plotly.py-1307
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
v2 API Retry on 429 status_code if users are throttled?
Right now we [Only retry on request errors with 5XX status code and use exponential backoff](https://github.com/plotly/plotly.py/pull/1146). But should we also retry for 429 status codes which users receive if they go over Plot.ly's throttling limit? Especially since calls to the image servers are now limited to 1/sec.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plotly/api/v2/utils.py`
Content:
```
1 from __future__ import absolute_import
2
3 import requests
4 from requests.compat import json as _json
5 from requests.exceptions import RequestException
6 from retrying import retry
7 from plotly import config, exceptions, version, utils
8 from plotly.api.utils import basic_auth
9
10
11 def make_params(**kwargs):
12 """
13 Helper to create a params dict, skipping undefined entries.
14
15 :returns: (dict) A params dict to pass to `request`.
16
17 """
18 return {k: v for k, v in kwargs.items() if v is not None}
19
20
21 def build_url(resource, id='', route=''):
22 """
23 Create a url for a request on a V2 resource.
24
25 :param (str) resource: E.g., 'files', 'plots', 'grids', etc.
26 :param (str) id: The unique identifier for the resource.
27 :param (str) route: Detail/list route. E.g., 'restore', 'lookup', etc.
28 :return: (str) The url.
29
30 """
31 base = config.get_config()['plotly_api_domain']
32 formatter = {'base': base, 'resource': resource, 'id': id, 'route': route}
33
34 # Add path to base url depending on the input params. Note that `route`
35 # can refer to a 'list' or a 'detail' route. Since it cannot refer to
36 # both at the same time, it's overloaded in this function.
37 if id:
38 if route:
39 url = '{base}/v2/{resource}/{id}/{route}'.format(**formatter)
40 else:
41 url = '{base}/v2/{resource}/{id}'.format(**formatter)
42 else:
43 if route:
44 url = '{base}/v2/{resource}/{route}'.format(**formatter)
45 else:
46 url = '{base}/v2/{resource}'.format(**formatter)
47
48 return url
49
50
51 def validate_response(response):
52 """
53 Raise a helpful PlotlyRequestError for failed requests.
54
55 :param (requests.Response) response: A Response object from an api request.
56 :raises: (PlotlyRequestError) If the request failed for any reason.
57 :returns: (None)
58
59 """
60 if response.ok:
61 return
62
63 content = response.content
64 status_code = response.status_code
65 try:
66 parsed_content = response.json()
67 except ValueError:
68 message = content if content else 'No Content'
69 raise exceptions.PlotlyRequestError(message, status_code, content)
70
71 message = ''
72 if isinstance(parsed_content, dict):
73 errors = parsed_content.get('errors', [])
74 messages = [error.get('message') for error in errors]
75 message = '\n'.join([msg for msg in messages if msg])
76 if not message:
77 message = content if content else 'No Content'
78
79 raise exceptions.PlotlyRequestError(message, status_code, content)
80
81
82 def get_headers():
83 """
84 Using session credentials/config, get headers for a V2 API request.
85
86 Users may have their own proxy layer and so we free up the `authorization`
87 header for this purpose (instead adding the user authorization in a new
88 `plotly-authorization` header). See pull #239.
89
90 :returns: (dict) Headers to add to a requests.request call.
91
92 """
93 creds = config.get_credentials()
94
95 headers = {
96 'plotly-client-platform': 'python {}'.format(version.stable_semver()),
97 'content-type': 'application/json'
98 }
99
100 plotly_auth = basic_auth(creds['username'], creds['api_key'])
101 proxy_auth = basic_auth(creds['proxy_username'], creds['proxy_password'])
102
103 if config.get_config()['plotly_proxy_authorization']:
104 headers['authorization'] = proxy_auth
105 if creds['username'] and creds['api_key']:
106 headers['plotly-authorization'] = plotly_auth
107 else:
108 if creds['username'] and creds['api_key']:
109 headers['authorization'] = plotly_auth
110
111 return headers
112
113
114 def should_retry(exception):
115 if isinstance(exception, exceptions.PlotlyRequestError):
116 if (isinstance(exception.status_code, int) and
117 500 <= exception.status_code < 600):
118 # Retry on 5XX errors.
119 return True
120 elif 'Uh oh, an error occurred' in exception.message:
121 return True
122
123 return False
124
125
126 @retry(wait_exponential_multiplier=1000, wait_exponential_max=16000,
127 stop_max_delay=180000, retry_on_exception=should_retry)
128 def request(method, url, **kwargs):
129 """
130 Central place to make any api v2 api request.
131
132 :param (str) method: The request method ('get', 'put', 'delete', ...).
133 :param (str) url: The full api url to make the request to.
134 :param kwargs: These are passed along (but possibly mutated) to requests.
135 :return: (requests.Response) The response directly from requests.
136
137 """
138 kwargs['headers'] = dict(kwargs.get('headers', {}), **get_headers())
139
140 # Change boolean params to lowercase strings. E.g., `True` --> `'true'`.
141 # Just change the value so that requests handles query string creation.
142 if isinstance(kwargs.get('params'), dict):
143 kwargs['params'] = kwargs['params'].copy()
144 for key in kwargs['params']:
145 if isinstance(kwargs['params'][key], bool):
146 kwargs['params'][key] = _json.dumps(kwargs['params'][key])
147
148 # We have a special json encoding class for non-native objects.
149 if kwargs.get('json') is not None:
150 if kwargs.get('data'):
151 raise exceptions.PlotlyError('Cannot supply data and json kwargs.')
152 kwargs['data'] = _json.dumps(kwargs.pop('json'), sort_keys=True,
153 cls=utils.PlotlyJSONEncoder)
154
155 # The config file determines whether reuqests should *verify*.
156 kwargs['verify'] = config.get_config()['plotly_ssl_verification']
157
158 try:
159 response = requests.request(method, url, **kwargs)
160 except RequestException as e:
161 # The message can be an exception. E.g., MaxRetryError.
162 message = str(getattr(e, 'message', 'No message'))
163 response = getattr(e, 'response', None)
164 status_code = response.status_code if response else None
165 content = response.content if response else 'No content'
166 raise exceptions.PlotlyRequestError(message, status_code, content)
167 validate_response(response)
168 return response
169
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/plotly/api/v2/utils.py b/plotly/api/v2/utils.py
--- a/plotly/api/v2/utils.py
+++ b/plotly/api/v2/utils.py
@@ -114,8 +114,8 @@
def should_retry(exception):
if isinstance(exception, exceptions.PlotlyRequestError):
if (isinstance(exception.status_code, int) and
- 500 <= exception.status_code < 600):
- # Retry on 5XX errors.
+ (500 <= exception.status_code < 600 or exception.status_code == 429)):
+ # Retry on 5XX and 429 (image export throttling) errors.
return True
elif 'Uh oh, an error occurred' in exception.message:
return True
|
{"golden_diff": "diff --git a/plotly/api/v2/utils.py b/plotly/api/v2/utils.py\n--- a/plotly/api/v2/utils.py\n+++ b/plotly/api/v2/utils.py\n@@ -114,8 +114,8 @@\n def should_retry(exception):\n if isinstance(exception, exceptions.PlotlyRequestError):\n if (isinstance(exception.status_code, int) and\n- 500 <= exception.status_code < 600):\n- # Retry on 5XX errors.\n+ (500 <= exception.status_code < 600 or exception.status_code == 429)):\n+ # Retry on 5XX and 429 (image export throttling) errors.\n return True\n elif 'Uh oh, an error occurred' in exception.message:\n return True\n", "issue": "v2 API Retry on 429 status_code if users are throttled?\nRight now we [Only retry on request errors with 5XX status code and use exponential backoff](https://github.com/plotly/plotly.py/pull/1146). But should we also retry for 429 status codes which users receive if they go over Plot.ly's throttling limit? Especially since calls to the image servers are now limited to 1/sec.\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport requests\nfrom requests.compat import json as _json\nfrom requests.exceptions import RequestException\nfrom retrying import retry\nfrom plotly import config, exceptions, version, utils\nfrom plotly.api.utils import basic_auth\n\n\ndef make_params(**kwargs):\n \"\"\"\n Helper to create a params dict, skipping undefined entries.\n\n :returns: (dict) A params dict to pass to `request`.\n\n \"\"\"\n return {k: v for k, v in kwargs.items() if v is not None}\n\n\ndef build_url(resource, id='', route=''):\n \"\"\"\n Create a url for a request on a V2 resource.\n\n :param (str) resource: E.g., 'files', 'plots', 'grids', etc.\n :param (str) id: The unique identifier for the resource.\n :param (str) route: Detail/list route. E.g., 'restore', 'lookup', etc.\n :return: (str) The url.\n\n \"\"\"\n base = config.get_config()['plotly_api_domain']\n formatter = {'base': base, 'resource': resource, 'id': id, 'route': route}\n\n # Add path to base url depending on the input params. Note that `route`\n # can refer to a 'list' or a 'detail' route. 
Since it cannot refer to\n # both at the same time, it's overloaded in this function.\n if id:\n if route:\n url = '{base}/v2/{resource}/{id}/{route}'.format(**formatter)\n else:\n url = '{base}/v2/{resource}/{id}'.format(**formatter)\n else:\n if route:\n url = '{base}/v2/{resource}/{route}'.format(**formatter)\n else:\n url = '{base}/v2/{resource}'.format(**formatter)\n\n return url\n\n\ndef validate_response(response):\n \"\"\"\n Raise a helpful PlotlyRequestError for failed requests.\n\n :param (requests.Response) response: A Response object from an api request.\n :raises: (PlotlyRequestError) If the request failed for any reason.\n :returns: (None)\n\n \"\"\"\n if response.ok:\n return\n\n content = response.content\n status_code = response.status_code\n try:\n parsed_content = response.json()\n except ValueError:\n message = content if content else 'No Content'\n raise exceptions.PlotlyRequestError(message, status_code, content)\n\n message = ''\n if isinstance(parsed_content, dict):\n errors = parsed_content.get('errors', [])\n messages = [error.get('message') for error in errors]\n message = '\\n'.join([msg for msg in messages if msg])\n if not message:\n message = content if content else 'No Content'\n\n raise exceptions.PlotlyRequestError(message, status_code, content)\n\n\ndef get_headers():\n \"\"\"\n Using session credentials/config, get headers for a V2 API request.\n\n Users may have their own proxy layer and so we free up the `authorization`\n header for this purpose (instead adding the user authorization in a new\n `plotly-authorization` header). See pull #239.\n\n :returns: (dict) Headers to add to a requests.request call.\n\n \"\"\"\n creds = config.get_credentials()\n\n headers = {\n 'plotly-client-platform': 'python {}'.format(version.stable_semver()),\n 'content-type': 'application/json'\n }\n\n plotly_auth = basic_auth(creds['username'], creds['api_key'])\n proxy_auth = basic_auth(creds['proxy_username'], creds['proxy_password'])\n\n if config.get_config()['plotly_proxy_authorization']:\n headers['authorization'] = proxy_auth\n if creds['username'] and creds['api_key']:\n headers['plotly-authorization'] = plotly_auth\n else:\n if creds['username'] and creds['api_key']:\n headers['authorization'] = plotly_auth\n\n return headers\n\n\ndef should_retry(exception):\n if isinstance(exception, exceptions.PlotlyRequestError):\n if (isinstance(exception.status_code, int) and\n 500 <= exception.status_code < 600):\n # Retry on 5XX errors.\n return True\n elif 'Uh oh, an error occurred' in exception.message:\n return True\n\n return False\n\n\n@retry(wait_exponential_multiplier=1000, wait_exponential_max=16000,\n stop_max_delay=180000, retry_on_exception=should_retry)\ndef request(method, url, **kwargs):\n \"\"\"\n Central place to make any api v2 api request.\n\n :param (str) method: The request method ('get', 'put', 'delete', ...).\n :param (str) url: The full api url to make the request to.\n :param kwargs: These are passed along (but possibly mutated) to requests.\n :return: (requests.Response) The response directly from requests.\n\n \"\"\"\n kwargs['headers'] = dict(kwargs.get('headers', {}), **get_headers())\n\n # Change boolean params to lowercase strings. 
E.g., `True` --> `'true'`.\n # Just change the value so that requests handles query string creation.\n if isinstance(kwargs.get('params'), dict):\n kwargs['params'] = kwargs['params'].copy()\n for key in kwargs['params']:\n if isinstance(kwargs['params'][key], bool):\n kwargs['params'][key] = _json.dumps(kwargs['params'][key])\n\n # We have a special json encoding class for non-native objects.\n if kwargs.get('json') is not None:\n if kwargs.get('data'):\n raise exceptions.PlotlyError('Cannot supply data and json kwargs.')\n kwargs['data'] = _json.dumps(kwargs.pop('json'), sort_keys=True,\n cls=utils.PlotlyJSONEncoder)\n\n # The config file determines whether reuqests should *verify*.\n kwargs['verify'] = config.get_config()['plotly_ssl_verification']\n\n try:\n response = requests.request(method, url, **kwargs)\n except RequestException as e:\n # The message can be an exception. E.g., MaxRetryError.\n message = str(getattr(e, 'message', 'No message'))\n response = getattr(e, 'response', None)\n status_code = response.status_code if response else None\n content = response.content if response else 'No content'\n raise exceptions.PlotlyRequestError(message, status_code, content)\n validate_response(response)\n return response\n", "path": "plotly/api/v2/utils.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nimport requests\nfrom requests.compat import json as _json\nfrom requests.exceptions import RequestException\nfrom retrying import retry\nfrom plotly import config, exceptions, version, utils\nfrom plotly.api.utils import basic_auth\n\n\ndef make_params(**kwargs):\n \"\"\"\n Helper to create a params dict, skipping undefined entries.\n\n :returns: (dict) A params dict to pass to `request`.\n\n \"\"\"\n return {k: v for k, v in kwargs.items() if v is not None}\n\n\ndef build_url(resource, id='', route=''):\n \"\"\"\n Create a url for a request on a V2 resource.\n\n :param (str) resource: E.g., 'files', 'plots', 'grids', etc.\n :param (str) id: The unique identifier for the resource.\n :param (str) route: Detail/list route. E.g., 'restore', 'lookup', etc.\n :return: (str) The url.\n\n \"\"\"\n base = config.get_config()['plotly_api_domain']\n formatter = {'base': base, 'resource': resource, 'id': id, 'route': route}\n\n # Add path to base url depending on the input params. Note that `route`\n # can refer to a 'list' or a 'detail' route. 
Since it cannot refer to\n # both at the same time, it's overloaded in this function.\n if id:\n if route:\n url = '{base}/v2/{resource}/{id}/{route}'.format(**formatter)\n else:\n url = '{base}/v2/{resource}/{id}'.format(**formatter)\n else:\n if route:\n url = '{base}/v2/{resource}/{route}'.format(**formatter)\n else:\n url = '{base}/v2/{resource}'.format(**formatter)\n\n return url\n\n\ndef validate_response(response):\n \"\"\"\n Raise a helpful PlotlyRequestError for failed requests.\n\n :param (requests.Response) response: A Response object from an api request.\n :raises: (PlotlyRequestError) If the request failed for any reason.\n :returns: (None)\n\n \"\"\"\n if response.ok:\n return\n\n content = response.content\n status_code = response.status_code\n try:\n parsed_content = response.json()\n except ValueError:\n message = content if content else 'No Content'\n raise exceptions.PlotlyRequestError(message, status_code, content)\n\n message = ''\n if isinstance(parsed_content, dict):\n errors = parsed_content.get('errors', [])\n messages = [error.get('message') for error in errors]\n message = '\\n'.join([msg for msg in messages if msg])\n if not message:\n message = content if content else 'No Content'\n\n raise exceptions.PlotlyRequestError(message, status_code, content)\n\n\ndef get_headers():\n \"\"\"\n Using session credentials/config, get headers for a V2 API request.\n\n Users may have their own proxy layer and so we free up the `authorization`\n header for this purpose (instead adding the user authorization in a new\n `plotly-authorization` header). See pull #239.\n\n :returns: (dict) Headers to add to a requests.request call.\n\n \"\"\"\n creds = config.get_credentials()\n\n headers = {\n 'plotly-client-platform': 'python {}'.format(version.stable_semver()),\n 'content-type': 'application/json'\n }\n\n plotly_auth = basic_auth(creds['username'], creds['api_key'])\n proxy_auth = basic_auth(creds['proxy_username'], creds['proxy_password'])\n\n if config.get_config()['plotly_proxy_authorization']:\n headers['authorization'] = proxy_auth\n if creds['username'] and creds['api_key']:\n headers['plotly-authorization'] = plotly_auth\n else:\n if creds['username'] and creds['api_key']:\n headers['authorization'] = plotly_auth\n\n return headers\n\n\ndef should_retry(exception):\n if isinstance(exception, exceptions.PlotlyRequestError):\n if (isinstance(exception.status_code, int) and\n (500 <= exception.status_code < 600 or exception.status_code == 429)):\n # Retry on 5XX and 429 (image export throttling) errors.\n return True\n elif 'Uh oh, an error occurred' in exception.message:\n return True\n\n return False\n\n\n@retry(wait_exponential_multiplier=1000, wait_exponential_max=16000,\n stop_max_delay=180000, retry_on_exception=should_retry)\ndef request(method, url, **kwargs):\n \"\"\"\n Central place to make any api v2 api request.\n\n :param (str) method: The request method ('get', 'put', 'delete', ...).\n :param (str) url: The full api url to make the request to.\n :param kwargs: These are passed along (but possibly mutated) to requests.\n :return: (requests.Response) The response directly from requests.\n\n \"\"\"\n kwargs['headers'] = dict(kwargs.get('headers', {}), **get_headers())\n\n # Change boolean params to lowercase strings. 
E.g., `True` --> `'true'`.\n # Just change the value so that requests handles query string creation.\n if isinstance(kwargs.get('params'), dict):\n kwargs['params'] = kwargs['params'].copy()\n for key in kwargs['params']:\n if isinstance(kwargs['params'][key], bool):\n kwargs['params'][key] = _json.dumps(kwargs['params'][key])\n\n # We have a special json encoding class for non-native objects.\n if kwargs.get('json') is not None:\n if kwargs.get('data'):\n raise exceptions.PlotlyError('Cannot supply data and json kwargs.')\n kwargs['data'] = _json.dumps(kwargs.pop('json'), sort_keys=True,\n cls=utils.PlotlyJSONEncoder)\n\n # The config file determines whether reuqests should *verify*.\n kwargs['verify'] = config.get_config()['plotly_ssl_verification']\n\n try:\n response = requests.request(method, url, **kwargs)\n except RequestException as e:\n # The message can be an exception. E.g., MaxRetryError.\n message = str(getattr(e, 'message', 'No message'))\n response = getattr(e, 'response', None)\n status_code = response.status_code if response else None\n content = response.content if response else 'No content'\n raise exceptions.PlotlyRequestError(message, status_code, content)\n validate_response(response)\n return response\n", "path": "plotly/api/v2/utils.py"}]}
| 2,177 | 182 |
gh_patches_debug_33287
|
rasdani/github-patches
|
git_diff
|
ranaroussi__yfinance-1093
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Issues with Date field
This issue appeared in version 0.1.77. I did no other package or python version updates since then.
Following script used to work on the date field:
`df[df.Date > '2020-01-01']`
But I am now getting following error message: TypeError: '>' not supported between instances of 'Timestamp' and 'str'
When I solve that issue by converting the string to a date (pd.to_datetime), I am getting other issues like: ValueError: Tz-aware datetime.datetime cannot be converted to datetime64 unless utc=True
Somewhat further in my script, there is a pd.merge function based on the Date field received from yfinance. It gives now following error: Cannot compare between dtype('<M8[ns]') and dtype('0')
So I guess something has changed with how the dates are passed through. If yes, do you know how I can strip the received date from all these tz related stuff and just use it as a datetime64? I tried things like .dt.normalize() or .dt.date, but it always seems to give errors.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `yfinance/multi.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # yfinance - market data downloader
5 # https://github.com/ranaroussi/yfinance
6 #
7 # Copyright 2017-2019 Ran Aroussi
8 #
9 # Licensed under the Apache License, Version 2.0 (the "License");
10 # you may not use this file except in compliance with the License.
11 # You may obtain a copy of the License at
12 #
13 # http://www.apache.org/licenses/LICENSE-2.0
14 #
15 # Unless required by applicable law or agreed to in writing, software
16 # distributed under the License is distributed on an "AS IS" BASIS,
17 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
18 # See the License for the specific language governing permissions and
19 # limitations under the License.
20 #
21
22 from __future__ import print_function
23
24 import time as _time
25 import multitasking as _multitasking
26 import pandas as _pd
27
28 from . import Ticker, utils
29 from . import shared
30
31
32 def download(tickers, start=None, end=None, actions=False, threads=True,
33 group_by='column', auto_adjust=False, back_adjust=False, keepna=False,
34 progress=True, period="max", show_errors=True, interval="1d", prepost=False,
35 proxy=None, rounding=False, timeout=None, **kwargs):
36 """Download yahoo tickers
37 :Parameters:
38 tickers : str, list
39 List of tickers to download
40 period : str
41 Valid periods: 1d,5d,1mo,3mo,6mo,1y,2y,5y,10y,ytd,max
42 Either Use period parameter or use start and end
43 interval : str
44 Valid intervals: 1m,2m,5m,15m,30m,60m,90m,1h,1d,5d,1wk,1mo,3mo
45 Intraday data cannot extend last 60 days
46 start: str
47 Download start date string (YYYY-MM-DD) or _datetime.
48 Default is 1900-01-01
49 end: str
50 Download end date string (YYYY-MM-DD) or _datetime.
51 Default is now
52 group_by : str
53 Group by 'ticker' or 'column' (default)
54 prepost : bool
55 Include Pre and Post market data in results?
56 Default is False
57 auto_adjust: bool
58 Adjust all OHLC automatically? Default is False
59 keepna: bool
60 Keep NaN rows returned by Yahoo?
61 Default is False
62 actions: bool
63 Download dividend + stock splits data. Default is False
64 threads: bool / int
65 How many threads to use for mass downloading. Default is True
66 proxy: str
67 Optional. Proxy server URL scheme. Default is None
68 rounding: bool
69 Optional. Round values to 2 decimal places?
70 show_errors: bool
71 Optional. Doesn't print errors if False
72 timeout: None or float
73 If not None stops waiting for a response after given number of
74 seconds. (Can also be a fraction of a second e.g. 0.01)
75 """
76
77 # create ticker list
78 tickers = tickers if isinstance(
79 tickers, (list, set, tuple)) else tickers.replace(',', ' ').split()
80
81 # accept isin as ticker
82 shared._ISINS = {}
83 _tickers_ = []
84 for ticker in tickers:
85 if utils.is_isin(ticker):
86 isin = ticker
87 ticker = utils.get_ticker_by_isin(ticker, proxy)
88 shared._ISINS[ticker] = isin
89 _tickers_.append(ticker)
90
91 tickers = _tickers_
92
93 tickers = list(set([ticker.upper() for ticker in tickers]))
94
95 if progress:
96 shared._PROGRESS_BAR = utils.ProgressBar(len(tickers), 'completed')
97
98 # reset shared._DFS
99 shared._DFS = {}
100 shared._ERRORS = {}
101
102 # download using threads
103 if threads:
104 if threads is True:
105 threads = min([len(tickers), _multitasking.cpu_count() * 2])
106 _multitasking.set_max_threads(threads)
107 for i, ticker in enumerate(tickers):
108 _download_one_threaded(ticker, period=period, interval=interval,
109 start=start, end=end, prepost=prepost,
110 actions=actions, auto_adjust=auto_adjust,
111 back_adjust=back_adjust, keepna=keepna,
112 progress=(progress and i > 0), proxy=proxy,
113 rounding=rounding, timeout=timeout)
114 while len(shared._DFS) < len(tickers):
115 _time.sleep(0.01)
116
117 # download synchronously
118 else:
119 for i, ticker in enumerate(tickers):
120 data = _download_one(ticker, period=period, interval=interval,
121 start=start, end=end, prepost=prepost,
122 actions=actions, auto_adjust=auto_adjust,
123 back_adjust=back_adjust, keepna=keepna, proxy=proxy,
124 rounding=rounding, timeout=timeout)
125 shared._DFS[ticker.upper()] = data
126 if progress:
127 shared._PROGRESS_BAR.animate()
128
129 if progress:
130 shared._PROGRESS_BAR.completed()
131
132 if shared._ERRORS and show_errors:
133 print('\n%.f Failed download%s:' % (
134 len(shared._ERRORS), 's' if len(shared._ERRORS) > 1 else ''))
135 # print(shared._ERRORS)
136 print("\n".join(['- %s: %s' %
137 v for v in list(shared._ERRORS.items())]))
138
139 if len(tickers) == 1:
140 ticker = tickers[0]
141 return shared._DFS[shared._ISINS.get(ticker, ticker)]
142
143 try:
144 data = _pd.concat(shared._DFS.values(), axis=1,
145 keys=shared._DFS.keys())
146 except Exception:
147 _realign_dfs()
148 data = _pd.concat(shared._DFS.values(), axis=1,
149 keys=shared._DFS.keys())
150
151 # switch names back to isins if applicable
152 data.rename(columns=shared._ISINS, inplace=True)
153
154 if group_by == 'column':
155 data.columns = data.columns.swaplevel(0, 1)
156 data.sort_index(level=0, axis=1, inplace=True)
157
158 return data
159
160
161 def _realign_dfs():
162 idx_len = 0
163 idx = None
164
165 for df in shared._DFS.values():
166 if len(df) > idx_len:
167 idx_len = len(df)
168 idx = df.index
169
170 for key in shared._DFS.keys():
171 try:
172 shared._DFS[key] = _pd.DataFrame(
173 index=idx, data=shared._DFS[key]).drop_duplicates()
174 except Exception:
175 shared._DFS[key] = _pd.concat([
176 utils.empty_df(idx), shared._DFS[key].dropna()
177 ], axis=0, sort=True)
178
179 # remove duplicate index
180 shared._DFS[key] = shared._DFS[key].loc[
181 ~shared._DFS[key].index.duplicated(keep='last')]
182
183
184 @_multitasking.task
185 def _download_one_threaded(ticker, start=None, end=None,
186 auto_adjust=False, back_adjust=False,
187 actions=False, progress=True, period="max",
188 interval="1d", prepost=False, proxy=None,
189 keepna=False, rounding=False, timeout=None):
190
191 data = _download_one(ticker, start, end, auto_adjust, back_adjust,
192 actions, period, interval, prepost, proxy, rounding,
193 keepna, timeout)
194 shared._DFS[ticker.upper()] = data
195 if progress:
196 shared._PROGRESS_BAR.animate()
197
198
199 def _download_one(ticker, start=None, end=None,
200 auto_adjust=False, back_adjust=False,
201 actions=False, period="max", interval="1d",
202 prepost=False, proxy=None, rounding=False,
203 keepna=False, timeout=None):
204
205 return Ticker(ticker).history(period=period, interval=interval,
206 start=start, end=end, prepost=prepost,
207 actions=actions, auto_adjust=auto_adjust,
208 back_adjust=back_adjust, proxy=proxy,
209 rounding=rounding, keepna=keepna, many=True,
210 timeout=timeout)
211
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/yfinance/multi.py b/yfinance/multi.py
--- a/yfinance/multi.py
+++ b/yfinance/multi.py
@@ -29,7 +29,7 @@
from . import shared
-def download(tickers, start=None, end=None, actions=False, threads=True,
+def download(tickers, start=None, end=None, actions=False, threads=True, ignore_tz=True,
group_by='column', auto_adjust=False, back_adjust=False, keepna=False,
progress=True, period="max", show_errors=True, interval="1d", prepost=False,
proxy=None, rounding=False, timeout=None, **kwargs):
@@ -63,6 +63,9 @@
Download dividend + stock splits data. Default is False
threads: bool / int
How many threads to use for mass downloading. Default is True
+ ignore_tz: bool
+ When combining from different timezones, ignore that part of datetime.
+ Default is True
proxy: str
Optional. Proxy server URL scheme. Default is None
rounding: bool
@@ -140,12 +143,17 @@
ticker = tickers[0]
return shared._DFS[shared._ISINS.get(ticker, ticker)]
+ if ignore_tz:
+ for tkr in shared._DFS.keys():
+ if (shared._DFS[tkr] is not None) and (shared._DFS[tkr].shape[0]>0):
+ shared._DFS[tkr].index = shared._DFS[tkr].index.tz_localize(None)
+
try:
- data = _pd.concat(shared._DFS.values(), axis=1,
+ data = _pd.concat(shared._DFS.values(), axis=1, sort=True,
keys=shared._DFS.keys())
except Exception:
_realign_dfs()
- data = _pd.concat(shared._DFS.values(), axis=1,
+ data = _pd.concat(shared._DFS.values(), axis=1, sort=True,
keys=shared._DFS.keys())
# switch names back to isins if applicable
|
{"golden_diff": "diff --git a/yfinance/multi.py b/yfinance/multi.py\n--- a/yfinance/multi.py\n+++ b/yfinance/multi.py\n@@ -29,7 +29,7 @@\n from . import shared\n \n \n-def download(tickers, start=None, end=None, actions=False, threads=True,\n+def download(tickers, start=None, end=None, actions=False, threads=True, ignore_tz=True, \n group_by='column', auto_adjust=False, back_adjust=False, keepna=False,\n progress=True, period=\"max\", show_errors=True, interval=\"1d\", prepost=False,\n proxy=None, rounding=False, timeout=None, **kwargs):\n@@ -63,6 +63,9 @@\n Download dividend + stock splits data. Default is False\n threads: bool / int\n How many threads to use for mass downloading. Default is True\n+ ignore_tz: bool\n+ When combining from different timezones, ignore that part of datetime.\n+ Default is True\n proxy: str\n Optional. Proxy server URL scheme. Default is None\n rounding: bool\n@@ -140,12 +143,17 @@\n ticker = tickers[0]\n return shared._DFS[shared._ISINS.get(ticker, ticker)]\n \n+ if ignore_tz:\n+ for tkr in shared._DFS.keys():\n+ if (shared._DFS[tkr] is not None) and (shared._DFS[tkr].shape[0]>0):\n+ shared._DFS[tkr].index = shared._DFS[tkr].index.tz_localize(None)\n+\n try:\n- data = _pd.concat(shared._DFS.values(), axis=1,\n+ data = _pd.concat(shared._DFS.values(), axis=1, sort=True,\n keys=shared._DFS.keys())\n except Exception:\n _realign_dfs()\n- data = _pd.concat(shared._DFS.values(), axis=1,\n+ data = _pd.concat(shared._DFS.values(), axis=1, sort=True,\n keys=shared._DFS.keys())\n \n # switch names back to isins if applicable\n", "issue": "Issues with Date field\nThis issue appeared in version 0.1.77. I did no other package or python version updates since then.\r\n\r\nFollowing script used to work on the date field:\r\n`df[df.Date > '2020-01-01']`\r\n\r\nBut I am now getting following error message: TypeError: '>' not supported between instances of 'Timestamp' and 'str'\r\n\r\nWhen I solve that issue by converting the string to a date (pd.to_datetime), I am getting other issues like: ValueError: Tz-aware datetime.datetime cannot be converted to datetime64 unless utc=True\r\n\r\nSomewhat further in my script, there is a pd.merge function based on the Date field received from yfinance. It gives now following error: Cannot compare between dtype('<M8[ns]') and dtype('0')\r\n\r\nSo I guess something has changed with how the dates are passed through. If yes, do you know how I can strip the received date from all these tz related stuff and just use it as a datetime64? I tried things like .dt.normalize() or .dt.date, but it always seems to give errors. \n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# yfinance - market data downloader\n# https://github.com/ranaroussi/yfinance\n#\n# Copyright 2017-2019 Ran Aroussi\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\nfrom __future__ import print_function\n\nimport time as _time\nimport multitasking as _multitasking\nimport pandas as _pd\n\nfrom . import Ticker, utils\nfrom . 
import shared\n\n\ndef download(tickers, start=None, end=None, actions=False, threads=True,\n group_by='column', auto_adjust=False, back_adjust=False, keepna=False,\n progress=True, period=\"max\", show_errors=True, interval=\"1d\", prepost=False,\n proxy=None, rounding=False, timeout=None, **kwargs):\n \"\"\"Download yahoo tickers\n :Parameters:\n tickers : str, list\n List of tickers to download\n period : str\n Valid periods: 1d,5d,1mo,3mo,6mo,1y,2y,5y,10y,ytd,max\n Either Use period parameter or use start and end\n interval : str\n Valid intervals: 1m,2m,5m,15m,30m,60m,90m,1h,1d,5d,1wk,1mo,3mo\n Intraday data cannot extend last 60 days\n start: str\n Download start date string (YYYY-MM-DD) or _datetime.\n Default is 1900-01-01\n end: str\n Download end date string (YYYY-MM-DD) or _datetime.\n Default is now\n group_by : str\n Group by 'ticker' or 'column' (default)\n prepost : bool\n Include Pre and Post market data in results?\n Default is False\n auto_adjust: bool\n Adjust all OHLC automatically? Default is False\n keepna: bool\n Keep NaN rows returned by Yahoo?\n Default is False\n actions: bool\n Download dividend + stock splits data. Default is False\n threads: bool / int\n How many threads to use for mass downloading. Default is True\n proxy: str\n Optional. Proxy server URL scheme. Default is None\n rounding: bool\n Optional. Round values to 2 decimal places?\n show_errors: bool\n Optional. Doesn't print errors if False\n timeout: None or float\n If not None stops waiting for a response after given number of\n seconds. (Can also be a fraction of a second e.g. 0.01)\n \"\"\"\n\n # create ticker list\n tickers = tickers if isinstance(\n tickers, (list, set, tuple)) else tickers.replace(',', ' ').split()\n\n # accept isin as ticker\n shared._ISINS = {}\n _tickers_ = []\n for ticker in tickers:\n if utils.is_isin(ticker):\n isin = ticker\n ticker = utils.get_ticker_by_isin(ticker, proxy)\n shared._ISINS[ticker] = isin\n _tickers_.append(ticker)\n\n tickers = _tickers_\n\n tickers = list(set([ticker.upper() for ticker in tickers]))\n\n if progress:\n shared._PROGRESS_BAR = utils.ProgressBar(len(tickers), 'completed')\n\n # reset shared._DFS\n shared._DFS = {}\n shared._ERRORS = {}\n\n # download using threads\n if threads:\n if threads is True:\n threads = min([len(tickers), _multitasking.cpu_count() * 2])\n _multitasking.set_max_threads(threads)\n for i, ticker in enumerate(tickers):\n _download_one_threaded(ticker, period=period, interval=interval,\n start=start, end=end, prepost=prepost,\n actions=actions, auto_adjust=auto_adjust,\n back_adjust=back_adjust, keepna=keepna,\n progress=(progress and i > 0), proxy=proxy,\n rounding=rounding, timeout=timeout)\n while len(shared._DFS) < len(tickers):\n _time.sleep(0.01)\n\n # download synchronously\n else:\n for i, ticker in enumerate(tickers):\n data = _download_one(ticker, period=period, interval=interval,\n start=start, end=end, prepost=prepost,\n actions=actions, auto_adjust=auto_adjust,\n back_adjust=back_adjust, keepna=keepna, proxy=proxy,\n rounding=rounding, timeout=timeout)\n shared._DFS[ticker.upper()] = data\n if progress:\n shared._PROGRESS_BAR.animate()\n\n if progress:\n shared._PROGRESS_BAR.completed()\n\n if shared._ERRORS and show_errors:\n print('\\n%.f Failed download%s:' % (\n len(shared._ERRORS), 's' if len(shared._ERRORS) > 1 else ''))\n # print(shared._ERRORS)\n print(\"\\n\".join(['- %s: %s' %\n v for v in list(shared._ERRORS.items())]))\n\n if len(tickers) == 1:\n ticker = tickers[0]\n return 
shared._DFS[shared._ISINS.get(ticker, ticker)]\n\n try:\n data = _pd.concat(shared._DFS.values(), axis=1,\n keys=shared._DFS.keys())\n except Exception:\n _realign_dfs()\n data = _pd.concat(shared._DFS.values(), axis=1,\n keys=shared._DFS.keys())\n\n # switch names back to isins if applicable\n data.rename(columns=shared._ISINS, inplace=True)\n\n if group_by == 'column':\n data.columns = data.columns.swaplevel(0, 1)\n data.sort_index(level=0, axis=1, inplace=True)\n\n return data\n\n\ndef _realign_dfs():\n idx_len = 0\n idx = None\n\n for df in shared._DFS.values():\n if len(df) > idx_len:\n idx_len = len(df)\n idx = df.index\n\n for key in shared._DFS.keys():\n try:\n shared._DFS[key] = _pd.DataFrame(\n index=idx, data=shared._DFS[key]).drop_duplicates()\n except Exception:\n shared._DFS[key] = _pd.concat([\n utils.empty_df(idx), shared._DFS[key].dropna()\n ], axis=0, sort=True)\n\n # remove duplicate index\n shared._DFS[key] = shared._DFS[key].loc[\n ~shared._DFS[key].index.duplicated(keep='last')]\n\n\n@_multitasking.task\ndef _download_one_threaded(ticker, start=None, end=None,\n auto_adjust=False, back_adjust=False,\n actions=False, progress=True, period=\"max\",\n interval=\"1d\", prepost=False, proxy=None,\n keepna=False, rounding=False, timeout=None):\n\n data = _download_one(ticker, start, end, auto_adjust, back_adjust,\n actions, period, interval, prepost, proxy, rounding,\n keepna, timeout)\n shared._DFS[ticker.upper()] = data\n if progress:\n shared._PROGRESS_BAR.animate()\n\n\ndef _download_one(ticker, start=None, end=None,\n auto_adjust=False, back_adjust=False,\n actions=False, period=\"max\", interval=\"1d\",\n prepost=False, proxy=None, rounding=False,\n keepna=False, timeout=None):\n\n return Ticker(ticker).history(period=period, interval=interval,\n start=start, end=end, prepost=prepost,\n actions=actions, auto_adjust=auto_adjust,\n back_adjust=back_adjust, proxy=proxy,\n rounding=rounding, keepna=keepna, many=True,\n timeout=timeout)\n", "path": "yfinance/multi.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# yfinance - market data downloader\n# https://github.com/ranaroussi/yfinance\n#\n# Copyright 2017-2019 Ran Aroussi\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\nfrom __future__ import print_function\n\nimport time as _time\nimport multitasking as _multitasking\nimport pandas as _pd\n\nfrom . import Ticker, utils\nfrom . 
import shared\n\n\ndef download(tickers, start=None, end=None, actions=False, threads=True, ignore_tz=True, \n group_by='column', auto_adjust=False, back_adjust=False, keepna=False,\n progress=True, period=\"max\", show_errors=True, interval=\"1d\", prepost=False,\n proxy=None, rounding=False, timeout=None, **kwargs):\n \"\"\"Download yahoo tickers\n :Parameters:\n tickers : str, list\n List of tickers to download\n period : str\n Valid periods: 1d,5d,1mo,3mo,6mo,1y,2y,5y,10y,ytd,max\n Either Use period parameter or use start and end\n interval : str\n Valid intervals: 1m,2m,5m,15m,30m,60m,90m,1h,1d,5d,1wk,1mo,3mo\n Intraday data cannot extend last 60 days\n start: str\n Download start date string (YYYY-MM-DD) or _datetime.\n Default is 1900-01-01\n end: str\n Download end date string (YYYY-MM-DD) or _datetime.\n Default is now\n group_by : str\n Group by 'ticker' or 'column' (default)\n prepost : bool\n Include Pre and Post market data in results?\n Default is False\n auto_adjust: bool\n Adjust all OHLC automatically? Default is False\n keepna: bool\n Keep NaN rows returned by Yahoo?\n Default is False\n actions: bool\n Download dividend + stock splits data. Default is False\n threads: bool / int\n How many threads to use for mass downloading. Default is True\n ignore_tz: bool\n When combining from different timezones, ignore that part of datetime.\n Default is True\n proxy: str\n Optional. Proxy server URL scheme. Default is None\n rounding: bool\n Optional. Round values to 2 decimal places?\n show_errors: bool\n Optional. Doesn't print errors if False\n timeout: None or float\n If not None stops waiting for a response after given number of\n seconds. (Can also be a fraction of a second e.g. 0.01)\n \"\"\"\n\n # create ticker list\n tickers = tickers if isinstance(\n tickers, (list, set, tuple)) else tickers.replace(',', ' ').split()\n\n # accept isin as ticker\n shared._ISINS = {}\n _tickers_ = []\n for ticker in tickers:\n if utils.is_isin(ticker):\n isin = ticker\n ticker = utils.get_ticker_by_isin(ticker, proxy)\n shared._ISINS[ticker] = isin\n _tickers_.append(ticker)\n\n tickers = _tickers_\n\n tickers = list(set([ticker.upper() for ticker in tickers]))\n\n if progress:\n shared._PROGRESS_BAR = utils.ProgressBar(len(tickers), 'completed')\n\n # reset shared._DFS\n shared._DFS = {}\n shared._ERRORS = {}\n\n # download using threads\n if threads:\n if threads is True:\n threads = min([len(tickers), _multitasking.cpu_count() * 2])\n _multitasking.set_max_threads(threads)\n for i, ticker in enumerate(tickers):\n _download_one_threaded(ticker, period=period, interval=interval,\n start=start, end=end, prepost=prepost,\n actions=actions, auto_adjust=auto_adjust,\n back_adjust=back_adjust, keepna=keepna,\n progress=(progress and i > 0), proxy=proxy,\n rounding=rounding, timeout=timeout)\n while len(shared._DFS) < len(tickers):\n _time.sleep(0.01)\n\n # download synchronously\n else:\n for i, ticker in enumerate(tickers):\n data = _download_one(ticker, period=period, interval=interval,\n start=start, end=end, prepost=prepost,\n actions=actions, auto_adjust=auto_adjust,\n back_adjust=back_adjust, keepna=keepna, proxy=proxy,\n rounding=rounding, timeout=timeout)\n shared._DFS[ticker.upper()] = data\n if progress:\n shared._PROGRESS_BAR.animate()\n\n if progress:\n shared._PROGRESS_BAR.completed()\n\n if shared._ERRORS and show_errors:\n print('\\n%.f Failed download%s:' % (\n len(shared._ERRORS), 's' if len(shared._ERRORS) > 1 else ''))\n # print(shared._ERRORS)\n print(\"\\n\".join(['- %s: 
%s' %\n v for v in list(shared._ERRORS.items())]))\n\n if len(tickers) == 1:\n ticker = tickers[0]\n return shared._DFS[shared._ISINS.get(ticker, ticker)]\n\n if ignore_tz:\n for tkr in shared._DFS.keys():\n if (shared._DFS[tkr] is not None) and (shared._DFS[tkr].shape[0]>0):\n shared._DFS[tkr].index = shared._DFS[tkr].index.tz_localize(None)\n\n try:\n data = _pd.concat(shared._DFS.values(), axis=1, sort=True,\n keys=shared._DFS.keys())\n except Exception:\n _realign_dfs()\n data = _pd.concat(shared._DFS.values(), axis=1, sort=True,\n keys=shared._DFS.keys())\n\n # switch names back to isins if applicable\n data.rename(columns=shared._ISINS, inplace=True)\n\n if group_by == 'column':\n data.columns = data.columns.swaplevel(0, 1)\n data.sort_index(level=0, axis=1, inplace=True)\n\n return data\n\n\ndef _realign_dfs():\n idx_len = 0\n idx = None\n\n for df in shared._DFS.values():\n if len(df) > idx_len:\n idx_len = len(df)\n idx = df.index\n\n for key in shared._DFS.keys():\n try:\n shared._DFS[key] = _pd.DataFrame(\n index=idx, data=shared._DFS[key]).drop_duplicates()\n except Exception:\n shared._DFS[key] = _pd.concat([\n utils.empty_df(idx), shared._DFS[key].dropna()\n ], axis=0, sort=True)\n\n # remove duplicate index\n shared._DFS[key] = shared._DFS[key].loc[\n ~shared._DFS[key].index.duplicated(keep='last')]\n\n\n@_multitasking.task\ndef _download_one_threaded(ticker, start=None, end=None,\n auto_adjust=False, back_adjust=False,\n actions=False, progress=True, period=\"max\",\n interval=\"1d\", prepost=False, proxy=None,\n keepna=False, rounding=False, timeout=None):\n\n data = _download_one(ticker, start, end, auto_adjust, back_adjust,\n actions, period, interval, prepost, proxy, rounding,\n keepna, timeout)\n shared._DFS[ticker.upper()] = data\n if progress:\n shared._PROGRESS_BAR.animate()\n\n\ndef _download_one(ticker, start=None, end=None,\n auto_adjust=False, back_adjust=False,\n actions=False, period=\"max\", interval=\"1d\",\n prepost=False, proxy=None, rounding=False,\n keepna=False, timeout=None):\n\n return Ticker(ticker).history(period=period, interval=interval,\n start=start, end=end, prepost=prepost,\n actions=actions, auto_adjust=auto_adjust,\n back_adjust=back_adjust, proxy=proxy,\n rounding=rounding, keepna=keepna, many=True,\n timeout=timeout)\n", "path": "yfinance/multi.py"}]}
| 2,855 | 466 |
gh_patches_debug_7628
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-3074
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add OpenAPI Specification for /schemas/ endpoint and fix the incorrect specs
## Problem
In order to ensure the accuracy of the specs generated by drf-spectacular for /schemas/ endpoint , we will generate an OpenAPI Specification specifically for this endpoint. This will allow us to verify its correctness by comparing it with the existing test cases and the DRF browsable API page.
## Proposed solution
* Implement custom preprocessing hook function to filter out all endpoints except for the /schemas/ endpoint.The hook function selectively retains only the endpoint paths that match the /schemas/
* Configure the PREPROCESSING_HOOKS setting with the custom hook function, we ensure that only the /schemas/ endpoint is considered during the OpenAPI specification generation process.
* Finally, generate the spec file using the `manage.py spectacular --color --file schema.yml` command
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `config/settings/openapi.py`
Content:
```
1 def custom_preprocessing_hook(endpoints):
2 filtered = []
3 for (path, path_regex, method, callback) in endpoints:
4 # Remove all but DRF API endpoints
5 if path.startswith("/api/db/v0/databases/") or path.startswith("/api/db/v0/data_files/"):
6 filtered.append((path, path_regex, method, callback))
7 return filtered
8
9
10 def remove_url_prefix_hook(result, **kwargs):
11 # Remove namespace and version URL prefix from the operation Id of the generated API schema
12 for path, path_info in result['paths'].items():
13 for method, operation in path_info.items():
14 operation_id = operation.get('operationId')
15 if operation_id:
16 if path.startswith('/api/db/v0/'):
17 operation['operationId'] = operation_id.replace('db_v0_', '')
18 elif path.startswith('/api/ui/v0/'):
19 operation['operationId'] = operation_id.replace('ui_v0_', '')
20
21 return result
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/config/settings/openapi.py b/config/settings/openapi.py
--- a/config/settings/openapi.py
+++ b/config/settings/openapi.py
@@ -2,7 +2,7 @@
filtered = []
for (path, path_regex, method, callback) in endpoints:
# Remove all but DRF API endpoints
- if path.startswith("/api/db/v0/databases/") or path.startswith("/api/db/v0/data_files/"):
+ if path.startswith("/api/db/v0/databases/") or path.startswith("/api/db/v0/data_files/") or path.startswith("/api/db/v0/schemas/"):
filtered.append((path, path_regex, method, callback))
return filtered
|
{"golden_diff": "diff --git a/config/settings/openapi.py b/config/settings/openapi.py\n--- a/config/settings/openapi.py\n+++ b/config/settings/openapi.py\n@@ -2,7 +2,7 @@\n filtered = []\n for (path, path_regex, method, callback) in endpoints:\n # Remove all but DRF API endpoints\n- if path.startswith(\"/api/db/v0/databases/\") or path.startswith(\"/api/db/v0/data_files/\"):\n+ if path.startswith(\"/api/db/v0/databases/\") or path.startswith(\"/api/db/v0/data_files/\") or path.startswith(\"/api/db/v0/schemas/\"):\n filtered.append((path, path_regex, method, callback))\n return filtered\n", "issue": "Add OpenAPI Specification for /schemas/ endpoint and fix the incorrect specs\n## Problem\r\nIn order to ensure the accuracy of the specs generated by drf-spectacular for /schemas/ endpoint , we will generate an OpenAPI Specification specifically for this endpoint. This will allow us to verify its correctness by comparing it with the existing test cases and the DRF browsable API page.\r\n\r\n## Proposed solution\r\n* Implement custom preprocessing hook function to filter out all endpoints except for the /schemas/ endpoint.The hook function selectively retains only the endpoint paths that match the /schemas/ \r\n* Configure the PREPROCESSING_HOOKS setting with the custom hook function, we ensure that only the /schemas/ endpoint is considered during the OpenAPI specification generation process.\r\n* Finally, generate the spec file using the `manage.py spectacular --color --file schema.yml` command\n", "before_files": [{"content": "def custom_preprocessing_hook(endpoints):\n filtered = []\n for (path, path_regex, method, callback) in endpoints:\n # Remove all but DRF API endpoints\n if path.startswith(\"/api/db/v0/databases/\") or path.startswith(\"/api/db/v0/data_files/\"):\n filtered.append((path, path_regex, method, callback))\n return filtered\n\n\ndef remove_url_prefix_hook(result, **kwargs):\n # Remove namespace and version URL prefix from the operation Id of the generated API schema\n for path, path_info in result['paths'].items():\n for method, operation in path_info.items():\n operation_id = operation.get('operationId')\n if operation_id:\n if path.startswith('/api/db/v0/'):\n operation['operationId'] = operation_id.replace('db_v0_', '')\n elif path.startswith('/api/ui/v0/'):\n operation['operationId'] = operation_id.replace('ui_v0_', '')\n\n return result\n", "path": "config/settings/openapi.py"}], "after_files": [{"content": "def custom_preprocessing_hook(endpoints):\n filtered = []\n for (path, path_regex, method, callback) in endpoints:\n # Remove all but DRF API endpoints\n if path.startswith(\"/api/db/v0/databases/\") or path.startswith(\"/api/db/v0/data_files/\") or path.startswith(\"/api/db/v0/schemas/\"):\n filtered.append((path, path_regex, method, callback))\n return filtered\n\n\ndef remove_url_prefix_hook(result, **kwargs):\n # Remove namespace and version URL prefix from the operation Id of the generated API schema\n for path, path_info in result['paths'].items():\n for method, operation in path_info.items():\n operation_id = operation.get('operationId')\n if operation_id:\n if path.startswith('/api/db/v0/'):\n operation['operationId'] = operation_id.replace('db_v0_', '')\n elif path.startswith('/api/ui/v0/'):\n operation['operationId'] = operation_id.replace('ui_v0_', '')\n\n return result\n", "path": "config/settings/openapi.py"}]}
| 671 | 149 |
gh_patches_debug_5497
|
rasdani/github-patches
|
git_diff
|
scikit-hep__pyhf-999
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Keep tighter version constraints on TensorFlow releases
# Description
As Issue #997 makes it clear that minor releases might as well be major releases for TensorFlow, then it is probably worth keeping tighter version constraints on them and just watching the releases of TensorFlow and TensorFlow Probability to see when we can relax these.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup
2
3 extras_require = {
4 'shellcomplete': ['click_completion'],
5 'tensorflow': [
6 'tensorflow~=2.0',
7 'tensorflow-probability~=0.10', # TODO: Temp patch until tfp v0.11
8 ],
9 'torch': ['torch~=1.2'],
10 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],
11 'xmlio': ['uproot~=3.6'], # Future proof against uproot4 API changes
12 'minuit': ['iminuit~=1.4,>=1.4.3'], # Use "name" keyword in MINUIT optimizer
13 }
14 extras_require['backends'] = sorted(
15 set(
16 extras_require['tensorflow']
17 + extras_require['torch']
18 + extras_require['jax']
19 + extras_require['minuit']
20 )
21 )
22 extras_require['contrib'] = sorted(set(['matplotlib']))
23 extras_require['lint'] = sorted(set(['pyflakes', 'black']))
24
25 extras_require['test'] = sorted(
26 set(
27 extras_require['backends']
28 + extras_require['xmlio']
29 + extras_require['contrib']
30 + extras_require['shellcomplete']
31 + [
32 'pytest~=3.5',
33 'pytest-cov>=2.5.1',
34 'pytest-mock',
35 'pytest-benchmark[histogram]',
36 'pytest-console-scripts',
37 'pytest-mpl',
38 'pydocstyle',
39 'coverage>=4.0', # coveralls
40 'papermill~=2.0',
41 'nteract-scrapbook~=0.2',
42 'jupyter',
43 'uproot~=3.3',
44 'graphviz',
45 'jsonpatch',
46 ]
47 )
48 )
49 extras_require['docs'] = sorted(
50 set(
51 [
52 'sphinx>=3.1.2',
53 'sphinxcontrib-bibtex',
54 'sphinx-click',
55 'sphinx_rtd_theme',
56 'nbsphinx',
57 'ipywidgets',
58 'sphinx-issues',
59 'sphinx-copybutton>0.2.9',
60 ]
61 )
62 )
63 extras_require['develop'] = sorted(
64 set(
65 extras_require['docs']
66 + extras_require['lint']
67 + extras_require['test']
68 + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'check-manifest', 'twine']
69 )
70 )
71 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
72
73
74 setup(
75 extras_require=extras_require,
76 use_scm_version=lambda: {'local_scheme': lambda version: ''},
77 )
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -3,8 +3,8 @@
extras_require = {
'shellcomplete': ['click_completion'],
'tensorflow': [
- 'tensorflow~=2.0',
- 'tensorflow-probability~=0.10', # TODO: Temp patch until tfp v0.11
+ 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major
+ 'tensorflow-probability~=0.10.0',
],
'torch': ['torch~=1.2'],
'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -3,8 +3,8 @@\n extras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n- 'tensorflow~=2.0',\n- 'tensorflow-probability~=0.10', # TODO: Temp patch until tfp v0.11\n+ 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major\n+ 'tensorflow-probability~=0.10.0',\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n", "issue": "Keep tighter version constraints on TensorFlow releases\n# Description\r\n\r\nAs Issue #997 makes it clear that minor releases might as well be major releases for TensorFlow, then it is probably worth keeping tighter version constraints on them and just watching the releases of TensorFlow and TensorFlow Probability to see when we can relax these.\n", "before_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow~=2.0',\n 'tensorflow-probability~=0.10', # TODO: Temp patch until tfp v0.11\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n 'xmlio': ['uproot~=3.6'], # Future proof against uproot4 API changes\n 'minuit': ['iminuit~=1.4,>=1.4.3'], # Use \"name\" keyword in MINUIT optimizer\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted(set(['matplotlib']))\nextras_require['lint'] = sorted(set(['pyflakes', 'black']))\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'pytest~=3.5',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'uproot~=3.3',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n [\n 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'check-manifest', 'twine']\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major\n 'tensorflow-probability~=0.10.0',\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n 'xmlio': ['uproot~=3.6'], # Future proof against uproot4 API changes\n 'minuit': ['iminuit~=1.4,>=1.4.3'], # Use \"name\" keyword in MINUIT optimizer\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted(set(['matplotlib']))\nextras_require['lint'] = sorted(set(['pyflakes', 'black']))\n\nextras_require['test'] = sorted(\n set(\n 
extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'pytest~=3.5',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'uproot~=3.3',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n [\n 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'check-manifest', 'twine']\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}]}
| 1,050 | 170 |
gh_patches_debug_41578
|
rasdani/github-patches
|
git_diff
|
PrefectHQ__prefect-1368
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add optional pull to local agent
If the image is present locally, we don't want to attempt a pull from a registry
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/cli/agent.py`
Content:
```
1 import click
2
3 from prefect import config
4 from prefect.utilities.configuration import set_temporary_config
5 from prefect.utilities.serialization import from_qualified_name
6
7 _agents = {
8 "local": "prefect.agent.local.LocalAgent",
9 "kubernetes": "prefect.agent.kubernetes.KubernetesAgent",
10 "nomad": "prefect.agent.nomad.NomadAgent",
11 }
12
13
14 @click.group(hidden=True)
15 def agent():
16 """
17 Manage Prefect agents.
18
19 \b
20 Usage:
21 $ prefect agent [COMMAND]
22
23 \b
24 Arguments:
25 start Start a Prefect agent
26
27 \b
28 Examples:
29 $ prefect agent start
30
31 \b
32 $ prefect agent start kubernetes --token MY_TOKEN
33 """
34 pass
35
36
37 @agent.command(hidden=True)
38 @click.argument("name", default="local")
39 @click.option(
40 "--token", "-t", required=False, help="A Prefect Cloud API token.", hidden=True
41 )
42 def start(name, token):
43 """
44 Start an agent.
45
46 \b
47 Arguments:
48 name TEXT The name of an agent to start (e.g. `local`, `kubernetes`, `nomad`)
49 Defaults to `local`
50
51 \b
52 Options:
53 --token, -t TEXT A Prefect Cloud api token
54 """
55 with set_temporary_config(
56 {"cloud.agent.auth_token": token or config.cloud.agent.auth_token}
57 ):
58 retrieved_agent = _agents.get(name, None)
59
60 if not retrieved_agent:
61 click.secho("{} is not a valid agent".format(name), fg="red")
62 return
63
64 from_qualified_name(retrieved_agent)().start()
65
```
Path: `src/prefect/agent/local/agent.py`
Content:
```
1 import docker
2
3 from prefect import config
4 from prefect.agent import Agent
5 from prefect.environments.storage import Docker
6 from prefect.serialization.storage import StorageSchema
7 from prefect.utilities.graphql import GraphQLResult
8
9
10 class LocalAgent(Agent):
11 """
12 Agent which deploys flow runs locally as Docker containers.
13
14 Args:
15 - base_url (str, optional): URL for a Docker daemon server. Defaults to
16 `unix:///var/run/docker.sock` however other hosts such as
17 `tcp://0.0.0.0:2375` can be provided
18 """
19
20 def __init__(self, base_url: str = None) -> None:
21 super().__init__()
22
23 base_url = base_url or "unix://var/run/docker.sock"
24 self.docker_client = docker.APIClient(base_url=base_url, version="auto")
25
26 # Ping Docker daemon for connection issues
27 try:
28 self.docker_client.ping()
29 except Exception as exc:
30 self.logger.error(
31 "Issue connecting to the Docker daemon. Make sure it is running."
32 )
33 raise exc
34
35 def deploy_flows(self, flow_runs: list) -> None:
36 """
37 Deploy flow runs on your local machine as Docker containers
38
39 Args:
40 - flow_runs (list): A list of GraphQLResult flow run objects
41 """
42 for flow_run in flow_runs:
43
44 storage = StorageSchema().load(flow_run.flow.storage)
45 if not isinstance(StorageSchema().load(flow_run.flow.storage), Docker):
46 self.logger.error(
47 "Storage for flow run {} is not of type Docker.".format(flow_run.id)
48 )
49 continue
50
51 env_vars = self.populate_env_vars(flow_run=flow_run)
52
53 # Pull image if it doesn't exist locally
54 self.docker_client.pull(storage.name)
55
56 # Create a container
57 container = self.docker_client.create_container(
58 storage.name, command="prefect execute cloud-flow", environment=env_vars
59 )
60
61 # Start the container
62 self.docker_client.start(container=container.get("Id"))
63
64 def populate_env_vars(self, flow_run: GraphQLResult) -> dict:
65 """
66 Populate metadata and variables in the environment variables for a flow run
67
68 Args:
69 - flow_run (GraphQLResult): A flow run object
70
71 Returns:
72 - dict: a dictionary representing the populated environment variables
73 """
74 return {
75 "PREFECT__CLOUD__API": config.cloud.api,
76 "PREFECT__CLOUD__AUTH_TOKEN": config.cloud.agent.auth_token,
77 "PREFECT__CONTEXT__FLOW_RUN_ID": flow_run.id, # type: ignore
78 "PREFECT__CLOUD__USE_LOCAL_SECRETS": "false",
79 "PREFECT__LOGGING__LOG_TO_CLOUD": "true",
80 "PREFECT__LOGGING__LEVEL": "DEBUG",
81 "PREFECT__ENGINE__FLOW_RUNNER__DEFAULT_CLASS": "prefect.engine.cloud.CloudFlowRunner",
82 "PREFECT__ENGINE__TASK_RUNNER__DEFAULT_CLASS": "prefect.engine.cloud.CloudTaskRunner",
83 }
84
85
86 if __name__ == "__main__":
87 LocalAgent().start()
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/prefect/agent/local/agent.py b/src/prefect/agent/local/agent.py
--- a/src/prefect/agent/local/agent.py
+++ b/src/prefect/agent/local/agent.py
@@ -1,6 +1,6 @@
import docker
-from prefect import config
+from prefect import config, context
from prefect.agent import Agent
from prefect.environments.storage import Docker
from prefect.serialization.storage import StorageSchema
@@ -15,9 +15,11 @@
- base_url (str, optional): URL for a Docker daemon server. Defaults to
`unix:///var/run/docker.sock` however other hosts such as
`tcp://0.0.0.0:2375` can be provided
+ - no_pull (bool, optional): Flag on whether or not to pull flow images.
+ Defaults to `False` if not provided here or in context.
"""
- def __init__(self, base_url: str = None) -> None:
+ def __init__(self, base_url: str = None, no_pull: bool = None) -> None:
super().__init__()
base_url = base_url or "unix://var/run/docker.sock"
@@ -32,6 +34,10 @@
)
raise exc
+ self.no_pull = no_pull or context.get("no_pull")
+ if self.no_pull is None:
+ self.no_pull = False
+
def deploy_flows(self, flow_runs: list) -> None:
"""
Deploy flow runs on your local machine as Docker containers
@@ -50,8 +56,8 @@
env_vars = self.populate_env_vars(flow_run=flow_run)
- # Pull image if it doesn't exist locally
- self.docker_client.pull(storage.name)
+ if not self.no_pull:
+ self.docker_client.pull(storage.name)
# Create a container
container = self.docker_client.create_container(
diff --git a/src/prefect/cli/agent.py b/src/prefect/cli/agent.py
--- a/src/prefect/cli/agent.py
+++ b/src/prefect/cli/agent.py
@@ -1,6 +1,6 @@
import click
-from prefect import config
+from prefect import config, context
from prefect.utilities.configuration import set_temporary_config
from prefect.utilities.serialization import from_qualified_name
@@ -39,7 +39,8 @@
@click.option(
"--token", "-t", required=False, help="A Prefect Cloud API token.", hidden=True
)
-def start(name, token):
[email protected]("--no-pull", is_flag=True, help="Pull images flag.", hidden=True)
+def start(name, token, no_pull):
"""
Start an agent.
@@ -51,6 +52,8 @@
\b
Options:
--token, -t TEXT A Prefect Cloud api token
+ --no-pull Pull images for a LocalAgent
+ Defaults to pulling if not provided
"""
with set_temporary_config(
{"cloud.agent.auth_token": token or config.cloud.agent.auth_token}
@@ -61,4 +64,5 @@
click.secho("{} is not a valid agent".format(name), fg="red")
return
- from_qualified_name(retrieved_agent)().start()
+ with context(no_pull=no_pull):
+ from_qualified_name(retrieved_agent)().start()
|
{"golden_diff": "diff --git a/src/prefect/agent/local/agent.py b/src/prefect/agent/local/agent.py\n--- a/src/prefect/agent/local/agent.py\n+++ b/src/prefect/agent/local/agent.py\n@@ -1,6 +1,6 @@\n import docker\n \n-from prefect import config\n+from prefect import config, context\n from prefect.agent import Agent\n from prefect.environments.storage import Docker\n from prefect.serialization.storage import StorageSchema\n@@ -15,9 +15,11 @@\n - base_url (str, optional): URL for a Docker daemon server. Defaults to\n `unix:///var/run/docker.sock` however other hosts such as\n `tcp://0.0.0.0:2375` can be provided\n+ - no_pull (bool, optional): Flag on whether or not to pull flow images.\n+ Defaults to `False` if not provided here or in context.\n \"\"\"\n \n- def __init__(self, base_url: str = None) -> None:\n+ def __init__(self, base_url: str = None, no_pull: bool = None) -> None:\n super().__init__()\n \n base_url = base_url or \"unix://var/run/docker.sock\"\n@@ -32,6 +34,10 @@\n )\n raise exc\n \n+ self.no_pull = no_pull or context.get(\"no_pull\")\n+ if self.no_pull is None:\n+ self.no_pull = False\n+\n def deploy_flows(self, flow_runs: list) -> None:\n \"\"\"\n Deploy flow runs on your local machine as Docker containers\n@@ -50,8 +56,8 @@\n \n env_vars = self.populate_env_vars(flow_run=flow_run)\n \n- # Pull image if it doesn't exist locally\n- self.docker_client.pull(storage.name)\n+ if not self.no_pull:\n+ self.docker_client.pull(storage.name)\n \n # Create a container\n container = self.docker_client.create_container(\ndiff --git a/src/prefect/cli/agent.py b/src/prefect/cli/agent.py\n--- a/src/prefect/cli/agent.py\n+++ b/src/prefect/cli/agent.py\n@@ -1,6 +1,6 @@\n import click\n \n-from prefect import config\n+from prefect import config, context\n from prefect.utilities.configuration import set_temporary_config\n from prefect.utilities.serialization import from_qualified_name\n \n@@ -39,7 +39,8 @@\n @click.option(\n \"--token\", \"-t\", required=False, help=\"A Prefect Cloud API token.\", hidden=True\n )\n-def start(name, token):\[email protected](\"--no-pull\", is_flag=True, help=\"Pull images flag.\", hidden=True)\n+def start(name, token, no_pull):\n \"\"\"\n Start an agent.\n \n@@ -51,6 +52,8 @@\n \\b\n Options:\n --token, -t TEXT A Prefect Cloud api token\n+ --no-pull Pull images for a LocalAgent\n+ Defaults to pulling if not provided\n \"\"\"\n with set_temporary_config(\n {\"cloud.agent.auth_token\": token or config.cloud.agent.auth_token}\n@@ -61,4 +64,5 @@\n click.secho(\"{} is not a valid agent\".format(name), fg=\"red\")\n return\n \n- from_qualified_name(retrieved_agent)().start()\n+ with context(no_pull=no_pull):\n+ from_qualified_name(retrieved_agent)().start()\n", "issue": "Add optional pull to local agent\nIf the image is present locally we don't want to attempt a pull from a registry\n", "before_files": [{"content": "import click\n\nfrom prefect import config\nfrom prefect.utilities.configuration import set_temporary_config\nfrom prefect.utilities.serialization import from_qualified_name\n\n_agents = {\n \"local\": \"prefect.agent.local.LocalAgent\",\n \"kubernetes\": \"prefect.agent.kubernetes.KubernetesAgent\",\n \"nomad\": \"prefect.agent.nomad.NomadAgent\",\n}\n\n\[email protected](hidden=True)\ndef agent():\n \"\"\"\n Manage Prefect agents.\n\n \\b\n Usage:\n $ prefect agent [COMMAND]\n\n \\b\n Arguments:\n start Start a Prefect agent\n\n \\b\n Examples:\n $ prefect agent start\n\n \\b\n $ prefect agent start kubernetes --token MY_TOKEN\n \"\"\"\n pass\n\n\[email 
protected](hidden=True)\[email protected](\"name\", default=\"local\")\[email protected](\n \"--token\", \"-t\", required=False, help=\"A Prefect Cloud API token.\", hidden=True\n)\ndef start(name, token):\n \"\"\"\n Start an agent.\n\n \\b\n Arguments:\n name TEXT The name of an agent to start (e.g. `local`, `kubernetes`, `nomad`)\n Defaults to `local`\n\n \\b\n Options:\n --token, -t TEXT A Prefect Cloud api token\n \"\"\"\n with set_temporary_config(\n {\"cloud.agent.auth_token\": token or config.cloud.agent.auth_token}\n ):\n retrieved_agent = _agents.get(name, None)\n\n if not retrieved_agent:\n click.secho(\"{} is not a valid agent\".format(name), fg=\"red\")\n return\n\n from_qualified_name(retrieved_agent)().start()\n", "path": "src/prefect/cli/agent.py"}, {"content": "import docker\n\nfrom prefect import config\nfrom prefect.agent import Agent\nfrom prefect.environments.storage import Docker\nfrom prefect.serialization.storage import StorageSchema\nfrom prefect.utilities.graphql import GraphQLResult\n\n\nclass LocalAgent(Agent):\n \"\"\"\n Agent which deploys flow runs locally as Docker containers.\n\n Args:\n - base_url (str, optional): URL for a Docker daemon server. Defaults to\n `unix:///var/run/docker.sock` however other hosts such as\n `tcp://0.0.0.0:2375` can be provided\n \"\"\"\n\n def __init__(self, base_url: str = None) -> None:\n super().__init__()\n\n base_url = base_url or \"unix://var/run/docker.sock\"\n self.docker_client = docker.APIClient(base_url=base_url, version=\"auto\")\n\n # Ping Docker daemon for connection issues\n try:\n self.docker_client.ping()\n except Exception as exc:\n self.logger.error(\n \"Issue connecting to the Docker daemon. Make sure it is running.\"\n )\n raise exc\n\n def deploy_flows(self, flow_runs: list) -> None:\n \"\"\"\n Deploy flow runs on your local machine as Docker containers\n\n Args:\n - flow_runs (list): A list of GraphQLResult flow run objects\n \"\"\"\n for flow_run in flow_runs:\n\n storage = StorageSchema().load(flow_run.flow.storage)\n if not isinstance(StorageSchema().load(flow_run.flow.storage), Docker):\n self.logger.error(\n \"Storage for flow run {} is not of type Docker.\".format(flow_run.id)\n )\n continue\n\n env_vars = self.populate_env_vars(flow_run=flow_run)\n\n # Pull image if it doesn't exist locally\n self.docker_client.pull(storage.name)\n\n # Create a container\n container = self.docker_client.create_container(\n storage.name, command=\"prefect execute cloud-flow\", environment=env_vars\n )\n\n # Start the container\n self.docker_client.start(container=container.get(\"Id\"))\n\n def populate_env_vars(self, flow_run: GraphQLResult) -> dict:\n \"\"\"\n Populate metadata and variables in the environment variables for a flow run\n\n Args:\n - flow_run (GraphQLResult): A flow run object\n\n Returns:\n - dict: a dictionary representing the populated environment variables\n \"\"\"\n return {\n \"PREFECT__CLOUD__API\": config.cloud.api,\n \"PREFECT__CLOUD__AUTH_TOKEN\": config.cloud.agent.auth_token,\n \"PREFECT__CONTEXT__FLOW_RUN_ID\": flow_run.id, # type: ignore\n \"PREFECT__CLOUD__USE_LOCAL_SECRETS\": \"false\",\n \"PREFECT__LOGGING__LOG_TO_CLOUD\": \"true\",\n \"PREFECT__LOGGING__LEVEL\": \"DEBUG\",\n \"PREFECT__ENGINE__FLOW_RUNNER__DEFAULT_CLASS\": \"prefect.engine.cloud.CloudFlowRunner\",\n \"PREFECT__ENGINE__TASK_RUNNER__DEFAULT_CLASS\": \"prefect.engine.cloud.CloudTaskRunner\",\n }\n\n\nif __name__ == \"__main__\":\n LocalAgent().start()\n", "path": "src/prefect/agent/local/agent.py"}], "after_files": 
[{"content": "import click\n\nfrom prefect import config, context\nfrom prefect.utilities.configuration import set_temporary_config\nfrom prefect.utilities.serialization import from_qualified_name\n\n_agents = {\n \"local\": \"prefect.agent.local.LocalAgent\",\n \"kubernetes\": \"prefect.agent.kubernetes.KubernetesAgent\",\n \"nomad\": \"prefect.agent.nomad.NomadAgent\",\n}\n\n\[email protected](hidden=True)\ndef agent():\n \"\"\"\n Manage Prefect agents.\n\n \\b\n Usage:\n $ prefect agent [COMMAND]\n\n \\b\n Arguments:\n start Start a Prefect agent\n\n \\b\n Examples:\n $ prefect agent start\n\n \\b\n $ prefect agent start kubernetes --token MY_TOKEN\n \"\"\"\n pass\n\n\[email protected](hidden=True)\[email protected](\"name\", default=\"local\")\[email protected](\n \"--token\", \"-t\", required=False, help=\"A Prefect Cloud API token.\", hidden=True\n)\[email protected](\"--no-pull\", is_flag=True, help=\"Pull images flag.\", hidden=True)\ndef start(name, token, no_pull):\n \"\"\"\n Start an agent.\n\n \\b\n Arguments:\n name TEXT The name of an agent to start (e.g. `local`, `kubernetes`, `nomad`)\n Defaults to `local`\n\n \\b\n Options:\n --token, -t TEXT A Prefect Cloud api token\n --no-pull Pull images for a LocalAgent\n Defaults to pulling if not provided\n \"\"\"\n with set_temporary_config(\n {\"cloud.agent.auth_token\": token or config.cloud.agent.auth_token}\n ):\n retrieved_agent = _agents.get(name, None)\n\n if not retrieved_agent:\n click.secho(\"{} is not a valid agent\".format(name), fg=\"red\")\n return\n\n with context(no_pull=no_pull):\n from_qualified_name(retrieved_agent)().start()\n", "path": "src/prefect/cli/agent.py"}, {"content": "import docker\n\nfrom prefect import config, context\nfrom prefect.agent import Agent\nfrom prefect.environments.storage import Docker\nfrom prefect.serialization.storage import StorageSchema\nfrom prefect.utilities.graphql import GraphQLResult\n\n\nclass LocalAgent(Agent):\n \"\"\"\n Agent which deploys flow runs locally as Docker containers.\n\n Args:\n - base_url (str, optional): URL for a Docker daemon server. Defaults to\n `unix:///var/run/docker.sock` however other hosts such as\n `tcp://0.0.0.0:2375` can be provided\n - no_pull (bool, optional): Flag on whether or not to pull flow images.\n Defaults to `False` if not provided here or in context.\n \"\"\"\n\n def __init__(self, base_url: str = None, no_pull: bool = None) -> None:\n super().__init__()\n\n base_url = base_url or \"unix://var/run/docker.sock\"\n self.docker_client = docker.APIClient(base_url=base_url, version=\"auto\")\n\n # Ping Docker daemon for connection issues\n try:\n self.docker_client.ping()\n except Exception as exc:\n self.logger.error(\n \"Issue connecting to the Docker daemon. 
Make sure it is running.\"\n )\n raise exc\n\n self.no_pull = no_pull or context.get(\"no_pull\")\n if self.no_pull is None:\n self.no_pull = False\n\n def deploy_flows(self, flow_runs: list) -> None:\n \"\"\"\n Deploy flow runs on your local machine as Docker containers\n\n Args:\n - flow_runs (list): A list of GraphQLResult flow run objects\n \"\"\"\n for flow_run in flow_runs:\n\n storage = StorageSchema().load(flow_run.flow.storage)\n if not isinstance(StorageSchema().load(flow_run.flow.storage), Docker):\n self.logger.error(\n \"Storage for flow run {} is not of type Docker.\".format(flow_run.id)\n )\n continue\n\n env_vars = self.populate_env_vars(flow_run=flow_run)\n\n if not self.no_pull:\n self.docker_client.pull(storage.name)\n\n # Create a container\n container = self.docker_client.create_container(\n storage.name, command=\"prefect execute cloud-flow\", environment=env_vars\n )\n\n # Start the container\n self.docker_client.start(container=container.get(\"Id\"))\n\n def populate_env_vars(self, flow_run: GraphQLResult) -> dict:\n \"\"\"\n Populate metadata and variables in the environment variables for a flow run\n\n Args:\n - flow_run (GraphQLResult): A flow run object\n\n Returns:\n - dict: a dictionary representing the populated environment variables\n \"\"\"\n return {\n \"PREFECT__CLOUD__API\": config.cloud.api,\n \"PREFECT__CLOUD__AUTH_TOKEN\": config.cloud.agent.auth_token,\n \"PREFECT__CONTEXT__FLOW_RUN_ID\": flow_run.id, # type: ignore\n \"PREFECT__CLOUD__USE_LOCAL_SECRETS\": \"false\",\n \"PREFECT__LOGGING__LOG_TO_CLOUD\": \"true\",\n \"PREFECT__LOGGING__LEVEL\": \"DEBUG\",\n \"PREFECT__ENGINE__FLOW_RUNNER__DEFAULT_CLASS\": \"prefect.engine.cloud.CloudFlowRunner\",\n \"PREFECT__ENGINE__TASK_RUNNER__DEFAULT_CLASS\": \"prefect.engine.cloud.CloudTaskRunner\",\n }\n\n\nif __name__ == \"__main__\":\n LocalAgent().start()\n", "path": "src/prefect/agent/local/agent.py"}]}
| 1,641 | 760 |
gh_patches_debug_61788
|
rasdani/github-patches
|
git_diff
|
electricitymaps__electricitymaps-contrib-2656
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The import/export between Menorca and Mallorca appears reversed
Our data is showing the export going one way, but our data source is showing the reverse according to one user. See screenshots:


--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsers/ES_IB.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import logging
4 from arrow import get
5 from requests import Session
6 from ree import (Formentera, Ibiza,
7 Mallorca, Menorca,
8 BalearicIslands)
9 # package "ree" is used to parse data from www.ree.es // maintained on github by @hectorespert
10
11 from .lib.exceptions import ParserException
12 from .lib.validation import validate, validate_production_diffs
13
14 ## Guess we'll need to figure these out later?! Adapted from ES-CN:
15
16 # Minimum valid zone demand. This is used to eliminate some cases
17 # where generation for one or more modes is obviously missing.
18 FLOORS = {
19 'ES-IB': 0,
20 'ES-IB-FO': 0,
21 'ES-IB-IZ': 0,
22 'ES-IB-MA': 0,
23 'ES-IB-ME': 0,
24 }
25
26
27 def fetch_island_data(zone_key, session):
28 if zone_key == 'ES-IB-FO':
29 formentera_data = Formentera(session, verify=False).get_all()
30 if not formentera_data:
31 raise ParserException(zone_key, "Formentera doesn't respond")
32 else:
33 return formentera_data
34 elif zone_key == 'ES-IB-IZ':
35 ibiza_data = Ibiza(session, verify=False).get_all()
36 if not ibiza_data:
37 raise ParserException(zone_key, "Party is over, Ibiza doesn't respond")
38 else:
39 return ibiza_data
40 elif zone_key == 'ES-IB-MA':
41 mallorca_data = Mallorca(session, verify=False).get_all()
42 if not mallorca_data:
43 raise ParserException(zone_key, "Mallorca doesn't respond")
44 else:
45 return mallorca_data
46 elif zone_key == 'ES-IB-ME':
47 menorca_data = Menorca(session, verify=False).get_all()
48 if not menorca_data:
49 raise ParserException(zone_key, "Menorca doesn't respond")
50 else:
51 return menorca_data
52 elif zone_key == 'ES-IB':
53 balearic_islands = BalearicIslands(session, verify=False).get_all()
54 if not balearic_islands:
55 raise ParserException(zone_key, "Balearic Islands doesn't respond")
56 else:
57 return balearic_islands
58 else:
59 raise ParserException(zone_key, 'Can\'t read this country code {0}'.format(zone_key))
60
61
62 def fetch_consumption(zone_key, session=None, target_datetime=None, logger=None):
63 if target_datetime:
64 raise NotImplementedError('This parser is not yet able to parse past dates')
65
66 ses = session or Session()
67 island_data = fetch_island_data(zone_key, ses)
68 data = []
69 for response in island_data:
70 response_data = {
71 'zoneKey': zone_key,
72 'datetime': get(response.timestamp).datetime,
73 'consumption': response.demand,
74 'source': 'demanda.ree.es'
75 }
76
77 data.append(response_data)
78
79 return data
80
81
82 def fetch_production(zone_key, session=None, target_datetime=None,
83 logger=logging.getLogger(__name__)):
84 if target_datetime:
85 raise NotImplementedError('This parser is not yet able to parse past dates')
86
87 ses = session or Session()
88 island_data = fetch_island_data(zone_key, ses)
89 data = []
90
91 if zone_key == 'ES-IB':
92 expected_range = {'coal': (50,600)}
93 else:
94 expected_range = None
95
96 for response in island_data:
97 if response.production() >= 0:
98 response_data = {
99 'zoneKey': zone_key,
100 'datetime': get(response.timestamp).datetime,
101 'production': {
102 'coal': response.carbon,
103 'gas': round(response.gas + response.combined, 2),
104 'solar': response.solar,
105 'oil': round(response.vapor + response.diesel, 2),
106 'wind': response.wind,
107 'hydro': response.hydraulic,
108 'biomass': response.waste,
109 'nuclear': 0.0,
110 'geothermal': 0.0,
111 'unknown': response.other
112 },
113 'storage': {
114 'hydro': 0.0,
115 'battery': 0.0
116 },
117 'source': 'demanda.ree.es',
118 }
119
120 response_data = validate(response_data, logger,
121 floor=FLOORS[zone_key],
122 expected_range = expected_range)
123
124 if response_data:
125 # append if valid
126 data.append(response_data)
127
128 if len(data) > 1:
129 # granularity is 10 minutes, drops points with change in coal > 100MW
130 data = validate_production_diffs(data, {'coal': 150}, logger)
131
132 return data
133
134
135 def fetch_exchange(zone_key1, zone_key2, session=None, target_datetime=None, logger=None):
136
137 if target_datetime:
138 raise NotImplementedError('This parser is not yet able to parse past dates')
139
140 sorted_zone_keys = '->'.join(sorted([zone_key1, zone_key2]))
141
142 ses = session or Session()
143
144 if sorted_zone_keys == 'ES->ES-IB':
145 responses = BalearicIslands(ses, verify=False).get_all()
146 if not responses:
147 raise ParserException("ES-IB", "No responses")
148 elif sorted_zone_keys == 'ES->ES-IB-MA' or sorted_zone_keys == 'ES-IB-MA->ES-IB-ME' or sorted_zone_keys == 'ES-IB-IZ->ES-IB-MA':
149 responses = Mallorca(ses, verify=False).get_all()
150 if not responses:
151 raise ParserException("ES-IB-MA", "No responses")
152 elif sorted_zone_keys == 'ES-IB-FO->ES-IB-IZ':
153 responses = Formentera(ses, verify=False).get_all()
154 if not responses:
155 raise ParserException("ES-IB-FO", "No responses")
156 else:
157 raise NotImplementedError('This exchange pair is not implemented')
158
159 exchanges = []
160 for response in responses:
161
162 if sorted_zone_keys == 'ES-IB-MA->ES-IB-ME':
163 net_flow = response.link['ma_me']
164 elif sorted_zone_keys == 'ES-IB-IZ->ES-IB-MA':
165 net_flow = response.link['ma_ib']
166 elif sorted_zone_keys == 'ES-IB-FO->ES-IB-IZ':
167 net_flow = -1 * response.link['ib_fo']
168 else:
169 net_flow = response.link['pe_ma']
170
171 exchange = {
172 'sortedZoneKeys': sorted_zone_keys,
173 'datetime': get(response.timestamp).datetime,
174 'netFlow': net_flow,
175 'source': 'demanda.ree.es',
176 }
177
178 exchanges.append(exchange)
179
180 return exchanges
181
182
183 if __name__ == '__main__':
184 session = Session
185 print("fetch_consumption(ES-IB)")
186 print(fetch_consumption('ES-IB', session))
187
188 print("fetch_production(ES-IB)")
189 print(fetch_production('ES-IB', session))
190
191 print("fetch_exchange(ES, ES-IB)")
192 print(fetch_exchange('ES', 'ES-IB', session))
193
194 print("fetch_consumption(ES-IB-FO)")
195 print(fetch_consumption('ES-IB-FO'))
196 print("fetch_production(ES-IB-FO)")
197 print(fetch_production('ES-IB-FO'))
198 print("fetch_consumption(ES-IB-IZ)")
199 print(fetch_consumption('ES-IB-IZ'))
200 print("fetch_production(ES-IB-IZ)")
201 print(fetch_production('ES-IB-IZ'))
202 print("fetch_consumption(ES-IB-MA)")
203 print(fetch_consumption('ES-IB-MA'))
204 print("fetch_production(ES-IB-MA)")
205 print(fetch_production('ES-IB-MA'))
206 print("fetch_consumption(ES-IB-ME)")
207 print(fetch_consumption('ES-IB-ME'))
208 print("fetch_production(ES-IB-ME)")
209 print(fetch_production('ES-IB-ME'))
210 print("fetch_exchange(ES, ES-IB-MA)")
211 print(fetch_exchange('ES', 'ES-IB-MA'))
212 print("fetch_exchange(ES-IB-MA, ES-IB-ME)")
213 print(fetch_exchange('ES-IB-MA', 'ES-IB-ME'))
214 print("fetch_exchange(ES-IB-MA, ES-IB-IZ)")
215 print(fetch_exchange('ES-IB-MA', 'ES-IB-IZ'))
216 print("fetch_exchange(ES-IB-IZ, ES-IB-FO)")
217 print(fetch_exchange('ES-IB-IZ', 'ES-IB-FO'))
218
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/parsers/ES_IB.py b/parsers/ES_IB.py
--- a/parsers/ES_IB.py
+++ b/parsers/ES_IB.py
@@ -160,7 +160,7 @@
for response in responses:
if sorted_zone_keys == 'ES-IB-MA->ES-IB-ME':
- net_flow = response.link['ma_me']
+ net_flow = -1 * response.link['ma_me']
elif sorted_zone_keys == 'ES-IB-IZ->ES-IB-MA':
net_flow = response.link['ma_ib']
elif sorted_zone_keys == 'ES-IB-FO->ES-IB-IZ':
|
{"golden_diff": "diff --git a/parsers/ES_IB.py b/parsers/ES_IB.py\n--- a/parsers/ES_IB.py\n+++ b/parsers/ES_IB.py\n@@ -160,7 +160,7 @@\n for response in responses:\n \n if sorted_zone_keys == 'ES-IB-MA->ES-IB-ME':\n- net_flow = response.link['ma_me']\n+ net_flow = -1 * response.link['ma_me']\n elif sorted_zone_keys == 'ES-IB-IZ->ES-IB-MA':\n net_flow = response.link['ma_ib']\n elif sorted_zone_keys == 'ES-IB-FO->ES-IB-IZ':\n", "issue": "The import/export between Menorca and Mallorca appears reversed\nOur data is showing the export going one way, but our data source is showing the reverse according to one user. See screenshots:\r\n\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport logging\nfrom arrow import get\nfrom requests import Session\nfrom ree import (Formentera, Ibiza,\n Mallorca, Menorca,\n BalearicIslands)\n# package \"ree\" is used to parse data from www.ree.es // maintained on github by @hectorespert\n\nfrom .lib.exceptions import ParserException\nfrom .lib.validation import validate, validate_production_diffs\n\n## Guess we'll need to figure these out later?! Adapted from ES-CN:\n\n# Minimum valid zone demand. This is used to eliminate some cases\n# where generation for one or more modes is obviously missing.\nFLOORS = {\n 'ES-IB': 0,\n 'ES-IB-FO': 0,\n 'ES-IB-IZ': 0,\n 'ES-IB-MA': 0,\n 'ES-IB-ME': 0,\n}\n\n\ndef fetch_island_data(zone_key, session):\n if zone_key == 'ES-IB-FO':\n formentera_data = Formentera(session, verify=False).get_all()\n if not formentera_data:\n raise ParserException(zone_key, \"Formentera doesn't respond\")\n else:\n return formentera_data\n elif zone_key == 'ES-IB-IZ':\n ibiza_data = Ibiza(session, verify=False).get_all()\n if not ibiza_data:\n raise ParserException(zone_key, \"Party is over, Ibiza doesn't respond\")\n else:\n return ibiza_data\n elif zone_key == 'ES-IB-MA':\n mallorca_data = Mallorca(session, verify=False).get_all()\n if not mallorca_data:\n raise ParserException(zone_key, \"Mallorca doesn't respond\")\n else:\n return mallorca_data\n elif zone_key == 'ES-IB-ME':\n menorca_data = Menorca(session, verify=False).get_all()\n if not menorca_data:\n raise ParserException(zone_key, \"Menorca doesn't respond\")\n else:\n return menorca_data\n elif zone_key == 'ES-IB':\n balearic_islands = BalearicIslands(session, verify=False).get_all()\n if not balearic_islands:\n raise ParserException(zone_key, \"Balearic Islands doesn't respond\")\n else:\n return balearic_islands\n else:\n raise ParserException(zone_key, 'Can\\'t read this country code {0}'.format(zone_key))\n\n\ndef fetch_consumption(zone_key, session=None, target_datetime=None, logger=None):\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n ses = session or Session()\n island_data = fetch_island_data(zone_key, ses)\n data = []\n for response in island_data:\n response_data = {\n 'zoneKey': zone_key,\n 'datetime': get(response.timestamp).datetime,\n 'consumption': response.demand,\n 'source': 'demanda.ree.es'\n }\n\n data.append(response_data)\n\n return data\n\n\ndef fetch_production(zone_key, session=None, target_datetime=None,\n logger=logging.getLogger(__name__)):\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n ses = session or Session()\n island_data = fetch_island_data(zone_key, ses)\n data = []\n\n if zone_key == 'ES-IB':\n expected_range = {'coal': (50,600)}\n else:\n expected_range = None\n\n for response in island_data:\n if 
response.production() >= 0:\n response_data = {\n 'zoneKey': zone_key,\n 'datetime': get(response.timestamp).datetime,\n 'production': {\n 'coal': response.carbon,\n 'gas': round(response.gas + response.combined, 2),\n 'solar': response.solar,\n 'oil': round(response.vapor + response.diesel, 2),\n 'wind': response.wind,\n 'hydro': response.hydraulic,\n 'biomass': response.waste,\n 'nuclear': 0.0,\n 'geothermal': 0.0,\n 'unknown': response.other\n },\n 'storage': {\n 'hydro': 0.0,\n 'battery': 0.0\n },\n 'source': 'demanda.ree.es',\n }\n\n response_data = validate(response_data, logger,\n floor=FLOORS[zone_key],\n expected_range = expected_range)\n\n if response_data:\n # append if valid\n data.append(response_data)\n\n if len(data) > 1:\n # granularity is 10 minutes, drops points with change in coal > 100MW\n data = validate_production_diffs(data, {'coal': 150}, logger)\n\n return data\n\n\ndef fetch_exchange(zone_key1, zone_key2, session=None, target_datetime=None, logger=None):\n\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n sorted_zone_keys = '->'.join(sorted([zone_key1, zone_key2]))\n\n ses = session or Session()\n\n if sorted_zone_keys == 'ES->ES-IB':\n responses = BalearicIslands(ses, verify=False).get_all()\n if not responses:\n raise ParserException(\"ES-IB\", \"No responses\")\n elif sorted_zone_keys == 'ES->ES-IB-MA' or sorted_zone_keys == 'ES-IB-MA->ES-IB-ME' or sorted_zone_keys == 'ES-IB-IZ->ES-IB-MA':\n responses = Mallorca(ses, verify=False).get_all()\n if not responses:\n raise ParserException(\"ES-IB-MA\", \"No responses\")\n elif sorted_zone_keys == 'ES-IB-FO->ES-IB-IZ':\n responses = Formentera(ses, verify=False).get_all()\n if not responses:\n raise ParserException(\"ES-IB-FO\", \"No responses\")\n else:\n raise NotImplementedError('This exchange pair is not implemented')\n\n exchanges = []\n for response in responses:\n\n if sorted_zone_keys == 'ES-IB-MA->ES-IB-ME':\n net_flow = response.link['ma_me']\n elif sorted_zone_keys == 'ES-IB-IZ->ES-IB-MA':\n net_flow = response.link['ma_ib']\n elif sorted_zone_keys == 'ES-IB-FO->ES-IB-IZ':\n net_flow = -1 * response.link['ib_fo']\n else:\n net_flow = response.link['pe_ma']\n\n exchange = {\n 'sortedZoneKeys': sorted_zone_keys,\n 'datetime': get(response.timestamp).datetime,\n 'netFlow': net_flow,\n 'source': 'demanda.ree.es',\n }\n\n exchanges.append(exchange)\n\n return exchanges\n\n\nif __name__ == '__main__':\n session = Session\n print(\"fetch_consumption(ES-IB)\")\n print(fetch_consumption('ES-IB', session))\n\n print(\"fetch_production(ES-IB)\")\n print(fetch_production('ES-IB', session))\n\n print(\"fetch_exchange(ES, ES-IB)\")\n print(fetch_exchange('ES', 'ES-IB', session))\n\n print(\"fetch_consumption(ES-IB-FO)\")\n print(fetch_consumption('ES-IB-FO'))\n print(\"fetch_production(ES-IB-FO)\")\n print(fetch_production('ES-IB-FO'))\n print(\"fetch_consumption(ES-IB-IZ)\")\n print(fetch_consumption('ES-IB-IZ'))\n print(\"fetch_production(ES-IB-IZ)\")\n print(fetch_production('ES-IB-IZ'))\n print(\"fetch_consumption(ES-IB-MA)\")\n print(fetch_consumption('ES-IB-MA'))\n print(\"fetch_production(ES-IB-MA)\")\n print(fetch_production('ES-IB-MA'))\n print(\"fetch_consumption(ES-IB-ME)\")\n print(fetch_consumption('ES-IB-ME'))\n print(\"fetch_production(ES-IB-ME)\")\n print(fetch_production('ES-IB-ME'))\n print(\"fetch_exchange(ES, ES-IB-MA)\")\n print(fetch_exchange('ES', 'ES-IB-MA'))\n print(\"fetch_exchange(ES-IB-MA, ES-IB-ME)\")\n print(fetch_exchange('ES-IB-MA', 
'ES-IB-ME'))\n print(\"fetch_exchange(ES-IB-MA, ES-IB-IZ)\")\n print(fetch_exchange('ES-IB-MA', 'ES-IB-IZ'))\n print(\"fetch_exchange(ES-IB-IZ, ES-IB-FO)\")\n print(fetch_exchange('ES-IB-IZ', 'ES-IB-FO'))\n", "path": "parsers/ES_IB.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport logging\nfrom arrow import get\nfrom requests import Session\nfrom ree import (Formentera, Ibiza,\n Mallorca, Menorca,\n BalearicIslands)\n# package \"ree\" is used to parse data from www.ree.es // maintained on github by @hectorespert\n\nfrom .lib.exceptions import ParserException\nfrom .lib.validation import validate, validate_production_diffs\n\n## Guess we'll need to figure these out later?! Adapted from ES-CN:\n\n# Minimum valid zone demand. This is used to eliminate some cases\n# where generation for one or more modes is obviously missing.\nFLOORS = {\n 'ES-IB': 0,\n 'ES-IB-FO': 0,\n 'ES-IB-IZ': 0,\n 'ES-IB-MA': 0,\n 'ES-IB-ME': 0,\n}\n\n\ndef fetch_island_data(zone_key, session):\n if zone_key == 'ES-IB-FO':\n formentera_data = Formentera(session, verify=False).get_all()\n if not formentera_data:\n raise ParserException(zone_key, \"Formentera doesn't respond\")\n else:\n return formentera_data\n elif zone_key == 'ES-IB-IZ':\n ibiza_data = Ibiza(session, verify=False).get_all()\n if not ibiza_data:\n raise ParserException(zone_key, \"Party is over, Ibiza doesn't respond\")\n else:\n return ibiza_data\n elif zone_key == 'ES-IB-MA':\n mallorca_data = Mallorca(session, verify=False).get_all()\n if not mallorca_data:\n raise ParserException(zone_key, \"Mallorca doesn't respond\")\n else:\n return mallorca_data\n elif zone_key == 'ES-IB-ME':\n menorca_data = Menorca(session, verify=False).get_all()\n if not menorca_data:\n raise ParserException(zone_key, \"Menorca doesn't respond\")\n else:\n return menorca_data\n elif zone_key == 'ES-IB':\n balearic_islands = BalearicIslands(session, verify=False).get_all()\n if not balearic_islands:\n raise ParserException(zone_key, \"Balearic Islands doesn't respond\")\n else:\n return balearic_islands\n else:\n raise ParserException(zone_key, 'Can\\'t read this country code {0}'.format(zone_key))\n\n\ndef fetch_consumption(zone_key, session=None, target_datetime=None, logger=None):\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n ses = session or Session()\n island_data = fetch_island_data(zone_key, ses)\n data = []\n for response in island_data:\n response_data = {\n 'zoneKey': zone_key,\n 'datetime': get(response.timestamp).datetime,\n 'consumption': response.demand,\n 'source': 'demanda.ree.es'\n }\n\n data.append(response_data)\n\n return data\n\n\ndef fetch_production(zone_key, session=None, target_datetime=None,\n logger=logging.getLogger(__name__)):\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n ses = session or Session()\n island_data = fetch_island_data(zone_key, ses)\n data = []\n\n if zone_key == 'ES-IB':\n expected_range = {'coal': (50,600)}\n else:\n expected_range = None\n\n for response in island_data:\n if response.production() >= 0:\n response_data = {\n 'zoneKey': zone_key,\n 'datetime': get(response.timestamp).datetime,\n 'production': {\n 'coal': response.carbon,\n 'gas': round(response.gas + response.combined, 2),\n 'solar': response.solar,\n 'oil': round(response.vapor + response.diesel, 2),\n 'wind': response.wind,\n 'hydro': response.hydraulic,\n 'biomass': response.waste,\n 'nuclear': 0.0,\n 'geothermal': 0.0,\n 
'unknown': response.other\n },\n 'storage': {\n 'hydro': 0.0,\n 'battery': 0.0\n },\n 'source': 'demanda.ree.es',\n }\n\n response_data = validate(response_data, logger,\n floor=FLOORS[zone_key],\n expected_range = expected_range)\n\n if response_data:\n # append if valid\n data.append(response_data)\n\n if len(data) > 1:\n # granularity is 10 minutes, drops points with change in coal > 100MW\n data = validate_production_diffs(data, {'coal': 150}, logger)\n\n return data\n\n\ndef fetch_exchange(zone_key1, zone_key2, session=None, target_datetime=None, logger=None):\n\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n sorted_zone_keys = '->'.join(sorted([zone_key1, zone_key2]))\n\n ses = session or Session()\n\n if sorted_zone_keys == 'ES->ES-IB':\n responses = BalearicIslands(ses, verify=False).get_all()\n if not responses:\n raise ParserException(\"ES-IB\", \"No responses\")\n elif sorted_zone_keys == 'ES->ES-IB-MA' or sorted_zone_keys == 'ES-IB-MA->ES-IB-ME' or sorted_zone_keys == 'ES-IB-IZ->ES-IB-MA':\n responses = Mallorca(ses, verify=False).get_all()\n if not responses:\n raise ParserException(\"ES-IB-MA\", \"No responses\")\n elif sorted_zone_keys == 'ES-IB-FO->ES-IB-IZ':\n responses = Formentera(ses, verify=False).get_all()\n if not responses:\n raise ParserException(\"ES-IB-FO\", \"No responses\")\n else:\n raise NotImplementedError('This exchange pair is not implemented')\n\n exchanges = []\n for response in responses:\n\n if sorted_zone_keys == 'ES-IB-MA->ES-IB-ME':\n net_flow = -1 * response.link['ma_me']\n elif sorted_zone_keys == 'ES-IB-IZ->ES-IB-MA':\n net_flow = response.link['ma_ib']\n elif sorted_zone_keys == 'ES-IB-FO->ES-IB-IZ':\n net_flow = -1 * response.link['ib_fo']\n else:\n net_flow = response.link['pe_ma']\n\n exchange = {\n 'sortedZoneKeys': sorted_zone_keys,\n 'datetime': get(response.timestamp).datetime,\n 'netFlow': net_flow,\n 'source': 'demanda.ree.es',\n }\n\n exchanges.append(exchange)\n\n return exchanges\n\n\nif __name__ == '__main__':\n session = Session\n print(\"fetch_consumption(ES-IB)\")\n print(fetch_consumption('ES-IB', session))\n\n print(\"fetch_production(ES-IB)\")\n print(fetch_production('ES-IB', session))\n\n print(\"fetch_exchange(ES, ES-IB)\")\n print(fetch_exchange('ES', 'ES-IB', session))\n\n print(\"fetch_consumption(ES-IB-FO)\")\n print(fetch_consumption('ES-IB-FO'))\n print(\"fetch_production(ES-IB-FO)\")\n print(fetch_production('ES-IB-FO'))\n print(\"fetch_consumption(ES-IB-IZ)\")\n print(fetch_consumption('ES-IB-IZ'))\n print(\"fetch_production(ES-IB-IZ)\")\n print(fetch_production('ES-IB-IZ'))\n print(\"fetch_consumption(ES-IB-MA)\")\n print(fetch_consumption('ES-IB-MA'))\n print(\"fetch_production(ES-IB-MA)\")\n print(fetch_production('ES-IB-MA'))\n print(\"fetch_consumption(ES-IB-ME)\")\n print(fetch_consumption('ES-IB-ME'))\n print(\"fetch_production(ES-IB-ME)\")\n print(fetch_production('ES-IB-ME'))\n print(\"fetch_exchange(ES, ES-IB-MA)\")\n print(fetch_exchange('ES', 'ES-IB-MA'))\n print(\"fetch_exchange(ES-IB-MA, ES-IB-ME)\")\n print(fetch_exchange('ES-IB-MA', 'ES-IB-ME'))\n print(\"fetch_exchange(ES-IB-MA, ES-IB-IZ)\")\n print(fetch_exchange('ES-IB-MA', 'ES-IB-IZ'))\n print(\"fetch_exchange(ES-IB-IZ, ES-IB-FO)\")\n print(fetch_exchange('ES-IB-IZ', 'ES-IB-FO'))\n", "path": "parsers/ES_IB.py"}]}
| 2,896 | 149 |
gh_patches_debug_19364
|
rasdani/github-patches
|
git_diff
|
microsoft__playwright-python-247
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Fix] sys.stderr.fileno() throws an AttributeError exception
When I tried to deliver the playwright tasks via `celery==4.4.7`, I got an `AttributeError` exception at line 50 in `main.py`.
`except AttributeError:`
I've checked the source from `pytest`; the original code is:
```
except (AttributeError, io.UnsupportedOperation):
# pytest-xdist monkeypatches sys.stderr with an object that is not an actual file.
# https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors
# This is potentially dangerous, but the best we can do.
return sys.__stderr__.fileno()
```
It works perfectly if I change it to the original code, so I think we should be consistent with the original source code to enhance compatibility.
I also checked the official celery docs. The reason I got an `AttributeError` exception is probably that celery's log proxy does not support the `stderr` attribute.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `playwright/main.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import asyncio
16 import io
17 import os
18 import subprocess
19 import sys
20 from pathlib import Path
21 from typing import Any
22
23 from greenlet import greenlet
24
25 from playwright.async_api import Playwright as AsyncPlaywright
26 from playwright.connection import Connection
27 from playwright.helper import Error
28 from playwright.object_factory import create_remote_object
29 from playwright.path_utils import get_file_dirname
30 from playwright.playwright import Playwright
31 from playwright.sync_api import Playwright as SyncPlaywright
32 from playwright.sync_base import dispatcher_fiber, set_dispatcher_fiber
33
34
35 def compute_driver_executable() -> Path:
36 package_path = get_file_dirname()
37 platform = sys.platform
38 if platform == "win32":
39 return package_path / "driver" / "playwright-cli.exe"
40 return package_path / "driver" / "playwright-cli"
41
42
43 async def run_driver_async() -> Connection:
44 driver_executable = compute_driver_executable()
45
46 # Sourced from: https://github.com/pytest-dev/pytest/blob/49827adcb9256c9c9c06a25729421dcc3c385edc/src/_pytest/faulthandler.py#L73-L80
47 def _get_stderr_fileno() -> int:
48 try:
49 return sys.stderr.fileno()
50 except io.UnsupportedOperation:
51 # pytest-xdist monkeypatches sys.stderr with an object that is not an actual file.
52 # https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors
53 # This is potentially dangerous, but the best we can do.
54 return sys.__stderr__.fileno()
55
56 proc = await asyncio.create_subprocess_exec(
57 str(driver_executable),
58 "run-driver",
59 stdin=asyncio.subprocess.PIPE,
60 stdout=asyncio.subprocess.PIPE,
61 stderr=_get_stderr_fileno(),
62 limit=32768,
63 )
64 assert proc.stdout
65 assert proc.stdin
66 connection = Connection(
67 proc.stdout, proc.stdin, create_remote_object, asyncio.get_event_loop()
68 )
69 return connection
70
71
72 def run_driver() -> Connection:
73 loop = asyncio.get_event_loop()
74 if loop.is_running():
75 raise Error("Can only run one Playwright at a time.")
76 return loop.run_until_complete(run_driver_async())
77
78
79 class SyncPlaywrightContextManager:
80 def __init__(self) -> None:
81 self._connection = run_driver()
82 self._playwright: SyncPlaywright
83
84 def __enter__(self) -> SyncPlaywright:
85 g_self = greenlet.getcurrent()
86
87 def callback_wrapper(playwright_impl: Playwright) -> None:
88 self._playwright = SyncPlaywright(playwright_impl)
89 g_self.switch()
90
91 self._connection.call_on_object_with_known_name("Playwright", callback_wrapper)
92 set_dispatcher_fiber(greenlet(lambda: self._connection.run_sync()))
93 dispatcher_fiber().switch()
94 playwright = self._playwright
95 playwright.stop = self.__exit__ # type: ignore
96 return playwright
97
98 def start(self) -> SyncPlaywright:
99 return self.__enter__()
100
101 def __exit__(self, *args: Any) -> None:
102 self._connection.stop_sync()
103
104
105 class AsyncPlaywrightContextManager:
106 def __init__(self) -> None:
107 self._connection: Connection
108
109 async def __aenter__(self) -> AsyncPlaywright:
110 self._connection = await run_driver_async()
111 self._connection.run_async()
112 playwright = AsyncPlaywright(
113 await self._connection.wait_for_object_with_known_name("Playwright")
114 )
115 playwright.stop = self.__aexit__ # type: ignore
116 return playwright
117
118 async def start(self) -> AsyncPlaywright:
119 return await self.__aenter__()
120
121 async def __aexit__(self, *args: Any) -> None:
122 self._connection.stop_async()
123
124
125 if sys.platform == "win32":
126 # Use ProactorEventLoop in 3.7, which is default in 3.8
127 loop = asyncio.ProactorEventLoop()
128 asyncio.set_event_loop(loop)
129
130
131 def main() -> None:
132 driver_executable = compute_driver_executable()
133 my_env = os.environ.copy()
134 my_env["PW_CLI_TARGET_LANG"] = "python"
135 subprocess.run([str(driver_executable), *sys.argv[1:]], env=my_env)
136
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/playwright/main.py b/playwright/main.py
--- a/playwright/main.py
+++ b/playwright/main.py
@@ -43,11 +43,11 @@
async def run_driver_async() -> Connection:
driver_executable = compute_driver_executable()
- # Sourced from: https://github.com/pytest-dev/pytest/blob/49827adcb9256c9c9c06a25729421dcc3c385edc/src/_pytest/faulthandler.py#L73-L80
+ # Sourced from: https://github.com/pytest-dev/pytest/blob/824e9cf67abcfc47df25a59bf32ebd8c25fbd02a/src/_pytest/faulthandler.py#L70-L77
def _get_stderr_fileno() -> int:
try:
return sys.stderr.fileno()
- except io.UnsupportedOperation:
+ except (AttributeError, io.UnsupportedOperation):
# pytest-xdist monkeypatches sys.stderr with an object that is not an actual file.
# https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors
# This is potentially dangerous, but the best we can do.
|
{"golden_diff": "diff --git a/playwright/main.py b/playwright/main.py\n--- a/playwright/main.py\n+++ b/playwright/main.py\n@@ -43,11 +43,11 @@\n async def run_driver_async() -> Connection:\n driver_executable = compute_driver_executable()\n \n- # Sourced from: https://github.com/pytest-dev/pytest/blob/49827adcb9256c9c9c06a25729421dcc3c385edc/src/_pytest/faulthandler.py#L73-L80\n+ # Sourced from: https://github.com/pytest-dev/pytest/blob/824e9cf67abcfc47df25a59bf32ebd8c25fbd02a/src/_pytest/faulthandler.py#L70-L77\n def _get_stderr_fileno() -> int:\n try:\n return sys.stderr.fileno()\n- except io.UnsupportedOperation:\n+ except (AttributeError, io.UnsupportedOperation):\n # pytest-xdist monkeypatches sys.stderr with an object that is not an actual file.\n # https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors\n # This is potentially dangerous, but the best we can do.\n", "issue": "[Fix] sys.stderr.fileno() throws an AttributeError exception\nWhen I tried to delivered the playwright tasks via `celery==4.4.7`, I got an `AttributeError` exception at line 50 in `main.py`. \r\n\r\n`except AttributeError:`\r\n\r\nI've checked the source from `pytest`, the original code is:\r\n\r\n ```\r\n except (AttributeError, io.UnsupportedOperation):\r\n # pytest-xdist monkeypatches sys.stderr with an object that is not an actual file.\r\n # https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors\r\n # This is potentially dangerous, but the best we can do.\r\n return sys.__stderr__.fileno()\r\n```\r\n\r\nIt perfectly works if I changed it to the original code, so I think we should be consistent with the original source code to enhance compatibility.\r\n\r\nI also checked the celery official docs. The reason why I got an `AttributeError` exception probably is that the log proxy of celery is not support the `stderr` attribute.\r\n\n[Fix] sys.stderr.fileno() throws an AttributeError exception\nWhen I tried to delivered the playwright tasks via `celery==4.4.7`, I got an `AttributeError` exception at line 50 in `main.py`. \r\n\r\n`except AttributeError:`\r\n\r\nI've checked the source from `pytest`, the original code is:\r\n\r\n ```\r\n except (AttributeError, io.UnsupportedOperation):\r\n # pytest-xdist monkeypatches sys.stderr with an object that is not an actual file.\r\n # https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors\r\n # This is potentially dangerous, but the best we can do.\r\n return sys.__stderr__.fileno()\r\n```\r\n\r\nIt perfectly works if I changed it to the original code, so I think we should be consistent with the original source code to enhance compatibility.\r\n\r\nI also checked the celery official docs. 
The reason why I got an `AttributeError` exception probably is that the log proxy of celery is not support the `stderr` attribute.\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport asyncio\nimport io\nimport os\nimport subprocess\nimport sys\nfrom pathlib import Path\nfrom typing import Any\n\nfrom greenlet import greenlet\n\nfrom playwright.async_api import Playwright as AsyncPlaywright\nfrom playwright.connection import Connection\nfrom playwright.helper import Error\nfrom playwright.object_factory import create_remote_object\nfrom playwright.path_utils import get_file_dirname\nfrom playwright.playwright import Playwright\nfrom playwright.sync_api import Playwright as SyncPlaywright\nfrom playwright.sync_base import dispatcher_fiber, set_dispatcher_fiber\n\n\ndef compute_driver_executable() -> Path:\n package_path = get_file_dirname()\n platform = sys.platform\n if platform == \"win32\":\n return package_path / \"driver\" / \"playwright-cli.exe\"\n return package_path / \"driver\" / \"playwright-cli\"\n\n\nasync def run_driver_async() -> Connection:\n driver_executable = compute_driver_executable()\n\n # Sourced from: https://github.com/pytest-dev/pytest/blob/49827adcb9256c9c9c06a25729421dcc3c385edc/src/_pytest/faulthandler.py#L73-L80\n def _get_stderr_fileno() -> int:\n try:\n return sys.stderr.fileno()\n except io.UnsupportedOperation:\n # pytest-xdist monkeypatches sys.stderr with an object that is not an actual file.\n # https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors\n # This is potentially dangerous, but the best we can do.\n return sys.__stderr__.fileno()\n\n proc = await asyncio.create_subprocess_exec(\n str(driver_executable),\n \"run-driver\",\n stdin=asyncio.subprocess.PIPE,\n stdout=asyncio.subprocess.PIPE,\n stderr=_get_stderr_fileno(),\n limit=32768,\n )\n assert proc.stdout\n assert proc.stdin\n connection = Connection(\n proc.stdout, proc.stdin, create_remote_object, asyncio.get_event_loop()\n )\n return connection\n\n\ndef run_driver() -> Connection:\n loop = asyncio.get_event_loop()\n if loop.is_running():\n raise Error(\"Can only run one Playwright at a time.\")\n return loop.run_until_complete(run_driver_async())\n\n\nclass SyncPlaywrightContextManager:\n def __init__(self) -> None:\n self._connection = run_driver()\n self._playwright: SyncPlaywright\n\n def __enter__(self) -> SyncPlaywright:\n g_self = greenlet.getcurrent()\n\n def callback_wrapper(playwright_impl: Playwright) -> None:\n self._playwright = SyncPlaywright(playwright_impl)\n g_self.switch()\n\n self._connection.call_on_object_with_known_name(\"Playwright\", callback_wrapper)\n set_dispatcher_fiber(greenlet(lambda: self._connection.run_sync()))\n dispatcher_fiber().switch()\n playwright = self._playwright\n playwright.stop = self.__exit__ # type: ignore\n return playwright\n\n def start(self) -> SyncPlaywright:\n return self.__enter__()\n\n def __exit__(self, *args: Any) -> None:\n 
self._connection.stop_sync()\n\n\nclass AsyncPlaywrightContextManager:\n def __init__(self) -> None:\n self._connection: Connection\n\n async def __aenter__(self) -> AsyncPlaywright:\n self._connection = await run_driver_async()\n self._connection.run_async()\n playwright = AsyncPlaywright(\n await self._connection.wait_for_object_with_known_name(\"Playwright\")\n )\n playwright.stop = self.__aexit__ # type: ignore\n return playwright\n\n async def start(self) -> AsyncPlaywright:\n return await self.__aenter__()\n\n async def __aexit__(self, *args: Any) -> None:\n self._connection.stop_async()\n\n\nif sys.platform == \"win32\":\n # Use ProactorEventLoop in 3.7, which is default in 3.8\n loop = asyncio.ProactorEventLoop()\n asyncio.set_event_loop(loop)\n\n\ndef main() -> None:\n driver_executable = compute_driver_executable()\n my_env = os.environ.copy()\n my_env[\"PW_CLI_TARGET_LANG\"] = \"python\"\n subprocess.run([str(driver_executable), *sys.argv[1:]], env=my_env)\n", "path": "playwright/main.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport asyncio\nimport io\nimport os\nimport subprocess\nimport sys\nfrom pathlib import Path\nfrom typing import Any\n\nfrom greenlet import greenlet\n\nfrom playwright.async_api import Playwright as AsyncPlaywright\nfrom playwright.connection import Connection\nfrom playwright.helper import Error\nfrom playwright.object_factory import create_remote_object\nfrom playwright.path_utils import get_file_dirname\nfrom playwright.playwright import Playwright\nfrom playwright.sync_api import Playwright as SyncPlaywright\nfrom playwright.sync_base import dispatcher_fiber, set_dispatcher_fiber\n\n\ndef compute_driver_executable() -> Path:\n package_path = get_file_dirname()\n platform = sys.platform\n if platform == \"win32\":\n return package_path / \"driver\" / \"playwright-cli.exe\"\n return package_path / \"driver\" / \"playwright-cli\"\n\n\nasync def run_driver_async() -> Connection:\n driver_executable = compute_driver_executable()\n\n # Sourced from: https://github.com/pytest-dev/pytest/blob/824e9cf67abcfc47df25a59bf32ebd8c25fbd02a/src/_pytest/faulthandler.py#L70-L77\n def _get_stderr_fileno() -> int:\n try:\n return sys.stderr.fileno()\n except (AttributeError, io.UnsupportedOperation):\n # pytest-xdist monkeypatches sys.stderr with an object that is not an actual file.\n # https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors\n # This is potentially dangerous, but the best we can do.\n return sys.__stderr__.fileno()\n\n proc = await asyncio.create_subprocess_exec(\n str(driver_executable),\n \"run-driver\",\n stdin=asyncio.subprocess.PIPE,\n stdout=asyncio.subprocess.PIPE,\n stderr=_get_stderr_fileno(),\n limit=32768,\n )\n assert proc.stdout\n assert proc.stdin\n connection = Connection(\n proc.stdout, proc.stdin, create_remote_object, asyncio.get_event_loop()\n )\n return connection\n\n\ndef run_driver() -> Connection:\n loop = 
asyncio.get_event_loop()\n if loop.is_running():\n raise Error(\"Can only run one Playwright at a time.\")\n return loop.run_until_complete(run_driver_async())\n\n\nclass SyncPlaywrightContextManager:\n def __init__(self) -> None:\n self._connection = run_driver()\n self._playwright: SyncPlaywright\n\n def __enter__(self) -> SyncPlaywright:\n g_self = greenlet.getcurrent()\n\n def callback_wrapper(playwright_impl: Playwright) -> None:\n self._playwright = SyncPlaywright(playwright_impl)\n g_self.switch()\n\n self._connection.call_on_object_with_known_name(\"Playwright\", callback_wrapper)\n set_dispatcher_fiber(greenlet(lambda: self._connection.run_sync()))\n dispatcher_fiber().switch()\n playwright = self._playwright\n playwright.stop = self.__exit__ # type: ignore\n return playwright\n\n def start(self) -> SyncPlaywright:\n return self.__enter__()\n\n def __exit__(self, *args: Any) -> None:\n self._connection.stop_sync()\n\n\nclass AsyncPlaywrightContextManager:\n def __init__(self) -> None:\n self._connection: Connection\n\n async def __aenter__(self) -> AsyncPlaywright:\n self._connection = await run_driver_async()\n self._connection.run_async()\n playwright = AsyncPlaywright(\n await self._connection.wait_for_object_with_known_name(\"Playwright\")\n )\n playwright.stop = self.__aexit__ # type: ignore\n return playwright\n\n async def start(self) -> AsyncPlaywright:\n return await self.__aenter__()\n\n async def __aexit__(self, *args: Any) -> None:\n self._connection.stop_async()\n\n\nif sys.platform == \"win32\":\n # Use ProactorEventLoop in 3.7, which is default in 3.8\n loop = asyncio.ProactorEventLoop()\n asyncio.set_event_loop(loop)\n\n\ndef main() -> None:\n driver_executable = compute_driver_executable()\n my_env = os.environ.copy()\n my_env[\"PW_CLI_TARGET_LANG\"] = \"python\"\n subprocess.run([str(driver_executable), *sys.argv[1:]], env=my_env)\n", "path": "playwright/main.py"}]}
| 2,065 | 291 |
gh_patches_debug_517
|
rasdani/github-patches
|
git_diff
|
PaddlePaddle__models-4963
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Code error in emotion_detection/utils.py
utils.py line 62: `seq_len = max_seq_len` appears to be an indentation error, right? It should be inside the `else` block; otherwise `seq_len` cannot represent the real length of the sequence.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PaddleNLP/emotion_detection/utils.py`
Content:
```
1 # Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 EmoTect utilities.
16 """
17 from __future__ import absolute_import
18 from __future__ import division
19 from __future__ import print_function
20
21 import io
22 import os
23 import sys
24 import six
25 import random
26
27 import paddle
28 import paddle.fluid as fluid
29 import numpy as np
30
31
32 def init_checkpoint(exe, init_checkpoint_path, main_program):
33 """
34 Init CheckPoint
35 """
36
37 fluid.load(main_program, init_checkpoint_path, exe)
38
39
40 def word2id(word_dict, query):
41 """
42 Convert word sequence into id list
43 """
44 unk_id = len(word_dict)
45 wids = [
46 word_dict[w] if w in word_dict else unk_id
47 for w in query.strip().split(" ")
48 ]
49 return wids
50
51
52 def pad_wid(wids, max_seq_len=128, pad_id=0):
53 """
54 Padding data to max_seq_len
55 """
56 seq_len = len(wids)
57 if seq_len < max_seq_len:
58 for i in range(max_seq_len - seq_len):
59 wids.append(pad_id)
60 else:
61 wids = wids[:max_seq_len]
62 seq_len = max_seq_len
63 return wids, seq_len
64
65
66 def data_reader(file_path, word_dict, num_examples, phrase, epoch, max_seq_len):
67 """
68 Data reader, which convert word sequence into id list
69 """
70 all_data = []
71 with io.open(file_path, "r", encoding='utf8') as fin:
72 for line in fin:
73 if line.startswith("label"):
74 continue
75 if phrase == "infer":
76 cols = line.strip().split("\t")
77 query = cols[-1] if len(cols) != -1 else cols[0]
78 wids = word2id(word_dict, query)
79 wids, seq_len = pad_wid(wids, max_seq_len)
80 all_data.append((wids, seq_len))
81 else:
82 cols = line.strip().split("\t")
83 if len(cols) != 2:
84 sys.stderr.write("[NOTICE] Error Format Line!")
85 continue
86 label = int(cols[0])
87 query = cols[1].strip()
88 wids = word2id(word_dict, query)
89 wids, seq_len = pad_wid(wids, max_seq_len)
90 all_data.append((wids, label, seq_len))
91 num_examples[phrase] = len(all_data)
92
93 if phrase == "infer":
94
95 def reader():
96 """
97 Infer reader function
98 """
99 for wids, seq_len in all_data:
100 yield wids, seq_len
101
102 return reader
103
104 def reader():
105 """
106 Reader function
107 """
108 for idx in range(epoch):
109 if phrase == "train" and 'ce_mode' not in os.environ:
110 random.shuffle(all_data)
111 for wids, label, seq_len in all_data:
112 yield wids, label, seq_len
113
114 return reader
115
116
117 def load_vocab(file_path):
118 """
119 load the given vocabulary
120 """
121 vocab = {}
122 with io.open(file_path, 'r', encoding='utf8') as fin:
123 wid = 0
124 for line in fin:
125 if line.strip() not in vocab:
126 vocab[line.strip()] = wid
127 wid += 1
128 vocab["<unk>"] = len(vocab)
129 return vocab
130
131
132 def print_arguments(args):
133 """
134 print arguments
135 """
136 print('----------- Configuration Arguments -----------')
137 for arg, value in sorted(six.iteritems(vars(args))):
138 print('%s: %s' % (arg, value))
139 print('------------------------------------------------')
140
141
142 def query2ids(vocab_path, query):
143 """
144 Convert query to id list according to the given vocab
145 """
146 vocab = load_vocab(vocab_path)
147 wids = word2id(vocab, query)
148 return wids
149
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/PaddleNLP/emotion_detection/utils.py b/PaddleNLP/emotion_detection/utils.py
--- a/PaddleNLP/emotion_detection/utils.py
+++ b/PaddleNLP/emotion_detection/utils.py
@@ -59,7 +59,7 @@
wids.append(pad_id)
else:
wids = wids[:max_seq_len]
- seq_len = max_seq_len
+ seq_len = max_seq_len
return wids, seq_len
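
For reference, a minimal sketch of how `pad_wid` reads once the indentation fix above is applied (this mirrors the patched file; only the placement of `seq_len = max_seq_len` changes, so the reported length stays the real sequence length for short inputs and is capped at `max_seq_len` only when the input is truncated):

```python
def pad_wid(wids, max_seq_len=128, pad_id=0):
    """Pad or truncate a list of word ids to max_seq_len."""
    seq_len = len(wids)
    if seq_len < max_seq_len:
        # Short sequence: pad with pad_id; seq_len keeps the real length.
        for _ in range(max_seq_len - seq_len):
            wids.append(pad_id)
    else:
        # Long sequence: truncate and cap the reported length.
        wids = wids[:max_seq_len]
        seq_len = max_seq_len
    return wids, seq_len
```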
|
{"golden_diff": "diff --git a/PaddleNLP/emotion_detection/utils.py b/PaddleNLP/emotion_detection/utils.py\n--- a/PaddleNLP/emotion_detection/utils.py\n+++ b/PaddleNLP/emotion_detection/utils.py\n@@ -59,7 +59,7 @@\n wids.append(pad_id)\n else:\n wids = wids[:max_seq_len]\n- seq_len = max_seq_len\n+ seq_len = max_seq_len\n return wids, seq_len\n", "issue": "emotion_detection/utils.py\u4ee3\u7801\u9519\u8bef\nutils.py\u7b2c62\u884c\uff1aseq_len = max_seq_len \u662f\u7f29\u8fdb\u9519\u8bef\u5427\uff1f\u5e94\u8be5\u5305\u62ec\u5728else\u5185\uff0c\u5426\u5219seq_len\u4e0d\u80fd\u8868\u793asequence\u7684\u771f\u5b9e\u957f\u5ea6\n", "before_files": [{"content": "# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nEmoTect utilities.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport io\nimport os\nimport sys\nimport six\nimport random\n\nimport paddle\nimport paddle.fluid as fluid\nimport numpy as np\n\n\ndef init_checkpoint(exe, init_checkpoint_path, main_program):\n \"\"\"\n Init CheckPoint\n \"\"\"\n\n fluid.load(main_program, init_checkpoint_path, exe)\n\n\ndef word2id(word_dict, query):\n \"\"\"\n Convert word sequence into id list\n \"\"\"\n unk_id = len(word_dict)\n wids = [\n word_dict[w] if w in word_dict else unk_id\n for w in query.strip().split(\" \")\n ]\n return wids\n\n\ndef pad_wid(wids, max_seq_len=128, pad_id=0):\n \"\"\"\n Padding data to max_seq_len\n \"\"\"\n seq_len = len(wids)\n if seq_len < max_seq_len:\n for i in range(max_seq_len - seq_len):\n wids.append(pad_id)\n else:\n wids = wids[:max_seq_len]\n seq_len = max_seq_len\n return wids, seq_len\n\n\ndef data_reader(file_path, word_dict, num_examples, phrase, epoch, max_seq_len):\n \"\"\"\n Data reader, which convert word sequence into id list\n \"\"\"\n all_data = []\n with io.open(file_path, \"r\", encoding='utf8') as fin:\n for line in fin:\n if line.startswith(\"label\"):\n continue\n if phrase == \"infer\":\n cols = line.strip().split(\"\\t\")\n query = cols[-1] if len(cols) != -1 else cols[0]\n wids = word2id(word_dict, query)\n wids, seq_len = pad_wid(wids, max_seq_len)\n all_data.append((wids, seq_len))\n else:\n cols = line.strip().split(\"\\t\")\n if len(cols) != 2:\n sys.stderr.write(\"[NOTICE] Error Format Line!\")\n continue\n label = int(cols[0])\n query = cols[1].strip()\n wids = word2id(word_dict, query)\n wids, seq_len = pad_wid(wids, max_seq_len)\n all_data.append((wids, label, seq_len))\n num_examples[phrase] = len(all_data)\n\n if phrase == \"infer\":\n\n def reader():\n \"\"\"\n Infer reader function\n \"\"\"\n for wids, seq_len in all_data:\n yield wids, seq_len\n\n return reader\n\n def reader():\n \"\"\"\n Reader function\n \"\"\"\n for idx in range(epoch):\n if phrase == \"train\" and 'ce_mode' not in os.environ:\n random.shuffle(all_data)\n for wids, label, seq_len in all_data:\n yield wids, label, seq_len\n\n return reader\n\n\ndef 
load_vocab(file_path):\n \"\"\"\n load the given vocabulary\n \"\"\"\n vocab = {}\n with io.open(file_path, 'r', encoding='utf8') as fin:\n wid = 0\n for line in fin:\n if line.strip() not in vocab:\n vocab[line.strip()] = wid\n wid += 1\n vocab[\"<unk>\"] = len(vocab)\n return vocab\n\n\ndef print_arguments(args):\n \"\"\"\n print arguments\n \"\"\"\n print('----------- Configuration Arguments -----------')\n for arg, value in sorted(six.iteritems(vars(args))):\n print('%s: %s' % (arg, value))\n print('------------------------------------------------')\n\n\ndef query2ids(vocab_path, query):\n \"\"\"\n Convert query to id list according to the given vocab\n \"\"\"\n vocab = load_vocab(vocab_path)\n wids = word2id(vocab, query)\n return wids\n", "path": "PaddleNLP/emotion_detection/utils.py"}], "after_files": [{"content": "# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nEmoTect utilities.\n\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport io\nimport os\nimport sys\nimport six\nimport random\n\nimport paddle\nimport paddle.fluid as fluid\nimport numpy as np\n\n\ndef init_checkpoint(exe, init_checkpoint_path, main_program):\n \"\"\"\n Init CheckPoint\n \"\"\"\n\n fluid.load(main_program, init_checkpoint_path, exe)\n\n\ndef word2id(word_dict, query):\n \"\"\"\n Convert word sequence into id list\n \"\"\"\n unk_id = len(word_dict)\n wids = [\n word_dict[w] if w in word_dict else unk_id\n for w in query.strip().split(\" \")\n ]\n return wids\n\n\ndef pad_wid(wids, max_seq_len=128, pad_id=0):\n \"\"\"\n Padding data to max_seq_len\n \"\"\"\n seq_len = len(wids)\n if seq_len < max_seq_len:\n for i in range(max_seq_len - seq_len):\n wids.append(pad_id)\n else:\n wids = wids[:max_seq_len]\n seq_len = max_seq_len\n return wids, seq_len\n\n\ndef data_reader(file_path, word_dict, num_examples, phrase, epoch, max_seq_len):\n \"\"\"\n Data reader, which convert word sequence into id list\n \"\"\"\n all_data = []\n with io.open(file_path, \"r\", encoding='utf8') as fin:\n for line in fin:\n if line.startswith(\"label\"):\n continue\n if phrase == \"infer\":\n cols = line.strip().split(\"\\t\")\n query = cols[-1] if len(cols) != -1 else cols[0]\n wids = word2id(word_dict, query)\n wids, seq_len = pad_wid(wids, max_seq_len)\n all_data.append((wids, seq_len))\n else:\n cols = line.strip().split(\"\\t\")\n if len(cols) != 2:\n sys.stderr.write(\"[NOTICE] Error Format Line!\")\n continue\n label = int(cols[0])\n query = cols[1].strip()\n wids = word2id(word_dict, query)\n wids, seq_len = pad_wid(wids, max_seq_len)\n all_data.append((wids, label, seq_len))\n num_examples[phrase] = len(all_data)\n\n if phrase == \"infer\":\n\n def reader():\n \"\"\"\n Infer reader function\n \"\"\"\n for wids, seq_len in all_data:\n yield wids, seq_len\n\n return reader\n\n def reader():\n \"\"\"\n Reader function\n \"\"\"\n for idx in range(epoch):\n if phrase == \"train\" and 
'ce_mode' not in os.environ:\n random.shuffle(all_data)\n for wids, label, seq_len in all_data:\n yield wids, label, seq_len\n\n return reader\n\n\ndef load_vocab(file_path):\n \"\"\"\n load the given vocabulary\n \"\"\"\n vocab = {}\n with io.open(file_path, 'r', encoding='utf8') as fin:\n wid = 0\n for line in fin:\n if line.strip() not in vocab:\n vocab[line.strip()] = wid\n wid += 1\n vocab[\"<unk>\"] = len(vocab)\n return vocab\n\n\ndef print_arguments(args):\n \"\"\"\n print arguments\n \"\"\"\n print('----------- Configuration Arguments -----------')\n for arg, value in sorted(six.iteritems(vars(args))):\n print('%s: %s' % (arg, value))\n print('------------------------------------------------')\n\n\ndef query2ids(vocab_path, query):\n \"\"\"\n Convert query to id list according to the given vocab\n \"\"\"\n vocab = load_vocab(vocab_path)\n wids = word2id(vocab, query)\n return wids\n", "path": "PaddleNLP/emotion_detection/utils.py"}]}
| 1,630 | 107 |
gh_patches_debug_338
|
rasdani/github-patches
|
git_diff
|
googleapis__google-cloud-python-1865
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Read the Docs build failing
https://readthedocs.org/projects/gcloud-python/builds/4108022/
https://readthedocs.org/projects/gcloud-python/builds/4108027/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2 import sys
3
4 from setuptools import setup
5 from setuptools import find_packages
6
7 here = os.path.abspath(os.path.dirname(__file__))
8
9
10 with open(os.path.join(here, 'README.rst')) as f:
11 README = f.read()
12
13
14 REQUIREMENTS = [
15 'httplib2 >= 0.9.1',
16 'googleapis-common-protos',
17 'oauth2client >= 2.0.1',
18 'protobuf >= 3.0.0b2, != 3.0.0.b2.post1',
19 'six',
20 ]
21
22 GRPC_EXTRAS = [
23 'grpcio >= 0.14.0',
24 'google-gax >= 0.12.0',
25 'gax-google-pubsub-v1',
26 ]
27
28 if sys.version_info[:2] == (2, 7):
29 REQUIREMENTS.extend(GRPC_EXTRAS)
30
31 setup(
32 name='gcloud',
33 version='0.15.0',
34 description='API Client library for Google Cloud',
35 author='Google Cloud Platform',
36 author_email='[email protected]',
37 long_description=README,
38 scripts=[],
39 url='https://github.com/GoogleCloudPlatform/gcloud-python',
40 packages=find_packages(),
41 license='Apache 2.0',
42 platforms='Posix; MacOS X; Windows',
43 include_package_data=True,
44 zip_safe=False,
45 install_requires=REQUIREMENTS,
46 extras_require={'grpc': GRPC_EXTRAS},
47 classifiers=[
48 'Development Status :: 4 - Beta',
49 'Intended Audience :: Developers',
50 'License :: OSI Approved :: Apache Software License',
51 'Operating System :: OS Independent',
52 'Programming Language :: Python :: 2',
53 'Programming Language :: Python :: 2.6',
54 'Programming Language :: Python :: 2.7',
55 'Programming Language :: Python :: 3',
56 'Programming Language :: Python :: 3.4',
57 'Programming Language :: Python :: 3.5',
58 'Topic :: Internet',
59 ]
60 )
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -25,7 +25,7 @@
'gax-google-pubsub-v1',
]
-if sys.version_info[:2] == (2, 7):
+if sys.version_info[:2] == (2, 7) and 'READTHEDOCS' not in os.environ:
REQUIREMENTS.extend(GRPC_EXTRAS)
setup(
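
For context, a minimal sketch of the patched dependency logic in setup.py; the assumption here is that Read the Docs sets a `READTHEDOCS` environment variable in its build environment, so the gRPC extras are skipped there (presumably because the failing doc builds trace back to installing them) while ordinary CPython 2.7 installs still get them:

```python
import os
import sys

REQUIREMENTS = [
    'httplib2 >= 0.9.1',
    'googleapis-common-protos',
    'oauth2client >= 2.0.1',
    'protobuf >= 3.0.0b2, != 3.0.0.b2.post1',
    'six',
]

GRPC_EXTRAS = [
    'grpcio >= 0.14.0',
    'google-gax >= 0.12.0',
    'gax-google-pubsub-v1',
]

# Skip the native gRPC dependencies when building docs on Read the Docs.
if sys.version_info[:2] == (2, 7) and 'READTHEDOCS' not in os.environ:
    REQUIREMENTS.extend(GRPC_EXTRAS)
```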
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -25,7 +25,7 @@\n 'gax-google-pubsub-v1',\n ]\n \n-if sys.version_info[:2] == (2, 7):\n+if sys.version_info[:2] == (2, 7) and 'READTHEDOCS' not in os.environ:\n REQUIREMENTS.extend(GRPC_EXTRAS)\n \n setup(\n", "issue": "Read the Docs build failing\nhttps://readthedocs.org/projects/gcloud-python/builds/4108022/\nhttps://readthedocs.org/projects/gcloud-python/builds/4108027/\n\n", "before_files": [{"content": "import os\nimport sys\n\nfrom setuptools import setup\nfrom setuptools import find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\nwith open(os.path.join(here, 'README.rst')) as f:\n README = f.read()\n\n\nREQUIREMENTS = [\n 'httplib2 >= 0.9.1',\n 'googleapis-common-protos',\n 'oauth2client >= 2.0.1',\n 'protobuf >= 3.0.0b2, != 3.0.0.b2.post1',\n 'six',\n]\n\nGRPC_EXTRAS = [\n 'grpcio >= 0.14.0',\n 'google-gax >= 0.12.0',\n 'gax-google-pubsub-v1',\n]\n\nif sys.version_info[:2] == (2, 7):\n REQUIREMENTS.extend(GRPC_EXTRAS)\n\nsetup(\n name='gcloud',\n version='0.15.0',\n description='API Client library for Google Cloud',\n author='Google Cloud Platform',\n author_email='[email protected]',\n long_description=README,\n scripts=[],\n url='https://github.com/GoogleCloudPlatform/gcloud-python',\n packages=find_packages(),\n license='Apache 2.0',\n platforms='Posix; MacOS X; Windows',\n include_package_data=True,\n zip_safe=False,\n install_requires=REQUIREMENTS,\n extras_require={'grpc': GRPC_EXTRAS},\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Internet',\n ]\n)\n", "path": "setup.py"}], "after_files": [{"content": "import os\nimport sys\n\nfrom setuptools import setup\nfrom setuptools import find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\nwith open(os.path.join(here, 'README.rst')) as f:\n README = f.read()\n\n\nREQUIREMENTS = [\n 'httplib2 >= 0.9.1',\n 'googleapis-common-protos',\n 'oauth2client >= 2.0.1',\n 'protobuf >= 3.0.0b2, != 3.0.0.b2.post1',\n 'six',\n]\n\nGRPC_EXTRAS = [\n 'grpcio >= 0.14.0',\n 'google-gax >= 0.11.0',\n 'gax-google-pubsub-v1',\n]\n\nif sys.version_info[:2] == (2, 7) and 'READTHEDOCS' not in os.environ:\n REQUIREMENTS.extend(GRPC_EXTRAS)\n\nsetup(\n name='gcloud',\n version='0.15.0',\n description='API Client library for Google Cloud',\n author='Google Cloud Platform',\n author_email='[email protected]',\n long_description=README,\n scripts=[],\n url='https://github.com/GoogleCloudPlatform/gcloud-python',\n packages=find_packages(),\n license='Apache 2.0',\n platforms='Posix; MacOS X; Windows',\n include_package_data=True,\n zip_safe=False,\n install_requires=REQUIREMENTS,\n extras_require={'grpc': GRPC_EXTRAS},\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 
3.5',\n 'Topic :: Internet',\n ]\n)\n", "path": "setup.py"}]}
| 862 | 100 |
gh_patches_debug_60829
|
rasdani/github-patches
|
git_diff
|
microsoft__AzureTRE-1754
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release version 0.3
## Description
As a TRE developer
I want to release the current code base as version 0.3
So that people can use a more stable version going forward
## Acceptance criteria
- [ ] All core apps are bumped to 0.3
- [ ] All bundles are bumped to 0.3
- [ ] A tag is created
- [ ] A release is created
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `api_app/_version.py`
Content:
```
1 __version__ = "0.2.28"
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/api_app/_version.py b/api_app/_version.py
--- a/api_app/_version.py
+++ b/api_app/_version.py
@@ -1 +1 @@
-__version__ = "0.2.28"
+__version__ = "0.3.0"
|
{"golden_diff": "diff --git a/api_app/_version.py b/api_app/_version.py\n--- a/api_app/_version.py\n+++ b/api_app/_version.py\n@@ -1 +1 @@\n-__version__ = \"0.2.28\"\n+__version__ = \"0.3.0\"\n", "issue": "Release version 0.3\n## Description\r\n\r\nAs a TRE developer \r\nI want to release current code base as version 0.3\r\nSo that people can use a more stable version going forward\r\n\r\n## Acceptance criteria\r\n\r\n- [ ] All core apps are bumped to 0.3\r\n- [ ] All bundles are bumped to 0.3\r\n- [ ] A tag is created\r\n- [ ] A release is created\r\n\n", "before_files": [{"content": "__version__ = \"0.2.28\"\n", "path": "api_app/_version.py"}], "after_files": [{"content": "__version__ = \"0.3.0\"\n", "path": "api_app/_version.py"}]}
| 363 | 63 |
gh_patches_debug_26379
|
rasdani/github-patches
|
git_diff
|
buildbot__buildbot-1572
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix bytes/unicode issue to fix test on Python 3
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `master/buildbot/db/schedulers.py`
Content:
```
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16 import sqlalchemy as sa
17 import sqlalchemy.exc
18
19 from buildbot.db import NULL
20 from buildbot.db import base
21 from twisted.internet import defer
22
23
24 class SchedulerAlreadyClaimedError(Exception):
25 pass
26
27
28 class SchedulersConnectorComponent(base.DBConnectorComponent):
29 # Documentation is in developer/db.rst
30
31 def classifyChanges(self, schedulerid, classifications):
32 def thd(conn):
33 transaction = conn.begin()
34 tbl = self.db.model.scheduler_changes
35 ins_q = tbl.insert()
36 upd_q = tbl.update(
37 ((tbl.c.schedulerid == schedulerid)
38 & (tbl.c.changeid == sa.bindparam('wc_changeid'))))
39 for changeid, important in classifications.items():
40 # convert the 'important' value into an integer, since that
41 # is the column type
42 imp_int = important and 1 or 0
43 try:
44 conn.execute(ins_q,
45 schedulerid=schedulerid,
46 changeid=changeid,
47 important=imp_int)
48 except (sqlalchemy.exc.ProgrammingError,
49 sqlalchemy.exc.IntegrityError):
50 transaction.rollback()
51 transaction = conn.begin()
52 # insert failed, so try an update
53 conn.execute(upd_q,
54 wc_changeid=changeid,
55 important=imp_int)
56
57 transaction.commit()
58 return self.db.pool.do(thd)
59
60 def flushChangeClassifications(self, schedulerid, less_than=None):
61 def thd(conn):
62 sch_ch_tbl = self.db.model.scheduler_changes
63 wc = (sch_ch_tbl.c.schedulerid == schedulerid)
64 if less_than is not None:
65 wc = wc & (sch_ch_tbl.c.changeid < less_than)
66 q = sch_ch_tbl.delete(whereclause=wc)
67 conn.execute(q)
68 return self.db.pool.do(thd)
69
70 def getChangeClassifications(self, schedulerid, branch=-1,
71 repository=-1, project=-1,
72 codebase=-1):
73 # -1 here stands for "argument not given", since None has meaning
74 # as a branch
75 def thd(conn):
76 sch_ch_tbl = self.db.model.scheduler_changes
77 ch_tbl = self.db.model.changes
78
79 wc = (sch_ch_tbl.c.schedulerid == schedulerid)
80
81 # may need to filter further based on branch, etc
82 extra_wheres = []
83 if branch != -1:
84 extra_wheres.append(ch_tbl.c.branch == branch)
85 if repository != -1:
86 extra_wheres.append(ch_tbl.c.repository == repository)
87 if project != -1:
88 extra_wheres.append(ch_tbl.c.project == project)
89 if codebase != -1:
90 extra_wheres.append(ch_tbl.c.codebase == codebase)
91
92 # if we need to filter further append those, as well as a join
93 # on changeid (but just once for that one)
94 if extra_wheres:
95 wc &= (sch_ch_tbl.c.changeid == ch_tbl.c.changeid)
96 for w in extra_wheres:
97 wc &= w
98
99 q = sa.select(
100 [sch_ch_tbl.c.changeid, sch_ch_tbl.c.important],
101 whereclause=wc)
102 return dict([(r.changeid, [False, True][r.important])
103 for r in conn.execute(q)])
104 return self.db.pool.do(thd)
105
106 def findSchedulerId(self, name):
107 tbl = self.db.model.schedulers
108 name_hash = self.hashColumns(name)
109 return self.findSomethingId(
110 tbl=tbl,
111 whereclause=(tbl.c.name_hash == name_hash),
112 insert_values=dict(
113 name=name,
114 name_hash=name_hash,
115 ))
116
117 def setSchedulerMaster(self, schedulerid, masterid):
118 def thd(conn):
119 sch_mst_tbl = self.db.model.scheduler_masters
120
121 # handle the masterid=None case to get it out of the way
122 if masterid is None:
123 q = sch_mst_tbl.delete(
124 whereclause=(sch_mst_tbl.c.schedulerid == schedulerid))
125 conn.execute(q)
126 return
127
128 # try a blind insert..
129 try:
130 q = sch_mst_tbl.insert()
131 conn.execute(q,
132 dict(schedulerid=schedulerid, masterid=masterid))
133 except (sa.exc.IntegrityError, sa.exc.ProgrammingError):
134 # someone already owns this scheduler.
135 raise SchedulerAlreadyClaimedError
136
137 return self.db.pool.do(thd)
138
139 @defer.inlineCallbacks
140 def getScheduler(self, schedulerid):
141 sch = yield self.getSchedulers(_schedulerid=schedulerid)
142 if sch:
143 defer.returnValue(sch[0])
144
145 def getSchedulers(self, active=None, masterid=None, _schedulerid=None):
146 def thd(conn):
147 sch_tbl = self.db.model.schedulers
148 sch_mst_tbl = self.db.model.scheduler_masters
149
150 # handle the trivial case of masterid=xx and active=False
151 if masterid is not None and active is not None and not active:
152 return []
153
154 join = sch_tbl.outerjoin(sch_mst_tbl,
155 (sch_tbl.c.id == sch_mst_tbl.c.schedulerid))
156
157 # if we're given a _schedulerid, select only that row
158 wc = None
159 if _schedulerid:
160 wc = (sch_tbl.c.id == _schedulerid)
161 else:
162 # otherwise, filter with active, if necessary
163 if masterid is not None:
164 wc = (sch_mst_tbl.c.masterid == masterid)
165 elif active:
166 wc = (sch_mst_tbl.c.masterid != NULL)
167 elif active is not None:
168 wc = (sch_mst_tbl.c.masterid == NULL)
169
170 q = sa.select([sch_tbl.c.id, sch_tbl.c.name,
171 sch_mst_tbl.c.masterid],
172 from_obj=join, whereclause=wc)
173
174 return [dict(id=row.id, name=row.name,
175 masterid=row.masterid)
176 for row in conn.execute(q).fetchall()]
177 return self.db.pool.do(thd)
178
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/master/buildbot/db/schedulers.py b/master/buildbot/db/schedulers.py
--- a/master/buildbot/db/schedulers.py
+++ b/master/buildbot/db/schedulers.py
@@ -30,13 +30,13 @@
def classifyChanges(self, schedulerid, classifications):
def thd(conn):
- transaction = conn.begin()
tbl = self.db.model.scheduler_changes
ins_q = tbl.insert()
upd_q = tbl.update(
((tbl.c.schedulerid == schedulerid)
& (tbl.c.changeid == sa.bindparam('wc_changeid'))))
for changeid, important in classifications.items():
+ transaction = conn.begin()
# convert the 'important' value into an integer, since that
# is the column type
imp_int = important and 1 or 0
@@ -54,7 +54,7 @@
wc_changeid=changeid,
important=imp_int)
- transaction.commit()
+ transaction.commit()
return self.db.pool.do(thd)
def flushChangeClassifications(self, schedulerid, less_than=None):
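
Condensed from the patched file, the reworked `classifyChanges` now opens and commits one transaction per change, so a rollback triggered by the insert-then-update fallback no longer discards classifications already written for earlier changes:

```python
def classifyChanges(self, schedulerid, classifications):
    def thd(conn):
        tbl = self.db.model.scheduler_changes
        ins_q = tbl.insert()
        upd_q = tbl.update(
            ((tbl.c.schedulerid == schedulerid)
             & (tbl.c.changeid == sa.bindparam('wc_changeid'))))
        for changeid, important in classifications.items():
            transaction = conn.begin()  # one transaction per change
            imp_int = important and 1 or 0
            try:
                conn.execute(ins_q,
                             schedulerid=schedulerid,
                             changeid=changeid,
                             important=imp_int)
            except (sqlalchemy.exc.ProgrammingError,
                    sqlalchemy.exc.IntegrityError):
                transaction.rollback()
                transaction = conn.begin()
                # insert failed, so try an update
                conn.execute(upd_q,
                             wc_changeid=changeid,
                             important=imp_int)
            transaction.commit()  # commit while still inside the loop
    return self.db.pool.do(thd)
```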
|
{"golden_diff": "diff --git a/master/buildbot/db/schedulers.py b/master/buildbot/db/schedulers.py\n--- a/master/buildbot/db/schedulers.py\n+++ b/master/buildbot/db/schedulers.py\n@@ -30,13 +30,13 @@\n \n def classifyChanges(self, schedulerid, classifications):\n def thd(conn):\n- transaction = conn.begin()\n tbl = self.db.model.scheduler_changes\n ins_q = tbl.insert()\n upd_q = tbl.update(\n ((tbl.c.schedulerid == schedulerid)\n & (tbl.c.changeid == sa.bindparam('wc_changeid'))))\n for changeid, important in classifications.items():\n+ transaction = conn.begin()\n # convert the 'important' value into an integer, since that\n # is the column type\n imp_int = important and 1 or 0\n@@ -54,7 +54,7 @@\n wc_changeid=changeid,\n important=imp_int)\n \n- transaction.commit()\n+ transaction.commit()\n return self.db.pool.do(thd)\n \n def flushChangeClassifications(self, schedulerid, less_than=None):\n", "issue": "Fix bytes/unicode issue to fix test on Python 3\n\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nimport sqlalchemy as sa\nimport sqlalchemy.exc\n\nfrom buildbot.db import NULL\nfrom buildbot.db import base\nfrom twisted.internet import defer\n\n\nclass SchedulerAlreadyClaimedError(Exception):\n pass\n\n\nclass SchedulersConnectorComponent(base.DBConnectorComponent):\n # Documentation is in developer/db.rst\n\n def classifyChanges(self, schedulerid, classifications):\n def thd(conn):\n transaction = conn.begin()\n tbl = self.db.model.scheduler_changes\n ins_q = tbl.insert()\n upd_q = tbl.update(\n ((tbl.c.schedulerid == schedulerid)\n & (tbl.c.changeid == sa.bindparam('wc_changeid'))))\n for changeid, important in classifications.items():\n # convert the 'important' value into an integer, since that\n # is the column type\n imp_int = important and 1 or 0\n try:\n conn.execute(ins_q,\n schedulerid=schedulerid,\n changeid=changeid,\n important=imp_int)\n except (sqlalchemy.exc.ProgrammingError,\n sqlalchemy.exc.IntegrityError):\n transaction.rollback()\n transaction = conn.begin()\n # insert failed, so try an update\n conn.execute(upd_q,\n wc_changeid=changeid,\n important=imp_int)\n\n transaction.commit()\n return self.db.pool.do(thd)\n\n def flushChangeClassifications(self, schedulerid, less_than=None):\n def thd(conn):\n sch_ch_tbl = self.db.model.scheduler_changes\n wc = (sch_ch_tbl.c.schedulerid == schedulerid)\n if less_than is not None:\n wc = wc & (sch_ch_tbl.c.changeid < less_than)\n q = sch_ch_tbl.delete(whereclause=wc)\n conn.execute(q)\n return self.db.pool.do(thd)\n\n def getChangeClassifications(self, schedulerid, branch=-1,\n repository=-1, project=-1,\n codebase=-1):\n # -1 here stands for \"argument not given\", since None has meaning\n # as a branch\n def thd(conn):\n sch_ch_tbl = self.db.model.scheduler_changes\n ch_tbl = self.db.model.changes\n\n wc = (sch_ch_tbl.c.schedulerid == 
schedulerid)\n\n # may need to filter further based on branch, etc\n extra_wheres = []\n if branch != -1:\n extra_wheres.append(ch_tbl.c.branch == branch)\n if repository != -1:\n extra_wheres.append(ch_tbl.c.repository == repository)\n if project != -1:\n extra_wheres.append(ch_tbl.c.project == project)\n if codebase != -1:\n extra_wheres.append(ch_tbl.c.codebase == codebase)\n\n # if we need to filter further append those, as well as a join\n # on changeid (but just once for that one)\n if extra_wheres:\n wc &= (sch_ch_tbl.c.changeid == ch_tbl.c.changeid)\n for w in extra_wheres:\n wc &= w\n\n q = sa.select(\n [sch_ch_tbl.c.changeid, sch_ch_tbl.c.important],\n whereclause=wc)\n return dict([(r.changeid, [False, True][r.important])\n for r in conn.execute(q)])\n return self.db.pool.do(thd)\n\n def findSchedulerId(self, name):\n tbl = self.db.model.schedulers\n name_hash = self.hashColumns(name)\n return self.findSomethingId(\n tbl=tbl,\n whereclause=(tbl.c.name_hash == name_hash),\n insert_values=dict(\n name=name,\n name_hash=name_hash,\n ))\n\n def setSchedulerMaster(self, schedulerid, masterid):\n def thd(conn):\n sch_mst_tbl = self.db.model.scheduler_masters\n\n # handle the masterid=None case to get it out of the way\n if masterid is None:\n q = sch_mst_tbl.delete(\n whereclause=(sch_mst_tbl.c.schedulerid == schedulerid))\n conn.execute(q)\n return\n\n # try a blind insert..\n try:\n q = sch_mst_tbl.insert()\n conn.execute(q,\n dict(schedulerid=schedulerid, masterid=masterid))\n except (sa.exc.IntegrityError, sa.exc.ProgrammingError):\n # someone already owns this scheduler.\n raise SchedulerAlreadyClaimedError\n\n return self.db.pool.do(thd)\n\n @defer.inlineCallbacks\n def getScheduler(self, schedulerid):\n sch = yield self.getSchedulers(_schedulerid=schedulerid)\n if sch:\n defer.returnValue(sch[0])\n\n def getSchedulers(self, active=None, masterid=None, _schedulerid=None):\n def thd(conn):\n sch_tbl = self.db.model.schedulers\n sch_mst_tbl = self.db.model.scheduler_masters\n\n # handle the trivial case of masterid=xx and active=False\n if masterid is not None and active is not None and not active:\n return []\n\n join = sch_tbl.outerjoin(sch_mst_tbl,\n (sch_tbl.c.id == sch_mst_tbl.c.schedulerid))\n\n # if we're given a _schedulerid, select only that row\n wc = None\n if _schedulerid:\n wc = (sch_tbl.c.id == _schedulerid)\n else:\n # otherwise, filter with active, if necessary\n if masterid is not None:\n wc = (sch_mst_tbl.c.masterid == masterid)\n elif active:\n wc = (sch_mst_tbl.c.masterid != NULL)\n elif active is not None:\n wc = (sch_mst_tbl.c.masterid == NULL)\n\n q = sa.select([sch_tbl.c.id, sch_tbl.c.name,\n sch_mst_tbl.c.masterid],\n from_obj=join, whereclause=wc)\n\n return [dict(id=row.id, name=row.name,\n masterid=row.masterid)\n for row in conn.execute(q).fetchall()]\n return self.db.pool.do(thd)\n", "path": "master/buildbot/db/schedulers.py"}], "after_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nimport sqlalchemy as sa\nimport sqlalchemy.exc\n\nfrom buildbot.db import NULL\nfrom buildbot.db import base\nfrom twisted.internet import defer\n\n\nclass SchedulerAlreadyClaimedError(Exception):\n pass\n\n\nclass SchedulersConnectorComponent(base.DBConnectorComponent):\n # Documentation is in developer/db.rst\n\n def classifyChanges(self, schedulerid, classifications):\n def thd(conn):\n tbl = self.db.model.scheduler_changes\n ins_q = tbl.insert()\n upd_q = tbl.update(\n ((tbl.c.schedulerid == schedulerid)\n & (tbl.c.changeid == sa.bindparam('wc_changeid'))))\n for changeid, important in classifications.items():\n transaction = conn.begin()\n # convert the 'important' value into an integer, since that\n # is the column type\n imp_int = important and 1 or 0\n try:\n conn.execute(ins_q,\n schedulerid=schedulerid,\n changeid=changeid,\n important=imp_int)\n except (sqlalchemy.exc.ProgrammingError,\n sqlalchemy.exc.IntegrityError):\n transaction.rollback()\n transaction = conn.begin()\n # insert failed, so try an update\n conn.execute(upd_q,\n wc_changeid=changeid,\n important=imp_int)\n\n transaction.commit()\n return self.db.pool.do(thd)\n\n def flushChangeClassifications(self, schedulerid, less_than=None):\n def thd(conn):\n sch_ch_tbl = self.db.model.scheduler_changes\n wc = (sch_ch_tbl.c.schedulerid == schedulerid)\n if less_than is not None:\n wc = wc & (sch_ch_tbl.c.changeid < less_than)\n q = sch_ch_tbl.delete(whereclause=wc)\n conn.execute(q)\n return self.db.pool.do(thd)\n\n def getChangeClassifications(self, schedulerid, branch=-1,\n repository=-1, project=-1,\n codebase=-1):\n # -1 here stands for \"argument not given\", since None has meaning\n # as a branch\n def thd(conn):\n sch_ch_tbl = self.db.model.scheduler_changes\n ch_tbl = self.db.model.changes\n\n wc = (sch_ch_tbl.c.schedulerid == schedulerid)\n\n # may need to filter further based on branch, etc\n extra_wheres = []\n if branch != -1:\n extra_wheres.append(ch_tbl.c.branch == branch)\n if repository != -1:\n extra_wheres.append(ch_tbl.c.repository == repository)\n if project != -1:\n extra_wheres.append(ch_tbl.c.project == project)\n if codebase != -1:\n extra_wheres.append(ch_tbl.c.codebase == codebase)\n\n # if we need to filter further append those, as well as a join\n # on changeid (but just once for that one)\n if extra_wheres:\n wc &= (sch_ch_tbl.c.changeid == ch_tbl.c.changeid)\n for w in extra_wheres:\n wc &= w\n\n q = sa.select(\n [sch_ch_tbl.c.changeid, sch_ch_tbl.c.important],\n whereclause=wc)\n return dict([(r.changeid, [False, True][r.important])\n for r in conn.execute(q)])\n return self.db.pool.do(thd)\n\n def findSchedulerId(self, name):\n tbl = self.db.model.schedulers\n name_hash = self.hashColumns(name)\n return self.findSomethingId(\n tbl=tbl,\n whereclause=(tbl.c.name_hash == name_hash),\n insert_values=dict(\n name=name,\n name_hash=name_hash,\n ))\n\n def setSchedulerMaster(self, schedulerid, masterid):\n def thd(conn):\n sch_mst_tbl = self.db.model.scheduler_masters\n\n # handle the masterid=None case to get it out of the way\n if masterid is None:\n q = sch_mst_tbl.delete(\n whereclause=(sch_mst_tbl.c.schedulerid == schedulerid))\n conn.execute(q)\n return\n\n # try a blind insert..\n 
try:\n q = sch_mst_tbl.insert()\n conn.execute(q,\n dict(schedulerid=schedulerid, masterid=masterid))\n except (sa.exc.IntegrityError, sa.exc.ProgrammingError):\n # someone already owns this scheduler.\n raise SchedulerAlreadyClaimedError\n\n return self.db.pool.do(thd)\n\n @defer.inlineCallbacks\n def getScheduler(self, schedulerid):\n sch = yield self.getSchedulers(_schedulerid=schedulerid)\n if sch:\n defer.returnValue(sch[0])\n\n def getSchedulers(self, active=None, masterid=None, _schedulerid=None):\n def thd(conn):\n sch_tbl = self.db.model.schedulers\n sch_mst_tbl = self.db.model.scheduler_masters\n\n # handle the trivial case of masterid=xx and active=False\n if masterid is not None and active is not None and not active:\n return []\n\n join = sch_tbl.outerjoin(sch_mst_tbl,\n (sch_tbl.c.id == sch_mst_tbl.c.schedulerid))\n\n # if we're given a _schedulerid, select only that row\n wc = None\n if _schedulerid:\n wc = (sch_tbl.c.id == _schedulerid)\n else:\n # otherwise, filter with active, if necessary\n if masterid is not None:\n wc = (sch_mst_tbl.c.masterid == masterid)\n elif active:\n wc = (sch_mst_tbl.c.masterid != NULL)\n elif active is not None:\n wc = (sch_mst_tbl.c.masterid == NULL)\n\n q = sa.select([sch_tbl.c.id, sch_tbl.c.name,\n sch_mst_tbl.c.masterid],\n from_obj=join, whereclause=wc)\n\n return [dict(id=row.id, name=row.name,\n masterid=row.masterid)\n for row in conn.execute(q).fetchall()]\n return self.db.pool.do(thd)\n", "path": "master/buildbot/db/schedulers.py"}]}
| 2,160 | 242 |
gh_patches_debug_3399
|
rasdani/github-patches
|
git_diff
|
ansible__molecule-2308
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"devices" support in docker driver
# Issue Type
- Feature request
# Molecule and Ansible details
```
$ ansible --version && molecule --version
ansible 2.8.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/sylvain/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/sylvain/venv/molecule/lib/python3.6/site-packages/ansible
executable location = /home/sylvain/venv/molecule/bin/ansible
python version = 3.6.8 (default, Aug 20 2019, 17:12:48) [GCC 8.3.0]
molecule, version 2.22
```
Molecule installation method (one of):
- pip
Ansible installation method (one of):
- pip
# Desired Behavior
The docker driver does not support the `devices` option available in the `docker_container` Ansible module, so we can't test roles that use fuse, for example, except by using the `privileged` option, which is quite dangerous. Can you please add support for this option?
"devices" support in docker driver
# Issue Type
- Feature request
# Molecule and Ansible details
```
$ ansible --version && molecule --version
ansible 2.8.5
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/sylvain/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/sylvain/venv/molecule/lib/python3.6/site-packages/ansible
executable location = /home/sylvain/venv/molecule/bin/ansible
python version = 3.6.8 (default, Aug 20 2019, 17:12:48) [GCC 8.3.0]
molecule, version 2.22
```
Molecule installation method (one of):
- pip
Ansible installation method (one of):
- pip
# Desired Behavior
The docker driver does not support the `devices` option available in the `docker_container` ansible module, so we can't test roles that are using fuse for example, except by using `privileged` option which is quite dangerous. Can you please support for this option?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `molecule/driver/docker.py`
Content:
```
1 # Copyright (c) 2015-2018 Cisco Systems, Inc.
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to
5 # deal in the Software without restriction, including without limitation the
6 # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
7 # sell copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
18 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
19 # DEALINGS IN THE SOFTWARE.
20
21 from __future__ import absolute_import
22
23 import os
24
25 from molecule import logger
26 from molecule.api import Driver
27 from molecule.util import lru_cache
28 from molecule.util import sysexit_with_message
29
30 log = logger.get_logger(__name__)
31
32
33 class Docker(Driver):
34 """
35 The class responsible for managing `Docker`_ containers. `Docker`_ is
36 the default driver used in Molecule.
37
38 Molecule leverages Ansible's `docker_container`_ module, by mapping
39 variables from ``molecule.yml`` into ``create.yml`` and ``destroy.yml``.
40
41 .. _`docker_container`: https://docs.ansible.com/ansible/latest/docker_container_module.html
42 .. _`Docker Security Configuration`: https://docs.docker.com/engine/reference/run/#security-configuration
43 .. _`Docker daemon socket options`: https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option
44
45 .. code-block:: yaml
46
47 driver:
48 name: docker
49 platforms:
50 - name: instance
51 hostname: instance
52 image: image_name:tag
53 dockerfile: Dockerfile.j2
54 pull: True|False
55 pre_build_image: True|False
56 registry:
57 url: registry.example.com
58 credentials:
59 username: $USERNAME
60 password: $PASSWORD
61 email: [email protected]
62 user: root
63 override_command: True|False
64 command: sleep infinity
65 tty: True|False
66 pid_mode: host
67 privileged: True|False
68 security_opts:
69 - seccomp=unconfined
70 volumes:
71 - /sys/fs/cgroup:/sys/fs/cgroup:ro
72 keep_volumes: True|False
73 tmpfs:
74 - /tmp
75 - /run
76 capabilities:
77 - SYS_ADMIN
78 sysctls:
79 net.core.somaxconn: 1024
80 net.ipv4.tcp_syncookies: 0
81 exposed_ports:
82 - 53/udp
83 - 53/tcp
84 published_ports:
85 - 0.0.0.0:8053:53/udp
86 - 0.0.0.0:8053:53/tcp
87 ulimits:
88 - nofile:262144:262144
89 dns_servers:
90 - 8.8.8.8
91 etc_hosts: "{'host1.example.com': '10.3.1.5'}"
92 networks:
93 - name: foo
94 - name: bar
95 network_mode: host
96 purge_networks: true
97 docker_host: tcp://localhost:12376
98 cacert_path: /foo/bar/ca.pem
99 cert_path: /foo/bar/cert.pem
100 key_path: /foo/bar/key.pem
101 tls_verify: true
102 env:
103 FOO: bar
104 restart_policy: on-failure
105 restart_retries: 1
106 buildargs:
107 http_proxy: http://proxy.example.com:8080/
108
109 If specifying the `CMD`_ directive in your ``Dockerfile.j2`` or consuming a
110 built image which declares a ``CMD`` directive, then you must set
111 ``override_command: False``. Otherwise, Molecule takes care to honour the
112 value of the ``command`` key or uses the default of ``bash -c "while true;
113 do sleep 10000; done"`` to run the container until it is provisioned.
114
115 When attempting to utilize a container image with `systemd`_ as your init
116 system inside the container to simulate a real machine, make sure to set
117 the ``privileged``, ``volumes``, ``command``, and ``environment``
118 values. An example using the ``centos:7`` image is below:
119
120 .. note:: Do note that running containers in privileged mode is considerably
121 less secure. For details, please reference `Docker Security
122 Configuration`_
123
124 .. note:: With the environment variable ``DOCKER_HOST`` the user can bind
125 Molecule to a different `Docker`_ socket than the default
126 ``unix:///var/run/docker.sock``. ``tcp``, ``fd`` and ``ssh``
127 socket types can be configured. For details, please reference
128 `Docker daemon socket options`_.
129
130 .. code-block:: yaml
131
132 platforms:
133 - name: instance
134 image: centos:7
135 privileged: true
136 volumes:
137 - "/sys/fs/cgroup:/sys/fs/cgroup:rw"
138 command: "/usr/sbin/init"
139 tty: True
140 environment:
141 container: docker
142
143 .. code-block:: bash
144
145 $ pip install molecule[docker]
146
147 When pulling from a private registry, it is the user's discretion to decide
148 whether to use hard-code strings or environment variables for passing
149 credentials to molecule.
150
151 .. important::
152
153 Hard-coded credentials in ``molecule.yml`` should be avoided, instead use
154 `variable substitution`_.
155
156 Provide a list of files Molecule will preserve, relative to the scenario
157 ephemeral directory, after any ``destroy`` subcommand execution.
158
159 .. code-block:: yaml
160
161 driver:
162 name: docker
163 safe_files:
164 - foo
165
166 .. _`Docker`: https://www.docker.com
167 .. _`systemd`: https://www.freedesktop.org/wiki/Software/systemd/
168 .. _`CMD`: https://docs.docker.com/engine/reference/builder/#cmd
169 """ # noqa
170
171 def __init__(self, config=None):
172 super(Docker, self).__init__(config)
173 self._name = 'docker'
174
175 @property
176 def name(self):
177 return self._name
178
179 @name.setter
180 def name(self, value):
181 self._name = value
182
183 @property
184 def login_cmd_template(self):
185 return (
186 'docker exec '
187 '-e COLUMNS={columns} '
188 '-e LINES={lines} '
189 '-e TERM=bash '
190 '-e TERM=xterm '
191 '-ti {instance} bash'
192 )
193
194 @property
195 def default_safe_files(self):
196 return [os.path.join(self._config.scenario.ephemeral_directory, 'Dockerfile')]
197
198 @property
199 def default_ssh_connection_options(self):
200 return []
201
202 def login_options(self, instance_name):
203 return {'instance': instance_name}
204
205 def ansible_connection_options(self, instance_name):
206 return {'ansible_connection': 'docker'}
207
208 @lru_cache()
209 def sanity_checks(self):
210 """Implement Docker driver sanity checks."""
211
212 log.info("Sanity checks: '{}'".format(self._name))
213
214 try:
215 # ansible >= 2.8
216 from ansible.module_utils.docker.common import HAS_DOCKER_PY
217 except ImportError:
218 # ansible < 2.8
219 from ansible.module_utils.docker_common import HAS_DOCKER_PY
220
221 if not HAS_DOCKER_PY:
222 msg = (
223 'Missing Docker driver dependency. Please '
224 "install via 'molecule[docker]' or refer to "
225 'your INSTALL.rst driver documentation file'
226 )
227 sysexit_with_message(msg)
228
229 try:
230 import docker
231 import requests
232
233 docker_client = docker.from_env()
234 docker_client.ping()
235 except requests.exceptions.ConnectionError:
236 msg = (
237 'Unable to contact the Docker daemon. '
238 'Please refer to https://docs.docker.com/config/daemon/ '
239 'for managing the daemon'
240 )
241 sysexit_with_message(msg)
242
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/molecule/driver/docker.py b/molecule/driver/docker.py
--- a/molecule/driver/docker.py
+++ b/molecule/driver/docker.py
@@ -67,6 +67,8 @@
privileged: True|False
security_opts:
- seccomp=unconfined
+ devices:
+ - /dev/fuse:/dev/fuse:rwm
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
keep_volumes: True|False
|
{"golden_diff": "diff --git a/molecule/driver/docker.py b/molecule/driver/docker.py\n--- a/molecule/driver/docker.py\n+++ b/molecule/driver/docker.py\n@@ -67,6 +67,8 @@\n privileged: True|False\n security_opts:\n - seccomp=unconfined\n+ devices:\n+ - /dev/fuse:/dev/fuse:rwm\n volumes:\n - /sys/fs/cgroup:/sys/fs/cgroup:ro\n keep_volumes: True|False\n", "issue": "\"devices\" support in docker driver\n# Issue Type\r\n\r\n- Feature request\r\n\r\n# Molecule and Ansible details\r\n\r\n```\r\n$ ansible --version && molecule --version\r\nansible 2.8.5\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = ['/home/sylvain/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /home/sylvain/venv/molecule/lib/python3.6/site-packages/ansible\r\n executable location = /home/sylvain/venv/molecule/bin/ansible\r\n python version = 3.6.8 (default, Aug 20 2019, 17:12:48) [GCC 8.3.0]\r\nmolecule, version 2.22\r\n```\r\n\r\nMolecule installation method (one of):\r\n\r\n- pip\r\n\r\nAnsible installation method (one of):\r\n\r\n- pip\r\n\r\n# Desired Behavior\r\n\r\nThe docker driver does not support the `devices` option available in the `docker_container` ansible module, so we can't test roles that are using fuse for example, except by using `privileged` option which is quite dangerous. Can you please support for this option?\r\n\n\"devices\" support in docker driver\n# Issue Type\r\n\r\n- Feature request\r\n\r\n# Molecule and Ansible details\r\n\r\n```\r\n$ ansible --version && molecule --version\r\nansible 2.8.5\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = ['/home/sylvain/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /home/sylvain/venv/molecule/lib/python3.6/site-packages/ansible\r\n executable location = /home/sylvain/venv/molecule/bin/ansible\r\n python version = 3.6.8 (default, Aug 20 2019, 17:12:48) [GCC 8.3.0]\r\nmolecule, version 2.22\r\n```\r\n\r\nMolecule installation method (one of):\r\n\r\n- pip\r\n\r\nAnsible installation method (one of):\r\n\r\n- pip\r\n\r\n# Desired Behavior\r\n\r\nThe docker driver does not support the `devices` option available in the `docker_container` ansible module, so we can't test roles that are using fuse for example, except by using `privileged` option which is quite dangerous. Can you please support for this option?\r\n\n", "before_files": [{"content": "# Copyright (c) 2015-2018 Cisco Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to\n# deal in the Software without restriction, including without limitation the\n# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n# sell copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nfrom __future__ import absolute_import\n\nimport os\n\nfrom molecule import logger\nfrom molecule.api import Driver\nfrom molecule.util import lru_cache\nfrom molecule.util import sysexit_with_message\n\nlog = logger.get_logger(__name__)\n\n\nclass Docker(Driver):\n \"\"\"\n The class responsible for managing `Docker`_ containers. `Docker`_ is\n the default driver used in Molecule.\n\n Molecule leverages Ansible's `docker_container`_ module, by mapping\n variables from ``molecule.yml`` into ``create.yml`` and ``destroy.yml``.\n\n .. _`docker_container`: https://docs.ansible.com/ansible/latest/docker_container_module.html\n .. _`Docker Security Configuration`: https://docs.docker.com/engine/reference/run/#security-configuration\n .. _`Docker daemon socket options`: https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option\n\n .. code-block:: yaml\n\n driver:\n name: docker\n platforms:\n - name: instance\n hostname: instance\n image: image_name:tag\n dockerfile: Dockerfile.j2\n pull: True|False\n pre_build_image: True|False\n registry:\n url: registry.example.com\n credentials:\n username: $USERNAME\n password: $PASSWORD\n email: [email protected]\n user: root\n override_command: True|False\n command: sleep infinity\n tty: True|False\n pid_mode: host\n privileged: True|False\n security_opts:\n - seccomp=unconfined\n volumes:\n - /sys/fs/cgroup:/sys/fs/cgroup:ro\n keep_volumes: True|False\n tmpfs:\n - /tmp\n - /run\n capabilities:\n - SYS_ADMIN\n sysctls:\n net.core.somaxconn: 1024\n net.ipv4.tcp_syncookies: 0\n exposed_ports:\n - 53/udp\n - 53/tcp\n published_ports:\n - 0.0.0.0:8053:53/udp\n - 0.0.0.0:8053:53/tcp\n ulimits:\n - nofile:262144:262144\n dns_servers:\n - 8.8.8.8\n etc_hosts: \"{'host1.example.com': '10.3.1.5'}\"\n networks:\n - name: foo\n - name: bar\n network_mode: host\n purge_networks: true\n docker_host: tcp://localhost:12376\n cacert_path: /foo/bar/ca.pem\n cert_path: /foo/bar/cert.pem\n key_path: /foo/bar/key.pem\n tls_verify: true\n env:\n FOO: bar\n restart_policy: on-failure\n restart_retries: 1\n buildargs:\n http_proxy: http://proxy.example.com:8080/\n\n If specifying the `CMD`_ directive in your ``Dockerfile.j2`` or consuming a\n built image which declares a ``CMD`` directive, then you must set\n ``override_command: False``. Otherwise, Molecule takes care to honour the\n value of the ``command`` key or uses the default of ``bash -c \"while true;\n do sleep 10000; done\"`` to run the container until it is provisioned.\n\n When attempting to utilize a container image with `systemd`_ as your init\n system inside the container to simulate a real machine, make sure to set\n the ``privileged``, ``volumes``, ``command``, and ``environment``\n values. An example using the ``centos:7`` image is below:\n\n .. note:: Do note that running containers in privileged mode is considerably\n less secure. For details, please reference `Docker Security\n Configuration`_\n\n .. note:: With the environment variable ``DOCKER_HOST`` the user can bind\n Molecule to a different `Docker`_ socket than the default\n ``unix:///var/run/docker.sock``. ``tcp``, ``fd`` and ``ssh``\n socket types can be configured. For details, please reference\n `Docker daemon socket options`_.\n\n .. 
code-block:: yaml\n\n platforms:\n - name: instance\n image: centos:7\n privileged: true\n volumes:\n - \"/sys/fs/cgroup:/sys/fs/cgroup:rw\"\n command: \"/usr/sbin/init\"\n tty: True\n environment:\n container: docker\n\n .. code-block:: bash\n\n $ pip install molecule[docker]\n\n When pulling from a private registry, it is the user's discretion to decide\n whether to use hard-code strings or environment variables for passing\n credentials to molecule.\n\n .. important::\n\n Hard-coded credentials in ``molecule.yml`` should be avoided, instead use\n `variable substitution`_.\n\n Provide a list of files Molecule will preserve, relative to the scenario\n ephemeral directory, after any ``destroy`` subcommand execution.\n\n .. code-block:: yaml\n\n driver:\n name: docker\n safe_files:\n - foo\n\n .. _`Docker`: https://www.docker.com\n .. _`systemd`: https://www.freedesktop.org/wiki/Software/systemd/\n .. _`CMD`: https://docs.docker.com/engine/reference/builder/#cmd\n \"\"\" # noqa\n\n def __init__(self, config=None):\n super(Docker, self).__init__(config)\n self._name = 'docker'\n\n @property\n def name(self):\n return self._name\n\n @name.setter\n def name(self, value):\n self._name = value\n\n @property\n def login_cmd_template(self):\n return (\n 'docker exec '\n '-e COLUMNS={columns} '\n '-e LINES={lines} '\n '-e TERM=bash '\n '-e TERM=xterm '\n '-ti {instance} bash'\n )\n\n @property\n def default_safe_files(self):\n return [os.path.join(self._config.scenario.ephemeral_directory, 'Dockerfile')]\n\n @property\n def default_ssh_connection_options(self):\n return []\n\n def login_options(self, instance_name):\n return {'instance': instance_name}\n\n def ansible_connection_options(self, instance_name):\n return {'ansible_connection': 'docker'}\n\n @lru_cache()\n def sanity_checks(self):\n \"\"\"Implement Docker driver sanity checks.\"\"\"\n\n log.info(\"Sanity checks: '{}'\".format(self._name))\n\n try:\n # ansible >= 2.8\n from ansible.module_utils.docker.common import HAS_DOCKER_PY\n except ImportError:\n # ansible < 2.8\n from ansible.module_utils.docker_common import HAS_DOCKER_PY\n\n if not HAS_DOCKER_PY:\n msg = (\n 'Missing Docker driver dependency. Please '\n \"install via 'molecule[docker]' or refer to \"\n 'your INSTALL.rst driver documentation file'\n )\n sysexit_with_message(msg)\n\n try:\n import docker\n import requests\n\n docker_client = docker.from_env()\n docker_client.ping()\n except requests.exceptions.ConnectionError:\n msg = (\n 'Unable to contact the Docker daemon. 
'\n 'Please refer to https://docs.docker.com/config/daemon/ '\n 'for managing the daemon'\n )\n sysexit_with_message(msg)\n", "path": "molecule/driver/docker.py"}], "after_files": [{"content": "# Copyright (c) 2015-2018 Cisco Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to\n# deal in the Software without restriction, including without limitation the\n# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n# sell copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nfrom __future__ import absolute_import\n\nimport os\n\nfrom molecule import logger\nfrom molecule.api import Driver\nfrom molecule.util import lru_cache\nfrom molecule.util import sysexit_with_message\n\nlog = logger.get_logger(__name__)\n\n\nclass Docker(Driver):\n \"\"\"\n The class responsible for managing `Docker`_ containers. `Docker`_ is\n the default driver used in Molecule.\n\n Molecule leverages Ansible's `docker_container`_ module, by mapping\n variables from ``molecule.yml`` into ``create.yml`` and ``destroy.yml``.\n\n .. _`docker_container`: https://docs.ansible.com/ansible/latest/docker_container_module.html\n .. _`Docker Security Configuration`: https://docs.docker.com/engine/reference/run/#security-configuration\n .. _`Docker daemon socket options`: https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option\n\n .. 
code-block:: yaml\n\n driver:\n name: docker\n platforms:\n - name: instance\n hostname: instance\n image: image_name:tag\n dockerfile: Dockerfile.j2\n pull: True|False\n pre_build_image: True|False\n registry:\n url: registry.example.com\n credentials:\n username: $USERNAME\n password: $PASSWORD\n email: [email protected]\n user: root\n override_command: True|False\n command: sleep infinity\n tty: True|False\n pid_mode: host\n privileged: True|False\n security_opts:\n - seccomp=unconfined\n devices:\n - /dev/fuse:/dev/fuse:rwm\n volumes:\n - /sys/fs/cgroup:/sys/fs/cgroup:ro\n keep_volumes: True|False\n tmpfs:\n - /tmp\n - /run\n capabilities:\n - SYS_ADMIN\n sysctls:\n net.core.somaxconn: 1024\n net.ipv4.tcp_syncookies: 0\n exposed_ports:\n - 53/udp\n - 53/tcp\n published_ports:\n - 0.0.0.0:8053:53/udp\n - 0.0.0.0:8053:53/tcp\n ulimits:\n - nofile:262144:262144\n dns_servers:\n - 8.8.8.8\n etc_hosts: \"{'host1.example.com': '10.3.1.5'}\"\n networks:\n - name: foo\n - name: bar\n network_mode: host\n purge_networks: true\n docker_host: tcp://localhost:12376\n cacert_path: /foo/bar/ca.pem\n cert_path: /foo/bar/cert.pem\n key_path: /foo/bar/key.pem\n tls_verify: true\n env:\n FOO: bar\n restart_policy: on-failure\n restart_retries: 1\n buildargs:\n http_proxy: http://proxy.example.com:8080/\n\n If specifying the `CMD`_ directive in your ``Dockerfile.j2`` or consuming a\n built image which declares a ``CMD`` directive, then you must set\n ``override_command: False``. Otherwise, Molecule takes care to honour the\n value of the ``command`` key or uses the default of ``bash -c \"while true;\n do sleep 10000; done\"`` to run the container until it is provisioned.\n\n When attempting to utilize a container image with `systemd`_ as your init\n system inside the container to simulate a real machine, make sure to set\n the ``privileged``, ``volumes``, ``command``, and ``environment``\n values. An example using the ``centos:7`` image is below:\n\n .. note:: Do note that running containers in privileged mode is considerably\n less secure. For details, please reference `Docker Security\n Configuration`_\n\n .. note:: With the environment variable ``DOCKER_HOST`` the user can bind\n Molecule to a different `Docker`_ socket than the default\n ``unix:///var/run/docker.sock``. ``tcp``, ``fd`` and ``ssh``\n socket types can be configured. For details, please reference\n `Docker daemon socket options`_.\n\n .. code-block:: yaml\n\n platforms:\n - name: instance\n image: centos:7\n privileged: true\n volumes:\n - \"/sys/fs/cgroup:/sys/fs/cgroup:rw\"\n command: \"/usr/sbin/init\"\n tty: True\n environment:\n container: docker\n\n .. code-block:: bash\n\n $ pip install molecule[docker]\n\n When pulling from a private registry, it is the user's discretion to decide\n whether to use hard-code strings or environment variables for passing\n credentials to molecule.\n\n .. important::\n\n Hard-coded credentials in ``molecule.yml`` should be avoided, instead use\n `variable substitution`_.\n\n Provide a list of files Molecule will preserve, relative to the scenario\n ephemeral directory, after any ``destroy`` subcommand execution.\n\n .. code-block:: yaml\n\n driver:\n name: docker\n safe_files:\n - foo\n\n .. _`Docker`: https://www.docker.com\n .. _`systemd`: https://www.freedesktop.org/wiki/Software/systemd/\n .. 
_`CMD`: https://docs.docker.com/engine/reference/builder/#cmd\n \"\"\" # noqa\n\n def __init__(self, config=None):\n super(Docker, self).__init__(config)\n self._name = 'docker'\n\n @property\n def name(self):\n return self._name\n\n @name.setter\n def name(self, value):\n self._name = value\n\n @property\n def login_cmd_template(self):\n return (\n 'docker exec '\n '-e COLUMNS={columns} '\n '-e LINES={lines} '\n '-e TERM=bash '\n '-e TERM=xterm '\n '-ti {instance} bash'\n )\n\n @property\n def default_safe_files(self):\n return [os.path.join(self._config.scenario.ephemeral_directory, 'Dockerfile')]\n\n @property\n def default_ssh_connection_options(self):\n return []\n\n def login_options(self, instance_name):\n return {'instance': instance_name}\n\n def ansible_connection_options(self, instance_name):\n return {'ansible_connection': 'docker'}\n\n @lru_cache()\n def sanity_checks(self):\n \"\"\"Implement Docker driver sanity checks.\"\"\"\n\n log.info(\"Sanity checks: '{}'\".format(self._name))\n\n try:\n # ansible >= 2.8\n from ansible.module_utils.docker.common import HAS_DOCKER_PY\n except ImportError:\n # ansible < 2.8\n from ansible.module_utils.docker_common import HAS_DOCKER_PY\n\n if not HAS_DOCKER_PY:\n msg = (\n 'Missing Docker driver dependency. Please '\n \"install via 'molecule[docker]' or refer to \"\n 'your INSTALL.rst driver documentation file'\n )\n sysexit_with_message(msg)\n\n try:\n import docker\n import requests\n\n docker_client = docker.from_env()\n docker_client.ping()\n except requests.exceptions.ConnectionError:\n msg = (\n 'Unable to contact the Docker daemon. '\n 'Please refer to https://docs.docker.com/config/daemon/ '\n 'for managing the daemon'\n )\n sysexit_with_message(msg)\n", "path": "molecule/driver/docker.py"}]}
| 3,318 | 114 |
gh_patches_debug_10176
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-7364
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
graph --table=file.html does not show outdated packages
They were previously shown in colors. I guess adding a column "outdated" will resolve this issue.
--- END ISSUE ---
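For illustration only (this is not Conan's real template or `results` API), a tiny Jinja2 sketch of the idea behind the request: whatever key is added to the header list becomes an extra column in the rendered HTML table.

```python
from jinja2 import Template

# Hypothetical header list and rows; the point is that including "outdated"
# in the headers is what brings the lost information back as a column.
headers = ["remote", "package_id", "outdated"]
rows = [
    {"remote": "conancenter", "package_id": "5ab84d6acfe1", "outdated": False},
    {"remote": "conancenter", "package_id": "91e0cae1a6a5", "outdated": True},
]

table = Template(
    "<tr>{% for h in headers %}<th>{{ h }}</th>{% endfor %}</tr>\n"
    "{% for row in rows %}"
    "<tr>{% for h in headers %}<td>{{ row[h] }}</td>{% endfor %}</tr>\n"
    "{% endfor %}"
)
print(table.render(headers=headers, rows=rows))
```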
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conans/assets/templates/search_table_html.py`
Content:
```
1
2 content = """
3 <!DOCTYPE html>
4 <html lang="en">
5 <head>
6 <title>Conan | {{ search.reference }}</title>
7 <link rel="stylesheet" type="text/css" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css"/>
8 <link rel="stylesheet" type="text/css" href="https://cdn.datatables.net/1.10.21/css/dataTables.bootstrap.min.css"/>
9 <style>
10 tr td {
11 white-space:nowrap;
12 }
13 </style>
14 </head>
15 <body>
16 <div class="container-fluid">
17 <h1>{{ search.reference }}</h1>
18 <div class="info">
19 <p>
20 Depending on your package_id_mode, any combination of settings, options and requirements
21 can give you a different packageID. Take into account that your configuration might be
22 different from the one used to generate the packages.
23 </p>
24 </div>
25
26 <table id="results" class="table table-striped table-bordered" style="width:100%">
27 <thead>
28 {%- set headers = results.get_headers(keys=['remote', 'package_id']) %}
29 {%- set headers2rows = headers.row(n_rows=2) %}
30 <tr>
31 {%- for category, subheaders in headers2rows %}
32 <th class="text-center" rowspan="{% if subheaders|length == 1 and not subheaders[0] %}2{% else %}1{% endif %}" colspan="{{ subheaders|length }}">
33 {{ category }}
34 </th>
35 {%- endfor %}
36 </tr>
37 <tr>
38 {%- for category, subheaders in headers2rows %}
39 {%- if subheaders|length != 1 or subheaders[0] != '' %}
40 {%- for subheader in subheaders %}
41 <th>{{ subheader|default(category, true) }}</th>
42 {%- endfor %}
43 {%- endif %}
44 {%- endfor %}
45 </tr>
46 </thead>
47 <tbody>
48 {%- for package in results.packages() %}
49 <tr>
50 {%- for item in package.row(headers) %}
51 <td>{{ item if item != None else ''}}</td>
52 {%- endfor %}
53 </tr>
54 {%- endfor %}
55 </tbody>
56 <tfoot>
57 <tr>
58 {%- for header in headers.row(n_rows=1) %}
59 <th>{{ header }}</th>
60 {%- endfor %}
61 </tr>
62 </tfoot>
63 </table>
64
65 <script type="text/javascript" src="https://code.jquery.com/jquery-3.3.1.js"></script>
66 <script type="text/javascript" src="https://cdn.datatables.net/v/dt/dt-1.10.20/datatables.min.js"></script>
67 <script type="text/javascript" src="https://cdn.datatables.net/1.10.21/js/dataTables.bootstrap.min.js"></script>
68 <script>
69 $(document).ready(function() {
70 // Setup - add a text input to each footer cell
71 $('#results tfoot th').each( function () {
72 var title = $(this).text();
73 $(this).html( '<input type="text" class="form-control filter-input" placeholder="Filter '+title+'" style="width:100%"/>' );
74 });
75
76 var table = $('#results').DataTable( {
77 "dom": "lrtip",
78 "lengthMenu": [[10, 25, 50, -1], [10, 25, 50, "All"]],
79 "pageLength": 10,
80 "columnDefs": [
81 { className: "cell_border_right", "targets": [ {{ headers.keys|length + headers.settings|length -1 }}, {{ headers.keys|length + headers.settings|length + headers.options|length -1 }} ] },
82 { className: "cell_border_right monospaced", "targets": [{{ headers.keys|length -1 }}, ]}
83 ]
84 });
85
86 // Apply the search
87 table.columns().every( function () {
88 var that = this;
89
90 $( 'input', this.footer() ).on( 'keyup change clear', function () {
91 if ( that.search() !== this.value ) {
92 that
93 .search( this.value )
94 .draw();
95 }
96 } );
97 } );
98 });
99 </script>
100 </div>
101 </body>
102 </html>
103 """
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conans/assets/templates/search_table_html.py b/conans/assets/templates/search_table_html.py
--- a/conans/assets/templates/search_table_html.py
+++ b/conans/assets/templates/search_table_html.py
@@ -25,7 +25,7 @@
<table id="results" class="table table-striped table-bordered" style="width:100%">
<thead>
- {%- set headers = results.get_headers(keys=['remote', 'package_id']) %}
+ {%- set headers = results.get_headers(keys=['remote', 'package_id', 'outdated']) %}
{%- set headers2rows = headers.row(n_rows=2) %}
<tr>
{%- for category, subheaders in headers2rows %}
|
{"golden_diff": "diff --git a/conans/assets/templates/search_table_html.py b/conans/assets/templates/search_table_html.py\n--- a/conans/assets/templates/search_table_html.py\n+++ b/conans/assets/templates/search_table_html.py\n@@ -25,7 +25,7 @@\n \n <table id=\"results\" class=\"table table-striped table-bordered\" style=\"width:100%\">\n <thead>\n- {%- set headers = results.get_headers(keys=['remote', 'package_id']) %}\n+ {%- set headers = results.get_headers(keys=['remote', 'package_id', 'outdated']) %}\n {%- set headers2rows = headers.row(n_rows=2) %}\n <tr>\n {%- for category, subheaders in headers2rows %}\n", "issue": "graph --table=file.html does not show outdated packages\nThey were previously shown in colors. I guess adding a column \"outdated\" will resolve this issue.\n", "before_files": [{"content": "\ncontent = \"\"\"\n<!DOCTYPE html>\n<html lang=\"en\">\n <head>\n <title>Conan | {{ search.reference }}</title>\n <link rel=\"stylesheet\" type=\"text/css\" href=\"https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css\"/>\n <link rel=\"stylesheet\" type=\"text/css\" href=\"https://cdn.datatables.net/1.10.21/css/dataTables.bootstrap.min.css\"/>\n <style>\n tr td {\n white-space:nowrap;\n }\n </style>\n </head>\n <body>\n <div class=\"container-fluid\">\n <h1>{{ search.reference }}</h1>\n <div class=\"info\">\n <p>\n Depending on your package_id_mode, any combination of settings, options and requirements\n can give you a different packageID. Take into account that your configuration might be\n different from the one used to generate the packages.\n </p>\n </div>\n\n <table id=\"results\" class=\"table table-striped table-bordered\" style=\"width:100%\">\n <thead>\n {%- set headers = results.get_headers(keys=['remote', 'package_id']) %}\n {%- set headers2rows = headers.row(n_rows=2) %}\n <tr>\n {%- for category, subheaders in headers2rows %}\n <th class=\"text-center\" rowspan=\"{% if subheaders|length == 1 and not subheaders[0] %}2{% else %}1{% endif %}\" colspan=\"{{ subheaders|length }}\">\n {{ category }}\n </th>\n {%- endfor %}\n </tr>\n <tr>\n {%- for category, subheaders in headers2rows %}\n {%- if subheaders|length != 1 or subheaders[0] != '' %}\n {%- for subheader in subheaders %}\n <th>{{ subheader|default(category, true) }}</th>\n {%- endfor %}\n {%- endif %}\n {%- endfor %}\n </tr>\n </thead>\n <tbody>\n {%- for package in results.packages() %}\n <tr>\n {%- for item in package.row(headers) %}\n <td>{{ item if item != None else ''}}</td>\n {%- endfor %}\n </tr>\n {%- endfor %}\n </tbody>\n <tfoot>\n <tr>\n {%- for header in headers.row(n_rows=1) %}\n <th>{{ header }}</th>\n {%- endfor %}\n </tr>\n </tfoot>\n </table>\n\n <script type=\"text/javascript\" src=\"https://code.jquery.com/jquery-3.3.1.js\"></script>\n <script type=\"text/javascript\" src=\"https://cdn.datatables.net/v/dt/dt-1.10.20/datatables.min.js\"></script>\n <script type=\"text/javascript\" src=\"https://cdn.datatables.net/1.10.21/js/dataTables.bootstrap.min.js\"></script>\n <script>\n $(document).ready(function() {\n // Setup - add a text input to each footer cell\n $('#results tfoot th').each( function () {\n var title = $(this).text();\n $(this).html( '<input type=\"text\" class=\"form-control filter-input\" placeholder=\"Filter '+title+'\" style=\"width:100%\"/>' );\n });\n\n var table = $('#results').DataTable( {\n \"dom\": \"lrtip\",\n \"lengthMenu\": [[10, 25, 50, -1], [10, 25, 50, \"All\"]],\n \"pageLength\": 10,\n \"columnDefs\": [\n { className: \"cell_border_right\", \"targets\": [ 
{{ headers.keys|length + headers.settings|length -1 }}, {{ headers.keys|length + headers.settings|length + headers.options|length -1 }} ] },\n { className: \"cell_border_right monospaced\", \"targets\": [{{ headers.keys|length -1 }}, ]}\n ]\n });\n\n // Apply the search\n table.columns().every( function () {\n var that = this;\n\n $( 'input', this.footer() ).on( 'keyup change clear', function () {\n if ( that.search() !== this.value ) {\n that\n .search( this.value )\n .draw();\n }\n } );\n } );\n });\n </script>\n </div>\n </body>\n</html>\n\"\"\"\n", "path": "conans/assets/templates/search_table_html.py"}], "after_files": [{"content": "\ncontent = \"\"\"\n<!DOCTYPE html>\n<html lang=\"en\">\n <head>\n <title>Conan | {{ search.reference }}</title>\n <link rel=\"stylesheet\" type=\"text/css\" href=\"https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css\"/>\n <link rel=\"stylesheet\" type=\"text/css\" href=\"https://cdn.datatables.net/1.10.21/css/dataTables.bootstrap.min.css\"/>\n <style>\n tr td {\n white-space:nowrap;\n }\n </style>\n </head>\n <body>\n <div class=\"container-fluid\">\n <h1>{{ search.reference }}</h1>\n <div class=\"info\">\n <p>\n Depending on your package_id_mode, any combination of settings, options and requirements\n can give you a different packageID. Take into account that your configuration might be\n different from the one used to generate the packages.\n </p>\n </div>\n\n <table id=\"results\" class=\"table table-striped table-bordered\" style=\"width:100%\">\n <thead>\n {%- set headers = results.get_headers(keys=['remote', 'package_id', 'outdated']) %}\n {%- set headers2rows = headers.row(n_rows=2) %}\n <tr>\n {%- for category, subheaders in headers2rows %}\n <th class=\"text-center\" rowspan=\"{% if subheaders|length == 1 and not subheaders[0] %}2{% else %}1{% endif %}\" colspan=\"{{ subheaders|length }}\">\n {{ category }}\n </th>\n {%- endfor %}\n </tr>\n <tr>\n {%- for category, subheaders in headers2rows %}\n {%- if subheaders|length != 1 or subheaders[0] != '' %}\n {%- for subheader in subheaders %}\n <th>{{ subheader|default(category, true) }}</th>\n {%- endfor %}\n {%- endif %}\n {%- endfor %}\n </tr>\n </thead>\n <tbody>\n {%- for package in results.packages() %}\n <tr>\n {%- for item in package.row(headers) %}\n <td>{{ item if item != None else ''}}</td>\n {%- endfor %}\n </tr>\n {%- endfor %}\n </tbody>\n <tfoot>\n <tr>\n {%- for header in headers.row(n_rows=1) %}\n <th>{{ header }}</th>\n {%- endfor %}\n </tr>\n </tfoot>\n </table>\n\n <script type=\"text/javascript\" src=\"https://code.jquery.com/jquery-3.3.1.js\"></script>\n <script type=\"text/javascript\" src=\"https://cdn.datatables.net/v/dt/dt-1.10.20/datatables.min.js\"></script>\n <script type=\"text/javascript\" src=\"https://cdn.datatables.net/1.10.21/js/dataTables.bootstrap.min.js\"></script>\n <script>\n $(document).ready(function() {\n // Setup - add a text input to each footer cell\n $('#results tfoot th').each( function () {\n var title = $(this).text();\n $(this).html( '<input type=\"text\" class=\"form-control filter-input\" placeholder=\"Filter '+title+'\" style=\"width:100%\"/>' );\n });\n\n var table = $('#results').DataTable( {\n \"dom\": \"lrtip\",\n \"lengthMenu\": [[10, 25, 50, -1], [10, 25, 50, \"All\"]],\n \"pageLength\": 10,\n \"columnDefs\": [\n { className: \"cell_border_right\", \"targets\": [ {{ headers.keys|length + headers.settings|length -1 }}, {{ headers.keys|length + headers.settings|length + headers.options|length -1 }} ] },\n { className: 
\"cell_border_right monospaced\", \"targets\": [{{ headers.keys|length -1 }}, ]}\n ]\n });\n\n // Apply the search\n table.columns().every( function () {\n var that = this;\n\n $( 'input', this.footer() ).on( 'keyup change clear', function () {\n if ( that.search() !== this.value ) {\n that\n .search( this.value )\n .draw();\n }\n } );\n } );\n });\n </script>\n </div>\n </body>\n</html>\n\"\"\"\n", "path": "conans/assets/templates/search_table_html.py"}]}
| 1,462 | 159 |
gh_patches_debug_35269
|
rasdani/github-patches
|
git_diff
|
Lightning-AI__torchmetrics-853
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Jaccard Index behaviour when num_classes is absent
The [jaccard index function](https://github.com/PyTorchLightning/metrics/blob/8fade87062a7b87c1e6429bbe1c4e0112b3713a5/torchmetrics/functional/classification/jaccard.py#L69) determines the number of classes automatically.
This can produce misleading results if the user forgets to specify `num_classes`. For example, when computing IoU for semantic segmentation, not every image contains every class, so determining the number of classes automatically will almost always yield the wrong count.
Additionally, the [warning](https://github.com/PyTorchLightning/metrics/blob/8fade87062a7b87c1e6429bbe1c4e0112b3713a5/torchmetrics/utilities/data.py#L156) that shows up makes it seem like something is wrong even when the user specifies the number of classes explicitly, which is not the case at all:
```python
/run/determined/pythonuserbase/lib/python3.7/site-packages/torchmetrics/utilities/prints.py:36: RuntimeWarning: You have set 11 number of classes which is different from predicted (1000) and target (10) number of classes
```
Would it be crazy if requiring `num_classes` were the default, and automatic determination of the number of classes were a special option that must be enabled explicitly?
--- END ISSUE ---
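To make the failure mode described above concrete, here is a minimal sketch (not part of the original report). It assumes the signature from the file below, where `num_classes` is optional and inferred from the data when omitted:

```python
import torch
from torchmetrics.functional import jaccard_index

# Toy segmentation labels: the task has 3 classes (0, 1, 2), but this
# particular image only contains classes 0 and 1.
target = torch.tensor([[0, 0], [1, 1]])
preds = torch.tensor([[0, 0], [1, 1]])

# num_classes omitted: it is inferred from the data as 2, class 2 is
# silently ignored, and the mean IoU comes out as a perfect 1.0.
print(jaccard_index(preds, target))

# num_classes given explicitly: the absent class 2 is scored with
# absent_score (default 0.0), so the mean IoU drops to roughly 0.667 --
# and the RuntimeWarning quoted above fires even though nothing is wrong.
print(jaccard_index(preds, target, num_classes=3))
```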
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchmetrics/functional/classification/jaccard.py`
Content:
```
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Optional
15
16 import torch
17 from torch import Tensor
18
19 from torchmetrics.functional.classification.confusion_matrix import _confusion_matrix_update
20 from torchmetrics.utilities.data import get_num_classes
21 from torchmetrics.utilities.distributed import reduce
22
23
24 def _jaccard_from_confmat(
25 confmat: Tensor,
26 num_classes: int,
27 ignore_index: Optional[int] = None,
28 absent_score: float = 0.0,
29 reduction: str = "elementwise_mean",
30 ) -> Tensor:
31 """Computes the intersection over union from confusion matrix.
32
33 Args:
34 confmat: Confusion matrix without normalization
35 num_classes: Number of classes for a given prediction and target tensor
36 ignore_index: optional int specifying a target class to ignore. If given, this class index does not contribute
37 to the returned score, regardless of reduction method.
38 absent_score: score to use for an individual class, if no instances of the class index were present in `pred`
39 AND no instances of the class index were present in `target`.
40 reduction: a method to reduce metric score over labels.
41
42 - ``'elementwise_mean'``: takes the mean (default)
43 - ``'sum'``: takes the sum
44 - ``'none'``: no reduction will be applied
45 """
46
47 # Remove the ignored class index from the scores.
48 if ignore_index is not None and 0 <= ignore_index < num_classes:
49 confmat[ignore_index] = 0.0
50
51 intersection = torch.diag(confmat)
52 union = confmat.sum(0) + confmat.sum(1) - intersection
53
54 # If this class is absent in both target AND pred (union == 0), then use the absent_score for this class.
55 scores = intersection.float() / union.float()
56 scores[union == 0] = absent_score
57
58 if ignore_index is not None and 0 <= ignore_index < num_classes:
59 scores = torch.cat(
60 [
61 scores[:ignore_index],
62 scores[ignore_index + 1 :],
63 ]
64 )
65
66 return reduce(scores, reduction=reduction)
67
68
69 def jaccard_index(
70 preds: Tensor,
71 target: Tensor,
72 ignore_index: Optional[int] = None,
73 absent_score: float = 0.0,
74 threshold: float = 0.5,
75 num_classes: Optional[int] = None,
76 reduction: str = "elementwise_mean",
77 ) -> Tensor:
78 r"""
79 Computes `Jaccard index`_
80
81 .. math:: J(A,B) = \frac{|A\cap B|}{|A\cup B|}
82
83 Where: :math:`A` and :math:`B` are both tensors of the same size,
84 containing integer class values. They may be subject to conversion from
85 input data (see description below).
86
87 Note that it is different from box IoU.
88
89 If preds and target are the same shape and preds is a float tensor, we use the ``self.threshold`` argument
90 to convert into integer labels. This is the case for binary and multi-label probabilities.
91
92 If pred has an extra dimension as in the case of multi-class scores we
93 perform an argmax on ``dim=1``.
94
95 Args:
96 preds: tensor containing predictions from model (probabilities, or labels) with shape ``[N, d1, d2, ...]``
97 target: tensor containing ground truth labels with shape ``[N, d1, d2, ...]``
98 ignore_index: optional int specifying a target class to ignore. If given,
99 this class index does not contribute to the returned score, regardless
100 of reduction method. Has no effect if given an int that is not in the
101 range [0, num_classes-1], where num_classes is either given or derived
102 from pred and target. By default, no index is ignored, and all classes are used.
103 absent_score: score to use for an individual class, if no instances of
104 the class index were present in `pred` AND no instances of the class
105 index were present in `target`. For example, if we have 3 classes,
106 [0, 0] for `pred`, and [0, 2] for `target`, then class 1 would be
107 assigned the `absent_score`.
108 threshold:
109 Threshold value for binary or multi-label probabilities.
110 num_classes:
111 Optionally specify the number of classes
112 reduction: a method to reduce metric score over labels.
113
114 - ``'elementwise_mean'``: takes the mean (default)
115 - ``'sum'``: takes the sum
116 - ``'none'``: no reduction will be applied
117
118 Return:
119 IoU score: Tensor containing single value if reduction is
120 'elementwise_mean', or number of classes if reduction is 'none'
121
122 Example:
123 >>> from torchmetrics.functional import jaccard_index
124 >>> target = torch.randint(0, 2, (10, 25, 25))
125 >>> pred = torch.tensor(target)
126 >>> pred[2:5, 7:13, 9:15] = 1 - pred[2:5, 7:13, 9:15]
127 >>> jaccard_index(pred, target)
128 tensor(0.9660)
129 """
130
131 num_classes = get_num_classes(preds=preds, target=target, num_classes=num_classes)
132 confmat = _confusion_matrix_update(preds, target, num_classes, threshold)
133 return _jaccard_from_confmat(confmat, num_classes, ignore_index, absent_score, reduction)
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/torchmetrics/functional/classification/jaccard.py b/torchmetrics/functional/classification/jaccard.py
--- a/torchmetrics/functional/classification/jaccard.py
+++ b/torchmetrics/functional/classification/jaccard.py
@@ -69,10 +69,10 @@
def jaccard_index(
preds: Tensor,
target: Tensor,
+ num_classes: int,
ignore_index: Optional[int] = None,
absent_score: float = 0.0,
threshold: float = 0.5,
- num_classes: Optional[int] = None,
reduction: str = "elementwise_mean",
) -> Tensor:
r"""
@@ -95,6 +95,7 @@
Args:
preds: tensor containing predictions from model (probabilities, or labels) with shape ``[N, d1, d2, ...]``
target: tensor containing ground truth labels with shape ``[N, d1, d2, ...]``
+ num_classes: Specify the number of classes
ignore_index: optional int specifying a target class to ignore. If given,
this class index does not contribute to the returned score, regardless
of reduction method. Has no effect if given an int that is not in the
@@ -107,8 +108,7 @@
assigned the `absent_score`.
threshold:
Threshold value for binary or multi-label probabilities.
- num_classes:
- Optionally specify the number of classes
+
reduction: a method to reduce metric score over labels.
- ``'elementwise_mean'``: takes the mean (default)
@@ -124,7 +124,7 @@
>>> target = torch.randint(0, 2, (10, 25, 25))
>>> pred = torch.tensor(target)
>>> pred[2:5, 7:13, 9:15] = 1 - pred[2:5, 7:13, 9:15]
- >>> jaccard_index(pred, target)
+ >>> jaccard_index(pred, target, num_classes=2)
tensor(0.9660)
"""
|
{"golden_diff": "diff --git a/torchmetrics/functional/classification/jaccard.py b/torchmetrics/functional/classification/jaccard.py\n--- a/torchmetrics/functional/classification/jaccard.py\n+++ b/torchmetrics/functional/classification/jaccard.py\n@@ -69,10 +69,10 @@\n def jaccard_index(\n preds: Tensor,\n target: Tensor,\n+ num_classes: int,\n ignore_index: Optional[int] = None,\n absent_score: float = 0.0,\n threshold: float = 0.5,\n- num_classes: Optional[int] = None,\n reduction: str = \"elementwise_mean\",\n ) -> Tensor:\n r\"\"\"\n@@ -95,6 +95,7 @@\n Args:\n preds: tensor containing predictions from model (probabilities, or labels) with shape ``[N, d1, d2, ...]``\n target: tensor containing ground truth labels with shape ``[N, d1, d2, ...]``\n+ num_classes: Specify the number of classes\n ignore_index: optional int specifying a target class to ignore. If given,\n this class index does not contribute to the returned score, regardless\n of reduction method. Has no effect if given an int that is not in the\n@@ -107,8 +108,7 @@\n assigned the `absent_score`.\n threshold:\n Threshold value for binary or multi-label probabilities.\n- num_classes:\n- Optionally specify the number of classes\n+\n reduction: a method to reduce metric score over labels.\n \n - ``'elementwise_mean'``: takes the mean (default)\n@@ -124,7 +124,7 @@\n >>> target = torch.randint(0, 2, (10, 25, 25))\n >>> pred = torch.tensor(target)\n >>> pred[2:5, 7:13, 9:15] = 1 - pred[2:5, 7:13, 9:15]\n- >>> jaccard_index(pred, target)\n+ >>> jaccard_index(pred, target, num_classes=2)\n tensor(0.9660)\n \"\"\"\n", "issue": "Jaccard Index behaviour when num_classes is absent\nThe [jaccard index function](https://github.com/PyTorchLightning/metrics/blob/8fade87062a7b87c1e6429bbe1c4e0112b3713a5/torchmetrics/functional/classification/jaccard.py#L69) determines the number of classes automatically.\r\nThis can result in strange results if the user forgets to specify `num_classes`. For example, in IoU determination in semantic segmentation, not all images have all classes. 
So determining them automatically will almost always lead to wrong number.\r\nAdditionally, the [warning](https://github.com/PyTorchLightning/metrics/blob/8fade87062a7b87c1e6429bbe1c4e0112b3713a5/torchmetrics/utilities/data.py#L156) that shows up makes it seem like something is wrong, when the user specifies the number of classes explicitly (but this is not the case at all):\r\n\r\n```python\r\n/run/determined/pythonuserbase/lib/python3.7/site-packages/torchmetrics/utilities/prints.py:36: RuntimeWarning: You have set 11 number of classes which is different from predicted (1000) and target (10) number of classes\r\n```\r\n\r\nWould it be crazy if `num_classes` requirement is default and the automatic determination of classes is a special option that must be enabled?\r\n\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Optional\n\nimport torch\nfrom torch import Tensor\n\nfrom torchmetrics.functional.classification.confusion_matrix import _confusion_matrix_update\nfrom torchmetrics.utilities.data import get_num_classes\nfrom torchmetrics.utilities.distributed import reduce\n\n\ndef _jaccard_from_confmat(\n confmat: Tensor,\n num_classes: int,\n ignore_index: Optional[int] = None,\n absent_score: float = 0.0,\n reduction: str = \"elementwise_mean\",\n) -> Tensor:\n \"\"\"Computes the intersection over union from confusion matrix.\n\n Args:\n confmat: Confusion matrix without normalization\n num_classes: Number of classes for a given prediction and target tensor\n ignore_index: optional int specifying a target class to ignore. 
If given, this class index does not contribute\n to the returned score, regardless of reduction method.\n absent_score: score to use for an individual class, if no instances of the class index were present in `pred`\n AND no instances of the class index were present in `target`.\n reduction: a method to reduce metric score over labels.\n\n - ``'elementwise_mean'``: takes the mean (default)\n - ``'sum'``: takes the sum\n - ``'none'``: no reduction will be applied\n \"\"\"\n\n # Remove the ignored class index from the scores.\n if ignore_index is not None and 0 <= ignore_index < num_classes:\n confmat[ignore_index] = 0.0\n\n intersection = torch.diag(confmat)\n union = confmat.sum(0) + confmat.sum(1) - intersection\n\n # If this class is absent in both target AND pred (union == 0), then use the absent_score for this class.\n scores = intersection.float() / union.float()\n scores[union == 0] = absent_score\n\n if ignore_index is not None and 0 <= ignore_index < num_classes:\n scores = torch.cat(\n [\n scores[:ignore_index],\n scores[ignore_index + 1 :],\n ]\n )\n\n return reduce(scores, reduction=reduction)\n\n\ndef jaccard_index(\n preds: Tensor,\n target: Tensor,\n ignore_index: Optional[int] = None,\n absent_score: float = 0.0,\n threshold: float = 0.5,\n num_classes: Optional[int] = None,\n reduction: str = \"elementwise_mean\",\n) -> Tensor:\n r\"\"\"\n Computes `Jaccard index`_\n\n .. math:: J(A,B) = \\frac{|A\\cap B|}{|A\\cup B|}\n\n Where: :math:`A` and :math:`B` are both tensors of the same size,\n containing integer class values. They may be subject to conversion from\n input data (see description below).\n\n Note that it is different from box IoU.\n\n If preds and target are the same shape and preds is a float tensor, we use the ``self.threshold`` argument\n to convert into integer labels. This is the case for binary and multi-label probabilities.\n\n If pred has an extra dimension as in the case of multi-class scores we\n perform an argmax on ``dim=1``.\n\n Args:\n preds: tensor containing predictions from model (probabilities, or labels) with shape ``[N, d1, d2, ...]``\n target: tensor containing ground truth labels with shape ``[N, d1, d2, ...]``\n ignore_index: optional int specifying a target class to ignore. If given,\n this class index does not contribute to the returned score, regardless\n of reduction method. Has no effect if given an int that is not in the\n range [0, num_classes-1], where num_classes is either given or derived\n from pred and target. By default, no index is ignored, and all classes are used.\n absent_score: score to use for an individual class, if no instances of\n the class index were present in `pred` AND no instances of the class\n index were present in `target`. 
For example, if we have 3 classes,\n [0, 0] for `pred`, and [0, 2] for `target`, then class 1 would be\n assigned the `absent_score`.\n threshold:\n Threshold value for binary or multi-label probabilities.\n num_classes:\n Optionally specify the number of classes\n reduction: a method to reduce metric score over labels.\n\n - ``'elementwise_mean'``: takes the mean (default)\n - ``'sum'``: takes the sum\n - ``'none'``: no reduction will be applied\n\n Return:\n IoU score: Tensor containing single value if reduction is\n 'elementwise_mean', or number of classes if reduction is 'none'\n\n Example:\n >>> from torchmetrics.functional import jaccard_index\n >>> target = torch.randint(0, 2, (10, 25, 25))\n >>> pred = torch.tensor(target)\n >>> pred[2:5, 7:13, 9:15] = 1 - pred[2:5, 7:13, 9:15]\n >>> jaccard_index(pred, target)\n tensor(0.9660)\n \"\"\"\n\n num_classes = get_num_classes(preds=preds, target=target, num_classes=num_classes)\n confmat = _confusion_matrix_update(preds, target, num_classes, threshold)\n return _jaccard_from_confmat(confmat, num_classes, ignore_index, absent_score, reduction)\n", "path": "torchmetrics/functional/classification/jaccard.py"}], "after_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Optional\n\nimport torch\nfrom torch import Tensor\n\nfrom torchmetrics.functional.classification.confusion_matrix import _confusion_matrix_update\nfrom torchmetrics.utilities.data import get_num_classes\nfrom torchmetrics.utilities.distributed import reduce\n\n\ndef _jaccard_from_confmat(\n confmat: Tensor,\n num_classes: int,\n ignore_index: Optional[int] = None,\n absent_score: float = 0.0,\n reduction: str = \"elementwise_mean\",\n) -> Tensor:\n \"\"\"Computes the intersection over union from confusion matrix.\n\n Args:\n confmat: Confusion matrix without normalization\n num_classes: Number of classes for a given prediction and target tensor\n ignore_index: optional int specifying a target class to ignore. 
If given, this class index does not contribute\n to the returned score, regardless of reduction method.\n absent_score: score to use for an individual class, if no instances of the class index were present in `pred`\n AND no instances of the class index were present in `target`.\n reduction: a method to reduce metric score over labels.\n\n - ``'elementwise_mean'``: takes the mean (default)\n - ``'sum'``: takes the sum\n - ``'none'``: no reduction will be applied\n \"\"\"\n\n # Remove the ignored class index from the scores.\n if ignore_index is not None and 0 <= ignore_index < num_classes:\n confmat[ignore_index] = 0.0\n\n intersection = torch.diag(confmat)\n union = confmat.sum(0) + confmat.sum(1) - intersection\n\n # If this class is absent in both target AND pred (union == 0), then use the absent_score for this class.\n scores = intersection.float() / union.float()\n scores[union == 0] = absent_score\n\n if ignore_index is not None and 0 <= ignore_index < num_classes:\n scores = torch.cat(\n [\n scores[:ignore_index],\n scores[ignore_index + 1 :],\n ]\n )\n\n return reduce(scores, reduction=reduction)\n\n\ndef jaccard_index(\n preds: Tensor,\n target: Tensor,\n num_classes: int,\n ignore_index: Optional[int] = None,\n absent_score: float = 0.0,\n threshold: float = 0.5,\n reduction: str = \"elementwise_mean\",\n) -> Tensor:\n r\"\"\"\n Computes `Jaccard index`_\n\n .. math:: J(A,B) = \\frac{|A\\cap B|}{|A\\cup B|}\n\n Where: :math:`A` and :math:`B` are both tensors of the same size,\n containing integer class values. They may be subject to conversion from\n input data (see description below).\n\n Note that it is different from box IoU.\n\n If preds and target are the same shape and preds is a float tensor, we use the ``self.threshold`` argument\n to convert into integer labels. This is the case for binary and multi-label probabilities.\n\n If pred has an extra dimension as in the case of multi-class scores we\n perform an argmax on ``dim=1``.\n\n Args:\n preds: tensor containing predictions from model (probabilities, or labels) with shape ``[N, d1, d2, ...]``\n target: tensor containing ground truth labels with shape ``[N, d1, d2, ...]``\n num_classes: Specify the number of classes\n ignore_index: optional int specifying a target class to ignore. If given,\n this class index does not contribute to the returned score, regardless\n of reduction method. Has no effect if given an int that is not in the\n range [0, num_classes-1], where num_classes is either given or derived\n from pred and target. By default, no index is ignored, and all classes are used.\n absent_score: score to use for an individual class, if no instances of\n the class index were present in `pred` AND no instances of the class\n index were present in `target`. 
For example, if we have 3 classes,\n [0, 0] for `pred`, and [0, 2] for `target`, then class 1 would be\n assigned the `absent_score`.\n threshold:\n Threshold value for binary or multi-label probabilities.\n\n reduction: a method to reduce metric score over labels.\n\n - ``'elementwise_mean'``: takes the mean (default)\n - ``'sum'``: takes the sum\n - ``'none'``: no reduction will be applied\n\n Return:\n IoU score: Tensor containing single value if reduction is\n 'elementwise_mean', or number of classes if reduction is 'none'\n\n Example:\n >>> from torchmetrics.functional import jaccard_index\n >>> target = torch.randint(0, 2, (10, 25, 25))\n >>> pred = torch.tensor(target)\n >>> pred[2:5, 7:13, 9:15] = 1 - pred[2:5, 7:13, 9:15]\n >>> jaccard_index(pred, target, num_classes=2)\n tensor(0.9660)\n \"\"\"\n\n num_classes = get_num_classes(preds=preds, target=target, num_classes=num_classes)\n confmat = _confusion_matrix_update(preds, target, num_classes, threshold)\n return _jaccard_from_confmat(confmat, num_classes, ignore_index, absent_score, reduction)\n", "path": "torchmetrics/functional/classification/jaccard.py"}]}
| 2,232 | 485 |
gh_patches_debug_24631
|
rasdani/github-patches
|
git_diff
|
pypa__pipenv-5401
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
using conda python to create pipenv adds both python_version and python_full_version to pipfile
Hello,
If I use a miniconda environment's Python to create a pipenv environment, it adds BOTH a python_version and a python_full_version (e.g. 3.9 and 3.9.13) to the Pipfile, and that throws an error.
System details:
I am using Ubuntu 22.04 via WSL.
System python is: 3.10.6
Pip version is: 22.2.2
Pipenv installed via pip (outside of any environment): 2022.10.9
Conda version is: 22.9.0
All are the latest right now
Steps to reproduce the error:
(mlzoomcamp is a conda environment with Python 3.9.13; I also used a conda env with Python 3.10 and the same issue persisted)
```
(base) mona@Memoona-PC:~$ pipenv --python=/home/mona/miniconda3/envs/mlzoomcamp/bin/python
# runs fine, but when I install something:
(base) mona@Memoona-PC:~/extra$ pipenv install numpy
```
It installs numpy correctly, but then throws this error:
```
Installing numpy...
Adding numpy to Pipfile's [packages]...
✔ Installation Succeeded
Pipfile.lock not found, creating...
Traceback (most recent call last):
File "/home/mona/.local/bin/pipenv", line 8, in <module>
sys.exit(cli())
File "/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/click/core.py", line 1128, in __call__
return self.main(*args, **kwargs)
File "/home/mona/.local/lib/python3.10/site-packages/pipenv/cli/options.py", line 57, in main
return super().main(*args, **kwargs, windows_expand_args=False)
File "/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/click/core.py", line 1053, in main
rv = self.invoke(ctx)
File "/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/click/core.py", line 1659, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/click/core.py", line 1395, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
File "/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/click/decorators.py", line 84, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
File "/home/mona/.local/lib/python3.10/site-packages/pipenv/cli/command.py", line 236, in install
do_install(
File "/home/mona/.local/lib/python3.10/site-packages/pipenv/core.py", line 2389, in do_install
do_init(
File "/home/mona/.local/lib/python3.10/site-packages/pipenv/core.py", line 1303, in do_init
do_lock(
File "/home/mona/.local/lib/python3.10/site-packages/pipenv/core.py", line 1070, in do_lock
lockfile = project._lockfile(categories=lockfile_categories)
File "/home/mona/.local/lib/python3.10/site-packages/pipenv/project.py", line 581, in _lockfile
plette.Pipfile.load(pf), categories=categories
File "/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/plette/pipfiles.py", line 59, in load
return cls(data)
File "/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/plette/models/base.py", line 49, in __init__
self.validate(data)
File "/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/plette/pipfiles.py", line 43, in validate
klass.validate(data[key])
File "/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/plette/models/base.py", line 81, in validate
return validate(cls, data)
File "/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/plette/models/base.py", line 38, in validate
raise ValidationError(data, v)
pipenv.vendor.plette.models.base.ValidationError: {'python_version': '3.9', 'python_full_version': '3.9.13'}
python_full_version: 'python_version' must not be present with 'python_full_version'
python_version: 'python_full_version' must not be present with 'python_version'
```
The pipenv file at this time looks like this:
```
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
numpy = "*"
[dev-packages]
[requires]
python_version = "3.9"
python_full_version = "3.9.13"
```
I looked at the pipfile before the install command, and it has already added the 2 versions as soon as the environment was created.
I did find a fix, to manually remove the python_version line from pipfile.
Also, if I use the system python to create an environment, this issue doesn't happen, so this is most likely a conda issue. Do I have to manually remove the python_version each time from the pipfile?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pipenv/vendor/plette/models/sections.py`
Content:
```
1 from .base import DataView, DataViewMapping, DataViewSequence
2 from .hashes import Hash
3 from .packages import Package
4 from .scripts import Script
5 from .sources import Source
6
7
8 class PackageCollection(DataViewMapping):
9 item_class = Package
10
11
12 class ScriptCollection(DataViewMapping):
13 item_class = Script
14
15
16 class SourceCollection(DataViewSequence):
17 item_class = Source
18
19
20 class Requires(DataView):
21 """Representation of the `[requires]` section in a Pipfile.
22 """
23 __SCHEMA__ = {
24 "python_version": {
25 "type": "string",
26 "excludes": ["python_full_version"],
27 },
28 "python_full_version": {
29 "type": "string",
30 "excludes": ["python_version"],
31 },
32 }
33
34 @property
35 def python_version(self):
36 try:
37 return self._data["python_version"]
38 except KeyError:
39 raise AttributeError("python_version")
40
41 @property
42 def python_full_version(self):
43 try:
44 return self._data["python_full_version"]
45 except KeyError:
46 raise AttributeError("python_full_version")
47
48
49 META_SECTIONS = {
50 "hash": Hash,
51 "requires": Requires,
52 "sources": SourceCollection,
53 }
54
55
56 class Meta(DataView):
57 """Representation of the `_meta` section in a Pipfile.lock.
58 """
59 __SCHEMA__ = {
60 "hash": {"type": "dict", "required": True},
61 "pipfile-spec": {"type": "integer", "required": True, "min": 0},
62 "requires": {"type": "dict", "required": True},
63 "sources": {"type": "list", "required": True},
64 }
65
66 @classmethod
67 def validate(cls, data):
68 super(Meta, cls).validate(data)
69 for key, klass in META_SECTIONS.items():
70 klass.validate(data[key])
71
72 def __getitem__(self, key):
73 value = super(Meta, self).__getitem__(key)
74 try:
75 return META_SECTIONS[key](value)
76 except KeyError:
77 return value
78
79 def __setitem__(self, key, value):
80 if isinstance(value, DataView):
81 self._data[key] = value._data
82 else:
83 self._data[key] = value
84
85 @property
86 def hash_(self):
87 return self["hash"]
88
89 @hash_.setter
90 def hash_(self, value):
91 self["hash"] = value
92
93 @property
94 def hash(self):
95 return self["hash"]
96
97 @hash.setter
98 def hash(self, value):
99 self["hash"] = value
100
101 @property
102 def pipfile_spec(self):
103 return self["pipfile-spec"]
104
105 @pipfile_spec.setter
106 def pipfile_spec(self, value):
107 self["pipfile-spec"] = value
108
109 @property
110 def requires(self):
111 return self["requires"]
112
113 @requires.setter
114 def requires(self, value):
115 self["requires"] = value
116
117 @property
118 def sources(self):
119 return self["sources"]
120
121 @sources.setter
122 def sources(self, value):
123 self["sources"] = value
124
```
Path: `pipenv/vendor/plette/__init__.py`
Content:
```
1 __all__ = [
2 "__version__",
3 "Lockfile", "Pipfile",
4 ]
5
6 __version__ = '0.4.1'
7
8 from .lockfiles import Lockfile
9 from .pipfiles import Pipfile
10
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pipenv/vendor/plette/__init__.py b/pipenv/vendor/plette/__init__.py
--- a/pipenv/vendor/plette/__init__.py
+++ b/pipenv/vendor/plette/__init__.py
@@ -3,7 +3,7 @@
"Lockfile", "Pipfile",
]
-__version__ = '0.4.1'
+__version__ = '0.4.2'
from .lockfiles import Lockfile
from .pipfiles import Pipfile
diff --git a/pipenv/vendor/plette/models/sections.py b/pipenv/vendor/plette/models/sections.py
--- a/pipenv/vendor/plette/models/sections.py
+++ b/pipenv/vendor/plette/models/sections.py
@@ -18,16 +18,14 @@
class Requires(DataView):
- """Representation of the `[requires]` section in a Pipfile.
- """
+ """Representation of the `[requires]` section in a Pipfile."""
+
__SCHEMA__ = {
"python_version": {
"type": "string",
- "excludes": ["python_full_version"],
},
"python_full_version": {
"type": "string",
- "excludes": ["python_version"],
},
}
@@ -54,8 +52,8 @@
class Meta(DataView):
- """Representation of the `_meta` section in a Pipfile.lock.
- """
+ """Representation of the `_meta` section in a Pipfile.lock."""
+
__SCHEMA__ = {
"hash": {"type": "dict", "required": True},
"pipfile-spec": {"type": "integer", "required": True, "min": 0},
|
{"golden_diff": "diff --git a/pipenv/vendor/plette/__init__.py b/pipenv/vendor/plette/__init__.py\n--- a/pipenv/vendor/plette/__init__.py\n+++ b/pipenv/vendor/plette/__init__.py\n@@ -3,7 +3,7 @@\n \"Lockfile\", \"Pipfile\",\n ]\n \n-__version__ = '0.4.1'\n+__version__ = '0.4.2'\n \n from .lockfiles import Lockfile\n from .pipfiles import Pipfile\ndiff --git a/pipenv/vendor/plette/models/sections.py b/pipenv/vendor/plette/models/sections.py\n--- a/pipenv/vendor/plette/models/sections.py\n+++ b/pipenv/vendor/plette/models/sections.py\n@@ -18,16 +18,14 @@\n \n \n class Requires(DataView):\n- \"\"\"Representation of the `[requires]` section in a Pipfile.\n- \"\"\"\n+ \"\"\"Representation of the `[requires]` section in a Pipfile.\"\"\"\n+\n __SCHEMA__ = {\n \"python_version\": {\n \"type\": \"string\",\n- \"excludes\": [\"python_full_version\"],\n },\n \"python_full_version\": {\n \"type\": \"string\",\n- \"excludes\": [\"python_version\"],\n },\n }\n \n@@ -54,8 +52,8 @@\n \n \n class Meta(DataView):\n- \"\"\"Representation of the `_meta` section in a Pipfile.lock.\n- \"\"\"\n+ \"\"\"Representation of the `_meta` section in a Pipfile.lock.\"\"\"\n+\n __SCHEMA__ = {\n \"hash\": {\"type\": \"dict\", \"required\": True},\n \"pipfile-spec\": {\"type\": \"integer\", \"required\": True, \"min\": 0},\n", "issue": "using conda python to create pipenv adds both python_version and python_full_version to pipfile\nHello, \r\n\r\nIf I use a miniconda environment's python for creating a pipenv environment, it adds BOTH, a python_version and python_full_version (e.g. 3.9 and 3.9.13) to the pipfile, and that throws an error.\r\n\r\nSystem details:\r\n\r\nI am using Ubuntu 22.04 via WSL. \r\nSystem python is: 3.10.6\r\nPip version is: 22.2.2\r\nPipenv installed via pip (outside of any environment): 2022.10.9\r\nConda version is: 22.9.0\r\nAll are the latest right now\r\n\r\n\r\nSteps to reproduce the error:\r\n\r\n(mlzoomcamp is a conda environment with python 3.9.13 (but I also used a conda env with python 3.10 and the same issue persisted)) \r\n\r\n```\r\n(base) mona@Memoona-PC:~$ pipenv --python=/home/mona/miniconda3/envs/mlzoomcamp/bin/python\r\n# runs fine, but when I install something:\r\n(base) mona@Memoona-PC:~/extra$ pipenv install numpy\r\n```\r\n\r\nIt installs numpy correctly, but then throws this error: \r\n\r\n```\r\nInstalling numpy...\r\nAdding numpy to Pipfile's [packages]...\r\n\u2714 Installation Succeeded\r\nPipfile.lock not found, creating...\r\nTraceback (most recent call last):\r\n File \"/home/mona/.local/bin/pipenv\", line 8, in <module>\r\n sys.exit(cli())\r\n File \"/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/click/core.py\", line 1128, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/home/mona/.local/lib/python3.10/site-packages/pipenv/cli/options.py\", line 57, in main\r\n return super().main(*args, **kwargs, windows_expand_args=False)\r\n File \"/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/click/core.py\", line 1053, in main\r\n rv = self.invoke(ctx)\r\n File \"/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/click/core.py\", line 1659, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/click/core.py\", line 1395, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/click/core.py\", line 754, in invoke\r\n return __callback(*args, **kwargs)\r\n File 
\"/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/click/decorators.py\", line 84, in new_func\r\n return ctx.invoke(f, obj, *args, **kwargs)\r\n File \"/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/click/core.py\", line 754, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/home/mona/.local/lib/python3.10/site-packages/pipenv/cli/command.py\", line 236, in install\r\n do_install(\r\n File \"/home/mona/.local/lib/python3.10/site-packages/pipenv/core.py\", line 2389, in do_install\r\n do_init(\r\n File \"/home/mona/.local/lib/python3.10/site-packages/pipenv/core.py\", line 1303, in do_init\r\n do_lock(\r\n File \"/home/mona/.local/lib/python3.10/site-packages/pipenv/core.py\", line 1070, in do_lock\r\n lockfile = project._lockfile(categories=lockfile_categories)\r\n File \"/home/mona/.local/lib/python3.10/site-packages/pipenv/project.py\", line 581, in _lockfile\r\n plette.Pipfile.load(pf), categories=categories\r\n File \"/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/plette/pipfiles.py\", line 59, in load\r\n return cls(data)\r\n File \"/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/plette/models/base.py\", line 49, in __init__\r\n self.validate(data)\r\n File \"/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/plette/pipfiles.py\", line 43, in validate\r\n klass.validate(data[key])\r\n File \"/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/plette/models/base.py\", line 81, in validate\r\n return validate(cls, data)\r\n File \"/home/mona/.local/lib/python3.10/site-packages/pipenv/vendor/plette/models/base.py\", line 38, in validate\r\n raise ValidationError(data, v)\r\npipenv.vendor.plette.models.base.ValidationError: {'python_version': '3.9', 'python_full_version': '3.9.13'}\r\npython_full_version: 'python_version' must not be present with 'python_full_version'\r\npython_version: 'python_full_version' must not be present with 'python_version'\r\n```\r\n\r\nThe pipenv file at this time looks like this:\r\n\r\n```\r\n[[source]]\r\nurl = \"https://pypi.org/simple\"\r\nverify_ssl = true\r\nname = \"pypi\"\r\n\r\n[packages]\r\nnumpy = \"*\"\r\n\r\n[dev-packages]\r\n\r\n[requires]\r\npython_version = \"3.9\"\r\npython_full_version = \"3.9.13\"\r\n```\r\n\r\nI looked at the pipfile before the install command, and it has already added the 2 versions as soon as the environment was created.\r\n\r\nI did find a fix, to manually remove the python_version line from pipfile. \r\n\r\nAlso, if I use the system python to create an environment, this issue doesn't happen, so this is most likely a conda issue. Do I have to manually remove the python_version each time from the pipfile? 
\r\n\r\n\n", "before_files": [{"content": "from .base import DataView, DataViewMapping, DataViewSequence\nfrom .hashes import Hash\nfrom .packages import Package\nfrom .scripts import Script\nfrom .sources import Source\n\n\nclass PackageCollection(DataViewMapping):\n item_class = Package\n\n\nclass ScriptCollection(DataViewMapping):\n item_class = Script\n\n\nclass SourceCollection(DataViewSequence):\n item_class = Source\n\n\nclass Requires(DataView):\n \"\"\"Representation of the `[requires]` section in a Pipfile.\n \"\"\"\n __SCHEMA__ = {\n \"python_version\": {\n \"type\": \"string\",\n \"excludes\": [\"python_full_version\"],\n },\n \"python_full_version\": {\n \"type\": \"string\",\n \"excludes\": [\"python_version\"],\n },\n }\n\n @property\n def python_version(self):\n try:\n return self._data[\"python_version\"]\n except KeyError:\n raise AttributeError(\"python_version\")\n\n @property\n def python_full_version(self):\n try:\n return self._data[\"python_full_version\"]\n except KeyError:\n raise AttributeError(\"python_full_version\")\n\n\nMETA_SECTIONS = {\n \"hash\": Hash,\n \"requires\": Requires,\n \"sources\": SourceCollection,\n}\n\n\nclass Meta(DataView):\n \"\"\"Representation of the `_meta` section in a Pipfile.lock.\n \"\"\"\n __SCHEMA__ = {\n \"hash\": {\"type\": \"dict\", \"required\": True},\n \"pipfile-spec\": {\"type\": \"integer\", \"required\": True, \"min\": 0},\n \"requires\": {\"type\": \"dict\", \"required\": True},\n \"sources\": {\"type\": \"list\", \"required\": True},\n }\n\n @classmethod\n def validate(cls, data):\n super(Meta, cls).validate(data)\n for key, klass in META_SECTIONS.items():\n klass.validate(data[key])\n\n def __getitem__(self, key):\n value = super(Meta, self).__getitem__(key)\n try:\n return META_SECTIONS[key](value)\n except KeyError:\n return value\n\n def __setitem__(self, key, value):\n if isinstance(value, DataView):\n self._data[key] = value._data\n else:\n self._data[key] = value\n\n @property\n def hash_(self):\n return self[\"hash\"]\n\n @hash_.setter\n def hash_(self, value):\n self[\"hash\"] = value\n\n @property\n def hash(self):\n return self[\"hash\"]\n\n @hash.setter\n def hash(self, value):\n self[\"hash\"] = value\n\n @property\n def pipfile_spec(self):\n return self[\"pipfile-spec\"]\n\n @pipfile_spec.setter\n def pipfile_spec(self, value):\n self[\"pipfile-spec\"] = value\n\n @property\n def requires(self):\n return self[\"requires\"]\n\n @requires.setter\n def requires(self, value):\n self[\"requires\"] = value\n\n @property\n def sources(self):\n return self[\"sources\"]\n\n @sources.setter\n def sources(self, value):\n self[\"sources\"] = value\n", "path": "pipenv/vendor/plette/models/sections.py"}, {"content": "__all__ = [\n \"__version__\",\n \"Lockfile\", \"Pipfile\",\n]\n\n__version__ = '0.4.1'\n\nfrom .lockfiles import Lockfile\nfrom .pipfiles import Pipfile\n", "path": "pipenv/vendor/plette/__init__.py"}], "after_files": [{"content": "from .base import DataView, DataViewMapping, DataViewSequence\nfrom .hashes import Hash\nfrom .packages import Package\nfrom .scripts import Script\nfrom .sources import Source\n\n\nclass PackageCollection(DataViewMapping):\n item_class = Package\n\n\nclass ScriptCollection(DataViewMapping):\n item_class = Script\n\n\nclass SourceCollection(DataViewSequence):\n item_class = Source\n\n\nclass Requires(DataView):\n \"\"\"Representation of the `[requires]` section in a Pipfile.\"\"\"\n\n __SCHEMA__ = {\n \"python_version\": {\n \"type\": \"string\",\n },\n \"python_full_version\": 
{\n \"type\": \"string\",\n },\n }\n\n @property\n def python_version(self):\n try:\n return self._data[\"python_version\"]\n except KeyError:\n raise AttributeError(\"python_version\")\n\n @property\n def python_full_version(self):\n try:\n return self._data[\"python_full_version\"]\n except KeyError:\n raise AttributeError(\"python_full_version\")\n\n\nMETA_SECTIONS = {\n \"hash\": Hash,\n \"requires\": Requires,\n \"sources\": SourceCollection,\n}\n\n\nclass Meta(DataView):\n \"\"\"Representation of the `_meta` section in a Pipfile.lock.\"\"\"\n\n __SCHEMA__ = {\n \"hash\": {\"type\": \"dict\", \"required\": True},\n \"pipfile-spec\": {\"type\": \"integer\", \"required\": True, \"min\": 0},\n \"requires\": {\"type\": \"dict\", \"required\": True},\n \"sources\": {\"type\": \"list\", \"required\": True},\n }\n\n @classmethod\n def validate(cls, data):\n super(Meta, cls).validate(data)\n for key, klass in META_SECTIONS.items():\n klass.validate(data[key])\n\n def __getitem__(self, key):\n value = super(Meta, self).__getitem__(key)\n try:\n return META_SECTIONS[key](value)\n except KeyError:\n return value\n\n def __setitem__(self, key, value):\n if isinstance(value, DataView):\n self._data[key] = value._data\n else:\n self._data[key] = value\n\n @property\n def hash_(self):\n return self[\"hash\"]\n\n @hash_.setter\n def hash_(self, value):\n self[\"hash\"] = value\n\n @property\n def hash(self):\n return self[\"hash\"]\n\n @hash.setter\n def hash(self, value):\n self[\"hash\"] = value\n\n @property\n def pipfile_spec(self):\n return self[\"pipfile-spec\"]\n\n @pipfile_spec.setter\n def pipfile_spec(self, value):\n self[\"pipfile-spec\"] = value\n\n @property\n def requires(self):\n return self[\"requires\"]\n\n @requires.setter\n def requires(self, value):\n self[\"requires\"] = value\n\n @property\n def sources(self):\n return self[\"sources\"]\n\n @sources.setter\n def sources(self, value):\n self[\"sources\"] = value\n", "path": "pipenv/vendor/plette/models/sections.py"}, {"content": "__all__ = [\n \"__version__\",\n \"Lockfile\", \"Pipfile\",\n]\n\n__version__ = '0.4.2'\n\nfrom .lockfiles import Lockfile\nfrom .pipfiles import Pipfile\n", "path": "pipenv/vendor/plette/__init__.py"}]}
| 2,681 | 392 |
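The plette patch above drops the `excludes` rules that made `python_version` and `python_full_version` mutually exclusive — exactly the pair pipenv writes when pointed at a conda interpreter. A minimal sketch of the before/after validation behaviour (plain Python standing in for plette's vendored validator, purely for illustration):

```python
# Data pipenv writes to [requires] when given a conda interpreter:
requires = {"python_version": "3.9", "python_full_version": "3.9.13"}

# Old schema semantics: the two keys exclude each other, so loading the
# Pipfile raises during validation (the traceback in the issue above).
def validate_old(data):
    if "python_version" in data and "python_full_version" in data:
        raise ValueError(
            "'python_version' must not be present with 'python_full_version'"
        )

# Patched semantics: both keys are plain optional strings, so the same
# Pipfile now validates.
def validate_new(data):
    for key, value in data.items():
        if not isinstance(value, str):
            raise ValueError("%s must be a string" % key)

validate_new(requires)    # passes
# validate_old(requires)  # raises ValueError, reproducing the reported error
```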
gh_patches_debug_7461
|
rasdani/github-patches
|
git_diff
|
conda__conda-5357
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
latest 4.3.x (4.3.18-37-g79f86d4c) not picking up conda-build subcommands
From conda-build's test suite:
```
________________________________ test_skeleton_pypi ________________________________
Traceback (most recent call last):
File "/home/dev/code/conda-build/tests/test_published_examples.py", line 15, in test_skeleton_pypi
check_call_env(cmd.split())
File "/home/dev/code/conda-build/conda_build/utils.py", line 670, in check_call_env
return _func_defaulting_env_to_os_environ(subprocess.check_call, *popenargs, **kwargs)
File "/home/dev/code/conda-build/conda_build/utils.py", line 666, in _func_defaulting_env_to_os_environ
return func(_args, **kwargs)
File "/opt/miniconda/lib/python2.7/subprocess.py", line 541, in check_call
raise CalledProcessError(retcode, cmd)
CalledProcessError: Command '['conda', 'skeleton', 'pypi', 'pyinstrument']' returned non-zero exit status 2
------------------------------- Captured stderr call -------------------------------
usage: conda [-h] [-V] command ...
conda: error: argument command: invalid choice: u'skeleton' (choose from u'info', u'help', u'list', u'search', u'create', u'install', u'update', u'upgrade', u'remove', u'uninstall', u'config', u'clean', u'package')
```
This seems to happen only with python 2.7, not 3.6: https://travis-ci.org/conda/conda-build/builds/232848688
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda/cli/conda_argparse.py`
Content:
```
1 # (c) 2012-2013 Continuum Analytics, Inc. / http://continuum.io
2 # All Rights Reserved
3 #
4 # conda is distributed under the terms of the BSD 3-clause license.
5 # Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.
6
7 from __future__ import print_function, division, absolute_import, unicode_literals
8
9 import argparse
10 import os
11 import subprocess
12 import sys
13
14 from .common import add_parser_help
15 from .find_commands import find_commands, find_executable
16 from ..exceptions import CommandNotFoundError
17
18
19 _ARGCOMPLETE_DEBUG = False
20 def debug_argcomplete(msg):
21 # To debug this, replace ttys001 with the fd of the terminal you are using
22 # (use the `tty` command to find this), and set _ARGCOMPLETE_DEBUG above
23 # to True. You can also `export _ARC_DEBUG=1` in the shell you are using
24 # to print debug messages from argcomplete.
25 if _ARGCOMPLETE_DEBUG:
26 f = open('/dev/ttys001', 'w')
27 f.write("\n%s\n" % msg)
28 f.flush()
29
30
31 try:
32 import argcomplete
33 argcomplete.CompletionFinder
34 except (ImportError, AttributeError):
35 # On Python 3.3, argcomplete can be an empty namespace package when
36 # we are in the conda-recipes directory.
37 argcomplete = None
38
39 if argcomplete:
40 class CondaSubprocessCompletionFinder(argcomplete.CompletionFinder):
41 def __call__(self, argument_parser, **kwargs):
42 def call_super():
43 parent = super(CondaSubprocessCompletionFinder, self)
44 return parent.__call__(argument_parser, **kwargs)
45
46 debug_argcomplete("Working")
47
48 if argument_parser.prog != 'conda':
49 debug_argcomplete("Argument parser is not conda")
50 return call_super()
51
52 environ = os.environ.copy()
53 if 'COMP_LINE' not in environ:
54 debug_argcomplete("COMP_LINE not in environ")
55 return call_super()
56
57 subcommands = find_commands()
58 for subcommand in subcommands:
59 if 'conda %s' % subcommand in environ['COMP_LINE']:
60 environ['COMP_LINE'] = environ['COMP_LINE'].replace('conda %s' % subcommand,
61 'conda-%s' % subcommand)
62 debug_argcomplete("Using subprocess")
63 debug_argcomplete(sys.argv)
64 import pprint
65 debug_argcomplete(pprint.pformat(environ))
66 args = [find_executable('conda-%s' % subcommand)]
67 debug_argcomplete(args)
68 p = subprocess.Popen(args, env=environ, close_fds=False)
69 p.communicate()
70 sys.exit()
71 else:
72 debug_argcomplete("Not using subprocess")
73 debug_argcomplete(sys.argv)
74 debug_argcomplete(argument_parser)
75 return call_super()
76
77 class ArgumentParser(argparse.ArgumentParser):
78 def __init__(self, *args, **kwargs):
79 if not kwargs.get('formatter_class'):
80 kwargs['formatter_class'] = argparse.RawDescriptionHelpFormatter
81 if 'add_help' not in kwargs:
82 add_custom_help = True
83 kwargs['add_help'] = False
84 else:
85 add_custom_help = False
86 super(ArgumentParser, self).__init__(*args, **kwargs)
87
88 if add_custom_help:
89 add_parser_help(self)
90
91 if self.description:
92 self.description += "\n\nOptions:\n"
93
94 def _get_action_from_name(self, name):
95 """Given a name, get the Action instance registered with this parser.
96 If only it were made available in the ArgumentError object. It is
97 passed as it's first arg...
98 """
99 container = self._actions
100 if name is None:
101 return None
102 for action in container:
103 if '/'.join(action.option_strings) == name:
104 return action
105 elif action.metavar == name:
106 return action
107 elif action.dest == name:
108 return action
109
110 def error(self, message):
111 import re
112 import subprocess
113 from .find_commands import find_executable
114
115 exc = sys.exc_info()[1]
116 if exc:
117 # this is incredibly lame, but argparse stupidly does not expose
118 # reasonable hooks for customizing error handling
119 if hasattr(exc, 'argument_name'):
120 argument = self._get_action_from_name(exc.argument_name)
121 else:
122 argument = None
123 if argument and argument.dest == "cmd":
124 m = re.compile(r"invalid choice: '([\w\-]+)'").match(exc.message)
125 if m:
126 cmd = m.group(1)
127 executable = find_executable('conda-' + cmd)
128 if not executable:
129 raise CommandNotFoundError(cmd)
130
131 args = [find_executable('conda-' + cmd)]
132 args.extend(sys.argv[2:])
133 p = subprocess.Popen(args)
134 try:
135 p.communicate()
136 except KeyboardInterrupt:
137 p.wait()
138 finally:
139 sys.exit(p.returncode)
140
141 super(ArgumentParser, self).error(message)
142
143 def print_help(self):
144 super(ArgumentParser, self).print_help()
145
146 if self.prog == 'conda' and sys.argv[1:] in ([], ['help'], ['-h'], ['--help']):
147 print("""
148 other commands, such as "conda build", are avaialble when additional conda
149 packages (e.g. conda-build) are installed
150 """)
151
152 def parse_args(self, *args, **kwargs):
153 if argcomplete:
154 CondaSubprocessCompletionFinder()(self)
155
156 return super(ArgumentParser, self).parse_args(*args, **kwargs)
157
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conda/cli/conda_argparse.py b/conda/cli/conda_argparse.py
--- a/conda/cli/conda_argparse.py
+++ b/conda/cli/conda_argparse.py
@@ -121,7 +121,7 @@
else:
argument = None
if argument and argument.dest == "cmd":
- m = re.compile(r"invalid choice: '([\w\-]+)'").match(exc.message)
+ m = re.compile(r"invalid choice: u?'([\w\-]+)'").match(exc.message)
if m:
cmd = m.group(1)
executable = find_executable('conda-' + cmd)
|
{"golden_diff": "diff --git a/conda/cli/conda_argparse.py b/conda/cli/conda_argparse.py\n--- a/conda/cli/conda_argparse.py\n+++ b/conda/cli/conda_argparse.py\n@@ -121,7 +121,7 @@\n else:\n argument = None\n if argument and argument.dest == \"cmd\":\n- m = re.compile(r\"invalid choice: '([\\w\\-]+)'\").match(exc.message)\n+ m = re.compile(r\"invalid choice: u?'([\\w\\-]+)'\").match(exc.message)\n if m:\n cmd = m.group(1)\n executable = find_executable('conda-' + cmd)\n", "issue": "latest 4.3.x (4.3.18-37-g79f86d4c) not picking up conda-build subcommands\nFrom conda-build's test suite:\r\n\r\n```\r\n________________________________ test_skeleton_pypi ________________________________\r\nTraceback (most recent call last):\r\n File \"/home/dev/code/conda-build/tests/test_published_examples.py\", line 15, in test_skeleton_pypi\r\n check_call_env(cmd.split())\r\n File \"/home/dev/code/conda-build/conda_build/utils.py\", line 670, in check_call_env\r\n return _func_defaulting_env_to_os_environ(subprocess.check_call, *popenargs, **kwargs)\r\n File \"/home/dev/code/conda-build/conda_build/utils.py\", line 666, in _func_defaulting_env_to_os_environ\r\n return func(_args, **kwargs)\r\n File \"/opt/miniconda/lib/python2.7/subprocess.py\", line 541, in check_call\r\n raise CalledProcessError(retcode, cmd)\r\nCalledProcessError: Command '['conda', 'skeleton', 'pypi', 'pyinstrument']' returned non-zero exit status 2\r\n------------------------------- Captured stderr call -------------------------------\r\nusage: conda [-h] [-V] command ...\r\nconda: error: argument command: invalid choice: u'skeleton' (choose from u'info', u'help', u'list', u'search', u'create', u'install', u'update', u'upgrade', u'remove', u'uninstall', u'config', u'clean', u'package')\r\n```\r\n\r\nThis seems to happen only with python 2.7, not 3.6: https://travis-ci.org/conda/conda-build/builds/232848688\n", "before_files": [{"content": "# (c) 2012-2013 Continuum Analytics, Inc. / http://continuum.io\n# All Rights Reserved\n#\n# conda is distributed under the terms of the BSD 3-clause license.\n# Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.\n\nfrom __future__ import print_function, division, absolute_import, unicode_literals\n\nimport argparse\nimport os\nimport subprocess\nimport sys\n\nfrom .common import add_parser_help\nfrom .find_commands import find_commands, find_executable\nfrom ..exceptions import CommandNotFoundError\n\n\n_ARGCOMPLETE_DEBUG = False\ndef debug_argcomplete(msg):\n # To debug this, replace ttys001 with the fd of the terminal you are using\n # (use the `tty` command to find this), and set _ARGCOMPLETE_DEBUG above\n # to True. 
You can also `export _ARC_DEBUG=1` in the shell you are using\n # to print debug messages from argcomplete.\n if _ARGCOMPLETE_DEBUG:\n f = open('/dev/ttys001', 'w')\n f.write(\"\\n%s\\n\" % msg)\n f.flush()\n\n\ntry:\n import argcomplete\n argcomplete.CompletionFinder\nexcept (ImportError, AttributeError):\n # On Python 3.3, argcomplete can be an empty namespace package when\n # we are in the conda-recipes directory.\n argcomplete = None\n\nif argcomplete:\n class CondaSubprocessCompletionFinder(argcomplete.CompletionFinder):\n def __call__(self, argument_parser, **kwargs):\n def call_super():\n parent = super(CondaSubprocessCompletionFinder, self)\n return parent.__call__(argument_parser, **kwargs)\n\n debug_argcomplete(\"Working\")\n\n if argument_parser.prog != 'conda':\n debug_argcomplete(\"Argument parser is not conda\")\n return call_super()\n\n environ = os.environ.copy()\n if 'COMP_LINE' not in environ:\n debug_argcomplete(\"COMP_LINE not in environ\")\n return call_super()\n\n subcommands = find_commands()\n for subcommand in subcommands:\n if 'conda %s' % subcommand in environ['COMP_LINE']:\n environ['COMP_LINE'] = environ['COMP_LINE'].replace('conda %s' % subcommand,\n 'conda-%s' % subcommand)\n debug_argcomplete(\"Using subprocess\")\n debug_argcomplete(sys.argv)\n import pprint\n debug_argcomplete(pprint.pformat(environ))\n args = [find_executable('conda-%s' % subcommand)]\n debug_argcomplete(args)\n p = subprocess.Popen(args, env=environ, close_fds=False)\n p.communicate()\n sys.exit()\n else:\n debug_argcomplete(\"Not using subprocess\")\n debug_argcomplete(sys.argv)\n debug_argcomplete(argument_parser)\n return call_super()\n\nclass ArgumentParser(argparse.ArgumentParser):\n def __init__(self, *args, **kwargs):\n if not kwargs.get('formatter_class'):\n kwargs['formatter_class'] = argparse.RawDescriptionHelpFormatter\n if 'add_help' not in kwargs:\n add_custom_help = True\n kwargs['add_help'] = False\n else:\n add_custom_help = False\n super(ArgumentParser, self).__init__(*args, **kwargs)\n\n if add_custom_help:\n add_parser_help(self)\n\n if self.description:\n self.description += \"\\n\\nOptions:\\n\"\n\n def _get_action_from_name(self, name):\n \"\"\"Given a name, get the Action instance registered with this parser.\n If only it were made available in the ArgumentError object. 
It is\n passed as it's first arg...\n \"\"\"\n container = self._actions\n if name is None:\n return None\n for action in container:\n if '/'.join(action.option_strings) == name:\n return action\n elif action.metavar == name:\n return action\n elif action.dest == name:\n return action\n\n def error(self, message):\n import re\n import subprocess\n from .find_commands import find_executable\n\n exc = sys.exc_info()[1]\n if exc:\n # this is incredibly lame, but argparse stupidly does not expose\n # reasonable hooks for customizing error handling\n if hasattr(exc, 'argument_name'):\n argument = self._get_action_from_name(exc.argument_name)\n else:\n argument = None\n if argument and argument.dest == \"cmd\":\n m = re.compile(r\"invalid choice: '([\\w\\-]+)'\").match(exc.message)\n if m:\n cmd = m.group(1)\n executable = find_executable('conda-' + cmd)\n if not executable:\n raise CommandNotFoundError(cmd)\n\n args = [find_executable('conda-' + cmd)]\n args.extend(sys.argv[2:])\n p = subprocess.Popen(args)\n try:\n p.communicate()\n except KeyboardInterrupt:\n p.wait()\n finally:\n sys.exit(p.returncode)\n\n super(ArgumentParser, self).error(message)\n\n def print_help(self):\n super(ArgumentParser, self).print_help()\n\n if self.prog == 'conda' and sys.argv[1:] in ([], ['help'], ['-h'], ['--help']):\n print(\"\"\"\nother commands, such as \"conda build\", are avaialble when additional conda\npackages (e.g. conda-build) are installed\n\"\"\")\n\n def parse_args(self, *args, **kwargs):\n if argcomplete:\n CondaSubprocessCompletionFinder()(self)\n\n return super(ArgumentParser, self).parse_args(*args, **kwargs)\n", "path": "conda/cli/conda_argparse.py"}], "after_files": [{"content": "# (c) 2012-2013 Continuum Analytics, Inc. / http://continuum.io\n# All Rights Reserved\n#\n# conda is distributed under the terms of the BSD 3-clause license.\n# Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.\n\nfrom __future__ import print_function, division, absolute_import, unicode_literals\n\nimport argparse\nimport os\nimport subprocess\nimport sys\n\nfrom .common import add_parser_help\nfrom .find_commands import find_commands, find_executable\nfrom ..exceptions import CommandNotFoundError\n\n\n_ARGCOMPLETE_DEBUG = False\ndef debug_argcomplete(msg):\n # To debug this, replace ttys001 with the fd of the terminal you are using\n # (use the `tty` command to find this), and set _ARGCOMPLETE_DEBUG above\n # to True. 
You can also `export _ARC_DEBUG=1` in the shell you are using\n # to print debug messages from argcomplete.\n if _ARGCOMPLETE_DEBUG:\n f = open('/dev/ttys001', 'w')\n f.write(\"\\n%s\\n\" % msg)\n f.flush()\n\n\ntry:\n import argcomplete\n argcomplete.CompletionFinder\nexcept (ImportError, AttributeError):\n # On Python 3.3, argcomplete can be an empty namespace package when\n # we are in the conda-recipes directory.\n argcomplete = None\n\nif argcomplete:\n class CondaSubprocessCompletionFinder(argcomplete.CompletionFinder):\n def __call__(self, argument_parser, **kwargs):\n def call_super():\n parent = super(CondaSubprocessCompletionFinder, self)\n return parent.__call__(argument_parser, **kwargs)\n\n debug_argcomplete(\"Working\")\n\n if argument_parser.prog != 'conda':\n debug_argcomplete(\"Argument parser is not conda\")\n return call_super()\n\n environ = os.environ.copy()\n if 'COMP_LINE' not in environ:\n debug_argcomplete(\"COMP_LINE not in environ\")\n return call_super()\n\n subcommands = find_commands()\n for subcommand in subcommands:\n if 'conda %s' % subcommand in environ['COMP_LINE']:\n environ['COMP_LINE'] = environ['COMP_LINE'].replace('conda %s' % subcommand,\n 'conda-%s' % subcommand)\n debug_argcomplete(\"Using subprocess\")\n debug_argcomplete(sys.argv)\n import pprint\n debug_argcomplete(pprint.pformat(environ))\n args = [find_executable('conda-%s' % subcommand)]\n debug_argcomplete(args)\n p = subprocess.Popen(args, env=environ, close_fds=False)\n p.communicate()\n sys.exit()\n else:\n debug_argcomplete(\"Not using subprocess\")\n debug_argcomplete(sys.argv)\n debug_argcomplete(argument_parser)\n return call_super()\n\nclass ArgumentParser(argparse.ArgumentParser):\n def __init__(self, *args, **kwargs):\n if not kwargs.get('formatter_class'):\n kwargs['formatter_class'] = argparse.RawDescriptionHelpFormatter\n if 'add_help' not in kwargs:\n add_custom_help = True\n kwargs['add_help'] = False\n else:\n add_custom_help = False\n super(ArgumentParser, self).__init__(*args, **kwargs)\n\n if add_custom_help:\n add_parser_help(self)\n\n if self.description:\n self.description += \"\\n\\nOptions:\\n\"\n\n def _get_action_from_name(self, name):\n \"\"\"Given a name, get the Action instance registered with this parser.\n If only it were made available in the ArgumentError object. 
It is\n passed as it's first arg...\n \"\"\"\n container = self._actions\n if name is None:\n return None\n for action in container:\n if '/'.join(action.option_strings) == name:\n return action\n elif action.metavar == name:\n return action\n elif action.dest == name:\n return action\n\n def error(self, message):\n import re\n import subprocess\n from .find_commands import find_executable\n\n exc = sys.exc_info()[1]\n if exc:\n # this is incredibly lame, but argparse stupidly does not expose\n # reasonable hooks for customizing error handling\n if hasattr(exc, 'argument_name'):\n argument = self._get_action_from_name(exc.argument_name)\n else:\n argument = None\n if argument and argument.dest == \"cmd\":\n m = re.compile(r\"invalid choice: u?'([\\w\\-]+)'\").match(exc.message)\n if m:\n cmd = m.group(1)\n executable = find_executable('conda-' + cmd)\n if not executable:\n raise CommandNotFoundError(cmd)\n\n args = [find_executable('conda-' + cmd)]\n args.extend(sys.argv[2:])\n p = subprocess.Popen(args)\n try:\n p.communicate()\n except KeyboardInterrupt:\n p.wait()\n finally:\n sys.exit(p.returncode)\n\n super(ArgumentParser, self).error(message)\n\n def print_help(self):\n super(ArgumentParser, self).print_help()\n\n if self.prog == 'conda' and sys.argv[1:] in ([], ['help'], ['-h'], ['--help']):\n print(\"\"\"\nother commands, such as \"conda build\", are avaialble when additional conda\npackages (e.g. conda-build) are installed\n\"\"\")\n\n def parse_args(self, *args, **kwargs):\n if argcomplete:\n CondaSubprocessCompletionFinder()(self)\n\n return super(ArgumentParser, self).parse_args(*args, **kwargs)\n", "path": "conda/cli/conda_argparse.py"}]}
| 2,221 | 145 |
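The conda patch above only touches the regex that recognises argparse's "invalid choice" message before dispatching to external subcommands. On Python 2, argparse reprs the bad choice as a unicode literal (`u'skeleton'`), so the original pattern never matched and `conda skeleton` surfaced as an invalid choice instead of being forwarded to the `conda-skeleton` executable. A small reproduction of just that regex behaviour:

```python
import re

# argparse error text as produced under Python 2 (unicode repr of the choice)
message = "invalid choice: u'skeleton' (choose from u'info', u'help', u'list')"

old = re.compile(r"invalid choice: '([\w\-]+)'")
new = re.compile(r"invalid choice: u?'([\w\-]+)'")

print(old.match(message))           # None -> external subcommand never dispatched
print(new.match(message).group(1))  # 'skeleton' -> conda-skeleton is found and run
```

Under Python 3 the repr has no `u` prefix, which is why the optional `u?` keeps both interpreters working.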
gh_patches_debug_13525
|
rasdani/github-patches
|
git_diff
|
pyro-ppl__pyro-3002
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[bug] bart.py example consistently fails with cholesky error with default arguments
I'm just attempting to get started using pyro, and was trying to work through examples when I found that the example listed on the example page under "Multivariate Forecasting" reliably fails for me.
I see that there is a related open issue which is quite old https://github.com/pyro-ppl/pyro/issues/2017 which has some suggestions for the broader problem which seemed promising, but the PR following the approach discussed (https://github.com/pyro-ppl/pyro/pull/2019) was never approved.
It's not impossible that this is due to some environmental factor. However, I created a clean virtualenv to explore pyro. If there's some environmental contributing factor which I am not aware of, please document it, or even better, add a helper method to health-check a given environment.
If you can confirm/reproduce the failure, I would respectfully suggest that either
- a solution should be pursued
- a working example of a workaround should be exhibited and linked
- or at a minimum non-functioning "examples" should be removed from the examples page
### Issue Description
`examples/contrib/forecast/bart.py` fails with cholesky error when run with default params (no args)
Note that this example attempts to use the `backtest` method, which trains a model several times over different time windows. The first several such windows succeed.
The error appears as follows:
```
Traceback (most recent call last):
File "examples/contrib/forecast/bart.py", line 180, in <module>
main(args)
File "examples/contrib/forecast/bart.py", line 156, in main
forecaster_options=forecaster_options,
File "/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/contrib/forecast/evaluate.py", line 205, in backtest
batch_size=batch_size,
File "/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/contrib/forecast/forecaster.py", line 361, in __call__
return super().__call__(data, covariates, num_samples, batch_size)
File "/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/contrib/forecast/forecaster.py", line 390, in forward
return self.model(data, covariates)
File "/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/nn/module.py", line 426, in __call__
return super().__call__(*args, **kwargs)
File "/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/contrib/forecast/forecaster.py", line 185, in forward
self.model(zero_data, covariates)
File "examples/contrib/forecast/bart.py", line 121, in model
self.predict(noise_model, prediction)
File "/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/contrib/forecast/forecaster.py", line 157, in predict
noise = pyro.sample("residual", noise_dist)
File "/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/primitives.py", line 163, in sample
apply_stack(msg)
File "/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/poutine/runtime.py", line 218, in apply_stack
default_process_message(msg)
File "/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/poutine/runtime.py", line 179, in default_process_message
msg["value"] = msg["fn"](*msg["args"], **msg["kwargs"])
File "/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/distributions/torch_distribution.py", line 49, in __call__
if self.has_rsample
File "/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/distributions/hmm.py", line 584, in rsample
z = _sequential_gaussian_filter_sample(self._init, trans, sample_shape)
File "/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/distributions/hmm.py", line 144, in _sequential_gaussian_filter_sample
contracted = joint.marginalize(left=state_dim)
File "/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/ops/gaussian.py", line 244, in marginalize
P_b = cholesky(P_bb)
File "/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/ops/tensor_utils.py", line 399, in cholesky
return torch.linalg.cholesky(x)
RuntimeError: torch.linalg.cholesky: (Batch element 255): The factorization could not be completed because the input is not positive-definite (the leading minor of order 2 is not positive-definite).
```
### Environment
| | value |
|--|--|
| OS | macOS Big Sur (11.5.2) (intell)|
| python version | 3.7.9|
| pytorch version | 1.10.1|
| pyro version | 1.8.0|
Note, I get the same behavior on linux in docker.
### Code Snippet
copy-pasted the example here: https://pyro.ai/examples/forecast_simple.html / https://github.com/pyro-ppl/pyro/blob/dev/examples/contrib/forecast/bart.py and simply ran:
```
python bart.py
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/contrib/forecast/bart.py`
Content:
```
1 # Copyright Contributors to the Pyro project.
2 # SPDX-License-Identifier: Apache-2.0
3
4 import argparse
5 import logging
6
7 import numpy as np
8 import torch
9
10 import pyro
11 import pyro.distributions as dist
12 from pyro.contrib.examples.bart import load_bart_od
13 from pyro.contrib.forecast import ForecastingModel, backtest
14 from pyro.ops.tensor_utils import periodic_cumsum, periodic_repeat
15
16 logging.getLogger("pyro").setLevel(logging.DEBUG)
17 logging.getLogger("pyro").handlers[0].setLevel(logging.DEBUG)
18
19
20 def preprocess(args):
21 """
22 Extract a tensor of (arrivals,departures) to Embarcadero station.
23 """
24 print("Loading data")
25 dataset = load_bart_od()
26
27 # The full dataset has all station->station ridership counts for all of 50
28 # train stations. In this simple example we will model only the aggretate
29 # counts to and from a single station, Embarcadero.
30 i = dataset["stations"].index("EMBR")
31 arrivals = dataset["counts"][:, :, i].sum(-1)
32 departures = dataset["counts"][:, i, :].sum(-1)
33 data = torch.stack([arrivals, departures], dim=-1)
34
35 # This simple example uses no covariates, so we will construct a
36 # zero-element tensor of the correct length as empty covariates.
37 covariates = torch.zeros(len(data), 0)
38
39 return data, covariates
40
41
42 # We define a model by subclassing the ForecastingModel class and implementing
43 # a single .model() method.
44 class Model(ForecastingModel):
45 # The .model() method inputs two tensors: a fake tensor zero_data that is
46 # the same size and dtype as the real data (but of course the generative
47 # model shouldn't depend on the value of the data it generates!), and a
48 # tensor of covariates. Our simple model depends on no covariates, so we
49 # simply pass in an empty tensor (see the preprocess() function above).
50 def model(self, zero_data, covariates):
51 period = 24 * 7
52 duration, dim = zero_data.shape[-2:]
53 assert dim == 2 # Data is bivariate: (arrivals, departures).
54
55 # Sample global parameters.
56 noise_scale = pyro.sample(
57 "noise_scale", dist.LogNormal(torch.full((dim,), -3.0), 1.0).to_event(1)
58 )
59 assert noise_scale.shape[-1:] == (dim,)
60 trans_timescale = pyro.sample(
61 "trans_timescale", dist.LogNormal(torch.zeros(dim), 1).to_event(1)
62 )
63 assert trans_timescale.shape[-1:] == (dim,)
64
65 trans_loc = pyro.sample("trans_loc", dist.Cauchy(0, 1 / period))
66 trans_loc = trans_loc.unsqueeze(-1).expand(trans_loc.shape + (dim,))
67 assert trans_loc.shape[-1:] == (dim,)
68 trans_scale = pyro.sample(
69 "trans_scale", dist.LogNormal(torch.zeros(dim), 0.1).to_event(1)
70 )
71 trans_corr = pyro.sample("trans_corr", dist.LKJCholesky(dim, torch.ones(())))
72 trans_scale_tril = trans_scale.unsqueeze(-1) * trans_corr
73 assert trans_scale_tril.shape[-2:] == (dim, dim)
74
75 obs_scale = pyro.sample(
76 "obs_scale", dist.LogNormal(torch.zeros(dim), 0.1).to_event(1)
77 )
78 obs_corr = pyro.sample("obs_corr", dist.LKJCholesky(dim, torch.ones(())))
79 obs_scale_tril = obs_scale.unsqueeze(-1) * obs_corr
80 assert obs_scale_tril.shape[-2:] == (dim, dim)
81
82 # Note the initial seasonality should be sampled in a plate with the
83 # same dim as the time_plate, dim=-1. That way we can repeat the dim
84 # below using periodic_repeat().
85 with pyro.plate("season_plate", period, dim=-1):
86 season_init = pyro.sample(
87 "season_init", dist.Normal(torch.zeros(dim), 1).to_event(1)
88 )
89 assert season_init.shape[-2:] == (period, dim)
90
91 # Sample independent noise at each time step.
92 with self.time_plate:
93 season_noise = pyro.sample(
94 "season_noise", dist.Normal(0, noise_scale).to_event(1)
95 )
96 assert season_noise.shape[-2:] == (duration, dim)
97
98 # Construct a prediction. This prediction has an exactly repeated
99 # seasonal part plus slow seasonal drift. We use two deterministic,
100 # linear functions to transform our diagonal Normal noise to nontrivial
101 # samples from a Gaussian process.
102 prediction = periodic_repeat(season_init, duration, dim=-2) + periodic_cumsum(
103 season_noise, period, dim=-2
104 )
105 assert prediction.shape[-2:] == (duration, dim)
106
107 # Construct a joint noise model. This model is a GaussianHMM, whose
108 # .rsample() and .log_prob() methods are parallelized over time; this
109 # this entire model is parallelized over time.
110 init_dist = dist.Normal(torch.zeros(dim), 100).to_event(1)
111 trans_mat = trans_timescale.neg().exp().diag_embed()
112 trans_dist = dist.MultivariateNormal(trans_loc, scale_tril=trans_scale_tril)
113 obs_mat = torch.eye(dim)
114 obs_dist = dist.MultivariateNormal(torch.zeros(dim), scale_tril=obs_scale_tril)
115 noise_model = dist.GaussianHMM(
116 init_dist, trans_mat, trans_dist, obs_mat, obs_dist, duration=duration
117 )
118 assert noise_model.event_shape == (duration, dim)
119
120 # The final statement registers our noise model and prediction.
121 self.predict(noise_model, prediction)
122
123
124 def main(args):
125 data, covariates = preprocess(args)
126
127 # We will model positive count data by log1p-transforming it into real
128 # valued data. But since we want to evaluate back in the count domain, we
129 # will also define a transform to apply during evaluation, transforming
130 # from real back to count-valued data. Truth is mapped by the log1p()
131 # inverse expm1(), but the prediction will be sampled from a Poisson
132 # distribution.
133 data = data.log1p()
134
135 def transform(pred, truth):
136 pred = torch.poisson(pred.clamp(min=1e-4).expm1())
137 truth = truth.expm1()
138 return pred, truth
139
140 # The backtest() function automatically trains and evaluates our model on
141 # different windows of data.
142 forecaster_options = {
143 "num_steps": args.num_steps,
144 "learning_rate": args.learning_rate,
145 "log_every": args.log_every,
146 "dct_gradients": args.dct,
147 }
148 metrics = backtest(
149 data,
150 covariates,
151 Model,
152 train_window=args.train_window,
153 test_window=args.test_window,
154 stride=args.stride,
155 num_samples=args.num_samples,
156 forecaster_options=forecaster_options,
157 )
158
159 for name in ["mae", "rmse", "crps"]:
160 values = [m[name] for m in metrics]
161 mean = np.mean(values)
162 std = np.std(values)
163 print("{} = {:0.3g} +- {:0.3g}".format(name, mean, std))
164 return metrics
165
166
167 if __name__ == "__main__":
168 assert pyro.__version__.startswith("1.8.0")
169 parser = argparse.ArgumentParser(description="Bart Ridership Forecasting Example")
170 parser.add_argument("--train-window", default=2160, type=int)
171 parser.add_argument("--test-window", default=336, type=int)
172 parser.add_argument("--stride", default=168, type=int)
173 parser.add_argument("-n", "--num-steps", default=501, type=int)
174 parser.add_argument("-lr", "--learning-rate", default=0.05, type=float)
175 parser.add_argument("--dct", action="store_true")
176 parser.add_argument("--num-samples", default=100, type=int)
177 parser.add_argument("--log-every", default=50, type=int)
178 parser.add_argument("--seed", default=1234567890, type=int)
179 args = parser.parse_args()
180 main(args)
181
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/contrib/forecast/bart.py b/examples/contrib/forecast/bart.py
--- a/examples/contrib/forecast/bart.py
+++ b/examples/contrib/forecast/bart.py
@@ -171,7 +171,7 @@
parser.add_argument("--test-window", default=336, type=int)
parser.add_argument("--stride", default=168, type=int)
parser.add_argument("-n", "--num-steps", default=501, type=int)
- parser.add_argument("-lr", "--learning-rate", default=0.05, type=float)
+ parser.add_argument("-lr", "--learning-rate", default=0.01, type=float)
parser.add_argument("--dct", action="store_true")
parser.add_argument("--num-samples", default=100, type=int)
parser.add_argument("--log-every", default=50, type=int)
|
{"golden_diff": "diff --git a/examples/contrib/forecast/bart.py b/examples/contrib/forecast/bart.py\n--- a/examples/contrib/forecast/bart.py\n+++ b/examples/contrib/forecast/bart.py\n@@ -171,7 +171,7 @@\n parser.add_argument(\"--test-window\", default=336, type=int)\n parser.add_argument(\"--stride\", default=168, type=int)\n parser.add_argument(\"-n\", \"--num-steps\", default=501, type=int)\n- parser.add_argument(\"-lr\", \"--learning-rate\", default=0.05, type=float)\n+ parser.add_argument(\"-lr\", \"--learning-rate\", default=0.01, type=float)\n parser.add_argument(\"--dct\", action=\"store_true\")\n parser.add_argument(\"--num-samples\", default=100, type=int)\n parser.add_argument(\"--log-every\", default=50, type=int)\n", "issue": "[bug] bart.py example consistently fails with cholesky error with default arguments\nI'm just attempting to get started using pyro, and was trying to work through examples when I found that the example listed on the example page under \"Multivariate Forecasting\" reliably fails for me.\r\n\r\nI see that there is a related open issue which is quite old https://github.com/pyro-ppl/pyro/issues/2017 which has some suggestions for the broader problem which seemed promising, but the PR following the approach discussed (https://github.com/pyro-ppl/pyro/pull/2019) was never approved.\r\n\r\nIt's not impossible that this is due to some environmental factor. However, I created a clean virtualenv to explore pyro. If there's some environmental contributing factor which I am not aware of, please document it, or even better, add a helper method to health-check a given environment.\r\n\r\nIf you can confirm/reproduce the failure, I would respectfully suggest that either \r\n- a solution should be pursued\r\n- a working example of a workaround should be exhibited and linked\r\n- or at a minimum non-functioning \"examples\" should be removed from the examples page\r\n\r\n### Issue Description\r\n\r\n`examples/contrib/forecast/bart.py` fails with cholesky error when run with default params (no args)\r\n\r\nNote that this example attempts to use the `backtest` method, which trains a model several times over different time windows. 
The first several such windows succeed.\r\n\r\nThe error appears as follows:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"examples/contrib/forecast/bart.py\", line 180, in <module>\r\n main(args)\r\n File \"examples/contrib/forecast/bart.py\", line 156, in main\r\n forecaster_options=forecaster_options,\r\n File \"/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/contrib/forecast/evaluate.py\", line 205, in backtest\r\n batch_size=batch_size,\r\n File \"/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/contrib/forecast/forecaster.py\", line 361, in __call__\r\n return super().__call__(data, covariates, num_samples, batch_size)\r\n File \"/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1102, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/contrib/forecast/forecaster.py\", line 390, in forward\r\n return self.model(data, covariates)\r\n File \"/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/nn/module.py\", line 426, in __call__\r\n return super().__call__(*args, **kwargs)\r\n File \"/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1102, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/contrib/forecast/forecaster.py\", line 185, in forward\r\n self.model(zero_data, covariates)\r\n File \"examples/contrib/forecast/bart.py\", line 121, in model\r\n self.predict(noise_model, prediction)\r\n File \"/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/contrib/forecast/forecaster.py\", line 157, in predict\r\n noise = pyro.sample(\"residual\", noise_dist)\r\n File \"/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/primitives.py\", line 163, in sample\r\n apply_stack(msg)\r\n File \"/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/poutine/runtime.py\", line 218, in apply_stack\r\n default_process_message(msg)\r\n File \"/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/poutine/runtime.py\", line 179, in default_process_message\r\n msg[\"value\"] = msg[\"fn\"](*msg[\"args\"], **msg[\"kwargs\"])\r\n File \"/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/distributions/torch_distribution.py\", line 49, in __call__\r\n if self.has_rsample\r\n File \"/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/distributions/hmm.py\", line 584, in rsample\r\n z = _sequential_gaussian_filter_sample(self._init, trans, sample_shape)\r\n File \"/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/distributions/hmm.py\", line 144, in _sequential_gaussian_filter_sample\r\n contracted = joint.marginalize(left=state_dim)\r\n File \"/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/ops/gaussian.py\", line 244, in marginalize\r\n P_b = cholesky(P_bb)\r\n File \"/Users/aaron/.pyenv/versions/vpyro/lib/python3.7/site-packages/pyro/ops/tensor_utils.py\", line 399, in cholesky\r\n return torch.linalg.cholesky(x)\r\nRuntimeError: torch.linalg.cholesky: (Batch element 255): The factorization could not be completed because the input is not positive-definite (the leading minor of order 2 is not positive-definite).\r\n```\r\n\r\n### Environment\r\n\r\n| | value |\r\n|--|--|\r\n| OS | macOS Big Sur (11.5.2) (intell)|\r\n| python version | 3.7.9| \r\n| pytorch 
version | 1.10.1|\r\n| pyro version | 1.8.0|\r\n\r\nNote, I get the same behavior on linux in docker.\r\n\r\n### Code Snippet\r\ncopy-pasted the example here: https://pyro.ai/examples/forecast_simple.html / https://github.com/pyro-ppl/pyro/blob/dev/examples/contrib/forecast/bart.py and simply ran:\r\n\r\n```\r\npython bart.py\r\n```\r\n\n", "before_files": [{"content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\nimport argparse\nimport logging\n\nimport numpy as np\nimport torch\n\nimport pyro\nimport pyro.distributions as dist\nfrom pyro.contrib.examples.bart import load_bart_od\nfrom pyro.contrib.forecast import ForecastingModel, backtest\nfrom pyro.ops.tensor_utils import periodic_cumsum, periodic_repeat\n\nlogging.getLogger(\"pyro\").setLevel(logging.DEBUG)\nlogging.getLogger(\"pyro\").handlers[0].setLevel(logging.DEBUG)\n\n\ndef preprocess(args):\n \"\"\"\n Extract a tensor of (arrivals,departures) to Embarcadero station.\n \"\"\"\n print(\"Loading data\")\n dataset = load_bart_od()\n\n # The full dataset has all station->station ridership counts for all of 50\n # train stations. In this simple example we will model only the aggretate\n # counts to and from a single station, Embarcadero.\n i = dataset[\"stations\"].index(\"EMBR\")\n arrivals = dataset[\"counts\"][:, :, i].sum(-1)\n departures = dataset[\"counts\"][:, i, :].sum(-1)\n data = torch.stack([arrivals, departures], dim=-1)\n\n # This simple example uses no covariates, so we will construct a\n # zero-element tensor of the correct length as empty covariates.\n covariates = torch.zeros(len(data), 0)\n\n return data, covariates\n\n\n# We define a model by subclassing the ForecastingModel class and implementing\n# a single .model() method.\nclass Model(ForecastingModel):\n # The .model() method inputs two tensors: a fake tensor zero_data that is\n # the same size and dtype as the real data (but of course the generative\n # model shouldn't depend on the value of the data it generates!), and a\n # tensor of covariates. Our simple model depends on no covariates, so we\n # simply pass in an empty tensor (see the preprocess() function above).\n def model(self, zero_data, covariates):\n period = 24 * 7\n duration, dim = zero_data.shape[-2:]\n assert dim == 2 # Data is bivariate: (arrivals, departures).\n\n # Sample global parameters.\n noise_scale = pyro.sample(\n \"noise_scale\", dist.LogNormal(torch.full((dim,), -3.0), 1.0).to_event(1)\n )\n assert noise_scale.shape[-1:] == (dim,)\n trans_timescale = pyro.sample(\n \"trans_timescale\", dist.LogNormal(torch.zeros(dim), 1).to_event(1)\n )\n assert trans_timescale.shape[-1:] == (dim,)\n\n trans_loc = pyro.sample(\"trans_loc\", dist.Cauchy(0, 1 / period))\n trans_loc = trans_loc.unsqueeze(-1).expand(trans_loc.shape + (dim,))\n assert trans_loc.shape[-1:] == (dim,)\n trans_scale = pyro.sample(\n \"trans_scale\", dist.LogNormal(torch.zeros(dim), 0.1).to_event(1)\n )\n trans_corr = pyro.sample(\"trans_corr\", dist.LKJCholesky(dim, torch.ones(())))\n trans_scale_tril = trans_scale.unsqueeze(-1) * trans_corr\n assert trans_scale_tril.shape[-2:] == (dim, dim)\n\n obs_scale = pyro.sample(\n \"obs_scale\", dist.LogNormal(torch.zeros(dim), 0.1).to_event(1)\n )\n obs_corr = pyro.sample(\"obs_corr\", dist.LKJCholesky(dim, torch.ones(())))\n obs_scale_tril = obs_scale.unsqueeze(-1) * obs_corr\n assert obs_scale_tril.shape[-2:] == (dim, dim)\n\n # Note the initial seasonality should be sampled in a plate with the\n # same dim as the time_plate, dim=-1. 
That way we can repeat the dim\n # below using periodic_repeat().\n with pyro.plate(\"season_plate\", period, dim=-1):\n season_init = pyro.sample(\n \"season_init\", dist.Normal(torch.zeros(dim), 1).to_event(1)\n )\n assert season_init.shape[-2:] == (period, dim)\n\n # Sample independent noise at each time step.\n with self.time_plate:\n season_noise = pyro.sample(\n \"season_noise\", dist.Normal(0, noise_scale).to_event(1)\n )\n assert season_noise.shape[-2:] == (duration, dim)\n\n # Construct a prediction. This prediction has an exactly repeated\n # seasonal part plus slow seasonal drift. We use two deterministic,\n # linear functions to transform our diagonal Normal noise to nontrivial\n # samples from a Gaussian process.\n prediction = periodic_repeat(season_init, duration, dim=-2) + periodic_cumsum(\n season_noise, period, dim=-2\n )\n assert prediction.shape[-2:] == (duration, dim)\n\n # Construct a joint noise model. This model is a GaussianHMM, whose\n # .rsample() and .log_prob() methods are parallelized over time; this\n # this entire model is parallelized over time.\n init_dist = dist.Normal(torch.zeros(dim), 100).to_event(1)\n trans_mat = trans_timescale.neg().exp().diag_embed()\n trans_dist = dist.MultivariateNormal(trans_loc, scale_tril=trans_scale_tril)\n obs_mat = torch.eye(dim)\n obs_dist = dist.MultivariateNormal(torch.zeros(dim), scale_tril=obs_scale_tril)\n noise_model = dist.GaussianHMM(\n init_dist, trans_mat, trans_dist, obs_mat, obs_dist, duration=duration\n )\n assert noise_model.event_shape == (duration, dim)\n\n # The final statement registers our noise model and prediction.\n self.predict(noise_model, prediction)\n\n\ndef main(args):\n data, covariates = preprocess(args)\n\n # We will model positive count data by log1p-transforming it into real\n # valued data. But since we want to evaluate back in the count domain, we\n # will also define a transform to apply during evaluation, transforming\n # from real back to count-valued data. 
Truth is mapped by the log1p()\n # inverse expm1(), but the prediction will be sampled from a Poisson\n # distribution.\n data = data.log1p()\n\n def transform(pred, truth):\n pred = torch.poisson(pred.clamp(min=1e-4).expm1())\n truth = truth.expm1()\n return pred, truth\n\n # The backtest() function automatically trains and evaluates our model on\n # different windows of data.\n forecaster_options = {\n \"num_steps\": args.num_steps,\n \"learning_rate\": args.learning_rate,\n \"log_every\": args.log_every,\n \"dct_gradients\": args.dct,\n }\n metrics = backtest(\n data,\n covariates,\n Model,\n train_window=args.train_window,\n test_window=args.test_window,\n stride=args.stride,\n num_samples=args.num_samples,\n forecaster_options=forecaster_options,\n )\n\n for name in [\"mae\", \"rmse\", \"crps\"]:\n values = [m[name] for m in metrics]\n mean = np.mean(values)\n std = np.std(values)\n print(\"{} = {:0.3g} +- {:0.3g}\".format(name, mean, std))\n return metrics\n\n\nif __name__ == \"__main__\":\n assert pyro.__version__.startswith(\"1.8.0\")\n parser = argparse.ArgumentParser(description=\"Bart Ridership Forecasting Example\")\n parser.add_argument(\"--train-window\", default=2160, type=int)\n parser.add_argument(\"--test-window\", default=336, type=int)\n parser.add_argument(\"--stride\", default=168, type=int)\n parser.add_argument(\"-n\", \"--num-steps\", default=501, type=int)\n parser.add_argument(\"-lr\", \"--learning-rate\", default=0.05, type=float)\n parser.add_argument(\"--dct\", action=\"store_true\")\n parser.add_argument(\"--num-samples\", default=100, type=int)\n parser.add_argument(\"--log-every\", default=50, type=int)\n parser.add_argument(\"--seed\", default=1234567890, type=int)\n args = parser.parse_args()\n main(args)\n", "path": "examples/contrib/forecast/bart.py"}], "after_files": [{"content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\nimport argparse\nimport logging\n\nimport numpy as np\nimport torch\n\nimport pyro\nimport pyro.distributions as dist\nfrom pyro.contrib.examples.bart import load_bart_od\nfrom pyro.contrib.forecast import ForecastingModel, backtest\nfrom pyro.ops.tensor_utils import periodic_cumsum, periodic_repeat\n\nlogging.getLogger(\"pyro\").setLevel(logging.DEBUG)\nlogging.getLogger(\"pyro\").handlers[0].setLevel(logging.DEBUG)\n\n\ndef preprocess(args):\n \"\"\"\n Extract a tensor of (arrivals,departures) to Embarcadero station.\n \"\"\"\n print(\"Loading data\")\n dataset = load_bart_od()\n\n # The full dataset has all station->station ridership counts for all of 50\n # train stations. In this simple example we will model only the aggretate\n # counts to and from a single station, Embarcadero.\n i = dataset[\"stations\"].index(\"EMBR\")\n arrivals = dataset[\"counts\"][:, :, i].sum(-1)\n departures = dataset[\"counts\"][:, i, :].sum(-1)\n data = torch.stack([arrivals, departures], dim=-1)\n\n # This simple example uses no covariates, so we will construct a\n # zero-element tensor of the correct length as empty covariates.\n covariates = torch.zeros(len(data), 0)\n\n return data, covariates\n\n\n# We define a model by subclassing the ForecastingModel class and implementing\n# a single .model() method.\nclass Model(ForecastingModel):\n # The .model() method inputs two tensors: a fake tensor zero_data that is\n # the same size and dtype as the real data (but of course the generative\n # model shouldn't depend on the value of the data it generates!), and a\n # tensor of covariates. 
Our simple model depends on no covariates, so we\n # simply pass in an empty tensor (see the preprocess() function above).\n def model(self, zero_data, covariates):\n period = 24 * 7\n duration, dim = zero_data.shape[-2:]\n assert dim == 2 # Data is bivariate: (arrivals, departures).\n\n # Sample global parameters.\n noise_scale = pyro.sample(\n \"noise_scale\", dist.LogNormal(torch.full((dim,), -3.0), 1.0).to_event(1)\n )\n assert noise_scale.shape[-1:] == (dim,)\n trans_timescale = pyro.sample(\n \"trans_timescale\", dist.LogNormal(torch.zeros(dim), 1).to_event(1)\n )\n assert trans_timescale.shape[-1:] == (dim,)\n\n trans_loc = pyro.sample(\"trans_loc\", dist.Cauchy(0, 1 / period))\n trans_loc = trans_loc.unsqueeze(-1).expand(trans_loc.shape + (dim,))\n assert trans_loc.shape[-1:] == (dim,)\n trans_scale = pyro.sample(\n \"trans_scale\", dist.LogNormal(torch.zeros(dim), 0.1).to_event(1)\n )\n trans_corr = pyro.sample(\"trans_corr\", dist.LKJCholesky(dim, torch.ones(())))\n trans_scale_tril = trans_scale.unsqueeze(-1) * trans_corr\n assert trans_scale_tril.shape[-2:] == (dim, dim)\n\n obs_scale = pyro.sample(\n \"obs_scale\", dist.LogNormal(torch.zeros(dim), 0.1).to_event(1)\n )\n obs_corr = pyro.sample(\"obs_corr\", dist.LKJCholesky(dim, torch.ones(())))\n obs_scale_tril = obs_scale.unsqueeze(-1) * obs_corr\n assert obs_scale_tril.shape[-2:] == (dim, dim)\n\n # Note the initial seasonality should be sampled in a plate with the\n # same dim as the time_plate, dim=-1. That way we can repeat the dim\n # below using periodic_repeat().\n with pyro.plate(\"season_plate\", period, dim=-1):\n season_init = pyro.sample(\n \"season_init\", dist.Normal(torch.zeros(dim), 1).to_event(1)\n )\n assert season_init.shape[-2:] == (period, dim)\n\n # Sample independent noise at each time step.\n with self.time_plate:\n season_noise = pyro.sample(\n \"season_noise\", dist.Normal(0, noise_scale).to_event(1)\n )\n assert season_noise.shape[-2:] == (duration, dim)\n\n # Construct a prediction. This prediction has an exactly repeated\n # seasonal part plus slow seasonal drift. We use two deterministic,\n # linear functions to transform our diagonal Normal noise to nontrivial\n # samples from a Gaussian process.\n prediction = periodic_repeat(season_init, duration, dim=-2) + periodic_cumsum(\n season_noise, period, dim=-2\n )\n assert prediction.shape[-2:] == (duration, dim)\n\n # Construct a joint noise model. This model is a GaussianHMM, whose\n # .rsample() and .log_prob() methods are parallelized over time; this\n # this entire model is parallelized over time.\n init_dist = dist.Normal(torch.zeros(dim), 100).to_event(1)\n trans_mat = trans_timescale.neg().exp().diag_embed()\n trans_dist = dist.MultivariateNormal(trans_loc, scale_tril=trans_scale_tril)\n obs_mat = torch.eye(dim)\n obs_dist = dist.MultivariateNormal(torch.zeros(dim), scale_tril=obs_scale_tril)\n noise_model = dist.GaussianHMM(\n init_dist, trans_mat, trans_dist, obs_mat, obs_dist, duration=duration\n )\n assert noise_model.event_shape == (duration, dim)\n\n # The final statement registers our noise model and prediction.\n self.predict(noise_model, prediction)\n\n\ndef main(args):\n data, covariates = preprocess(args)\n\n # We will model positive count data by log1p-transforming it into real\n # valued data. But since we want to evaluate back in the count domain, we\n # will also define a transform to apply during evaluation, transforming\n # from real back to count-valued data. 
Truth is mapped by the log1p()\n # inverse expm1(), but the prediction will be sampled from a Poisson\n # distribution.\n data = data.log1p()\n\n def transform(pred, truth):\n pred = torch.poisson(pred.clamp(min=1e-4).expm1())\n truth = truth.expm1()\n return pred, truth\n\n # The backtest() function automatically trains and evaluates our model on\n # different windows of data.\n forecaster_options = {\n \"num_steps\": args.num_steps,\n \"learning_rate\": args.learning_rate,\n \"log_every\": args.log_every,\n \"dct_gradients\": args.dct,\n }\n metrics = backtest(\n data,\n covariates,\n Model,\n train_window=args.train_window,\n test_window=args.test_window,\n stride=args.stride,\n num_samples=args.num_samples,\n forecaster_options=forecaster_options,\n )\n\n for name in [\"mae\", \"rmse\", \"crps\"]:\n values = [m[name] for m in metrics]\n mean = np.mean(values)\n std = np.std(values)\n print(\"{} = {:0.3g} +- {:0.3g}\".format(name, mean, std))\n return metrics\n\n\nif __name__ == \"__main__\":\n assert pyro.__version__.startswith(\"1.8.0\")\n parser = argparse.ArgumentParser(description=\"Bart Ridership Forecasting Example\")\n parser.add_argument(\"--train-window\", default=2160, type=int)\n parser.add_argument(\"--test-window\", default=336, type=int)\n parser.add_argument(\"--stride\", default=168, type=int)\n parser.add_argument(\"-n\", \"--num-steps\", default=501, type=int)\n parser.add_argument(\"-lr\", \"--learning-rate\", default=0.01, type=float)\n parser.add_argument(\"--dct\", action=\"store_true\")\n parser.add_argument(\"--num-samples\", default=100, type=int)\n parser.add_argument(\"--log-every\", default=50, type=int)\n parser.add_argument(\"--seed\", default=1234567890, type=int)\n args = parser.parse_args()\n main(args)\n", "path": "examples/contrib/forecast/bart.py"}]}
| 3,977 | 204 |
gh_patches_debug_18833
|
rasdani/github-patches
|
git_diff
|
deepchecks__deepchecks-1548
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG][HOTFIX] Add deprecation warning for condition parameter names
In 0.7 the drift checks were updated to add Cramer's V.
Thus the previous condition parameter names ("max_psi...", "max_earthmovers...") were renamed.
Previous parameters should still be re-added with deprecation warnings, for all relevant checks and conditions.
--- END ISSUE ---
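For orientation before the code: the fix the issue asks for is a common backwards-compatibility pattern, namely keep accepting the old keyword, emit a `DeprecationWarning`, and map it onto the renamed parameter. The sketch below only illustrates that pattern. The helper name `resolve_drift_threshold`, the new parameter name `max_allowed_categorical_score`, and the default of 0.2 are placeholders invented for the example; only the deprecated spellings `max_allowed_psi_score` and `max_allowed_earth_movers_score` are taken from this record.

```python
import warnings
from typing import Optional


def resolve_drift_threshold(max_allowed_categorical_score: float = 0.2,
                            max_allowed_psi_score: Optional[float] = None) -> float:
    """Return the effective drift threshold, honouring the pre-0.7 keyword.

    ``max_allowed_psi_score`` is the deprecated name; when passed it takes
    precedence and a DeprecationWarning is emitted so callers can migrate.
    """
    if max_allowed_psi_score is not None:
        warnings.warn(
            'max_allowed_psi_score is deprecated. '
            'Please use the renamed condition parameter instead.',
            DeprecationWarning,
        )
        return max_allowed_psi_score
    return max_allowed_categorical_score


# An old call site keeps working but now warns, subject to the active filters.
threshold = resolve_drift_threshold(max_allowed_psi_score=0.15)
assert threshold == 0.15
```

A complete patch would repeat this treatment for every renamed condition parameter and pair it with warning filters so the messages are not silently ignored by default.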
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `deepchecks/vision/deprecation_warnings.py`
Content:
```
1 # ----------------------------------------------------------------------------
2 # Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)
3 #
4 # This file is part of Deepchecks.
5 # Deepchecks is distributed under the terms of the GNU Affero General
6 # Public License (version 3 or later).
7 # You should have received a copy of the GNU Affero General Public License
8 # along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.
9 # ----------------------------------------------------------------------------
10 #
11 """This file changes default 'ignore' action of DeprecationWarnings for specific deprecation messages."""
12 import warnings
13
14 # Added in version 0.6.2, deprecates max_num_categories in all drift checks
15 warnings.filterwarnings(
16 action='always',
17 message=r'.*max_num_categories.*',
18 category=DeprecationWarning,
19 module=r'deepchecks.*'
20 )
21
22 # Added in 0.7 Warning filters for deprecated functions in deepchecks.tabular.checks
23 # Should be removed in 0.8
24 warnings.filterwarnings(
25 action='once',
26 message=r'deepchecks.vision.checks.performance is deprecated.*',
27 category=DeprecationWarning,
28 module=r'deepchecks.*'
29 )
30
31 warnings.filterwarnings(
32 action='once',
33 message=r'deepchecks.vision.checks.methodology is deprecated.*',
34 category=DeprecationWarning,
35 module=r'deepchecks.tabular.checks.methodology.*'
36 )
37
38 warnings.filterwarnings(
39 action='once',
40 message=r'deepchecks.vision.checks.distribution is deprecated.*',
41 category=DeprecationWarning,
42 module=r'deepchecks.*'
43 )
44
45 warnings.filterwarnings(
46 action='always',
47 message=r'the integrity_validation suite is deprecated.*',
48 category=DeprecationWarning,
49 module=r'deepchecks.*'
50 )
51
```
Path: `deepchecks/tabular/deprecation_warnings.py`
Content:
```
1 # ----------------------------------------------------------------------------
2 # Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)
3 #
4 # This file is part of Deepchecks.
5 # Deepchecks is distributed under the terms of the GNU Affero General
6 # Public License (version 3 or later).
7 # You should have received a copy of the GNU Affero General Public License
8 # along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.
9 # ----------------------------------------------------------------------------
10 #
11 """This file changes default 'ignore' action of DeprecationWarnings for specific deprecation messages."""
12 import warnings
13
14 # Added in version 0.6.2, deprecates max_num_categories in all drift checks
15 warnings.filterwarnings(
16 action='always',
17 message=r'.*max_num_categories.*',
18 category=DeprecationWarning,
19 module=r'deepchecks.*'
20 )
21
22 # Added in 0.7 Warning filters for deprecated functions in deepchecks.tabular.checks
23 # Should be removed in 0.8
24 warnings.filterwarnings(
25 action='once',
26 message=r'deepchecks\.tabular\.checks\.performance is deprecated.*',
27 category=DeprecationWarning,
28 module=r'deepchecks.*'
29 )
30
31 warnings.filterwarnings(
32 action='once',
33 message=r'deepchecks.tabular.checks.overview is deprecated.*',
34 category=DeprecationWarning,
35 module=r'deepchecks.*'
36 )
37
38 warnings.filterwarnings(
39 action='once',
40 message=r'deepchecks.tabular.checks.methodology is deprecated.*',
41 category=DeprecationWarning,
42 module=r'deepchecks.tabular.checks.methodology.*'
43 )
44
45 warnings.filterwarnings(
46 action='once',
47 message=r'deepchecks.tabular.checks.distribution is deprecated.*',
48 category=DeprecationWarning,
49 module=r'deepchecks.*'
50 )
51
52 warnings.filterwarnings(
53 action='always',
54 message=r'the single_dataset_integrity suite is deprecated.*',
55 category=DeprecationWarning,
56 module=r'deepchecks.*'
57 )
58
59 warnings.filterwarnings(
60 action='always',
61 message=r'the train_test_leakage suite is deprecated.*',
62 category=DeprecationWarning,
63 module=r'deepchecks.*'
64 )
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/deepchecks/tabular/deprecation_warnings.py b/deepchecks/tabular/deprecation_warnings.py
--- a/deepchecks/tabular/deprecation_warnings.py
+++ b/deepchecks/tabular/deprecation_warnings.py
@@ -62,3 +62,20 @@
category=DeprecationWarning,
module=r'deepchecks.*'
)
+
+# Added in 0.7 Warning filters for drift conditions
+# Should be removed in 0.8
+
+warnings.filterwarnings(
+ action='once',
+ message=r'.*max_allowed_psi_score is deprecated.*',
+ category=DeprecationWarning,
+ module=r'deepchecks.*'
+)
+
+warnings.filterwarnings(
+ action='once',
+ message=r'.*max_allowed_earth_movers_score is deprecated.*',
+ category=DeprecationWarning,
+ module=r'deepchecks.*'
+)
diff --git a/deepchecks/vision/deprecation_warnings.py b/deepchecks/vision/deprecation_warnings.py
--- a/deepchecks/vision/deprecation_warnings.py
+++ b/deepchecks/vision/deprecation_warnings.py
@@ -48,3 +48,20 @@
category=DeprecationWarning,
module=r'deepchecks.*'
)
+
+# Added in 0.7 Warning filters for drift conditions
+# Should be removed in 0.8
+
+warnings.filterwarnings(
+ action='once',
+ message=r'.*max_allowed_psi_score is deprecated.*',
+ category=DeprecationWarning,
+ module=r'deepchecks.*'
+)
+
+warnings.filterwarnings(
+ action='once',
+ message=r'.*max_allowed_earth_movers_score is deprecated.*',
+ category=DeprecationWarning,
+ module=r'deepchecks.*'
+)
|
{"golden_diff": "diff --git a/deepchecks/tabular/deprecation_warnings.py b/deepchecks/tabular/deprecation_warnings.py\n--- a/deepchecks/tabular/deprecation_warnings.py\n+++ b/deepchecks/tabular/deprecation_warnings.py\n@@ -62,3 +62,20 @@\n category=DeprecationWarning,\n module=r'deepchecks.*'\n )\n+\n+# Added in 0.7 Warning filters for drift conditions\n+# Should be removed in 0.8\n+\n+warnings.filterwarnings(\n+ action='once',\n+ message=r'.*max_allowed_psi_score is deprecated.*',\n+ category=DeprecationWarning,\n+ module=r'deepchecks.*'\n+)\n+\n+warnings.filterwarnings(\n+ action='once',\n+ message=r'.*max_allowed_earth_movers_score is deprecated.*',\n+ category=DeprecationWarning,\n+ module=r'deepchecks.*'\n+)\ndiff --git a/deepchecks/vision/deprecation_warnings.py b/deepchecks/vision/deprecation_warnings.py\n--- a/deepchecks/vision/deprecation_warnings.py\n+++ b/deepchecks/vision/deprecation_warnings.py\n@@ -48,3 +48,20 @@\n category=DeprecationWarning,\n module=r'deepchecks.*'\n )\n+\n+# Added in 0.7 Warning filters for drift conditions\n+# Should be removed in 0.8\n+\n+warnings.filterwarnings(\n+ action='once',\n+ message=r'.*max_allowed_psi_score is deprecated.*',\n+ category=DeprecationWarning,\n+ module=r'deepchecks.*'\n+)\n+\n+warnings.filterwarnings(\n+ action='once',\n+ message=r'.*max_allowed_earth_movers_score is deprecated.*',\n+ category=DeprecationWarning,\n+ module=r'deepchecks.*'\n+)\n", "issue": "[BUG][HOTFIX] Add deprecation warning for condition parameter names\nIn 0.7 the drift checks were updated to add Cramer's V.\r\nThus the previous parameter condition names (\"max_psi... \"max_earthmovers...\") were renamed.\r\nPrevious parameters should still be re-added with deprecation warnings, for all relevant checks and conditions.\r\n\n", "before_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. 
If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"This file changes default 'ignore' action of DeprecationWarnings for specific deprecation messages.\"\"\"\nimport warnings\n\n# Added in version 0.6.2, deprecates max_num_categories in all drift checks\nwarnings.filterwarnings(\n action='always',\n message=r'.*max_num_categories.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\n# Added in 0.7 Warning filters for deprecated functions in deepchecks.tabular.checks\n# Should be removed in 0.8\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.vision.checks.performance is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.vision.checks.methodology is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.tabular.checks.methodology.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.vision.checks.distribution is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='always',\n message=r'the integrity_validation suite is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n", "path": "deepchecks/vision/deprecation_warnings.py"}, {"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"This file changes default 'ignore' action of DeprecationWarnings for specific deprecation messages.\"\"\"\nimport warnings\n\n# Added in version 0.6.2, deprecates max_num_categories in all drift checks\nwarnings.filterwarnings(\n action='always',\n message=r'.*max_num_categories.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\n# Added in 0.7 Warning filters for deprecated functions in deepchecks.tabular.checks\n# Should be removed in 0.8\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks\\.tabular\\.checks\\.performance is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.tabular.checks.overview is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.tabular.checks.methodology is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.tabular.checks.methodology.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.tabular.checks.distribution is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='always',\n message=r'the single_dataset_integrity suite is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='always',\n message=r'the train_test_leakage suite is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n", "path": "deepchecks/tabular/deprecation_warnings.py"}], "after_files": [{"content": "# 
----------------------------------------------------------------------------\n# Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"This file changes default 'ignore' action of DeprecationWarnings for specific deprecation messages.\"\"\"\nimport warnings\n\n# Added in version 0.6.2, deprecates max_num_categories in all drift checks\nwarnings.filterwarnings(\n action='always',\n message=r'.*max_num_categories.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\n# Added in 0.7 Warning filters for deprecated functions in deepchecks.tabular.checks\n# Should be removed in 0.8\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.vision.checks.performance is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.vision.checks.methodology is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.tabular.checks.methodology.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.vision.checks.distribution is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='always',\n message=r'the integrity_validation suite is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\n# Added in 0.7 Warning filters for drift conditions\n# Should be removed in 0.8\n\nwarnings.filterwarnings(\n action='once',\n message=r'.*max_allowed_psi_score is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'.*max_allowed_earth_movers_score is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n", "path": "deepchecks/vision/deprecation_warnings.py"}, {"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. 
If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"This file changes default 'ignore' action of DeprecationWarnings for specific deprecation messages.\"\"\"\nimport warnings\n\n# Added in version 0.6.2, deprecates max_num_categories in all drift checks\nwarnings.filterwarnings(\n action='always',\n message=r'.*max_num_categories.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\n# Added in 0.7 Warning filters for deprecated functions in deepchecks.tabular.checks\n# Should be removed in 0.8\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks\\.tabular\\.checks\\.performance is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.tabular.checks.overview is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.tabular.checks.methodology is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.tabular.checks.methodology.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'deepchecks.tabular.checks.distribution is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='always',\n message=r'the single_dataset_integrity suite is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='always',\n message=r'the train_test_leakage suite is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\n# Added in 0.7 Warning filters for drift conditions\n# Should be removed in 0.8\n\nwarnings.filterwarnings(\n action='once',\n message=r'.*max_allowed_psi_score is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n\nwarnings.filterwarnings(\n action='once',\n message=r'.*max_allowed_earth_movers_score is deprecated.*',\n category=DeprecationWarning,\n module=r'deepchecks.*'\n)\n", "path": "deepchecks/tabular/deprecation_warnings.py"}]}
| 1,446 | 402 |
gh_patches_debug_6600
|
rasdani/github-patches
|
git_diff
|
CTFd__CTFd-596
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Certain templates can be force loaded
There are certain templates which are accessible because of the static_html handler. This isn't a vulnerability because it appears like no sensitive information is loaded but it should be fixed.
It's possible that the template handler itself should just be entirely removed.
--- END ISSUE ---
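One plausible hardening, sketched here for context, is to drop the filesystem fallback entirely and serve only pages stored in the database, answering 404 for everything else. The route below assumes the `views` blueprint and the `utils` and `markdown` helpers defined in `CTFd/views.py` (reproduced underneath); it is a sketch of the approach rather than a definitive statement of the patch the maintainers shipped.

```python
@views.route("/", defaults={'template': 'index'})
@views.route("/<path:template>")
def static_html(template):
    page = utils.get_page(template)
    if page is None:
        # No database-backed page exists for this route, so refuse to fall
        # back to raw theme templates; arbitrary .html files can no longer
        # be force loaded through this handler.
        abort(404)
    if page.auth_required and utils.authed() is False:
        return redirect(url_for('auth.login', next=request.path))
    return render_template('page.html', content=markdown(page.html))
```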
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CTFd/views.py`
Content:
```
1 import os
2 import re
3
4 from flask import current_app as app, render_template, request, redirect, abort, jsonify, url_for, session, Blueprint, Response, send_file
5 from flask.helpers import safe_join
6 from jinja2.exceptions import TemplateNotFound
7 from passlib.hash import bcrypt_sha256
8
9 from CTFd.models import db, Teams, Solves, Awards, Files, Pages
10 from CTFd.utils import cache, markdown
11 from CTFd import utils
12
13 views = Blueprint('views', __name__)
14
15
16 @views.route('/setup', methods=['GET', 'POST'])
17 def setup():
18 # with app.app_context():
19 # admin = Teams.query.filter_by(admin=True).first()
20
21 if not utils.is_setup():
22 if not session.get('nonce'):
23 session['nonce'] = utils.sha512(os.urandom(10))
24 if request.method == 'POST':
25 ctf_name = request.form['ctf_name']
26 ctf_name = utils.set_config('ctf_name', ctf_name)
27
28 # CSS
29 css = utils.set_config('start', '')
30
31 # Admin user
32 name = request.form['name']
33 email = request.form['email']
34 password = request.form['password']
35 admin = Teams(name, email, password)
36 admin.admin = True
37 admin.banned = True
38
39 # Index page
40
41 index = """<div class="row">
42 <div class="col-md-6 offset-md-3">
43 <img class="w-100 mx-auto d-block" style="max-width: 500px;padding: 50px;padding-top: 14vh;" src="themes/core/static/img/logo.png" />
44 <h3 class="text-center">
45 <p>A cool CTF platform from <a href="https://ctfd.io">ctfd.io</a></p>
46 <p>Follow us on social media:</p>
47 <a href="https://twitter.com/ctfdio"><i class="fab fa-twitter fa-2x" aria-hidden="true"></i></a>
48 <a href="https://facebook.com/ctfdio"><i class="fab fa-facebook fa-2x" aria-hidden="true"></i></a>
49 <a href="https://github.com/ctfd"><i class="fab fa-github fa-2x" aria-hidden="true"></i></a>
50 </h3>
51 <br>
52 <h4 class="text-center">
53 <a href="admin">Click here</a> to login and setup your CTF
54 </h4>
55 </div>
56 </div>""".format(request.script_root)
57
58 page = Pages(title=None, route='index', html=index, draft=False)
59
60 # max attempts per challenge
61 max_tries = utils.set_config('max_tries', 0)
62
63 # Start time
64 start = utils.set_config('start', None)
65 end = utils.set_config('end', None)
66 freeze = utils.set_config('freeze', None)
67
68 # Challenges cannot be viewed by unregistered users
69 view_challenges_unregistered = utils.set_config('view_challenges_unregistered', None)
70
71 # Allow/Disallow registration
72 prevent_registration = utils.set_config('prevent_registration', None)
73
74 # Verify emails
75 verify_emails = utils.set_config('verify_emails', None)
76
77 mail_server = utils.set_config('mail_server', None)
78 mail_port = utils.set_config('mail_port', None)
79 mail_tls = utils.set_config('mail_tls', None)
80 mail_ssl = utils.set_config('mail_ssl', None)
81 mail_username = utils.set_config('mail_username', None)
82 mail_password = utils.set_config('mail_password', None)
83 mail_useauth = utils.set_config('mail_useauth', None)
84
85 setup = utils.set_config('setup', True)
86
87 db.session.add(page)
88 db.session.add(admin)
89 db.session.commit()
90
91 session['username'] = admin.name
92 session['id'] = admin.id
93 session['admin'] = admin.admin
94 session['nonce'] = utils.sha512(os.urandom(10))
95
96 db.session.close()
97 app.setup = False
98 with app.app_context():
99 cache.clear()
100
101 return redirect(url_for('views.static_html'))
102 return render_template('setup.html', nonce=session.get('nonce'))
103 return redirect(url_for('views.static_html'))
104
105
106 # Custom CSS handler
107 @views.route('/static/user.css')
108 def custom_css():
109 return Response(utils.get_config('css'), mimetype='text/css')
110
111
112 # Static HTML files
113 @views.route("/", defaults={'template': 'index'})
114 @views.route("/<path:template>")
115 def static_html(template):
116 page = utils.get_page(template)
117 if page is None:
118 try:
119 return render_template('%s.html' % template)
120 except TemplateNotFound:
121 abort(404)
122 else:
123 if page.auth_required and utils.authed() is False:
124 return redirect(url_for('auth.login', next=request.path))
125
126 return render_template('page.html', content=markdown(page.html))
127
128
129 @views.route('/teams', defaults={'page': '1'})
130 @views.route('/teams/<int:page>')
131 def teams(page):
132 if utils.get_config('workshop_mode'):
133 abort(404)
134 page = abs(int(page))
135 results_per_page = 50
136 page_start = results_per_page * (page - 1)
137 page_end = results_per_page * (page - 1) + results_per_page
138
139 if utils.get_config('verify_emails'):
140 count = Teams.query.filter_by(verified=True, banned=False).count()
141 teams = Teams.query.filter_by(verified=True, banned=False).slice(page_start, page_end).all()
142 else:
143 count = Teams.query.filter_by(banned=False).count()
144 teams = Teams.query.filter_by(banned=False).slice(page_start, page_end).all()
145 pages = int(count / results_per_page) + (count % results_per_page > 0)
146 return render_template('teams.html', teams=teams, team_pages=pages, curr_page=page)
147
148
149 @views.route('/team', methods=['GET'])
150 def private_team():
151 if utils.authed():
152 teamid = session['id']
153
154 freeze = utils.get_config('freeze')
155 user = Teams.query.filter_by(id=teamid).first_or_404()
156 solves = Solves.query.filter_by(teamid=teamid)
157 awards = Awards.query.filter_by(teamid=teamid)
158
159 place = user.place()
160 score = user.score()
161
162 if freeze:
163 freeze = utils.unix_time_to_utc(freeze)
164 if teamid != session.get('id'):
165 solves = solves.filter(Solves.date < freeze)
166 awards = awards.filter(Awards.date < freeze)
167
168 solves = solves.all()
169 awards = awards.all()
170
171 return render_template('team.html', solves=solves, awards=awards, team=user, score=score, place=place, score_frozen=utils.is_scoreboard_frozen())
172 else:
173 return redirect(url_for('auth.login'))
174
175
176 @views.route('/team/<int:teamid>', methods=['GET', 'POST'])
177 def team(teamid):
178 if utils.get_config('workshop_mode'):
179 abort(404)
180
181 if utils.get_config('view_scoreboard_if_utils.authed') and not utils.authed():
182 return redirect(url_for('auth.login', next=request.path))
183 errors = []
184 freeze = utils.get_config('freeze')
185 user = Teams.query.filter_by(id=teamid).first_or_404()
186 solves = Solves.query.filter_by(teamid=teamid)
187 awards = Awards.query.filter_by(teamid=teamid)
188
189 place = user.place()
190 score = user.score()
191
192 if freeze:
193 freeze = utils.unix_time_to_utc(freeze)
194 if teamid != session.get('id'):
195 solves = solves.filter(Solves.date < freeze)
196 awards = awards.filter(Awards.date < freeze)
197
198 solves = solves.all()
199 awards = awards.all()
200
201 db.session.close()
202
203 if utils.hide_scores() and teamid != session.get('id'):
204 errors.append('Scores are currently hidden')
205
206 if errors:
207 return render_template('team.html', team=user, errors=errors)
208
209 if request.method == 'GET':
210 return render_template('team.html', solves=solves, awards=awards, team=user, score=score, place=place, score_frozen=utils.is_scoreboard_frozen())
211 elif request.method == 'POST':
212 json = {'solves': []}
213 for x in solves:
214 json['solves'].append({'id': x.id, 'chal': x.chalid, 'team': x.teamid})
215 return jsonify(json)
216
217
218 @views.route('/profile', methods=['POST', 'GET'])
219 def profile():
220 if utils.authed():
221 if request.method == "POST":
222 errors = []
223
224 name = request.form.get('name').strip()
225 email = request.form.get('email').strip()
226 website = request.form.get('website').strip()
227 affiliation = request.form.get('affiliation').strip()
228 country = request.form.get('country').strip()
229
230 user = Teams.query.filter_by(id=session['id']).first()
231
232 if not utils.get_config('prevent_name_change'):
233 names = Teams.query.filter_by(name=name).first()
234 name_len = len(request.form['name']) == 0
235
236 emails = Teams.query.filter_by(email=email).first()
237 valid_email = utils.check_email_format(email)
238
239 if utils.check_email_format(name) is True:
240 errors.append('Team name cannot be an email address')
241
242 if ('password' in request.form.keys() and not len(request.form['password']) == 0) and \
243 (not bcrypt_sha256.verify(request.form.get('confirm').strip(), user.password)):
244 errors.append("Your old password doesn't match what we have.")
245 if not valid_email:
246 errors.append("That email doesn't look right")
247 if not utils.get_config('prevent_name_change') and names and name != session['username']:
248 errors.append('That team name is already taken')
249 if emails and emails.id != session['id']:
250 errors.append('That email has already been used')
251 if not utils.get_config('prevent_name_change') and name_len:
252 errors.append('Pick a longer team name')
253 if website.strip() and not utils.validate_url(website):
254 errors.append("That doesn't look like a valid URL")
255
256 if len(errors) > 0:
257 return render_template('profile.html', name=name, email=email, website=website,
258 affiliation=affiliation, country=country, errors=errors)
259 else:
260 team = Teams.query.filter_by(id=session['id']).first()
261 if team.name != name:
262 if not utils.get_config('prevent_name_change'):
263 team.name = name
264 session['username'] = team.name
265 if team.email != email.lower():
266 team.email = email.lower()
267 if utils.get_config('verify_emails'):
268 team.verified = False
269
270 if 'password' in request.form.keys() and not len(request.form['password']) == 0:
271 team.password = bcrypt_sha256.encrypt(request.form.get('password'))
272 team.website = website
273 team.affiliation = affiliation
274 team.country = country
275 db.session.commit()
276 db.session.close()
277 return redirect(url_for('views.profile'))
278 else:
279 user = Teams.query.filter_by(id=session['id']).first()
280 name = user.name
281 email = user.email
282 website = user.website
283 affiliation = user.affiliation
284 country = user.country
285 prevent_name_change = utils.get_config('prevent_name_change')
286 confirm_email = utils.get_config('verify_emails') and not user.verified
287 return render_template('profile.html', name=name, email=email, website=website, affiliation=affiliation,
288 country=country, prevent_name_change=prevent_name_change, confirm_email=confirm_email)
289 else:
290 return redirect(url_for('auth.login'))
291
292
293 @views.route('/files', defaults={'path': ''})
294 @views.route('/files/<path:path>')
295 def file_handler(path):
296 f = Files.query.filter_by(location=path).first_or_404()
297 if f.chal:
298 if not utils.is_admin():
299 if not utils.ctftime():
300 if utils.view_after_ctf() and utils.ctf_started():
301 pass
302 else:
303 abort(403)
304 upload_folder = os.path.join(app.root_path, app.config['UPLOAD_FOLDER'])
305 return send_file(safe_join(upload_folder, f.location))
306
307
308 @views.route('/themes/<theme>/static/<path:path>')
309 def themes_handler(theme, path):
310 filename = safe_join(app.root_path, 'themes', theme, 'static', path)
311 if os.path.isfile(filename):
312 return send_file(filename)
313 else:
314 abort(404)
315
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/CTFd/views.py b/CTFd/views.py
--- a/CTFd/views.py
+++ b/CTFd/views.py
@@ -115,10 +115,7 @@
def static_html(template):
page = utils.get_page(template)
if page is None:
- try:
- return render_template('%s.html' % template)
- except TemplateNotFound:
- abort(404)
+ abort(404)
else:
if page.auth_required and utils.authed() is False:
return redirect(url_for('auth.login', next=request.path))
|
{"golden_diff": "diff --git a/CTFd/views.py b/CTFd/views.py\n--- a/CTFd/views.py\n+++ b/CTFd/views.py\n@@ -115,10 +115,7 @@\n def static_html(template):\n page = utils.get_page(template)\n if page is None:\n- try:\n- return render_template('%s.html' % template)\n- except TemplateNotFound:\n- abort(404)\n+ abort(404)\n else:\n if page.auth_required and utils.authed() is False:\n return redirect(url_for('auth.login', next=request.path))\n", "issue": "Certain templates can be force loaded\nThere are certain templates which are accessible because of the static_html handler. This isn't a vulnerability because it appears like no sensitive information is loaded but it should be fixed. \r\n\r\nIt's possible that the template handler itself should just be entirely removed. \n", "before_files": [{"content": "import os\nimport re\n\nfrom flask import current_app as app, render_template, request, redirect, abort, jsonify, url_for, session, Blueprint, Response, send_file\nfrom flask.helpers import safe_join\nfrom jinja2.exceptions import TemplateNotFound\nfrom passlib.hash import bcrypt_sha256\n\nfrom CTFd.models import db, Teams, Solves, Awards, Files, Pages\nfrom CTFd.utils import cache, markdown\nfrom CTFd import utils\n\nviews = Blueprint('views', __name__)\n\n\[email protected]('/setup', methods=['GET', 'POST'])\ndef setup():\n # with app.app_context():\n # admin = Teams.query.filter_by(admin=True).first()\n\n if not utils.is_setup():\n if not session.get('nonce'):\n session['nonce'] = utils.sha512(os.urandom(10))\n if request.method == 'POST':\n ctf_name = request.form['ctf_name']\n ctf_name = utils.set_config('ctf_name', ctf_name)\n\n # CSS\n css = utils.set_config('start', '')\n\n # Admin user\n name = request.form['name']\n email = request.form['email']\n password = request.form['password']\n admin = Teams(name, email, password)\n admin.admin = True\n admin.banned = True\n\n # Index page\n\n index = \"\"\"<div class=\"row\">\n <div class=\"col-md-6 offset-md-3\">\n <img class=\"w-100 mx-auto d-block\" style=\"max-width: 500px;padding: 50px;padding-top: 14vh;\" src=\"themes/core/static/img/logo.png\" />\n <h3 class=\"text-center\">\n <p>A cool CTF platform from <a href=\"https://ctfd.io\">ctfd.io</a></p>\n <p>Follow us on social media:</p>\n <a href=\"https://twitter.com/ctfdio\"><i class=\"fab fa-twitter fa-2x\" aria-hidden=\"true\"></i></a> \n <a href=\"https://facebook.com/ctfdio\"><i class=\"fab fa-facebook fa-2x\" aria-hidden=\"true\"></i></a> \n <a href=\"https://github.com/ctfd\"><i class=\"fab fa-github fa-2x\" aria-hidden=\"true\"></i></a>\n </h3>\n <br>\n <h4 class=\"text-center\">\n <a href=\"admin\">Click here</a> to login and setup your CTF\n </h4>\n </div>\n</div>\"\"\".format(request.script_root)\n\n page = Pages(title=None, route='index', html=index, draft=False)\n\n # max attempts per challenge\n max_tries = utils.set_config('max_tries', 0)\n\n # Start time\n start = utils.set_config('start', None)\n end = utils.set_config('end', None)\n freeze = utils.set_config('freeze', None)\n\n # Challenges cannot be viewed by unregistered users\n view_challenges_unregistered = utils.set_config('view_challenges_unregistered', None)\n\n # Allow/Disallow registration\n prevent_registration = utils.set_config('prevent_registration', None)\n\n # Verify emails\n verify_emails = utils.set_config('verify_emails', None)\n\n mail_server = utils.set_config('mail_server', None)\n mail_port = utils.set_config('mail_port', None)\n mail_tls = utils.set_config('mail_tls', None)\n mail_ssl = 
utils.set_config('mail_ssl', None)\n mail_username = utils.set_config('mail_username', None)\n mail_password = utils.set_config('mail_password', None)\n mail_useauth = utils.set_config('mail_useauth', None)\n\n setup = utils.set_config('setup', True)\n\n db.session.add(page)\n db.session.add(admin)\n db.session.commit()\n\n session['username'] = admin.name\n session['id'] = admin.id\n session['admin'] = admin.admin\n session['nonce'] = utils.sha512(os.urandom(10))\n\n db.session.close()\n app.setup = False\n with app.app_context():\n cache.clear()\n\n return redirect(url_for('views.static_html'))\n return render_template('setup.html', nonce=session.get('nonce'))\n return redirect(url_for('views.static_html'))\n\n\n# Custom CSS handler\[email protected]('/static/user.css')\ndef custom_css():\n return Response(utils.get_config('css'), mimetype='text/css')\n\n\n# Static HTML files\[email protected](\"/\", defaults={'template': 'index'})\[email protected](\"/<path:template>\")\ndef static_html(template):\n page = utils.get_page(template)\n if page is None:\n try:\n return render_template('%s.html' % template)\n except TemplateNotFound:\n abort(404)\n else:\n if page.auth_required and utils.authed() is False:\n return redirect(url_for('auth.login', next=request.path))\n\n return render_template('page.html', content=markdown(page.html))\n\n\[email protected]('/teams', defaults={'page': '1'})\[email protected]('/teams/<int:page>')\ndef teams(page):\n if utils.get_config('workshop_mode'):\n abort(404)\n page = abs(int(page))\n results_per_page = 50\n page_start = results_per_page * (page - 1)\n page_end = results_per_page * (page - 1) + results_per_page\n\n if utils.get_config('verify_emails'):\n count = Teams.query.filter_by(verified=True, banned=False).count()\n teams = Teams.query.filter_by(verified=True, banned=False).slice(page_start, page_end).all()\n else:\n count = Teams.query.filter_by(banned=False).count()\n teams = Teams.query.filter_by(banned=False).slice(page_start, page_end).all()\n pages = int(count / results_per_page) + (count % results_per_page > 0)\n return render_template('teams.html', teams=teams, team_pages=pages, curr_page=page)\n\n\[email protected]('/team', methods=['GET'])\ndef private_team():\n if utils.authed():\n teamid = session['id']\n\n freeze = utils.get_config('freeze')\n user = Teams.query.filter_by(id=teamid).first_or_404()\n solves = Solves.query.filter_by(teamid=teamid)\n awards = Awards.query.filter_by(teamid=teamid)\n\n place = user.place()\n score = user.score()\n\n if freeze:\n freeze = utils.unix_time_to_utc(freeze)\n if teamid != session.get('id'):\n solves = solves.filter(Solves.date < freeze)\n awards = awards.filter(Awards.date < freeze)\n\n solves = solves.all()\n awards = awards.all()\n\n return render_template('team.html', solves=solves, awards=awards, team=user, score=score, place=place, score_frozen=utils.is_scoreboard_frozen())\n else:\n return redirect(url_for('auth.login'))\n\n\[email protected]('/team/<int:teamid>', methods=['GET', 'POST'])\ndef team(teamid):\n if utils.get_config('workshop_mode'):\n abort(404)\n\n if utils.get_config('view_scoreboard_if_utils.authed') and not utils.authed():\n return redirect(url_for('auth.login', next=request.path))\n errors = []\n freeze = utils.get_config('freeze')\n user = Teams.query.filter_by(id=teamid).first_or_404()\n solves = Solves.query.filter_by(teamid=teamid)\n awards = Awards.query.filter_by(teamid=teamid)\n\n place = user.place()\n score = user.score()\n\n if freeze:\n freeze = 
utils.unix_time_to_utc(freeze)\n if teamid != session.get('id'):\n solves = solves.filter(Solves.date < freeze)\n awards = awards.filter(Awards.date < freeze)\n\n solves = solves.all()\n awards = awards.all()\n\n db.session.close()\n\n if utils.hide_scores() and teamid != session.get('id'):\n errors.append('Scores are currently hidden')\n\n if errors:\n return render_template('team.html', team=user, errors=errors)\n\n if request.method == 'GET':\n return render_template('team.html', solves=solves, awards=awards, team=user, score=score, place=place, score_frozen=utils.is_scoreboard_frozen())\n elif request.method == 'POST':\n json = {'solves': []}\n for x in solves:\n json['solves'].append({'id': x.id, 'chal': x.chalid, 'team': x.teamid})\n return jsonify(json)\n\n\[email protected]('/profile', methods=['POST', 'GET'])\ndef profile():\n if utils.authed():\n if request.method == \"POST\":\n errors = []\n\n name = request.form.get('name').strip()\n email = request.form.get('email').strip()\n website = request.form.get('website').strip()\n affiliation = request.form.get('affiliation').strip()\n country = request.form.get('country').strip()\n\n user = Teams.query.filter_by(id=session['id']).first()\n\n if not utils.get_config('prevent_name_change'):\n names = Teams.query.filter_by(name=name).first()\n name_len = len(request.form['name']) == 0\n\n emails = Teams.query.filter_by(email=email).first()\n valid_email = utils.check_email_format(email)\n\n if utils.check_email_format(name) is True:\n errors.append('Team name cannot be an email address')\n\n if ('password' in request.form.keys() and not len(request.form['password']) == 0) and \\\n (not bcrypt_sha256.verify(request.form.get('confirm').strip(), user.password)):\n errors.append(\"Your old password doesn't match what we have.\")\n if not valid_email:\n errors.append(\"That email doesn't look right\")\n if not utils.get_config('prevent_name_change') and names and name != session['username']:\n errors.append('That team name is already taken')\n if emails and emails.id != session['id']:\n errors.append('That email has already been used')\n if not utils.get_config('prevent_name_change') and name_len:\n errors.append('Pick a longer team name')\n if website.strip() and not utils.validate_url(website):\n errors.append(\"That doesn't look like a valid URL\")\n\n if len(errors) > 0:\n return render_template('profile.html', name=name, email=email, website=website,\n affiliation=affiliation, country=country, errors=errors)\n else:\n team = Teams.query.filter_by(id=session['id']).first()\n if team.name != name:\n if not utils.get_config('prevent_name_change'):\n team.name = name\n session['username'] = team.name\n if team.email != email.lower():\n team.email = email.lower()\n if utils.get_config('verify_emails'):\n team.verified = False\n\n if 'password' in request.form.keys() and not len(request.form['password']) == 0:\n team.password = bcrypt_sha256.encrypt(request.form.get('password'))\n team.website = website\n team.affiliation = affiliation\n team.country = country\n db.session.commit()\n db.session.close()\n return redirect(url_for('views.profile'))\n else:\n user = Teams.query.filter_by(id=session['id']).first()\n name = user.name\n email = user.email\n website = user.website\n affiliation = user.affiliation\n country = user.country\n prevent_name_change = utils.get_config('prevent_name_change')\n confirm_email = utils.get_config('verify_emails') and not user.verified\n return render_template('profile.html', name=name, email=email, 
website=website, affiliation=affiliation,\n country=country, prevent_name_change=prevent_name_change, confirm_email=confirm_email)\n else:\n return redirect(url_for('auth.login'))\n\n\[email protected]('/files', defaults={'path': ''})\[email protected]('/files/<path:path>')\ndef file_handler(path):\n f = Files.query.filter_by(location=path).first_or_404()\n if f.chal:\n if not utils.is_admin():\n if not utils.ctftime():\n if utils.view_after_ctf() and utils.ctf_started():\n pass\n else:\n abort(403)\n upload_folder = os.path.join(app.root_path, app.config['UPLOAD_FOLDER'])\n return send_file(safe_join(upload_folder, f.location))\n\n\[email protected]('/themes/<theme>/static/<path:path>')\ndef themes_handler(theme, path):\n filename = safe_join(app.root_path, 'themes', theme, 'static', path)\n if os.path.isfile(filename):\n return send_file(filename)\n else:\n abort(404)\n", "path": "CTFd/views.py"}], "after_files": [{"content": "import os\nimport re\n\nfrom flask import current_app as app, render_template, request, redirect, abort, jsonify, url_for, session, Blueprint, Response, send_file\nfrom flask.helpers import safe_join\nfrom jinja2.exceptions import TemplateNotFound\nfrom passlib.hash import bcrypt_sha256\n\nfrom CTFd.models import db, Teams, Solves, Awards, Files, Pages\nfrom CTFd.utils import cache, markdown\nfrom CTFd import utils\n\nviews = Blueprint('views', __name__)\n\n\[email protected]('/setup', methods=['GET', 'POST'])\ndef setup():\n # with app.app_context():\n # admin = Teams.query.filter_by(admin=True).first()\n\n if not utils.is_setup():\n if not session.get('nonce'):\n session['nonce'] = utils.sha512(os.urandom(10))\n if request.method == 'POST':\n ctf_name = request.form['ctf_name']\n ctf_name = utils.set_config('ctf_name', ctf_name)\n\n # CSS\n css = utils.set_config('start', '')\n\n # Admin user\n name = request.form['name']\n email = request.form['email']\n password = request.form['password']\n admin = Teams(name, email, password)\n admin.admin = True\n admin.banned = True\n\n # Index page\n\n index = \"\"\"<div class=\"row\">\n <div class=\"col-md-6 offset-md-3\">\n <img class=\"w-100 mx-auto d-block\" style=\"max-width: 500px;padding: 50px;padding-top: 14vh;\" src=\"themes/core/static/img/logo.png\" />\n <h3 class=\"text-center\">\n <p>A cool CTF platform from <a href=\"https://ctfd.io\">ctfd.io</a></p>\n <p>Follow us on social media:</p>\n <a href=\"https://twitter.com/ctfdio\"><i class=\"fab fa-twitter fa-2x\" aria-hidden=\"true\"></i></a> \n <a href=\"https://facebook.com/ctfdio\"><i class=\"fab fa-facebook fa-2x\" aria-hidden=\"true\"></i></a> \n <a href=\"https://github.com/ctfd\"><i class=\"fab fa-github fa-2x\" aria-hidden=\"true\"></i></a>\n </h3>\n <br>\n <h4 class=\"text-center\">\n <a href=\"admin\">Click here</a> to login and setup your CTF\n </h4>\n </div>\n</div>\"\"\".format(request.script_root)\n\n page = Pages(title=None, route='index', html=index, draft=False)\n\n # max attempts per challenge\n max_tries = utils.set_config('max_tries', 0)\n\n # Start time\n start = utils.set_config('start', None)\n end = utils.set_config('end', None)\n freeze = utils.set_config('freeze', None)\n\n # Challenges cannot be viewed by unregistered users\n view_challenges_unregistered = utils.set_config('view_challenges_unregistered', None)\n\n # Allow/Disallow registration\n prevent_registration = utils.set_config('prevent_registration', None)\n\n # Verify emails\n verify_emails = utils.set_config('verify_emails', None)\n\n mail_server = 
utils.set_config('mail_server', None)\n mail_port = utils.set_config('mail_port', None)\n mail_tls = utils.set_config('mail_tls', None)\n mail_ssl = utils.set_config('mail_ssl', None)\n mail_username = utils.set_config('mail_username', None)\n mail_password = utils.set_config('mail_password', None)\n mail_useauth = utils.set_config('mail_useauth', None)\n\n setup = utils.set_config('setup', True)\n\n db.session.add(page)\n db.session.add(admin)\n db.session.commit()\n\n session['username'] = admin.name\n session['id'] = admin.id\n session['admin'] = admin.admin\n session['nonce'] = utils.sha512(os.urandom(10))\n\n db.session.close()\n app.setup = False\n with app.app_context():\n cache.clear()\n\n return redirect(url_for('views.static_html'))\n return render_template('setup.html', nonce=session.get('nonce'))\n return redirect(url_for('views.static_html'))\n\n\n# Custom CSS handler\[email protected]('/static/user.css')\ndef custom_css():\n return Response(utils.get_config('css'), mimetype='text/css')\n\n\n# Static HTML files\[email protected](\"/\", defaults={'template': 'index'})\[email protected](\"/<path:template>\")\ndef static_html(template):\n page = utils.get_page(template)\n if page is None:\n abort(404)\n else:\n if page.auth_required and utils.authed() is False:\n return redirect(url_for('auth.login', next=request.path))\n\n return render_template('page.html', content=markdown(page.html))\n\n\[email protected]('/teams', defaults={'page': '1'})\[email protected]('/teams/<int:page>')\ndef teams(page):\n if utils.get_config('workshop_mode'):\n abort(404)\n page = abs(int(page))\n results_per_page = 50\n page_start = results_per_page * (page - 1)\n page_end = results_per_page * (page - 1) + results_per_page\n\n if utils.get_config('verify_emails'):\n count = Teams.query.filter_by(verified=True, banned=False).count()\n teams = Teams.query.filter_by(verified=True, banned=False).slice(page_start, page_end).all()\n else:\n count = Teams.query.filter_by(banned=False).count()\n teams = Teams.query.filter_by(banned=False).slice(page_start, page_end).all()\n pages = int(count / results_per_page) + (count % results_per_page > 0)\n return render_template('teams.html', teams=teams, team_pages=pages, curr_page=page)\n\n\[email protected]('/team', methods=['GET'])\ndef private_team():\n if utils.authed():\n teamid = session['id']\n\n freeze = utils.get_config('freeze')\n user = Teams.query.filter_by(id=teamid).first_or_404()\n solves = Solves.query.filter_by(teamid=teamid)\n awards = Awards.query.filter_by(teamid=teamid)\n\n place = user.place()\n score = user.score()\n\n if freeze:\n freeze = utils.unix_time_to_utc(freeze)\n if teamid != session.get('id'):\n solves = solves.filter(Solves.date < freeze)\n awards = awards.filter(Awards.date < freeze)\n\n solves = solves.all()\n awards = awards.all()\n\n return render_template('team.html', solves=solves, awards=awards, team=user, score=score, place=place, score_frozen=utils.is_scoreboard_frozen())\n else:\n return redirect(url_for('auth.login'))\n\n\[email protected]('/team/<int:teamid>', methods=['GET', 'POST'])\ndef team(teamid):\n if utils.get_config('workshop_mode'):\n abort(404)\n\n if utils.get_config('view_scoreboard_if_utils.authed') and not utils.authed():\n return redirect(url_for('auth.login', next=request.path))\n errors = []\n freeze = utils.get_config('freeze')\n user = Teams.query.filter_by(id=teamid).first_or_404()\n solves = Solves.query.filter_by(teamid=teamid)\n awards = Awards.query.filter_by(teamid=teamid)\n\n place = 
user.place()\n score = user.score()\n\n if freeze:\n freeze = utils.unix_time_to_utc(freeze)\n if teamid != session.get('id'):\n solves = solves.filter(Solves.date < freeze)\n awards = awards.filter(Awards.date < freeze)\n\n solves = solves.all()\n awards = awards.all()\n\n db.session.close()\n\n if utils.hide_scores() and teamid != session.get('id'):\n errors.append('Scores are currently hidden')\n\n if errors:\n return render_template('team.html', team=user, errors=errors)\n\n if request.method == 'GET':\n return render_template('team.html', solves=solves, awards=awards, team=user, score=score, place=place, score_frozen=utils.is_scoreboard_frozen())\n elif request.method == 'POST':\n json = {'solves': []}\n for x in solves:\n json['solves'].append({'id': x.id, 'chal': x.chalid, 'team': x.teamid})\n return jsonify(json)\n\n\[email protected]('/profile', methods=['POST', 'GET'])\ndef profile():\n if utils.authed():\n if request.method == \"POST\":\n errors = []\n\n name = request.form.get('name').strip()\n email = request.form.get('email').strip()\n website = request.form.get('website').strip()\n affiliation = request.form.get('affiliation').strip()\n country = request.form.get('country').strip()\n\n user = Teams.query.filter_by(id=session['id']).first()\n\n if not utils.get_config('prevent_name_change'):\n names = Teams.query.filter_by(name=name).first()\n name_len = len(request.form['name']) == 0\n\n emails = Teams.query.filter_by(email=email).first()\n valid_email = utils.check_email_format(email)\n\n if utils.check_email_format(name) is True:\n errors.append('Team name cannot be an email address')\n\n if ('password' in request.form.keys() and not len(request.form['password']) == 0) and \\\n (not bcrypt_sha256.verify(request.form.get('confirm').strip(), user.password)):\n errors.append(\"Your old password doesn't match what we have.\")\n if not valid_email:\n errors.append(\"That email doesn't look right\")\n if not utils.get_config('prevent_name_change') and names and name != session['username']:\n errors.append('That team name is already taken')\n if emails and emails.id != session['id']:\n errors.append('That email has already been used')\n if not utils.get_config('prevent_name_change') and name_len:\n errors.append('Pick a longer team name')\n if website.strip() and not utils.validate_url(website):\n errors.append(\"That doesn't look like a valid URL\")\n\n if len(errors) > 0:\n return render_template('profile.html', name=name, email=email, website=website,\n affiliation=affiliation, country=country, errors=errors)\n else:\n team = Teams.query.filter_by(id=session['id']).first()\n if team.name != name:\n if not utils.get_config('prevent_name_change'):\n team.name = name\n session['username'] = team.name\n if team.email != email.lower():\n team.email = email.lower()\n if utils.get_config('verify_emails'):\n team.verified = False\n\n if 'password' in request.form.keys() and not len(request.form['password']) == 0:\n team.password = bcrypt_sha256.encrypt(request.form.get('password'))\n team.website = website\n team.affiliation = affiliation\n team.country = country\n db.session.commit()\n db.session.close()\n return redirect(url_for('views.profile'))\n else:\n user = Teams.query.filter_by(id=session['id']).first()\n name = user.name\n email = user.email\n website = user.website\n affiliation = user.affiliation\n country = user.country\n prevent_name_change = utils.get_config('prevent_name_change')\n confirm_email = utils.get_config('verify_emails') and not user.verified\n return 
render_template('profile.html', name=name, email=email, website=website, affiliation=affiliation,\n country=country, prevent_name_change=prevent_name_change, confirm_email=confirm_email)\n else:\n return redirect(url_for('auth.login'))\n\n\[email protected]('/files', defaults={'path': ''})\[email protected]('/files/<path:path>')\ndef file_handler(path):\n f = Files.query.filter_by(location=path).first_or_404()\n if f.chal:\n if not utils.is_admin():\n if not utils.ctftime():\n if utils.view_after_ctf() and utils.ctf_started():\n pass\n else:\n abort(403)\n upload_folder = os.path.join(app.root_path, app.config['UPLOAD_FOLDER'])\n return send_file(safe_join(upload_folder, f.location))\n\n\[email protected]('/themes/<theme>/static/<path:path>')\ndef themes_handler(theme, path):\n filename = safe_join(app.root_path, 'themes', theme, 'static', path)\n if os.path.isfile(filename):\n return send_file(filename)\n else:\n abort(404)\n", "path": "CTFd/views.py"}]}
| 3,953 | 133 |
gh_patches_debug_41508
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-5577
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
scrapy parse doesn't support async callbacks
In master when running `scrapy parse` for a spider with `async def parse` the page is downloaded but then the spider hangs. In #4978 it instead raises `TypeError: 'async_generator' object is not iterable`. Both problems happen because the parse command calls `iterate_spider_output` and doesn't expect a Deferred or an async iterator.
scrapy parse doesn't support async callbacks
In master when running `scrapy parse` for a spider with `async def parse` the page is downloaded but then the spider hangs. In #4978 it instead raises `TypeError: 'async_generator' object is not iterable`. Both problems happen because the parse command calls `iterate_spider_output` and doesn't expect a Deferred or an async iterator.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/commands/parse.py`
Content:
```
1 import json
2 import logging
3 from typing import Dict
4
5 from itemadapter import is_item, ItemAdapter
6 from w3lib.url import is_url
7
8 from scrapy.commands import BaseRunSpiderCommand
9 from scrapy.http import Request
10 from scrapy.utils import display
11 from scrapy.utils.spider import iterate_spider_output, spidercls_for_request
12 from scrapy.exceptions import UsageError
13
14
15 logger = logging.getLogger(__name__)
16
17
18 class Command(BaseRunSpiderCommand):
19 requires_project = True
20
21 spider = None
22 items: Dict[int, list] = {}
23 requests: Dict[int, list] = {}
24
25 first_response = None
26
27 def syntax(self):
28 return "[options] <url>"
29
30 def short_desc(self):
31 return "Parse URL (using its spider) and print the results"
32
33 def add_options(self, parser):
34 BaseRunSpiderCommand.add_options(self, parser)
35 parser.add_argument("--spider", dest="spider", default=None,
36 help="use this spider without looking for one")
37 parser.add_argument("--pipelines", action="store_true",
38 help="process items through pipelines")
39 parser.add_argument("--nolinks", dest="nolinks", action="store_true",
40 help="don't show links to follow (extracted requests)")
41 parser.add_argument("--noitems", dest="noitems", action="store_true",
42 help="don't show scraped items")
43 parser.add_argument("--nocolour", dest="nocolour", action="store_true",
44 help="avoid using pygments to colorize the output")
45 parser.add_argument("-r", "--rules", dest="rules", action="store_true",
46 help="use CrawlSpider rules to discover the callback")
47 parser.add_argument("-c", "--callback", dest="callback",
48 help="use this callback for parsing, instead looking for a callback")
49 parser.add_argument("-m", "--meta", dest="meta",
50 help="inject extra meta into the Request, it must be a valid raw json string")
51 parser.add_argument("--cbkwargs", dest="cbkwargs",
52 help="inject extra callback kwargs into the Request, it must be a valid raw json string")
53 parser.add_argument("-d", "--depth", dest="depth", type=int, default=1,
54 help="maximum depth for parsing requests [default: %(default)s]")
55 parser.add_argument("-v", "--verbose", dest="verbose", action="store_true",
56 help="print each depth level one by one")
57
58 @property
59 def max_level(self):
60 max_items, max_requests = 0, 0
61 if self.items:
62 max_items = max(self.items)
63 if self.requests:
64 max_requests = max(self.requests)
65 return max(max_items, max_requests)
66
67 def add_items(self, lvl, new_items):
68 old_items = self.items.get(lvl, [])
69 self.items[lvl] = old_items + new_items
70
71 def add_requests(self, lvl, new_reqs):
72 old_reqs = self.requests.get(lvl, [])
73 self.requests[lvl] = old_reqs + new_reqs
74
75 def print_items(self, lvl=None, colour=True):
76 if lvl is None:
77 items = [item for lst in self.items.values() for item in lst]
78 else:
79 items = self.items.get(lvl, [])
80
81 print("# Scraped Items ", "-" * 60)
82 display.pprint([ItemAdapter(x).asdict() for x in items], colorize=colour)
83
84 def print_requests(self, lvl=None, colour=True):
85 if lvl is None:
86 if self.requests:
87 requests = self.requests[max(self.requests)]
88 else:
89 requests = []
90 else:
91 requests = self.requests.get(lvl, [])
92
93 print("# Requests ", "-" * 65)
94 display.pprint(requests, colorize=colour)
95
96 def print_results(self, opts):
97 colour = not opts.nocolour
98
99 if opts.verbose:
100 for level in range(1, self.max_level + 1):
101 print(f'\n>>> DEPTH LEVEL: {level} <<<')
102 if not opts.noitems:
103 self.print_items(level, colour)
104 if not opts.nolinks:
105 self.print_requests(level, colour)
106 else:
107 print(f'\n>>> STATUS DEPTH LEVEL {self.max_level} <<<')
108 if not opts.noitems:
109 self.print_items(colour=colour)
110 if not opts.nolinks:
111 self.print_requests(colour=colour)
112
113 def run_callback(self, response, callback, cb_kwargs=None):
114 cb_kwargs = cb_kwargs or {}
115 items, requests = [], []
116
117 for x in iterate_spider_output(callback(response, **cb_kwargs)):
118 if is_item(x):
119 items.append(x)
120 elif isinstance(x, Request):
121 requests.append(x)
122 return items, requests
123
124 def get_callback_from_rules(self, spider, response):
125 if getattr(spider, 'rules', None):
126 for rule in spider.rules:
127 if rule.link_extractor.matches(response.url):
128 return rule.callback or "parse"
129 else:
130 logger.error('No CrawlSpider rules found in spider %(spider)r, '
131 'please specify a callback to use for parsing',
132 {'spider': spider.name})
133
134 def set_spidercls(self, url, opts):
135 spider_loader = self.crawler_process.spider_loader
136 if opts.spider:
137 try:
138 self.spidercls = spider_loader.load(opts.spider)
139 except KeyError:
140 logger.error('Unable to find spider: %(spider)s',
141 {'spider': opts.spider})
142 else:
143 self.spidercls = spidercls_for_request(spider_loader, Request(url))
144 if not self.spidercls:
145 logger.error('Unable to find spider for: %(url)s', {'url': url})
146
147 def _start_requests(spider):
148 yield self.prepare_request(spider, Request(url), opts)
149 if self.spidercls:
150 self.spidercls.start_requests = _start_requests
151
152 def start_parsing(self, url, opts):
153 self.crawler_process.crawl(self.spidercls, **opts.spargs)
154 self.pcrawler = list(self.crawler_process.crawlers)[0]
155 self.crawler_process.start()
156
157 if not self.first_response:
158 logger.error('No response downloaded for: %(url)s',
159 {'url': url})
160
161 def prepare_request(self, spider, request, opts):
162 def callback(response, **cb_kwargs):
163 # memorize first request
164 if not self.first_response:
165 self.first_response = response
166
167 # determine real callback
168 cb = response.meta['_callback']
169 if not cb:
170 if opts.callback:
171 cb = opts.callback
172 elif opts.rules and self.first_response == response:
173 cb = self.get_callback_from_rules(spider, response)
174
175 if not cb:
176 logger.error('Cannot find a rule that matches %(url)r in spider: %(spider)s',
177 {'url': response.url, 'spider': spider.name})
178 return
179 else:
180 cb = 'parse'
181
182 if not callable(cb):
183 cb_method = getattr(spider, cb, None)
184 if callable(cb_method):
185 cb = cb_method
186 else:
187 logger.error('Cannot find callback %(callback)r in spider: %(spider)s',
188 {'callback': cb, 'spider': spider.name})
189 return
190
191 # parse items and requests
192 depth = response.meta['_depth']
193
194 items, requests = self.run_callback(response, cb, cb_kwargs)
195 if opts.pipelines:
196 itemproc = self.pcrawler.engine.scraper.itemproc
197 for item in items:
198 itemproc.process_item(item, spider)
199 self.add_items(depth, items)
200 self.add_requests(depth, requests)
201
202 scraped_data = items if opts.output else []
203 if depth < opts.depth:
204 for req in requests:
205 req.meta['_depth'] = depth + 1
206 req.meta['_callback'] = req.callback
207 req.callback = callback
208 scraped_data += requests
209
210 return scraped_data
211
212 # update request meta if any extra meta was passed through the --meta/-m opts.
213 if opts.meta:
214 request.meta.update(opts.meta)
215
216 # update cb_kwargs if any extra values were was passed through the --cbkwargs option.
217 if opts.cbkwargs:
218 request.cb_kwargs.update(opts.cbkwargs)
219
220 request.meta['_depth'] = 1
221 request.meta['_callback'] = request.callback
222 request.callback = callback
223 return request
224
225 def process_options(self, args, opts):
226 BaseRunSpiderCommand.process_options(self, args, opts)
227
228 self.process_request_meta(opts)
229 self.process_request_cb_kwargs(opts)
230
231 def process_request_meta(self, opts):
232 if opts.meta:
233 try:
234 opts.meta = json.loads(opts.meta)
235 except ValueError:
236 raise UsageError("Invalid -m/--meta value, pass a valid json string to -m or --meta. "
237 "Example: --meta='{\"foo\" : \"bar\"}'", print_help=False)
238
239 def process_request_cb_kwargs(self, opts):
240 if opts.cbkwargs:
241 try:
242 opts.cbkwargs = json.loads(opts.cbkwargs)
243 except ValueError:
244 raise UsageError("Invalid --cbkwargs value, pass a valid json string to --cbkwargs. "
245 "Example: --cbkwargs='{\"foo\" : \"bar\"}'", print_help=False)
246
247 def run(self, args, opts):
248 # parse arguments
249 if not len(args) == 1 or not is_url(args[0]):
250 raise UsageError()
251 else:
252 url = args[0]
253
254 # prepare spidercls
255 self.set_spidercls(url, opts)
256
257 if self.spidercls and opts.depth > 0:
258 self.start_parsing(url, opts)
259 self.print_results(opts)
260
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scrapy/commands/parse.py b/scrapy/commands/parse.py
--- a/scrapy/commands/parse.py
+++ b/scrapy/commands/parse.py
@@ -5,6 +5,8 @@
from itemadapter import is_item, ItemAdapter
from w3lib.url import is_url
+from twisted.internet.defer import maybeDeferred
+
from scrapy.commands import BaseRunSpiderCommand
from scrapy.http import Request
from scrapy.utils import display
@@ -110,16 +112,19 @@
if not opts.nolinks:
self.print_requests(colour=colour)
- def run_callback(self, response, callback, cb_kwargs=None):
- cb_kwargs = cb_kwargs or {}
+ def _get_items_and_requests(self, spider_output, opts, depth, spider, callback):
items, requests = [], []
-
- for x in iterate_spider_output(callback(response, **cb_kwargs)):
+ for x in spider_output:
if is_item(x):
items.append(x)
elif isinstance(x, Request):
requests.append(x)
- return items, requests
+ return items, requests, opts, depth, spider, callback
+
+ def run_callback(self, response, callback, cb_kwargs=None):
+ cb_kwargs = cb_kwargs or {}
+ d = maybeDeferred(iterate_spider_output, callback(response, **cb_kwargs))
+ return d
def get_callback_from_rules(self, spider, response):
if getattr(spider, 'rules', None):
@@ -158,6 +163,25 @@
logger.error('No response downloaded for: %(url)s',
{'url': url})
+ def scraped_data(self, args):
+ items, requests, opts, depth, spider, callback = args
+ if opts.pipelines:
+ itemproc = self.pcrawler.engine.scraper.itemproc
+ for item in items:
+ itemproc.process_item(item, spider)
+ self.add_items(depth, items)
+ self.add_requests(depth, requests)
+
+ scraped_data = items if opts.output else []
+ if depth < opts.depth:
+ for req in requests:
+ req.meta['_depth'] = depth + 1
+ req.meta['_callback'] = req.callback
+ req.callback = callback
+ scraped_data += requests
+
+ return scraped_data
+
def prepare_request(self, spider, request, opts):
def callback(response, **cb_kwargs):
# memorize first request
@@ -191,23 +215,10 @@
# parse items and requests
depth = response.meta['_depth']
- items, requests = self.run_callback(response, cb, cb_kwargs)
- if opts.pipelines:
- itemproc = self.pcrawler.engine.scraper.itemproc
- for item in items:
- itemproc.process_item(item, spider)
- self.add_items(depth, items)
- self.add_requests(depth, requests)
-
- scraped_data = items if opts.output else []
- if depth < opts.depth:
- for req in requests:
- req.meta['_depth'] = depth + 1
- req.meta['_callback'] = req.callback
- req.callback = callback
- scraped_data += requests
-
- return scraped_data
+ d = self.run_callback(response, cb, cb_kwargs)
+ d.addCallback(self._get_items_and_requests, opts, depth, spider, callback)
+ d.addCallback(self.scraped_data)
+ return d
# update request meta if any extra meta was passed through the --meta/-m opts.
if opts.meta:
|
{"golden_diff": "diff --git a/scrapy/commands/parse.py b/scrapy/commands/parse.py\n--- a/scrapy/commands/parse.py\n+++ b/scrapy/commands/parse.py\n@@ -5,6 +5,8 @@\n from itemadapter import is_item, ItemAdapter\n from w3lib.url import is_url\n \n+from twisted.internet.defer import maybeDeferred\n+\n from scrapy.commands import BaseRunSpiderCommand\n from scrapy.http import Request\n from scrapy.utils import display\n@@ -110,16 +112,19 @@\n if not opts.nolinks:\n self.print_requests(colour=colour)\n \n- def run_callback(self, response, callback, cb_kwargs=None):\n- cb_kwargs = cb_kwargs or {}\n+ def _get_items_and_requests(self, spider_output, opts, depth, spider, callback):\n items, requests = [], []\n-\n- for x in iterate_spider_output(callback(response, **cb_kwargs)):\n+ for x in spider_output:\n if is_item(x):\n items.append(x)\n elif isinstance(x, Request):\n requests.append(x)\n- return items, requests\n+ return items, requests, opts, depth, spider, callback\n+\n+ def run_callback(self, response, callback, cb_kwargs=None):\n+ cb_kwargs = cb_kwargs or {}\n+ d = maybeDeferred(iterate_spider_output, callback(response, **cb_kwargs))\n+ return d\n \n def get_callback_from_rules(self, spider, response):\n if getattr(spider, 'rules', None):\n@@ -158,6 +163,25 @@\n logger.error('No response downloaded for: %(url)s',\n {'url': url})\n \n+ def scraped_data(self, args):\n+ items, requests, opts, depth, spider, callback = args\n+ if opts.pipelines:\n+ itemproc = self.pcrawler.engine.scraper.itemproc\n+ for item in items:\n+ itemproc.process_item(item, spider)\n+ self.add_items(depth, items)\n+ self.add_requests(depth, requests)\n+\n+ scraped_data = items if opts.output else []\n+ if depth < opts.depth:\n+ for req in requests:\n+ req.meta['_depth'] = depth + 1\n+ req.meta['_callback'] = req.callback\n+ req.callback = callback\n+ scraped_data += requests\n+\n+ return scraped_data\n+\n def prepare_request(self, spider, request, opts):\n def callback(response, **cb_kwargs):\n # memorize first request\n@@ -191,23 +215,10 @@\n # parse items and requests\n depth = response.meta['_depth']\n \n- items, requests = self.run_callback(response, cb, cb_kwargs)\n- if opts.pipelines:\n- itemproc = self.pcrawler.engine.scraper.itemproc\n- for item in items:\n- itemproc.process_item(item, spider)\n- self.add_items(depth, items)\n- self.add_requests(depth, requests)\n-\n- scraped_data = items if opts.output else []\n- if depth < opts.depth:\n- for req in requests:\n- req.meta['_depth'] = depth + 1\n- req.meta['_callback'] = req.callback\n- req.callback = callback\n- scraped_data += requests\n-\n- return scraped_data\n+ d = self.run_callback(response, cb, cb_kwargs)\n+ d.addCallback(self._get_items_and_requests, opts, depth, spider, callback)\n+ d.addCallback(self.scraped_data)\n+ return d\n \n # update request meta if any extra meta was passed through the --meta/-m opts.\n if opts.meta:\n", "issue": "scrapy parse doesn't support async callbacks\nIn master when running `scrapy parse` for a spider with `async def parse` the page is downloaded but then the spider hangs. In #4978 it instead raises `TypeError: 'async_generator' object is not iterable`. Both problems happen because the parse command calls `iterate_spider_output` and doesn't expect a Deferred or an async iterator. \nscrapy parse doesn't support async callbacks\nIn master when running `scrapy parse` for a spider with `async def parse` the page is downloaded but then the spider hangs. 
In #4978 it instead raises `TypeError: 'async_generator' object is not iterable`. Both problems happen because the parse command calls `iterate_spider_output` and doesn't expect a Deferred or an async iterator. \n", "before_files": [{"content": "import json\nimport logging\nfrom typing import Dict\n\nfrom itemadapter import is_item, ItemAdapter\nfrom w3lib.url import is_url\n\nfrom scrapy.commands import BaseRunSpiderCommand\nfrom scrapy.http import Request\nfrom scrapy.utils import display\nfrom scrapy.utils.spider import iterate_spider_output, spidercls_for_request\nfrom scrapy.exceptions import UsageError\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass Command(BaseRunSpiderCommand):\n requires_project = True\n\n spider = None\n items: Dict[int, list] = {}\n requests: Dict[int, list] = {}\n\n first_response = None\n\n def syntax(self):\n return \"[options] <url>\"\n\n def short_desc(self):\n return \"Parse URL (using its spider) and print the results\"\n\n def add_options(self, parser):\n BaseRunSpiderCommand.add_options(self, parser)\n parser.add_argument(\"--spider\", dest=\"spider\", default=None,\n help=\"use this spider without looking for one\")\n parser.add_argument(\"--pipelines\", action=\"store_true\",\n help=\"process items through pipelines\")\n parser.add_argument(\"--nolinks\", dest=\"nolinks\", action=\"store_true\",\n help=\"don't show links to follow (extracted requests)\")\n parser.add_argument(\"--noitems\", dest=\"noitems\", action=\"store_true\",\n help=\"don't show scraped items\")\n parser.add_argument(\"--nocolour\", dest=\"nocolour\", action=\"store_true\",\n help=\"avoid using pygments to colorize the output\")\n parser.add_argument(\"-r\", \"--rules\", dest=\"rules\", action=\"store_true\",\n help=\"use CrawlSpider rules to discover the callback\")\n parser.add_argument(\"-c\", \"--callback\", dest=\"callback\",\n help=\"use this callback for parsing, instead looking for a callback\")\n parser.add_argument(\"-m\", \"--meta\", dest=\"meta\",\n help=\"inject extra meta into the Request, it must be a valid raw json string\")\n parser.add_argument(\"--cbkwargs\", dest=\"cbkwargs\",\n help=\"inject extra callback kwargs into the Request, it must be a valid raw json string\")\n parser.add_argument(\"-d\", \"--depth\", dest=\"depth\", type=int, default=1,\n help=\"maximum depth for parsing requests [default: %(default)s]\")\n parser.add_argument(\"-v\", \"--verbose\", dest=\"verbose\", action=\"store_true\",\n help=\"print each depth level one by one\")\n\n @property\n def max_level(self):\n max_items, max_requests = 0, 0\n if self.items:\n max_items = max(self.items)\n if self.requests:\n max_requests = max(self.requests)\n return max(max_items, max_requests)\n\n def add_items(self, lvl, new_items):\n old_items = self.items.get(lvl, [])\n self.items[lvl] = old_items + new_items\n\n def add_requests(self, lvl, new_reqs):\n old_reqs = self.requests.get(lvl, [])\n self.requests[lvl] = old_reqs + new_reqs\n\n def print_items(self, lvl=None, colour=True):\n if lvl is None:\n items = [item for lst in self.items.values() for item in lst]\n else:\n items = self.items.get(lvl, [])\n\n print(\"# Scraped Items \", \"-\" * 60)\n display.pprint([ItemAdapter(x).asdict() for x in items], colorize=colour)\n\n def print_requests(self, lvl=None, colour=True):\n if lvl is None:\n if self.requests:\n requests = self.requests[max(self.requests)]\n else:\n requests = []\n else:\n requests = self.requests.get(lvl, [])\n\n print(\"# Requests \", \"-\" * 65)\n display.pprint(requests, 
colorize=colour)\n\n def print_results(self, opts):\n colour = not opts.nocolour\n\n if opts.verbose:\n for level in range(1, self.max_level + 1):\n print(f'\\n>>> DEPTH LEVEL: {level} <<<')\n if not opts.noitems:\n self.print_items(level, colour)\n if not opts.nolinks:\n self.print_requests(level, colour)\n else:\n print(f'\\n>>> STATUS DEPTH LEVEL {self.max_level} <<<')\n if not opts.noitems:\n self.print_items(colour=colour)\n if not opts.nolinks:\n self.print_requests(colour=colour)\n\n def run_callback(self, response, callback, cb_kwargs=None):\n cb_kwargs = cb_kwargs or {}\n items, requests = [], []\n\n for x in iterate_spider_output(callback(response, **cb_kwargs)):\n if is_item(x):\n items.append(x)\n elif isinstance(x, Request):\n requests.append(x)\n return items, requests\n\n def get_callback_from_rules(self, spider, response):\n if getattr(spider, 'rules', None):\n for rule in spider.rules:\n if rule.link_extractor.matches(response.url):\n return rule.callback or \"parse\"\n else:\n logger.error('No CrawlSpider rules found in spider %(spider)r, '\n 'please specify a callback to use for parsing',\n {'spider': spider.name})\n\n def set_spidercls(self, url, opts):\n spider_loader = self.crawler_process.spider_loader\n if opts.spider:\n try:\n self.spidercls = spider_loader.load(opts.spider)\n except KeyError:\n logger.error('Unable to find spider: %(spider)s',\n {'spider': opts.spider})\n else:\n self.spidercls = spidercls_for_request(spider_loader, Request(url))\n if not self.spidercls:\n logger.error('Unable to find spider for: %(url)s', {'url': url})\n\n def _start_requests(spider):\n yield self.prepare_request(spider, Request(url), opts)\n if self.spidercls:\n self.spidercls.start_requests = _start_requests\n\n def start_parsing(self, url, opts):\n self.crawler_process.crawl(self.spidercls, **opts.spargs)\n self.pcrawler = list(self.crawler_process.crawlers)[0]\n self.crawler_process.start()\n\n if not self.first_response:\n logger.error('No response downloaded for: %(url)s',\n {'url': url})\n\n def prepare_request(self, spider, request, opts):\n def callback(response, **cb_kwargs):\n # memorize first request\n if not self.first_response:\n self.first_response = response\n\n # determine real callback\n cb = response.meta['_callback']\n if not cb:\n if opts.callback:\n cb = opts.callback\n elif opts.rules and self.first_response == response:\n cb = self.get_callback_from_rules(spider, response)\n\n if not cb:\n logger.error('Cannot find a rule that matches %(url)r in spider: %(spider)s',\n {'url': response.url, 'spider': spider.name})\n return\n else:\n cb = 'parse'\n\n if not callable(cb):\n cb_method = getattr(spider, cb, None)\n if callable(cb_method):\n cb = cb_method\n else:\n logger.error('Cannot find callback %(callback)r in spider: %(spider)s',\n {'callback': cb, 'spider': spider.name})\n return\n\n # parse items and requests\n depth = response.meta['_depth']\n\n items, requests = self.run_callback(response, cb, cb_kwargs)\n if opts.pipelines:\n itemproc = self.pcrawler.engine.scraper.itemproc\n for item in items:\n itemproc.process_item(item, spider)\n self.add_items(depth, items)\n self.add_requests(depth, requests)\n\n scraped_data = items if opts.output else []\n if depth < opts.depth:\n for req in requests:\n req.meta['_depth'] = depth + 1\n req.meta['_callback'] = req.callback\n req.callback = callback\n scraped_data += requests\n\n return scraped_data\n\n # update request meta if any extra meta was passed through the --meta/-m opts.\n if opts.meta:\n 
request.meta.update(opts.meta)\n\n # update cb_kwargs if any extra values were was passed through the --cbkwargs option.\n if opts.cbkwargs:\n request.cb_kwargs.update(opts.cbkwargs)\n\n request.meta['_depth'] = 1\n request.meta['_callback'] = request.callback\n request.callback = callback\n return request\n\n def process_options(self, args, opts):\n BaseRunSpiderCommand.process_options(self, args, opts)\n\n self.process_request_meta(opts)\n self.process_request_cb_kwargs(opts)\n\n def process_request_meta(self, opts):\n if opts.meta:\n try:\n opts.meta = json.loads(opts.meta)\n except ValueError:\n raise UsageError(\"Invalid -m/--meta value, pass a valid json string to -m or --meta. \"\n \"Example: --meta='{\\\"foo\\\" : \\\"bar\\\"}'\", print_help=False)\n\n def process_request_cb_kwargs(self, opts):\n if opts.cbkwargs:\n try:\n opts.cbkwargs = json.loads(opts.cbkwargs)\n except ValueError:\n raise UsageError(\"Invalid --cbkwargs value, pass a valid json string to --cbkwargs. \"\n \"Example: --cbkwargs='{\\\"foo\\\" : \\\"bar\\\"}'\", print_help=False)\n\n def run(self, args, opts):\n # parse arguments\n if not len(args) == 1 or not is_url(args[0]):\n raise UsageError()\n else:\n url = args[0]\n\n # prepare spidercls\n self.set_spidercls(url, opts)\n\n if self.spidercls and opts.depth > 0:\n self.start_parsing(url, opts)\n self.print_results(opts)\n", "path": "scrapy/commands/parse.py"}], "after_files": [{"content": "import json\nimport logging\nfrom typing import Dict\n\nfrom itemadapter import is_item, ItemAdapter\nfrom w3lib.url import is_url\n\nfrom twisted.internet.defer import maybeDeferred\n\nfrom scrapy.commands import BaseRunSpiderCommand\nfrom scrapy.http import Request\nfrom scrapy.utils import display\nfrom scrapy.utils.spider import iterate_spider_output, spidercls_for_request\nfrom scrapy.exceptions import UsageError\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass Command(BaseRunSpiderCommand):\n requires_project = True\n\n spider = None\n items: Dict[int, list] = {}\n requests: Dict[int, list] = {}\n\n first_response = None\n\n def syntax(self):\n return \"[options] <url>\"\n\n def short_desc(self):\n return \"Parse URL (using its spider) and print the results\"\n\n def add_options(self, parser):\n BaseRunSpiderCommand.add_options(self, parser)\n parser.add_argument(\"--spider\", dest=\"spider\", default=None,\n help=\"use this spider without looking for one\")\n parser.add_argument(\"--pipelines\", action=\"store_true\",\n help=\"process items through pipelines\")\n parser.add_argument(\"--nolinks\", dest=\"nolinks\", action=\"store_true\",\n help=\"don't show links to follow (extracted requests)\")\n parser.add_argument(\"--noitems\", dest=\"noitems\", action=\"store_true\",\n help=\"don't show scraped items\")\n parser.add_argument(\"--nocolour\", dest=\"nocolour\", action=\"store_true\",\n help=\"avoid using pygments to colorize the output\")\n parser.add_argument(\"-r\", \"--rules\", dest=\"rules\", action=\"store_true\",\n help=\"use CrawlSpider rules to discover the callback\")\n parser.add_argument(\"-c\", \"--callback\", dest=\"callback\",\n help=\"use this callback for parsing, instead looking for a callback\")\n parser.add_argument(\"-m\", \"--meta\", dest=\"meta\",\n help=\"inject extra meta into the Request, it must be a valid raw json string\")\n parser.add_argument(\"--cbkwargs\", dest=\"cbkwargs\",\n help=\"inject extra callback kwargs into the Request, it must be a valid raw json string\")\n parser.add_argument(\"-d\", \"--depth\", dest=\"depth\", 
type=int, default=1,\n help=\"maximum depth for parsing requests [default: %(default)s]\")\n parser.add_argument(\"-v\", \"--verbose\", dest=\"verbose\", action=\"store_true\",\n help=\"print each depth level one by one\")\n\n @property\n def max_level(self):\n max_items, max_requests = 0, 0\n if self.items:\n max_items = max(self.items)\n if self.requests:\n max_requests = max(self.requests)\n return max(max_items, max_requests)\n\n def add_items(self, lvl, new_items):\n old_items = self.items.get(lvl, [])\n self.items[lvl] = old_items + new_items\n\n def add_requests(self, lvl, new_reqs):\n old_reqs = self.requests.get(lvl, [])\n self.requests[lvl] = old_reqs + new_reqs\n\n def print_items(self, lvl=None, colour=True):\n if lvl is None:\n items = [item for lst in self.items.values() for item in lst]\n else:\n items = self.items.get(lvl, [])\n\n print(\"# Scraped Items \", \"-\" * 60)\n display.pprint([ItemAdapter(x).asdict() for x in items], colorize=colour)\n\n def print_requests(self, lvl=None, colour=True):\n if lvl is None:\n if self.requests:\n requests = self.requests[max(self.requests)]\n else:\n requests = []\n else:\n requests = self.requests.get(lvl, [])\n\n print(\"# Requests \", \"-\" * 65)\n display.pprint(requests, colorize=colour)\n\n def print_results(self, opts):\n colour = not opts.nocolour\n\n if opts.verbose:\n for level in range(1, self.max_level + 1):\n print(f'\\n>>> DEPTH LEVEL: {level} <<<')\n if not opts.noitems:\n self.print_items(level, colour)\n if not opts.nolinks:\n self.print_requests(level, colour)\n else:\n print(f'\\n>>> STATUS DEPTH LEVEL {self.max_level} <<<')\n if not opts.noitems:\n self.print_items(colour=colour)\n if not opts.nolinks:\n self.print_requests(colour=colour)\n\n def _get_items_and_requests(self, spider_output, opts, depth, spider, callback):\n items, requests = [], []\n for x in spider_output:\n if is_item(x):\n items.append(x)\n elif isinstance(x, Request):\n requests.append(x)\n return items, requests, opts, depth, spider, callback\n\n def run_callback(self, response, callback, cb_kwargs=None):\n cb_kwargs = cb_kwargs or {}\n d = maybeDeferred(iterate_spider_output, callback(response, **cb_kwargs))\n return d\n\n def get_callback_from_rules(self, spider, response):\n if getattr(spider, 'rules', None):\n for rule in spider.rules:\n if rule.link_extractor.matches(response.url):\n return rule.callback or \"parse\"\n else:\n logger.error('No CrawlSpider rules found in spider %(spider)r, '\n 'please specify a callback to use for parsing',\n {'spider': spider.name})\n\n def set_spidercls(self, url, opts):\n spider_loader = self.crawler_process.spider_loader\n if opts.spider:\n try:\n self.spidercls = spider_loader.load(opts.spider)\n except KeyError:\n logger.error('Unable to find spider: %(spider)s',\n {'spider': opts.spider})\n else:\n self.spidercls = spidercls_for_request(spider_loader, Request(url))\n if not self.spidercls:\n logger.error('Unable to find spider for: %(url)s', {'url': url})\n\n def _start_requests(spider):\n yield self.prepare_request(spider, Request(url), opts)\n if self.spidercls:\n self.spidercls.start_requests = _start_requests\n\n def start_parsing(self, url, opts):\n self.crawler_process.crawl(self.spidercls, **opts.spargs)\n self.pcrawler = list(self.crawler_process.crawlers)[0]\n self.crawler_process.start()\n\n if not self.first_response:\n logger.error('No response downloaded for: %(url)s',\n {'url': url})\n\n def scraped_data(self, args):\n items, requests, opts, depth, spider, callback = args\n if 
opts.pipelines:\n itemproc = self.pcrawler.engine.scraper.itemproc\n for item in items:\n itemproc.process_item(item, spider)\n self.add_items(depth, items)\n self.add_requests(depth, requests)\n\n scraped_data = items if opts.output else []\n if depth < opts.depth:\n for req in requests:\n req.meta['_depth'] = depth + 1\n req.meta['_callback'] = req.callback\n req.callback = callback\n scraped_data += requests\n\n return scraped_data\n\n def prepare_request(self, spider, request, opts):\n def callback(response, **cb_kwargs):\n # memorize first request\n if not self.first_response:\n self.first_response = response\n\n # determine real callback\n cb = response.meta['_callback']\n if not cb:\n if opts.callback:\n cb = opts.callback\n elif opts.rules and self.first_response == response:\n cb = self.get_callback_from_rules(spider, response)\n\n if not cb:\n logger.error('Cannot find a rule that matches %(url)r in spider: %(spider)s',\n {'url': response.url, 'spider': spider.name})\n return\n else:\n cb = 'parse'\n\n if not callable(cb):\n cb_method = getattr(spider, cb, None)\n if callable(cb_method):\n cb = cb_method\n else:\n logger.error('Cannot find callback %(callback)r in spider: %(spider)s',\n {'callback': cb, 'spider': spider.name})\n return\n\n # parse items and requests\n depth = response.meta['_depth']\n\n d = self.run_callback(response, cb, cb_kwargs)\n d.addCallback(self._get_items_and_requests, opts, depth, spider, callback)\n d.addCallback(self.scraped_data)\n return d\n\n # update request meta if any extra meta was passed through the --meta/-m opts.\n if opts.meta:\n request.meta.update(opts.meta)\n\n # update cb_kwargs if any extra values were was passed through the --cbkwargs option.\n if opts.cbkwargs:\n request.cb_kwargs.update(opts.cbkwargs)\n\n request.meta['_depth'] = 1\n request.meta['_callback'] = request.callback\n request.callback = callback\n return request\n\n def process_options(self, args, opts):\n BaseRunSpiderCommand.process_options(self, args, opts)\n\n self.process_request_meta(opts)\n self.process_request_cb_kwargs(opts)\n\n def process_request_meta(self, opts):\n if opts.meta:\n try:\n opts.meta = json.loads(opts.meta)\n except ValueError:\n raise UsageError(\"Invalid -m/--meta value, pass a valid json string to -m or --meta. \"\n \"Example: --meta='{\\\"foo\\\" : \\\"bar\\\"}'\", print_help=False)\n\n def process_request_cb_kwargs(self, opts):\n if opts.cbkwargs:\n try:\n opts.cbkwargs = json.loads(opts.cbkwargs)\n except ValueError:\n raise UsageError(\"Invalid --cbkwargs value, pass a valid json string to --cbkwargs. \"\n \"Example: --cbkwargs='{\\\"foo\\\" : \\\"bar\\\"}'\", print_help=False)\n\n def run(self, args, opts):\n # parse arguments\n if not len(args) == 1 or not is_url(args[0]):\n raise UsageError()\n else:\n url = args[0]\n\n # prepare spidercls\n self.set_spidercls(url, opts)\n\n if self.spidercls and opts.depth > 0:\n self.start_parsing(url, opts)\n self.print_results(opts)\n", "path": "scrapy/commands/parse.py"}]}
| 3,214 | 795 |
gh_patches_debug_14959
|
rasdani/github-patches
|
git_diff
|
mesonbuild__meson-4439
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Gnome-initial-setup is broken
https://build.gnome.org/continuous/buildmaster/builds/2018/10/25/27/build/log-gnome-initial-setup.txt
Git bisect says the cause is this: https://github.com/mesonbuild/meson/commit/176af2c8c33
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mesonbuild/modules/i18n.py`
Content:
```
1 # Copyright 2016 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import shutil
16
17 from os import path
18 from .. import coredata, mesonlib, build
19 from ..mesonlib import MesonException
20 from . import ModuleReturnValue
21 from . import ExtensionModule
22 from ..interpreterbase import permittedKwargs, FeatureNew, FeatureNewKwargs
23
24 PRESET_ARGS = {
25 'glib': [
26 '--from-code=UTF-8',
27 '--add-comments',
28
29 # https://developer.gnome.org/glib/stable/glib-I18N.html
30 '--keyword=_',
31 '--keyword=N_',
32 '--keyword=C_:1c,2',
33 '--keyword=NC_:1c,2',
34 '--keyword=g_dcgettext:2',
35 '--keyword=g_dngettext:2,3',
36 '--keyword=g_dpgettext2:2c,3',
37
38 '--flag=N_:1:pass-c-format',
39 '--flag=C_:2:pass-c-format',
40 '--flag=NC_:2:pass-c-format',
41 '--flag=g_dngettext:2:pass-c-format',
42 '--flag=g_strdup_printf:1:c-format',
43 '--flag=g_string_printf:2:c-format',
44 '--flag=g_string_append_printf:2:c-format',
45 '--flag=g_error_new:3:c-format',
46 '--flag=g_set_error:4:c-format',
47 ]
48 }
49
50 class I18nModule(ExtensionModule):
51
52 @staticmethod
53 def _get_data_dirs(state, dirs):
54 """Returns source directories of relative paths"""
55 src_dir = path.join(state.environment.get_source_dir(), state.subdir)
56 return [path.join(src_dir, d) for d in dirs]
57
58 @FeatureNew('i18n.merge_file', '0.37.0')
59 @permittedKwargs({'languages', 'data_dirs', 'preset', 'args', 'po_dir', 'type',
60 'input', 'output', 'install', 'install_dir'})
61 def merge_file(self, state, args, kwargs):
62 podir = kwargs.pop('po_dir', None)
63 if not podir:
64 raise MesonException('i18n: po_dir is a required kwarg')
65 podir = path.join(state.build_to_src, state.subdir, podir)
66
67 file_type = kwargs.pop('type', 'xml')
68 VALID_TYPES = ('xml', 'desktop')
69 if file_type not in VALID_TYPES:
70 raise MesonException('i18n: "{}" is not a valid type {}'.format(file_type, VALID_TYPES))
71
72 datadirs = self._get_data_dirs(state, mesonlib.stringlistify(kwargs.pop('data_dirs', [])))
73 datadirs = '--datadirs=' + ':'.join(datadirs) if datadirs else None
74
75 command = state.environment.get_build_command() + [
76 '--internal', 'msgfmthelper',
77 '@INPUT@', '@OUTPUT@', file_type, podir
78 ]
79 if datadirs:
80 command.append(datadirs)
81
82 kwargs['command'] = command
83
84 inputfile = kwargs['input']
85 if hasattr(inputfile, 'held_object'):
86 ct = build.CustomTarget(kwargs['output'] + '_merge', state.subdir, state.subproject, kwargs)
87 else:
88 if isinstance(inputfile, str):
89 inputfile = mesonlib.File.from_source_file(state.environment.source_dir,
90 state.subdir, inputfile)
91 output = kwargs['output']
92 ifile_abs = inputfile.absolute_path(state.environment.source_dir,
93 state.environment.build_dir)
94 values = mesonlib.get_filenames_templates_dict([ifile_abs], None)
95 outputs = mesonlib.substitute_values([output], values)
96 output = outputs[0]
97 ct = build.CustomTarget(output + '_' + state.subdir.replace('/', '@').replace('\\', '@') + '_merge', state.subdir, state.subproject, kwargs)
98 return ModuleReturnValue(ct, [ct])
99
100 @FeatureNewKwargs('i18n.gettext', '0.37.0', ['preset'])
101 @permittedKwargs({'po_dir', 'data_dirs', 'type', 'languages', 'args', 'preset', 'install'})
102 def gettext(self, state, args, kwargs):
103 if len(args) != 1:
104 raise coredata.MesonException('Gettext requires one positional argument (package name).')
105 if not shutil.which('xgettext'):
106 raise coredata.MesonException('Can not do gettext because xgettext is not installed.')
107 packagename = args[0]
108 languages = mesonlib.stringlistify(kwargs.get('languages', []))
109 datadirs = self._get_data_dirs(state, mesonlib.stringlistify(kwargs.get('data_dirs', [])))
110 extra_args = mesonlib.stringlistify(kwargs.get('args', []))
111
112 preset = kwargs.pop('preset', None)
113 if preset:
114 preset_args = PRESET_ARGS.get(preset)
115 if not preset_args:
116 raise coredata.MesonException('i18n: Preset "{}" is not one of the valid options: {}'.format(
117 preset, list(PRESET_ARGS.keys())))
118 extra_args = set(preset_args + extra_args)
119
120 pkg_arg = '--pkgname=' + packagename
121 lang_arg = '--langs=' + '@@'.join(languages) if languages else None
122 datadirs = '--datadirs=' + ':'.join(datadirs) if datadirs else None
123 extra_args = '--extra-args=' + '@@'.join(extra_args) if extra_args else None
124
125 potargs = state.environment.get_build_command() + ['--internal', 'gettext', 'pot', pkg_arg]
126 if datadirs:
127 potargs.append(datadirs)
128 if extra_args:
129 potargs.append(extra_args)
130 pottarget = build.RunTarget(packagename + '-pot', potargs[0], potargs[1:], [], state.subdir, state.subproject)
131
132 gmoargs = state.environment.get_build_command() + ['--internal', 'gettext', 'gen_gmo']
133 if lang_arg:
134 gmoargs.append(lang_arg)
135 gmotarget = build.RunTarget(packagename + '-gmo', gmoargs[0], gmoargs[1:], [], state.subdir, state.subproject)
136
137 updatepoargs = state.environment.get_build_command() + ['--internal', 'gettext', 'update_po', pkg_arg]
138 if lang_arg:
139 updatepoargs.append(lang_arg)
140 if datadirs:
141 updatepoargs.append(datadirs)
142 if extra_args:
143 updatepoargs.append(extra_args)
144 updatepotarget = build.RunTarget(packagename + '-update-po', updatepoargs[0], updatepoargs[1:], [], state.subdir, state.subproject)
145
146 targets = [pottarget, gmotarget, updatepotarget]
147
148 install = kwargs.get('install', True)
149 if install:
150 script = state.environment.get_build_command()
151 args = ['--internal', 'gettext', 'install',
152 '--subdir=' + state.subdir,
153 '--localedir=' + state.environment.coredata.get_builtin_option('localedir'),
154 pkg_arg]
155 if lang_arg:
156 args.append(lang_arg)
157 iscript = build.RunScript(script, args)
158 targets.append(iscript)
159
160 return ModuleReturnValue(None, targets)
161
162 def initialize(*args, **kwargs):
163 return I18nModule(*args, **kwargs)
164
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mesonbuild/modules/i18n.py b/mesonbuild/modules/i18n.py
--- a/mesonbuild/modules/i18n.py
+++ b/mesonbuild/modules/i18n.py
@@ -85,6 +85,10 @@
if hasattr(inputfile, 'held_object'):
ct = build.CustomTarget(kwargs['output'] + '_merge', state.subdir, state.subproject, kwargs)
else:
+ if isinstance(inputfile, list):
+ # We only use this input file to create a name of the custom target.
+ # Thus we can ignore the other entries.
+ inputfile = inputfile[0]
if isinstance(inputfile, str):
inputfile = mesonlib.File.from_source_file(state.environment.source_dir,
state.subdir, inputfile)
|
{"golden_diff": "diff --git a/mesonbuild/modules/i18n.py b/mesonbuild/modules/i18n.py\n--- a/mesonbuild/modules/i18n.py\n+++ b/mesonbuild/modules/i18n.py\n@@ -85,6 +85,10 @@\n if hasattr(inputfile, 'held_object'):\n ct = build.CustomTarget(kwargs['output'] + '_merge', state.subdir, state.subproject, kwargs)\n else:\n+ if isinstance(inputfile, list):\n+ # We only use this input file to create a name of the custom target.\n+ # Thus we can ignore the other entries.\n+ inputfile = inputfile[0]\n if isinstance(inputfile, str):\n inputfile = mesonlib.File.from_source_file(state.environment.source_dir,\n state.subdir, inputfile)\n", "issue": "Gnome-initial-setup is broken\nhttps://build.gnome.org/continuous/buildmaster/builds/2018/10/25/27/build/log-gnome-initial-setup.txt\r\n\r\nGit bisect says the cause is this: https://github.com/mesonbuild/meson/commit/176af2c8c33\n", "before_files": [{"content": "# Copyright 2016 The Meson development team\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport shutil\n\nfrom os import path\nfrom .. import coredata, mesonlib, build\nfrom ..mesonlib import MesonException\nfrom . import ModuleReturnValue\nfrom . import ExtensionModule\nfrom ..interpreterbase import permittedKwargs, FeatureNew, FeatureNewKwargs\n\nPRESET_ARGS = {\n 'glib': [\n '--from-code=UTF-8',\n '--add-comments',\n\n # https://developer.gnome.org/glib/stable/glib-I18N.html\n '--keyword=_',\n '--keyword=N_',\n '--keyword=C_:1c,2',\n '--keyword=NC_:1c,2',\n '--keyword=g_dcgettext:2',\n '--keyword=g_dngettext:2,3',\n '--keyword=g_dpgettext2:2c,3',\n\n '--flag=N_:1:pass-c-format',\n '--flag=C_:2:pass-c-format',\n '--flag=NC_:2:pass-c-format',\n '--flag=g_dngettext:2:pass-c-format',\n '--flag=g_strdup_printf:1:c-format',\n '--flag=g_string_printf:2:c-format',\n '--flag=g_string_append_printf:2:c-format',\n '--flag=g_error_new:3:c-format',\n '--flag=g_set_error:4:c-format',\n ]\n}\n\nclass I18nModule(ExtensionModule):\n\n @staticmethod\n def _get_data_dirs(state, dirs):\n \"\"\"Returns source directories of relative paths\"\"\"\n src_dir = path.join(state.environment.get_source_dir(), state.subdir)\n return [path.join(src_dir, d) for d in dirs]\n\n @FeatureNew('i18n.merge_file', '0.37.0')\n @permittedKwargs({'languages', 'data_dirs', 'preset', 'args', 'po_dir', 'type',\n 'input', 'output', 'install', 'install_dir'})\n def merge_file(self, state, args, kwargs):\n podir = kwargs.pop('po_dir', None)\n if not podir:\n raise MesonException('i18n: po_dir is a required kwarg')\n podir = path.join(state.build_to_src, state.subdir, podir)\n\n file_type = kwargs.pop('type', 'xml')\n VALID_TYPES = ('xml', 'desktop')\n if file_type not in VALID_TYPES:\n raise MesonException('i18n: \"{}\" is not a valid type {}'.format(file_type, VALID_TYPES))\n\n datadirs = self._get_data_dirs(state, mesonlib.stringlistify(kwargs.pop('data_dirs', [])))\n datadirs = '--datadirs=' + ':'.join(datadirs) if datadirs else None\n\n command = state.environment.get_build_command() + [\n '--internal', 'msgfmthelper',\n '@INPUT@', 
'@OUTPUT@', file_type, podir\n ]\n if datadirs:\n command.append(datadirs)\n\n kwargs['command'] = command\n\n inputfile = kwargs['input']\n if hasattr(inputfile, 'held_object'):\n ct = build.CustomTarget(kwargs['output'] + '_merge', state.subdir, state.subproject, kwargs)\n else:\n if isinstance(inputfile, str):\n inputfile = mesonlib.File.from_source_file(state.environment.source_dir,\n state.subdir, inputfile)\n output = kwargs['output']\n ifile_abs = inputfile.absolute_path(state.environment.source_dir,\n state.environment.build_dir)\n values = mesonlib.get_filenames_templates_dict([ifile_abs], None)\n outputs = mesonlib.substitute_values([output], values)\n output = outputs[0]\n ct = build.CustomTarget(output + '_' + state.subdir.replace('/', '@').replace('\\\\', '@') + '_merge', state.subdir, state.subproject, kwargs)\n return ModuleReturnValue(ct, [ct])\n\n @FeatureNewKwargs('i18n.gettext', '0.37.0', ['preset'])\n @permittedKwargs({'po_dir', 'data_dirs', 'type', 'languages', 'args', 'preset', 'install'})\n def gettext(self, state, args, kwargs):\n if len(args) != 1:\n raise coredata.MesonException('Gettext requires one positional argument (package name).')\n if not shutil.which('xgettext'):\n raise coredata.MesonException('Can not do gettext because xgettext is not installed.')\n packagename = args[0]\n languages = mesonlib.stringlistify(kwargs.get('languages', []))\n datadirs = self._get_data_dirs(state, mesonlib.stringlistify(kwargs.get('data_dirs', [])))\n extra_args = mesonlib.stringlistify(kwargs.get('args', []))\n\n preset = kwargs.pop('preset', None)\n if preset:\n preset_args = PRESET_ARGS.get(preset)\n if not preset_args:\n raise coredata.MesonException('i18n: Preset \"{}\" is not one of the valid options: {}'.format(\n preset, list(PRESET_ARGS.keys())))\n extra_args = set(preset_args + extra_args)\n\n pkg_arg = '--pkgname=' + packagename\n lang_arg = '--langs=' + '@@'.join(languages) if languages else None\n datadirs = '--datadirs=' + ':'.join(datadirs) if datadirs else None\n extra_args = '--extra-args=' + '@@'.join(extra_args) if extra_args else None\n\n potargs = state.environment.get_build_command() + ['--internal', 'gettext', 'pot', pkg_arg]\n if datadirs:\n potargs.append(datadirs)\n if extra_args:\n potargs.append(extra_args)\n pottarget = build.RunTarget(packagename + '-pot', potargs[0], potargs[1:], [], state.subdir, state.subproject)\n\n gmoargs = state.environment.get_build_command() + ['--internal', 'gettext', 'gen_gmo']\n if lang_arg:\n gmoargs.append(lang_arg)\n gmotarget = build.RunTarget(packagename + '-gmo', gmoargs[0], gmoargs[1:], [], state.subdir, state.subproject)\n\n updatepoargs = state.environment.get_build_command() + ['--internal', 'gettext', 'update_po', pkg_arg]\n if lang_arg:\n updatepoargs.append(lang_arg)\n if datadirs:\n updatepoargs.append(datadirs)\n if extra_args:\n updatepoargs.append(extra_args)\n updatepotarget = build.RunTarget(packagename + '-update-po', updatepoargs[0], updatepoargs[1:], [], state.subdir, state.subproject)\n\n targets = [pottarget, gmotarget, updatepotarget]\n\n install = kwargs.get('install', True)\n if install:\n script = state.environment.get_build_command()\n args = ['--internal', 'gettext', 'install',\n '--subdir=' + state.subdir,\n '--localedir=' + state.environment.coredata.get_builtin_option('localedir'),\n pkg_arg]\n if lang_arg:\n args.append(lang_arg)\n iscript = build.RunScript(script, args)\n targets.append(iscript)\n\n return ModuleReturnValue(None, targets)\n\ndef initialize(*args, **kwargs):\n return 
I18nModule(*args, **kwargs)\n", "path": "mesonbuild/modules/i18n.py"}], "after_files": [{"content": "# Copyright 2016 The Meson development team\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport shutil\n\nfrom os import path\nfrom .. import coredata, mesonlib, build\nfrom ..mesonlib import MesonException\nfrom . import ModuleReturnValue\nfrom . import ExtensionModule\nfrom ..interpreterbase import permittedKwargs, FeatureNew, FeatureNewKwargs\n\nPRESET_ARGS = {\n 'glib': [\n '--from-code=UTF-8',\n '--add-comments',\n\n # https://developer.gnome.org/glib/stable/glib-I18N.html\n '--keyword=_',\n '--keyword=N_',\n '--keyword=C_:1c,2',\n '--keyword=NC_:1c,2',\n '--keyword=g_dcgettext:2',\n '--keyword=g_dngettext:2,3',\n '--keyword=g_dpgettext2:2c,3',\n\n '--flag=N_:1:pass-c-format',\n '--flag=C_:2:pass-c-format',\n '--flag=NC_:2:pass-c-format',\n '--flag=g_dngettext:2:pass-c-format',\n '--flag=g_strdup_printf:1:c-format',\n '--flag=g_string_printf:2:c-format',\n '--flag=g_string_append_printf:2:c-format',\n '--flag=g_error_new:3:c-format',\n '--flag=g_set_error:4:c-format',\n ]\n}\n\nclass I18nModule(ExtensionModule):\n\n @staticmethod\n def _get_data_dirs(state, dirs):\n \"\"\"Returns source directories of relative paths\"\"\"\n src_dir = path.join(state.environment.get_source_dir(), state.subdir)\n return [path.join(src_dir, d) for d in dirs]\n\n @FeatureNew('i18n.merge_file', '0.37.0')\n @permittedKwargs({'languages', 'data_dirs', 'preset', 'args', 'po_dir', 'type',\n 'input', 'output', 'install', 'install_dir'})\n def merge_file(self, state, args, kwargs):\n podir = kwargs.pop('po_dir', None)\n if not podir:\n raise MesonException('i18n: po_dir is a required kwarg')\n podir = path.join(state.build_to_src, state.subdir, podir)\n\n file_type = kwargs.pop('type', 'xml')\n VALID_TYPES = ('xml', 'desktop')\n if file_type not in VALID_TYPES:\n raise MesonException('i18n: \"{}\" is not a valid type {}'.format(file_type, VALID_TYPES))\n\n datadirs = self._get_data_dirs(state, mesonlib.stringlistify(kwargs.pop('data_dirs', [])))\n datadirs = '--datadirs=' + ':'.join(datadirs) if datadirs else None\n\n command = state.environment.get_build_command() + [\n '--internal', 'msgfmthelper',\n '@INPUT@', '@OUTPUT@', file_type, podir\n ]\n if datadirs:\n command.append(datadirs)\n\n kwargs['command'] = command\n\n inputfile = kwargs['input']\n if hasattr(inputfile, 'held_object'):\n ct = build.CustomTarget(kwargs['output'] + '_merge', state.subdir, state.subproject, kwargs)\n else:\n if isinstance(inputfile, list):\n # We only use this input file to create a name of the custom target.\n # Thus we can ignore the other entries.\n inputfile = inputfile[0]\n if isinstance(inputfile, str):\n inputfile = mesonlib.File.from_source_file(state.environment.source_dir,\n state.subdir, inputfile)\n output = kwargs['output']\n ifile_abs = inputfile.absolute_path(state.environment.source_dir,\n state.environment.build_dir)\n values = mesonlib.get_filenames_templates_dict([ifile_abs], None)\n outputs = 
mesonlib.substitute_values([output], values)\n output = outputs[0]\n ct = build.CustomTarget(output + '_' + state.subdir.replace('/', '@').replace('\\\\', '@') + '_merge', state.subdir, state.subproject, kwargs)\n return ModuleReturnValue(ct, [ct])\n\n @FeatureNewKwargs('i18n.gettext', '0.37.0', ['preset'])\n @permittedKwargs({'po_dir', 'data_dirs', 'type', 'languages', 'args', 'preset', 'install'})\n def gettext(self, state, args, kwargs):\n if len(args) != 1:\n raise coredata.MesonException('Gettext requires one positional argument (package name).')\n if not shutil.which('xgettext'):\n raise coredata.MesonException('Can not do gettext because xgettext is not installed.')\n packagename = args[0]\n languages = mesonlib.stringlistify(kwargs.get('languages', []))\n datadirs = self._get_data_dirs(state, mesonlib.stringlistify(kwargs.get('data_dirs', [])))\n extra_args = mesonlib.stringlistify(kwargs.get('args', []))\n\n preset = kwargs.pop('preset', None)\n if preset:\n preset_args = PRESET_ARGS.get(preset)\n if not preset_args:\n raise coredata.MesonException('i18n: Preset \"{}\" is not one of the valid options: {}'.format(\n preset, list(PRESET_ARGS.keys())))\n extra_args = set(preset_args + extra_args)\n\n pkg_arg = '--pkgname=' + packagename\n lang_arg = '--langs=' + '@@'.join(languages) if languages else None\n datadirs = '--datadirs=' + ':'.join(datadirs) if datadirs else None\n extra_args = '--extra-args=' + '@@'.join(extra_args) if extra_args else None\n\n potargs = state.environment.get_build_command() + ['--internal', 'gettext', 'pot', pkg_arg]\n if datadirs:\n potargs.append(datadirs)\n if extra_args:\n potargs.append(extra_args)\n pottarget = build.RunTarget(packagename + '-pot', potargs[0], potargs[1:], [], state.subdir, state.subproject)\n\n gmoargs = state.environment.get_build_command() + ['--internal', 'gettext', 'gen_gmo']\n if lang_arg:\n gmoargs.append(lang_arg)\n gmotarget = build.RunTarget(packagename + '-gmo', gmoargs[0], gmoargs[1:], [], state.subdir, state.subproject)\n\n updatepoargs = state.environment.get_build_command() + ['--internal', 'gettext', 'update_po', pkg_arg]\n if lang_arg:\n updatepoargs.append(lang_arg)\n if datadirs:\n updatepoargs.append(datadirs)\n if extra_args:\n updatepoargs.append(extra_args)\n updatepotarget = build.RunTarget(packagename + '-update-po', updatepoargs[0], updatepoargs[1:], [], state.subdir, state.subproject)\n\n targets = [pottarget, gmotarget, updatepotarget]\n\n install = kwargs.get('install', True)\n if install:\n script = state.environment.get_build_command()\n args = ['--internal', 'gettext', 'install',\n '--subdir=' + state.subdir,\n '--localedir=' + state.environment.coredata.get_builtin_option('localedir'),\n pkg_arg]\n if lang_arg:\n args.append(lang_arg)\n iscript = build.RunScript(script, args)\n targets.append(iscript)\n\n return ModuleReturnValue(None, targets)\n\ndef initialize(*args, **kwargs):\n return I18nModule(*args, **kwargs)\n", "path": "mesonbuild/modules/i18n.py"}]}
| 2,449 | 178 |
gh_patches_debug_20513
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-1896
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error processing 'rule_notify' on 'QuotaMailPlugin': 'int' object has no attribute 'lower'
https://beta.getsentry.com/sentry/sentry/group/56738068/
```
AttributeError: 'int' object has no attribute 'lower'
(11 additional frame(s) were not displayed)
...
File "sentry/interfaces/user.py", line 99, in to_email_html
return render_to_string('sentry/partial/interfaces/user_email.html', context)
File "sentry/web/helpers.py", line 122, in render_to_string
return loader.render_to_string(template, context)
File "templatetag_sugar/node.py", line 19, in render
return self.function(context, *args, **kwargs)
File "sentry/templatetags/sentry_helpers.py", line 314, in gravatar_url
return get_gravatar_url(email, size, default)
File "sentry/utils/avatar.py", line 18, in get_gravatar_url
hashlib.md5(email.lower()).hexdigest())
Stacktrace (most recent call last):
Error processing 'rule_notify' on 'QuotaMailPlugin': 'int' object has no attribute 'lower'
File "sentry/partial/interfaces/user_email.html", line 7
<tr>
{% if user_email %}
<td style="width:84px;padding-top:10px;vertical-align:top">
<img src="{% gravatar_url user_email size 64 %}">
</td>
{% endif %}
<td>
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/interfaces/user.py`
Content:
```
1 """
2 sentry.interfaces.user
3 ~~~~~~~~~~~~~~~~~~~~~~
4
5 :copyright: (c) 2010-2014 by the Sentry Team, see AUTHORS for more details.
6 :license: BSD, see LICENSE for more details.
7 """
8 from __future__ import absolute_import
9
10 __all__ = ('User',)
11
12 from sentry.interfaces.base import Interface
13 from sentry.utils.safe import trim, trim_dict
14 from sentry.web.helpers import render_to_string
15 from ipaddr import IPAddress
16
17
18 def validate_ip(value, required=True):
19 if not required and not value:
20 return
21
22 # will raise a ValueError
23 IPAddress(value)
24 return value
25
26
27 class User(Interface):
28 """
29 An interface which describes the authenticated User for a request.
30
31 You should provide **at least** either an `id` (a unique identifier for
32 an authenticated user) or `ip_address` (their IP address).
33
34 All other attributes are optional.
35
36 >>> {
37 >>> "id": "unique_id",
38 >>> "username": "my_user",
39 >>> "email": "[email protected]"
40 >>> "ip_address": "127.0.0.1",
41 >>> "optional": "value"
42 >>> }
43 """
44 @classmethod
45 def to_python(cls, data):
46 data = data.copy()
47
48 extra_data = data.pop('data', data)
49 if not isinstance(extra_data, dict):
50 extra_data = {}
51
52 kwargs = {
53 'id': trim(data.pop('id', None), 128),
54 'email': trim(data.pop('email', None), 128),
55 'username': trim(data.pop('username', None), 128),
56 'ip_address': validate_ip(data.pop('ip_address', None), False),
57 }
58
59 kwargs['data'] = trim_dict(extra_data)
60 return cls(**kwargs)
61
62 def get_api_context(self, is_public=False):
63 return {
64 'id': self.id,
65 'email': self.email,
66 'username': self.username,
67 'ipAddress': self.ip_address,
68 'data': self.data,
69 }
70
71 def get_path(self):
72 return 'sentry.interfaces.User'
73
74 def get_hash(self):
75 return []
76
77 def to_email_html(self, event, **kwargs):
78 context = {
79 'user_id': self.id,
80 'user_email': self.email,
81 'user_username': self.username,
82 'user_ip_address': self.ip_address,
83 'user_data': self.data,
84 }
85 return render_to_string('sentry/partial/interfaces/user_email.html', context)
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/sentry/interfaces/user.py b/src/sentry/interfaces/user.py
--- a/src/sentry/interfaces/user.py
+++ b/src/sentry/interfaces/user.py
@@ -15,6 +15,19 @@
from ipaddr import IPAddress
+def validate_email(value, required=True):
+ if not required and not value:
+ return
+
+ if not isinstance(value, basestring):
+ raise TypeError('object of type %r is not an email address' % type(value).__name__)
+
+ # safe to assume an email address at least has a @ in it.
+ if '@' not in value:
+ raise ValueError('malformed email address')
+ return value
+
+
def validate_ip(value, required=True):
if not required and not value:
return
@@ -51,7 +64,7 @@
kwargs = {
'id': trim(data.pop('id', None), 128),
- 'email': trim(data.pop('email', None), 128),
+ 'email': trim(validate_email(data.pop('email', None), False), 128),
'username': trim(data.pop('username', None), 128),
'ip_address': validate_ip(data.pop('ip_address', None), False),
}
|
{"golden_diff": "diff --git a/src/sentry/interfaces/user.py b/src/sentry/interfaces/user.py\n--- a/src/sentry/interfaces/user.py\n+++ b/src/sentry/interfaces/user.py\n@@ -15,6 +15,19 @@\n from ipaddr import IPAddress\n \n \n+def validate_email(value, required=True):\n+ if not required and not value:\n+ return\n+\n+ if not isinstance(value, basestring):\n+ raise TypeError('object of type %r is not an email address' % type(value).__name__)\n+\n+ # safe to assume an email address at least has a @ in it.\n+ if '@' not in value:\n+ raise ValueError('malformed email address')\n+ return value\n+\n+\n def validate_ip(value, required=True):\n if not required and not value:\n return\n@@ -51,7 +64,7 @@\n \n kwargs = {\n 'id': trim(data.pop('id', None), 128),\n- 'email': trim(data.pop('email', None), 128),\n+ 'email': trim(validate_email(data.pop('email', None), False), 128),\n 'username': trim(data.pop('username', None), 128),\n 'ip_address': validate_ip(data.pop('ip_address', None), False),\n }\n", "issue": "Error processing 'rule_notify' on 'QuotaMailPlugin': 'int' object has no attribute 'lower'\nhttps://beta.getsentry.com/sentry/sentry/group/56738068/\n\n```\nAttributeError: 'int' object has no attribute 'lower'\n(11 additional frame(s) were not displayed)\n...\n File \"sentry/interfaces/user.py\", line 99, in to_email_html\n return render_to_string('sentry/partial/interfaces/user_email.html', context)\n File \"sentry/web/helpers.py\", line 122, in render_to_string\n return loader.render_to_string(template, context)\n File \"templatetag_sugar/node.py\", line 19, in render\n return self.function(context, *args, **kwargs)\n File \"sentry/templatetags/sentry_helpers.py\", line 314, in gravatar_url\n return get_gravatar_url(email, size, default)\n File \"sentry/utils/avatar.py\", line 18, in get_gravatar_url\n hashlib.md5(email.lower()).hexdigest())\n\nStacktrace (most recent call last):\n\nError processing 'rule_notify' on 'QuotaMailPlugin': 'int' object has no attribute 'lower'\n\nFile \"sentry/partial/interfaces/user_email.html\", line 7\n\n <tr>\n {% if user_email %}\n <td style=\"width:84px;padding-top:10px;vertical-align:top\">\n <img src=\"{% gravatar_url user_email size 64 %}\">\n </td>\n {% endif %}\n <td>\n```\n\n", "before_files": [{"content": "\"\"\"\nsentry.interfaces.user\n~~~~~~~~~~~~~~~~~~~~~~\n\n:copyright: (c) 2010-2014 by the Sentry Team, see AUTHORS for more details.\n:license: BSD, see LICENSE for more details.\n\"\"\"\nfrom __future__ import absolute_import\n\n__all__ = ('User',)\n\nfrom sentry.interfaces.base import Interface\nfrom sentry.utils.safe import trim, trim_dict\nfrom sentry.web.helpers import render_to_string\nfrom ipaddr import IPAddress\n\n\ndef validate_ip(value, required=True):\n if not required and not value:\n return\n\n # will raise a ValueError\n IPAddress(value)\n return value\n\n\nclass User(Interface):\n \"\"\"\n An interface which describes the authenticated User for a request.\n\n You should provide **at least** either an `id` (a unique identifier for\n an authenticated user) or `ip_address` (their IP address).\n\n All other attributes are optional.\n\n >>> {\n >>> \"id\": \"unique_id\",\n >>> \"username\": \"my_user\",\n >>> \"email\": \"[email protected]\"\n >>> \"ip_address\": \"127.0.0.1\",\n >>> \"optional\": \"value\"\n >>> }\n \"\"\"\n @classmethod\n def to_python(cls, data):\n data = data.copy()\n\n extra_data = data.pop('data', data)\n if not isinstance(extra_data, dict):\n extra_data = {}\n\n kwargs = {\n 'id': trim(data.pop('id', None), 
128),\n 'email': trim(data.pop('email', None), 128),\n 'username': trim(data.pop('username', None), 128),\n 'ip_address': validate_ip(data.pop('ip_address', None), False),\n }\n\n kwargs['data'] = trim_dict(extra_data)\n return cls(**kwargs)\n\n def get_api_context(self, is_public=False):\n return {\n 'id': self.id,\n 'email': self.email,\n 'username': self.username,\n 'ipAddress': self.ip_address,\n 'data': self.data,\n }\n\n def get_path(self):\n return 'sentry.interfaces.User'\n\n def get_hash(self):\n return []\n\n def to_email_html(self, event, **kwargs):\n context = {\n 'user_id': self.id,\n 'user_email': self.email,\n 'user_username': self.username,\n 'user_ip_address': self.ip_address,\n 'user_data': self.data,\n }\n return render_to_string('sentry/partial/interfaces/user_email.html', context)\n", "path": "src/sentry/interfaces/user.py"}], "after_files": [{"content": "\"\"\"\nsentry.interfaces.user\n~~~~~~~~~~~~~~~~~~~~~~\n\n:copyright: (c) 2010-2014 by the Sentry Team, see AUTHORS for more details.\n:license: BSD, see LICENSE for more details.\n\"\"\"\nfrom __future__ import absolute_import\n\n__all__ = ('User',)\n\nfrom sentry.interfaces.base import Interface\nfrom sentry.utils.safe import trim, trim_dict\nfrom sentry.web.helpers import render_to_string\nfrom ipaddr import IPAddress\n\n\ndef validate_email(value, required=True):\n if not required and not value:\n return\n\n if not isinstance(value, basestring):\n raise TypeError('object of type %r is not an email address' % type(value).__name__)\n\n # safe to assume an email address at least has a @ in it.\n if '@' not in value:\n raise ValueError('malformed email address')\n return value\n\n\ndef validate_ip(value, required=True):\n if not required and not value:\n return\n\n # will raise a ValueError\n IPAddress(value)\n return value\n\n\nclass User(Interface):\n \"\"\"\n An interface which describes the authenticated User for a request.\n\n You should provide **at least** either an `id` (a unique identifier for\n an authenticated user) or `ip_address` (their IP address).\n\n All other attributes are optional.\n\n >>> {\n >>> \"id\": \"unique_id\",\n >>> \"username\": \"my_user\",\n >>> \"email\": \"[email protected]\"\n >>> \"ip_address\": \"127.0.0.1\",\n >>> \"optional\": \"value\"\n >>> }\n \"\"\"\n @classmethod\n def to_python(cls, data):\n data = data.copy()\n\n extra_data = data.pop('data', data)\n if not isinstance(extra_data, dict):\n extra_data = {}\n\n kwargs = {\n 'id': trim(data.pop('id', None), 128),\n 'email': trim(validate_email(data.pop('email', None), False), 128),\n 'username': trim(data.pop('username', None), 128),\n 'ip_address': validate_ip(data.pop('ip_address', None), False),\n }\n\n kwargs['data'] = trim_dict(extra_data)\n return cls(**kwargs)\n\n def get_api_context(self, is_public=False):\n return {\n 'id': self.id,\n 'email': self.email,\n 'username': self.username,\n 'ipAddress': self.ip_address,\n 'data': self.data,\n }\n\n def get_path(self):\n return 'sentry.interfaces.User'\n\n def get_hash(self):\n return []\n\n def to_email_html(self, event, **kwargs):\n context = {\n 'user_id': self.id,\n 'user_email': self.email,\n 'user_username': self.username,\n 'user_ip_address': self.ip_address,\n 'user_data': self.data,\n }\n return render_to_string('sentry/partial/interfaces/user_email.html', context)\n", "path": "src/sentry/interfaces/user.py"}]}
| 1,332 | 284 |
gh_patches_debug_16562
|
rasdani/github-patches
|
git_diff
|
mlflow__mlflow-4047
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] change in 1.12.0 causes S3 SSLError
Thank you for submitting an issue. Please refer to our [issue policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md) for additional information about bug reports. For help with debugging your code, please refer to [Stack Overflow](https://stackoverflow.com/questions/tagged/mlflow).
**Please fill in this bug report template to ensure a timely and thorough response.**
### Willingness to contribute
The MLflow Community encourages bug fix contributions. Would you or another member of your organization be willing to contribute a fix for this bug to the MLflow code base?
- [ ] Yes. I can contribute a fix for this bug independently.
- [ ] Yes. I would be willing to contribute a fix for this bug with guidance from the MLflow community.
- [ x] No. I cannot contribute a bug fix at this time.
### System information
- **Have I written custom code (as opposed to using a stock example script provided in MLflow)**:
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: macOS 10.15.6
- **MLflow installed from (source or binary)**: binary
- **MLflow version (run ``mlflow --version``)**: 1.12.0
- **Python version**: 3.8.5
- **npm version, if running the dev UI**:
- **Exact command to reproduce**:
S3ArtifactRepository(my_uri).list_artifacts(my_path)
### Describe the problem
Describe the problem clearly here. Include descriptions of the expected behavior and the actual behavior.
As of this version I'm getting an error:
"certificate verify failed: self signed certificate in certificate chain."
I'm working with private corporate endpoints. This worked fine before 1.12.0. I've isolated it to changes in mlflow.store.artifact.s3_artifact_repo, line 54. Simply adding verify=True explicitly seems to have caused this, even though this should be the default in boto3.
### Code to reproduce issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem.
Tricky
### Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
### What component(s), interfaces, languages, and integrations does this bug affect?
Components
- [x ] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [ ] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: Local serving, model deployment tools, spark UDFs
- [ ] `area/server-infra`: MLflow server, JavaScript dev server
- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging
Interface
- [ ] `area/uiux`: Front-end, user experience, JavaScript, plotting
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support
Language
- [ ] `language/r`: R APIs and clients
- [ ] `language/java`: Java APIs and clients
- [ ] `language/new`: Proposals for new client languages
Integrations
- [ ] `integrations/azure`: Azure and Azure ML integrations
- [ ] `integrations/sagemaker`: SageMaker integrations
- [ ] `integrations/databricks`: Databricks integrations
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mlflow/store/artifact/s3_artifact_repo.py`
Content:
```
1 import os
2 from mimetypes import guess_type
3
4 import posixpath
5 import urllib.parse
6
7 from mlflow import data
8 from mlflow.entities import FileInfo
9 from mlflow.exceptions import MlflowException
10 from mlflow.store.artifact.artifact_repo import ArtifactRepository
11 from mlflow.utils.file_utils import relative_path_to_artifact_path
12
13
14 class S3ArtifactRepository(ArtifactRepository):
15 """Stores artifacts on Amazon S3."""
16
17 @staticmethod
18 def parse_s3_uri(uri):
19 """Parse an S3 URI, returning (bucket, path)"""
20 parsed = urllib.parse.urlparse(uri)
21 if parsed.scheme != "s3":
22 raise Exception("Not an S3 URI: %s" % uri)
23 path = parsed.path
24 if path.startswith("/"):
25 path = path[1:]
26 return parsed.netloc, path
27
28 @staticmethod
29 def get_s3_file_upload_extra_args():
30 import json
31
32 s3_file_upload_extra_args = os.environ.get("MLFLOW_S3_UPLOAD_EXTRA_ARGS")
33 if s3_file_upload_extra_args:
34 return json.loads(s3_file_upload_extra_args)
35 else:
36 return None
37
38 def _get_s3_client(self):
39 import boto3
40 from botocore.client import Config
41
42 s3_endpoint_url = os.environ.get("MLFLOW_S3_ENDPOINT_URL")
43 ignore_tls = os.environ.get("MLFLOW_S3_IGNORE_TLS")
44
45 verify = True
46 if ignore_tls:
47 verify = ignore_tls.lower() not in ["true", "yes", "1"]
48
49 # NOTE: If you need to specify this env variable, please file an issue at
50 # https://github.com/mlflow/mlflow/issues so we know your use-case!
51 signature_version = os.environ.get("MLFLOW_EXPERIMENTAL_S3_SIGNATURE_VERSION", "s3v4")
52 return boto3.client(
53 "s3",
54 config=Config(signature_version=signature_version),
55 endpoint_url=s3_endpoint_url,
56 verify=verify,
57 )
58
59 def _upload_file(self, s3_client, local_file, bucket, key):
60 extra_args = dict()
61 guessed_type, guessed_encoding = guess_type(local_file)
62 if guessed_type is not None:
63 extra_args["ContentType"] = guessed_type
64 if guessed_encoding is not None:
65 extra_args["ContentEncoding"] = guessed_encoding
66 environ_extra_args = self.get_s3_file_upload_extra_args()
67 if environ_extra_args is not None:
68 extra_args.update(environ_extra_args)
69 s3_client.upload_file(Filename=local_file, Bucket=bucket, Key=key, ExtraArgs=extra_args)
70
71 def log_artifact(self, local_file, artifact_path=None):
72 (bucket, dest_path) = data.parse_s3_uri(self.artifact_uri)
73 if artifact_path:
74 dest_path = posixpath.join(dest_path, artifact_path)
75 dest_path = posixpath.join(dest_path, os.path.basename(local_file))
76 self._upload_file(
77 s3_client=self._get_s3_client(), local_file=local_file, bucket=bucket, key=dest_path
78 )
79
80 def log_artifacts(self, local_dir, artifact_path=None):
81 (bucket, dest_path) = data.parse_s3_uri(self.artifact_uri)
82 if artifact_path:
83 dest_path = posixpath.join(dest_path, artifact_path)
84 s3_client = self._get_s3_client()
85 local_dir = os.path.abspath(local_dir)
86 for (root, _, filenames) in os.walk(local_dir):
87 upload_path = dest_path
88 if root != local_dir:
89 rel_path = os.path.relpath(root, local_dir)
90 rel_path = relative_path_to_artifact_path(rel_path)
91 upload_path = posixpath.join(dest_path, rel_path)
92 for f in filenames:
93 self._upload_file(
94 s3_client=s3_client,
95 local_file=os.path.join(root, f),
96 bucket=bucket,
97 key=posixpath.join(upload_path, f),
98 )
99
100 def list_artifacts(self, path=None):
101 (bucket, artifact_path) = data.parse_s3_uri(self.artifact_uri)
102 dest_path = artifact_path
103 if path:
104 dest_path = posixpath.join(dest_path, path)
105 infos = []
106 prefix = dest_path + "/" if dest_path else ""
107 s3_client = self._get_s3_client()
108 paginator = s3_client.get_paginator("list_objects_v2")
109 results = paginator.paginate(Bucket=bucket, Prefix=prefix, Delimiter="/")
110 for result in results:
111 # Subdirectories will be listed as "common prefixes" due to the way we made the request
112 for obj in result.get("CommonPrefixes", []):
113 subdir_path = obj.get("Prefix")
114 self._verify_listed_object_contains_artifact_path_prefix(
115 listed_object_path=subdir_path, artifact_path=artifact_path
116 )
117 subdir_rel_path = posixpath.relpath(path=subdir_path, start=artifact_path)
118 if subdir_rel_path.endswith("/"):
119 subdir_rel_path = subdir_rel_path[:-1]
120 infos.append(FileInfo(subdir_rel_path, True, None))
121 # Objects listed directly will be files
122 for obj in result.get("Contents", []):
123 file_path = obj.get("Key")
124 self._verify_listed_object_contains_artifact_path_prefix(
125 listed_object_path=file_path, artifact_path=artifact_path
126 )
127 file_rel_path = posixpath.relpath(path=file_path, start=artifact_path)
128 file_size = int(obj.get("Size"))
129 infos.append(FileInfo(file_rel_path, False, file_size))
130 return sorted(infos, key=lambda f: f.path)
131
132 @staticmethod
133 def _verify_listed_object_contains_artifact_path_prefix(listed_object_path, artifact_path):
134 if not listed_object_path.startswith(artifact_path):
135 raise MlflowException(
136 "The path of the listed S3 object does not begin with the specified"
137 " artifact path. Artifact path: {artifact_path}. Object path:"
138 " {object_path}.".format(
139 artifact_path=artifact_path, object_path=listed_object_path
140 )
141 )
142
143 def _download_file(self, remote_file_path, local_path):
144 (bucket, s3_root_path) = data.parse_s3_uri(self.artifact_uri)
145 s3_full_path = posixpath.join(s3_root_path, remote_file_path)
146 s3_client = self._get_s3_client()
147 s3_client.download_file(bucket, s3_full_path, local_path)
148
149 def delete_artifacts(self, artifact_path=None):
150 (bucket, dest_path) = data.parse_s3_uri(self.artifact_uri)
151 if artifact_path:
152 dest_path = posixpath.join(dest_path, artifact_path)
153
154 s3_client = self._get_s3_client()
155 list_objects = s3_client.list_objects(Bucket=bucket, Prefix=dest_path).get("Contents", [])
156 for to_delete_obj in list_objects:
157 file_path = to_delete_obj.get("Key")
158 self._verify_listed_object_contains_artifact_path_prefix(
159 listed_object_path=file_path, artifact_path=dest_path
160 )
161 s3_client.delete_object(Bucket=bucket, Key=file_path)
162
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mlflow/store/artifact/s3_artifact_repo.py b/mlflow/store/artifact/s3_artifact_repo.py
--- a/mlflow/store/artifact/s3_artifact_repo.py
+++ b/mlflow/store/artifact/s3_artifact_repo.py
@@ -42,9 +42,14 @@
s3_endpoint_url = os.environ.get("MLFLOW_S3_ENDPOINT_URL")
ignore_tls = os.environ.get("MLFLOW_S3_IGNORE_TLS")
- verify = True
+ do_verify = True
if ignore_tls:
- verify = ignore_tls.lower() not in ["true", "yes", "1"]
+ do_verify = ignore_tls.lower() not in ["true", "yes", "1"]
+
+ # The valid verify argument value is None/False/path to cert bundle file, See
+ # https://github.com/boto/boto3/blob/73865126cad3938ca80a2f567a1c79cb248169a7/
+ # boto3/session.py#L212
+ verify = None if do_verify else False
# NOTE: If you need to specify this env variable, please file an issue at
# https://github.com/mlflow/mlflow/issues so we know your use-case!
|
{"golden_diff": "diff --git a/mlflow/store/artifact/s3_artifact_repo.py b/mlflow/store/artifact/s3_artifact_repo.py\n--- a/mlflow/store/artifact/s3_artifact_repo.py\n+++ b/mlflow/store/artifact/s3_artifact_repo.py\n@@ -42,9 +42,14 @@\n s3_endpoint_url = os.environ.get(\"MLFLOW_S3_ENDPOINT_URL\")\n ignore_tls = os.environ.get(\"MLFLOW_S3_IGNORE_TLS\")\n \n- verify = True\n+ do_verify = True\n if ignore_tls:\n- verify = ignore_tls.lower() not in [\"true\", \"yes\", \"1\"]\n+ do_verify = ignore_tls.lower() not in [\"true\", \"yes\", \"1\"]\n+\n+ # The valid verify argument value is None/False/path to cert bundle file, See\n+ # https://github.com/boto/boto3/blob/73865126cad3938ca80a2f567a1c79cb248169a7/\n+ # boto3/session.py#L212\n+ verify = None if do_verify else False\n \n # NOTE: If you need to specify this env variable, please file an issue at\n # https://github.com/mlflow/mlflow/issues so we know your use-case!\n", "issue": "[BUG] change in 1.12.0 causes S3 SSLError\nThank you for submitting an issue. Please refer to our [issue policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md) for additional information about bug reports. For help with debugging your code, please refer to [Stack Overflow](https://stackoverflow.com/questions/tagged/mlflow).\r\n\r\n**Please fill in this bug report template to ensure a timely and thorough response.**\r\n\r\n### Willingness to contribute\r\nThe MLflow Community encourages bug fix contributions. Would you or another member of your organization be willing to contribute a fix for this bug to the MLflow code base?\r\n\r\n- [ ] Yes. I can contribute a fix for this bug independently.\r\n- [ ] Yes. I would be willing to contribute a fix for this bug with guidance from the MLflow community.\r\n- [ x] No. I cannot contribute a bug fix at this time.\r\n\r\n### System information\r\n- **Have I written custom code (as opposed to using a stock example script provided in MLflow)**:\r\n- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: macOS 10.15.6\r\n- **MLflow installed from (source or binary)**: binary\r\n- **MLflow version (run ``mlflow --version``)**: 1.12.0\r\n- **Python version**: 3.8.5\r\n- **npm version, if running the dev UI**:\r\n- **Exact command to reproduce**:\r\nS3ArtifactRepository(my_uri).list_artifacts(my_path)\r\n\r\n### Describe the problem\r\nDescribe the problem clearly here. Include descriptions of the expected behavior and the actual behavior.\r\n\r\nAs of this version I'm getting an error:\r\n\"certificate verify failed: self signed certificate in certificate chain.\" \r\n\r\nI'm working with private corporate endpoints. This worked fine before 1.12.0. I've isolated it to changes in mlflow.store.artifact.s3_artifact_repo, line 54. Simply adding verify=True explicitly seems to have caused this, even though this should be the default in boto3. \r\n\r\n### Code to reproduce issue\r\nProvide a reproducible test case that is the bare minimum necessary to generate the problem.\r\n\r\nTricky \r\n\r\n### Other info / logs\r\nInclude any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. 
Large logs and files should be attached.\r\n\r\n\r\n### What component(s), interfaces, languages, and integrations does this bug affect?\r\nComponents \r\n- [x ] `area/artifacts`: Artifact stores and artifact logging\r\n- [ ] `area/build`: Build and test infrastructure for MLflow\r\n- [ ] `area/docs`: MLflow documentation pages\r\n- [ ] `area/examples`: Example code\r\n- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry\r\n- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors\r\n- [ ] `area/projects`: MLproject format, project running backends\r\n- [ ] `area/scoring`: Local serving, model deployment tools, spark UDFs\r\n- [ ] `area/server-infra`: MLflow server, JavaScript dev server\r\n- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging\r\n\r\nInterface \r\n- [ ] `area/uiux`: Front-end, user experience, JavaScript, plotting\r\n- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models\r\n- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry\r\n- [ ] `area/windows`: Windows support\r\n\r\nLanguage \r\n- [ ] `language/r`: R APIs and clients\r\n- [ ] `language/java`: Java APIs and clients\r\n- [ ] `language/new`: Proposals for new client languages\r\n\r\nIntegrations\r\n- [ ] `integrations/azure`: Azure and Azure ML integrations\r\n- [ ] `integrations/sagemaker`: SageMaker integrations\r\n- [ ] `integrations/databricks`: Databricks integrations\r\n\n", "before_files": [{"content": "import os\nfrom mimetypes import guess_type\n\nimport posixpath\nimport urllib.parse\n\nfrom mlflow import data\nfrom mlflow.entities import FileInfo\nfrom mlflow.exceptions import MlflowException\nfrom mlflow.store.artifact.artifact_repo import ArtifactRepository\nfrom mlflow.utils.file_utils import relative_path_to_artifact_path\n\n\nclass S3ArtifactRepository(ArtifactRepository):\n \"\"\"Stores artifacts on Amazon S3.\"\"\"\n\n @staticmethod\n def parse_s3_uri(uri):\n \"\"\"Parse an S3 URI, returning (bucket, path)\"\"\"\n parsed = urllib.parse.urlparse(uri)\n if parsed.scheme != \"s3\":\n raise Exception(\"Not an S3 URI: %s\" % uri)\n path = parsed.path\n if path.startswith(\"/\"):\n path = path[1:]\n return parsed.netloc, path\n\n @staticmethod\n def get_s3_file_upload_extra_args():\n import json\n\n s3_file_upload_extra_args = os.environ.get(\"MLFLOW_S3_UPLOAD_EXTRA_ARGS\")\n if s3_file_upload_extra_args:\n return json.loads(s3_file_upload_extra_args)\n else:\n return None\n\n def _get_s3_client(self):\n import boto3\n from botocore.client import Config\n\n s3_endpoint_url = os.environ.get(\"MLFLOW_S3_ENDPOINT_URL\")\n ignore_tls = os.environ.get(\"MLFLOW_S3_IGNORE_TLS\")\n\n verify = True\n if ignore_tls:\n verify = ignore_tls.lower() not in [\"true\", \"yes\", \"1\"]\n\n # NOTE: If you need to specify this env variable, please file an issue at\n # https://github.com/mlflow/mlflow/issues so we know your use-case!\n signature_version = os.environ.get(\"MLFLOW_EXPERIMENTAL_S3_SIGNATURE_VERSION\", \"s3v4\")\n return boto3.client(\n \"s3\",\n config=Config(signature_version=signature_version),\n endpoint_url=s3_endpoint_url,\n verify=verify,\n )\n\n def _upload_file(self, s3_client, local_file, bucket, key):\n extra_args = dict()\n guessed_type, guessed_encoding = guess_type(local_file)\n if guessed_type is not None:\n extra_args[\"ContentType\"] = guessed_type\n if guessed_encoding is not None:\n extra_args[\"ContentEncoding\"] = 
guessed_encoding\n environ_extra_args = self.get_s3_file_upload_extra_args()\n if environ_extra_args is not None:\n extra_args.update(environ_extra_args)\n s3_client.upload_file(Filename=local_file, Bucket=bucket, Key=key, ExtraArgs=extra_args)\n\n def log_artifact(self, local_file, artifact_path=None):\n (bucket, dest_path) = data.parse_s3_uri(self.artifact_uri)\n if artifact_path:\n dest_path = posixpath.join(dest_path, artifact_path)\n dest_path = posixpath.join(dest_path, os.path.basename(local_file))\n self._upload_file(\n s3_client=self._get_s3_client(), local_file=local_file, bucket=bucket, key=dest_path\n )\n\n def log_artifacts(self, local_dir, artifact_path=None):\n (bucket, dest_path) = data.parse_s3_uri(self.artifact_uri)\n if artifact_path:\n dest_path = posixpath.join(dest_path, artifact_path)\n s3_client = self._get_s3_client()\n local_dir = os.path.abspath(local_dir)\n for (root, _, filenames) in os.walk(local_dir):\n upload_path = dest_path\n if root != local_dir:\n rel_path = os.path.relpath(root, local_dir)\n rel_path = relative_path_to_artifact_path(rel_path)\n upload_path = posixpath.join(dest_path, rel_path)\n for f in filenames:\n self._upload_file(\n s3_client=s3_client,\n local_file=os.path.join(root, f),\n bucket=bucket,\n key=posixpath.join(upload_path, f),\n )\n\n def list_artifacts(self, path=None):\n (bucket, artifact_path) = data.parse_s3_uri(self.artifact_uri)\n dest_path = artifact_path\n if path:\n dest_path = posixpath.join(dest_path, path)\n infos = []\n prefix = dest_path + \"/\" if dest_path else \"\"\n s3_client = self._get_s3_client()\n paginator = s3_client.get_paginator(\"list_objects_v2\")\n results = paginator.paginate(Bucket=bucket, Prefix=prefix, Delimiter=\"/\")\n for result in results:\n # Subdirectories will be listed as \"common prefixes\" due to the way we made the request\n for obj in result.get(\"CommonPrefixes\", []):\n subdir_path = obj.get(\"Prefix\")\n self._verify_listed_object_contains_artifact_path_prefix(\n listed_object_path=subdir_path, artifact_path=artifact_path\n )\n subdir_rel_path = posixpath.relpath(path=subdir_path, start=artifact_path)\n if subdir_rel_path.endswith(\"/\"):\n subdir_rel_path = subdir_rel_path[:-1]\n infos.append(FileInfo(subdir_rel_path, True, None))\n # Objects listed directly will be files\n for obj in result.get(\"Contents\", []):\n file_path = obj.get(\"Key\")\n self._verify_listed_object_contains_artifact_path_prefix(\n listed_object_path=file_path, artifact_path=artifact_path\n )\n file_rel_path = posixpath.relpath(path=file_path, start=artifact_path)\n file_size = int(obj.get(\"Size\"))\n infos.append(FileInfo(file_rel_path, False, file_size))\n return sorted(infos, key=lambda f: f.path)\n\n @staticmethod\n def _verify_listed_object_contains_artifact_path_prefix(listed_object_path, artifact_path):\n if not listed_object_path.startswith(artifact_path):\n raise MlflowException(\n \"The path of the listed S3 object does not begin with the specified\"\n \" artifact path. Artifact path: {artifact_path}. 
Object path:\"\n \" {object_path}.\".format(\n artifact_path=artifact_path, object_path=listed_object_path\n )\n )\n\n def _download_file(self, remote_file_path, local_path):\n (bucket, s3_root_path) = data.parse_s3_uri(self.artifact_uri)\n s3_full_path = posixpath.join(s3_root_path, remote_file_path)\n s3_client = self._get_s3_client()\n s3_client.download_file(bucket, s3_full_path, local_path)\n\n def delete_artifacts(self, artifact_path=None):\n (bucket, dest_path) = data.parse_s3_uri(self.artifact_uri)\n if artifact_path:\n dest_path = posixpath.join(dest_path, artifact_path)\n\n s3_client = self._get_s3_client()\n list_objects = s3_client.list_objects(Bucket=bucket, Prefix=dest_path).get(\"Contents\", [])\n for to_delete_obj in list_objects:\n file_path = to_delete_obj.get(\"Key\")\n self._verify_listed_object_contains_artifact_path_prefix(\n listed_object_path=file_path, artifact_path=dest_path\n )\n s3_client.delete_object(Bucket=bucket, Key=file_path)\n", "path": "mlflow/store/artifact/s3_artifact_repo.py"}], "after_files": [{"content": "import os\nfrom mimetypes import guess_type\n\nimport posixpath\nimport urllib.parse\n\nfrom mlflow import data\nfrom mlflow.entities import FileInfo\nfrom mlflow.exceptions import MlflowException\nfrom mlflow.store.artifact.artifact_repo import ArtifactRepository\nfrom mlflow.utils.file_utils import relative_path_to_artifact_path\n\n\nclass S3ArtifactRepository(ArtifactRepository):\n \"\"\"Stores artifacts on Amazon S3.\"\"\"\n\n @staticmethod\n def parse_s3_uri(uri):\n \"\"\"Parse an S3 URI, returning (bucket, path)\"\"\"\n parsed = urllib.parse.urlparse(uri)\n if parsed.scheme != \"s3\":\n raise Exception(\"Not an S3 URI: %s\" % uri)\n path = parsed.path\n if path.startswith(\"/\"):\n path = path[1:]\n return parsed.netloc, path\n\n @staticmethod\n def get_s3_file_upload_extra_args():\n import json\n\n s3_file_upload_extra_args = os.environ.get(\"MLFLOW_S3_UPLOAD_EXTRA_ARGS\")\n if s3_file_upload_extra_args:\n return json.loads(s3_file_upload_extra_args)\n else:\n return None\n\n def _get_s3_client(self):\n import boto3\n from botocore.client import Config\n\n s3_endpoint_url = os.environ.get(\"MLFLOW_S3_ENDPOINT_URL\")\n ignore_tls = os.environ.get(\"MLFLOW_S3_IGNORE_TLS\")\n\n do_verify = True\n if ignore_tls:\n do_verify = ignore_tls.lower() not in [\"true\", \"yes\", \"1\"]\n\n # The valid verify argument value is None/False/path to cert bundle file, See\n # https://github.com/boto/boto3/blob/73865126cad3938ca80a2f567a1c79cb248169a7/\n # boto3/session.py#L212\n verify = None if do_verify else False\n\n # NOTE: If you need to specify this env variable, please file an issue at\n # https://github.com/mlflow/mlflow/issues so we know your use-case!\n signature_version = os.environ.get(\"MLFLOW_EXPERIMENTAL_S3_SIGNATURE_VERSION\", \"s3v4\")\n return boto3.client(\n \"s3\",\n config=Config(signature_version=signature_version),\n endpoint_url=s3_endpoint_url,\n verify=verify,\n )\n\n def _upload_file(self, s3_client, local_file, bucket, key):\n extra_args = dict()\n guessed_type, guessed_encoding = guess_type(local_file)\n if guessed_type is not None:\n extra_args[\"ContentType\"] = guessed_type\n if guessed_encoding is not None:\n extra_args[\"ContentEncoding\"] = guessed_encoding\n environ_extra_args = self.get_s3_file_upload_extra_args()\n if environ_extra_args is not None:\n extra_args.update(environ_extra_args)\n s3_client.upload_file(Filename=local_file, Bucket=bucket, Key=key, ExtraArgs=extra_args)\n\n def log_artifact(self, local_file, 
artifact_path=None):\n (bucket, dest_path) = data.parse_s3_uri(self.artifact_uri)\n if artifact_path:\n dest_path = posixpath.join(dest_path, artifact_path)\n dest_path = posixpath.join(dest_path, os.path.basename(local_file))\n self._upload_file(\n s3_client=self._get_s3_client(), local_file=local_file, bucket=bucket, key=dest_path\n )\n\n def log_artifacts(self, local_dir, artifact_path=None):\n (bucket, dest_path) = data.parse_s3_uri(self.artifact_uri)\n if artifact_path:\n dest_path = posixpath.join(dest_path, artifact_path)\n s3_client = self._get_s3_client()\n local_dir = os.path.abspath(local_dir)\n for (root, _, filenames) in os.walk(local_dir):\n upload_path = dest_path\n if root != local_dir:\n rel_path = os.path.relpath(root, local_dir)\n rel_path = relative_path_to_artifact_path(rel_path)\n upload_path = posixpath.join(dest_path, rel_path)\n for f in filenames:\n self._upload_file(\n s3_client=s3_client,\n local_file=os.path.join(root, f),\n bucket=bucket,\n key=posixpath.join(upload_path, f),\n )\n\n def list_artifacts(self, path=None):\n (bucket, artifact_path) = data.parse_s3_uri(self.artifact_uri)\n dest_path = artifact_path\n if path:\n dest_path = posixpath.join(dest_path, path)\n infos = []\n prefix = dest_path + \"/\" if dest_path else \"\"\n s3_client = self._get_s3_client()\n paginator = s3_client.get_paginator(\"list_objects_v2\")\n results = paginator.paginate(Bucket=bucket, Prefix=prefix, Delimiter=\"/\")\n for result in results:\n # Subdirectories will be listed as \"common prefixes\" due to the way we made the request\n for obj in result.get(\"CommonPrefixes\", []):\n subdir_path = obj.get(\"Prefix\")\n self._verify_listed_object_contains_artifact_path_prefix(\n listed_object_path=subdir_path, artifact_path=artifact_path\n )\n subdir_rel_path = posixpath.relpath(path=subdir_path, start=artifact_path)\n if subdir_rel_path.endswith(\"/\"):\n subdir_rel_path = subdir_rel_path[:-1]\n infos.append(FileInfo(subdir_rel_path, True, None))\n # Objects listed directly will be files\n for obj in result.get(\"Contents\", []):\n file_path = obj.get(\"Key\")\n self._verify_listed_object_contains_artifact_path_prefix(\n listed_object_path=file_path, artifact_path=artifact_path\n )\n file_rel_path = posixpath.relpath(path=file_path, start=artifact_path)\n file_size = int(obj.get(\"Size\"))\n infos.append(FileInfo(file_rel_path, False, file_size))\n return sorted(infos, key=lambda f: f.path)\n\n @staticmethod\n def _verify_listed_object_contains_artifact_path_prefix(listed_object_path, artifact_path):\n if not listed_object_path.startswith(artifact_path):\n raise MlflowException(\n \"The path of the listed S3 object does not begin with the specified\"\n \" artifact path. Artifact path: {artifact_path}. 
Object path:\"\n \" {object_path}.\".format(\n artifact_path=artifact_path, object_path=listed_object_path\n )\n )\n\n def _download_file(self, remote_file_path, local_path):\n (bucket, s3_root_path) = data.parse_s3_uri(self.artifact_uri)\n s3_full_path = posixpath.join(s3_root_path, remote_file_path)\n s3_client = self._get_s3_client()\n s3_client.download_file(bucket, s3_full_path, local_path)\n\n def delete_artifacts(self, artifact_path=None):\n (bucket, dest_path) = data.parse_s3_uri(self.artifact_uri)\n if artifact_path:\n dest_path = posixpath.join(dest_path, artifact_path)\n\n s3_client = self._get_s3_client()\n list_objects = s3_client.list_objects(Bucket=bucket, Prefix=dest_path).get(\"Contents\", [])\n for to_delete_obj in list_objects:\n file_path = to_delete_obj.get(\"Key\")\n self._verify_listed_object_contains_artifact_path_prefix(\n listed_object_path=file_path, artifact_path=dest_path\n )\n s3_client.delete_object(Bucket=bucket, Key=file_path)\n", "path": "mlflow/store/artifact/s3_artifact_repo.py"}]}
| 3,046 | 292 |
gh_patches_debug_35495
|
rasdani/github-patches
|
git_diff
|
mne-tools__mne-python-9075
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
use bibtex in eegbci.py
convert references in `mne/datasets/eegbci/eegbci.py` to use footcite / footbibliography
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mne/datasets/eegbci/eegbci.py`
Content:
```
1 # Author: Martin Billinger <[email protected]>
2 # License: BSD Style.
3
4 import os
5 from os import path as op
6
7 from ..utils import _get_path, _do_path_update
8 from ...utils import _fetch_file, _url_to_local_path, verbose
9
10
11 EEGMI_URL = 'https://physionet.org/files/eegmmidb/1.0.0/'
12
13
14 @verbose
15 def data_path(url, path=None, force_update=False, update_path=None,
16 verbose=None):
17 """Get path to local copy of EEGMMI dataset URL.
18
19 This is a low-level function useful for getting a local copy of a
20 remote EEGBCI dataset [1]_ which is available at PhysioNet [2]_.
21
22 Parameters
23 ----------
24 url : str
25 The dataset to use.
26 path : None | str
27 Location of where to look for the EEGBCI data storing location.
28 If None, the environment variable or config parameter
29 ``MNE_DATASETS_EEGBCI_PATH`` is used. If it doesn't exist, the
30 "~/mne_data" directory is used. If the EEGBCI dataset
31 is not found under the given path, the data
32 will be automatically downloaded to the specified folder.
33 force_update : bool
34 Force update of the dataset even if a local copy exists.
35 update_path : bool | None
36 If True, set the MNE_DATASETS_EEGBCI_PATH in mne-python
37 config to the given path. If None, the user is prompted.
38 %(verbose)s
39
40 Returns
41 -------
42 path : list of str
43 Local path to the given data file. This path is contained inside a list
44 of length one, for compatibility.
45
46 Notes
47 -----
48 For example, one could do:
49
50 >>> from mne.datasets import eegbci
51 >>> url = 'http://www.physionet.org/physiobank/database/eegmmidb/'
52 >>> eegbci.data_path(url, os.getenv('HOME') + '/datasets') # doctest:+SKIP
53
54 This would download the given EEGBCI data file to the 'datasets' folder,
55 and prompt the user to save the 'datasets' path to the mne-python config,
56 if it isn't there already.
57
58 References
59 ----------
60 .. [1] Schalk, G., McFarland, D.J., Hinterberger, T., Birbaumer, N.,
61 Wolpaw, J.R. (2004) BCI2000: A General-Purpose Brain-Computer
62 Interface (BCI) System. IEEE TBME 51(6):1034-1043
63 .. [2] Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PCh,
64 Mark RG, Mietus JE, Moody GB, Peng C-K, Stanley HE. (2000)
65 PhysioBank, PhysioToolkit, and PhysioNet: Components of a New
66 Research Resource for Complex Physiologic Signals.
67 Circulation 101(23):e215-e220
68 """ # noqa: E501
69 key = 'MNE_DATASETS_EEGBCI_PATH'
70 name = 'EEGBCI'
71 path = _get_path(path, key, name)
72 destination = _url_to_local_path(url, op.join(path, 'MNE-eegbci-data'))
73 destinations = [destination]
74
75 # Fetch the file
76 if not op.isfile(destination) or force_update:
77 if op.isfile(destination):
78 os.remove(destination)
79 if not op.isdir(op.dirname(destination)):
80 os.makedirs(op.dirname(destination))
81 _fetch_file(url, destination, print_destination=False)
82
83 # Offer to update the path
84 _do_path_update(path, update_path, key, name)
85 return destinations
86
87
88 @verbose
89 def load_data(subject, runs, path=None, force_update=False, update_path=None,
90 base_url=EEGMI_URL, verbose=None): # noqa: D301
91 """Get paths to local copies of EEGBCI dataset files.
92
93 This will fetch data for the EEGBCI dataset [1]_, which is also
94 available at PhysioNet [2]_.
95
96 Parameters
97 ----------
98 subject : int
99 The subject to use. Can be in the range of 1-109 (inclusive).
100 runs : int | list of int
101 The runs to use. See Notes for details.
102 path : None | str
103 Location of where to look for the EEGBCI data storing location.
104 If None, the environment variable or config parameter
105 ``MNE_DATASETS_EEGBCI_PATH`` is used. If it doesn't exist, the
106 "~/mne_data" directory is used. If the EEGBCI dataset
107 is not found under the given path, the data
108 will be automatically downloaded to the specified folder.
109 force_update : bool
110 Force update of the dataset even if a local copy exists.
111 update_path : bool | None
112 If True, set the MNE_DATASETS_EEGBCI_PATH in mne-python
113 config to the given path. If None, the user is prompted.
114 base_url : str
115 The URL root for the data.
116 %(verbose)s
117
118 Returns
119 -------
120 paths : list
121 List of local data paths of the given type.
122
123 Notes
124 -----
125 The run numbers correspond to:
126
127 ========= ===================================
128 run task
129 ========= ===================================
130 1 Baseline, eyes open
131 2 Baseline, eyes closed
132 3, 7, 11 Motor execution: left vs right hand
133 4, 8, 12 Motor imagery: left vs right hand
134 5, 9, 13 Motor execution: hands vs feet
135 6, 10, 14 Motor imagery: hands vs feet
136 ========= ===================================
137
138 For example, one could do::
139
140 >>> from mne.datasets import eegbci
141 >>> eegbci.load_data(1, [4, 10, 14], os.getenv('HOME') + '/datasets') # doctest:+SKIP
142
143 This would download runs 4, 10, and 14 (hand/foot motor imagery) runs from
144 subject 1 in the EEGBCI dataset to the 'datasets' folder, and prompt the
145 user to save the 'datasets' path to the mne-python config, if it isn't
146 there already.
147
148 References
149 ----------
150 .. [1] Schalk, G., McFarland, D.J., Hinterberger, T., Birbaumer, N.,
151 Wolpaw, J.R. (2004) BCI2000: A General-Purpose Brain-Computer
152 Interface (BCI) System. IEEE TBME 51(6):1034-1043
153 .. [2] Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PCh,
154 Mark RG, Mietus JE, Moody GB, Peng C-K, Stanley HE. (2000)
155 PhysioBank, PhysioToolkit, and PhysioNet: Components of a New
156 Research Resource for Complex Physiologic Signals.
157 Circulation 101(23):e215-e220
158 """ # noqa: E501
159 if not hasattr(runs, '__iter__'):
160 runs = [runs]
161
162 data_paths = []
163 for r in runs:
164 url = '{u}S{s:03d}/S{s:03d}R{r:02d}.edf'.format(u=base_url,
165 s=subject, r=r)
166 data_paths.extend(data_path(url, path, force_update, update_path))
167
168 return data_paths
169
170
171 def standardize(raw):
172 """Standardize channel positions and names.
173
174 Parameters
175 ----------
176 raw : instance of Raw
177 The raw data to standardize. Operates in-place.
178 """
179 rename = dict()
180 for name in raw.ch_names:
181 std_name = name.strip('.')
182 std_name = std_name.upper()
183 if std_name.endswith('Z'):
184 std_name = std_name[:-1] + 'z'
185 if std_name.startswith('FP'):
186 std_name = 'Fp' + std_name[2:]
187 rename[name] = std_name
188 raw.rename_channels(rename)
189
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mne/datasets/eegbci/eegbci.py b/mne/datasets/eegbci/eegbci.py
--- a/mne/datasets/eegbci/eegbci.py
+++ b/mne/datasets/eegbci/eegbci.py
@@ -17,7 +17,7 @@
"""Get path to local copy of EEGMMI dataset URL.
This is a low-level function useful for getting a local copy of a
- remote EEGBCI dataset [1]_ which is available at PhysioNet [2]_.
+ remote EEGBCI dataset :footcite:`SchalkEtAl2004` which is available at PhysioNet :footcite:`GoldbergerEtAl2000`.
Parameters
----------
@@ -57,14 +57,7 @@
References
----------
- .. [1] Schalk, G., McFarland, D.J., Hinterberger, T., Birbaumer, N.,
- Wolpaw, J.R. (2004) BCI2000: A General-Purpose Brain-Computer
- Interface (BCI) System. IEEE TBME 51(6):1034-1043
- .. [2] Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PCh,
- Mark RG, Mietus JE, Moody GB, Peng C-K, Stanley HE. (2000)
- PhysioBank, PhysioToolkit, and PhysioNet: Components of a New
- Research Resource for Complex Physiologic Signals.
- Circulation 101(23):e215-e220
+ .. footbibliography::
""" # noqa: E501
key = 'MNE_DATASETS_EEGBCI_PATH'
name = 'EEGBCI'
@@ -90,8 +83,8 @@
base_url=EEGMI_URL, verbose=None): # noqa: D301
"""Get paths to local copies of EEGBCI dataset files.
- This will fetch data for the EEGBCI dataset [1]_, which is also
- available at PhysioNet [2]_.
+ This will fetch data for the EEGBCI dataset :footcite:`SchalkEtAl2004`, which is also
+ available at PhysioNet :footcite:`GoldbergerEtAl2000`.
Parameters
----------
@@ -147,14 +140,7 @@
References
----------
- .. [1] Schalk, G., McFarland, D.J., Hinterberger, T., Birbaumer, N.,
- Wolpaw, J.R. (2004) BCI2000: A General-Purpose Brain-Computer
- Interface (BCI) System. IEEE TBME 51(6):1034-1043
- .. [2] Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PCh,
- Mark RG, Mietus JE, Moody GB, Peng C-K, Stanley HE. (2000)
- PhysioBank, PhysioToolkit, and PhysioNet: Components of a New
- Research Resource for Complex Physiologic Signals.
- Circulation 101(23):e215-e220
+ .. footbibliography::
""" # noqa: E501
if not hasattr(runs, '__iter__'):
runs = [runs]
|
{"golden_diff": "diff --git a/mne/datasets/eegbci/eegbci.py b/mne/datasets/eegbci/eegbci.py\n--- a/mne/datasets/eegbci/eegbci.py\n+++ b/mne/datasets/eegbci/eegbci.py\n@@ -17,7 +17,7 @@\n \"\"\"Get path to local copy of EEGMMI dataset URL.\n \n This is a low-level function useful for getting a local copy of a\n- remote EEGBCI dataset [1]_ which is available at PhysioNet [2]_.\n+ remote EEGBCI dataset :footcite:`SchalkEtAl2004` which is available at PhysioNet :footcite:`GoldbergerEtAl2000`.\n \n Parameters\n ----------\n@@ -57,14 +57,7 @@\n \n References\n ----------\n- .. [1] Schalk, G., McFarland, D.J., Hinterberger, T., Birbaumer, N.,\n- Wolpaw, J.R. (2004) BCI2000: A General-Purpose Brain-Computer\n- Interface (BCI) System. IEEE TBME 51(6):1034-1043\n- .. [2] Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PCh,\n- Mark RG, Mietus JE, Moody GB, Peng C-K, Stanley HE. (2000)\n- PhysioBank, PhysioToolkit, and PhysioNet: Components of a New\n- Research Resource for Complex Physiologic Signals.\n- Circulation 101(23):e215-e220\n+ .. footbibliography::\n \"\"\" # noqa: E501\n key = 'MNE_DATASETS_EEGBCI_PATH'\n name = 'EEGBCI'\n@@ -90,8 +83,8 @@\n base_url=EEGMI_URL, verbose=None): # noqa: D301\n \"\"\"Get paths to local copies of EEGBCI dataset files.\n \n- This will fetch data for the EEGBCI dataset [1]_, which is also\n- available at PhysioNet [2]_.\n+ This will fetch data for the EEGBCI dataset :footcite:`SchalkEtAl2004`, which is also\n+ available at PhysioNet :footcite:`GoldbergerEtAl2000`.\n \n Parameters\n ----------\n@@ -147,14 +140,7 @@\n \n References\n ----------\n- .. [1] Schalk, G., McFarland, D.J., Hinterberger, T., Birbaumer, N.,\n- Wolpaw, J.R. (2004) BCI2000: A General-Purpose Brain-Computer\n- Interface (BCI) System. IEEE TBME 51(6):1034-1043\n- .. [2] Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PCh,\n- Mark RG, Mietus JE, Moody GB, Peng C-K, Stanley HE. (2000)\n- PhysioBank, PhysioToolkit, and PhysioNet: Components of a New\n- Research Resource for Complex Physiologic Signals.\n- Circulation 101(23):e215-e220\n+ .. footbibliography::\n \"\"\" # noqa: E501\n if not hasattr(runs, '__iter__'):\n runs = [runs]\n", "issue": "use bibtex in eegbci.py\nconvert references in `mne/datasets/eegbci/eegbci.py` to use footcite / footbibliography\r\n\n", "before_files": [{"content": "# Author: Martin Billinger <[email protected]>\n# License: BSD Style.\n\nimport os\nfrom os import path as op\n\nfrom ..utils import _get_path, _do_path_update\nfrom ...utils import _fetch_file, _url_to_local_path, verbose\n\n\nEEGMI_URL = 'https://physionet.org/files/eegmmidb/1.0.0/'\n\n\n@verbose\ndef data_path(url, path=None, force_update=False, update_path=None,\n verbose=None):\n \"\"\"Get path to local copy of EEGMMI dataset URL.\n\n This is a low-level function useful for getting a local copy of a\n remote EEGBCI dataset [1]_ which is available at PhysioNet [2]_.\n\n Parameters\n ----------\n url : str\n The dataset to use.\n path : None | str\n Location of where to look for the EEGBCI data storing location.\n If None, the environment variable or config parameter\n ``MNE_DATASETS_EEGBCI_PATH`` is used. If it doesn't exist, the\n \"~/mne_data\" directory is used. If the EEGBCI dataset\n is not found under the given path, the data\n will be automatically downloaded to the specified folder.\n force_update : bool\n Force update of the dataset even if a local copy exists.\n update_path : bool | None\n If True, set the MNE_DATASETS_EEGBCI_PATH in mne-python\n config to the given path. 
If None, the user is prompted.\n %(verbose)s\n\n Returns\n -------\n path : list of str\n Local path to the given data file. This path is contained inside a list\n of length one, for compatibility.\n\n Notes\n -----\n For example, one could do:\n\n >>> from mne.datasets import eegbci\n >>> url = 'http://www.physionet.org/physiobank/database/eegmmidb/'\n >>> eegbci.data_path(url, os.getenv('HOME') + '/datasets') # doctest:+SKIP\n\n This would download the given EEGBCI data file to the 'datasets' folder,\n and prompt the user to save the 'datasets' path to the mne-python config,\n if it isn't there already.\n\n References\n ----------\n .. [1] Schalk, G., McFarland, D.J., Hinterberger, T., Birbaumer, N.,\n Wolpaw, J.R. (2004) BCI2000: A General-Purpose Brain-Computer\n Interface (BCI) System. IEEE TBME 51(6):1034-1043\n .. [2] Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PCh,\n Mark RG, Mietus JE, Moody GB, Peng C-K, Stanley HE. (2000)\n PhysioBank, PhysioToolkit, and PhysioNet: Components of a New\n Research Resource for Complex Physiologic Signals.\n Circulation 101(23):e215-e220\n \"\"\" # noqa: E501\n key = 'MNE_DATASETS_EEGBCI_PATH'\n name = 'EEGBCI'\n path = _get_path(path, key, name)\n destination = _url_to_local_path(url, op.join(path, 'MNE-eegbci-data'))\n destinations = [destination]\n\n # Fetch the file\n if not op.isfile(destination) or force_update:\n if op.isfile(destination):\n os.remove(destination)\n if not op.isdir(op.dirname(destination)):\n os.makedirs(op.dirname(destination))\n _fetch_file(url, destination, print_destination=False)\n\n # Offer to update the path\n _do_path_update(path, update_path, key, name)\n return destinations\n\n\n@verbose\ndef load_data(subject, runs, path=None, force_update=False, update_path=None,\n base_url=EEGMI_URL, verbose=None): # noqa: D301\n \"\"\"Get paths to local copies of EEGBCI dataset files.\n\n This will fetch data for the EEGBCI dataset [1]_, which is also\n available at PhysioNet [2]_.\n\n Parameters\n ----------\n subject : int\n The subject to use. Can be in the range of 1-109 (inclusive).\n runs : int | list of int\n The runs to use. See Notes for details.\n path : None | str\n Location of where to look for the EEGBCI data storing location.\n If None, the environment variable or config parameter\n ``MNE_DATASETS_EEGBCI_PATH`` is used. If it doesn't exist, the\n \"~/mne_data\" directory is used. If the EEGBCI dataset\n is not found under the given path, the data\n will be automatically downloaded to the specified folder.\n force_update : bool\n Force update of the dataset even if a local copy exists.\n update_path : bool | None\n If True, set the MNE_DATASETS_EEGBCI_PATH in mne-python\n config to the given path. 
If None, the user is prompted.\n base_url : str\n The URL root for the data.\n %(verbose)s\n\n Returns\n -------\n paths : list\n List of local data paths of the given type.\n\n Notes\n -----\n The run numbers correspond to:\n\n ========= ===================================\n run task\n ========= ===================================\n 1 Baseline, eyes open\n 2 Baseline, eyes closed\n 3, 7, 11 Motor execution: left vs right hand\n 4, 8, 12 Motor imagery: left vs right hand\n 5, 9, 13 Motor execution: hands vs feet\n 6, 10, 14 Motor imagery: hands vs feet\n ========= ===================================\n\n For example, one could do::\n\n >>> from mne.datasets import eegbci\n >>> eegbci.load_data(1, [4, 10, 14], os.getenv('HOME') + '/datasets') # doctest:+SKIP\n\n This would download runs 4, 10, and 14 (hand/foot motor imagery) runs from\n subject 1 in the EEGBCI dataset to the 'datasets' folder, and prompt the\n user to save the 'datasets' path to the mne-python config, if it isn't\n there already.\n\n References\n ----------\n .. [1] Schalk, G., McFarland, D.J., Hinterberger, T., Birbaumer, N.,\n Wolpaw, J.R. (2004) BCI2000: A General-Purpose Brain-Computer\n Interface (BCI) System. IEEE TBME 51(6):1034-1043\n .. [2] Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PCh,\n Mark RG, Mietus JE, Moody GB, Peng C-K, Stanley HE. (2000)\n PhysioBank, PhysioToolkit, and PhysioNet: Components of a New\n Research Resource for Complex Physiologic Signals.\n Circulation 101(23):e215-e220\n \"\"\" # noqa: E501\n if not hasattr(runs, '__iter__'):\n runs = [runs]\n\n data_paths = []\n for r in runs:\n url = '{u}S{s:03d}/S{s:03d}R{r:02d}.edf'.format(u=base_url,\n s=subject, r=r)\n data_paths.extend(data_path(url, path, force_update, update_path))\n\n return data_paths\n\n\ndef standardize(raw):\n \"\"\"Standardize channel positions and names.\n\n Parameters\n ----------\n raw : instance of Raw\n The raw data to standardize. Operates in-place.\n \"\"\"\n rename = dict()\n for name in raw.ch_names:\n std_name = name.strip('.')\n std_name = std_name.upper()\n if std_name.endswith('Z'):\n std_name = std_name[:-1] + 'z'\n if std_name.startswith('FP'):\n std_name = 'Fp' + std_name[2:]\n rename[name] = std_name\n raw.rename_channels(rename)\n", "path": "mne/datasets/eegbci/eegbci.py"}], "after_files": [{"content": "# Author: Martin Billinger <[email protected]>\n# License: BSD Style.\n\nimport os\nfrom os import path as op\n\nfrom ..utils import _get_path, _do_path_update\nfrom ...utils import _fetch_file, _url_to_local_path, verbose\n\n\nEEGMI_URL = 'https://physionet.org/files/eegmmidb/1.0.0/'\n\n\n@verbose\ndef data_path(url, path=None, force_update=False, update_path=None,\n verbose=None):\n \"\"\"Get path to local copy of EEGMMI dataset URL.\n\n This is a low-level function useful for getting a local copy of a\n remote EEGBCI dataset :footcite:`SchalkEtAl2004` which is available at PhysioNet :footcite:`GoldbergerEtAl2000`.\n\n Parameters\n ----------\n url : str\n The dataset to use.\n path : None | str\n Location of where to look for the EEGBCI data storing location.\n If None, the environment variable or config parameter\n ``MNE_DATASETS_EEGBCI_PATH`` is used. If it doesn't exist, the\n \"~/mne_data\" directory is used. 
If the EEGBCI dataset\n is not found under the given path, the data\n will be automatically downloaded to the specified folder.\n force_update : bool\n Force update of the dataset even if a local copy exists.\n update_path : bool | None\n If True, set the MNE_DATASETS_EEGBCI_PATH in mne-python\n config to the given path. If None, the user is prompted.\n %(verbose)s\n\n Returns\n -------\n path : list of str\n Local path to the given data file. This path is contained inside a list\n of length one, for compatibility.\n\n Notes\n -----\n For example, one could do:\n\n >>> from mne.datasets import eegbci\n >>> url = 'http://www.physionet.org/physiobank/database/eegmmidb/'\n >>> eegbci.data_path(url, os.getenv('HOME') + '/datasets') # doctest:+SKIP\n\n This would download the given EEGBCI data file to the 'datasets' folder,\n and prompt the user to save the 'datasets' path to the mne-python config,\n if it isn't there already.\n\n References\n ----------\n .. footbibliography::\n \"\"\" # noqa: E501\n key = 'MNE_DATASETS_EEGBCI_PATH'\n name = 'EEGBCI'\n path = _get_path(path, key, name)\n destination = _url_to_local_path(url, op.join(path, 'MNE-eegbci-data'))\n destinations = [destination]\n\n # Fetch the file\n if not op.isfile(destination) or force_update:\n if op.isfile(destination):\n os.remove(destination)\n if not op.isdir(op.dirname(destination)):\n os.makedirs(op.dirname(destination))\n _fetch_file(url, destination, print_destination=False)\n\n # Offer to update the path\n _do_path_update(path, update_path, key, name)\n return destinations\n\n\n@verbose\ndef load_data(subject, runs, path=None, force_update=False, update_path=None,\n base_url=EEGMI_URL, verbose=None): # noqa: D301\n \"\"\"Get paths to local copies of EEGBCI dataset files.\n\n This will fetch data for the EEGBCI dataset :footcite:`SchalkEtAl2004`, which is also\n available at PhysioNet :footcite:`GoldbergerEtAl2000`.\n\n Parameters\n ----------\n subject : int\n The subject to use. Can be in the range of 1-109 (inclusive).\n runs : int | list of int\n The runs to use. See Notes for details.\n path : None | str\n Location of where to look for the EEGBCI data storing location.\n If None, the environment variable or config parameter\n ``MNE_DATASETS_EEGBCI_PATH`` is used. If it doesn't exist, the\n \"~/mne_data\" directory is used. If the EEGBCI dataset\n is not found under the given path, the data\n will be automatically downloaded to the specified folder.\n force_update : bool\n Force update of the dataset even if a local copy exists.\n update_path : bool | None\n If True, set the MNE_DATASETS_EEGBCI_PATH in mne-python\n config to the given path. 
If None, the user is prompted.\n base_url : str\n The URL root for the data.\n %(verbose)s\n\n Returns\n -------\n paths : list\n List of local data paths of the given type.\n\n Notes\n -----\n The run numbers correspond to:\n\n ========= ===================================\n run task\n ========= ===================================\n 1 Baseline, eyes open\n 2 Baseline, eyes closed\n 3, 7, 11 Motor execution: left vs right hand\n 4, 8, 12 Motor imagery: left vs right hand\n 5, 9, 13 Motor execution: hands vs feet\n 6, 10, 14 Motor imagery: hands vs feet\n ========= ===================================\n\n For example, one could do::\n\n >>> from mne.datasets import eegbci\n >>> eegbci.load_data(1, [4, 10, 14], os.getenv('HOME') + '/datasets') # doctest:+SKIP\n\n This would download runs 4, 10, and 14 (hand/foot motor imagery) runs from\n subject 1 in the EEGBCI dataset to the 'datasets' folder, and prompt the\n user to save the 'datasets' path to the mne-python config, if it isn't\n there already.\n\n References\n ----------\n .. footbibliography::\n \"\"\" # noqa: E501\n if not hasattr(runs, '__iter__'):\n runs = [runs]\n\n data_paths = []\n for r in runs:\n url = '{u}S{s:03d}/S{s:03d}R{r:02d}.edf'.format(u=base_url,\n s=subject, r=r)\n data_paths.extend(data_path(url, path, force_update, update_path))\n\n return data_paths\n\n\ndef standardize(raw):\n \"\"\"Standardize channel positions and names.\n\n Parameters\n ----------\n raw : instance of Raw\n The raw data to standardize. Operates in-place.\n \"\"\"\n rename = dict()\n for name in raw.ch_names:\n std_name = name.strip('.')\n std_name = std_name.upper()\n if std_name.endswith('Z'):\n std_name = std_name[:-1] + 'z'\n if std_name.startswith('FP'):\n std_name = 'Fp' + std_name[2:]\n rename[name] = std_name\n raw.rename_channels(rename)\n", "path": "mne/datasets/eegbci/eegbci.py"}]}
| 2,662 | 818 |
gh_patches_debug_32134
|
rasdani/github-patches
|
git_diff
|
GPflow__GPflow-1860
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug check_shapes with keras layers
Hi @jesnie,
I've been using the `check_shapes` decorator in eager mode for the last couple of days and I really love it!
I've encountered a small inconvenience when using it in combination with Keras layers. The following snippets show the weird behaviour:
1. Custom keras layer without `check_shapes` decorator runs:
```python
from gpflow.experimental.check_shapes import check_shapes
import tensorflow as tf
class MyModel(tf.keras.Model):
def call(self, x, y):
return x + y
layer = MyModel()
x = tf.random.normal((32, 3))
y = tf.random.normal((32, 1))
layer(x, y)
```
2. Adding `check_shapes` decorator breaks the code:
```python
from gpflow.experimental.check_shapes import check_shapes
import tensorflow as tf
class MyModel(tf.keras.Model):
@check_shapes(
"x: [batch, input_dim]",
"y: [batch, 1]",
"return: [batch, input_dim]",
)
def call(self, x, y):
return x + y
layer = MyModel()
x = tf.random.normal((32, 3))
y = tf.random.normal((32, 1))
layer(x, y)
```
as now the layer expects a `training` kwarg to be added. The following error is thrown:
```
TypeError: call() got an unexpected keyword argument 'training'
```
3. After adding a `training` kwarg the code runs again. However, this isn't necessary when only using "native" keras.
```python
from gpflow.experimental.check_shapes import check_shapes
import tensorflow as tf
class MyModel(tf.keras.Model):
@check_shapes(
"x: [batch, input_dim]",
"y: [batch, 1]",
"return: [batch, input_dim]",
)
def call(self, x, y, training = None):
return x + y
layer = MyModel()
x = tf.random.normal((32, 3))
y = tf.random.normal((32, 1))
layer(x, y)
```
I'm using the following versions with python 3.8.13:
```
gpflow 2.4.0
tensorflow 2.5.3
```
Many thanks,
Vincent
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gpflow/experimental/check_shapes/decorator.py`
Content:
```
1 # Copyright 2022 The GPflow Contributors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 Decorator for checking the shapes of function using tf Tensors.
16 """
17 import inspect
18 from functools import wraps
19 from typing import Any, Callable, Sequence, cast
20
21 from ..utils import experimental
22 from .accessors import set_check_shapes
23 from .argument_ref import RESULT_TOKEN
24 from .base_types import C
25 from .checker import ShapeChecker
26 from .checker_context import set_shape_checker
27 from .config import get_enable_check_shapes
28 from .error_contexts import (
29 ConditionContext,
30 FunctionCallContext,
31 FunctionDefinitionContext,
32 NoteContext,
33 ParallelContext,
34 StackContext,
35 )
36 from .parser import parse_and_rewrite_docstring, parse_function_spec
37 from .specs import ParsedArgumentSpec
38
39
40 def null_check_shapes(func: C) -> C:
41 """
42 Annotates the given function so that it looks like it has shape checks, but without actually
43 checking anything.
44
45 This is necessary not to break `@inherit_check_shapes` when shape checking is disabled.
46 """
47 set_check_shapes(func, null_check_shapes)
48 return func
49
50
51 @experimental
52 def check_shapes(*specs: str) -> Callable[[C], C]:
53 """
54 Decorator that checks the shapes of tensor arguments.
55
56 :param specs: Specification of arguments to check. See: `Check specification`_.
57 """
58 if not get_enable_check_shapes():
59 return null_check_shapes
60
61 unbound_error_context = FunctionCallContext(check_shapes)
62
63 func_spec = parse_function_spec(specs, unbound_error_context)
64
65 pre_specs = [spec for spec in func_spec.arguments if not spec.argument_ref.is_result]
66 post_specs = [spec for spec in func_spec.arguments if spec.argument_ref.is_result]
67 note_specs = func_spec.notes
68
69 def _check_shapes(func: C) -> C:
70 bound_error_context = FunctionDefinitionContext(func)
71 signature = inspect.signature(func)
72
73 @wraps(func)
74 def wrapped(*args: Any, **kwargs: Any) -> Any:
75 if not get_enable_check_shapes():
76 return func(*args, **kwargs)
77
78 try:
79 bound_arguments = signature.bind(*args, **kwargs)
80 except TypeError as e:
81 # TypeError is raised if *args and **kwargs don't actually match the arguments of
82 # `func`. In that case we just call `func` normally, which will also result in an
83 # error, but an error with the error message the user is used to.
84 func(*args, **kwargs)
85 raise AssertionError(
86 "The above line should fail so this line should never be reached."
87 ) from e
88 bound_arguments.apply_defaults()
89 arg_map = bound_arguments.arguments
90
91 checker = ShapeChecker()
92 for note_spec in note_specs:
93 checker.add_context(StackContext(bound_error_context, NoteContext(note_spec)))
94
95 def _check_specs(specs: Sequence[ParsedArgumentSpec]) -> None:
96 processed_specs = []
97
98 for arg_spec in specs:
99 for arg_value, relative_arg_context in arg_spec.argument_ref.get(
100 arg_map, bound_error_context
101 ):
102 arg_context = StackContext(bound_error_context, relative_arg_context)
103
104 if arg_spec.condition is not None:
105 condition, condition_context = arg_spec.condition.get(
106 arg_map,
107 StackContext(arg_context, ConditionContext(arg_spec.condition)),
108 )
109 if not condition:
110 continue
111 arg_context = StackContext(
112 bound_error_context,
113 ParallelContext(
114 (
115 StackContext(
116 relative_arg_context,
117 StackContext(
118 ConditionContext(arg_spec.condition),
119 condition_context,
120 ),
121 ),
122 )
123 ),
124 )
125
126 processed_specs.append((arg_value, arg_spec.tensor, arg_context))
127
128 checker.check_shapes(processed_specs)
129
130 _check_specs(pre_specs)
131
132 with set_shape_checker(checker):
133 result = func(*args, **kwargs)
134 arg_map[RESULT_TOKEN] = result
135
136 _check_specs(post_specs)
137
138 return result
139
140 set_check_shapes(wrapped, _check_shapes)
141 wrapped.__doc__ = parse_and_rewrite_docstring(
142 wrapped.__doc__, func_spec, bound_error_context
143 )
144 return cast(C, wrapped)
145
146 return _check_shapes
147
```
Path: `gpflow/experimental/check_shapes/exceptions.py`
Content:
```
1 # Copyright 2022 The GPflow Contributors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # pylint: disable=broad-except
16
17 """
18 Exceptions generated by `check_shapes`.
19
20 These rely heavily on the infrastructure in `error_contexts.py`.
21 """
22 from .error_contexts import ErrorContext, MessageBuilder
23
24
25 class CheckShapesError(Exception):
26 """
27 Common super class for `check_shapes` errors.
28 """
29
30 error_message: str
31
32 def __init__(self, context: ErrorContext) -> None:
33 builder = MessageBuilder()
34 builder.add_line("")
35 builder.add_line(self.error_message)
36 with builder.indent() as b:
37 context.print(b)
38 super().__init__(builder.build())
39
40 self.context = context
41
42
43 class VariableTypeError(CheckShapesError):
44 """
45 Error raised if a variable is used both as a rank-1 and a variable-rank variable.
46 """
47
48 error_message = (
49 "Cannot use the same variable to bind both a single dimension"
50 " and a variable number of dimensions."
51 )
52
53
54 class SpecificationParseError(CheckShapesError):
55 """
56 Error raised if there was an error parsing the shape specification.
57 """
58
59 error_message = "Unable to parse shape specification."
60
61
62 class DocstringParseError(CheckShapesError):
63 """
64 Error raised if there was an error parsing the shape specification.
65 """
66
67 error_message = "Unable to parse docstring."
68
69
70 class ArgumentReferenceError(CheckShapesError):
71 """
72 Error raised if the argument to check the shape of could not be resolved.
73 """
74
75 error_message = "Unable to resolve argument / missing argument."
76
77
78 class ShapeMismatchError(CheckShapesError):
79 """
80 Error raised if a function is called with tensors of the wrong shape.
81 """
82
83 error_message = "Tensor shape mismatch."
84
85
86 class NoShapeError(CheckShapesError):
87 """
88 Error raised if we are trying to get the shape of an object that does not have a shape.
89 """
90
91 error_message = "Unable to determine shape of object."
92
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gpflow/experimental/check_shapes/decorator.py b/gpflow/experimental/check_shapes/decorator.py
--- a/gpflow/experimental/check_shapes/decorator.py
+++ b/gpflow/experimental/check_shapes/decorator.py
@@ -15,9 +15,11 @@
Decorator for checking the shapes of function using tf Tensors.
"""
import inspect
-from functools import wraps
+from functools import update_wrapper
from typing import Any, Callable, Sequence, cast
+import tensorflow as tf
+
from ..utils import experimental
from .accessors import set_check_shapes
from .argument_ref import RESULT_TOKEN
@@ -70,7 +72,6 @@
bound_error_context = FunctionDefinitionContext(func)
signature = inspect.signature(func)
- @wraps(func)
def wrapped(*args: Any, **kwargs: Any) -> Any:
if not get_enable_check_shapes():
return func(*args, **kwargs)
@@ -137,10 +138,12 @@
return result
+ # Make TensorFlow understand our decoration:
+ tf.compat.v1.flags.tf_decorator.make_decorator(func, wrapped)
+
+ update_wrapper(wrapped, func)
set_check_shapes(wrapped, _check_shapes)
- wrapped.__doc__ = parse_and_rewrite_docstring(
- wrapped.__doc__, func_spec, bound_error_context
- )
+ wrapped.__doc__ = parse_and_rewrite_docstring(func.__doc__, func_spec, bound_error_context)
return cast(C, wrapped)
return _check_shapes
diff --git a/gpflow/experimental/check_shapes/exceptions.py b/gpflow/experimental/check_shapes/exceptions.py
--- a/gpflow/experimental/check_shapes/exceptions.py
+++ b/gpflow/experimental/check_shapes/exceptions.py
@@ -39,6 +39,9 @@
self.context = context
+ # Prevent Keras from rewriting our exception:
+ self._keras_call_info_injected = True
+
class VariableTypeError(CheckShapesError):
"""
|
{"golden_diff": "diff --git a/gpflow/experimental/check_shapes/decorator.py b/gpflow/experimental/check_shapes/decorator.py\n--- a/gpflow/experimental/check_shapes/decorator.py\n+++ b/gpflow/experimental/check_shapes/decorator.py\n@@ -15,9 +15,11 @@\n Decorator for checking the shapes of function using tf Tensors.\n \"\"\"\n import inspect\n-from functools import wraps\n+from functools import update_wrapper\n from typing import Any, Callable, Sequence, cast\n \n+import tensorflow as tf\n+\n from ..utils import experimental\n from .accessors import set_check_shapes\n from .argument_ref import RESULT_TOKEN\n@@ -70,7 +72,6 @@\n bound_error_context = FunctionDefinitionContext(func)\n signature = inspect.signature(func)\n \n- @wraps(func)\n def wrapped(*args: Any, **kwargs: Any) -> Any:\n if not get_enable_check_shapes():\n return func(*args, **kwargs)\n@@ -137,10 +138,12 @@\n \n return result\n \n+ # Make TensorFlow understand our decoration:\n+ tf.compat.v1.flags.tf_decorator.make_decorator(func, wrapped)\n+\n+ update_wrapper(wrapped, func)\n set_check_shapes(wrapped, _check_shapes)\n- wrapped.__doc__ = parse_and_rewrite_docstring(\n- wrapped.__doc__, func_spec, bound_error_context\n- )\n+ wrapped.__doc__ = parse_and_rewrite_docstring(func.__doc__, func_spec, bound_error_context)\n return cast(C, wrapped)\n \n return _check_shapes\ndiff --git a/gpflow/experimental/check_shapes/exceptions.py b/gpflow/experimental/check_shapes/exceptions.py\n--- a/gpflow/experimental/check_shapes/exceptions.py\n+++ b/gpflow/experimental/check_shapes/exceptions.py\n@@ -39,6 +39,9 @@\n \n self.context = context\n \n+ # Prevent Keras from rewriting our exception:\n+ self._keras_call_info_injected = True\n+\n \n class VariableTypeError(CheckShapesError):\n \"\"\"\n", "issue": "Bug check_shapes with keras layers\nHi @jesnie,\r\n\r\nI've been using the `check_shapes` decorator in eager the last couple of days and I really love it!\r\n\r\nI've encountered a small inconvenience when using it in combination with Keras layers. The following snippets show the weird behaviour:\r\n\r\n1. Custom keras layer without `check_shapes` decorator runs:\r\n```python\r\nfrom gpflow.experimental.check_shapes import check_shapes\r\nimport tensorflow as tf\r\n\r\nclass MyModel(tf.keras.Model):\r\n\r\n def call(self, x, y):\r\n return x + y\r\n\r\nlayer = MyModel()\r\nx = tf.random.normal((32, 3))\r\ny = tf.random.normal((32, 1))\r\nlayer(x, y)\r\n```\r\n\r\n2. Adding `check_shapes` decorator breaks the code:\r\n```python\r\nfrom gpflow.experimental.check_shapes import check_shapes\r\nimport tensorflow as tf\r\n\r\nclass MyModel(tf.keras.Model):\r\n\r\n @check_shapes(\r\n \"x: [batch, input_dim]\",\r\n \"y: [batch, 1]\",\r\n \"return: [batch, input_dim]\",\r\n )\r\n def call(self, x, y):\r\n return x + y\r\n\r\nlayer = MyModel()\r\nx = tf.random.normal((32, 3))\r\ny = tf.random.normal((32, 1))\r\nlayer(x, y)\r\n```\r\nas now the layer expects a `training` kwarg to be added. The following error is thrown:\r\n```\r\n[TypeError: call() got an unexpected keyword argument 'training']()\r\n```\r\n\r\n3. After adding a `training` kwarg the code runs again. 
However, this isn't necessary when only using \"native\" keras.\r\n```python\r\nfrom gpflow.experimental.check_shapes import check_shapes\r\nimport tensorflow as tf\r\n\r\nclass MyModel(tf.keras.Model):\r\n\r\n @check_shapes(\r\n \"x: [batch, input_dim]\",\r\n \"y: [batch, 1]\",\r\n \"return: [batch, input_dim]\",\r\n )\r\n def call(self, x, y, training = None):\r\n return x + y\r\n\r\nlayer = MyModel()\r\nx = tf.random.normal((32, 3))\r\ny = tf.random.normal((32, 1))\r\nlayer(x, y)\r\n```\r\n\r\nI'm using the following versions with python 3.8.13:\r\n```\r\ngpflow 2.4.0\r\ntensorflow 2.5.3\r\n```\r\n\r\nMany thanks,\r\nVincent\n", "before_files": [{"content": "# Copyright 2022 The GPflow Contributors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nDecorator for checking the shapes of function using tf Tensors.\n\"\"\"\nimport inspect\nfrom functools import wraps\nfrom typing import Any, Callable, Sequence, cast\n\nfrom ..utils import experimental\nfrom .accessors import set_check_shapes\nfrom .argument_ref import RESULT_TOKEN\nfrom .base_types import C\nfrom .checker import ShapeChecker\nfrom .checker_context import set_shape_checker\nfrom .config import get_enable_check_shapes\nfrom .error_contexts import (\n ConditionContext,\n FunctionCallContext,\n FunctionDefinitionContext,\n NoteContext,\n ParallelContext,\n StackContext,\n)\nfrom .parser import parse_and_rewrite_docstring, parse_function_spec\nfrom .specs import ParsedArgumentSpec\n\n\ndef null_check_shapes(func: C) -> C:\n \"\"\"\n Annotates the given function so that it looks like it has shape checks, but without actually\n checking anything.\n\n This is necessary not to break `@inherit_check_shapes` when shape checking is disabled.\n \"\"\"\n set_check_shapes(func, null_check_shapes)\n return func\n\n\n@experimental\ndef check_shapes(*specs: str) -> Callable[[C], C]:\n \"\"\"\n Decorator that checks the shapes of tensor arguments.\n\n :param specs: Specification of arguments to check. See: `Check specification`_.\n \"\"\"\n if not get_enable_check_shapes():\n return null_check_shapes\n\n unbound_error_context = FunctionCallContext(check_shapes)\n\n func_spec = parse_function_spec(specs, unbound_error_context)\n\n pre_specs = [spec for spec in func_spec.arguments if not spec.argument_ref.is_result]\n post_specs = [spec for spec in func_spec.arguments if spec.argument_ref.is_result]\n note_specs = func_spec.notes\n\n def _check_shapes(func: C) -> C:\n bound_error_context = FunctionDefinitionContext(func)\n signature = inspect.signature(func)\n\n @wraps(func)\n def wrapped(*args: Any, **kwargs: Any) -> Any:\n if not get_enable_check_shapes():\n return func(*args, **kwargs)\n\n try:\n bound_arguments = signature.bind(*args, **kwargs)\n except TypeError as e:\n # TypeError is raised if *args and **kwargs don't actually match the arguments of\n # `func`. 
In that case we just call `func` normally, which will also result in an\n # error, but an error with the error message the user is used to.\n func(*args, **kwargs)\n raise AssertionError(\n \"The above line should fail so this line should never be reached.\"\n ) from e\n bound_arguments.apply_defaults()\n arg_map = bound_arguments.arguments\n\n checker = ShapeChecker()\n for note_spec in note_specs:\n checker.add_context(StackContext(bound_error_context, NoteContext(note_spec)))\n\n def _check_specs(specs: Sequence[ParsedArgumentSpec]) -> None:\n processed_specs = []\n\n for arg_spec in specs:\n for arg_value, relative_arg_context in arg_spec.argument_ref.get(\n arg_map, bound_error_context\n ):\n arg_context = StackContext(bound_error_context, relative_arg_context)\n\n if arg_spec.condition is not None:\n condition, condition_context = arg_spec.condition.get(\n arg_map,\n StackContext(arg_context, ConditionContext(arg_spec.condition)),\n )\n if not condition:\n continue\n arg_context = StackContext(\n bound_error_context,\n ParallelContext(\n (\n StackContext(\n relative_arg_context,\n StackContext(\n ConditionContext(arg_spec.condition),\n condition_context,\n ),\n ),\n )\n ),\n )\n\n processed_specs.append((arg_value, arg_spec.tensor, arg_context))\n\n checker.check_shapes(processed_specs)\n\n _check_specs(pre_specs)\n\n with set_shape_checker(checker):\n result = func(*args, **kwargs)\n arg_map[RESULT_TOKEN] = result\n\n _check_specs(post_specs)\n\n return result\n\n set_check_shapes(wrapped, _check_shapes)\n wrapped.__doc__ = parse_and_rewrite_docstring(\n wrapped.__doc__, func_spec, bound_error_context\n )\n return cast(C, wrapped)\n\n return _check_shapes\n", "path": "gpflow/experimental/check_shapes/decorator.py"}, {"content": "# Copyright 2022 The GPflow Contributors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# pylint: disable=broad-except\n\n\"\"\"\nExceptions generated by `check_shapes`.\n\nThese rely heavily on the infrastructure in `error_contexts.py`.\n\"\"\"\nfrom .error_contexts import ErrorContext, MessageBuilder\n\n\nclass CheckShapesError(Exception):\n \"\"\"\n Common super class for `check_shapes` errors.\n \"\"\"\n\n error_message: str\n\n def __init__(self, context: ErrorContext) -> None:\n builder = MessageBuilder()\n builder.add_line(\"\")\n builder.add_line(self.error_message)\n with builder.indent() as b:\n context.print(b)\n super().__init__(builder.build())\n\n self.context = context\n\n\nclass VariableTypeError(CheckShapesError):\n \"\"\"\n Error raised if a variable is used both as a rank-1 and a variable-rank variable.\n \"\"\"\n\n error_message = (\n \"Cannot use the same variable to bind both a single dimension\"\n \" and a variable number of dimensions.\"\n )\n\n\nclass SpecificationParseError(CheckShapesError):\n \"\"\"\n Error raised if there was an error parsing the shape specification.\n \"\"\"\n\n error_message = \"Unable to parse shape specification.\"\n\n\nclass DocstringParseError(CheckShapesError):\n \"\"\"\n Error raised if there was an error parsing the shape specification.\n \"\"\"\n\n error_message = \"Unable to parse docstring.\"\n\n\nclass ArgumentReferenceError(CheckShapesError):\n \"\"\"\n Error raised if the argument to check the shape of could not be resolved.\n \"\"\"\n\n error_message = \"Unable to resolve argument / missing argument.\"\n\n\nclass ShapeMismatchError(CheckShapesError):\n \"\"\"\n Error raised if a function is called with tensors of the wrong shape.\n \"\"\"\n\n error_message = \"Tensor shape mismatch.\"\n\n\nclass NoShapeError(CheckShapesError):\n \"\"\"\n Error raised if we are trying to get the shape of an object that does not have a shape.\n \"\"\"\n\n error_message = \"Unable to determine shape of object.\"\n", "path": "gpflow/experimental/check_shapes/exceptions.py"}], "after_files": [{"content": "# Copyright 2022 The GPflow Contributors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nDecorator for checking the shapes of function using tf Tensors.\n\"\"\"\nimport inspect\nfrom functools import update_wrapper\nfrom typing import Any, Callable, Sequence, cast\n\nimport tensorflow as tf\n\nfrom ..utils import experimental\nfrom .accessors import set_check_shapes\nfrom .argument_ref import RESULT_TOKEN\nfrom .base_types import C\nfrom .checker import ShapeChecker\nfrom .checker_context import set_shape_checker\nfrom .config import get_enable_check_shapes\nfrom .error_contexts import (\n ConditionContext,\n FunctionCallContext,\n FunctionDefinitionContext,\n NoteContext,\n ParallelContext,\n StackContext,\n)\nfrom .parser import parse_and_rewrite_docstring, parse_function_spec\nfrom .specs import ParsedArgumentSpec\n\n\ndef null_check_shapes(func: C) -> C:\n \"\"\"\n Annotates the given function so that it looks like it has shape checks, but without actually\n checking anything.\n\n This is necessary not to break `@inherit_check_shapes` when shape checking is disabled.\n \"\"\"\n set_check_shapes(func, null_check_shapes)\n return func\n\n\n@experimental\ndef check_shapes(*specs: str) -> Callable[[C], C]:\n \"\"\"\n Decorator that checks the shapes of tensor arguments.\n\n :param specs: Specification of arguments to check. See: `Check specification`_.\n \"\"\"\n if not get_enable_check_shapes():\n return null_check_shapes\n\n unbound_error_context = FunctionCallContext(check_shapes)\n\n func_spec = parse_function_spec(specs, unbound_error_context)\n\n pre_specs = [spec for spec in func_spec.arguments if not spec.argument_ref.is_result]\n post_specs = [spec for spec in func_spec.arguments if spec.argument_ref.is_result]\n note_specs = func_spec.notes\n\n def _check_shapes(func: C) -> C:\n bound_error_context = FunctionDefinitionContext(func)\n signature = inspect.signature(func)\n\n def wrapped(*args: Any, **kwargs: Any) -> Any:\n if not get_enable_check_shapes():\n return func(*args, **kwargs)\n\n try:\n bound_arguments = signature.bind(*args, **kwargs)\n except TypeError as e:\n # TypeError is raised if *args and **kwargs don't actually match the arguments of\n # `func`. 
In that case we just call `func` normally, which will also result in an\n # error, but an error with the error message the user is used to.\n func(*args, **kwargs)\n raise AssertionError(\n \"The above line should fail so this line should never be reached.\"\n ) from e\n bound_arguments.apply_defaults()\n arg_map = bound_arguments.arguments\n\n checker = ShapeChecker()\n for note_spec in note_specs:\n checker.add_context(StackContext(bound_error_context, NoteContext(note_spec)))\n\n def _check_specs(specs: Sequence[ParsedArgumentSpec]) -> None:\n processed_specs = []\n\n for arg_spec in specs:\n for arg_value, relative_arg_context in arg_spec.argument_ref.get(\n arg_map, bound_error_context\n ):\n arg_context = StackContext(bound_error_context, relative_arg_context)\n\n if arg_spec.condition is not None:\n condition, condition_context = arg_spec.condition.get(\n arg_map,\n StackContext(arg_context, ConditionContext(arg_spec.condition)),\n )\n if not condition:\n continue\n arg_context = StackContext(\n bound_error_context,\n ParallelContext(\n (\n StackContext(\n relative_arg_context,\n StackContext(\n ConditionContext(arg_spec.condition),\n condition_context,\n ),\n ),\n )\n ),\n )\n\n processed_specs.append((arg_value, arg_spec.tensor, arg_context))\n\n checker.check_shapes(processed_specs)\n\n _check_specs(pre_specs)\n\n with set_shape_checker(checker):\n result = func(*args, **kwargs)\n arg_map[RESULT_TOKEN] = result\n\n _check_specs(post_specs)\n\n return result\n\n # Make TensorFlow understand our decoration:\n tf.compat.v1.flags.tf_decorator.make_decorator(func, wrapped)\n\n update_wrapper(wrapped, func)\n set_check_shapes(wrapped, _check_shapes)\n wrapped.__doc__ = parse_and_rewrite_docstring(func.__doc__, func_spec, bound_error_context)\n return cast(C, wrapped)\n\n return _check_shapes\n", "path": "gpflow/experimental/check_shapes/decorator.py"}, {"content": "# Copyright 2022 The GPflow Contributors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# pylint: disable=broad-except\n\n\"\"\"\nExceptions generated by `check_shapes`.\n\nThese rely heavily on the infrastructure in `error_contexts.py`.\n\"\"\"\nfrom .error_contexts import ErrorContext, MessageBuilder\n\n\nclass CheckShapesError(Exception):\n \"\"\"\n Common super class for `check_shapes` errors.\n \"\"\"\n\n error_message: str\n\n def __init__(self, context: ErrorContext) -> None:\n builder = MessageBuilder()\n builder.add_line(\"\")\n builder.add_line(self.error_message)\n with builder.indent() as b:\n context.print(b)\n super().__init__(builder.build())\n\n self.context = context\n\n # Prevent Keras from rewriting our exception:\n self._keras_call_info_injected = True\n\n\nclass VariableTypeError(CheckShapesError):\n \"\"\"\n Error raised if a variable is used both as a rank-1 and a variable-rank variable.\n \"\"\"\n\n error_message = (\n \"Cannot use the same variable to bind both a single dimension\"\n \" and a variable number of dimensions.\"\n )\n\n\nclass SpecificationParseError(CheckShapesError):\n \"\"\"\n Error raised if there was an error parsing the shape specification.\n \"\"\"\n\n error_message = \"Unable to parse shape specification.\"\n\n\nclass DocstringParseError(CheckShapesError):\n \"\"\"\n Error raised if there was an error parsing the shape specification.\n \"\"\"\n\n error_message = \"Unable to parse docstring.\"\n\n\nclass ArgumentReferenceError(CheckShapesError):\n \"\"\"\n Error raised if the argument to check the shape of could not be resolved.\n \"\"\"\n\n error_message = \"Unable to resolve argument / missing argument.\"\n\n\nclass ShapeMismatchError(CheckShapesError):\n \"\"\"\n Error raised if a function is called with tensors of the wrong shape.\n \"\"\"\n\n error_message = \"Tensor shape mismatch.\"\n\n\nclass NoShapeError(CheckShapesError):\n \"\"\"\n Error raised if we are trying to get the shape of an object that does not have a shape.\n \"\"\"\n\n error_message = \"Unable to determine shape of object.\"\n", "path": "gpflow/experimental/check_shapes/exceptions.py"}]}
| 2,886 | 448 |
gh_patches_debug_10089
|
rasdani/github-patches
|
git_diff
|
sopel-irc__sopel-1587
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
admin: admins can usurp owners
I believe `admin` and `owner` are meant to be two separate permission levels for the bot. There are `@require_admin` and `@require_owner` for commands, implying that there are certain functions for an `owner` that should not be available to `admin`s.
That being said, if the `admin` plugin is enabled, an `admin` can simply:
<details>
<summary>Not a spoiler :sweat_smile: </summary>
<pre>.set core.owner <i>I-Am-gRoot</i></pre>
</details>
bypassing the <code>@require_owner</code>, and potentially locking them out of their own bot.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/modules/admin.py`
Content:
```
1 # coding=utf-8
2 """
3 admin.py - Sopel Admin Module
4 Copyright 2010-2011, Sean B. Palmer (inamidst.com) and Michael Yanovich
5 (yanovich.net)
6 Copyright © 2012, Elad Alfassa, <[email protected]>
7 Copyright 2013, Ari Koivula <[email protected]>
8
9 Licensed under the Eiffel Forum License 2.
10
11 https://sopel.chat
12 """
13 from __future__ import unicode_literals, absolute_import, print_function, division
14
15 from sopel.config.types import (
16 StaticSection, ValidatedAttribute, FilenameAttribute
17 )
18 import sopel.module
19
20
21 class AdminSection(StaticSection):
22 hold_ground = ValidatedAttribute('hold_ground', bool, default=False)
23 """Auto re-join on kick"""
24 auto_accept_invite = ValidatedAttribute('auto_accept_invite', bool,
25 default=True)
26
27
28 def configure(config):
29 """
30 | name | example | purpose |
31 | ---- | ------- | ------- |
32 | hold\\_ground | False | Auto-rejoin the channel after being kicked. |
33 | auto\\_accept\\_invite | True | Auto-join channels when invited. |
34 """
35 config.define_section('admin', AdminSection)
36 config.admin.configure_setting('hold_ground',
37 "Automatically re-join after being kicked?")
38 config.admin.configure_setting('auto_accept_invite',
39 'Automatically join channels when invited?')
40
41
42 def setup(bot):
43 bot.config.define_section('admin', AdminSection)
44
45
46 @sopel.module.require_privmsg
47 @sopel.module.require_admin
48 @sopel.module.commands('join')
49 @sopel.module.priority('low')
50 @sopel.module.example('.join #example or .join #example key')
51 def join(bot, trigger):
52 """Join the specified channel. This is an admin-only command."""
53 channel, key = trigger.group(3), trigger.group(4)
54 if not channel:
55 return
56 elif not key:
57 bot.join(channel)
58 else:
59 bot.join(channel, key)
60
61
62 @sopel.module.require_privmsg
63 @sopel.module.require_admin
64 @sopel.module.commands('part')
65 @sopel.module.priority('low')
66 @sopel.module.example('.part #example')
67 def part(bot, trigger):
68 """Part the specified channel. This is an admin-only command."""
69 channel, _sep, part_msg = trigger.group(2).partition(' ')
70 if part_msg:
71 bot.part(channel, part_msg)
72 else:
73 bot.part(channel)
74
75
76 @sopel.module.require_privmsg
77 @sopel.module.require_owner
78 @sopel.module.commands('quit')
79 @sopel.module.priority('low')
80 def quit(bot, trigger):
81 """Quit from the server. This is an owner-only command."""
82 quit_message = trigger.group(2)
83 if not quit_message:
84 quit_message = 'Quitting on command from %s' % trigger.nick
85
86 bot.quit(quit_message)
87
88
89 @sopel.module.require_privmsg
90 @sopel.module.require_admin
91 @sopel.module.commands('msg')
92 @sopel.module.priority('low')
93 @sopel.module.example('.msg #YourPants Does anyone else smell neurotoxin?')
94 def msg(bot, trigger):
95 """
96 Send a message to a given channel or nick. Can only be done in privmsg by
97 an admin.
98 """
99 if trigger.group(2) is None:
100 return
101
102 channel, _sep, message = trigger.group(2).partition(' ')
103 message = message.strip()
104 if not channel or not message:
105 return
106
107 bot.msg(channel, message)
108
109
110 @sopel.module.require_privmsg
111 @sopel.module.require_admin
112 @sopel.module.commands('me')
113 @sopel.module.priority('low')
114 def me(bot, trigger):
115 """
116 Send an ACTION (/me) to a given channel or nick. Can only be done in
117 privmsg by an admin.
118 """
119 if trigger.group(2) is None:
120 return
121
122 channel, _sep, action = trigger.group(2).partition(' ')
123 action = action.strip()
124 if not channel or not action:
125 return
126
127 msg = '\x01ACTION %s\x01' % action
128 bot.msg(channel, msg)
129
130
131 @sopel.module.event('INVITE')
132 @sopel.module.rule('.*')
133 @sopel.module.priority('low')
134 def invite_join(bot, trigger):
135 """
136 Join a channel Sopel is invited to, if the inviter is an admin.
137 """
138 if trigger.admin or bot.config.admin.auto_accept_invite:
139 bot.join(trigger.args[1])
140 return
141
142
143 @sopel.module.event('KICK')
144 @sopel.module.rule(r'.*')
145 @sopel.module.priority('low')
146 def hold_ground(bot, trigger):
147 """
148 This function monitors all kicks across all channels Sopel is in. If it
149 detects that it is the one kicked it'll automatically join that channel.
150
151 WARNING: This may not be needed and could cause problems if Sopel becomes
152 annoying. Please use this with caution.
153 """
154 if bot.config.admin.hold_ground:
155 channel = trigger.sender
156 if trigger.args[1] == bot.nick:
157 bot.join(channel)
158
159
160 @sopel.module.require_privmsg
161 @sopel.module.require_admin
162 @sopel.module.commands('mode')
163 @sopel.module.priority('low')
164 def mode(bot, trigger):
165 """Set a user mode on Sopel. Can only be done in privmsg by an admin."""
166 mode = trigger.group(3)
167 bot.write(('MODE', bot.nick + ' ' + mode))
168
169
170 @sopel.module.require_privmsg("This command only works as a private message.")
171 @sopel.module.require_admin("This command requires admin privileges.")
172 @sopel.module.commands('set')
173 @sopel.module.example('.set core.owner Me')
174 def set_config(bot, trigger):
175 """See and modify values of Sopel's config object.
176
177 Trigger args:
178 arg1 - section and option, in the form "section.option"
179 arg2 - value
180
181 If there is no section, section will default to "core".
182 If value is None, the option will be deleted.
183 """
184 # Get section and option from first argument.
185 match = trigger.group(3)
186 if match is None:
187 bot.reply("Usage: .set section.option value")
188 return
189 arg1 = match.split('.')
190 if len(arg1) == 1:
191 section_name, option = "core", arg1[0]
192 elif len(arg1) == 2:
193 section_name, option = arg1
194 else:
195 bot.reply("Usage: .set section.option value")
196 return
197 section = getattr(bot.config, section_name)
198 static_sec = isinstance(section, StaticSection)
199
200 if static_sec and not hasattr(section, option):
201 bot.say('[{}] section has no option {}.'.format(section_name, option))
202 return
203
204 delim = trigger.group(2).find(' ')
205 # Skip preceding whitespaces, if any.
206 while delim > 0 and delim < len(trigger.group(2)) and trigger.group(2)[delim] == ' ':
207 delim = delim + 1
208
209 # Display current value if no value is given.
210 if delim == -1 or delim == len(trigger.group(2)):
211 if not static_sec and bot.config.parser.has_option(section, option):
212 bot.reply("Option %s.%s does not exist." % (section_name, option))
213 return
214 # Except if the option looks like a password. Censor those to stop them
215 # from being put on log files.
216 if option.endswith("password") or option.endswith("pass"):
217 value = "(password censored)"
218 else:
219 value = getattr(section, option)
220 bot.reply("%s.%s = %s" % (section_name, option, value))
221 return
222
223 # Otherwise, set the value to one given as argument 2.
224 value = trigger.group(2)[delim:]
225 if static_sec:
226 descriptor = getattr(section.__class__, option)
227 try:
228 if isinstance(descriptor, FilenameAttribute):
229 value = descriptor.parse(bot.config, descriptor, value)
230 else:
231 value = descriptor.parse(value)
232 except ValueError as exc:
233 bot.say("Can't set attribute: " + str(exc))
234 return
235 setattr(section, option, value)
236
237
238 @sopel.module.require_privmsg
239 @sopel.module.require_admin
240 @sopel.module.commands('save')
241 @sopel.module.example('.save')
242 def save_config(bot, trigger):
243 """Save state of Sopel's config object to the configuration file."""
244 bot.config.save()
245
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sopel/modules/admin.py b/sopel/modules/admin.py
--- a/sopel/modules/admin.py
+++ b/sopel/modules/admin.py
@@ -220,6 +220,13 @@
bot.reply("%s.%s = %s" % (section_name, option, value))
return
+ # 'core.owner' cannot be set by the bot. Any changes to this setting must be
+ # made manually in the config file.
+ if section_name == 'core' and option == 'owner':
+ bot.say('\'core.owner\' cannot be set by the bot. '
+ 'This setting must be changed manually in the configuration file.')
+ return
+
# Otherwise, set the value to one given as argument 2.
value = trigger.group(2)[delim:]
if static_sec:
|
{"golden_diff": "diff --git a/sopel/modules/admin.py b/sopel/modules/admin.py\n--- a/sopel/modules/admin.py\n+++ b/sopel/modules/admin.py\n@@ -220,6 +220,13 @@\n bot.reply(\"%s.%s = %s\" % (section_name, option, value))\n return\n \n+ # 'core.owner' cannot be set by the bot. Any changes to this setting must be\n+ # made manually in the config file.\n+ if section_name == 'core' and option == 'owner':\n+ bot.say('\\'core.owner\\' cannot be set by the bot. '\n+ 'This setting must be changed manually in the configuration file.')\n+ return\n+\n # Otherwise, set the value to one given as argument 2.\n value = trigger.group(2)[delim:]\n if static_sec:\n", "issue": "admin: admins can usurp owners\nI believe `admin` and `owner` are meant to be two separate permission levels for the bot. There are `@require_admin` and `@require_owner` for commands, implying that there are certain functions for an `owner` that should not be available to `admin`s. \r\n\r\nThat being said, if the `admin` plugin is enabled, an `admin` can simply:\r\n<details>\r\n<summary>Not a spoiler :sweat_smile: </summary>\r\n\r\n<pre>.set core.owner <i>I-Am-gRoot</i></pre> \r\n</details>\r\nbypassing the <code>@require_owner</code>, and potentially locking them out of their own bot.\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"\nadmin.py - Sopel Admin Module\nCopyright 2010-2011, Sean B. Palmer (inamidst.com) and Michael Yanovich\n(yanovich.net)\nCopyright \u00a9 2012, Elad Alfassa, <[email protected]>\nCopyright 2013, Ari Koivula <[email protected]>\n\nLicensed under the Eiffel Forum License 2.\n\nhttps://sopel.chat\n\"\"\"\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nfrom sopel.config.types import (\n StaticSection, ValidatedAttribute, FilenameAttribute\n)\nimport sopel.module\n\n\nclass AdminSection(StaticSection):\n hold_ground = ValidatedAttribute('hold_ground', bool, default=False)\n \"\"\"Auto re-join on kick\"\"\"\n auto_accept_invite = ValidatedAttribute('auto_accept_invite', bool,\n default=True)\n\n\ndef configure(config):\n \"\"\"\n | name | example | purpose |\n | ---- | ------- | ------- |\n | hold\\\\_ground | False | Auto-rejoin the channel after being kicked. |\n | auto\\\\_accept\\\\_invite | True | Auto-join channels when invited. |\n \"\"\"\n config.define_section('admin', AdminSection)\n config.admin.configure_setting('hold_ground',\n \"Automatically re-join after being kicked?\")\n config.admin.configure_setting('auto_accept_invite',\n 'Automatically join channels when invited?')\n\n\ndef setup(bot):\n bot.config.define_section('admin', AdminSection)\n\n\[email protected]_privmsg\[email protected]_admin\[email protected]('join')\[email protected]('low')\[email protected]('.join #example or .join #example key')\ndef join(bot, trigger):\n \"\"\"Join the specified channel. This is an admin-only command.\"\"\"\n channel, key = trigger.group(3), trigger.group(4)\n if not channel:\n return\n elif not key:\n bot.join(channel)\n else:\n bot.join(channel, key)\n\n\[email protected]_privmsg\[email protected]_admin\[email protected]('part')\[email protected]('low')\[email protected]('.part #example')\ndef part(bot, trigger):\n \"\"\"Part the specified channel. 
This is an admin-only command.\"\"\"\n channel, _sep, part_msg = trigger.group(2).partition(' ')\n if part_msg:\n bot.part(channel, part_msg)\n else:\n bot.part(channel)\n\n\[email protected]_privmsg\[email protected]_owner\[email protected]('quit')\[email protected]('low')\ndef quit(bot, trigger):\n \"\"\"Quit from the server. This is an owner-only command.\"\"\"\n quit_message = trigger.group(2)\n if not quit_message:\n quit_message = 'Quitting on command from %s' % trigger.nick\n\n bot.quit(quit_message)\n\n\[email protected]_privmsg\[email protected]_admin\[email protected]('msg')\[email protected]('low')\[email protected]('.msg #YourPants Does anyone else smell neurotoxin?')\ndef msg(bot, trigger):\n \"\"\"\n Send a message to a given channel or nick. Can only be done in privmsg by\n an admin.\n \"\"\"\n if trigger.group(2) is None:\n return\n\n channel, _sep, message = trigger.group(2).partition(' ')\n message = message.strip()\n if not channel or not message:\n return\n\n bot.msg(channel, message)\n\n\[email protected]_privmsg\[email protected]_admin\[email protected]('me')\[email protected]('low')\ndef me(bot, trigger):\n \"\"\"\n Send an ACTION (/me) to a given channel or nick. Can only be done in\n privmsg by an admin.\n \"\"\"\n if trigger.group(2) is None:\n return\n\n channel, _sep, action = trigger.group(2).partition(' ')\n action = action.strip()\n if not channel or not action:\n return\n\n msg = '\\x01ACTION %s\\x01' % action\n bot.msg(channel, msg)\n\n\[email protected]('INVITE')\[email protected]('.*')\[email protected]('low')\ndef invite_join(bot, trigger):\n \"\"\"\n Join a channel Sopel is invited to, if the inviter is an admin.\n \"\"\"\n if trigger.admin or bot.config.admin.auto_accept_invite:\n bot.join(trigger.args[1])\n return\n\n\[email protected]('KICK')\[email protected](r'.*')\[email protected]('low')\ndef hold_ground(bot, trigger):\n \"\"\"\n This function monitors all kicks across all channels Sopel is in. If it\n detects that it is the one kicked it'll automatically join that channel.\n\n WARNING: This may not be needed and could cause problems if Sopel becomes\n annoying. Please use this with caution.\n \"\"\"\n if bot.config.admin.hold_ground:\n channel = trigger.sender\n if trigger.args[1] == bot.nick:\n bot.join(channel)\n\n\[email protected]_privmsg\[email protected]_admin\[email protected]('mode')\[email protected]('low')\ndef mode(bot, trigger):\n \"\"\"Set a user mode on Sopel. 
Can only be done in privmsg by an admin.\"\"\"\n mode = trigger.group(3)\n bot.write(('MODE', bot.nick + ' ' + mode))\n\n\[email protected]_privmsg(\"This command only works as a private message.\")\[email protected]_admin(\"This command requires admin privileges.\")\[email protected]('set')\[email protected]('.set core.owner Me')\ndef set_config(bot, trigger):\n \"\"\"See and modify values of Sopel's config object.\n\n Trigger args:\n arg1 - section and option, in the form \"section.option\"\n arg2 - value\n\n If there is no section, section will default to \"core\".\n If value is None, the option will be deleted.\n \"\"\"\n # Get section and option from first argument.\n match = trigger.group(3)\n if match is None:\n bot.reply(\"Usage: .set section.option value\")\n return\n arg1 = match.split('.')\n if len(arg1) == 1:\n section_name, option = \"core\", arg1[0]\n elif len(arg1) == 2:\n section_name, option = arg1\n else:\n bot.reply(\"Usage: .set section.option value\")\n return\n section = getattr(bot.config, section_name)\n static_sec = isinstance(section, StaticSection)\n\n if static_sec and not hasattr(section, option):\n bot.say('[{}] section has no option {}.'.format(section_name, option))\n return\n\n delim = trigger.group(2).find(' ')\n # Skip preceding whitespaces, if any.\n while delim > 0 and delim < len(trigger.group(2)) and trigger.group(2)[delim] == ' ':\n delim = delim + 1\n\n # Display current value if no value is given.\n if delim == -1 or delim == len(trigger.group(2)):\n if not static_sec and bot.config.parser.has_option(section, option):\n bot.reply(\"Option %s.%s does not exist.\" % (section_name, option))\n return\n # Except if the option looks like a password. Censor those to stop them\n # from being put on log files.\n if option.endswith(\"password\") or option.endswith(\"pass\"):\n value = \"(password censored)\"\n else:\n value = getattr(section, option)\n bot.reply(\"%s.%s = %s\" % (section_name, option, value))\n return\n\n # Otherwise, set the value to one given as argument 2.\n value = trigger.group(2)[delim:]\n if static_sec:\n descriptor = getattr(section.__class__, option)\n try:\n if isinstance(descriptor, FilenameAttribute):\n value = descriptor.parse(bot.config, descriptor, value)\n else:\n value = descriptor.parse(value)\n except ValueError as exc:\n bot.say(\"Can't set attribute: \" + str(exc))\n return\n setattr(section, option, value)\n\n\[email protected]_privmsg\[email protected]_admin\[email protected]('save')\[email protected]('.save')\ndef save_config(bot, trigger):\n \"\"\"Save state of Sopel's config object to the configuration file.\"\"\"\n bot.config.save()\n", "path": "sopel/modules/admin.py"}], "after_files": [{"content": "# coding=utf-8\n\"\"\"\nadmin.py - Sopel Admin Module\nCopyright 2010-2011, Sean B. 
Palmer (inamidst.com) and Michael Yanovich\n(yanovich.net)\nCopyright \u00a9 2012, Elad Alfassa, <[email protected]>\nCopyright 2013, Ari Koivula <[email protected]>\n\nLicensed under the Eiffel Forum License 2.\n\nhttps://sopel.chat\n\"\"\"\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nfrom sopel.config.types import (\n StaticSection, ValidatedAttribute, FilenameAttribute\n)\nimport sopel.module\n\n\nclass AdminSection(StaticSection):\n hold_ground = ValidatedAttribute('hold_ground', bool, default=False)\n \"\"\"Auto re-join on kick\"\"\"\n auto_accept_invite = ValidatedAttribute('auto_accept_invite', bool,\n default=True)\n\n\ndef configure(config):\n \"\"\"\n | name | example | purpose |\n | ---- | ------- | ------- |\n | hold\\\\_ground | False | Auto-rejoin the channel after being kicked. |\n | auto\\\\_accept\\\\_invite | True | Auto-join channels when invited. |\n \"\"\"\n config.define_section('admin', AdminSection)\n config.admin.configure_setting('hold_ground',\n \"Automatically re-join after being kicked?\")\n config.admin.configure_setting('auto_accept_invite',\n 'Automatically join channels when invited?')\n\n\ndef setup(bot):\n bot.config.define_section('admin', AdminSection)\n\n\[email protected]_privmsg\[email protected]_admin\[email protected]('join')\[email protected]('low')\[email protected]('.join #example or .join #example key')\ndef join(bot, trigger):\n \"\"\"Join the specified channel. This is an admin-only command.\"\"\"\n channel, key = trigger.group(3), trigger.group(4)\n if not channel:\n return\n elif not key:\n bot.join(channel)\n else:\n bot.join(channel, key)\n\n\[email protected]_privmsg\[email protected]_admin\[email protected]('part')\[email protected]('low')\[email protected]('.part #example')\ndef part(bot, trigger):\n \"\"\"Part the specified channel. This is an admin-only command.\"\"\"\n channel, _sep, part_msg = trigger.group(2).partition(' ')\n if part_msg:\n bot.part(channel, part_msg)\n else:\n bot.part(channel)\n\n\[email protected]_privmsg\[email protected]_owner\[email protected]('quit')\[email protected]('low')\ndef quit(bot, trigger):\n \"\"\"Quit from the server. This is an owner-only command.\"\"\"\n quit_message = trigger.group(2)\n if not quit_message:\n quit_message = 'Quitting on command from %s' % trigger.nick\n\n bot.quit(quit_message)\n\n\[email protected]_privmsg\[email protected]_admin\[email protected]('msg')\[email protected]('low')\[email protected]('.msg #YourPants Does anyone else smell neurotoxin?')\ndef msg(bot, trigger):\n \"\"\"\n Send a message to a given channel or nick. Can only be done in privmsg by\n an admin.\n \"\"\"\n if trigger.group(2) is None:\n return\n\n channel, _sep, message = trigger.group(2).partition(' ')\n message = message.strip()\n if not channel or not message:\n return\n\n bot.msg(channel, message)\n\n\[email protected]_privmsg\[email protected]_admin\[email protected]('me')\[email protected]('low')\ndef me(bot, trigger):\n \"\"\"\n Send an ACTION (/me) to a given channel or nick. 
Can only be done in\n privmsg by an admin.\n \"\"\"\n if trigger.group(2) is None:\n return\n\n channel, _sep, action = trigger.group(2).partition(' ')\n action = action.strip()\n if not channel or not action:\n return\n\n msg = '\\x01ACTION %s\\x01' % action\n bot.msg(channel, msg)\n\n\[email protected]('INVITE')\[email protected]('.*')\[email protected]('low')\ndef invite_join(bot, trigger):\n \"\"\"\n Join a channel Sopel is invited to, if the inviter is an admin.\n \"\"\"\n if trigger.admin or bot.config.admin.auto_accept_invite:\n bot.join(trigger.args[1])\n return\n\n\[email protected]('KICK')\[email protected](r'.*')\[email protected]('low')\ndef hold_ground(bot, trigger):\n \"\"\"\n This function monitors all kicks across all channels Sopel is in. If it\n detects that it is the one kicked it'll automatically join that channel.\n\n WARNING: This may not be needed and could cause problems if Sopel becomes\n annoying. Please use this with caution.\n \"\"\"\n if bot.config.admin.hold_ground:\n channel = trigger.sender\n if trigger.args[1] == bot.nick:\n bot.join(channel)\n\n\[email protected]_privmsg\[email protected]_admin\[email protected]('mode')\[email protected]('low')\ndef mode(bot, trigger):\n \"\"\"Set a user mode on Sopel. Can only be done in privmsg by an admin.\"\"\"\n mode = trigger.group(3)\n bot.write(('MODE', bot.nick + ' ' + mode))\n\n\[email protected]_privmsg(\"This command only works as a private message.\")\[email protected]_admin(\"This command requires admin privileges.\")\[email protected]('set')\[email protected]('.set core.owner Me')\ndef set_config(bot, trigger):\n \"\"\"See and modify values of Sopel's config object.\n\n Trigger args:\n arg1 - section and option, in the form \"section.option\"\n arg2 - value\n\n If there is no section, section will default to \"core\".\n If value is None, the option will be deleted.\n \"\"\"\n # Get section and option from first argument.\n match = trigger.group(3)\n if match is None:\n bot.reply(\"Usage: .set section.option value\")\n return\n arg1 = match.split('.')\n if len(arg1) == 1:\n section_name, option = \"core\", arg1[0]\n elif len(arg1) == 2:\n section_name, option = arg1\n else:\n bot.reply(\"Usage: .set section.option value\")\n return\n section = getattr(bot.config, section_name)\n static_sec = isinstance(section, StaticSection)\n\n if static_sec and not hasattr(section, option):\n bot.say('[{}] section has no option {}.'.format(section_name, option))\n return\n\n delim = trigger.group(2).find(' ')\n # Skip preceding whitespaces, if any.\n while delim > 0 and delim < len(trigger.group(2)) and trigger.group(2)[delim] == ' ':\n delim = delim + 1\n\n # Display current value if no value is given.\n if delim == -1 or delim == len(trigger.group(2)):\n if not static_sec and bot.config.parser.has_option(section, option):\n bot.reply(\"Option %s.%s does not exist.\" % (section_name, option))\n return\n # Except if the option looks like a password. Censor those to stop them\n # from being put on log files.\n if option.endswith(\"password\") or option.endswith(\"pass\"):\n value = \"(password censored)\"\n else:\n value = getattr(section, option)\n bot.reply(\"%s.%s = %s\" % (section_name, option, value))\n return\n\n # 'core.owner' cannot be set by the bot. Any changes to this setting must be\n # made manually in the config file.\n if section_name == 'core' and option == 'owner':\n bot.say('\\'core.owner\\' cannot be set by the bot. 
'\n 'This setting must be changed manually in the configuration file.')\n return\n\n # Otherwise, set the value to one given as argument 2.\n value = trigger.group(2)[delim:]\n if static_sec:\n descriptor = getattr(section.__class__, option)\n try:\n if isinstance(descriptor, FilenameAttribute):\n value = descriptor.parse(bot.config, descriptor, value)\n else:\n value = descriptor.parse(value)\n except ValueError as exc:\n bot.say(\"Can't set attribute: \" + str(exc))\n return\n setattr(section, option, value)\n\n\[email protected]_privmsg\[email protected]_admin\[email protected]('save')\[email protected]('.save')\ndef save_config(bot, trigger):\n \"\"\"Save state of Sopel's config object to the configuration file.\"\"\"\n bot.config.save()\n", "path": "sopel/modules/admin.py"}]}
| 2,958 | 188 |
gh_patches_debug_35633
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-583
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix several configuration issues
Fix the pending issues in #563 as pointed out by @mauriciovasquezbernal.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-api/src/opentelemetry/configuration/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # FIXME find a better way to avoid all those "Expression has type "Any"" errors
16 # type: ignore
17
18 """
19 Simple configuration manager
20
21 This is a configuration manager for OpenTelemetry. It reads configuration
22 values from environment variables prefixed with
23 ``OPENTELEMETRY_PYTHON_`` whose characters are only all caps and underscores.
24 The first character after ``OPENTELEMETRY_PYTHON_`` must be an uppercase
25 character.
26
27 For example, these environment variables will be read:
28
29 1. ``OPENTELEMETRY_PYTHON_SOMETHING``
30 2. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_``
31 3. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND__ELSE``
32
33 These won't:
34
35 1. ``OPENTELEMETRY_PYTH_SOMETHING``
36 2. ``OPENTELEMETRY_PYTHON_something``
37 3. ``OPENTELEMETRY_PYTHON_SOMETHING_2_AND__ELSE``
38 4. ``OPENTELEMETRY_PYTHON_SOMETHING_%_ELSE``
39
40 The values stored in the environment variables can be found in an instance of
41 ``opentelemetry.configuration.Configuration``. This class can be instantiated
42 freely because instantiating it returns a singleton.
43
44 For example, if the environment variable
45 ``OPENTELEMETRY_PYTHON_METER_PROVIDER`` value is ``my_meter_provider``, then
46 ``Configuration().meter_provider == "my_meter_provider"`` would be ``True``.
47
48 Non defined attributes will always return ``None``. This is intended to make it
49 easier to use the ``Configuration`` object in actual code, because it won't be
50 necessary to check for the attribute to be defined first.
51
52 Environment variables used by OpenTelemetry
53 -------------------------------------------
54
55 1. OPENTELEMETRY_PYTHON_METER_PROVIDER
56 2. OPENTELEMETRY_PYTHON_TRACER_PROVIDER
57
58 The value of these environment variables should be the name of the entry point
59 that points to the class that implements either provider. This OpenTelemetry
60 API package provides one entry point for each, which can be found in the
61 setup.py file::
62
63 entry_points={
64 ...
65 "opentelemetry_meter_provider": [
66 "default_meter_provider = "
67 "opentelemetry.metrics:DefaultMeterProvider"
68 ],
69 "opentelemetry_tracer_provider": [
70 "default_tracer_provider = "
71 "opentelemetry.trace:DefaultTracerProvider"
72 ],
73 }
74
75 To use the meter provider above, then the
76 ``OPENTELEMETRY_PYTHON_METER_PROVIDER`` should be set to
77 "default_meter_provider" (this is not actually necessary since the
78 OpenTelemetry API provided providers are the default ones used if no
79 configuration is found in the environment variables).
80 """
81
82 from os import environ
83 from re import fullmatch
84
85
86 class Configuration:
87 _instance = None
88
89 __slots__ = []
90
91 def __new__(cls) -> "Configuration":
92 if Configuration._instance is None:
93
94 for key, value in environ.items():
95
96 match = fullmatch("OPENTELEMETRY_PYTHON_([A-Z][A-Z_]*)", key)
97
98 if match is not None:
99
100 key = match.group(1).lower()
101
102 setattr(Configuration, "_{}".format(key), value)
103 setattr(
104 Configuration,
105 key,
106 property(
107 fget=lambda cls, key=key: getattr(
108 cls, "_{}".format(key)
109 )
110 ),
111 )
112
113 Configuration.__slots__.append(key)
114
115 Configuration.__slots__ = tuple(Configuration.__slots__)
116
117 Configuration._instance = object.__new__(cls)
118
119 return cls._instance
120
121 def __getattr__(self, name):
122 return None
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opentelemetry-api/src/opentelemetry/configuration/__init__.py b/opentelemetry-api/src/opentelemetry/configuration/__init__.py
--- a/opentelemetry-api/src/opentelemetry/configuration/__init__.py
+++ b/opentelemetry-api/src/opentelemetry/configuration/__init__.py
@@ -19,27 +19,27 @@
Simple configuration manager
This is a configuration manager for OpenTelemetry. It reads configuration
-values from environment variables prefixed with
-``OPENTELEMETRY_PYTHON_`` whose characters are only all caps and underscores.
-The first character after ``OPENTELEMETRY_PYTHON_`` must be an uppercase
-character.
+values from environment variables prefixed with ``OPENTELEMETRY_PYTHON_`` whose
+characters are only alphanumeric characters and unserscores, except for the
+first character after ``OPENTELEMETRY_PYTHON_`` which must not be a number.
For example, these environment variables will be read:
1. ``OPENTELEMETRY_PYTHON_SOMETHING``
2. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_``
3. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND__ELSE``
+4. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND_else``
+4. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND_else2``
These won't:
1. ``OPENTELEMETRY_PYTH_SOMETHING``
-2. ``OPENTELEMETRY_PYTHON_something``
-3. ``OPENTELEMETRY_PYTHON_SOMETHING_2_AND__ELSE``
-4. ``OPENTELEMETRY_PYTHON_SOMETHING_%_ELSE``
+2. ``OPENTELEMETRY_PYTHON_2_SOMETHING_AND__ELSE``
+3. ``OPENTELEMETRY_PYTHON_SOMETHING_%_ELSE``
The values stored in the environment variables can be found in an instance of
``opentelemetry.configuration.Configuration``. This class can be instantiated
-freely because instantiating it returns a singleton.
+freely because instantiating it returns always the same object.
For example, if the environment variable
``OPENTELEMETRY_PYTHON_METER_PROVIDER`` value is ``my_meter_provider``, then
@@ -93,11 +93,13 @@
for key, value in environ.items():
- match = fullmatch("OPENTELEMETRY_PYTHON_([A-Z][A-Z_]*)", key)
+ match = fullmatch(
+ r"OPENTELEMETRY_PYTHON_([A-Za-z_][\w_]*)", key
+ )
if match is not None:
- key = match.group(1).lower()
+ key = match.group(1)
setattr(Configuration, "_{}".format(key), value)
setattr(
|
{"golden_diff": "diff --git a/opentelemetry-api/src/opentelemetry/configuration/__init__.py b/opentelemetry-api/src/opentelemetry/configuration/__init__.py\n--- a/opentelemetry-api/src/opentelemetry/configuration/__init__.py\n+++ b/opentelemetry-api/src/opentelemetry/configuration/__init__.py\n@@ -19,27 +19,27 @@\n Simple configuration manager\n \n This is a configuration manager for OpenTelemetry. It reads configuration\n-values from environment variables prefixed with\n-``OPENTELEMETRY_PYTHON_`` whose characters are only all caps and underscores.\n-The first character after ``OPENTELEMETRY_PYTHON_`` must be an uppercase\n-character.\n+values from environment variables prefixed with ``OPENTELEMETRY_PYTHON_`` whose\n+characters are only alphanumeric characters and unserscores, except for the\n+first character after ``OPENTELEMETRY_PYTHON_`` which must not be a number.\n \n For example, these environment variables will be read:\n \n 1. ``OPENTELEMETRY_PYTHON_SOMETHING``\n 2. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_``\n 3. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND__ELSE``\n+4. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND_else``\n+4. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND_else2``\n \n These won't:\n \n 1. ``OPENTELEMETRY_PYTH_SOMETHING``\n-2. ``OPENTELEMETRY_PYTHON_something``\n-3. ``OPENTELEMETRY_PYTHON_SOMETHING_2_AND__ELSE``\n-4. ``OPENTELEMETRY_PYTHON_SOMETHING_%_ELSE``\n+2. ``OPENTELEMETRY_PYTHON_2_SOMETHING_AND__ELSE``\n+3. ``OPENTELEMETRY_PYTHON_SOMETHING_%_ELSE``\n \n The values stored in the environment variables can be found in an instance of\n ``opentelemetry.configuration.Configuration``. This class can be instantiated\n-freely because instantiating it returns a singleton.\n+freely because instantiating it returns always the same object.\n \n For example, if the environment variable\n ``OPENTELEMETRY_PYTHON_METER_PROVIDER`` value is ``my_meter_provider``, then\n@@ -93,11 +93,13 @@\n \n for key, value in environ.items():\n \n- match = fullmatch(\"OPENTELEMETRY_PYTHON_([A-Z][A-Z_]*)\", key)\n+ match = fullmatch(\n+ r\"OPENTELEMETRY_PYTHON_([A-Za-z_][\\w_]*)\", key\n+ )\n \n if match is not None:\n \n- key = match.group(1).lower()\n+ key = match.group(1)\n \n setattr(Configuration, \"_{}\".format(key), value)\n setattr(\n", "issue": "Fix several configuration issues\nFix the pending issues in #563 as pointed out by @mauriciovasquezbernal. \n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# FIXME find a better way to avoid all those \"Expression has type \"Any\"\" errors\n# type: ignore\n\n\"\"\"\nSimple configuration manager\n\nThis is a configuration manager for OpenTelemetry. It reads configuration\nvalues from environment variables prefixed with\n``OPENTELEMETRY_PYTHON_`` whose characters are only all caps and underscores.\nThe first character after ``OPENTELEMETRY_PYTHON_`` must be an uppercase\ncharacter.\n\nFor example, these environment variables will be read:\n\n1. ``OPENTELEMETRY_PYTHON_SOMETHING``\n2. 
``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_``\n3. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND__ELSE``\n\nThese won't:\n\n1. ``OPENTELEMETRY_PYTH_SOMETHING``\n2. ``OPENTELEMETRY_PYTHON_something``\n3. ``OPENTELEMETRY_PYTHON_SOMETHING_2_AND__ELSE``\n4. ``OPENTELEMETRY_PYTHON_SOMETHING_%_ELSE``\n\nThe values stored in the environment variables can be found in an instance of\n``opentelemetry.configuration.Configuration``. This class can be instantiated\nfreely because instantiating it returns a singleton.\n\nFor example, if the environment variable\n``OPENTELEMETRY_PYTHON_METER_PROVIDER`` value is ``my_meter_provider``, then\n``Configuration().meter_provider == \"my_meter_provider\"`` would be ``True``.\n\nNon defined attributes will always return ``None``. This is intended to make it\neasier to use the ``Configuration`` object in actual code, because it won't be\nnecessary to check for the attribute to be defined first.\n\nEnvironment variables used by OpenTelemetry\n-------------------------------------------\n\n1. OPENTELEMETRY_PYTHON_METER_PROVIDER\n2. OPENTELEMETRY_PYTHON_TRACER_PROVIDER\n\nThe value of these environment variables should be the name of the entry point\nthat points to the class that implements either provider. This OpenTelemetry\nAPI package provides one entry point for each, which can be found in the\nsetup.py file::\n\n entry_points={\n ...\n \"opentelemetry_meter_provider\": [\n \"default_meter_provider = \"\n \"opentelemetry.metrics:DefaultMeterProvider\"\n ],\n \"opentelemetry_tracer_provider\": [\n \"default_tracer_provider = \"\n \"opentelemetry.trace:DefaultTracerProvider\"\n ],\n }\n\nTo use the meter provider above, then the\n``OPENTELEMETRY_PYTHON_METER_PROVIDER`` should be set to\n\"default_meter_provider\" (this is not actually necessary since the\nOpenTelemetry API provided providers are the default ones used if no\nconfiguration is found in the environment variables).\n\"\"\"\n\nfrom os import environ\nfrom re import fullmatch\n\n\nclass Configuration:\n _instance = None\n\n __slots__ = []\n\n def __new__(cls) -> \"Configuration\":\n if Configuration._instance is None:\n\n for key, value in environ.items():\n\n match = fullmatch(\"OPENTELEMETRY_PYTHON_([A-Z][A-Z_]*)\", key)\n\n if match is not None:\n\n key = match.group(1).lower()\n\n setattr(Configuration, \"_{}\".format(key), value)\n setattr(\n Configuration,\n key,\n property(\n fget=lambda cls, key=key: getattr(\n cls, \"_{}\".format(key)\n )\n ),\n )\n\n Configuration.__slots__.append(key)\n\n Configuration.__slots__ = tuple(Configuration.__slots__)\n\n Configuration._instance = object.__new__(cls)\n\n return cls._instance\n\n def __getattr__(self, name):\n return None\n", "path": "opentelemetry-api/src/opentelemetry/configuration/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# FIXME find a better way to avoid all those \"Expression has type \"Any\"\" errors\n# type: ignore\n\n\"\"\"\nSimple configuration manager\n\nThis 
is a configuration manager for OpenTelemetry. It reads configuration\nvalues from environment variables prefixed with ``OPENTELEMETRY_PYTHON_`` whose\ncharacters are only alphanumeric characters and unserscores, except for the\nfirst character after ``OPENTELEMETRY_PYTHON_`` which must not be a number.\n\nFor example, these environment variables will be read:\n\n1. ``OPENTELEMETRY_PYTHON_SOMETHING``\n2. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_``\n3. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND__ELSE``\n4. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND_else``\n4. ``OPENTELEMETRY_PYTHON_SOMETHING_ELSE_AND_else2``\n\nThese won't:\n\n1. ``OPENTELEMETRY_PYTH_SOMETHING``\n2. ``OPENTELEMETRY_PYTHON_2_SOMETHING_AND__ELSE``\n3. ``OPENTELEMETRY_PYTHON_SOMETHING_%_ELSE``\n\nThe values stored in the environment variables can be found in an instance of\n``opentelemetry.configuration.Configuration``. This class can be instantiated\nfreely because instantiating it returns always the same object.\n\nFor example, if the environment variable\n``OPENTELEMETRY_PYTHON_METER_PROVIDER`` value is ``my_meter_provider``, then\n``Configuration().meter_provider == \"my_meter_provider\"`` would be ``True``.\n\nNon defined attributes will always return ``None``. This is intended to make it\neasier to use the ``Configuration`` object in actual code, because it won't be\nnecessary to check for the attribute to be defined first.\n\nEnvironment variables used by OpenTelemetry\n-------------------------------------------\n\n1. OPENTELEMETRY_PYTHON_METER_PROVIDER\n2. OPENTELEMETRY_PYTHON_TRACER_PROVIDER\n\nThe value of these environment variables should be the name of the entry point\nthat points to the class that implements either provider. This OpenTelemetry\nAPI package provides one entry point for each, which can be found in the\nsetup.py file::\n\n entry_points={\n ...\n \"opentelemetry_meter_provider\": [\n \"default_meter_provider = \"\n \"opentelemetry.metrics:DefaultMeterProvider\"\n ],\n \"opentelemetry_tracer_provider\": [\n \"default_tracer_provider = \"\n \"opentelemetry.trace:DefaultTracerProvider\"\n ],\n }\n\nTo use the meter provider above, then the\n``OPENTELEMETRY_PYTHON_METER_PROVIDER`` should be set to\n\"default_meter_provider\" (this is not actually necessary since the\nOpenTelemetry API provided providers are the default ones used if no\nconfiguration is found in the environment variables).\n\"\"\"\n\nfrom os import environ\nfrom re import fullmatch\n\n\nclass Configuration:\n _instance = None\n\n __slots__ = []\n\n def __new__(cls) -> \"Configuration\":\n if Configuration._instance is None:\n\n for key, value in environ.items():\n\n match = fullmatch(\n r\"OPENTELEMETRY_PYTHON_([A-Za-z_][\\w_]*)\", key\n )\n\n if match is not None:\n\n key = match.group(1)\n\n setattr(Configuration, \"_{}\".format(key), value)\n setattr(\n Configuration,\n key,\n property(\n fget=lambda cls, key=key: getattr(\n cls, \"_{}\".format(key)\n )\n ),\n )\n\n Configuration.__slots__.append(key)\n\n Configuration.__slots__ = tuple(Configuration.__slots__)\n\n Configuration._instance = object.__new__(cls)\n\n return cls._instance\n\n def __getattr__(self, name):\n return None\n", "path": "opentelemetry-api/src/opentelemetry/configuration/__init__.py"}]}
| 1,490 | 638 |
gh_patches_debug_61676
|
rasdani/github-patches
|
git_diff
|
joke2k__faker-1710
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Lower bound for typing_extensions?
* Faker version: 14.1.0
* OS: OpenSUSE Leap 15.3
For old Python versions (before Python 3.8), *faker* requires the *typing_extensions* with at least version 3.10.0.2. Due to some other dependencies inside my project, I am required to use version 3.7.4.3.
### Steps to reproduce
1. `pip install faker`
### Expected behavior
*faker* works with `typing_extensions==3.7.4.3` as well, to allow better integration with larger systems which are stuck at such older versions for some reasons.
### Actual behavior
*faker* declares `typing_extensions>=3.10.0.2` (#1557), although I could see no issue when using it with `typing_extensions==3.7.4.3`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 from pathlib import Path
4
5 from setuptools import find_packages, setup
6
7 here = Path(__file__).resolve().parent
8 README = (here / "README.rst").read_text(encoding="utf-8")
9 VERSION = (here / "VERSION").read_text(encoding="utf-8").strip()
10
11 excluded_packages = ["docs", "tests", "tests.*"]
12
13
14 # this module can be zip-safe if the zipimporter implements iter_modules or if
15 # pkgutil.iter_importer_modules has registered a dispatch for the zipimporter.
16 try:
17 import pkgutil
18 import zipimport
19
20 zip_safe = (
21 hasattr(zipimport.zipimporter, "iter_modules")
22 or zipimport.zipimporter in pkgutil.iter_importer_modules.registry.keys()
23 )
24 except AttributeError:
25 zip_safe = False
26
27 setup(
28 name="Faker",
29 version=VERSION,
30 description="Faker is a Python package that generates fake data for you.",
31 long_description=README,
32 entry_points={
33 "console_scripts": ["faker=faker.cli:execute_from_command_line"],
34 "pytest11": ["faker = faker.contrib.pytest.plugin"],
35 },
36 classifiers=[
37 # See https://pypi.org/pypi?%3Aaction=list_classifiers
38 "Development Status :: 5 - Production/Stable",
39 "Environment :: Console",
40 "Intended Audience :: Developers",
41 "Programming Language :: Python",
42 "Programming Language :: Python :: 3",
43 "Programming Language :: Python :: 3 :: Only",
44 "Programming Language :: Python :: 3.6",
45 "Programming Language :: Python :: 3.7",
46 "Programming Language :: Python :: 3.8",
47 "Programming Language :: Python :: 3.9",
48 "Programming Language :: Python :: Implementation :: CPython",
49 "Programming Language :: Python :: Implementation :: PyPy",
50 "Topic :: Software Development :: Libraries :: Python Modules",
51 "Topic :: Software Development :: Testing",
52 "Topic :: Utilities",
53 "License :: OSI Approved :: MIT License",
54 ],
55 keywords="faker fixtures data test mock generator",
56 author="joke2k",
57 author_email="[email protected]",
58 url="https://github.com/joke2k/faker",
59 project_urls={
60 "Bug Tracker": "https://github.com/joke2k/faker/issues",
61 "Changes": "https://github.com/joke2k/faker/blob/master/CHANGELOG.md",
62 "Documentation": "http://faker.rtfd.org/",
63 "Source Code": "https://github.com/joke2k/faker",
64 },
65 license="MIT License",
66 packages=find_packages(exclude=excluded_packages),
67 package_data={
68 "faker": ["py.typed"],
69 },
70 platforms=["any"],
71 zip_safe=zip_safe,
72 python_requires=">=3.6",
73 install_requires=[
74 "python-dateutil>=2.4",
75 "typing-extensions>=3.10.0.2;python_version<'3.8'",
76 ],
77 )
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -72,6 +72,6 @@
python_requires=">=3.6",
install_requires=[
"python-dateutil>=2.4",
- "typing-extensions>=3.10.0.2;python_version<'3.8'",
+ "typing-extensions>=3.7.4.3;python_version<'3.8'",
],
)
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -72,6 +72,6 @@\n python_requires=\">=3.6\",\n install_requires=[\n \"python-dateutil>=2.4\",\n- \"typing-extensions>=3.10.0.2;python_version<'3.8'\",\n+ \"typing-extensions>=3.7.4.3;python_version<'3.8'\",\n ],\n )\n", "issue": "Lower bound for typing_extensions?\n* Faker version: 14.1.0\r\n* OS: OpenSUSE Leap 15.3\r\n\r\nFor old Python versions (before Python 3.8), *faker* requires the *typing_extensions* with at least version 3.10.0.2. Due to some other dependencies inside my project, I am required to use version 3.7.4.3.\r\n\r\n### Steps to reproduce\r\n\r\n1. `pip install faker`\r\n\r\n### Expected behavior\r\n\r\n*faker* works with `typing_extensions==3.7.4.3` as well, to allow better integration with larger systems which are stuck at such older versions for some reasons.\r\n\r\n### Actual behavior\r\n\r\n*faker* declares `typing_extensions>=3.10.0.2` (#1557), although I could see no issue when using it with `typing_extensions==3.7.4.3`.\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom pathlib import Path\n\nfrom setuptools import find_packages, setup\n\nhere = Path(__file__).resolve().parent\nREADME = (here / \"README.rst\").read_text(encoding=\"utf-8\")\nVERSION = (here / \"VERSION\").read_text(encoding=\"utf-8\").strip()\n\nexcluded_packages = [\"docs\", \"tests\", \"tests.*\"]\n\n\n# this module can be zip-safe if the zipimporter implements iter_modules or if\n# pkgutil.iter_importer_modules has registered a dispatch for the zipimporter.\ntry:\n import pkgutil\n import zipimport\n\n zip_safe = (\n hasattr(zipimport.zipimporter, \"iter_modules\")\n or zipimport.zipimporter in pkgutil.iter_importer_modules.registry.keys()\n )\nexcept AttributeError:\n zip_safe = False\n\nsetup(\n name=\"Faker\",\n version=VERSION,\n description=\"Faker is a Python package that generates fake data for you.\",\n long_description=README,\n entry_points={\n \"console_scripts\": [\"faker=faker.cli:execute_from_command_line\"],\n \"pytest11\": [\"faker = faker.contrib.pytest.plugin\"],\n },\n classifiers=[\n # See https://pypi.org/pypi?%3Aaction=list_classifiers\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n \"Topic :: Software Development :: Testing\",\n \"Topic :: Utilities\",\n \"License :: OSI Approved :: MIT License\",\n ],\n keywords=\"faker fixtures data test mock generator\",\n author=\"joke2k\",\n author_email=\"[email protected]\",\n url=\"https://github.com/joke2k/faker\",\n project_urls={\n \"Bug Tracker\": \"https://github.com/joke2k/faker/issues\",\n \"Changes\": \"https://github.com/joke2k/faker/blob/master/CHANGELOG.md\",\n \"Documentation\": \"http://faker.rtfd.org/\",\n \"Source Code\": \"https://github.com/joke2k/faker\",\n },\n license=\"MIT License\",\n packages=find_packages(exclude=excluded_packages),\n package_data={\n \"faker\": [\"py.typed\"],\n },\n platforms=[\"any\"],\n zip_safe=zip_safe,\n 
python_requires=\">=3.6\",\n install_requires=[\n \"python-dateutil>=2.4\",\n \"typing-extensions>=3.10.0.2;python_version<'3.8'\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\nfrom pathlib import Path\n\nfrom setuptools import find_packages, setup\n\nhere = Path(__file__).resolve().parent\nREADME = (here / \"README.rst\").read_text(encoding=\"utf-8\")\nVERSION = (here / \"VERSION\").read_text(encoding=\"utf-8\").strip()\n\nexcluded_packages = [\"docs\", \"tests\", \"tests.*\"]\n\n\n# this module can be zip-safe if the zipimporter implements iter_modules or if\n# pkgutil.iter_importer_modules has registered a dispatch for the zipimporter.\ntry:\n import pkgutil\n import zipimport\n\n zip_safe = (\n hasattr(zipimport.zipimporter, \"iter_modules\")\n or zipimport.zipimporter in pkgutil.iter_importer_modules.registry.keys()\n )\nexcept AttributeError:\n zip_safe = False\n\nsetup(\n name=\"Faker\",\n version=VERSION,\n description=\"Faker is a Python package that generates fake data for you.\",\n long_description=README,\n entry_points={\n \"console_scripts\": [\"faker=faker.cli:execute_from_command_line\"],\n \"pytest11\": [\"faker = faker.contrib.pytest.plugin\"],\n },\n classifiers=[\n # See https://pypi.org/pypi?%3Aaction=list_classifiers\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n \"Topic :: Software Development :: Testing\",\n \"Topic :: Utilities\",\n \"License :: OSI Approved :: MIT License\",\n ],\n keywords=\"faker fixtures data test mock generator\",\n author=\"joke2k\",\n author_email=\"[email protected]\",\n url=\"https://github.com/joke2k/faker\",\n project_urls={\n \"Bug Tracker\": \"https://github.com/joke2k/faker/issues\",\n \"Changes\": \"https://github.com/joke2k/faker/blob/master/CHANGELOG.md\",\n \"Documentation\": \"http://faker.rtfd.org/\",\n \"Source Code\": \"https://github.com/joke2k/faker\",\n },\n license=\"MIT License\",\n packages=find_packages(exclude=excluded_packages),\n package_data={\n \"faker\": [\"py.typed\"],\n },\n platforms=[\"any\"],\n zip_safe=zip_safe,\n python_requires=\">=3.6\",\n install_requires=[\n \"python-dateutil>=2.4\",\n \"typing-extensions>=3.7.4.3;python_version<'3.8'\",\n ],\n)\n", "path": "setup.py"}]}
| 1,246 | 103 |
gh_patches_debug_14376
|
rasdani/github-patches
|
git_diff
|
DataDog__dd-agent-495
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
apache check doesn't support basic auth
If I want basic auth, I'd assume that all I have to do is make my config look like:
```
apache_status_url: http://user:[email protected]:80/server-status?auto
```
But actually that just doesn't work. We need to do some special incantations to make `urllib2` do basic auth.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checks.d/apache.py`
Content:
```
1 import urllib2
2
3 from util import headers
4 from checks import AgentCheck
5
6 class Apache(AgentCheck):
7 """Tracks basic connection/requests/workers metrics
8
9 See http://httpd.apache.org/docs/2.2/mod/mod_status.html for more details
10 """
11 GAUGES = {
12 'IdleWorkers': 'apache.performance.idle_workers',
13 'BusyWorkers': 'apache.performance.busy_workers',
14 'CPULoad': 'apache.performance.cpu_load',
15 'Uptime': 'apache.performance.uptime',
16 'Total kBytes': 'apache.net.bytes',
17 'Total Accesses': 'apache.net.hits',
18 }
19
20 RATES = {
21 'Total kBytes': 'apache.net.bytes_per_s',
22 'Total Accesses': 'apache.net.request_per_s'
23 }
24
25 def __init__(self, name, init_config, agentConfig, instances=None):
26 AgentCheck.__init__(self, name, init_config, agentConfig, instances)
27 self.assumed_url = {}
28
29 def check(self, instance):
30 if 'apache_status_url' not in instance:
31 raise Exception("Missing 'apache_status_url' in Apache config")
32
33 url = self.assumed_url.get(instance['apache_status_url'], instance['apache_status_url'])
34
35 tags = instance.get('tags', [])
36 req = urllib2.Request(url, None,
37 headers(self.agentConfig))
38 request = urllib2.urlopen(req)
39 response = request.read()
40
41 metric_count = 0
42 # Loop through and extract the numerical values
43 for line in response.split('\n'):
44 values = line.split(': ')
45 if len(values) == 2: # match
46 metric, value = values
47 try:
48 value = float(value)
49 except ValueError:
50 continue
51
52 # Special case: kBytes => bytes
53 if metric == 'Total kBytes':
54 value = value * 1024
55
56 # Send metric as a gauge, if applicable
57 if metric in self.GAUGES:
58 metric_count += 1
59 metric_name = self.GAUGES[metric]
60 self.gauge(metric_name, value, tags=tags)
61
62 # Send metric as a rate, if applicable
63 if metric in self.RATES:
64 metric_count += 1
65 metric_name = self.RATES[metric]
66 self.rate(metric_name, value, tags=tags)
67
68 if metric_count == 0:
69 if self.assumed_url.get(instance['apache_status_url'], None) is None and url[-5:] != '?auto':
70 self.assumed_url[instance['apache_status_url']]= '%s?auto' % url
71 self.warning("Assuming url was not correct. Trying to add ?auto suffix to the url")
72 self.check(instance)
73 else:
74 raise Exception("No metrics were fetched for this instance. Make sure that %s is the proper url." % instance['apache_status_url'])
75
76
77 @staticmethod
78 def parse_agent_config(agentConfig):
79 if not agentConfig.get('apache_status_url'):
80 return False
81
82 return {
83 'instances': [{'apache_status_url': agentConfig.get('apache_status_url')}]
84 }
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/checks.d/apache.py b/checks.d/apache.py
--- a/checks.d/apache.py
+++ b/checks.d/apache.py
@@ -1,4 +1,5 @@
import urllib2
+import base64
from util import headers
from checks import AgentCheck
@@ -35,6 +36,10 @@
tags = instance.get('tags', [])
req = urllib2.Request(url, None,
headers(self.agentConfig))
+ if 'apache_user' in instance and 'apache_password' in instance:
+ auth_str = '%s:%s' % (instance['apache_user'], instance['apache_password'])
+ encoded_auth_str = base64.encodestring(auth_str)
+ req.add_header("Authorization", "Basic %s" % encoded_auth_str)
request = urllib2.urlopen(req)
response = request.read()
|
{"golden_diff": "diff --git a/checks.d/apache.py b/checks.d/apache.py\n--- a/checks.d/apache.py\n+++ b/checks.d/apache.py\n@@ -1,4 +1,5 @@\n import urllib2\n+import base64\n \n from util import headers\n from checks import AgentCheck\n@@ -35,6 +36,10 @@\n tags = instance.get('tags', [])\n req = urllib2.Request(url, None,\n headers(self.agentConfig))\n+ if 'apache_user' in instance and 'apache_password' in instance:\n+ auth_str = '%s:%s' % (instance['apache_user'], instance['apache_password'])\n+ encoded_auth_str = base64.encodestring(auth_str)\n+ req.add_header(\"Authorization\", \"Basic %s\" % encoded_auth_str)\n request = urllib2.urlopen(req)\n response = request.read()\n", "issue": "apache check doesn't support basic auth\nIf I want basic auth, I'd assume that all I have to do is make my config look like:\n\n```\napache_status_url: http://user:[email protected]:80/server-status?auto\n```\n\nBut actually that just doesn't work. We need to do some special incantations to make `urllib2` do basic auth.\n\n", "before_files": [{"content": "import urllib2\n\nfrom util import headers\nfrom checks import AgentCheck\n\nclass Apache(AgentCheck):\n \"\"\"Tracks basic connection/requests/workers metrics\n\n See http://httpd.apache.org/docs/2.2/mod/mod_status.html for more details\n \"\"\"\n GAUGES = {\n 'IdleWorkers': 'apache.performance.idle_workers',\n 'BusyWorkers': 'apache.performance.busy_workers',\n 'CPULoad': 'apache.performance.cpu_load',\n 'Uptime': 'apache.performance.uptime',\n 'Total kBytes': 'apache.net.bytes',\n 'Total Accesses': 'apache.net.hits',\n }\n\n RATES = {\n 'Total kBytes': 'apache.net.bytes_per_s',\n 'Total Accesses': 'apache.net.request_per_s'\n }\n\n def __init__(self, name, init_config, agentConfig, instances=None):\n AgentCheck.__init__(self, name, init_config, agentConfig, instances)\n self.assumed_url = {}\n\n def check(self, instance):\n if 'apache_status_url' not in instance:\n raise Exception(\"Missing 'apache_status_url' in Apache config\")\n\n url = self.assumed_url.get(instance['apache_status_url'], instance['apache_status_url'])\n\n tags = instance.get('tags', [])\n req = urllib2.Request(url, None,\n headers(self.agentConfig))\n request = urllib2.urlopen(req)\n response = request.read()\n\n metric_count = 0\n # Loop through and extract the numerical values\n for line in response.split('\\n'):\n values = line.split(': ')\n if len(values) == 2: # match\n metric, value = values\n try:\n value = float(value)\n except ValueError:\n continue\n\n # Special case: kBytes => bytes\n if metric == 'Total kBytes':\n value = value * 1024\n\n # Send metric as a gauge, if applicable\n if metric in self.GAUGES:\n metric_count += 1\n metric_name = self.GAUGES[metric]\n self.gauge(metric_name, value, tags=tags)\n\n # Send metric as a rate, if applicable\n if metric in self.RATES:\n metric_count += 1\n metric_name = self.RATES[metric]\n self.rate(metric_name, value, tags=tags)\n\n if metric_count == 0:\n if self.assumed_url.get(instance['apache_status_url'], None) is None and url[-5:] != '?auto':\n self.assumed_url[instance['apache_status_url']]= '%s?auto' % url\n self.warning(\"Assuming url was not correct. Trying to add ?auto suffix to the url\")\n self.check(instance)\n else:\n raise Exception(\"No metrics were fetched for this instance. 
Make sure that %s is the proper url.\" % instance['apache_status_url'])\n\n\n @staticmethod\n def parse_agent_config(agentConfig):\n if not agentConfig.get('apache_status_url'):\n return False\n\n return {\n 'instances': [{'apache_status_url': agentConfig.get('apache_status_url')}]\n }\n", "path": "checks.d/apache.py"}], "after_files": [{"content": "import urllib2\nimport base64\n\nfrom util import headers\nfrom checks import AgentCheck\n\nclass Apache(AgentCheck):\n \"\"\"Tracks basic connection/requests/workers metrics\n\n See http://httpd.apache.org/docs/2.2/mod/mod_status.html for more details\n \"\"\"\n GAUGES = {\n 'IdleWorkers': 'apache.performance.idle_workers',\n 'BusyWorkers': 'apache.performance.busy_workers',\n 'CPULoad': 'apache.performance.cpu_load',\n 'Uptime': 'apache.performance.uptime',\n 'Total kBytes': 'apache.net.bytes',\n 'Total Accesses': 'apache.net.hits',\n }\n\n RATES = {\n 'Total kBytes': 'apache.net.bytes_per_s',\n 'Total Accesses': 'apache.net.request_per_s'\n }\n\n def __init__(self, name, init_config, agentConfig, instances=None):\n AgentCheck.__init__(self, name, init_config, agentConfig, instances)\n self.assumed_url = {}\n\n def check(self, instance):\n if 'apache_status_url' not in instance:\n raise Exception(\"Missing 'apache_status_url' in Apache config\")\n\n url = self.assumed_url.get(instance['apache_status_url'], instance['apache_status_url'])\n\n tags = instance.get('tags', [])\n req = urllib2.Request(url, None,\n headers(self.agentConfig))\n if 'apache_user' in instance and 'apache_password' in instance:\n auth_str = '%s:%s' % (instance['apache_user'], instance['apache_password'])\n encoded_auth_str = base64.encodestring(auth_str)\n req.add_header(\"Authorization\", \"Basic %s\" % encoded_auth_str)\n request = urllib2.urlopen(req)\n response = request.read()\n\n metric_count = 0\n # Loop through and extract the numerical values\n for line in response.split('\\n'):\n values = line.split(': ')\n if len(values) == 2: # match\n metric, value = values\n try:\n value = float(value)\n except ValueError:\n continue\n\n # Special case: kBytes => bytes\n if metric == 'Total kBytes':\n value = value * 1024\n\n # Send metric as a gauge, if applicable\n if metric in self.GAUGES:\n metric_count += 1\n metric_name = self.GAUGES[metric]\n self.gauge(metric_name, value, tags=tags)\n\n # Send metric as a rate, if applicable\n if metric in self.RATES:\n metric_count += 1\n metric_name = self.RATES[metric]\n self.rate(metric_name, value, tags=tags)\n\n if metric_count == 0:\n if self.assumed_url.get(instance['apache_status_url'], None) is None and url[-5:] != '?auto':\n self.assumed_url[instance['apache_status_url']]= '%s?auto' % url\n self.warning(\"Assuming url was not correct. Trying to add ?auto suffix to the url\")\n self.check(instance)\n else:\n raise Exception(\"No metrics were fetched for this instance. Make sure that %s is the proper url.\" % instance['apache_status_url'])\n\n\n @staticmethod\n def parse_agent_config(agentConfig):\n if not agentConfig.get('apache_status_url'):\n return False\n\n return {\n 'instances': [{'apache_status_url': agentConfig.get('apache_status_url')}]\n }\n", "path": "checks.d/apache.py"}]}
| 1,189 | 187 |
gh_patches_debug_4502
|
rasdani/github-patches
|
git_diff
|
deis__deis-1495
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CoreOS on Google Compute Engine
Could you add Compute Engine to contrib?
http://googlecloudplatform.blogspot.cz/2014/05/official-coreos-images-are-now-available-on-google-compute-engine.html?m=1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `client/setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 """Install the Deis command-line client."""
4
5
6 try:
7 from setuptools import setup
8 USE_SETUPTOOLS = True
9 except ImportError:
10 from distutils.core import setup
11 USE_SETUPTOOLS = False
12
13 try:
14 LONG_DESCRIPTION = open('README.rst').read()
15 except IOError:
16 LONG_DESCRIPTION = 'Deis command-line client'
17
18 try:
19 APACHE_LICENSE = open('LICENSE').read()
20 except IOError:
21 APACHE_LICENSE = 'See http://www.apache.org/licenses/LICENSE-2.0'
22
23 KWARGS = {}
24 if USE_SETUPTOOLS:
25 KWARGS = {'entry_points': {'console_scripts': ['deis = deis:main']}}
26 else:
27 KWARGS = {'scripts': ['deis']}
28
29
30 setup(name='deis',
31 version='0.11.0',
32 license=APACHE_LICENSE,
33 description='Command-line Client for Deis, the open PaaS',
34 author='OpDemand',
35 author_email='[email protected]',
36 url='https://github.com/deis/deis',
37 keywords=[
38 'opdemand', 'deis', 'paas', 'cloud', 'chef', 'docker', 'heroku',
39 'aws', 'ec2', 'rackspace', 'digitalocean'
40 ],
41 classifiers=[
42 'Development Status :: 4 - Beta',
43 'Environment :: Console',
44 'Intended Audience :: Developers',
45 'Intended Audience :: Information Technology',
46 'Intended Audience :: System Administrators',
47 'License :: OSI Approved :: Apache Software License',
48 'Operating System :: OS Independent',
49 'Programming Language :: Python',
50 'Programming Language :: Python :: 2.7',
51 'Topic :: Internet',
52 'Topic :: System :: Systems Administration',
53 ],
54 py_modules=['deis'],
55 data_files=[
56 ('.', ['README.rst']),
57 ],
58 long_description=LONG_DESCRIPTION,
59 install_requires=[
60 'docopt==0.6.1', 'python-dateutil==2.2',
61 'PyYAML==3.10', 'requests==2.2.1', 'urllib3==1.8.2'
62 ],
63 zip_safe=True,
64 **KWARGS)
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/client/setup.py b/client/setup.py
--- a/client/setup.py
+++ b/client/setup.py
@@ -36,7 +36,7 @@
url='https://github.com/deis/deis',
keywords=[
'opdemand', 'deis', 'paas', 'cloud', 'chef', 'docker', 'heroku',
- 'aws', 'ec2', 'rackspace', 'digitalocean'
+ 'aws', 'ec2', 'rackspace', 'digitalocean', 'gce'
],
classifiers=[
'Development Status :: 4 - Beta',
|
{"golden_diff": "diff --git a/client/setup.py b/client/setup.py\n--- a/client/setup.py\n+++ b/client/setup.py\n@@ -36,7 +36,7 @@\n url='https://github.com/deis/deis',\n keywords=[\n 'opdemand', 'deis', 'paas', 'cloud', 'chef', 'docker', 'heroku',\n- 'aws', 'ec2', 'rackspace', 'digitalocean'\n+ 'aws', 'ec2', 'rackspace', 'digitalocean', 'gce'\n ],\n classifiers=[\n 'Development Status :: 4 - Beta',\n", "issue": "CoreOS on Google Compute Engine\nCould you add Compute Engine to contrib?\n\nhttp://googlecloudplatform.blogspot.cz/2014/05/official-coreos-images-are-now-available-on-google-compute-engine.html?m=1\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"Install the Deis command-line client.\"\"\"\n\n\ntry:\n from setuptools import setup\n USE_SETUPTOOLS = True\nexcept ImportError:\n from distutils.core import setup\n USE_SETUPTOOLS = False\n\ntry:\n LONG_DESCRIPTION = open('README.rst').read()\nexcept IOError:\n LONG_DESCRIPTION = 'Deis command-line client'\n\ntry:\n APACHE_LICENSE = open('LICENSE').read()\nexcept IOError:\n APACHE_LICENSE = 'See http://www.apache.org/licenses/LICENSE-2.0'\n\nKWARGS = {}\nif USE_SETUPTOOLS:\n KWARGS = {'entry_points': {'console_scripts': ['deis = deis:main']}}\nelse:\n KWARGS = {'scripts': ['deis']}\n\n\nsetup(name='deis',\n version='0.11.0',\n license=APACHE_LICENSE,\n description='Command-line Client for Deis, the open PaaS',\n author='OpDemand',\n author_email='[email protected]',\n url='https://github.com/deis/deis',\n keywords=[\n 'opdemand', 'deis', 'paas', 'cloud', 'chef', 'docker', 'heroku',\n 'aws', 'ec2', 'rackspace', 'digitalocean'\n ],\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Information Technology',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.7',\n 'Topic :: Internet',\n 'Topic :: System :: Systems Administration',\n ],\n py_modules=['deis'],\n data_files=[\n ('.', ['README.rst']),\n ],\n long_description=LONG_DESCRIPTION,\n install_requires=[\n 'docopt==0.6.1', 'python-dateutil==2.2',\n 'PyYAML==3.10', 'requests==2.2.1', 'urllib3==1.8.2'\n ],\n zip_safe=True,\n **KWARGS)\n", "path": "client/setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"Install the Deis command-line client.\"\"\"\n\n\ntry:\n from setuptools import setup\n USE_SETUPTOOLS = True\nexcept ImportError:\n from distutils.core import setup\n USE_SETUPTOOLS = False\n\ntry:\n LONG_DESCRIPTION = open('README.rst').read()\nexcept IOError:\n LONG_DESCRIPTION = 'Deis command-line client'\n\ntry:\n APACHE_LICENSE = open('LICENSE').read()\nexcept IOError:\n APACHE_LICENSE = 'See http://www.apache.org/licenses/LICENSE-2.0'\n\nKWARGS = {}\nif USE_SETUPTOOLS:\n KWARGS = {'entry_points': {'console_scripts': ['deis = deis:main']}}\nelse:\n KWARGS = {'scripts': ['deis']}\n\n\nsetup(name='deis',\n version='0.11.0',\n license=APACHE_LICENSE,\n description='Command-line Client for Deis, the open PaaS',\n author='OpDemand',\n author_email='[email protected]',\n url='https://github.com/deis/deis',\n keywords=[\n 'opdemand', 'deis', 'paas', 'cloud', 'chef', 'docker', 'heroku',\n 'aws', 'ec2', 'rackspace', 'digitalocean', 'gce'\n ],\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Information 
Technology',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.7',\n 'Topic :: Internet',\n 'Topic :: System :: Systems Administration',\n ],\n py_modules=['deis'],\n data_files=[\n ('.', ['README.rst']),\n ],\n long_description=LONG_DESCRIPTION,\n install_requires=[\n 'docopt==0.6.1', 'python-dateutil==2.2',\n 'PyYAML==3.10', 'requests==2.2.1', 'urllib3==1.8.2'\n ],\n zip_safe=True,\n **KWARGS)\n", "path": "client/setup.py"}]}
| 912 | 134 |