| problem_id (string, lengths 18-22) | source (string, 1 class) | task_type (string, 1 class) | in_source_id (string, lengths 13-58) | prompt (string, lengths 1.1k-25.4k) | golden_diff (string, lengths 145-5.13k) | verification_info (string, lengths 582-39.1k) | num_tokens (int64, 271-4.1k) | num_tokens_diff (int64, 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_16731 | rasdani/github-patches | git_diff | Mailu__Mailu-2468 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SSL fails for all domains if a single domain fails LetsEncrypt challenge
## Before you open your issue
- [X] Check if no issue or pull-request for this already exists.
- [X] Check [documentation](https://mailu.io/master/) and [FAQ](https://mailu.io/master/faq.html).
- [X] You understand `Mailu` is made by volunteers in their **free time** — be conscise, civil and accept that delays can occur.
- [X] The title of the issue should be short and simple.
-
## Environment & Versions
### Environment
- [X] docker compose
- [ ] kubernetes
- [ ] docker swarm
### Versions
```
1.9
```
## Description
I used [mailu setup utility](https://setup.mailu.io/1.9/) to create a docker-compose.yml with multiple email domains. Turns out not all domains were already pointing to the server IP, so some challenges failed. This leads to nginx closing port 443. So even the main domain is not reachable via SSL.
After removing the non-working domains the cert is created successfully and SSL is working.
## Replication Steps
Create a new mailu setup, add multiple domains of which some are not pointing to the server.
## Expected behaviour
There should be a certificate for the domains that are reachable and nginx should make those accessible with SSL on port 443.
## Logs
```
2022-10-05T19:47:24.203180336Z Domain: email.example.com
2022-10-05T19:47:24.203182530Z Type: dns
2022-10-05T19:47:24.203184754Z Detail: no valid A records found for email.example.com; no valid AAAA records found for email.example.com
2022-10-05T19:47:24.203187149Z
2022-10-05T19:47:24.203189393Z Hint: The Certificate Authority couldn't exterally verify that the standalone plugin completed the required http-01 challenges. Ensure the plugin is configured correctly and that the changes it makes are accessible from the internet.
2022-10-05T19:47:24.203192008Z
2022-10-05T19:47:24.702017069Z 2022/10/05 21:47:24 [notice] 1#1: signal 1 (SIGHUP) received from 22, reconfiguring
2022-10-05T19:47:24.702118810Z 2022/10/05 21:47:24 [notice] 1#1: reconfiguring
2022-10-05T19:47:24.705542967Z 2022/10/05 21:47:24 [warn] 1#1: conflicting server name "" on 0.0.0.0:80, ignored
2022-10-05T19:47:24.705911789Z 2022/10/05 21:47:24 [notice] 1#1: using the "epoll" event method
2022-10-05T19:47:24.706081756Z 2022/10/05 21:47:24 [notice] 1#1: start worker processes
2022-10-05T19:47:24.706331032Z 2022/10/05 21:47:24 [notice] 1#1: start worker process 23
2022-10-05T19:47:24.706639951Z 2022/10/05 21:47:24 [notice] 1#1: start worker process 24
2022-10-05T19:47:24.706852248Z 2022/10/05 21:47:24 [notice] 1#1: start worker process 25
2022-10-05T19:47:24.730032307Z Hook 'post-hook' ran with output:
2022-10-05T19:47:24.730052144Z Missing cert or key file, disabling TLS
2022-10-05T19:47:24.730291842Z Hook 'post-hook' ran with error output:
2022-10-05T19:47:24.730302613Z nginx: [warn] conflicting server name "" on 0.0.0.0:80, ignored
2022-10-05T19:47:24.732101009Z Some challenges have failed.
2022-10-05T19:47:24.732342892Z Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /var/log/letsencrypt/letsencrypt.log or re-run Certbot with -v for more details.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/nginx/letsencrypt.py`
Content:
```
1 #!/usr/bin/python3
2
3 import os
4 import time
5 import subprocess
6
7 hostnames = ','.join(set(host.strip() for host in os.environ['HOSTNAMES'].split(',')))
8
9 command = [
10 "certbot",
11 "-n", "--agree-tos", # non-interactive
12 "-d", hostnames, "--expand", "--allow-subset-of-names",
13 "-m", "{}@{}".format(os.environ["POSTMASTER"], os.environ["DOMAIN"]),
14 "certonly", "--standalone",
15 "--cert-name", "mailu",
16 "--preferred-challenges", "http", "--http-01-port", "8008",
17 "--keep-until-expiring",
18 "--renew-with-new-domains",
19 "--config-dir", "/certs/letsencrypt",
20 "--post-hook", "/config.py"
21 ]
22 command2 = [
23 "certbot",
24 "-n", "--agree-tos", # non-interactive
25 "-d", hostnames, "--expand", "--allow-subset-of-names",
26 "-m", "{}@{}".format(os.environ["POSTMASTER"], os.environ["DOMAIN"]),
27 "certonly", "--standalone",
28 "--cert-name", "mailu-ecdsa",
29 "--preferred-challenges", "http", "--http-01-port", "8008",
30 "--keep-until-expiring",
31 "--key-type", "ecdsa",
32 "--renew-with-new-domains",
33 "--config-dir", "/certs/letsencrypt",
34 "--post-hook", "/config.py"
35 ]
36
37 # Wait for nginx to start
38 time.sleep(5)
39
40 # Run certbot every day
41 while True:
42 subprocess.call(command)
43 subprocess.call(command2)
44 time.sleep(86400)
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/nginx/letsencrypt.py b/core/nginx/letsencrypt.py
--- a/core/nginx/letsencrypt.py
+++ b/core/nginx/letsencrypt.py
@@ -15,6 +15,7 @@
"--cert-name", "mailu",
"--preferred-challenges", "http", "--http-01-port", "8008",
"--keep-until-expiring",
+ "--allow-subset-of-names",
"--renew-with-new-domains",
"--config-dir", "/certs/letsencrypt",
"--post-hook", "/config.py"
@@ -28,6 +29,7 @@
"--cert-name", "mailu-ecdsa",
"--preferred-challenges", "http", "--http-01-port", "8008",
"--keep-until-expiring",
+ "--allow-subset-of-names",
"--key-type", "ecdsa",
"--renew-with-new-domains",
"--config-dir", "/certs/letsencrypt",
| {"golden_diff": "diff --git a/core/nginx/letsencrypt.py b/core/nginx/letsencrypt.py\n--- a/core/nginx/letsencrypt.py\n+++ b/core/nginx/letsencrypt.py\n@@ -15,6 +15,7 @@\n \"--cert-name\", \"mailu\",\n \"--preferred-challenges\", \"http\", \"--http-01-port\", \"8008\",\n \"--keep-until-expiring\",\n+ \"--allow-subset-of-names\",\n \"--renew-with-new-domains\",\n \"--config-dir\", \"/certs/letsencrypt\",\n \"--post-hook\", \"/config.py\"\n@@ -28,6 +29,7 @@\n \"--cert-name\", \"mailu-ecdsa\",\n \"--preferred-challenges\", \"http\", \"--http-01-port\", \"8008\",\n \"--keep-until-expiring\",\n+ \"--allow-subset-of-names\",\n \"--key-type\", \"ecdsa\",\n \"--renew-with-new-domains\",\n \"--config-dir\", \"/certs/letsencrypt\",\n", "issue": "SSL fails for all domains if a single domain fails LetsEncrypt challenge\n## Before you open your issue\r\n- [X] Check if no issue or pull-request for this already exists.\r\n- [X] Check [documentation](https://mailu.io/master/) and [FAQ](https://mailu.io/master/faq.html). \r\n- [X] You understand `Mailu` is made by volunteers in their **free time** \u2014 be conscise, civil and accept that delays can occur.\r\n- [X] The title of the issue should be short and simple.\r\n- \r\n## Environment & Versions\r\n### Environment\r\n - [X] docker compose\r\n - [ ] kubernetes\r\n - [ ] docker swarm\r\n\r\n### Versions\r\n```\r\n1.9\r\n```\r\n\r\n## Description\r\nI used [mailu setup utility](https://setup.mailu.io/1.9/) to create a docker-compose.yml with multiple email domains. Turns out not all domains were already pointing to the server IP, so some challenges failed. This leads to nginx closing port 443. So even the main domain is not reachable via SSL.\r\n\r\nAfter removing the non-working domains the cert is created successfully and SSL is working.\r\n\r\n## Replication Steps\r\nCreate a new mailu setup, add multiple domains of which some are not pointing to the server.\r\n\r\n## Expected behaviour\r\nThere should be a certificate for the domains that are reachable and nginx should make those accessible with SSL on port 443.\r\n\r\n## Logs\r\n```\r\n2022-10-05T19:47:24.203180336Z Domain: email.example.com\r\n2022-10-05T19:47:24.203182530Z Type: dns\r\n2022-10-05T19:47:24.203184754Z Detail: no valid A records found for email.example.com; no valid AAAA records found for email.example.com\r\n2022-10-05T19:47:24.203187149Z\r\n2022-10-05T19:47:24.203189393Z Hint: The Certificate Authority couldn't exterally verify that the standalone plugin completed the required http-01 challenges. 
Ensure the plugin is configured correctly and that the changes it makes are accessible from the internet.\r\n2022-10-05T19:47:24.203192008Z\r\n2022-10-05T19:47:24.702017069Z 2022/10/05 21:47:24 [notice] 1#1: signal 1 (SIGHUP) received from 22, reconfiguring\r\n2022-10-05T19:47:24.702118810Z 2022/10/05 21:47:24 [notice] 1#1: reconfiguring\r\n2022-10-05T19:47:24.705542967Z 2022/10/05 21:47:24 [warn] 1#1: conflicting server name \"\" on 0.0.0.0:80, ignored\r\n2022-10-05T19:47:24.705911789Z 2022/10/05 21:47:24 [notice] 1#1: using the \"epoll\" event method\r\n2022-10-05T19:47:24.706081756Z 2022/10/05 21:47:24 [notice] 1#1: start worker processes\r\n2022-10-05T19:47:24.706331032Z 2022/10/05 21:47:24 [notice] 1#1: start worker process 23\r\n2022-10-05T19:47:24.706639951Z 2022/10/05 21:47:24 [notice] 1#1: start worker process 24\r\n2022-10-05T19:47:24.706852248Z 2022/10/05 21:47:24 [notice] 1#1: start worker process 25\r\n2022-10-05T19:47:24.730032307Z Hook 'post-hook' ran with output:\r\n2022-10-05T19:47:24.730052144Z Missing cert or key file, disabling TLS\r\n2022-10-05T19:47:24.730291842Z Hook 'post-hook' ran with error output:\r\n2022-10-05T19:47:24.730302613Z nginx: [warn] conflicting server name \"\" on 0.0.0.0:80, ignored\r\n2022-10-05T19:47:24.732101009Z Some challenges have failed.\r\n2022-10-05T19:47:24.732342892Z Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /var/log/letsencrypt/letsencrypt.log or re-run Certbot with -v for more details.\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport time\nimport subprocess\n\nhostnames = ','.join(set(host.strip() for host in os.environ['HOSTNAMES'].split(',')))\n\ncommand = [\n \"certbot\",\n \"-n\", \"--agree-tos\", # non-interactive\n \"-d\", hostnames, \"--expand\", \"--allow-subset-of-names\",\n \"-m\", \"{}@{}\".format(os.environ[\"POSTMASTER\"], os.environ[\"DOMAIN\"]),\n \"certonly\", \"--standalone\",\n \"--cert-name\", \"mailu\",\n \"--preferred-challenges\", \"http\", \"--http-01-port\", \"8008\",\n \"--keep-until-expiring\",\n \"--renew-with-new-domains\",\n \"--config-dir\", \"/certs/letsencrypt\",\n \"--post-hook\", \"/config.py\"\n]\ncommand2 = [\n \"certbot\",\n \"-n\", \"--agree-tos\", # non-interactive\n \"-d\", hostnames, \"--expand\", \"--allow-subset-of-names\",\n \"-m\", \"{}@{}\".format(os.environ[\"POSTMASTER\"], os.environ[\"DOMAIN\"]),\n \"certonly\", \"--standalone\",\n \"--cert-name\", \"mailu-ecdsa\",\n \"--preferred-challenges\", \"http\", \"--http-01-port\", \"8008\",\n \"--keep-until-expiring\",\n \"--key-type\", \"ecdsa\",\n \"--renew-with-new-domains\",\n \"--config-dir\", \"/certs/letsencrypt\",\n \"--post-hook\", \"/config.py\"\n]\n\n# Wait for nginx to start\ntime.sleep(5)\n\n# Run certbot every day\nwhile True:\n subprocess.call(command)\n subprocess.call(command2)\n time.sleep(86400)\n", "path": "core/nginx/letsencrypt.py"}], "after_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport time\nimport subprocess\n\nhostnames = ','.join(set(host.strip() for host in os.environ['HOSTNAMES'].split(',')))\n\ncommand = [\n \"certbot\",\n \"-n\", \"--agree-tos\", # non-interactive\n \"-d\", hostnames, \"--expand\", \"--allow-subset-of-names\",\n \"-m\", \"{}@{}\".format(os.environ[\"POSTMASTER\"], os.environ[\"DOMAIN\"]),\n \"certonly\", \"--standalone\",\n \"--cert-name\", \"mailu\",\n \"--preferred-challenges\", \"http\", \"--http-01-port\", \"8008\",\n \"--keep-until-expiring\",\n \"--allow-subset-of-names\",\n 
\"--renew-with-new-domains\",\n \"--config-dir\", \"/certs/letsencrypt\",\n \"--post-hook\", \"/config.py\"\n]\ncommand2 = [\n \"certbot\",\n \"-n\", \"--agree-tos\", # non-interactive\n \"-d\", hostnames, \"--expand\", \"--allow-subset-of-names\",\n \"-m\", \"{}@{}\".format(os.environ[\"POSTMASTER\"], os.environ[\"DOMAIN\"]),\n \"certonly\", \"--standalone\",\n \"--cert-name\", \"mailu-ecdsa\",\n \"--preferred-challenges\", \"http\", \"--http-01-port\", \"8008\",\n \"--keep-until-expiring\",\n \"--allow-subset-of-names\",\n \"--key-type\", \"ecdsa\",\n \"--renew-with-new-domains\",\n \"--config-dir\", \"/certs/letsencrypt\",\n \"--post-hook\", \"/config.py\"\n]\n\n# Wait for nginx to start\ntime.sleep(5)\n\n# Run certbot every day\nwhile True:\n subprocess.call(command)\n subprocess.call(command2)\n time.sleep(86400)\n", "path": "core/nginx/letsencrypt.py"}]} | 2,100 | 219 |
gh_patches_debug_8126 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-728 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The status of the worker pod should be Error when there is any error happened.
The status of the worker pod is still Running and will never stop even if there are some errors in function `handle_task` like:
```bash
Traceback (most recent call last):
File "/elasticdl/python/elasticdl/worker/worker.py", line 236, in run
self._handle_task(task)
File "/elasticdl/python/elasticdl/worker/worker.py", line 190, in _handle_task
task, record_buf, min_model_version
File "/elasticdl/python/elasticdl/worker/worker.py", line 211, in _process_minibatch
features, labels
File "/elasticdl/python/elasticdl/worker/worker.py", line 168, in _run_training_task
accepted, min_model_version = self.report_gradient(grads)
File "/elasticdl/python/elasticdl/worker/worker.py", line 115, in report_gradient
res = self._stub.ReportGradient(req)
File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 565, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking
raise _Rendezvous(state, None, None, deadline)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticdl/python/elasticdl/worker/worker.py`
Content:
```
1 import logging
2 import traceback
3
4 import tensorflow as tf
5
6 assert tf.executing_eagerly() # noqa
7
8 import recordio
9
10 from contextlib import closing
11 from elasticdl.proto import elasticdl_pb2_grpc
12 from elasticdl.proto import elasticdl_pb2
13 from elasticdl.python.elasticdl.common.ndarray import (
14 ndarray_to_tensor,
15 tensor_to_ndarray,
16 )
17 from elasticdl.python.elasticdl.common.model_helper import load_module
18
19 # The default maximum number of a minibatch retry as its results
20 # (e.g. gradients) are not accepted by master.
21 DEFAULT_MAX_MINIBATCH_RETRY_NUM = 64
22
23
24 class Worker(object):
25 """ElasticDL worker"""
26
27 def __init__(
28 self,
29 worker_id,
30 model_file,
31 channel=None,
32 max_minibatch_retry_num=DEFAULT_MAX_MINIBATCH_RETRY_NUM,
33 codec_file=None,
34 ):
35 """
36 Arguments:
37 model_file: A module to define the model
38 channel: grpc channel
39 max_minibatch_retry_num: The maximum number of a minibatch retry
40 as its results (e.g. gradients) are not accepted by master.
41 """
42 self._logger = logging.getLogger(__name__)
43 self._worker_id = worker_id
44 model_module = load_module(model_file)
45 self._model = model_module.model
46 self._feature_columns = model_module.feature_columns()
47 self._var_created = self._model.built
48 self._input_fn = model_module.input_fn
49 self._opt_fn = model_module.optimizer
50 self._loss = model_module.loss
51 self._eval_metrics_fn = model_module.eval_metrics_fn
52 all_columns = self._feature_columns + model_module.label_columns()
53
54 # Initilize codec
55 codec_module = load_module(codec_file)
56 codec_module.codec.init(all_columns)
57 self._codec = codec_module.codec
58
59 if channel is None:
60 self._stub = None
61 else:
62 self._stub = elasticdl_pb2_grpc.MasterStub(channel)
63 self._max_minibatch_retry_num = max_minibatch_retry_num
64 self._model_version = -1
65
66 def get_task(self):
67 """
68 get task from master
69 """
70 req = elasticdl_pb2.GetTaskRequest()
71 req.worker_id = self._worker_id
72
73 return self._stub.GetTask(req)
74
75 def get_model(self, version, method):
76 """
77 get model from master, and update model_version
78 """
79 req = elasticdl_pb2.GetModelRequest()
80 req.version = version
81 req.method = method
82 model = self._stub.GetModel(req)
83
84 for var in self._model.trainable_variables:
85 # Assumes all trainable variables exist in model.param.
86 var.assign(tensor_to_ndarray(model.param[var.name]))
87 self._model_version = model.version
88
89 def report_task_result(self, task_id, err_msg):
90 """
91 report task result to master
92 """
93 report = elasticdl_pb2.ReportTaskResultRequest()
94 report.task_id = task_id
95 report.err_message = err_msg
96 return self._stub.ReportTaskResult(report)
97
98 def report_variable(self):
99 """
100 report variable to ps.
101 """
102 req = elasticdl_pb2.ReportVariableRequest()
103 for v in self._model.trainable_variables:
104 req.variable[v.name].CopyFrom(ndarray_to_tensor(v.numpy()))
105 self._stub.ReportVariable(req)
106
107 def report_gradient(self, grads):
108 """
109 report gradient to ps, return (accepted, model_version) from rpc call.
110 """
111 req = elasticdl_pb2.ReportGradientRequest()
112 for g, v in zip(grads, self._model.trainable_variables):
113 req.gradient[v.name].CopyFrom(ndarray_to_tensor(g.numpy()))
114 req.model_version = self._model_version
115 res = self._stub.ReportGradient(req)
116 return res.accepted, res.model_version
117
118 def report_evaluation_metrics(self, evaluation_metrics):
119 """
120 report evaluation metrics to ps, return (accepted, model_version)
121 from rpc call.
122 """
123 req = elasticdl_pb2.ReportEvaluationMetricsRequest()
124 for k, v in evaluation_metrics.items():
125 v_np = v.numpy()
126 # If scalar, convert to numpy 1D array with size 1
127 if not v_np.shape:
128 v_np = v_np.reshape(1)
129 req.evaluation_metrics[k].CopyFrom(ndarray_to_tensor(v_np))
130 req.model_version = self._model_version
131 res = self._stub.ReportEvaluationMetrics(req)
132 return res.accepted, res.model_version
133
134 @staticmethod
135 def _get_batch(reader, batch_size, decode):
136 res = []
137 for i in range(batch_size):
138 record = reader.record()
139 if record is None:
140 break
141 res.append(decode(record))
142 return res
143
144 def _get_features_and_labels(self, record_buf):
145 batch_input_data, batch_labels = self._input_fn(record_buf)
146 features = [
147 batch_input_data[f_col.key] for f_col in self._feature_columns
148 ]
149 if len(features) == 1:
150 features = features[0]
151 return features, batch_labels
152
153 def _create_variable_and_report(self, features):
154 # Use model.call to create variables, then report to ps
155 _ = self._model.call(features)
156 self.report_variable()
157 self._var_created = True
158
159 def _run_training_task(self, features, labels):
160 with tf.GradientTape() as tape:
161 outputs = self._model.call(features, training=True)
162 loss = self._loss(outputs, labels)
163
164 # TODO: Add regularization loss if any,
165 # which should be divided by the
166 # number of contributing workers.
167 grads = tape.gradient(loss, self._model.trainable_variables)
168 accepted, min_model_version = self.report_gradient(grads)
169 return accepted, min_model_version, loss
170
171 def _run_evaluation_task(self, features, labels):
172 outputs = self._model.call(features, training=False)
173 evaluation_metrics = self._eval_metrics_fn(outputs, labels)
174 return self.report_evaluation_metrics(evaluation_metrics)
175
176 def _handle_task(self, task):
177 min_model_version = task.model_version
178 with closing(
179 recordio.Scanner(
180 task.shard_file_name, task.start, task.end - task.start
181 )
182 ) as reader:
183 while True:
184 record_buf = self._get_batch(
185 reader, task.minibatch_size, self._codec.decode
186 )
187 if not record_buf:
188 break
189 min_model_version = self._process_minibatch(
190 task, record_buf, min_model_version
191 )
192
193 def _process_minibatch(self, task, record_buf, min_model_version):
194 features, labels = self._get_features_and_labels(record_buf)
195 if not self._var_created:
196 self._create_variable_and_report(features)
197 for _ in range(self._max_minibatch_retry_num):
198 if task.type == elasticdl_pb2.EVALUATION:
199 self.get_model(min_model_version, elasticdl_pb2.FIXED)
200 accepted, _ = self._run_evaluation_task(features, labels)
201 if accepted:
202 break
203 elif task.type == elasticdl_pb2.TRAINING:
204 # TODO: optimize the logic to avoid unnecessary
205 # get_model call.
206 self.get_model(
207 max(self._model_version, min_model_version),
208 elasticdl_pb2.MINIMUM,
209 )
210 accepted, min_model_version, loss = self._run_training_task(
211 features, labels
212 )
213 if accepted:
214 self._logger.info("Loss is %f" % loss.numpy())
215 break
216 else:
217 raise RuntimeError("Unrecognized task type, %s" % task.type)
218 else:
219 # Worker got stuck, fail the task.
220 # TODO: stop the worker if it fails to make any
221 # progress for some time.
222 raise RuntimeError("Worker got stuck")
223 return min_model_version
224
225 def run(self):
226 """
227 Fetches task from master and performs training or evaluation.
228 """
229 while True:
230 task = self.get_task()
231 if not task.shard_file_name:
232 # No more task
233 break
234 err_msg = ""
235 try:
236 self._handle_task(task)
237 except Exception as ex:
238 err_msg = str(ex)
239 traceback.print_exc()
240 self.report_task_result(task.task_id, err_msg)
241
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticdl/python/elasticdl/worker/worker.py b/elasticdl/python/elasticdl/worker/worker.py
--- a/elasticdl/python/elasticdl/worker/worker.py
+++ b/elasticdl/python/elasticdl/worker/worker.py
@@ -234,7 +234,11 @@
err_msg = ""
try:
self._handle_task(task)
+ except RuntimeError as err:
+ err_msg = str(err)
+ traceback.print_exc()
except Exception as ex:
err_msg = str(ex)
traceback.print_exc()
+ raise ex
self.report_task_result(task.task_id, err_msg)
| {"golden_diff": "diff --git a/elasticdl/python/elasticdl/worker/worker.py b/elasticdl/python/elasticdl/worker/worker.py\n--- a/elasticdl/python/elasticdl/worker/worker.py\n+++ b/elasticdl/python/elasticdl/worker/worker.py\n@@ -234,7 +234,11 @@\n err_msg = \"\"\n try:\n self._handle_task(task)\n+ except RuntimeError as err:\n+ err_msg = str(err)\n+ traceback.print_exc()\n except Exception as ex:\n err_msg = str(ex)\n traceback.print_exc()\n+ raise ex\n self.report_task_result(task.task_id, err_msg)\n", "issue": "The status of the worker pod should be Error when there is any error happened.\nThe status of the worker pod is still Running and will never stop even if there are some errors in function `handle_task` like:\r\n```bash\r\nTraceback (most recent call last):\r\n File \"/elasticdl/python/elasticdl/worker/worker.py\", line 236, in run\r\n self._handle_task(task)\r\n File \"/elasticdl/python/elasticdl/worker/worker.py\", line 190, in _handle_task\r\n task, record_buf, min_model_version\r\n File \"/elasticdl/python/elasticdl/worker/worker.py\", line 211, in _process_minibatch\r\n features, labels\r\n File \"/elasticdl/python/elasticdl/worker/worker.py\", line 168, in _run_training_task\r\n accepted, min_model_version = self.report_gradient(grads)\r\n File \"/elasticdl/python/elasticdl/worker/worker.py\", line 115, in report_gradient\r\n res = self._stub.ReportGradient(req)\r\n File \"/usr/local/lib/python3.6/dist-packages/grpc/_channel.py\", line 565, in __call__\r\n return _end_unary_response_blocking(state, call, False, None)\r\n File \"/usr/local/lib/python3.6/dist-packages/grpc/_channel.py\", line 467, in _end_unary_response_blocking\r\n raise _Rendezvous(state, None, None, deadline)\n", "before_files": [{"content": "import logging\nimport traceback\n\nimport tensorflow as tf\n\nassert tf.executing_eagerly() # noqa\n\nimport recordio\n\nfrom contextlib import closing\nfrom elasticdl.proto import elasticdl_pb2_grpc\nfrom elasticdl.proto import elasticdl_pb2\nfrom elasticdl.python.elasticdl.common.ndarray import (\n ndarray_to_tensor,\n tensor_to_ndarray,\n)\nfrom elasticdl.python.elasticdl.common.model_helper import load_module\n\n# The default maximum number of a minibatch retry as its results\n# (e.g. gradients) are not accepted by master.\nDEFAULT_MAX_MINIBATCH_RETRY_NUM = 64\n\n\nclass Worker(object):\n \"\"\"ElasticDL worker\"\"\"\n\n def __init__(\n self,\n worker_id,\n model_file,\n channel=None,\n max_minibatch_retry_num=DEFAULT_MAX_MINIBATCH_RETRY_NUM,\n codec_file=None,\n ):\n \"\"\"\n Arguments:\n model_file: A module to define the model\n channel: grpc channel\n max_minibatch_retry_num: The maximum number of a minibatch retry\n as its results (e.g. 
gradients) are not accepted by master.\n \"\"\"\n self._logger = logging.getLogger(__name__)\n self._worker_id = worker_id\n model_module = load_module(model_file)\n self._model = model_module.model\n self._feature_columns = model_module.feature_columns()\n self._var_created = self._model.built\n self._input_fn = model_module.input_fn\n self._opt_fn = model_module.optimizer\n self._loss = model_module.loss\n self._eval_metrics_fn = model_module.eval_metrics_fn\n all_columns = self._feature_columns + model_module.label_columns()\n\n # Initilize codec\n codec_module = load_module(codec_file)\n codec_module.codec.init(all_columns)\n self._codec = codec_module.codec\n\n if channel is None:\n self._stub = None\n else:\n self._stub = elasticdl_pb2_grpc.MasterStub(channel)\n self._max_minibatch_retry_num = max_minibatch_retry_num\n self._model_version = -1\n\n def get_task(self):\n \"\"\"\n get task from master\n \"\"\"\n req = elasticdl_pb2.GetTaskRequest()\n req.worker_id = self._worker_id\n\n return self._stub.GetTask(req)\n\n def get_model(self, version, method):\n \"\"\"\n get model from master, and update model_version\n \"\"\"\n req = elasticdl_pb2.GetModelRequest()\n req.version = version\n req.method = method\n model = self._stub.GetModel(req)\n\n for var in self._model.trainable_variables:\n # Assumes all trainable variables exist in model.param.\n var.assign(tensor_to_ndarray(model.param[var.name]))\n self._model_version = model.version\n\n def report_task_result(self, task_id, err_msg):\n \"\"\"\n report task result to master\n \"\"\"\n report = elasticdl_pb2.ReportTaskResultRequest()\n report.task_id = task_id\n report.err_message = err_msg\n return self._stub.ReportTaskResult(report)\n\n def report_variable(self):\n \"\"\"\n report variable to ps.\n \"\"\"\n req = elasticdl_pb2.ReportVariableRequest()\n for v in self._model.trainable_variables:\n req.variable[v.name].CopyFrom(ndarray_to_tensor(v.numpy()))\n self._stub.ReportVariable(req)\n\n def report_gradient(self, grads):\n \"\"\"\n report gradient to ps, return (accepted, model_version) from rpc call.\n \"\"\"\n req = elasticdl_pb2.ReportGradientRequest()\n for g, v in zip(grads, self._model.trainable_variables):\n req.gradient[v.name].CopyFrom(ndarray_to_tensor(g.numpy()))\n req.model_version = self._model_version\n res = self._stub.ReportGradient(req)\n return res.accepted, res.model_version\n\n def report_evaluation_metrics(self, evaluation_metrics):\n \"\"\"\n report evaluation metrics to ps, return (accepted, model_version)\n from rpc call.\n \"\"\"\n req = elasticdl_pb2.ReportEvaluationMetricsRequest()\n for k, v in evaluation_metrics.items():\n v_np = v.numpy()\n # If scalar, convert to numpy 1D array with size 1\n if not v_np.shape:\n v_np = v_np.reshape(1)\n req.evaluation_metrics[k].CopyFrom(ndarray_to_tensor(v_np))\n req.model_version = self._model_version\n res = self._stub.ReportEvaluationMetrics(req)\n return res.accepted, res.model_version\n\n @staticmethod\n def _get_batch(reader, batch_size, decode):\n res = []\n for i in range(batch_size):\n record = reader.record()\n if record is None:\n break\n res.append(decode(record))\n return res\n\n def _get_features_and_labels(self, record_buf):\n batch_input_data, batch_labels = self._input_fn(record_buf)\n features = [\n batch_input_data[f_col.key] for f_col in self._feature_columns\n ]\n if len(features) == 1:\n features = features[0]\n return features, batch_labels\n\n def _create_variable_and_report(self, features):\n # Use model.call to create variables, then 
report to ps\n _ = self._model.call(features)\n self.report_variable()\n self._var_created = True\n\n def _run_training_task(self, features, labels):\n with tf.GradientTape() as tape:\n outputs = self._model.call(features, training=True)\n loss = self._loss(outputs, labels)\n\n # TODO: Add regularization loss if any,\n # which should be divided by the\n # number of contributing workers.\n grads = tape.gradient(loss, self._model.trainable_variables)\n accepted, min_model_version = self.report_gradient(grads)\n return accepted, min_model_version, loss\n\n def _run_evaluation_task(self, features, labels):\n outputs = self._model.call(features, training=False)\n evaluation_metrics = self._eval_metrics_fn(outputs, labels)\n return self.report_evaluation_metrics(evaluation_metrics)\n\n def _handle_task(self, task):\n min_model_version = task.model_version\n with closing(\n recordio.Scanner(\n task.shard_file_name, task.start, task.end - task.start\n )\n ) as reader:\n while True:\n record_buf = self._get_batch(\n reader, task.minibatch_size, self._codec.decode\n )\n if not record_buf:\n break\n min_model_version = self._process_minibatch(\n task, record_buf, min_model_version\n )\n\n def _process_minibatch(self, task, record_buf, min_model_version):\n features, labels = self._get_features_and_labels(record_buf)\n if not self._var_created:\n self._create_variable_and_report(features)\n for _ in range(self._max_minibatch_retry_num):\n if task.type == elasticdl_pb2.EVALUATION:\n self.get_model(min_model_version, elasticdl_pb2.FIXED)\n accepted, _ = self._run_evaluation_task(features, labels)\n if accepted:\n break\n elif task.type == elasticdl_pb2.TRAINING:\n # TODO: optimize the logic to avoid unnecessary\n # get_model call.\n self.get_model(\n max(self._model_version, min_model_version),\n elasticdl_pb2.MINIMUM,\n )\n accepted, min_model_version, loss = self._run_training_task(\n features, labels\n )\n if accepted:\n self._logger.info(\"Loss is %f\" % loss.numpy())\n break\n else:\n raise RuntimeError(\"Unrecognized task type, %s\" % task.type)\n else:\n # Worker got stuck, fail the task.\n # TODO: stop the worker if it fails to make any\n # progress for some time.\n raise RuntimeError(\"Worker got stuck\")\n return min_model_version\n\n def run(self):\n \"\"\"\n Fetches task from master and performs training or evaluation.\n \"\"\"\n while True:\n task = self.get_task()\n if not task.shard_file_name:\n # No more task\n break\n err_msg = \"\"\n try:\n self._handle_task(task)\n except Exception as ex:\n err_msg = str(ex)\n traceback.print_exc()\n self.report_task_result(task.task_id, err_msg)\n", "path": "elasticdl/python/elasticdl/worker/worker.py"}], "after_files": [{"content": "import logging\nimport traceback\n\nimport tensorflow as tf\n\nassert tf.executing_eagerly() # noqa\n\nimport recordio\n\nfrom contextlib import closing\nfrom elasticdl.proto import elasticdl_pb2_grpc\nfrom elasticdl.proto import elasticdl_pb2\nfrom elasticdl.python.elasticdl.common.ndarray import (\n ndarray_to_tensor,\n tensor_to_ndarray,\n)\nfrom elasticdl.python.elasticdl.common.model_helper import load_module\n\n# The default maximum number of a minibatch retry as its results\n# (e.g. 
gradients) are not accepted by master.\nDEFAULT_MAX_MINIBATCH_RETRY_NUM = 64\n\n\nclass Worker(object):\n \"\"\"ElasticDL worker\"\"\"\n\n def __init__(\n self,\n worker_id,\n model_file,\n channel=None,\n max_minibatch_retry_num=DEFAULT_MAX_MINIBATCH_RETRY_NUM,\n codec_file=None,\n ):\n \"\"\"\n Arguments:\n model_file: A module to define the model\n channel: grpc channel\n max_minibatch_retry_num: The maximum number of a minibatch retry\n as its results (e.g. gradients) are not accepted by master.\n \"\"\"\n self._logger = logging.getLogger(__name__)\n self._worker_id = worker_id\n model_module = load_module(model_file)\n self._model = model_module.model\n self._feature_columns = model_module.feature_columns()\n self._var_created = self._model.built\n self._input_fn = model_module.input_fn\n self._opt_fn = model_module.optimizer\n self._loss = model_module.loss\n self._eval_metrics_fn = model_module.eval_metrics_fn\n all_columns = self._feature_columns + model_module.label_columns()\n\n # Initilize codec\n codec_module = load_module(codec_file)\n codec_module.codec.init(all_columns)\n self._codec = codec_module.codec\n\n if channel is None:\n self._stub = None\n else:\n self._stub = elasticdl_pb2_grpc.MasterStub(channel)\n self._max_minibatch_retry_num = max_minibatch_retry_num\n self._model_version = -1\n\n def get_task(self):\n \"\"\"\n get task from master\n \"\"\"\n req = elasticdl_pb2.GetTaskRequest()\n req.worker_id = self._worker_id\n\n return self._stub.GetTask(req)\n\n def get_model(self, version, method):\n \"\"\"\n get model from master, and update model_version\n \"\"\"\n req = elasticdl_pb2.GetModelRequest()\n req.version = version\n req.method = method\n model = self._stub.GetModel(req)\n\n for var in self._model.trainable_variables:\n # Assumes all trainable variables exist in model.param.\n var.assign(tensor_to_ndarray(model.param[var.name]))\n self._model_version = model.version\n\n def report_task_result(self, task_id, err_msg):\n \"\"\"\n report task result to master\n \"\"\"\n report = elasticdl_pb2.ReportTaskResultRequest()\n report.task_id = task_id\n report.err_message = err_msg\n return self._stub.ReportTaskResult(report)\n\n def report_variable(self):\n \"\"\"\n report variable to ps.\n \"\"\"\n req = elasticdl_pb2.ReportVariableRequest()\n for v in self._model.trainable_variables:\n req.variable[v.name].CopyFrom(ndarray_to_tensor(v.numpy()))\n self._stub.ReportVariable(req)\n\n def report_gradient(self, grads):\n \"\"\"\n report gradient to ps, return (accepted, model_version) from rpc call.\n \"\"\"\n req = elasticdl_pb2.ReportGradientRequest()\n for g, v in zip(grads, self._model.trainable_variables):\n req.gradient[v.name].CopyFrom(ndarray_to_tensor(g.numpy()))\n req.model_version = self._model_version\n res = self._stub.ReportGradient(req)\n return res.accepted, res.model_version\n\n def report_evaluation_metrics(self, evaluation_metrics):\n \"\"\"\n report evaluation metrics to ps, return (accepted, model_version)\n from rpc call.\n \"\"\"\n req = elasticdl_pb2.ReportEvaluationMetricsRequest()\n for k, v in evaluation_metrics.items():\n v_np = v.numpy()\n # If scalar, convert to numpy 1D array with size 1\n if not v_np.shape:\n v_np = v_np.reshape(1)\n req.evaluation_metrics[k].CopyFrom(ndarray_to_tensor(v_np))\n req.model_version = self._model_version\n res = self._stub.ReportEvaluationMetrics(req)\n return res.accepted, res.model_version\n\n @staticmethod\n def _get_batch(reader, batch_size, decode):\n res = []\n for i in range(batch_size):\n record = 
reader.record()\n if record is None:\n break\n res.append(decode(record))\n return res\n\n def _get_features_and_labels(self, record_buf):\n batch_input_data, batch_labels = self._input_fn(record_buf)\n features = [\n batch_input_data[f_col.key] for f_col in self._feature_columns\n ]\n if len(features) == 1:\n features = features[0]\n return features, batch_labels\n\n def _create_variable_and_report(self, features):\n # Use model.call to create variables, then report to ps\n _ = self._model.call(features)\n self.report_variable()\n self._var_created = True\n\n def _run_training_task(self, features, labels):\n with tf.GradientTape() as tape:\n outputs = self._model.call(features, training=True)\n loss = self._loss(outputs, labels)\n\n # TODO: Add regularization loss if any,\n # which should be divided by the\n # number of contributing workers.\n grads = tape.gradient(loss, self._model.trainable_variables)\n accepted, min_model_version = self.report_gradient(grads)\n return accepted, min_model_version, loss\n\n def _run_evaluation_task(self, features, labels):\n outputs = self._model.call(features, training=False)\n evaluation_metrics = self._eval_metrics_fn(outputs, labels)\n return self.report_evaluation_metrics(evaluation_metrics)\n\n def _handle_task(self, task):\n min_model_version = task.model_version\n with closing(\n recordio.Scanner(\n task.shard_file_name, task.start, task.end - task.start\n )\n ) as reader:\n while True:\n record_buf = self._get_batch(\n reader, task.minibatch_size, self._codec.decode\n )\n if not record_buf:\n break\n min_model_version = self._process_minibatch(\n task, record_buf, min_model_version\n )\n\n def _process_minibatch(self, task, record_buf, min_model_version):\n features, labels = self._get_features_and_labels(record_buf)\n if not self._var_created:\n self._create_variable_and_report(features)\n for _ in range(self._max_minibatch_retry_num):\n if task.type == elasticdl_pb2.EVALUATION:\n self.get_model(min_model_version, elasticdl_pb2.FIXED)\n accepted, _ = self._run_evaluation_task(features, labels)\n if accepted:\n break\n elif task.type == elasticdl_pb2.TRAINING:\n # TODO: optimize the logic to avoid unnecessary\n # get_model call.\n self.get_model(\n max(self._model_version, min_model_version),\n elasticdl_pb2.MINIMUM,\n )\n accepted, min_model_version, loss = self._run_training_task(\n features, labels\n )\n if accepted:\n self._logger.info(\"Loss is %f\" % loss.numpy())\n break\n else:\n raise RuntimeError(\"Unrecognized task type, %s\" % task.type)\n else:\n # Worker got stuck, fail the task.\n # TODO: stop the worker if it fails to make any\n # progress for some time.\n raise RuntimeError(\"Worker got stuck\")\n return min_model_version\n\n def run(self):\n \"\"\"\n Fetches task from master and performs training or evaluation.\n \"\"\"\n while True:\n task = self.get_task()\n if not task.shard_file_name:\n # No more task\n break\n err_msg = \"\"\n try:\n self._handle_task(task)\n except RuntimeError as err:\n err_msg = str(err)\n traceback.print_exc()\n except Exception as ex:\n err_msg = str(ex)\n traceback.print_exc()\n raise ex\n self.report_task_result(task.task_id, err_msg)\n", "path": "elasticdl/python/elasticdl/worker/worker.py"}]} | 3,009 | 146 |
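The fix in this row is confined to `Worker.run()`: an expected `RuntimeError` is still reported back to the master, while any other exception is re-raised after logging so the worker process terminates and the pod status becomes Error. A rough, self-contained sketch of the patched loop is given below; passing the gRPC helpers in as plain callables is an assumption made for illustration only.

```python
import traceback


def run_task_loop(get_task, handle_task, report_task_result):
    """Sketch of the patched Worker.run() control flow."""
    while True:
        task = get_task()
        if not task.shard_file_name:
            break  # master has no more tasks
        err_msg = ""
        try:
            handle_task(task)
        except RuntimeError as err:
            # Known worker-side failure (e.g. "Worker got stuck"): report it and continue.
            err_msg = str(err)
            traceback.print_exc()
        except Exception as ex:
            # Unexpected failure (e.g. a gRPC error): log and re-raise so the process
            # exits abnormally and the pod is no longer stuck in Running.
            err_msg = str(ex)
            traceback.print_exc()
            raise ex
        report_task_result(task.task_id, err_msg)
```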
gh_patches_debug_14215 | rasdani/github-patches | git_diff | qutebrowser__qutebrowser-1734 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Errors when `completion show` setting is false
Pressing up, down, or tab when there are completions available but the `completion show` setting is `false` crashes qutebrowser. The relevant error:
```
...
File "/home/marshall/Github/qutebrowser/qutebrowser/completion/completionwidget.py", line 164, in _next_idx
idx = self.selectionModel().currentIndex()
AttributeError: 'NoneType' object has no attribute 'currentIndex'
```
This appears after the recent completion fixes from @rcorre.
Generally, the completion `show` and `auto-open` options don't seem to make sense. Before the change, `show` would do what I expected `auto-open` to do (turning it off hides the completion window until the user asks for it with up, down, or tab). `auto-open` doesn't do anything (before or after the change), as far as I can tell.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qutebrowser/completion/completionwidget.py`
Content:
```
1 # vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:
2
3 # Copyright 2014-2016 Florian Bruhin (The Compiler) <[email protected]>
4 #
5 # This file is part of qutebrowser.
6 #
7 # qutebrowser is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # qutebrowser is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.
19
20 """Completion view for statusbar command section.
21
22 Defines a CompletionView which uses CompletionFiterModel and CompletionModel
23 subclasses to provide completions.
24 """
25
26 from PyQt5.QtWidgets import QStyle, QTreeView, QSizePolicy
27 from PyQt5.QtCore import (pyqtSlot, pyqtSignal, Qt, QItemSelectionModel,
28 QItemSelection)
29
30 from qutebrowser.config import config, style
31 from qutebrowser.completion import completiondelegate
32 from qutebrowser.completion.models import base
33 from qutebrowser.utils import qtutils, objreg, utils, usertypes
34 from qutebrowser.commands import cmdexc, cmdutils
35
36
37 class CompletionView(QTreeView):
38
39 """The view showing available completions.
40
41 Based on QTreeView but heavily customized so root elements show as category
42 headers, and children show as flat list.
43
44 Attributes:
45 enabled: Whether showing the CompletionView is enabled.
46 _win_id: The ID of the window this CompletionView is associated with.
47 _height: The height to use for the CompletionView.
48 _height_perc: Either None or a percentage if height should be relative.
49 _delegate: The item delegate used.
50 _column_widths: A list of column widths, in percent.
51
52 Signals:
53 resize_completion: Emitted when the completion should be resized.
54 selection_changed: Emitted when the completion item selection changes.
55 """
56
57 # Drawing the item foreground will be done by CompletionItemDelegate, so we
58 # don't define that in this stylesheet.
59 STYLESHEET = """
60 QTreeView {
61 font: {{ font['completion'] }};
62 background-color: {{ color['completion.bg'] }};
63 alternate-background-color: {{ color['completion.alternate-bg'] }};
64 outline: 0;
65 border: 0px;
66 }
67
68 QTreeView::item:disabled {
69 background-color: {{ color['completion.category.bg'] }};
70 border-top: 1px solid
71 {{ color['completion.category.border.top'] }};
72 border-bottom: 1px solid
73 {{ color['completion.category.border.bottom'] }};
74 }
75
76 QTreeView::item:selected, QTreeView::item:selected:hover {
77 border-top: 1px solid
78 {{ color['completion.item.selected.border.top'] }};
79 border-bottom: 1px solid
80 {{ color['completion.item.selected.border.bottom'] }};
81 background-color: {{ color['completion.item.selected.bg'] }};
82 }
83
84 QTreeView:item::hover {
85 border: 0px;
86 }
87
88 QTreeView QScrollBar {
89 width: {{ config.get('completion', 'scrollbar-width') }}px;
90 background: {{ color['completion.scrollbar.bg'] }};
91 }
92
93 QTreeView QScrollBar::handle {
94 background: {{ color['completion.scrollbar.fg'] }};
95 border: {{ config.get('completion', 'scrollbar-padding') }}px solid
96 {{ color['completion.scrollbar.bg'] }};
97 min-height: 10px;
98 }
99
100 QTreeView QScrollBar::sub-line, QScrollBar::add-line {
101 border: none;
102 background: none;
103 }
104 """
105
106 resize_completion = pyqtSignal()
107 selection_changed = pyqtSignal(QItemSelection)
108
109 def __init__(self, win_id, parent=None):
110 super().__init__(parent)
111 self._win_id = win_id
112 self.enabled = config.get('completion', 'show')
113 objreg.get('config').changed.connect(self.set_enabled)
114 # FIXME handle new aliases.
115 # objreg.get('config').changed.connect(self.init_command_completion)
116
117 self._column_widths = base.BaseCompletionModel.COLUMN_WIDTHS
118
119 self._delegate = completiondelegate.CompletionItemDelegate(self)
120 self.setItemDelegate(self._delegate)
121 style.set_register_stylesheet(self)
122 self.setSizePolicy(QSizePolicy.Fixed, QSizePolicy.Minimum)
123 self.setHeaderHidden(True)
124 self.setAlternatingRowColors(True)
125 self.setIndentation(0)
126 self.setItemsExpandable(False)
127 self.setExpandsOnDoubleClick(False)
128 self.setHorizontalScrollBarPolicy(Qt.ScrollBarAlwaysOff)
129 # WORKAROUND
130 # This is a workaround for weird race conditions with invalid
131 # item indexes leading to segfaults in Qt.
132 #
133 # Some background: http://bugs.quassel-irc.org/issues/663
134 # The proposed fix there was later reverted because it didn't help.
135 self.setUniformRowHeights(True)
136 self.hide()
137 # FIXME set elidemode
138 # https://github.com/The-Compiler/qutebrowser/issues/118
139
140 def __repr__(self):
141 return utils.get_repr(self)
142
143 def _resize_columns(self):
144 """Resize the completion columns based on column_widths."""
145 width = self.size().width()
146 pixel_widths = [(width * perc // 100) for perc in self._column_widths]
147 if self.verticalScrollBar().isVisible():
148 pixel_widths[-1] -= self.style().pixelMetric(
149 QStyle.PM_ScrollBarExtent) + 5
150 for i, w in enumerate(pixel_widths):
151 self.setColumnWidth(i, w)
152
153 def _next_idx(self, upwards):
154 """Get the previous/next QModelIndex displayed in the view.
155
156 Used by tab_handler.
157
158 Args:
159 upwards: Get previous item, not next.
160
161 Return:
162 A QModelIndex.
163 """
164 idx = self.selectionModel().currentIndex()
165 if not idx.isValid():
166 # No item selected yet
167 if upwards:
168 return self.model().last_item()
169 else:
170 return self.model().first_item()
171 while True:
172 idx = self.indexAbove(idx) if upwards else self.indexBelow(idx)
173 # wrap around if we arrived at beginning/end
174 if not idx.isValid() and upwards:
175 return self.model().last_item()
176 elif not idx.isValid() and not upwards:
177 idx = self.model().first_item()
178 self.scrollTo(idx.parent())
179 return idx
180 elif idx.parent().isValid():
181 # Item is a real item, not a category header -> success
182 return idx
183
184 def _next_prev_item(self, prev):
185 """Handle a tab press for the CompletionView.
186
187 Select the previous/next item and write the new text to the
188 statusbar.
189
190 Helper for completion_item_next and completion_item_prev.
191
192 Args:
193 prev: True for prev item, False for next one.
194 """
195 idx = self._next_idx(prev)
196 qtutils.ensure_valid(idx)
197 self.selectionModel().setCurrentIndex(
198 idx, QItemSelectionModel.ClearAndSelect | QItemSelectionModel.Rows)
199
200 def set_model(self, model):
201 """Switch completion to a new model.
202
203 Called from on_update_completion().
204
205 Args:
206 model: The model to use.
207 """
208 old_model = self.model()
209 sel_model = self.selectionModel()
210
211 self.setModel(model)
212
213 if sel_model is not None:
214 sel_model.deleteLater()
215 if old_model is not None:
216 old_model.deleteLater()
217
218 for i in range(model.rowCount()):
219 self.expand(model.index(i, 0))
220
221 self._column_widths = model.srcmodel.COLUMN_WIDTHS
222 self._resize_columns()
223 self.maybe_resize_completion()
224
225 def set_pattern(self, pattern):
226 """Set the completion pattern for the current model.
227
228 Called from on_update_completion().
229
230 Args:
231 pattern: The filter pattern to set (what the user entered).
232 """
233 self.model().set_pattern(pattern)
234 self.maybe_resize_completion()
235
236 @pyqtSlot()
237 def maybe_resize_completion(self):
238 """Emit the resize_completion signal if the config says so."""
239 if config.get('completion', 'shrink'):
240 self.resize_completion.emit()
241
242 @config.change_filter('completion', 'show')
243 def set_enabled(self):
244 """Update self.enabled when the config changed."""
245 self.enabled = config.get('completion', 'show')
246
247 @pyqtSlot()
248 def on_clear_completion_selection(self):
249 """Clear the selection model when an item is activated."""
250 selmod = self.selectionModel()
251 if selmod is not None:
252 selmod.clearSelection()
253 selmod.clearCurrentIndex()
254
255 def selectionChanged(self, selected, deselected):
256 """Extend selectionChanged to call completers selection_changed."""
257 super().selectionChanged(selected, deselected)
258 self.selection_changed.emit(selected)
259
260 def resizeEvent(self, e):
261 """Extend resizeEvent to adjust column size."""
262 super().resizeEvent(e)
263 self._resize_columns()
264
265 def showEvent(self, e):
266 """Adjust the completion size and scroll when it's freshly shown."""
267 self.resize_completion.emit()
268 scrollbar = self.verticalScrollBar()
269 if scrollbar is not None:
270 scrollbar.setValue(scrollbar.minimum())
271 super().showEvent(e)
272
273 @cmdutils.register(instance='completion', hide=True,
274 modes=[usertypes.KeyMode.command], scope='window')
275 def completion_item_prev(self):
276 """Select the previous completion item."""
277 self._next_prev_item(True)
278
279 @cmdutils.register(instance='completion', hide=True,
280 modes=[usertypes.KeyMode.command], scope='window')
281 def completion_item_next(self):
282 """Select the next completion item."""
283 self._next_prev_item(False)
284
285 @cmdutils.register(instance='completion', hide=True,
286 modes=[usertypes.KeyMode.command], scope='window')
287 def completion_item_del(self):
288 """Delete the current completion item."""
289 if not self.currentIndex().isValid():
290 raise cmdexc.CommandError("No item selected!")
291 try:
292 self.model().srcmodel.delete_cur_item(self)
293 except NotImplementedError:
294 raise cmdexc.CommandError("Cannot delete this item.")
295
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/qutebrowser/completion/completionwidget.py b/qutebrowser/completion/completionwidget.py
--- a/qutebrowser/completion/completionwidget.py
+++ b/qutebrowser/completion/completionwidget.py
@@ -192,10 +192,13 @@
Args:
prev: True for prev item, False for next one.
"""
- idx = self._next_idx(prev)
- qtutils.ensure_valid(idx)
- self.selectionModel().setCurrentIndex(
- idx, QItemSelectionModel.ClearAndSelect | QItemSelectionModel.Rows)
+ # selmodel can be None if 'show' and 'auto-open' are set to False
+ # https://github.com/The-Compiler/qutebrowser/issues/1731
+ selmodel = self.selectionModel()
+ if (selmodel is not None):
+ idx = self._next_idx(prev)
+ selmodel.setCurrentIndex(idx,
+ QItemSelectionModel.ClearAndSelect | QItemSelectionModel.Rows)
def set_model(self, model):
"""Switch completion to a new model.
| {"golden_diff": "diff --git a/qutebrowser/completion/completionwidget.py b/qutebrowser/completion/completionwidget.py\n--- a/qutebrowser/completion/completionwidget.py\n+++ b/qutebrowser/completion/completionwidget.py\n@@ -192,10 +192,13 @@\n Args:\n prev: True for prev item, False for next one.\n \"\"\"\n- idx = self._next_idx(prev)\n- qtutils.ensure_valid(idx)\n- self.selectionModel().setCurrentIndex(\n- idx, QItemSelectionModel.ClearAndSelect | QItemSelectionModel.Rows)\n+ # selmodel can be None if 'show' and 'auto-open' are set to False\n+ # https://github.com/The-Compiler/qutebrowser/issues/1731\n+ selmodel = self.selectionModel()\n+ if (selmodel is not None):\n+ idx = self._next_idx(prev)\n+ selmodel.setCurrentIndex(idx,\n+ QItemSelectionModel.ClearAndSelect | QItemSelectionModel.Rows)\n \n def set_model(self, model):\n \"\"\"Switch completion to a new model.\n", "issue": "Errors when `completion show` setting is false\nPressing up, down, or tab when there are completions available but the `completion show` setting is `false` crashes qutebrowser. The relevant error:\n\n```\n ...\n File \"/home/marshall/Github/qutebrowser/qutebrowser/completion/completionwidget.py\", line 164, in _next_idx\n idx = self.selectionModel().currentIndex()\nAttributeError: 'NoneType' object has no attribute 'currentIndex'\n```\n\nThis appears after the recent completion fixes from @rcorre.\n\nGenerally, the completion `show` and `auto-open` options don't seem to make sense. Before the change, `show` would do what I expected `auto-open` to do (turning it off hides the completion window until the user asks for it with up, down, or tab). `auto-open` doesn't do anything (before or after the change), as far as I can tell.\n\n", "before_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2014-2016 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. 
If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Completion view for statusbar command section.\n\nDefines a CompletionView which uses CompletionFiterModel and CompletionModel\nsubclasses to provide completions.\n\"\"\"\n\nfrom PyQt5.QtWidgets import QStyle, QTreeView, QSizePolicy\nfrom PyQt5.QtCore import (pyqtSlot, pyqtSignal, Qt, QItemSelectionModel,\n QItemSelection)\n\nfrom qutebrowser.config import config, style\nfrom qutebrowser.completion import completiondelegate\nfrom qutebrowser.completion.models import base\nfrom qutebrowser.utils import qtutils, objreg, utils, usertypes\nfrom qutebrowser.commands import cmdexc, cmdutils\n\n\nclass CompletionView(QTreeView):\n\n \"\"\"The view showing available completions.\n\n Based on QTreeView but heavily customized so root elements show as category\n headers, and children show as flat list.\n\n Attributes:\n enabled: Whether showing the CompletionView is enabled.\n _win_id: The ID of the window this CompletionView is associated with.\n _height: The height to use for the CompletionView.\n _height_perc: Either None or a percentage if height should be relative.\n _delegate: The item delegate used.\n _column_widths: A list of column widths, in percent.\n\n Signals:\n resize_completion: Emitted when the completion should be resized.\n selection_changed: Emitted when the completion item selection changes.\n \"\"\"\n\n # Drawing the item foreground will be done by CompletionItemDelegate, so we\n # don't define that in this stylesheet.\n STYLESHEET = \"\"\"\n QTreeView {\n font: {{ font['completion'] }};\n background-color: {{ color['completion.bg'] }};\n alternate-background-color: {{ color['completion.alternate-bg'] }};\n outline: 0;\n border: 0px;\n }\n\n QTreeView::item:disabled {\n background-color: {{ color['completion.category.bg'] }};\n border-top: 1px solid\n {{ color['completion.category.border.top'] }};\n border-bottom: 1px solid\n {{ color['completion.category.border.bottom'] }};\n }\n\n QTreeView::item:selected, QTreeView::item:selected:hover {\n border-top: 1px solid\n {{ color['completion.item.selected.border.top'] }};\n border-bottom: 1px solid\n {{ color['completion.item.selected.border.bottom'] }};\n background-color: {{ color['completion.item.selected.bg'] }};\n }\n\n QTreeView:item::hover {\n border: 0px;\n }\n\n QTreeView QScrollBar {\n width: {{ config.get('completion', 'scrollbar-width') }}px;\n background: {{ color['completion.scrollbar.bg'] }};\n }\n\n QTreeView QScrollBar::handle {\n background: {{ color['completion.scrollbar.fg'] }};\n border: {{ config.get('completion', 'scrollbar-padding') }}px solid\n {{ color['completion.scrollbar.bg'] }};\n min-height: 10px;\n }\n\n QTreeView QScrollBar::sub-line, QScrollBar::add-line {\n border: none;\n background: none;\n }\n \"\"\"\n\n resize_completion = pyqtSignal()\n selection_changed = pyqtSignal(QItemSelection)\n\n def __init__(self, win_id, parent=None):\n super().__init__(parent)\n self._win_id = win_id\n self.enabled = config.get('completion', 'show')\n objreg.get('config').changed.connect(self.set_enabled)\n # FIXME handle new aliases.\n # objreg.get('config').changed.connect(self.init_command_completion)\n\n self._column_widths = base.BaseCompletionModel.COLUMN_WIDTHS\n\n self._delegate = completiondelegate.CompletionItemDelegate(self)\n self.setItemDelegate(self._delegate)\n style.set_register_stylesheet(self)\n self.setSizePolicy(QSizePolicy.Fixed, QSizePolicy.Minimum)\n self.setHeaderHidden(True)\n self.setAlternatingRowColors(True)\n self.setIndentation(0)\n 
self.setItemsExpandable(False)\n self.setExpandsOnDoubleClick(False)\n self.setHorizontalScrollBarPolicy(Qt.ScrollBarAlwaysOff)\n # WORKAROUND\n # This is a workaround for weird race conditions with invalid\n # item indexes leading to segfaults in Qt.\n #\n # Some background: http://bugs.quassel-irc.org/issues/663\n # The proposed fix there was later reverted because it didn't help.\n self.setUniformRowHeights(True)\n self.hide()\n # FIXME set elidemode\n # https://github.com/The-Compiler/qutebrowser/issues/118\n\n def __repr__(self):\n return utils.get_repr(self)\n\n def _resize_columns(self):\n \"\"\"Resize the completion columns based on column_widths.\"\"\"\n width = self.size().width()\n pixel_widths = [(width * perc // 100) for perc in self._column_widths]\n if self.verticalScrollBar().isVisible():\n pixel_widths[-1] -= self.style().pixelMetric(\n QStyle.PM_ScrollBarExtent) + 5\n for i, w in enumerate(pixel_widths):\n self.setColumnWidth(i, w)\n\n def _next_idx(self, upwards):\n \"\"\"Get the previous/next QModelIndex displayed in the view.\n\n Used by tab_handler.\n\n Args:\n upwards: Get previous item, not next.\n\n Return:\n A QModelIndex.\n \"\"\"\n idx = self.selectionModel().currentIndex()\n if not idx.isValid():\n # No item selected yet\n if upwards:\n return self.model().last_item()\n else:\n return self.model().first_item()\n while True:\n idx = self.indexAbove(idx) if upwards else self.indexBelow(idx)\n # wrap around if we arrived at beginning/end\n if not idx.isValid() and upwards:\n return self.model().last_item()\n elif not idx.isValid() and not upwards:\n idx = self.model().first_item()\n self.scrollTo(idx.parent())\n return idx\n elif idx.parent().isValid():\n # Item is a real item, not a category header -> success\n return idx\n\n def _next_prev_item(self, prev):\n \"\"\"Handle a tab press for the CompletionView.\n\n Select the previous/next item and write the new text to the\n statusbar.\n\n Helper for completion_item_next and completion_item_prev.\n\n Args:\n prev: True for prev item, False for next one.\n \"\"\"\n idx = self._next_idx(prev)\n qtutils.ensure_valid(idx)\n self.selectionModel().setCurrentIndex(\n idx, QItemSelectionModel.ClearAndSelect | QItemSelectionModel.Rows)\n\n def set_model(self, model):\n \"\"\"Switch completion to a new model.\n\n Called from on_update_completion().\n\n Args:\n model: The model to use.\n \"\"\"\n old_model = self.model()\n sel_model = self.selectionModel()\n\n self.setModel(model)\n\n if sel_model is not None:\n sel_model.deleteLater()\n if old_model is not None:\n old_model.deleteLater()\n\n for i in range(model.rowCount()):\n self.expand(model.index(i, 0))\n\n self._column_widths = model.srcmodel.COLUMN_WIDTHS\n self._resize_columns()\n self.maybe_resize_completion()\n\n def set_pattern(self, pattern):\n \"\"\"Set the completion pattern for the current model.\n\n Called from on_update_completion().\n\n Args:\n pattern: The filter pattern to set (what the user entered).\n \"\"\"\n self.model().set_pattern(pattern)\n self.maybe_resize_completion()\n\n @pyqtSlot()\n def maybe_resize_completion(self):\n \"\"\"Emit the resize_completion signal if the config says so.\"\"\"\n if config.get('completion', 'shrink'):\n self.resize_completion.emit()\n\n @config.change_filter('completion', 'show')\n def set_enabled(self):\n \"\"\"Update self.enabled when the config changed.\"\"\"\n self.enabled = config.get('completion', 'show')\n\n @pyqtSlot()\n def on_clear_completion_selection(self):\n \"\"\"Clear the selection model when an item is 
activated.\"\"\"\n selmod = self.selectionModel()\n if selmod is not None:\n selmod.clearSelection()\n selmod.clearCurrentIndex()\n\n def selectionChanged(self, selected, deselected):\n \"\"\"Extend selectionChanged to call completers selection_changed.\"\"\"\n super().selectionChanged(selected, deselected)\n self.selection_changed.emit(selected)\n\n def resizeEvent(self, e):\n \"\"\"Extend resizeEvent to adjust column size.\"\"\"\n super().resizeEvent(e)\n self._resize_columns()\n\n def showEvent(self, e):\n \"\"\"Adjust the completion size and scroll when it's freshly shown.\"\"\"\n self.resize_completion.emit()\n scrollbar = self.verticalScrollBar()\n if scrollbar is not None:\n scrollbar.setValue(scrollbar.minimum())\n super().showEvent(e)\n\n @cmdutils.register(instance='completion', hide=True,\n modes=[usertypes.KeyMode.command], scope='window')\n def completion_item_prev(self):\n \"\"\"Select the previous completion item.\"\"\"\n self._next_prev_item(True)\n\n @cmdutils.register(instance='completion', hide=True,\n modes=[usertypes.KeyMode.command], scope='window')\n def completion_item_next(self):\n \"\"\"Select the next completion item.\"\"\"\n self._next_prev_item(False)\n\n @cmdutils.register(instance='completion', hide=True,\n modes=[usertypes.KeyMode.command], scope='window')\n def completion_item_del(self):\n \"\"\"Delete the current completion item.\"\"\"\n if not self.currentIndex().isValid():\n raise cmdexc.CommandError(\"No item selected!\")\n try:\n self.model().srcmodel.delete_cur_item(self)\n except NotImplementedError:\n raise cmdexc.CommandError(\"Cannot delete this item.\")\n", "path": "qutebrowser/completion/completionwidget.py"}], "after_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2014-2016 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. 
If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Completion view for statusbar command section.\n\nDefines a CompletionView which uses CompletionFiterModel and CompletionModel\nsubclasses to provide completions.\n\"\"\"\n\nfrom PyQt5.QtWidgets import QStyle, QTreeView, QSizePolicy\nfrom PyQt5.QtCore import (pyqtSlot, pyqtSignal, Qt, QItemSelectionModel,\n QItemSelection)\n\nfrom qutebrowser.config import config, style\nfrom qutebrowser.completion import completiondelegate\nfrom qutebrowser.completion.models import base\nfrom qutebrowser.utils import qtutils, objreg, utils, usertypes\nfrom qutebrowser.commands import cmdexc, cmdutils\n\n\nclass CompletionView(QTreeView):\n\n \"\"\"The view showing available completions.\n\n Based on QTreeView but heavily customized so root elements show as category\n headers, and children show as flat list.\n\n Attributes:\n enabled: Whether showing the CompletionView is enabled.\n _win_id: The ID of the window this CompletionView is associated with.\n _height: The height to use for the CompletionView.\n _height_perc: Either None or a percentage if height should be relative.\n _delegate: The item delegate used.\n _column_widths: A list of column widths, in percent.\n\n Signals:\n resize_completion: Emitted when the completion should be resized.\n selection_changed: Emitted when the completion item selection changes.\n \"\"\"\n\n # Drawing the item foreground will be done by CompletionItemDelegate, so we\n # don't define that in this stylesheet.\n STYLESHEET = \"\"\"\n QTreeView {\n font: {{ font['completion'] }};\n background-color: {{ color['completion.bg'] }};\n alternate-background-color: {{ color['completion.alternate-bg'] }};\n outline: 0;\n border: 0px;\n }\n\n QTreeView::item:disabled {\n background-color: {{ color['completion.category.bg'] }};\n border-top: 1px solid\n {{ color['completion.category.border.top'] }};\n border-bottom: 1px solid\n {{ color['completion.category.border.bottom'] }};\n }\n\n QTreeView::item:selected, QTreeView::item:selected:hover {\n border-top: 1px solid\n {{ color['completion.item.selected.border.top'] }};\n border-bottom: 1px solid\n {{ color['completion.item.selected.border.bottom'] }};\n background-color: {{ color['completion.item.selected.bg'] }};\n }\n\n QTreeView:item::hover {\n border: 0px;\n }\n\n QTreeView QScrollBar {\n width: {{ config.get('completion', 'scrollbar-width') }}px;\n background: {{ color['completion.scrollbar.bg'] }};\n }\n\n QTreeView QScrollBar::handle {\n background: {{ color['completion.scrollbar.fg'] }};\n border: {{ config.get('completion', 'scrollbar-padding') }}px solid\n {{ color['completion.scrollbar.bg'] }};\n min-height: 10px;\n }\n\n QTreeView QScrollBar::sub-line, QScrollBar::add-line {\n border: none;\n background: none;\n }\n \"\"\"\n\n resize_completion = pyqtSignal()\n selection_changed = pyqtSignal(QItemSelection)\n\n def __init__(self, win_id, parent=None):\n super().__init__(parent)\n self._win_id = win_id\n self.enabled = config.get('completion', 'show')\n objreg.get('config').changed.connect(self.set_enabled)\n # FIXME handle new aliases.\n # objreg.get('config').changed.connect(self.init_command_completion)\n\n self._column_widths = base.BaseCompletionModel.COLUMN_WIDTHS\n\n self._delegate = completiondelegate.CompletionItemDelegate(self)\n self.setItemDelegate(self._delegate)\n style.set_register_stylesheet(self)\n self.setSizePolicy(QSizePolicy.Fixed, QSizePolicy.Minimum)\n self.setHeaderHidden(True)\n self.setAlternatingRowColors(True)\n self.setIndentation(0)\n 
self.setItemsExpandable(False)\n self.setExpandsOnDoubleClick(False)\n self.setHorizontalScrollBarPolicy(Qt.ScrollBarAlwaysOff)\n # WORKAROUND\n # This is a workaround for weird race conditions with invalid\n # item indexes leading to segfaults in Qt.\n #\n # Some background: http://bugs.quassel-irc.org/issues/663\n # The proposed fix there was later reverted because it didn't help.\n self.setUniformRowHeights(True)\n self.hide()\n # FIXME set elidemode\n # https://github.com/The-Compiler/qutebrowser/issues/118\n\n def __repr__(self):\n return utils.get_repr(self)\n\n def _resize_columns(self):\n \"\"\"Resize the completion columns based on column_widths.\"\"\"\n width = self.size().width()\n pixel_widths = [(width * perc // 100) for perc in self._column_widths]\n if self.verticalScrollBar().isVisible():\n pixel_widths[-1] -= self.style().pixelMetric(\n QStyle.PM_ScrollBarExtent) + 5\n for i, w in enumerate(pixel_widths):\n self.setColumnWidth(i, w)\n\n def _next_idx(self, upwards):\n \"\"\"Get the previous/next QModelIndex displayed in the view.\n\n Used by tab_handler.\n\n Args:\n upwards: Get previous item, not next.\n\n Return:\n A QModelIndex.\n \"\"\"\n idx = self.selectionModel().currentIndex()\n if not idx.isValid():\n # No item selected yet\n if upwards:\n return self.model().last_item()\n else:\n return self.model().first_item()\n while True:\n idx = self.indexAbove(idx) if upwards else self.indexBelow(idx)\n # wrap around if we arrived at beginning/end\n if not idx.isValid() and upwards:\n return self.model().last_item()\n elif not idx.isValid() and not upwards:\n idx = self.model().first_item()\n self.scrollTo(idx.parent())\n return idx\n elif idx.parent().isValid():\n # Item is a real item, not a category header -> success\n return idx\n\n def _next_prev_item(self, prev):\n \"\"\"Handle a tab press for the CompletionView.\n\n Select the previous/next item and write the new text to the\n statusbar.\n\n Helper for completion_item_next and completion_item_prev.\n\n Args:\n prev: True for prev item, False for next one.\n \"\"\"\n # selmodel can be None if 'show' and 'auto-open' are set to False\n # https://github.com/The-Compiler/qutebrowser/issues/1731\n selmodel = self.selectionModel()\n if (selmodel is not None):\n idx = self._next_idx(prev)\n selmodel.setCurrentIndex(idx,\n QItemSelectionModel.ClearAndSelect | QItemSelectionModel.Rows)\n\n def set_model(self, model):\n \"\"\"Switch completion to a new model.\n\n Called from on_update_completion().\n\n Args:\n model: The model to use.\n \"\"\"\n old_model = self.model()\n sel_model = self.selectionModel()\n\n self.setModel(model)\n\n if sel_model is not None:\n sel_model.deleteLater()\n if old_model is not None:\n old_model.deleteLater()\n\n for i in range(model.rowCount()):\n self.expand(model.index(i, 0))\n\n self._column_widths = model.srcmodel.COLUMN_WIDTHS\n self._resize_columns()\n self.maybe_resize_completion()\n\n def set_pattern(self, pattern):\n \"\"\"Set the completion pattern for the current model.\n\n Called from on_update_completion().\n\n Args:\n pattern: The filter pattern to set (what the user entered).\n \"\"\"\n self.model().set_pattern(pattern)\n self.maybe_resize_completion()\n\n @pyqtSlot()\n def maybe_resize_completion(self):\n \"\"\"Emit the resize_completion signal if the config says so.\"\"\"\n if config.get('completion', 'shrink'):\n self.resize_completion.emit()\n\n @config.change_filter('completion', 'show')\n def set_enabled(self):\n \"\"\"Update self.enabled when the config changed.\"\"\"\n 
self.enabled = config.get('completion', 'show')\n\n @pyqtSlot()\n def on_clear_completion_selection(self):\n \"\"\"Clear the selection model when an item is activated.\"\"\"\n selmod = self.selectionModel()\n if selmod is not None:\n selmod.clearSelection()\n selmod.clearCurrentIndex()\n\n def selectionChanged(self, selected, deselected):\n \"\"\"Extend selectionChanged to call completers selection_changed.\"\"\"\n super().selectionChanged(selected, deselected)\n self.selection_changed.emit(selected)\n\n def resizeEvent(self, e):\n \"\"\"Extend resizeEvent to adjust column size.\"\"\"\n super().resizeEvent(e)\n self._resize_columns()\n\n def showEvent(self, e):\n \"\"\"Adjust the completion size and scroll when it's freshly shown.\"\"\"\n self.resize_completion.emit()\n scrollbar = self.verticalScrollBar()\n if scrollbar is not None:\n scrollbar.setValue(scrollbar.minimum())\n super().showEvent(e)\n\n @cmdutils.register(instance='completion', hide=True,\n modes=[usertypes.KeyMode.command], scope='window')\n def completion_item_prev(self):\n \"\"\"Select the previous completion item.\"\"\"\n self._next_prev_item(True)\n\n @cmdutils.register(instance='completion', hide=True,\n modes=[usertypes.KeyMode.command], scope='window')\n def completion_item_next(self):\n \"\"\"Select the next completion item.\"\"\"\n self._next_prev_item(False)\n\n @cmdutils.register(instance='completion', hide=True,\n modes=[usertypes.KeyMode.command], scope='window')\n def completion_item_del(self):\n \"\"\"Delete the current completion item.\"\"\"\n if not self.currentIndex().isValid():\n raise cmdexc.CommandError(\"No item selected!\")\n try:\n self.model().srcmodel.delete_cur_item(self)\n except NotImplementedError:\n raise cmdexc.CommandError(\"Cannot delete this item.\")\n", "path": "qutebrowser/completion/completionwidget.py"}]} | 3,516 | 240 |
gh_patches_debug_29212 | rasdani/github-patches | git_diff | NVIDIA-Merlin__NVTabular-1414 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[FEA] Simplify AddMetadata Tag for NVTabular
**Is your feature request related to a problem? Please describe.**
Currently, we provide the functionality for the user to tag columns with the operator `AddMetadata`.
In practice, users will use the operator mainly for adding tags.
Should we provide a wrapper called `AddTag` or `TagAs` to simplify the AddMetadata operator?
Should we provide multiple wrappers for common tags, e.g.
`TagAsUserID()`, `TagAsItemID()`, `TagAsUserFeatures()`, `TagAsItemFeatures()`, etc.?
--- END ISSUE ---
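Before the code segments below, here is a rough sketch of the kind of convenience wrapper the issue is asking for. It only illustrates the idea by subclassing the existing `AddMetadata` operator; the `Tags` import path and constants are assumptions for the example, not a confirmed part of the library's API.

```python
# Illustrative sketch only: convenience wrappers layered on top of AddMetadata.
# The merlin Tags import path and the tag constants are assumed for the example.
from merlin.schema.tags import Tags

from nvtabular.ops import AddMetadata


class TagAs(AddMetadata):
    """Attach one or more schema tags to the selected columns."""

    def __init__(self, tags):
        super().__init__(tags=tags, properties=None)


class TagAsUserID(TagAs):
    """Shortcut for tagging the selected column as the user id."""

    def __init__(self):
        super().__init__(tags=[Tags.USER_ID])
```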
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nvtabular/ops/add_metadata.py`
Content:
```
1 #
2 # Copyright (c) 2021, NVIDIA CORPORATION.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 #
16 from nvtabular.dispatch import DataFrameType
17
18 from .operator import ColumnSelector, Operator
19
20
21 class AddMetadata(Operator):
22 """
23 This operator will add user defined tags and properties
24 to a Schema.
25 """
26
27 def __init__(self, tags=None, properties=None):
28 super().__init__()
29 self.tags = tags or []
30 self.properties = properties or {}
31
32 def transform(self, col_selector: ColumnSelector, df: DataFrameType) -> DataFrameType:
33 return df
34
35 @property
36 def output_tags(self):
37 return self.tags
38
39 @property
40 def output_properties(self):
41 return self.properties
42
```
Path: `nvtabular/ops/__init__.py`
Content:
```
1 #
2 # Copyright (c) 2021, NVIDIA CORPORATION.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 #
16
17 # alias submodules here to avoid breaking everything with moving to submodules
18 # flake8: noqa
19 from .add_metadata import AddMetadata
20 from .bucketize import Bucketize
21 from .categorify import Categorify, get_embedding_sizes
22 from .clip import Clip
23 from .column_similarity import ColumnSimilarity
24 from .data_stats import DataStats
25 from .difference_lag import DifferenceLag
26 from .drop_low_cardinality import DropLowCardinality
27 from .dropna import Dropna
28 from .fill import FillMedian, FillMissing
29 from .filter import Filter
30 from .groupby import Groupby
31 from .hash_bucket import HashBucket
32 from .hashed_cross import HashedCross
33 from .join_external import JoinExternal
34 from .join_groupby import JoinGroupby
35 from .lambdaop import LambdaOp
36 from .list_slice import ListSlice
37 from .logop import LogOp
38 from .normalize import Normalize, NormalizeMinMax
39 from .operator import ColumnSelector, Operator
40 from .reduce_dtype_size import ReduceDtypeSize
41 from .rename import Rename
42 from .stat_operator import StatOperator
43 from .target_encoding import TargetEncoding
44 from .value_counts import ValueCount
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nvtabular/ops/__init__.py b/nvtabular/ops/__init__.py
--- a/nvtabular/ops/__init__.py
+++ b/nvtabular/ops/__init__.py
@@ -16,7 +16,15 @@
# alias submodules here to avoid breaking everything with moving to submodules
# flake8: noqa
-from .add_metadata import AddMetadata
+from .add_metadata import (
+ AddMetadata,
+ AddProperties,
+ AddTags,
+ TagAsItemFeatures,
+ TagAsItemID,
+ TagAsUserFeatures,
+ TagAsUserID,
+)
from .bucketize import Bucketize
from .categorify import Categorify, get_embedding_sizes
from .clip import Clip
diff --git a/nvtabular/ops/add_metadata.py b/nvtabular/ops/add_metadata.py
--- a/nvtabular/ops/add_metadata.py
+++ b/nvtabular/ops/add_metadata.py
@@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
+from merlin.schema.tags import Tags
from nvtabular.dispatch import DataFrameType
from .operator import ColumnSelector, Operator
@@ -39,3 +40,38 @@
@property
def output_properties(self):
return self.properties
+
+
+class AddTags(AddMetadata):
+ def __init__(self, tags=None):
+ super().__init__(tags=tags)
+
+
+class AddProperties(AddMetadata):
+ def __init__(self, properties=None):
+ super().__init__(properties=properties)
+
+
+# Wrappers for common features
+class TagAsUserID(Operator):
+ @property
+ def output_tags(self):
+ return [Tags.USER_ID]
+
+
+class TagAsItemID(Operator):
+ @property
+ def output_tags(self):
+ return [Tags.ITEM_ID]
+
+
+class TagAsUserFeatures(Operator):
+ @property
+ def output_tags(self):
+ return [Tags.USER]
+
+
+class TagAsItemFeatures(Operator):
+ @property
+ def output_tags(self):
+ return [Tags.ITEM]
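With wrappers like the ones added above exported from `nvtabular.ops`, a usage sketch inside a workflow could look roughly like this; the column names and the final workflow wiring are illustrative, not taken from the repository.

```python
# Illustrative usage of the tagging wrappers when defining an NVTabular workflow.
import nvtabular as nvt
from nvtabular.ops import Categorify, TagAsItemID, TagAsUserID

# Tag the id columns while they are being encoded (column names are invented).
user_id = ["user_id"] >> Categorify() >> TagAsUserID()
item_id = ["item_id"] >> Categorify() >> TagAsItemID()

workflow = nvt.Workflow(user_id + item_id)
```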
| {"golden_diff": "diff --git a/nvtabular/ops/__init__.py b/nvtabular/ops/__init__.py\n--- a/nvtabular/ops/__init__.py\n+++ b/nvtabular/ops/__init__.py\n@@ -16,7 +16,15 @@\n \n # alias submodules here to avoid breaking everything with moving to submodules\n # flake8: noqa\n-from .add_metadata import AddMetadata\n+from .add_metadata import (\n+ AddMetadata,\n+ AddProperties,\n+ AddTags,\n+ TagAsItemFeatures,\n+ TagAsItemID,\n+ TagAsUserFeatures,\n+ TagAsUserID,\n+)\n from .bucketize import Bucketize\n from .categorify import Categorify, get_embedding_sizes\n from .clip import Clip\ndiff --git a/nvtabular/ops/add_metadata.py b/nvtabular/ops/add_metadata.py\n--- a/nvtabular/ops/add_metadata.py\n+++ b/nvtabular/ops/add_metadata.py\n@@ -13,6 +13,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n #\n+from merlin.schema.tags import Tags\n from nvtabular.dispatch import DataFrameType\n \n from .operator import ColumnSelector, Operator\n@@ -39,3 +40,38 @@\n @property\n def output_properties(self):\n return self.properties\n+\n+\n+class AddTags(AddMetadata):\n+ def __init__(self, tags=None):\n+ super().__init__(tags=tags)\n+\n+\n+class AddProperties(AddMetadata):\n+ def __init__(self, properties=None):\n+ super().__init__(properties=properties)\n+\n+\n+# Wrappers for common features\n+class TagAsUserID(Operator):\n+ @property\n+ def output_tags(self):\n+ return [Tags.USER_ID]\n+\n+\n+class TagAsItemID(Operator):\n+ @property\n+ def output_tags(self):\n+ return [Tags.ITEM_ID]\n+\n+\n+class TagAsUserFeatures(Operator):\n+ @property\n+ def output_tags(self):\n+ return [Tags.USER]\n+\n+\n+class TagAsItemFeatures(Operator):\n+ @property\n+ def output_tags(self):\n+ return [Tags.ITEM]\n", "issue": "[FEA] Simplify AddMetadata Tag for NVTabular\n**Is your feature request related to a problem? Please describe.**\r\nCurrently, we provide the functionality for the user to tag columns with the operator `AddMetadata`.\r\nThe use case is that users will use the operator mainly for adding tags. 
\r\n\r\nShould we provide a wrapper called `AddTag` or `TagAs` to simplify the AddMetadata operator?\r\nShould we provide multiple wrappers for common tags - e.g.\r\n\r\n`TagAsUserID()`, `TagAsItemID()`, `TagAsUserFeatures()`, `TagAsItemFeatures()`, etc.\r\n\r\n\n", "before_files": [{"content": "#\n# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nfrom nvtabular.dispatch import DataFrameType\n\nfrom .operator import ColumnSelector, Operator\n\n\nclass AddMetadata(Operator):\n \"\"\"\n This operator will add user defined tags and properties\n to a Schema.\n \"\"\"\n\n def __init__(self, tags=None, properties=None):\n super().__init__()\n self.tags = tags or []\n self.properties = properties or {}\n\n def transform(self, col_selector: ColumnSelector, df: DataFrameType) -> DataFrameType:\n return df\n\n @property\n def output_tags(self):\n return self.tags\n\n @property\n def output_properties(self):\n return self.properties\n", "path": "nvtabular/ops/add_metadata.py"}, {"content": "#\n# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n# alias submodules here to avoid breaking everything with moving to submodules\n# flake8: noqa\nfrom .add_metadata import AddMetadata\nfrom .bucketize import Bucketize\nfrom .categorify import Categorify, get_embedding_sizes\nfrom .clip import Clip\nfrom .column_similarity import ColumnSimilarity\nfrom .data_stats import DataStats\nfrom .difference_lag import DifferenceLag\nfrom .drop_low_cardinality import DropLowCardinality\nfrom .dropna import Dropna\nfrom .fill import FillMedian, FillMissing\nfrom .filter import Filter\nfrom .groupby import Groupby\nfrom .hash_bucket import HashBucket\nfrom .hashed_cross import HashedCross\nfrom .join_external import JoinExternal\nfrom .join_groupby import JoinGroupby\nfrom .lambdaop import LambdaOp\nfrom .list_slice import ListSlice\nfrom .logop import LogOp\nfrom .normalize import Normalize, NormalizeMinMax\nfrom .operator import ColumnSelector, Operator\nfrom .reduce_dtype_size import ReduceDtypeSize\nfrom .rename import Rename\nfrom .stat_operator import StatOperator\nfrom .target_encoding import TargetEncoding\nfrom .value_counts import ValueCount\n", "path": "nvtabular/ops/__init__.py"}], "after_files": [{"content": "#\n# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the 
License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nfrom merlin.schema.tags import Tags\nfrom nvtabular.dispatch import DataFrameType\n\nfrom .operator import ColumnSelector, Operator\n\n\nclass AddMetadata(Operator):\n \"\"\"\n This operator will add user defined tags and properties\n to a Schema.\n \"\"\"\n\n def __init__(self, tags=None, properties=None):\n super().__init__()\n self.tags = tags or []\n self.properties = properties or {}\n\n def transform(self, col_selector: ColumnSelector, df: DataFrameType) -> DataFrameType:\n return df\n\n @property\n def output_tags(self):\n return self.tags\n\n @property\n def output_properties(self):\n return self.properties\n\n\nclass AddTags(AddMetadata):\n def __init__(self, tags=None):\n super().__init__(tags=tags)\n\n\nclass AddProperties(AddMetadata):\n def __init__(self, properties=None):\n super().__init__(properties=properties)\n\n\n# Wrappers for common features\nclass TagAsUserID(Operator):\n @property\n def output_tags(self):\n return [Tags.USER_ID]\n\n\nclass TagAsItemID(Operator):\n @property\n def output_tags(self):\n return [Tags.ITEM_ID]\n\n\nclass TagAsUserFeatures(Operator):\n @property\n def output_tags(self):\n return [Tags.USER]\n\n\nclass TagAsItemFeatures(Operator):\n @property\n def output_tags(self):\n return [Tags.ITEM]\n", "path": "nvtabular/ops/add_metadata.py"}, {"content": "#\n# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n# alias submodules here to avoid breaking everything with moving to submodules\n# flake8: noqa\nfrom .add_metadata import (\n AddMetadata,\n AddProperties,\n AddTags,\n TagAsItemFeatures,\n TagAsItemID,\n TagAsUserFeatures,\n TagAsUserID,\n)\nfrom .bucketize import Bucketize\nfrom .categorify import Categorify, get_embedding_sizes\nfrom .clip import Clip\nfrom .column_similarity import ColumnSimilarity\nfrom .data_stats import DataStats\nfrom .difference_lag import DifferenceLag\nfrom .drop_low_cardinality import DropLowCardinality\nfrom .dropna import Dropna\nfrom .fill import FillMedian, FillMissing\nfrom .filter import Filter\nfrom .groupby import Groupby\nfrom .hash_bucket import HashBucket\nfrom .hashed_cross import HashedCross\nfrom .join_external import JoinExternal\nfrom .join_groupby import JoinGroupby\nfrom .lambdaop import LambdaOp\nfrom .list_slice import ListSlice\nfrom .logop import LogOp\nfrom .normalize import Normalize, NormalizeMinMax\nfrom .operator import ColumnSelector, Operator\nfrom .reduce_dtype_size import ReduceDtypeSize\nfrom .rename import Rename\nfrom .stat_operator import StatOperator\nfrom .target_encoding import TargetEncoding\nfrom .value_counts import ValueCount\n", "path": 
"nvtabular/ops/__init__.py"}]} | 1,224 | 499 |
gh_patches_debug_41866 | rasdani/github-patches | git_diff | pymedusa__Medusa-4929 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[APP SUBMITTED]: AttributeError: 'NoneType' object has no attribute 'content'
### INFO
**Python Version**: `2.7.13 (v2.7.13:a06454b1afa1, Dec 17 2016, 20:53:40) [MSC v.1500 64 bit (AMD64)]`
**Operating System**: `Windows-10-10.0.17134`
**Locale**: `cp1252`
**Branch**: [master](../tree/master)
**Database**: `44.11`
**Commit**: pymedusa/Medusa@f5973b28465246e67465d7d86bdc8aa74d13e659
**Link to Log**: https://gist.github.com/65cbbe28cae113abf840e4ccfacfb9b0
### ERROR
<pre>
2018-08-04 15:19:44 ERROR SEARCHQUEUE-DAILY-SEARCH :: [f5973b2] DailySearchQueueItem Exception, error: 'NoneType' object has no attribute 'content'
Traceback (most recent call last):
File "C:\Medusa\Medusa\<a href="../blob/f5973b28465246e67465d7d86bdc8aa74d13e659/medusa/search/queue.py#L289">medusa\search\queue.py</a>", line 289, in run
self.success = snatch_episode(result)
File "C:\Medusa\Medusa\<a href="../blob/f5973b28465246e67465d7d86bdc8aa74d13e659/medusa/search/core.py#L135">medusa\search\core.py</a>", line 135, in snatch_episode
nzb_data = result.provider.download_nzb_for_post(result)
File "C:\Medusa\Medusa\<a href="../blob/f5973b28465246e67465d7d86bdc8aa74d13e659/medusa/providers/nzb/binsearch.py#L267">medusa\providers\nzb\binsearch.py</a>", line 267, in download_nzb_for_post
verify=verify, hooks={}, allow_redirects=True).content
AttributeError: 'NoneType' object has no attribute 'content'
</pre>
---
_STAFF NOTIFIED_: @pymedusa/support @pymedusa/moderators
--- END ISSUE ---
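The traceback above comes down to `session.post(...)` returning `None` and the code immediately dereferencing `.content` on it. A standalone sketch of the guard that avoids that crash is shown below; the helper name and signature are invented for the illustration and are not Medusa's actual API.

```python
# Sketch of the guard: some wrapped sessions return None on failure (as the
# traceback shows), so check the response before touching .content.
import logging

import requests

log = logging.getLogger(__name__)


def download_nzb(session: requests.Session, url: str, data: dict):
    """Return the NZB body, or None when the POST yields nothing usable."""
    response = session.post(url, data=data, allow_redirects=True)
    if response is None or not response.content:
        log.warning('No usable response while downloading the NZB from %s', url)
        return None
    return response.content
```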
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `medusa/providers/nzb/binsearch.py`
Content:
```
1 # coding=utf-8
2
3 """Provider code for Binsearch provider."""
4
5 from __future__ import unicode_literals
6
7 import logging
8 import re
9 from builtins import zip
10 from os.path import join
11
12 from medusa import tv
13 from medusa.bs4_parser import BS4Parser
14 from medusa.helper.common import convert_size, sanitize_filename
15 from medusa.helpers import download_file
16 from medusa.logger.adapters.style import BraceAdapter
17 from medusa.providers.nzb.nzb_provider import NZBProvider
18
19 from requests.compat import urljoin
20
21 log = BraceAdapter(logging.getLogger(__name__))
22 log.logger.addHandler(logging.NullHandler())
23
24
25 class BinSearchProvider(NZBProvider):
26 """BinSearch Newznab provider."""
27
28 size_regex = re.compile(r'size: (\d+\.\d+\xa0\w{2}), parts', re.I)
29 title_regex = re.compile(r'\"([^\"]+)"', re.I)
30 title_reqex_clean = re.compile(r'^[ \d_]+ (.+)')
31 title_regex_rss = re.compile(r'- \"([^\"]+)"', re.I)
32 nzb_check_segment = re.compile(r'<segment bytes="[\d]+"')
33
34 def __init__(self):
35 """Initialize the class."""
36 super(BinSearchProvider, self).__init__('BinSearch')
37
38 # Credentials
39 self.public = True
40
41 # URLs
42 self.url = 'https://www.binsearch.info'
43 self.urls = {
44 'search': urljoin(self.url, 'index.php'),
45 'rss': urljoin(self.url, 'browse.php')
46 }
47
48 # Proper Strings
49 self.proper_strings = ['PROPER', 'REPACK', 'REAL', 'RERIP']
50
51 # Miscellaneous Options
52
53 # Cache
54 self.cache = tv.Cache(self, min_time=10)
55
56 def search(self, search_strings, **kwargs):
57 """
58 Search a provider and parse the results.
59
60 :param search_strings: A dict with mode (key) and the search value (value)
61 :returns: A list of search results (structure)
62 """
63 results = []
64 search_params = {
65 'adv_age': '',
66 'xminsize': 20,
67 'max': 250,
68 }
69 groups = [1, 2]
70
71 for mode in search_strings:
72 log.debug('Search mode: {0}', mode)
73 # https://www.binsearch.info/browse.php?bg=alt.binaries.teevee&server=2
74 for search_string in search_strings[mode]:
75 search_params['q'] = search_string
76 for group in groups:
77 # Try both 'search in the most popular groups' & 'search in the other groups' modes
78 search_params['server'] = group
79 if mode != 'RSS':
80 log.debug('Search string: {search}', {'search': search_string})
81 search_url = self.urls['search']
82 else:
83 search_params = {
84 'bg': 'alt.binaries.teevee',
85 'server': 2,
86 'max': 50,
87 }
88 search_url = self.urls['rss']
89 response = self.session.get(search_url, params=search_params)
90 if not response:
91 log.debug('No data returned from provider')
92 continue
93
94 results += self.parse(response.text, mode)
95
96 return results
97
98 def parse(self, data, mode):
99 """
100 Parse search results for items.
101
102 :param data: The raw response from a search
103 :param mode: The current mode used to search, e.g. RSS
104
105 :return: A list of items found
106 """
107 def process_column_header(td):
108 return td.get_text(strip=True).lower()
109
110 items = []
111
112 with BS4Parser(data, 'html5lib') as html:
113
114 # We need to store the post url, to be used with every result later on.
115 post_url = html.find('form', {'method': 'post'})['action']
116
117 table = html.find('table', class_='xMenuT')
118 rows = table('tr') if table else []
119 row_offset = 1
120 if not rows or not len(rows) - row_offset:
121 log.debug('Data returned from provider does not contain any torrents')
122 return items
123
124 headers = rows[0]('th')
125 # 0, 1, subject, poster, group, age
126 labels = [process_column_header(header) or idx
127 for idx, header in enumerate(headers)]
128
129 # Skip column headers
130 rows = rows[row_offset:]
131 for row in rows:
132 try:
133 col = dict(list(zip(labels, row('td'))))
134 nzb_id_input = col[0 if mode == 'RSS' else 1].find('input')
135 if not nzb_id_input:
136 continue
137 nzb_id = nzb_id_input['name']
138 # Try and get the the article subject from the weird binsearch format
139 title = self.clean_title(col['subject'].text, mode)
140
141 except AttributeError:
142 log.debug('Parsing rows, that may not always have usefull info. Skipping to next.')
143 continue
144 if not all([title, nzb_id]):
145 continue
146 # Obtain the size from the 'description'
147 size_field = BinSearchProvider.size_regex.search(col['subject'].text)
148 if size_field:
149 size_field = size_field.group(1)
150 size = convert_size(size_field, sep='\xa0') or -1
151 size = int(size)
152
153 download_url = urljoin(self.url, '{post_url}|nzb_id={nzb_id}'.format(post_url=post_url, nzb_id=nzb_id))
154
155 # For future use
156 # detail_url = 'https://www.binsearch.info/?q={0}'.format(title)
157 human_time = True
158 date = col['age' if mode != 'RSS' else 'date'].get_text(strip=True).replace('-', ' ')
159 if mode == 'RSS':
160 human_time = False
161 pubdate_raw = date
162 pubdate = self.parse_pubdate(pubdate_raw, human_time=human_time)
163
164 item = {
165 'title': title,
166 'link': download_url,
167 'size': size,
168 'pubdate': pubdate,
169 }
170 if mode != 'RSS':
171 log.debug('Found result: {0}', title)
172
173 items.append(item)
174
175 return items
176
177 @staticmethod
178 def clean_title(title, mode):
179 """
180 Clean title field, using a series of regex.
181
182 RSS search requires different cleaning then the other searches.
183 When adding to this function, make sure you update the tests.
184 """
185 try:
186 if mode == 'RSS':
187 title = BinSearchProvider.title_regex_rss.search(title).group(1)
188 else:
189 title = BinSearchProvider.title_regex.search(title).group(1)
190 if BinSearchProvider.title_reqex_clean.search(title):
191 title = BinSearchProvider.title_reqex_clean.search(title).group(1)
192 for extension in ('.nfo', '.par2', '.rar', '.zip', '.nzb', '.part'):
193 # Strip extensions that aren't part of the file name
194 if title.endswith(extension):
195 title = title[:len(title) - len(extension)]
196 return title
197 except AttributeError:
198 return None
199
200 def download_result(self, result):
201 """
202 Download result from provider.
203
204 This is used when a blackhole is used for sending the nzb file to the nzb client.
205 For now the url and the post data is stored as one string in the db, using a pipe (|) to separate them.
206
207 :param result: A SearchResult object.
208 :return: The result of the nzb download (True/False).
209 """
210 if not self.login():
211 return False
212
213 result_name = sanitize_filename(result.name)
214 filename = join(self._get_storage_dir(), result_name + '.' + self.provider_type)
215
216 if result.url.startswith('http'):
217 self.session.headers.update({
218 'Referer': '/'.join(result.url.split('/')[:3]) + '/'
219 })
220
221 log.info('Downloading {result} from {provider} at {url}',
222 {'result': result.name, 'provider': self.name, 'url': result.url})
223
224 verify = False if self.public else None
225
226 url, data = result.url.split('|')
227
228 data = {
229 data.split('=')[1]: 'on',
230 'action': 'nzb'
231 }
232
233 if download_file(url, filename, method='POST', data=data, session=self.session,
234 headers=self.headers, verify=verify):
235
236 if self._verify_download(filename):
237 log.info('Saved {result} to {location}',
238 {'result': result.name, 'location': filename})
239 return True
240
241 return False
242
243 def download_nzb_for_post(self, result):
244 """
245 Download the nzb content, prior to sending it to the nzb download client.
246
247 :param result: Nzb SearchResult object.
248 :return: The content of the nzb file if successful else None.
249 """
250 if not self.login():
251 return False
252
253 # For now to separate the url and the post data, where splitting it with a pipe.
254 url, data = result.url.split('|')
255
256 data = {
257 data.split('=')[1]: 'on',
258 'action': 'nzb'
259 }
260
261 log.info('Downloading {result} from {provider} at {url} and data {data}',
262 {'result': result.name, 'provider': self.name, 'url': result.url, 'data': data})
263
264 verify = False if self.public else None
265
266 result = self.session.post(url, data=data, headers=self.session.headers,
267 verify=verify, hooks={}, allow_redirects=True).content
268
269 # Validate that the result has the content of a valid nzb.
270 if not BinSearchProvider.nzb_check_segment.search(result):
271 log.info('Result returned from BinSearch was not a valid nzb')
272 return None
273
274 return result
275
276 def _get_size(self, item):
277 """
278 Get result size.
279
280 Overwrite this, as the default _get_size() from nzb_provider isn't working for us.
281 :param item:
282 :return: size in bytes or -1
283 """
284 return item.get('size', -1)
285
286
287 provider = BinSearchProvider()
288
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/medusa/providers/nzb/binsearch.py b/medusa/providers/nzb/binsearch.py
--- a/medusa/providers/nzb/binsearch.py
+++ b/medusa/providers/nzb/binsearch.py
@@ -42,7 +42,7 @@
self.url = 'https://www.binsearch.info'
self.urls = {
'search': urljoin(self.url, 'index.php'),
- 'rss': urljoin(self.url, 'browse.php')
+ 'rss': urljoin(self.url, 'browse.php'),
}
# Proper Strings
@@ -86,8 +86,9 @@
'max': 50,
}
search_url = self.urls['rss']
+
response = self.session.get(search_url, params=search_params)
- if not response:
+ if not response or not response.text:
log.debug('No data returned from provider')
continue
@@ -139,10 +140,11 @@
title = self.clean_title(col['subject'].text, mode)
except AttributeError:
- log.debug('Parsing rows, that may not always have usefull info. Skipping to next.')
+ log.debug('Parsing rows, that may not always have useful info. Skipping to next.')
continue
if not all([title, nzb_id]):
continue
+
# Obtain the size from the 'description'
size_field = BinSearchProvider.size_regex.search(col['subject'].text)
if size_field:
@@ -227,7 +229,7 @@
data = {
data.split('=')[1]: 'on',
- 'action': 'nzb'
+ 'action': 'nzb',
}
if download_file(url, filename, method='POST', data=data, session=self.session,
@@ -255,7 +257,7 @@
data = {
data.split('=')[1]: 'on',
- 'action': 'nzb'
+ 'action': 'nzb',
}
log.info('Downloading {result} from {provider} at {url} and data {data}',
@@ -263,15 +265,18 @@
verify = False if self.public else None
- result = self.session.post(url, data=data, headers=self.session.headers,
- verify=verify, hooks={}, allow_redirects=True).content
+ response = self.session.post(url, data=data, headers=self.session.headers,
+ verify=verify, hooks={}, allow_redirects=True)
+ if not response or not response.content:
+ log.warning('Failed to download the NZB from BinSearch')
+ return None
# Validate that the result has the content of a valid nzb.
- if not BinSearchProvider.nzb_check_segment.search(result):
- log.info('Result returned from BinSearch was not a valid nzb')
+ if not BinSearchProvider.nzb_check_segment.search(response.content):
+ log.warning('Result returned from BinSearch was not a valid NZB')
return None
- return result
+ return response.content
def _get_size(self, item):
"""
| {"golden_diff": "diff --git a/medusa/providers/nzb/binsearch.py b/medusa/providers/nzb/binsearch.py\n--- a/medusa/providers/nzb/binsearch.py\n+++ b/medusa/providers/nzb/binsearch.py\n@@ -42,7 +42,7 @@\n self.url = 'https://www.binsearch.info'\n self.urls = {\n 'search': urljoin(self.url, 'index.php'),\n- 'rss': urljoin(self.url, 'browse.php')\n+ 'rss': urljoin(self.url, 'browse.php'),\n }\n \n # Proper Strings\n@@ -86,8 +86,9 @@\n 'max': 50,\n }\n search_url = self.urls['rss']\n+\n response = self.session.get(search_url, params=search_params)\n- if not response:\n+ if not response or not response.text:\n log.debug('No data returned from provider')\n continue\n \n@@ -139,10 +140,11 @@\n title = self.clean_title(col['subject'].text, mode)\n \n except AttributeError:\n- log.debug('Parsing rows, that may not always have usefull info. Skipping to next.')\n+ log.debug('Parsing rows, that may not always have useful info. Skipping to next.')\n continue\n if not all([title, nzb_id]):\n continue\n+\n # Obtain the size from the 'description'\n size_field = BinSearchProvider.size_regex.search(col['subject'].text)\n if size_field:\n@@ -227,7 +229,7 @@\n \n data = {\n data.split('=')[1]: 'on',\n- 'action': 'nzb'\n+ 'action': 'nzb',\n }\n \n if download_file(url, filename, method='POST', data=data, session=self.session,\n@@ -255,7 +257,7 @@\n \n data = {\n data.split('=')[1]: 'on',\n- 'action': 'nzb'\n+ 'action': 'nzb',\n }\n \n log.info('Downloading {result} from {provider} at {url} and data {data}',\n@@ -263,15 +265,18 @@\n \n verify = False if self.public else None\n \n- result = self.session.post(url, data=data, headers=self.session.headers,\n- verify=verify, hooks={}, allow_redirects=True).content\n+ response = self.session.post(url, data=data, headers=self.session.headers,\n+ verify=verify, hooks={}, allow_redirects=True)\n+ if not response or not response.content:\n+ log.warning('Failed to download the NZB from BinSearch')\n+ return None\n \n # Validate that the result has the content of a valid nzb.\n- if not BinSearchProvider.nzb_check_segment.search(result):\n- log.info('Result returned from BinSearch was not a valid nzb')\n+ if not BinSearchProvider.nzb_check_segment.search(response.content):\n+ log.warning('Result returned from BinSearch was not a valid NZB')\n return None\n \n- return result\n+ return response.content\n \n def _get_size(self, item):\n \"\"\"\n", "issue": "[APP SUBMITTED]: AttributeError: 'NoneType' object has no attribute 'content'\n\n### INFO\n**Python Version**: `2.7.13 (v2.7.13:a06454b1afa1, Dec 17 2016, 20:53:40) [MSC v.1500 64 bit (AMD64)]`\n**Operating System**: `Windows-10-10.0.17134`\n**Locale**: `cp1252`\n**Branch**: [master](../tree/master)\n**Database**: `44.11`\n**Commit**: pymedusa/Medusa@f5973b28465246e67465d7d86bdc8aa74d13e659\n**Link to Log**: https://gist.github.com/65cbbe28cae113abf840e4ccfacfb9b0\n### ERROR\n<pre>\n2018-08-04 15:19:44 ERROR SEARCHQUEUE-DAILY-SEARCH :: [f5973b2] DailySearchQueueItem Exception, error: 'NoneType' object has no attribute 'content'\nTraceback (most recent call last):\n File \"C:\\Medusa\\Medusa\\<a href=\"../blob/f5973b28465246e67465d7d86bdc8aa74d13e659/medusa/search/queue.py#L289\">medusa\\search\\queue.py</a>\", line 289, in run\n self.success = snatch_episode(result)\n File \"C:\\Medusa\\Medusa\\<a href=\"../blob/f5973b28465246e67465d7d86bdc8aa74d13e659/medusa/search/core.py#L135\">medusa\\search\\core.py</a>\", line 135, in snatch_episode\n nzb_data = result.provider.download_nzb_for_post(result)\n File 
\"C:\\Medusa\\Medusa\\<a href=\"../blob/f5973b28465246e67465d7d86bdc8aa74d13e659/medusa/providers/nzb/binsearch.py#L267\">medusa\\providers\\nzb\\binsearch.py</a>\", line 267, in download_nzb_for_post\n verify=verify, hooks={}, allow_redirects=True).content\nAttributeError: 'NoneType' object has no attribute 'content'\n</pre>\n---\n_STAFF NOTIFIED_: @pymedusa/support @pymedusa/moderators\n\n", "before_files": [{"content": "# coding=utf-8\n\n\"\"\"Provider code for Binsearch provider.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport logging\nimport re\nfrom builtins import zip\nfrom os.path import join\n\nfrom medusa import tv\nfrom medusa.bs4_parser import BS4Parser\nfrom medusa.helper.common import convert_size, sanitize_filename\nfrom medusa.helpers import download_file\nfrom medusa.logger.adapters.style import BraceAdapter\nfrom medusa.providers.nzb.nzb_provider import NZBProvider\n\nfrom requests.compat import urljoin\n\nlog = BraceAdapter(logging.getLogger(__name__))\nlog.logger.addHandler(logging.NullHandler())\n\n\nclass BinSearchProvider(NZBProvider):\n \"\"\"BinSearch Newznab provider.\"\"\"\n\n size_regex = re.compile(r'size: (\\d+\\.\\d+\\xa0\\w{2}), parts', re.I)\n title_regex = re.compile(r'\\\"([^\\\"]+)\"', re.I)\n title_reqex_clean = re.compile(r'^[ \\d_]+ (.+)')\n title_regex_rss = re.compile(r'- \\\"([^\\\"]+)\"', re.I)\n nzb_check_segment = re.compile(r'<segment bytes=\"[\\d]+\"')\n\n def __init__(self):\n \"\"\"Initialize the class.\"\"\"\n super(BinSearchProvider, self).__init__('BinSearch')\n\n # Credentials\n self.public = True\n\n # URLs\n self.url = 'https://www.binsearch.info'\n self.urls = {\n 'search': urljoin(self.url, 'index.php'),\n 'rss': urljoin(self.url, 'browse.php')\n }\n\n # Proper Strings\n self.proper_strings = ['PROPER', 'REPACK', 'REAL', 'RERIP']\n\n # Miscellaneous Options\n\n # Cache\n self.cache = tv.Cache(self, min_time=10)\n\n def search(self, search_strings, **kwargs):\n \"\"\"\n Search a provider and parse the results.\n\n :param search_strings: A dict with mode (key) and the search value (value)\n :returns: A list of search results (structure)\n \"\"\"\n results = []\n search_params = {\n 'adv_age': '',\n 'xminsize': 20,\n 'max': 250,\n }\n groups = [1, 2]\n\n for mode in search_strings:\n log.debug('Search mode: {0}', mode)\n # https://www.binsearch.info/browse.php?bg=alt.binaries.teevee&server=2\n for search_string in search_strings[mode]:\n search_params['q'] = search_string\n for group in groups:\n # Try both 'search in the most popular groups' & 'search in the other groups' modes\n search_params['server'] = group\n if mode != 'RSS':\n log.debug('Search string: {search}', {'search': search_string})\n search_url = self.urls['search']\n else:\n search_params = {\n 'bg': 'alt.binaries.teevee',\n 'server': 2,\n 'max': 50,\n }\n search_url = self.urls['rss']\n response = self.session.get(search_url, params=search_params)\n if not response:\n log.debug('No data returned from provider')\n continue\n\n results += self.parse(response.text, mode)\n\n return results\n\n def parse(self, data, mode):\n \"\"\"\n Parse search results for items.\n\n :param data: The raw response from a search\n :param mode: The current mode used to search, e.g. 
RSS\n\n :return: A list of items found\n \"\"\"\n def process_column_header(td):\n return td.get_text(strip=True).lower()\n\n items = []\n\n with BS4Parser(data, 'html5lib') as html:\n\n # We need to store the post url, to be used with every result later on.\n post_url = html.find('form', {'method': 'post'})['action']\n\n table = html.find('table', class_='xMenuT')\n rows = table('tr') if table else []\n row_offset = 1\n if not rows or not len(rows) - row_offset:\n log.debug('Data returned from provider does not contain any torrents')\n return items\n\n headers = rows[0]('th')\n # 0, 1, subject, poster, group, age\n labels = [process_column_header(header) or idx\n for idx, header in enumerate(headers)]\n\n # Skip column headers\n rows = rows[row_offset:]\n for row in rows:\n try:\n col = dict(list(zip(labels, row('td'))))\n nzb_id_input = col[0 if mode == 'RSS' else 1].find('input')\n if not nzb_id_input:\n continue\n nzb_id = nzb_id_input['name']\n # Try and get the the article subject from the weird binsearch format\n title = self.clean_title(col['subject'].text, mode)\n\n except AttributeError:\n log.debug('Parsing rows, that may not always have usefull info. Skipping to next.')\n continue\n if not all([title, nzb_id]):\n continue\n # Obtain the size from the 'description'\n size_field = BinSearchProvider.size_regex.search(col['subject'].text)\n if size_field:\n size_field = size_field.group(1)\n size = convert_size(size_field, sep='\\xa0') or -1\n size = int(size)\n\n download_url = urljoin(self.url, '{post_url}|nzb_id={nzb_id}'.format(post_url=post_url, nzb_id=nzb_id))\n\n # For future use\n # detail_url = 'https://www.binsearch.info/?q={0}'.format(title)\n human_time = True\n date = col['age' if mode != 'RSS' else 'date'].get_text(strip=True).replace('-', ' ')\n if mode == 'RSS':\n human_time = False\n pubdate_raw = date\n pubdate = self.parse_pubdate(pubdate_raw, human_time=human_time)\n\n item = {\n 'title': title,\n 'link': download_url,\n 'size': size,\n 'pubdate': pubdate,\n }\n if mode != 'RSS':\n log.debug('Found result: {0}', title)\n\n items.append(item)\n\n return items\n\n @staticmethod\n def clean_title(title, mode):\n \"\"\"\n Clean title field, using a series of regex.\n\n RSS search requires different cleaning then the other searches.\n When adding to this function, make sure you update the tests.\n \"\"\"\n try:\n if mode == 'RSS':\n title = BinSearchProvider.title_regex_rss.search(title).group(1)\n else:\n title = BinSearchProvider.title_regex.search(title).group(1)\n if BinSearchProvider.title_reqex_clean.search(title):\n title = BinSearchProvider.title_reqex_clean.search(title).group(1)\n for extension in ('.nfo', '.par2', '.rar', '.zip', '.nzb', '.part'):\n # Strip extensions that aren't part of the file name\n if title.endswith(extension):\n title = title[:len(title) - len(extension)]\n return title\n except AttributeError:\n return None\n\n def download_result(self, result):\n \"\"\"\n Download result from provider.\n\n This is used when a blackhole is used for sending the nzb file to the nzb client.\n For now the url and the post data is stored as one string in the db, using a pipe (|) to separate them.\n\n :param result: A SearchResult object.\n :return: The result of the nzb download (True/False).\n \"\"\"\n if not self.login():\n return False\n\n result_name = sanitize_filename(result.name)\n filename = join(self._get_storage_dir(), result_name + '.' 
+ self.provider_type)\n\n if result.url.startswith('http'):\n self.session.headers.update({\n 'Referer': '/'.join(result.url.split('/')[:3]) + '/'\n })\n\n log.info('Downloading {result} from {provider} at {url}',\n {'result': result.name, 'provider': self.name, 'url': result.url})\n\n verify = False if self.public else None\n\n url, data = result.url.split('|')\n\n data = {\n data.split('=')[1]: 'on',\n 'action': 'nzb'\n }\n\n if download_file(url, filename, method='POST', data=data, session=self.session,\n headers=self.headers, verify=verify):\n\n if self._verify_download(filename):\n log.info('Saved {result} to {location}',\n {'result': result.name, 'location': filename})\n return True\n\n return False\n\n def download_nzb_for_post(self, result):\n \"\"\"\n Download the nzb content, prior to sending it to the nzb download client.\n\n :param result: Nzb SearchResult object.\n :return: The content of the nzb file if successful else None.\n \"\"\"\n if not self.login():\n return False\n\n # For now to separate the url and the post data, where splitting it with a pipe.\n url, data = result.url.split('|')\n\n data = {\n data.split('=')[1]: 'on',\n 'action': 'nzb'\n }\n\n log.info('Downloading {result} from {provider} at {url} and data {data}',\n {'result': result.name, 'provider': self.name, 'url': result.url, 'data': data})\n\n verify = False if self.public else None\n\n result = self.session.post(url, data=data, headers=self.session.headers,\n verify=verify, hooks={}, allow_redirects=True).content\n\n # Validate that the result has the content of a valid nzb.\n if not BinSearchProvider.nzb_check_segment.search(result):\n log.info('Result returned from BinSearch was not a valid nzb')\n return None\n\n return result\n\n def _get_size(self, item):\n \"\"\"\n Get result size.\n\n Overwrite this, as the default _get_size() from nzb_provider isn't working for us.\n :param item:\n :return: size in bytes or -1\n \"\"\"\n return item.get('size', -1)\n\n\nprovider = BinSearchProvider()\n", "path": "medusa/providers/nzb/binsearch.py"}], "after_files": [{"content": "# coding=utf-8\n\n\"\"\"Provider code for Binsearch provider.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport logging\nimport re\nfrom builtins import zip\nfrom os.path import join\n\nfrom medusa import tv\nfrom medusa.bs4_parser import BS4Parser\nfrom medusa.helper.common import convert_size, sanitize_filename\nfrom medusa.helpers import download_file\nfrom medusa.logger.adapters.style import BraceAdapter\nfrom medusa.providers.nzb.nzb_provider import NZBProvider\n\nfrom requests.compat import urljoin\n\nlog = BraceAdapter(logging.getLogger(__name__))\nlog.logger.addHandler(logging.NullHandler())\n\n\nclass BinSearchProvider(NZBProvider):\n \"\"\"BinSearch Newznab provider.\"\"\"\n\n size_regex = re.compile(r'size: (\\d+\\.\\d+\\xa0\\w{2}), parts', re.I)\n title_regex = re.compile(r'\\\"([^\\\"]+)\"', re.I)\n title_reqex_clean = re.compile(r'^[ \\d_]+ (.+)')\n title_regex_rss = re.compile(r'- \\\"([^\\\"]+)\"', re.I)\n nzb_check_segment = re.compile(r'<segment bytes=\"[\\d]+\"')\n\n def __init__(self):\n \"\"\"Initialize the class.\"\"\"\n super(BinSearchProvider, self).__init__('BinSearch')\n\n # Credentials\n self.public = True\n\n # URLs\n self.url = 'https://www.binsearch.info'\n self.urls = {\n 'search': urljoin(self.url, 'index.php'),\n 'rss': urljoin(self.url, 'browse.php'),\n }\n\n # Proper Strings\n self.proper_strings = ['PROPER', 'REPACK', 'REAL', 'RERIP']\n\n # Miscellaneous Options\n\n # Cache\n self.cache = 
tv.Cache(self, min_time=10)\n\n def search(self, search_strings, **kwargs):\n \"\"\"\n Search a provider and parse the results.\n\n :param search_strings: A dict with mode (key) and the search value (value)\n :returns: A list of search results (structure)\n \"\"\"\n results = []\n search_params = {\n 'adv_age': '',\n 'xminsize': 20,\n 'max': 250,\n }\n groups = [1, 2]\n\n for mode in search_strings:\n log.debug('Search mode: {0}', mode)\n # https://www.binsearch.info/browse.php?bg=alt.binaries.teevee&server=2\n for search_string in search_strings[mode]:\n search_params['q'] = search_string\n for group in groups:\n # Try both 'search in the most popular groups' & 'search in the other groups' modes\n search_params['server'] = group\n if mode != 'RSS':\n log.debug('Search string: {search}', {'search': search_string})\n search_url = self.urls['search']\n else:\n search_params = {\n 'bg': 'alt.binaries.teevee',\n 'server': 2,\n 'max': 50,\n }\n search_url = self.urls['rss']\n\n response = self.session.get(search_url, params=search_params)\n if not response or not response.text:\n log.debug('No data returned from provider')\n continue\n\n results += self.parse(response.text, mode)\n\n return results\n\n def parse(self, data, mode):\n \"\"\"\n Parse search results for items.\n\n :param data: The raw response from a search\n :param mode: The current mode used to search, e.g. RSS\n\n :return: A list of items found\n \"\"\"\n def process_column_header(td):\n return td.get_text(strip=True).lower()\n\n items = []\n\n with BS4Parser(data, 'html5lib') as html:\n\n # We need to store the post url, to be used with every result later on.\n post_url = html.find('form', {'method': 'post'})['action']\n\n table = html.find('table', class_='xMenuT')\n rows = table('tr') if table else []\n row_offset = 1\n if not rows or not len(rows) - row_offset:\n log.debug('Data returned from provider does not contain any torrents')\n return items\n\n headers = rows[0]('th')\n # 0, 1, subject, poster, group, age\n labels = [process_column_header(header) or idx\n for idx, header in enumerate(headers)]\n\n # Skip column headers\n rows = rows[row_offset:]\n for row in rows:\n try:\n col = dict(list(zip(labels, row('td'))))\n nzb_id_input = col[0 if mode == 'RSS' else 1].find('input')\n if not nzb_id_input:\n continue\n nzb_id = nzb_id_input['name']\n # Try and get the the article subject from the weird binsearch format\n title = self.clean_title(col['subject'].text, mode)\n\n except AttributeError:\n log.debug('Parsing rows, that may not always have useful info. 
Skipping to next.')\n continue\n if not all([title, nzb_id]):\n continue\n\n # Obtain the size from the 'description'\n size_field = BinSearchProvider.size_regex.search(col['subject'].text)\n if size_field:\n size_field = size_field.group(1)\n size = convert_size(size_field, sep='\\xa0') or -1\n size = int(size)\n\n download_url = urljoin(self.url, '{post_url}|nzb_id={nzb_id}'.format(post_url=post_url, nzb_id=nzb_id))\n\n # For future use\n # detail_url = 'https://www.binsearch.info/?q={0}'.format(title)\n human_time = True\n date = col['age' if mode != 'RSS' else 'date'].get_text(strip=True).replace('-', ' ')\n if mode == 'RSS':\n human_time = False\n pubdate_raw = date\n pubdate = self.parse_pubdate(pubdate_raw, human_time=human_time)\n\n item = {\n 'title': title,\n 'link': download_url,\n 'size': size,\n 'pubdate': pubdate,\n }\n if mode != 'RSS':\n log.debug('Found result: {0}', title)\n\n items.append(item)\n\n return items\n\n @staticmethod\n def clean_title(title, mode):\n \"\"\"\n Clean title field, using a series of regex.\n\n RSS search requires different cleaning then the other searches.\n When adding to this function, make sure you update the tests.\n \"\"\"\n try:\n if mode == 'RSS':\n title = BinSearchProvider.title_regex_rss.search(title).group(1)\n else:\n title = BinSearchProvider.title_regex.search(title).group(1)\n if BinSearchProvider.title_reqex_clean.search(title):\n title = BinSearchProvider.title_reqex_clean.search(title).group(1)\n for extension in ('.nfo', '.par2', '.rar', '.zip', '.nzb', '.part'):\n # Strip extensions that aren't part of the file name\n if title.endswith(extension):\n title = title[:len(title) - len(extension)]\n return title\n except AttributeError:\n return None\n\n def download_result(self, result):\n \"\"\"\n Download result from provider.\n\n This is used when a blackhole is used for sending the nzb file to the nzb client.\n For now the url and the post data is stored as one string in the db, using a pipe (|) to separate them.\n\n :param result: A SearchResult object.\n :return: The result of the nzb download (True/False).\n \"\"\"\n if not self.login():\n return False\n\n result_name = sanitize_filename(result.name)\n filename = join(self._get_storage_dir(), result_name + '.' 
+ self.provider_type)\n\n if result.url.startswith('http'):\n self.session.headers.update({\n 'Referer': '/'.join(result.url.split('/')[:3]) + '/'\n })\n\n log.info('Downloading {result} from {provider} at {url}',\n {'result': result.name, 'provider': self.name, 'url': result.url})\n\n verify = False if self.public else None\n\n url, data = result.url.split('|')\n\n data = {\n data.split('=')[1]: 'on',\n 'action': 'nzb',\n }\n\n if download_file(url, filename, method='POST', data=data, session=self.session,\n headers=self.headers, verify=verify):\n\n if self._verify_download(filename):\n log.info('Saved {result} to {location}',\n {'result': result.name, 'location': filename})\n return True\n\n return False\n\n def download_nzb_for_post(self, result):\n \"\"\"\n Download the nzb content, prior to sending it to the nzb download client.\n\n :param result: Nzb SearchResult object.\n :return: The content of the nzb file if successful else None.\n \"\"\"\n if not self.login():\n return False\n\n # For now to separate the url and the post data, where splitting it with a pipe.\n url, data = result.url.split('|')\n\n data = {\n data.split('=')[1]: 'on',\n 'action': 'nzb',\n }\n\n log.info('Downloading {result} from {provider} at {url} and data {data}',\n {'result': result.name, 'provider': self.name, 'url': result.url, 'data': data})\n\n verify = False if self.public else None\n\n response = self.session.post(url, data=data, headers=self.session.headers,\n verify=verify, hooks={}, allow_redirects=True)\n if not response or not response.content:\n log.warning('Failed to download the NZB from BinSearch')\n return None\n\n # Validate that the result has the content of a valid nzb.\n if not BinSearchProvider.nzb_check_segment.search(response.content):\n log.warning('Result returned from BinSearch was not a valid NZB')\n return None\n\n return response.content\n\n def _get_size(self, item):\n \"\"\"\n Get result size.\n\n Overwrite this, as the default _get_size() from nzb_provider isn't working for us.\n :param item:\n :return: size in bytes or -1\n \"\"\"\n return item.get('size', -1)\n\n\nprovider = BinSearchProvider()\n", "path": "medusa/providers/nzb/binsearch.py"}]} | 3,901 | 688 |
gh_patches_debug_41674 | rasdani/github-patches | git_diff | liberapay__liberapay.com-441 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Passing bad data to basic auth can result in a 500
https://sentry.changaco.oy.lc/share/issue/322e3532/
To reproduce: `curl --basic -u admin:admin 'http://localhost:8339/'`
--- END ISSUE ---
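For context on why this surfaces as a 500 rather than a clean 4xx: the `Authorization` header is base64-decoded and split on `:`, and the result is fed straight into an id-based lookup. Below is a minimal standalone sketch of that parsing path, not Liberapay's actual code; the header value is simply what `curl --basic -u admin:admin` sends, and the digit check mirrors the guard added by the patch further down this entry.

```python
import binascii

# What `curl --basic -u admin:admin` puts in the Authorization header.
header = b"Basic YWRtaW46YWRtaW4="

try:
    # "admin:admin" contains a colon, so this yields uid="admin", pwd="admin".
    # A decoded value without a colon would raise ValueError on unpacking.
    uid, pwd = binascii.a2b_base64(header[len("Basic "):]).decode("utf8").split(":", 1)
except (binascii.Error, UnicodeDecodeError, ValueError):
    print("400 Malformed 'Authorization' header")
else:
    if not uid.isdigit():
        # Liberapay resolves HTTP-auth users by numeric participant id, so a
        # non-numeric uid like "admin" previously went on to
        # Participant.authenticate('id', ...) and ended in the reported 500.
        print("401 Invalid user id: expected an integer, got `%s`" % uid)
    else:
        print("uid=%s looks like a participant id; proceed to authenticate" % uid)
```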
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `liberapay/security/csrf.py`
Content:
```
1 """Cross Site Request Forgery middleware, borrowed from Django.
2
3 See also:
4
5 https://github.com/django/django/blob/master/django/middleware/csrf.py
6 https://docs.djangoproject.com/en/dev/ref/contrib/csrf/
7 https://github.com/gratipay/gratipay.com/issues/88
8
9 """
10 from __future__ import absolute_import, division, print_function, unicode_literals
11
12 from datetime import timedelta
13 import re
14
15 from .crypto import constant_time_compare, get_random_string
16
17
18 TOKEN_LENGTH = 32
19 CSRF_TOKEN = str('csrf_token') # bytes in python2, unicode in python3
20 CSRF_TIMEOUT = timedelta(days=7)
21
22 _get_new_token = lambda: get_random_string(TOKEN_LENGTH)
23 _token_re = re.compile(r'^[a-zA-Z0-9]{%d}$' % TOKEN_LENGTH)
24 _sanitize_token = lambda t: t if _token_re.match(t) else None
25
26
27 def extract_token_from_cookie(request):
28 """Given a Request object, return a csrf_token.
29 """
30 try:
31 token = request.headers.cookie[CSRF_TOKEN].value
32 except KeyError:
33 token = None
34 else:
35 token = _sanitize_token(token)
36
37 # Don't set a CSRF cookie on assets, to avoid busting the cache.
38 # Don't set it on callbacks, because we don't need it there.
39
40 if request.path.raw.startswith('/assets/') or request.path.raw.startswith('/callbacks/'):
41 token = None
42 else:
43 token = token or _get_new_token()
44
45 return {'csrf_token': token}
46
47
48 def reject_forgeries(request, response, csrf_token):
49 # Assume that anything not defined as 'safe' by RC2616 needs protection.
50 if request.line.method not in ('GET', 'HEAD', 'OPTIONS', 'TRACE'):
51
52 # except webhooks
53 if request.line.uri.startswith('/callbacks/'):
54 return
55 # and requests using HTTP auth
56 if b'Authorization' in request.headers:
57 return
58
59 # Check non-cookie token for match.
60 second_token = ""
61 if request.line.method == "POST":
62 if isinstance(request.body, dict):
63 second_token = request.body.get('csrf_token', '')
64
65 if second_token == "":
66 # Fall back to X-CSRF-TOKEN, to make things easier for AJAX,
67 # and possible for PUT/DELETE.
68 second_token = request.headers.get(b'X-CSRF-TOKEN', b'').decode('ascii', 'replace')
69
70 if not constant_time_compare(second_token, csrf_token):
71 raise response.error(403, "Bad CSRF cookie")
72
73
74 def add_token_to_response(response, csrf_token=None):
75 """Store the latest CSRF token as a cookie.
76 """
77 if csrf_token:
78 # Don't set httponly so that we can POST using XHR.
79 # https://github.com/gratipay/gratipay.com/issues/3030
80 response.set_cookie(CSRF_TOKEN, csrf_token, expires=CSRF_TIMEOUT, httponly=False)
81
```
Path: `liberapay/security/authentication.py`
Content:
```
1 """Defines website authentication helpers.
2 """
3 import binascii
4
5 from six.moves.urllib.parse import urlencode
6
7 from pando import Response
8
9 from liberapay.constants import SESSION, SESSION_TIMEOUT
10 from liberapay.exceptions import LoginRequired
11 from liberapay.models.participant import Participant
12
13
14 class _ANON(object):
15 ANON = True
16 is_admin = False
17 id = None
18 __bool__ = __nonzero__ = lambda *a: False
19 get_tip_to = lambda self, tippee: Participant._zero_tip_dict(tippee)
20 __repr__ = lambda self: '<ANON>'
21
22
23 ANON = _ANON()
24
25
26 def _get_body(request):
27 try:
28 body = request.body
29 except Response:
30 return
31 if not isinstance(body, dict):
32 return
33 return body
34
35
36 def sign_in_with_form_data(body, state):
37 p = None
38 _, website = state['_'], state['website']
39
40 if body.get('log-in.id'):
41 id = body.pop('log-in.id')
42 password = body.pop('log-in.password', None)
43 k = 'email' if '@' in id else 'username'
44 if password:
45 p = Participant.authenticate(
46 k, 'password',
47 id, password,
48 )
49 if not p:
50 state['log-in.error'] = _("Bad username or password.")
51 elif k == 'username':
52 state['log-in.error'] = _("\"{0}\" is not a valid email address.", id)
53 return
54 else:
55 email = id
56 p = Participant._from_thing('email', email)
57 if p:
58 p.start_session()
59 qs = {'log-in.id': p.id, 'log-in.token': p.session_token}
60 p.send_email(
61 'login_link',
62 email=email,
63 link=p.url('settings/', qs),
64 link_validity=SESSION_TIMEOUT,
65 )
66 state['log-in.email-sent-to'] = email
67 else:
68 state['log-in.error'] = _(
69 "We didn't find any account whose primary email address is {0}.",
70 email
71 )
72 p = None
73
74 elif 'sign-in.email' in body:
75 response = state['response']
76 kind = body.pop('sign-in.kind')
77 if kind not in ('individual', 'organization'):
78 raise response.error(400, 'bad kind')
79 email = body.pop('sign-in.email')
80 if not email:
81 raise response.error(400, 'email is required')
82 with website.db.get_cursor() as c:
83 p = Participant.make_active(
84 kind, body.pop('sign-in.username', None),
85 body.pop('sign-in.password', None), cursor=c,
86 )
87 p.set_email_lang(state['request'].headers.get(b'Accept-Language'), cursor=c)
88 p.add_email(email, cursor=c)
89 p.authenticated = True
90
91 return p
92
93
94 def start_user_as_anon():
95 """Make sure we always have a user object, regardless of exceptions during authentication.
96 """
97 return {'user': ANON}
98
99
100 def authenticate_user_if_possible(request, response, state, user, _):
101 """This signs the user in.
102 """
103 if request.line.uri.startswith('/assets/'):
104 return
105
106 # HTTP auth
107 if b'Authorization' in request.headers:
108 header = request.headers[b'Authorization']
109 if not header.startswith(b'Basic '):
110 raise response.error(401, 'Unsupported authentication method')
111 try:
112 creds = binascii.a2b_base64(header[len('Basic '):]).decode('utf8').split(':', 1)
113 except (binascii.Error, UnicodeDecodeError):
114 raise response.error(400, 'Malformed "Authorization" header')
115 participant = Participant.authenticate('id', 'password', *creds)
116 if not participant:
117 raise response.error(401, 'Invalid credentials')
118 return {'user': participant}
119
120 # Cookie and form auth
121 # We want to try cookie auth first, but we want form auth to supersede it
122 p = None
123 if SESSION in request.headers.cookie:
124 creds = request.headers.cookie[SESSION].value.split(':', 1)
125 p = Participant.authenticate('id', 'session', *creds)
126 if p:
127 state['user'] = p
128 session_p, p = p, None
129 session_suffix = ''
130 redirect_url = request.line.uri
131 if request.method == 'POST':
132 body = _get_body(request)
133 if body:
134 p = sign_in_with_form_data(body, state)
135 carry_on = body.pop('log-in.carry-on', None)
136 if not p and carry_on:
137 p_email = session_p and (
138 session_p.email or session_p.get_emails()[0].address
139 )
140 if p_email != carry_on:
141 state['log-in.carry-on'] = carry_on
142 raise LoginRequired
143 redirect_url = body.get('sign-in.back-to') or redirect_url
144 elif request.method == 'GET' and request.qs.get('log-in.id'):
145 id, token = request.qs.pop('log-in.id'), request.qs.pop('log-in.token')
146 p = Participant.authenticate('id', 'session', id, token)
147 if not p and (not session_p or session_p.id != id):
148 raise response.error(400, _("This login link is expired or invalid."))
149 else:
150 qs = '?' + urlencode(request.qs, doseq=True) if request.qs else ''
151 redirect_url = request.path.raw + qs
152 session_p = p
153 session_suffix = '.em'
154 if p:
155 if session_p:
156 session_p.sign_out(response.headers.cookie)
157 if p.status == 'closed':
158 p.update_status('active')
159 p.sign_in(response.headers.cookie, session_suffix)
160 state['user'] = p
161 if request.body.pop('form.repost', None) != 'true':
162 response.redirect(redirect_url)
163
164
165 def add_auth_to_response(response, request=None, user=ANON):
166 if request is None:
167 return # early parsing must've failed
168 if request.line.uri.startswith('/assets/'):
169 return # assets never get auth headers
170
171 if SESSION in request.headers.cookie:
172 if not user.ANON:
173 user.keep_signed_in(response.headers.cookie)
174
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/liberapay/security/authentication.py b/liberapay/security/authentication.py
--- a/liberapay/security/authentication.py
+++ b/liberapay/security/authentication.py
@@ -109,10 +109,12 @@
if not header.startswith(b'Basic '):
raise response.error(401, 'Unsupported authentication method')
try:
- creds = binascii.a2b_base64(header[len('Basic '):]).decode('utf8').split(':', 1)
- except (binascii.Error, UnicodeDecodeError):
+ uid, pwd = binascii.a2b_base64(header[len('Basic '):]).decode('utf8').split(':', 1)
+ except (binascii.Error, UnicodeDecodeError, ValueError):
raise response.error(400, 'Malformed "Authorization" header')
- participant = Participant.authenticate('id', 'password', *creds)
+ if not uid.isdigit():
+ raise response.error(401, 'Invalid user id: expected an integer, got `%s`' % uid)
+ participant = Participant.authenticate('id', 'password', uid, pwd)
if not participant:
raise response.error(401, 'Invalid credentials')
return {'user': participant}
diff --git a/liberapay/security/csrf.py b/liberapay/security/csrf.py
--- a/liberapay/security/csrf.py
+++ b/liberapay/security/csrf.py
@@ -1,4 +1,4 @@
-"""Cross Site Request Forgery middleware, borrowed from Django.
+"""Cross Site Request Forgery middleware, originally borrowed from Django.
See also:
@@ -18,6 +18,7 @@
TOKEN_LENGTH = 32
CSRF_TOKEN = str('csrf_token') # bytes in python2, unicode in python3
CSRF_TIMEOUT = timedelta(days=7)
+SAFE_METHODS = {'GET', 'HEAD', 'OPTIONS', 'TRACE'}
_get_new_token = lambda: get_random_string(TOKEN_LENGTH)
_token_re = re.compile(r'^[a-zA-Z0-9]{%d}$' % TOKEN_LENGTH)
@@ -27,34 +28,36 @@
def extract_token_from_cookie(request):
"""Given a Request object, return a csrf_token.
"""
- try:
- token = request.headers.cookie[CSRF_TOKEN].value
- except KeyError:
- token = None
- else:
- token = _sanitize_token(token)
- # Don't set a CSRF cookie on assets, to avoid busting the cache.
- # Don't set it on callbacks, because we don't need it there.
+ off = (
+ # Turn off CSRF protection on assets, to avoid busting the cache.
+ request.path.raw.startswith('/assets/') or
+ # Turn off CSRF protection on callbacks, so they can receive POST requests.
+ request.path.raw.startswith('/callbacks/') or
+ # Turn off CSRF when using HTTP auth, so API users can use POST and others.
+ b'Authorization' in request.headers
+ )
- if request.path.raw.startswith('/assets/') or request.path.raw.startswith('/callbacks/'):
+ if off:
token = None
else:
- token = token or _get_new_token()
+ try:
+ token = request.headers.cookie[CSRF_TOKEN].value
+ except KeyError:
+ token = _get_new_token()
+ else:
+ token = _sanitize_token(token) or _get_new_token()
return {'csrf_token': token}
def reject_forgeries(request, response, csrf_token):
- # Assume that anything not defined as 'safe' by RC2616 needs protection.
- if request.line.method not in ('GET', 'HEAD', 'OPTIONS', 'TRACE'):
-
- # except webhooks
- if request.line.uri.startswith('/callbacks/'):
- return
- # and requests using HTTP auth
- if b'Authorization' in request.headers:
- return
+ if csrf_token is None:
+ # CSRF protection is turned off for this request
+ return
+
+ # Assume that anything not defined as 'safe' by RFC7231 needs protection.
+ if request.line.method not in SAFE_METHODS:
# Check non-cookie token for match.
second_token = ""
| {"golden_diff": "diff --git a/liberapay/security/authentication.py b/liberapay/security/authentication.py\n--- a/liberapay/security/authentication.py\n+++ b/liberapay/security/authentication.py\n@@ -109,10 +109,12 @@\n if not header.startswith(b'Basic '):\n raise response.error(401, 'Unsupported authentication method')\n try:\n- creds = binascii.a2b_base64(header[len('Basic '):]).decode('utf8').split(':', 1)\n- except (binascii.Error, UnicodeDecodeError):\n+ uid, pwd = binascii.a2b_base64(header[len('Basic '):]).decode('utf8').split(':', 1)\n+ except (binascii.Error, UnicodeDecodeError, ValueError):\n raise response.error(400, 'Malformed \"Authorization\" header')\n- participant = Participant.authenticate('id', 'password', *creds)\n+ if not uid.isdigit():\n+ raise response.error(401, 'Invalid user id: expected an integer, got `%s`' % uid)\n+ participant = Participant.authenticate('id', 'password', uid, pwd)\n if not participant:\n raise response.error(401, 'Invalid credentials')\n return {'user': participant}\ndiff --git a/liberapay/security/csrf.py b/liberapay/security/csrf.py\n--- a/liberapay/security/csrf.py\n+++ b/liberapay/security/csrf.py\n@@ -1,4 +1,4 @@\n-\"\"\"Cross Site Request Forgery middleware, borrowed from Django.\n+\"\"\"Cross Site Request Forgery middleware, originally borrowed from Django.\n \n See also:\n \n@@ -18,6 +18,7 @@\n TOKEN_LENGTH = 32\n CSRF_TOKEN = str('csrf_token') # bytes in python2, unicode in python3\n CSRF_TIMEOUT = timedelta(days=7)\n+SAFE_METHODS = {'GET', 'HEAD', 'OPTIONS', 'TRACE'}\n \n _get_new_token = lambda: get_random_string(TOKEN_LENGTH)\n _token_re = re.compile(r'^[a-zA-Z0-9]{%d}$' % TOKEN_LENGTH)\n@@ -27,34 +28,36 @@\n def extract_token_from_cookie(request):\n \"\"\"Given a Request object, return a csrf_token.\n \"\"\"\n- try:\n- token = request.headers.cookie[CSRF_TOKEN].value\n- except KeyError:\n- token = None\n- else:\n- token = _sanitize_token(token)\n \n- # Don't set a CSRF cookie on assets, to avoid busting the cache.\n- # Don't set it on callbacks, because we don't need it there.\n+ off = (\n+ # Turn off CSRF protection on assets, to avoid busting the cache.\n+ request.path.raw.startswith('/assets/') or\n+ # Turn off CSRF protection on callbacks, so they can receive POST requests.\n+ request.path.raw.startswith('/callbacks/') or\n+ # Turn off CSRF when using HTTP auth, so API users can use POST and others.\n+ b'Authorization' in request.headers\n+ )\n \n- if request.path.raw.startswith('/assets/') or request.path.raw.startswith('/callbacks/'):\n+ if off:\n token = None\n else:\n- token = token or _get_new_token()\n+ try:\n+ token = request.headers.cookie[CSRF_TOKEN].value\n+ except KeyError:\n+ token = _get_new_token()\n+ else:\n+ token = _sanitize_token(token) or _get_new_token()\n \n return {'csrf_token': token}\n \n \n def reject_forgeries(request, response, csrf_token):\n- # Assume that anything not defined as 'safe' by RC2616 needs protection.\n- if request.line.method not in ('GET', 'HEAD', 'OPTIONS', 'TRACE'):\n-\n- # except webhooks\n- if request.line.uri.startswith('/callbacks/'):\n- return\n- # and requests using HTTP auth\n- if b'Authorization' in request.headers:\n- return\n+ if csrf_token is None:\n+ # CSRF protection is turned off for this request\n+ return\n+\n+ # Assume that anything not defined as 'safe' by RFC7231 needs protection.\n+ if request.line.method not in SAFE_METHODS:\n \n # Check non-cookie token for match.\n second_token = \"\"\n", "issue": "Passing bad data to basic auth can result in a 
500\nhttps://sentry.changaco.oy.lc/share/issue/322e3532/\n\nTo reproduce: `curl --basic -u admin:admin 'http://localhost:8339/'`\n\n", "before_files": [{"content": "\"\"\"Cross Site Request Forgery middleware, borrowed from Django.\n\nSee also:\n\n https://github.com/django/django/blob/master/django/middleware/csrf.py\n https://docs.djangoproject.com/en/dev/ref/contrib/csrf/\n https://github.com/gratipay/gratipay.com/issues/88\n\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom datetime import timedelta\nimport re\n\nfrom .crypto import constant_time_compare, get_random_string\n\n\nTOKEN_LENGTH = 32\nCSRF_TOKEN = str('csrf_token') # bytes in python2, unicode in python3\nCSRF_TIMEOUT = timedelta(days=7)\n\n_get_new_token = lambda: get_random_string(TOKEN_LENGTH)\n_token_re = re.compile(r'^[a-zA-Z0-9]{%d}$' % TOKEN_LENGTH)\n_sanitize_token = lambda t: t if _token_re.match(t) else None\n\n\ndef extract_token_from_cookie(request):\n \"\"\"Given a Request object, return a csrf_token.\n \"\"\"\n try:\n token = request.headers.cookie[CSRF_TOKEN].value\n except KeyError:\n token = None\n else:\n token = _sanitize_token(token)\n\n # Don't set a CSRF cookie on assets, to avoid busting the cache.\n # Don't set it on callbacks, because we don't need it there.\n\n if request.path.raw.startswith('/assets/') or request.path.raw.startswith('/callbacks/'):\n token = None\n else:\n token = token or _get_new_token()\n\n return {'csrf_token': token}\n\n\ndef reject_forgeries(request, response, csrf_token):\n # Assume that anything not defined as 'safe' by RC2616 needs protection.\n if request.line.method not in ('GET', 'HEAD', 'OPTIONS', 'TRACE'):\n\n # except webhooks\n if request.line.uri.startswith('/callbacks/'):\n return\n # and requests using HTTP auth\n if b'Authorization' in request.headers:\n return\n\n # Check non-cookie token for match.\n second_token = \"\"\n if request.line.method == \"POST\":\n if isinstance(request.body, dict):\n second_token = request.body.get('csrf_token', '')\n\n if second_token == \"\":\n # Fall back to X-CSRF-TOKEN, to make things easier for AJAX,\n # and possible for PUT/DELETE.\n second_token = request.headers.get(b'X-CSRF-TOKEN', b'').decode('ascii', 'replace')\n\n if not constant_time_compare(second_token, csrf_token):\n raise response.error(403, \"Bad CSRF cookie\")\n\n\ndef add_token_to_response(response, csrf_token=None):\n \"\"\"Store the latest CSRF token as a cookie.\n \"\"\"\n if csrf_token:\n # Don't set httponly so that we can POST using XHR.\n # https://github.com/gratipay/gratipay.com/issues/3030\n response.set_cookie(CSRF_TOKEN, csrf_token, expires=CSRF_TIMEOUT, httponly=False)\n", "path": "liberapay/security/csrf.py"}, {"content": "\"\"\"Defines website authentication helpers.\n\"\"\"\nimport binascii\n\nfrom six.moves.urllib.parse import urlencode\n\nfrom pando import Response\n\nfrom liberapay.constants import SESSION, SESSION_TIMEOUT\nfrom liberapay.exceptions import LoginRequired\nfrom liberapay.models.participant import Participant\n\n\nclass _ANON(object):\n ANON = True\n is_admin = False\n id = None\n __bool__ = __nonzero__ = lambda *a: False\n get_tip_to = lambda self, tippee: Participant._zero_tip_dict(tippee)\n __repr__ = lambda self: '<ANON>'\n\n\nANON = _ANON()\n\n\ndef _get_body(request):\n try:\n body = request.body\n except Response:\n return\n if not isinstance(body, dict):\n return\n return body\n\n\ndef sign_in_with_form_data(body, state):\n p = None\n _, website = state['_'], 
state['website']\n\n if body.get('log-in.id'):\n id = body.pop('log-in.id')\n password = body.pop('log-in.password', None)\n k = 'email' if '@' in id else 'username'\n if password:\n p = Participant.authenticate(\n k, 'password',\n id, password,\n )\n if not p:\n state['log-in.error'] = _(\"Bad username or password.\")\n elif k == 'username':\n state['log-in.error'] = _(\"\\\"{0}\\\" is not a valid email address.\", id)\n return\n else:\n email = id\n p = Participant._from_thing('email', email)\n if p:\n p.start_session()\n qs = {'log-in.id': p.id, 'log-in.token': p.session_token}\n p.send_email(\n 'login_link',\n email=email,\n link=p.url('settings/', qs),\n link_validity=SESSION_TIMEOUT,\n )\n state['log-in.email-sent-to'] = email\n else:\n state['log-in.error'] = _(\n \"We didn't find any account whose primary email address is {0}.\",\n email\n )\n p = None\n\n elif 'sign-in.email' in body:\n response = state['response']\n kind = body.pop('sign-in.kind')\n if kind not in ('individual', 'organization'):\n raise response.error(400, 'bad kind')\n email = body.pop('sign-in.email')\n if not email:\n raise response.error(400, 'email is required')\n with website.db.get_cursor() as c:\n p = Participant.make_active(\n kind, body.pop('sign-in.username', None),\n body.pop('sign-in.password', None), cursor=c,\n )\n p.set_email_lang(state['request'].headers.get(b'Accept-Language'), cursor=c)\n p.add_email(email, cursor=c)\n p.authenticated = True\n\n return p\n\n\ndef start_user_as_anon():\n \"\"\"Make sure we always have a user object, regardless of exceptions during authentication.\n \"\"\"\n return {'user': ANON}\n\n\ndef authenticate_user_if_possible(request, response, state, user, _):\n \"\"\"This signs the user in.\n \"\"\"\n if request.line.uri.startswith('/assets/'):\n return\n\n # HTTP auth\n if b'Authorization' in request.headers:\n header = request.headers[b'Authorization']\n if not header.startswith(b'Basic '):\n raise response.error(401, 'Unsupported authentication method')\n try:\n creds = binascii.a2b_base64(header[len('Basic '):]).decode('utf8').split(':', 1)\n except (binascii.Error, UnicodeDecodeError):\n raise response.error(400, 'Malformed \"Authorization\" header')\n participant = Participant.authenticate('id', 'password', *creds)\n if not participant:\n raise response.error(401, 'Invalid credentials')\n return {'user': participant}\n\n # Cookie and form auth\n # We want to try cookie auth first, but we want form auth to supersede it\n p = None\n if SESSION in request.headers.cookie:\n creds = request.headers.cookie[SESSION].value.split(':', 1)\n p = Participant.authenticate('id', 'session', *creds)\n if p:\n state['user'] = p\n session_p, p = p, None\n session_suffix = ''\n redirect_url = request.line.uri\n if request.method == 'POST':\n body = _get_body(request)\n if body:\n p = sign_in_with_form_data(body, state)\n carry_on = body.pop('log-in.carry-on', None)\n if not p and carry_on:\n p_email = session_p and (\n session_p.email or session_p.get_emails()[0].address\n )\n if p_email != carry_on:\n state['log-in.carry-on'] = carry_on\n raise LoginRequired\n redirect_url = body.get('sign-in.back-to') or redirect_url\n elif request.method == 'GET' and request.qs.get('log-in.id'):\n id, token = request.qs.pop('log-in.id'), request.qs.pop('log-in.token')\n p = Participant.authenticate('id', 'session', id, token)\n if not p and (not session_p or session_p.id != id):\n raise response.error(400, _(\"This login link is expired or invalid.\"))\n else:\n qs = '?' 
+ urlencode(request.qs, doseq=True) if request.qs else ''\n redirect_url = request.path.raw + qs\n session_p = p\n session_suffix = '.em'\n if p:\n if session_p:\n session_p.sign_out(response.headers.cookie)\n if p.status == 'closed':\n p.update_status('active')\n p.sign_in(response.headers.cookie, session_suffix)\n state['user'] = p\n if request.body.pop('form.repost', None) != 'true':\n response.redirect(redirect_url)\n\n\ndef add_auth_to_response(response, request=None, user=ANON):\n if request is None:\n return # early parsing must've failed\n if request.line.uri.startswith('/assets/'):\n return # assets never get auth headers\n\n if SESSION in request.headers.cookie:\n if not user.ANON:\n user.keep_signed_in(response.headers.cookie)\n", "path": "liberapay/security/authentication.py"}], "after_files": [{"content": "\"\"\"Cross Site Request Forgery middleware, originally borrowed from Django.\n\nSee also:\n\n https://github.com/django/django/blob/master/django/middleware/csrf.py\n https://docs.djangoproject.com/en/dev/ref/contrib/csrf/\n https://github.com/gratipay/gratipay.com/issues/88\n\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom datetime import timedelta\nimport re\n\nfrom .crypto import constant_time_compare, get_random_string\n\n\nTOKEN_LENGTH = 32\nCSRF_TOKEN = str('csrf_token') # bytes in python2, unicode in python3\nCSRF_TIMEOUT = timedelta(days=7)\nSAFE_METHODS = {'GET', 'HEAD', 'OPTIONS', 'TRACE'}\n\n_get_new_token = lambda: get_random_string(TOKEN_LENGTH)\n_token_re = re.compile(r'^[a-zA-Z0-9]{%d}$' % TOKEN_LENGTH)\n_sanitize_token = lambda t: t if _token_re.match(t) else None\n\n\ndef extract_token_from_cookie(request):\n \"\"\"Given a Request object, return a csrf_token.\n \"\"\"\n\n off = (\n # Turn off CSRF protection on assets, to avoid busting the cache.\n request.path.raw.startswith('/assets/') or\n # Turn off CSRF protection on callbacks, so they can receive POST requests.\n request.path.raw.startswith('/callbacks/') or\n # Turn off CSRF when using HTTP auth, so API users can use POST and others.\n b'Authorization' in request.headers\n )\n\n if off:\n token = None\n else:\n try:\n token = request.headers.cookie[CSRF_TOKEN].value\n except KeyError:\n token = _get_new_token()\n else:\n token = _sanitize_token(token) or _get_new_token()\n\n return {'csrf_token': token}\n\n\ndef reject_forgeries(request, response, csrf_token):\n if csrf_token is None:\n # CSRF protection is turned off for this request\n return\n\n # Assume that anything not defined as 'safe' by RFC7231 needs protection.\n if request.line.method not in SAFE_METHODS:\n\n # Check non-cookie token for match.\n second_token = \"\"\n if request.line.method == \"POST\":\n if isinstance(request.body, dict):\n second_token = request.body.get('csrf_token', '')\n\n if second_token == \"\":\n # Fall back to X-CSRF-TOKEN, to make things easier for AJAX,\n # and possible for PUT/DELETE.\n second_token = request.headers.get(b'X-CSRF-TOKEN', b'').decode('ascii', 'replace')\n\n if not constant_time_compare(second_token, csrf_token):\n raise response.error(403, \"Bad CSRF cookie\")\n\n\ndef add_token_to_response(response, csrf_token=None):\n \"\"\"Store the latest CSRF token as a cookie.\n \"\"\"\n if csrf_token:\n # Don't set httponly so that we can POST using XHR.\n # https://github.com/gratipay/gratipay.com/issues/3030\n response.set_cookie(CSRF_TOKEN, csrf_token, expires=CSRF_TIMEOUT, httponly=False)\n", "path": "liberapay/security/csrf.py"}, {"content": 
"\"\"\"Defines website authentication helpers.\n\"\"\"\nimport binascii\n\nfrom six.moves.urllib.parse import urlencode\n\nfrom pando import Response\n\nfrom liberapay.constants import SESSION, SESSION_TIMEOUT\nfrom liberapay.exceptions import LoginRequired\nfrom liberapay.models.participant import Participant\n\n\nclass _ANON(object):\n ANON = True\n is_admin = False\n id = None\n __bool__ = __nonzero__ = lambda *a: False\n get_tip_to = lambda self, tippee: Participant._zero_tip_dict(tippee)\n __repr__ = lambda self: '<ANON>'\n\n\nANON = _ANON()\n\n\ndef _get_body(request):\n try:\n body = request.body\n except Response:\n return\n if not isinstance(body, dict):\n return\n return body\n\n\ndef sign_in_with_form_data(body, state):\n p = None\n _, website = state['_'], state['website']\n\n if body.get('log-in.id'):\n id = body.pop('log-in.id')\n password = body.pop('log-in.password', None)\n k = 'email' if '@' in id else 'username'\n if password:\n p = Participant.authenticate(\n k, 'password',\n id, password,\n )\n if not p:\n state['log-in.error'] = _(\"Bad username or password.\")\n elif k == 'username':\n state['log-in.error'] = _(\"\\\"{0}\\\" is not a valid email address.\", id)\n return\n else:\n email = id\n p = Participant._from_thing('email', email)\n if p:\n p.start_session()\n qs = {'log-in.id': p.id, 'log-in.token': p.session_token}\n p.send_email(\n 'login_link',\n email=email,\n link=p.url('settings/', qs),\n link_validity=SESSION_TIMEOUT,\n )\n state['log-in.email-sent-to'] = email\n else:\n state['log-in.error'] = _(\n \"We didn't find any account whose primary email address is {0}.\",\n email\n )\n p = None\n\n elif 'sign-in.email' in body:\n response = state['response']\n kind = body.pop('sign-in.kind')\n if kind not in ('individual', 'organization'):\n raise response.error(400, 'bad kind')\n email = body.pop('sign-in.email')\n if not email:\n raise response.error(400, 'email is required')\n with website.db.get_cursor() as c:\n p = Participant.make_active(\n kind, body.pop('sign-in.username', None),\n body.pop('sign-in.password', None), cursor=c,\n )\n p.set_email_lang(state['request'].headers.get(b'Accept-Language'), cursor=c)\n p.add_email(email, cursor=c)\n p.authenticated = True\n\n return p\n\n\ndef start_user_as_anon():\n \"\"\"Make sure we always have a user object, regardless of exceptions during authentication.\n \"\"\"\n return {'user': ANON}\n\n\ndef authenticate_user_if_possible(request, response, state, user, _):\n \"\"\"This signs the user in.\n \"\"\"\n if request.line.uri.startswith('/assets/'):\n return\n\n # HTTP auth\n if b'Authorization' in request.headers:\n header = request.headers[b'Authorization']\n if not header.startswith(b'Basic '):\n raise response.error(401, 'Unsupported authentication method')\n try:\n uid, pwd = binascii.a2b_base64(header[len('Basic '):]).decode('utf8').split(':', 1)\n except (binascii.Error, UnicodeDecodeError, ValueError):\n raise response.error(400, 'Malformed \"Authorization\" header')\n if not uid.isdigit():\n raise response.error(401, 'Invalid user id: expected an integer, got `%s`' % uid)\n participant = Participant.authenticate('id', 'password', uid, pwd)\n if not participant:\n raise response.error(401, 'Invalid credentials')\n return {'user': participant}\n\n # Cookie and form auth\n # We want to try cookie auth first, but we want form auth to supersede it\n p = None\n if SESSION in request.headers.cookie:\n creds = request.headers.cookie[SESSION].value.split(':', 1)\n p = Participant.authenticate('id', 'session', 
*creds)\n if p:\n state['user'] = p\n session_p, p = p, None\n session_suffix = ''\n redirect_url = request.line.uri\n if request.method == 'POST':\n body = _get_body(request)\n if body:\n p = sign_in_with_form_data(body, state)\n carry_on = body.pop('log-in.carry-on', None)\n if not p and carry_on:\n p_email = session_p and (\n session_p.email or session_p.get_emails()[0].address\n )\n if p_email != carry_on:\n state['log-in.carry-on'] = carry_on\n raise LoginRequired\n redirect_url = body.get('sign-in.back-to') or redirect_url\n elif request.method == 'GET' and request.qs.get('log-in.id'):\n id, token = request.qs.pop('log-in.id'), request.qs.pop('log-in.token')\n p = Participant.authenticate('id', 'session', id, token)\n if not p and (not session_p or session_p.id != id):\n raise response.error(400, _(\"This login link is expired or invalid.\"))\n else:\n qs = '?' + urlencode(request.qs, doseq=True) if request.qs else ''\n redirect_url = request.path.raw + qs\n session_p = p\n session_suffix = '.em'\n if p:\n if session_p:\n session_p.sign_out(response.headers.cookie)\n if p.status == 'closed':\n p.update_status('active')\n p.sign_in(response.headers.cookie, session_suffix)\n state['user'] = p\n if request.body.pop('form.repost', None) != 'true':\n response.redirect(redirect_url)\n\n\ndef add_auth_to_response(response, request=None, user=ANON):\n if request is None:\n return # early parsing must've failed\n if request.line.uri.startswith('/assets/'):\n return # assets never get auth headers\n\n if SESSION in request.headers.cookie:\n if not user.ANON:\n user.keep_signed_in(response.headers.cookie)\n", "path": "liberapay/security/authentication.py"}]} | 2,936 | 951 |
gh_patches_debug_7075 | rasdani/github-patches | git_diff | streamlink__streamlink-3619 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to use --player-fifo with MPV
Streamlink 2.0.0
mpv 0.33.0-76-g93066ff12f Copyright © 2000-2020 mpv/MPlayer/mplayer2 projects
```
streamlink --player-fifo https://www.twitch.tv/channelName best
[cli][info] Found matching plugin twitch for URL https://www.twitch.tv/channelName
[cli][info] Available streams: audio_only, 160p (worst), 360p, 480p, 720p, 720p60, 1080p60 (best)
[cli][info] Opening stream: 1080p60 (hls)
[cli][info] Creating pipe streamlinkpipe-1140
[cli][info] Starting player: "c:\Programs\MPV\mpv.exe"
error: Failed to start player: "c:\Programs\MPV\mpv.exe" (Process exited prematurely)
[cli][info] Closing currently open stream...
```
Playback works normally without `--player-fifo`.
--- END ISSUE ---
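The "Process exited prematurely" error suggests mpv rejects the bare Windows named-pipe path it is handed. Below is a minimal standalone sketch, not Streamlink's actual code, of the per-player prefixing that the patch further down this entry applies in `_create_arguments()`; the helper name and the concrete pipe path are illustrative (the log only shows the pipe name `streamlinkpipe-1140`).

```python
import sys

def player_pipe_url(path: str, player_name: str) -> str:
    # Mirrors the approach in the patch below: on Windows, hand VLC and mpv
    # a URL-style reference to the named pipe instead of the bare path.
    if sys.platform == "win32":
        if player_name == "vlc":
            return "stream://\\" + path
        if player_name == "mpv":
            return "file://" + path
    return path

# Illustrative Windows named-pipe path for the pipe created in the log above.
print(player_pipe_url(r"\\.\pipe\streamlinkpipe-1140", "mpv"))
# On Windows this prints: file://\\.\pipe\streamlinkpipe-1140
```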
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink_cli/output.py`
Content:
```
1 import logging
2 import os
3 import re
4 import shlex
5 import subprocess
6 import sys
7 from time import sleep
8
9 from streamlink_cli.compat import is_win32, stdout
10 from streamlink_cli.constants import PLAYER_ARGS_INPUT_DEFAULT, PLAYER_ARGS_INPUT_FALLBACK, SUPPORTED_PLAYERS
11 from streamlink_cli.utils import ignored
12
13 if is_win32:
14 import msvcrt
15
16 log = logging.getLogger("streamlink.cli.output")
17
18
19 class Output:
20 def __init__(self):
21 self.opened = False
22
23 def open(self):
24 self._open()
25 self.opened = True
26
27 def close(self):
28 if self.opened:
29 self._close()
30
31 self.opened = False
32
33 def write(self, data):
34 if not self.opened:
35 raise OSError("Output is not opened")
36
37 return self._write(data)
38
39 def _open(self):
40 pass
41
42 def _close(self):
43 pass
44
45 def _write(self, data):
46 pass
47
48
49 class FileOutput(Output):
50 def __init__(self, filename=None, fd=None, record=None):
51 super().__init__()
52 self.filename = filename
53 self.fd = fd
54 self.record = record
55
56 def _open(self):
57 if self.filename:
58 self.fd = open(self.filename, "wb")
59
60 if self.record:
61 self.record.open()
62
63 if is_win32:
64 msvcrt.setmode(self.fd.fileno(), os.O_BINARY)
65
66 def _close(self):
67 if self.fd is not stdout:
68 self.fd.close()
69 if self.record:
70 self.record.close()
71
72 def _write(self, data):
73 self.fd.write(data)
74 if self.record:
75 self.record.write(data)
76
77
78 class PlayerOutput(Output):
79 PLAYER_TERMINATE_TIMEOUT = 10.0
80
81 _re_player_args_input = re.compile("|".join(map(
82 lambda const: re.escape(f"{{{const}}}"),
83 [PLAYER_ARGS_INPUT_DEFAULT, PLAYER_ARGS_INPUT_FALLBACK]
84 )))
85
86 def __init__(self, cmd, args="", filename=None, quiet=True, kill=True,
87 call=False, http=None, namedpipe=None, record=None, title=None):
88 super().__init__()
89 self.cmd = cmd
90 self.args = args
91 self.kill = kill
92 self.call = call
93 self.quiet = quiet
94
95 self.filename = filename
96 self.namedpipe = namedpipe
97 self.http = http
98 self.title = title
99 self.player = None
100 self.player_name = self.supported_player(self.cmd)
101 self.record = record
102
103 if self.namedpipe or self.filename or self.http:
104 self.stdin = sys.stdin
105 else:
106 self.stdin = subprocess.PIPE
107
108 if self.quiet:
109 self.stdout = open(os.devnull, "w")
110 self.stderr = open(os.devnull, "w")
111 else:
112 self.stdout = sys.stdout
113 self.stderr = sys.stderr
114
115 if not self._re_player_args_input.search(self.args):
116 self.args += f"{' ' if self.args else ''}{{{PLAYER_ARGS_INPUT_DEFAULT}}}"
117
118 @property
119 def running(self):
120 sleep(0.5)
121 return self.player.poll() is None
122
123 @classmethod
124 def supported_player(cls, cmd):
125 """
126 Check if the current player supports adding a title
127
128 :param cmd: command to test
129 :return: name of the player|None
130 """
131 if not is_win32:
132 # under a POSIX system use shlex to find the actual command
133 # under windows this is not an issue because executables end in .exe
134 cmd = shlex.split(cmd)[0]
135
136 cmd = os.path.basename(cmd.lower())
137 for player, possiblecmds in SUPPORTED_PLAYERS.items():
138 for possiblecmd in possiblecmds:
139 if cmd.startswith(possiblecmd):
140 return player
141
142 @classmethod
143 def _mpv_title_escape(cls, title_string):
144 # mpv has a "disable property-expansion" token which must be handled
145 # in order to accurately represent $$ in title
146 if r'\$>' in title_string:
147 processed_title = ""
148 double_dollars = True
149 i = dollars = 0
150 while i < len(title_string):
151 if double_dollars:
152 if title_string[i] == "\\":
153 if title_string[i + 1] == "$":
154 processed_title += "$"
155 dollars += 1
156 i += 1
157 if title_string[i + 1] == ">" and dollars % 2 == 1:
158 double_dollars = False
159 processed_title += ">"
160 i += 1
161 else:
162 processed_title += "\\"
163 elif title_string[i] == "$":
164 processed_title += "$$"
165 else:
166 dollars = 0
167 processed_title += title_string[i]
168 else:
169 if title_string[i:i + 2] == "\\$":
170 processed_title += "$"
171 i += 1
172 else:
173 processed_title += title_string[i]
174 i += 1
175 return processed_title
176 else:
177 # not possible for property-expansion to be disabled, happy days
178 return title_string.replace("$", "$$").replace(r'\$$', "$")
179
180 def _create_arguments(self):
181 if self.namedpipe:
182 filename = self.namedpipe.path
183 elif self.filename:
184 filename = self.filename
185 elif self.http:
186 filename = self.http.url
187 else:
188 filename = "-"
189 extra_args = []
190
191 if self.title is not None:
192 # vlc
193 if self.player_name == "vlc":
194 # see https://wiki.videolan.org/Documentation:Format_String/, allow escaping with \$
195 self.title = self.title.replace("$", "$$").replace(r'\$$', "$")
196 extra_args.extend(["--input-title-format", self.title])
197
198 # mpv
199 if self.player_name == "mpv":
200 # see https://mpv.io/manual/stable/#property-expansion, allow escaping with \$, respect mpv's $>
201 self.title = self._mpv_title_escape(self.title)
202 extra_args.append(f"--force-media-title={self.title}")
203
204 # potplayer
205 if self.player_name == "potplayer":
206 if filename != "-":
207 # PotPlayer - About - Command Line
208 # You can specify titles for URLs by separating them with a backslash (\) at the end of URLs.
209 # eg. "http://...\title of this url"
210 self.title = self.title.replace('"', '')
211 filename = filename[:-1] + '\\' + self.title + filename[-1]
212
213 args = self.args.format(**{PLAYER_ARGS_INPUT_DEFAULT: filename, PLAYER_ARGS_INPUT_FALLBACK: filename})
214 cmd = self.cmd
215
216 # player command
217 if is_win32:
218 eargs = subprocess.list2cmdline(extra_args)
219 # do not insert and extra " " when there are no extra_args
220 return " ".join([cmd] + ([eargs] if eargs else []) + [args])
221 return shlex.split(cmd) + extra_args + shlex.split(args)
222
223 def _open(self):
224 try:
225 if self.record:
226 self.record.open()
227 if self.call and self.filename:
228 self._open_call()
229 else:
230 self._open_subprocess()
231 finally:
232 if self.quiet:
233 # Output streams no longer needed in parent process
234 self.stdout.close()
235 self.stderr.close()
236
237 def _open_call(self):
238 args = self._create_arguments()
239 if is_win32:
240 fargs = args
241 else:
242 fargs = subprocess.list2cmdline(args)
243 log.debug(f"Calling: {fargs}")
244
245 subprocess.call(args,
246 stdout=self.stdout,
247 stderr=self.stderr)
248
249 def _open_subprocess(self):
250 # Force bufsize=0 on all Python versions to avoid writing the
251 # unflushed buffer when closing a broken input pipe
252 args = self._create_arguments()
253 if is_win32:
254 fargs = args
255 else:
256 fargs = subprocess.list2cmdline(args)
257 log.debug(f"Opening subprocess: {fargs}")
258
259 self.player = subprocess.Popen(args,
260 stdin=self.stdin, bufsize=0,
261 stdout=self.stdout,
262 stderr=self.stderr)
263 # Wait 0.5 seconds to see if program exited prematurely
264 if not self.running:
265 raise OSError("Process exited prematurely")
266
267 if self.namedpipe:
268 self.namedpipe.open("wb")
269 elif self.http:
270 self.http.open()
271
272 def _close(self):
273 # Close input to the player first to signal the end of the
274 # stream and allow the player to terminate of its own accord
275 if self.namedpipe:
276 self.namedpipe.close()
277 elif self.http:
278 self.http.close()
279 elif not self.filename:
280 self.player.stdin.close()
281
282 if self.record:
283 self.record.close()
284
285 if self.kill:
286 with ignored(Exception):
287 self.player.terminate()
288 if not is_win32:
289 t, timeout = 0.0, self.PLAYER_TERMINATE_TIMEOUT
290 while self.player.poll() is None and t < timeout:
291 sleep(0.5)
292 t += 0.5
293
294 if not self.player.returncode:
295 self.player.kill()
296 self.player.wait()
297
298 def _write(self, data):
299 if self.record:
300 self.record.write(data)
301
302 if self.namedpipe:
303 self.namedpipe.write(data)
304 elif self.http:
305 self.http.write(data)
306 else:
307 self.player.stdin.write(data)
308
309
310 __all__ = ["PlayerOutput", "FileOutput"]
311
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink_cli/output.py b/src/streamlink_cli/output.py
--- a/src/streamlink_cli/output.py
+++ b/src/streamlink_cli/output.py
@@ -180,6 +180,11 @@
def _create_arguments(self):
if self.namedpipe:
filename = self.namedpipe.path
+ if is_win32:
+ if self.player_name == "vlc":
+ filename = f"stream://\\{filename}"
+ elif self.player_name == "mpv":
+ filename = f"file://{filename}"
elif self.filename:
filename = self.filename
elif self.http:
| {"golden_diff": "diff --git a/src/streamlink_cli/output.py b/src/streamlink_cli/output.py\n--- a/src/streamlink_cli/output.py\n+++ b/src/streamlink_cli/output.py\n@@ -180,6 +180,11 @@\n def _create_arguments(self):\n if self.namedpipe:\n filename = self.namedpipe.path\n+ if is_win32:\n+ if self.player_name == \"vlc\":\n+ filename = f\"stream://\\\\{filename}\"\n+ elif self.player_name == \"mpv\":\n+ filename = f\"file://{filename}\"\n elif self.filename:\n filename = self.filename\n elif self.http:\n", "issue": "Unable to use --player-fifo with MPV\nStreamlink 2.0.0\r\nmpv 0.33.0-76-g93066ff12f Copyright \u00a9 2000-2020 mpv/MPlayer/mplayer2 projects\r\n\r\n```\r\nstreamlink --player-fifo https://www.twitch.tv/channelName best\r\n[cli][info] Found matching plugin twitch for URL https://www.twitch.tv/channelName \r\n[cli][info] Available streams: audio_only, 160p (worst), 360p, 480p, 720p, 720p60, 1080p60 (best)\r\n[cli][info] Opening stream: 1080p60 (hls)\r\n[cli][info] Creating pipe streamlinkpipe-1140\r\n[cli][info] Starting player: \"c:\\Programs\\MPV\\mpv.exe\"\r\nerror: Failed to start player: \"c:\\Programs\\MPV\\mpv.exe\" (Process exited prematurely)\r\n[cli][info] Closing currently open stream...\r\n```\r\n\r\nWorks normally without FIFO\n", "before_files": [{"content": "import logging\nimport os\nimport re\nimport shlex\nimport subprocess\nimport sys\nfrom time import sleep\n\nfrom streamlink_cli.compat import is_win32, stdout\nfrom streamlink_cli.constants import PLAYER_ARGS_INPUT_DEFAULT, PLAYER_ARGS_INPUT_FALLBACK, SUPPORTED_PLAYERS\nfrom streamlink_cli.utils import ignored\n\nif is_win32:\n import msvcrt\n\nlog = logging.getLogger(\"streamlink.cli.output\")\n\n\nclass Output:\n def __init__(self):\n self.opened = False\n\n def open(self):\n self._open()\n self.opened = True\n\n def close(self):\n if self.opened:\n self._close()\n\n self.opened = False\n\n def write(self, data):\n if not self.opened:\n raise OSError(\"Output is not opened\")\n\n return self._write(data)\n\n def _open(self):\n pass\n\n def _close(self):\n pass\n\n def _write(self, data):\n pass\n\n\nclass FileOutput(Output):\n def __init__(self, filename=None, fd=None, record=None):\n super().__init__()\n self.filename = filename\n self.fd = fd\n self.record = record\n\n def _open(self):\n if self.filename:\n self.fd = open(self.filename, \"wb\")\n\n if self.record:\n self.record.open()\n\n if is_win32:\n msvcrt.setmode(self.fd.fileno(), os.O_BINARY)\n\n def _close(self):\n if self.fd is not stdout:\n self.fd.close()\n if self.record:\n self.record.close()\n\n def _write(self, data):\n self.fd.write(data)\n if self.record:\n self.record.write(data)\n\n\nclass PlayerOutput(Output):\n PLAYER_TERMINATE_TIMEOUT = 10.0\n\n _re_player_args_input = re.compile(\"|\".join(map(\n lambda const: re.escape(f\"{{{const}}}\"),\n [PLAYER_ARGS_INPUT_DEFAULT, PLAYER_ARGS_INPUT_FALLBACK]\n )))\n\n def __init__(self, cmd, args=\"\", filename=None, quiet=True, kill=True,\n call=False, http=None, namedpipe=None, record=None, title=None):\n super().__init__()\n self.cmd = cmd\n self.args = args\n self.kill = kill\n self.call = call\n self.quiet = quiet\n\n self.filename = filename\n self.namedpipe = namedpipe\n self.http = http\n self.title = title\n self.player = None\n self.player_name = self.supported_player(self.cmd)\n self.record = record\n\n if self.namedpipe or self.filename or self.http:\n self.stdin = sys.stdin\n else:\n self.stdin = subprocess.PIPE\n\n if self.quiet:\n self.stdout = open(os.devnull, \"w\")\n self.stderr = 
open(os.devnull, \"w\")\n else:\n self.stdout = sys.stdout\n self.stderr = sys.stderr\n\n if not self._re_player_args_input.search(self.args):\n self.args += f\"{' ' if self.args else ''}{{{PLAYER_ARGS_INPUT_DEFAULT}}}\"\n\n @property\n def running(self):\n sleep(0.5)\n return self.player.poll() is None\n\n @classmethod\n def supported_player(cls, cmd):\n \"\"\"\n Check if the current player supports adding a title\n\n :param cmd: command to test\n :return: name of the player|None\n \"\"\"\n if not is_win32:\n # under a POSIX system use shlex to find the actual command\n # under windows this is not an issue because executables end in .exe\n cmd = shlex.split(cmd)[0]\n\n cmd = os.path.basename(cmd.lower())\n for player, possiblecmds in SUPPORTED_PLAYERS.items():\n for possiblecmd in possiblecmds:\n if cmd.startswith(possiblecmd):\n return player\n\n @classmethod\n def _mpv_title_escape(cls, title_string):\n # mpv has a \"disable property-expansion\" token which must be handled\n # in order to accurately represent $$ in title\n if r'\\$>' in title_string:\n processed_title = \"\"\n double_dollars = True\n i = dollars = 0\n while i < len(title_string):\n if double_dollars:\n if title_string[i] == \"\\\\\":\n if title_string[i + 1] == \"$\":\n processed_title += \"$\"\n dollars += 1\n i += 1\n if title_string[i + 1] == \">\" and dollars % 2 == 1:\n double_dollars = False\n processed_title += \">\"\n i += 1\n else:\n processed_title += \"\\\\\"\n elif title_string[i] == \"$\":\n processed_title += \"$$\"\n else:\n dollars = 0\n processed_title += title_string[i]\n else:\n if title_string[i:i + 2] == \"\\\\$\":\n processed_title += \"$\"\n i += 1\n else:\n processed_title += title_string[i]\n i += 1\n return processed_title\n else:\n # not possible for property-expansion to be disabled, happy days\n return title_string.replace(\"$\", \"$$\").replace(r'\\$$', \"$\")\n\n def _create_arguments(self):\n if self.namedpipe:\n filename = self.namedpipe.path\n elif self.filename:\n filename = self.filename\n elif self.http:\n filename = self.http.url\n else:\n filename = \"-\"\n extra_args = []\n\n if self.title is not None:\n # vlc\n if self.player_name == \"vlc\":\n # see https://wiki.videolan.org/Documentation:Format_String/, allow escaping with \\$\n self.title = self.title.replace(\"$\", \"$$\").replace(r'\\$$', \"$\")\n extra_args.extend([\"--input-title-format\", self.title])\n\n # mpv\n if self.player_name == \"mpv\":\n # see https://mpv.io/manual/stable/#property-expansion, allow escaping with \\$, respect mpv's $>\n self.title = self._mpv_title_escape(self.title)\n extra_args.append(f\"--force-media-title={self.title}\")\n\n # potplayer\n if self.player_name == \"potplayer\":\n if filename != \"-\":\n # PotPlayer - About - Command Line\n # You can specify titles for URLs by separating them with a backslash (\\) at the end of URLs.\n # eg. 
\"http://...\\title of this url\"\n self.title = self.title.replace('\"', '')\n filename = filename[:-1] + '\\\\' + self.title + filename[-1]\n\n args = self.args.format(**{PLAYER_ARGS_INPUT_DEFAULT: filename, PLAYER_ARGS_INPUT_FALLBACK: filename})\n cmd = self.cmd\n\n # player command\n if is_win32:\n eargs = subprocess.list2cmdline(extra_args)\n # do not insert and extra \" \" when there are no extra_args\n return \" \".join([cmd] + ([eargs] if eargs else []) + [args])\n return shlex.split(cmd) + extra_args + shlex.split(args)\n\n def _open(self):\n try:\n if self.record:\n self.record.open()\n if self.call and self.filename:\n self._open_call()\n else:\n self._open_subprocess()\n finally:\n if self.quiet:\n # Output streams no longer needed in parent process\n self.stdout.close()\n self.stderr.close()\n\n def _open_call(self):\n args = self._create_arguments()\n if is_win32:\n fargs = args\n else:\n fargs = subprocess.list2cmdline(args)\n log.debug(f\"Calling: {fargs}\")\n\n subprocess.call(args,\n stdout=self.stdout,\n stderr=self.stderr)\n\n def _open_subprocess(self):\n # Force bufsize=0 on all Python versions to avoid writing the\n # unflushed buffer when closing a broken input pipe\n args = self._create_arguments()\n if is_win32:\n fargs = args\n else:\n fargs = subprocess.list2cmdline(args)\n log.debug(f\"Opening subprocess: {fargs}\")\n\n self.player = subprocess.Popen(args,\n stdin=self.stdin, bufsize=0,\n stdout=self.stdout,\n stderr=self.stderr)\n # Wait 0.5 seconds to see if program exited prematurely\n if not self.running:\n raise OSError(\"Process exited prematurely\")\n\n if self.namedpipe:\n self.namedpipe.open(\"wb\")\n elif self.http:\n self.http.open()\n\n def _close(self):\n # Close input to the player first to signal the end of the\n # stream and allow the player to terminate of its own accord\n if self.namedpipe:\n self.namedpipe.close()\n elif self.http:\n self.http.close()\n elif not self.filename:\n self.player.stdin.close()\n\n if self.record:\n self.record.close()\n\n if self.kill:\n with ignored(Exception):\n self.player.terminate()\n if not is_win32:\n t, timeout = 0.0, self.PLAYER_TERMINATE_TIMEOUT\n while self.player.poll() is None and t < timeout:\n sleep(0.5)\n t += 0.5\n\n if not self.player.returncode:\n self.player.kill()\n self.player.wait()\n\n def _write(self, data):\n if self.record:\n self.record.write(data)\n\n if self.namedpipe:\n self.namedpipe.write(data)\n elif self.http:\n self.http.write(data)\n else:\n self.player.stdin.write(data)\n\n\n__all__ = [\"PlayerOutput\", \"FileOutput\"]\n", "path": "src/streamlink_cli/output.py"}], "after_files": [{"content": "import logging\nimport os\nimport re\nimport shlex\nimport subprocess\nimport sys\nfrom time import sleep\n\nfrom streamlink_cli.compat import is_win32, stdout\nfrom streamlink_cli.constants import PLAYER_ARGS_INPUT_DEFAULT, PLAYER_ARGS_INPUT_FALLBACK, SUPPORTED_PLAYERS\nfrom streamlink_cli.utils import ignored\n\nif is_win32:\n import msvcrt\n\nlog = logging.getLogger(\"streamlink.cli.output\")\n\n\nclass Output:\n def __init__(self):\n self.opened = False\n\n def open(self):\n self._open()\n self.opened = True\n\n def close(self):\n if self.opened:\n self._close()\n\n self.opened = False\n\n def write(self, data):\n if not self.opened:\n raise OSError(\"Output is not opened\")\n\n return self._write(data)\n\n def _open(self):\n pass\n\n def _close(self):\n pass\n\n def _write(self, data):\n pass\n\n\nclass FileOutput(Output):\n def __init__(self, filename=None, fd=None, record=None):\n 
super().__init__()\n self.filename = filename\n self.fd = fd\n self.record = record\n\n def _open(self):\n if self.filename:\n self.fd = open(self.filename, \"wb\")\n\n if self.record:\n self.record.open()\n\n if is_win32:\n msvcrt.setmode(self.fd.fileno(), os.O_BINARY)\n\n def _close(self):\n if self.fd is not stdout:\n self.fd.close()\n if self.record:\n self.record.close()\n\n def _write(self, data):\n self.fd.write(data)\n if self.record:\n self.record.write(data)\n\n\nclass PlayerOutput(Output):\n PLAYER_TERMINATE_TIMEOUT = 10.0\n\n _re_player_args_input = re.compile(\"|\".join(map(\n lambda const: re.escape(f\"{{{const}}}\"),\n [PLAYER_ARGS_INPUT_DEFAULT, PLAYER_ARGS_INPUT_FALLBACK]\n )))\n\n def __init__(self, cmd, args=\"\", filename=None, quiet=True, kill=True,\n call=False, http=None, namedpipe=None, record=None, title=None):\n super().__init__()\n self.cmd = cmd\n self.args = args\n self.kill = kill\n self.call = call\n self.quiet = quiet\n\n self.filename = filename\n self.namedpipe = namedpipe\n self.http = http\n self.title = title\n self.player = None\n self.player_name = self.supported_player(self.cmd)\n self.record = record\n\n if self.namedpipe or self.filename or self.http:\n self.stdin = sys.stdin\n else:\n self.stdin = subprocess.PIPE\n\n if self.quiet:\n self.stdout = open(os.devnull, \"w\")\n self.stderr = open(os.devnull, \"w\")\n else:\n self.stdout = sys.stdout\n self.stderr = sys.stderr\n\n if not self._re_player_args_input.search(self.args):\n self.args += f\"{' ' if self.args else ''}{{{PLAYER_ARGS_INPUT_DEFAULT}}}\"\n\n @property\n def running(self):\n sleep(0.5)\n return self.player.poll() is None\n\n @classmethod\n def supported_player(cls, cmd):\n \"\"\"\n Check if the current player supports adding a title\n\n :param cmd: command to test\n :return: name of the player|None\n \"\"\"\n if not is_win32:\n # under a POSIX system use shlex to find the actual command\n # under windows this is not an issue because executables end in .exe\n cmd = shlex.split(cmd)[0]\n\n cmd = os.path.basename(cmd.lower())\n for player, possiblecmds in SUPPORTED_PLAYERS.items():\n for possiblecmd in possiblecmds:\n if cmd.startswith(possiblecmd):\n return player\n\n @classmethod\n def _mpv_title_escape(cls, title_string):\n # mpv has a \"disable property-expansion\" token which must be handled\n # in order to accurately represent $$ in title\n if r'\\$>' in title_string:\n processed_title = \"\"\n double_dollars = True\n i = dollars = 0\n while i < len(title_string):\n if double_dollars:\n if title_string[i] == \"\\\\\":\n if title_string[i + 1] == \"$\":\n processed_title += \"$\"\n dollars += 1\n i += 1\n if title_string[i + 1] == \">\" and dollars % 2 == 1:\n double_dollars = False\n processed_title += \">\"\n i += 1\n else:\n processed_title += \"\\\\\"\n elif title_string[i] == \"$\":\n processed_title += \"$$\"\n else:\n dollars = 0\n processed_title += title_string[i]\n else:\n if title_string[i:i + 2] == \"\\\\$\":\n processed_title += \"$\"\n i += 1\n else:\n processed_title += title_string[i]\n i += 1\n return processed_title\n else:\n # not possible for property-expansion to be disabled, happy days\n return title_string.replace(\"$\", \"$$\").replace(r'\\$$', \"$\")\n\n def _create_arguments(self):\n if self.namedpipe:\n filename = self.namedpipe.path\n if is_win32:\n if self.player_name == \"vlc\":\n filename = f\"stream://\\\\{filename}\"\n elif self.player_name == \"mpv\":\n filename = f\"file://{filename}\"\n elif self.filename:\n filename = self.filename\n elif 
self.http:\n filename = self.http.url\n else:\n filename = \"-\"\n extra_args = []\n\n if self.title is not None:\n # vlc\n if self.player_name == \"vlc\":\n # see https://wiki.videolan.org/Documentation:Format_String/, allow escaping with \\$\n self.title = self.title.replace(\"$\", \"$$\").replace(r'\\$$', \"$\")\n extra_args.extend([\"--input-title-format\", self.title])\n\n # mpv\n if self.player_name == \"mpv\":\n # see https://mpv.io/manual/stable/#property-expansion, allow escaping with \\$, respect mpv's $>\n self.title = self._mpv_title_escape(self.title)\n extra_args.append(f\"--force-media-title={self.title}\")\n\n # potplayer\n if self.player_name == \"potplayer\":\n if filename != \"-\":\n # PotPlayer - About - Command Line\n # You can specify titles for URLs by separating them with a backslash (\\) at the end of URLs.\n # eg. \"http://...\\title of this url\"\n self.title = self.title.replace('\"', '')\n filename = filename[:-1] + '\\\\' + self.title + filename[-1]\n\n args = self.args.format(**{PLAYER_ARGS_INPUT_DEFAULT: filename, PLAYER_ARGS_INPUT_FALLBACK: filename})\n cmd = self.cmd\n\n # player command\n if is_win32:\n eargs = subprocess.list2cmdline(extra_args)\n # do not insert and extra \" \" when there are no extra_args\n return \" \".join([cmd] + ([eargs] if eargs else []) + [args])\n return shlex.split(cmd) + extra_args + shlex.split(args)\n\n def _open(self):\n try:\n if self.record:\n self.record.open()\n if self.call and self.filename:\n self._open_call()\n else:\n self._open_subprocess()\n finally:\n if self.quiet:\n # Output streams no longer needed in parent process\n self.stdout.close()\n self.stderr.close()\n\n def _open_call(self):\n args = self._create_arguments()\n if is_win32:\n fargs = args\n else:\n fargs = subprocess.list2cmdline(args)\n log.debug(f\"Calling: {fargs}\")\n\n subprocess.call(args,\n stdout=self.stdout,\n stderr=self.stderr)\n\n def _open_subprocess(self):\n # Force bufsize=0 on all Python versions to avoid writing the\n # unflushed buffer when closing a broken input pipe\n args = self._create_arguments()\n if is_win32:\n fargs = args\n else:\n fargs = subprocess.list2cmdline(args)\n log.debug(f\"Opening subprocess: {fargs}\")\n\n self.player = subprocess.Popen(args,\n stdin=self.stdin, bufsize=0,\n stdout=self.stdout,\n stderr=self.stderr)\n # Wait 0.5 seconds to see if program exited prematurely\n if not self.running:\n raise OSError(\"Process exited prematurely\")\n\n if self.namedpipe:\n self.namedpipe.open(\"wb\")\n elif self.http:\n self.http.open()\n\n def _close(self):\n # Close input to the player first to signal the end of the\n # stream and allow the player to terminate of its own accord\n if self.namedpipe:\n self.namedpipe.close()\n elif self.http:\n self.http.close()\n elif not self.filename:\n self.player.stdin.close()\n\n if self.record:\n self.record.close()\n\n if self.kill:\n with ignored(Exception):\n self.player.terminate()\n if not is_win32:\n t, timeout = 0.0, self.PLAYER_TERMINATE_TIMEOUT\n while self.player.poll() is None and t < timeout:\n sleep(0.5)\n t += 0.5\n\n if not self.player.returncode:\n self.player.kill()\n self.player.wait()\n\n def _write(self, data):\n if self.record:\n self.record.write(data)\n\n if self.namedpipe:\n self.namedpipe.write(data)\n elif self.http:\n self.http.write(data)\n else:\n self.player.stdin.write(data)\n\n\n__all__ = [\"PlayerOutput\", \"FileOutput\"]\n", "path": "src/streamlink_cli/output.py"}]} | 3,438 | 139 |
gh_patches_debug_17762 | rasdani/github-patches | git_diff | pytorch__TensorRT-2505 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`aten.arange.start_step`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `py/torch_tensorrt/dynamo/conversion/ops_evaluators.py`
Content:
```
1 import logging
2 import operator
3 from typing import Dict, Sequence, Tuple, Union
4
5 import torch
6 from torch.fx.node import Argument, Node, Target
7 from torch_tensorrt.dynamo.conversion._ConversionContext import ConversionContext
8 from torch_tensorrt.dynamo.conversion._ConverterRegistry import (
9 ConverterRegistry,
10 dynamo_tensorrt_converter,
11 )
12 from torch_tensorrt.fx.types import TRTTensor
13
14 _LOGGER: logging.Logger = logging.getLogger(__name__)
15
16
17 def getitem_validator(getitem_node: Node) -> bool:
18 from torch_tensorrt.dynamo.conversion._ConverterRegistry import DYNAMO_CONVERTERS
19
20 # Getitem nodes can only be converted if their parent node also can
21 return getitem_node.args[0] in DYNAMO_CONVERTERS
22
23
24 # TODO: Subsequent evaluators should be registered here with their own validators
25 @dynamo_tensorrt_converter(operator.getitem, capability_validator=getitem_validator)
26 @dynamo_tensorrt_converter(torch.ops.aten.detach.default)
27 def generic_evaluator(
28 ctx: ConversionContext,
29 target: Target,
30 args: Tuple[Argument, ...],
31 kwargs: Dict[str, Argument],
32 name: str,
33 ) -> Union[TRTTensor, Sequence[TRTTensor]]:
34 _LOGGER.debug(
35 f"Evaluating {ConverterRegistry.qualified_name_or_str(target)} on object with name: {name}"
36 )
37 return target(*args)
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/py/torch_tensorrt/dynamo/conversion/ops_evaluators.py b/py/torch_tensorrt/dynamo/conversion/ops_evaluators.py
--- a/py/torch_tensorrt/dynamo/conversion/ops_evaluators.py
+++ b/py/torch_tensorrt/dynamo/conversion/ops_evaluators.py
@@ -2,6 +2,7 @@
import operator
from typing import Dict, Sequence, Tuple, Union
+import numpy as np
import torch
from torch.fx.node import Argument, Node, Target
from torch_tensorrt.dynamo.conversion._ConversionContext import ConversionContext
@@ -35,3 +36,14 @@
f"Evaluating {ConverterRegistry.qualified_name_or_str(target)} on object with name: {name}"
)
return target(*args)
+
+
+@dynamo_tensorrt_converter(torch.ops.aten.arange.start_step)
+def aten_ops_arange_start_step(
+ ctx: ConversionContext,
+ target: Target,
+ args: Tuple[Argument, ...],
+ kwargs: Dict[str, Argument],
+ name: str,
+) -> Union[TRTTensor, Sequence[TRTTensor]]:
+ return np.arange(*args)
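For reference, the evaluator added above appears to rely on `np.arange` taking the same positional `(start, end, step)` arguments that `aten.arange.start_step` receives, so the op can be folded to a constant at conversion time. A minimal sanity check of that assumption (illustrative only, not part of the patch):
```python
import numpy as np

# aten.arange.start_step(start, end, step) is assumed to map onto np.arange(start, end, step)
print(np.arange(2, 10, 2))  # [2 4 6 8]
```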
| {"golden_diff": "diff --git a/py/torch_tensorrt/dynamo/conversion/ops_evaluators.py b/py/torch_tensorrt/dynamo/conversion/ops_evaluators.py\n--- a/py/torch_tensorrt/dynamo/conversion/ops_evaluators.py\n+++ b/py/torch_tensorrt/dynamo/conversion/ops_evaluators.py\n@@ -2,6 +2,7 @@\n import operator\n from typing import Dict, Sequence, Tuple, Union\n \n+import numpy as np\n import torch\n from torch.fx.node import Argument, Node, Target\n from torch_tensorrt.dynamo.conversion._ConversionContext import ConversionContext\n@@ -35,3 +36,14 @@\n f\"Evaluating {ConverterRegistry.qualified_name_or_str(target)} on object with name: {name}\"\n )\n return target(*args)\n+\n+\n+@dynamo_tensorrt_converter(torch.ops.aten.arange.start_step)\n+def aten_ops_arange_start_step(\n+ ctx: ConversionContext,\n+ target: Target,\n+ args: Tuple[Argument, ...],\n+ kwargs: Dict[str, Argument],\n+ name: str,\n+) -> Union[TRTTensor, Sequence[TRTTensor]]:\n+ return np.arange(*args)\n", "issue": "`aten.arange.start_step`\n\n", "before_files": [{"content": "import logging\nimport operator\nfrom typing import Dict, Sequence, Tuple, Union\n\nimport torch\nfrom torch.fx.node import Argument, Node, Target\nfrom torch_tensorrt.dynamo.conversion._ConversionContext import ConversionContext\nfrom torch_tensorrt.dynamo.conversion._ConverterRegistry import (\n ConverterRegistry,\n dynamo_tensorrt_converter,\n)\nfrom torch_tensorrt.fx.types import TRTTensor\n\n_LOGGER: logging.Logger = logging.getLogger(__name__)\n\n\ndef getitem_validator(getitem_node: Node) -> bool:\n from torch_tensorrt.dynamo.conversion._ConverterRegistry import DYNAMO_CONVERTERS\n\n # Getitem nodes can only be converted if their parent node also can\n return getitem_node.args[0] in DYNAMO_CONVERTERS\n\n\n# TODO: Subsequent evaluators should be registered here with their own validators\n@dynamo_tensorrt_converter(operator.getitem, capability_validator=getitem_validator)\n@dynamo_tensorrt_converter(torch.ops.aten.detach.default)\ndef generic_evaluator(\n ctx: ConversionContext,\n target: Target,\n args: Tuple[Argument, ...],\n kwargs: Dict[str, Argument],\n name: str,\n) -> Union[TRTTensor, Sequence[TRTTensor]]:\n _LOGGER.debug(\n f\"Evaluating {ConverterRegistry.qualified_name_or_str(target)} on object with name: {name}\"\n )\n return target(*args)\n", "path": "py/torch_tensorrt/dynamo/conversion/ops_evaluators.py"}], "after_files": [{"content": "import logging\nimport operator\nfrom typing import Dict, Sequence, Tuple, Union\n\nimport numpy as np\nimport torch\nfrom torch.fx.node import Argument, Node, Target\nfrom torch_tensorrt.dynamo.conversion._ConversionContext import ConversionContext\nfrom torch_tensorrt.dynamo.conversion._ConverterRegistry import (\n ConverterRegistry,\n dynamo_tensorrt_converter,\n)\nfrom torch_tensorrt.fx.types import TRTTensor\n\n_LOGGER: logging.Logger = logging.getLogger(__name__)\n\n\ndef getitem_validator(getitem_node: Node) -> bool:\n from torch_tensorrt.dynamo.conversion._ConverterRegistry import DYNAMO_CONVERTERS\n\n # Getitem nodes can only be converted if their parent node also can\n return getitem_node.args[0] in DYNAMO_CONVERTERS\n\n\n# TODO: Subsequent evaluators should be registered here with their own validators\n@dynamo_tensorrt_converter(operator.getitem, capability_validator=getitem_validator)\n@dynamo_tensorrt_converter(torch.ops.aten.detach.default)\ndef generic_evaluator(\n ctx: ConversionContext,\n target: Target,\n args: Tuple[Argument, ...],\n kwargs: Dict[str, Argument],\n name: str,\n) -> Union[TRTTensor, 
Sequence[TRTTensor]]:\n _LOGGER.debug(\n f\"Evaluating {ConverterRegistry.qualified_name_or_str(target)} on object with name: {name}\"\n )\n return target(*args)\n\n\n@dynamo_tensorrt_converter(torch.ops.aten.arange.start_step)\ndef aten_ops_arange_start_step(\n ctx: ConversionContext,\n target: Target,\n args: Tuple[Argument, ...],\n kwargs: Dict[str, Argument],\n name: str,\n) -> Union[TRTTensor, Sequence[TRTTensor]]:\n return np.arange(*args)\n", "path": "py/torch_tensorrt/dynamo/conversion/ops_evaluators.py"}]} | 640 | 262 |
gh_patches_debug_39746 | rasdani/github-patches | git_diff | unionai-oss__pandera-960 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
mypy: pa.typing.Series does not support pd.DatetimeTZDtype as a type argument
**Describe the bug**
Under mypy, `pa.typing.Series` does not support `pd.DatetimeTZDtype` as a valid type argument.
The bug occurs with and without the `pandera.mypy` plugin.
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandera.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandera.
#### Code Sample, a copy-pastable example
```python
from typing import Annotated
import pandas as pd
import pandera as pa
from pandera.typing import Series
class MySchema(pa.SchemaModel):
# both lines taken from the pandera documentation
a: Series[Annotated[pd.DatetimeTZDtype, "ns", "UTC"]]
b: Series[pd.DatetimeTZDtype] = pa.Field(dtype_kwargs={"unit": "ns", "tz": "UTC"})
```
#### Expected behavior
mypy should show no errors
#### Actual behaviour
mypy produces the following errors
```
test.py:8: error: Value of type variable "GenericDtype" of "Series" cannot be "DatetimeTZDtype"
test.py:9: error: Value of type variable "GenericDtype" of "Series" cannot be "DatetimeTZDtype"
```
#### Versions
```
pandas==1.5.0
pandas-stubs==1.4.3.220807
pandera==0.13.3
mypy==0.982
mypy-extensions==0.4.3
```
--- END ISSUE ---
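The errors above are the expected behaviour of a value-constrained `TypeVar`: an explicit type argument has to be exactly one of the listed constraints, so a subclass such as `pd.DatetimeTZDtype` (which derives from `ExtensionDtype`) is rejected. A minimal sketch, independent of pandera and using made-up class names, that reproduces the same class of error:
```python
from typing import Generic, TypeVar

class Base: ...
class Sub(Base): ...

# Value-constrained: the explicit argument must be exactly `int` or `Base`.
Constrained = TypeVar("Constrained", int, Base)
# Bound: any subtype of `Base` is accepted.
Bounded = TypeVar("Bounded", bound=Base)

class Box(Generic[Constrained]): ...
class Bag(Generic[Bounded]): ...

x: Box[Sub]  # mypy: Value of type variable "Constrained" of "Box" cannot be "Sub"
y: Bag[Sub]  # OK
```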
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pandera/typing/common.py`
Content:
```
1 """Common typing functionality."""
2 # pylint:disable=abstract-method,disable=too-many-ancestors
3
4 import inspect
5 from typing import TYPE_CHECKING, Any, Generic, Optional, Type, TypeVar
6
7 import pandas as pd
8 import typing_inspect
9
10 from .. import dtypes
11 from ..engines import numpy_engine, pandas_engine
12
13 Bool = dtypes.Bool #: ``"bool"`` numpy dtype
14 Date = dtypes.Date #: ``datetime.date`` object dtype
15 DateTime = dtypes.DateTime #: ``"datetime64[ns]"`` numpy dtype
16 Decimal = dtypes.Decimal #: ``decimal.Decimal`` object dtype
17 Timedelta = dtypes.Timedelta #: ``"timedelta64[ns]"`` numpy dtype
18 Category = dtypes.Category #: pandas ``"categorical"`` datatype
19 Float = dtypes.Float #: ``"float"`` numpy dtype
20 Float16 = dtypes.Float16 #: ``"float16"`` numpy dtype
21 Float32 = dtypes.Float32 #: ``"float32"`` numpy dtype
22 Float64 = dtypes.Float64 #: ``"float64"`` numpy dtype
23 Int = dtypes.Int #: ``"int"`` numpy dtype
24 Int8 = dtypes.Int8 #: ``"int8"`` numpy dtype
25 Int16 = dtypes.Int16 #: ``"int16"`` numpy dtype
26 Int32 = dtypes.Int32 #: ``"int32"`` numpy dtype
27 Int64 = dtypes.Int64 #: ``"int64"`` numpy dtype
28 UInt8 = dtypes.UInt8 #: ``"uint8"`` numpy dtype
29 UInt16 = dtypes.UInt16 #: ``"uint16"`` numpy dtype
30 UInt32 = dtypes.UInt32 #: ``"uint32"`` numpy dtype
31 UInt64 = dtypes.UInt64 #: ``"uint64"`` numpy dtype
32 INT8 = pandas_engine.INT8 #: ``"Int8"`` pandas dtype:: pandas 0.24.0+
33 INT16 = pandas_engine.INT16 #: ``"Int16"`` pandas dtype: pandas 0.24.0+
34 INT32 = pandas_engine.INT32 #: ``"Int32"`` pandas dtype: pandas 0.24.0+
35 INT64 = pandas_engine.INT64 #: ``"Int64"`` pandas dtype: pandas 0.24.0+
36 UINT8 = pandas_engine.UINT8 #: ``"UInt8"`` pandas dtype:: pandas 0.24.0+
37 UINT16 = pandas_engine.UINT16 #: ``"UInt16"`` pandas dtype: pandas 0.24.0+
38 UINT32 = pandas_engine.UINT32 #: ``"UInt32"`` pandas dtype: pandas 0.24.0+
39 UINT64 = pandas_engine.UINT64 #: ``"UInt64"`` pandas dtype: pandas 0.24.0+
40 Object = numpy_engine.Object #: ``"object"`` numpy dtype
41 String = dtypes.String #: ``"str"`` numpy dtype
42 #: ``"string"`` pandas dtypes: pandas 1.0.0+. For <1.0.0, this enum will
43 #: fall back on the str-as-object-array representation.
44 STRING = pandas_engine.STRING #: ``"str"`` numpy dtype
45 BOOL = pandas_engine.BOOL #: ``"str"`` numpy dtype
46
47 try:
48 Geometry = pandas_engine.Geometry # : ``"geometry"`` geopandas dtype
49 GEOPANDAS_INSTALLED = True
50 except AttributeError:
51 GEOPANDAS_INSTALLED = False
52
53 if GEOPANDAS_INSTALLED:
54 GenericDtype = TypeVar( # type: ignore
55 "GenericDtype",
56 bool,
57 int,
58 str,
59 float,
60 pd.core.dtypes.base.ExtensionDtype,
61 Bool,
62 Date,
63 DateTime,
64 Decimal,
65 Timedelta,
66 Category,
67 Float,
68 Float16,
69 Float32,
70 Float64,
71 Int,
72 Int8,
73 Int16,
74 Int32,
75 Int64,
76 UInt8,
77 UInt16,
78 UInt32,
79 UInt64,
80 INT8,
81 INT16,
82 INT32,
83 INT64,
84 UINT8,
85 UINT16,
86 UINT32,
87 UINT64,
88 Object,
89 String,
90 STRING,
91 Geometry,
92 covariant=True,
93 )
94 else:
95 GenericDtype = TypeVar( # type: ignore
96 "GenericDtype",
97 bool,
98 int,
99 str,
100 float,
101 pd.core.dtypes.base.ExtensionDtype,
102 Bool,
103 Date,
104 DateTime,
105 Decimal,
106 Timedelta,
107 Category,
108 Float,
109 Float16,
110 Float32,
111 Float64,
112 Int,
113 Int8,
114 Int16,
115 Int32,
116 Int64,
117 UInt8,
118 UInt16,
119 UInt32,
120 UInt64,
121 INT8,
122 INT16,
123 INT32,
124 INT64,
125 UINT8,
126 UINT16,
127 UINT32,
128 UINT64,
129 Object,
130 String,
131 STRING,
132 covariant=True,
133 )
134
135 Schema = TypeVar("Schema", bound="SchemaModel") # type: ignore
136
137
138 # pylint:disable=invalid-name
139 if TYPE_CHECKING:
140 T = TypeVar("T") # pragma: no cover
141 else:
142 T = Schema
143
144
145 class DataFrameBase(Generic[T]):
146 # pylint: disable=too-few-public-methods
147 """
148 Pandera Dataframe base class for validating dataframes on
149 initialization.
150 """
151
152 default_dtype: Optional[Type] = None
153
154 def __setattr__(self, name: str, value: Any) -> None:
155 # pylint: disable=no-member
156 object.__setattr__(self, name, value)
157 if name == "__orig_class__":
158 orig_class = getattr(self, "__orig_class__")
159 class_args = getattr(orig_class, "__args__", None)
160 if class_args is not None and any(
161 x.__name__ == "SchemaModel"
162 for x in inspect.getmro(class_args[0])
163 ):
164 schema_model = value.__args__[0]
165
166 # prevent the double validation problem by preventing checks for
167 # dataframes with a defined pandera.schema
168 pandera_accessor = getattr(self, "pandera")
169 if (
170 pandera_accessor.schema is None
171 or pandera_accessor.schema != schema_model.to_schema()
172 ):
173 pandera_accessor.add_schema(schema_model.to_schema())
174 self.__dict__ = schema_model.validate(self).__dict__
175
176
177 # pylint:disable=too-few-public-methods
178 class SeriesBase(Generic[GenericDtype]):
179 """Pandera Series base class to use for all pandas-like APIs."""
180
181 default_dtype: Optional[Type] = None
182
183 def __get__(
184 self, instance: object, owner: Type
185 ) -> str: # pragma: no cover
186 raise AttributeError("Series should resolve to Field-s")
187
188
189 # pylint:disable=too-few-public-methods
190 class IndexBase(Generic[GenericDtype]):
191 """Representation of pandas.Index, only used for type annotation.
192
193 *new in 0.5.0*
194 """
195
196 default_dtype: Optional[Type] = None
197
198 def __get__(
199 self, instance: object, owner: Type
200 ) -> str: # pragma: no cover
201 raise AttributeError("Indexes should resolve to pa.Index-s")
202
203
204 class AnnotationInfo: # pylint:disable=too-few-public-methods
205 """Captures extra information about an annotation.
206
207 Attributes:
208 origin: The non-parameterized generic class.
209 arg: The first generic type (SchemaModel does not support more than
210 1 argument).
211 literal: Whether the annotation is a literal.
212 optional: Whether the annotation is optional.
213 raw_annotation: The raw annotation.
214 metadata: Extra arguments passed to :data:`typing.Annotated`.
215 """
216
217 def __init__(self, raw_annotation: Type) -> None:
218 self._parse_annotation(raw_annotation)
219
220 @property
221 def is_generic_df(self) -> bool:
222 """True if the annotation is a DataFrameBase subclass."""
223 try:
224 if self.origin is None:
225 return False
226 return issubclass(self.origin, DataFrameBase)
227 except TypeError:
228 return False
229
230 def _parse_annotation(self, raw_annotation: Type) -> None:
231 """Parse key information from annotation.
232
233 :param annotation: A subscripted type.
234 :returns: Annotation
235 """
236 self.raw_annotation = raw_annotation
237 self.origin = self.arg = None
238
239 self.optional = typing_inspect.is_optional_type(raw_annotation)
240 if self.optional and typing_inspect.is_union_type(raw_annotation):
241 # Annotated with Optional or Union[..., NoneType]
242 # get_args -> (pandera.typing.Index[str], <class 'NoneType'>)
243 raw_annotation = typing_inspect.get_args(raw_annotation)[0]
244
245 self.origin = typing_inspect.get_origin(raw_annotation)
246 # Replace empty tuple returned from get_args by None
247 args = typing_inspect.get_args(raw_annotation) or None
248 self.arg = args[0] if args else args
249
250 self.metadata = getattr(self.arg, "__metadata__", None)
251 if self.metadata:
252 self.arg = typing_inspect.get_args(self.arg)[0]
253
254 self.literal = typing_inspect.is_literal_type(self.arg)
255 if self.literal:
256 self.arg = typing_inspect.get_args(self.arg)[0]
257
258 self.default_dtype = getattr(raw_annotation, "default_dtype", None)
259
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pandera/typing/common.py b/pandera/typing/common.py
--- a/pandera/typing/common.py
+++ b/pandera/typing/common.py
@@ -2,7 +2,7 @@
# pylint:disable=abstract-method,disable=too-many-ancestors
import inspect
-from typing import TYPE_CHECKING, Any, Generic, Optional, Type, TypeVar
+from typing import TYPE_CHECKING, Any, Generic, Optional, Type, TypeVar, Union
import pandas as pd
import typing_inspect
@@ -53,83 +53,85 @@
if GEOPANDAS_INSTALLED:
GenericDtype = TypeVar( # type: ignore
"GenericDtype",
- bool,
- int,
- str,
- float,
- pd.core.dtypes.base.ExtensionDtype,
- Bool,
- Date,
- DateTime,
- Decimal,
- Timedelta,
- Category,
- Float,
- Float16,
- Float32,
- Float64,
- Int,
- Int8,
- Int16,
- Int32,
- Int64,
- UInt8,
- UInt16,
- UInt32,
- UInt64,
- INT8,
- INT16,
- INT32,
- INT64,
- UINT8,
- UINT16,
- UINT32,
- UINT64,
- Object,
- String,
- STRING,
- Geometry,
- covariant=True,
+ bound=Union[
+ bool,
+ int,
+ str,
+ float,
+ pd.core.dtypes.base.ExtensionDtype,
+ Bool,
+ Date,
+ DateTime,
+ Decimal,
+ Timedelta,
+ Category,
+ Float,
+ Float16,
+ Float32,
+ Float64,
+ Int,
+ Int8,
+ Int16,
+ Int32,
+ Int64,
+ UInt8,
+ UInt16,
+ UInt32,
+ UInt64,
+ INT8,
+ INT16,
+ INT32,
+ INT64,
+ UINT8,
+ UINT16,
+ UINT32,
+ UINT64,
+ Object,
+ String,
+ STRING,
+ Geometry,
+ ],
)
else:
GenericDtype = TypeVar( # type: ignore
"GenericDtype",
- bool,
- int,
- str,
- float,
- pd.core.dtypes.base.ExtensionDtype,
- Bool,
- Date,
- DateTime,
- Decimal,
- Timedelta,
- Category,
- Float,
- Float16,
- Float32,
- Float64,
- Int,
- Int8,
- Int16,
- Int32,
- Int64,
- UInt8,
- UInt16,
- UInt32,
- UInt64,
- INT8,
- INT16,
- INT32,
- INT64,
- UINT8,
- UINT16,
- UINT32,
- UINT64,
- Object,
- String,
- STRING,
- covariant=True,
+ bound=Union[
+ bool,
+ int,
+ str,
+ float,
+ pd.core.dtypes.base.ExtensionDtype,
+ Bool,
+ Date,
+ DateTime,
+ Decimal,
+ Timedelta,
+ Category,
+ Float,
+ Float16,
+ Float32,
+ Float64,
+ Int,
+ Int8,
+ Int16,
+ Int32,
+ Int64,
+ UInt8,
+ UInt16,
+ UInt32,
+ UInt64,
+ INT8,
+ INT16,
+ INT32,
+ INT64,
+ UINT8,
+ UINT16,
+ UINT32,
+ UINT64,
+ Object,
+ String,
+ STRING,
+ ],
)
Schema = TypeVar("Schema", bound="SchemaModel") # type: ignore
| {"golden_diff": "diff --git a/pandera/typing/common.py b/pandera/typing/common.py\n--- a/pandera/typing/common.py\n+++ b/pandera/typing/common.py\n@@ -2,7 +2,7 @@\n # pylint:disable=abstract-method,disable=too-many-ancestors\n \n import inspect\n-from typing import TYPE_CHECKING, Any, Generic, Optional, Type, TypeVar\n+from typing import TYPE_CHECKING, Any, Generic, Optional, Type, TypeVar, Union\n \n import pandas as pd\n import typing_inspect\n@@ -53,83 +53,85 @@\n if GEOPANDAS_INSTALLED:\n GenericDtype = TypeVar( # type: ignore\n \"GenericDtype\",\n- bool,\n- int,\n- str,\n- float,\n- pd.core.dtypes.base.ExtensionDtype,\n- Bool,\n- Date,\n- DateTime,\n- Decimal,\n- Timedelta,\n- Category,\n- Float,\n- Float16,\n- Float32,\n- Float64,\n- Int,\n- Int8,\n- Int16,\n- Int32,\n- Int64,\n- UInt8,\n- UInt16,\n- UInt32,\n- UInt64,\n- INT8,\n- INT16,\n- INT32,\n- INT64,\n- UINT8,\n- UINT16,\n- UINT32,\n- UINT64,\n- Object,\n- String,\n- STRING,\n- Geometry,\n- covariant=True,\n+ bound=Union[\n+ bool,\n+ int,\n+ str,\n+ float,\n+ pd.core.dtypes.base.ExtensionDtype,\n+ Bool,\n+ Date,\n+ DateTime,\n+ Decimal,\n+ Timedelta,\n+ Category,\n+ Float,\n+ Float16,\n+ Float32,\n+ Float64,\n+ Int,\n+ Int8,\n+ Int16,\n+ Int32,\n+ Int64,\n+ UInt8,\n+ UInt16,\n+ UInt32,\n+ UInt64,\n+ INT8,\n+ INT16,\n+ INT32,\n+ INT64,\n+ UINT8,\n+ UINT16,\n+ UINT32,\n+ UINT64,\n+ Object,\n+ String,\n+ STRING,\n+ Geometry,\n+ ],\n )\n else:\n GenericDtype = TypeVar( # type: ignore\n \"GenericDtype\",\n- bool,\n- int,\n- str,\n- float,\n- pd.core.dtypes.base.ExtensionDtype,\n- Bool,\n- Date,\n- DateTime,\n- Decimal,\n- Timedelta,\n- Category,\n- Float,\n- Float16,\n- Float32,\n- Float64,\n- Int,\n- Int8,\n- Int16,\n- Int32,\n- Int64,\n- UInt8,\n- UInt16,\n- UInt32,\n- UInt64,\n- INT8,\n- INT16,\n- INT32,\n- INT64,\n- UINT8,\n- UINT16,\n- UINT32,\n- UINT64,\n- Object,\n- String,\n- STRING,\n- covariant=True,\n+ bound=Union[\n+ bool,\n+ int,\n+ str,\n+ float,\n+ pd.core.dtypes.base.ExtensionDtype,\n+ Bool,\n+ Date,\n+ DateTime,\n+ Decimal,\n+ Timedelta,\n+ Category,\n+ Float,\n+ Float16,\n+ Float32,\n+ Float64,\n+ Int,\n+ Int8,\n+ Int16,\n+ Int32,\n+ Int64,\n+ UInt8,\n+ UInt16,\n+ UInt32,\n+ UInt64,\n+ INT8,\n+ INT16,\n+ INT32,\n+ INT64,\n+ UINT8,\n+ UINT16,\n+ UINT32,\n+ UINT64,\n+ Object,\n+ String,\n+ STRING,\n+ ],\n )\n \n Schema = TypeVar(\"Schema\", bound=\"SchemaModel\") # type: ignore\n", "issue": "mypy: pa.typing.Series does not support pd.DatetimeTZDtype as a type argument\n**Describe the bug**\r\n\r\nUnder mypy, `pa.typing.Series` does not support `pd.DatetimeTZDtype` as a valid type argument.\r\n\r\nBug occurs with and without `pandera.mypy` plugin\r\n\r\n- [x] I have checked that this issue has not already been reported.\r\n- [x] I have confirmed this bug exists on the latest version of pandera.\r\n- [ ] (optional) I have confirmed this bug exists on the master branch of pandera.\r\n\r\n#### Code Sample, a copy-pastable example\r\n\r\n```python\r\nfrom typing import Annotated\r\nimport pandas as pd\r\nimport pandera as pa\r\nfrom pandera.typing import Series\r\n\r\nclass MySchema(pa.SchemaModel):\r\n # both lines taken from the pandera documentation\r\n a: Series[Annotated[pd.DatetimeTZDtype, \"ns\", \"UTC\"]]\r\n b: Series[pd.DatetimeTZDtype] = pa.Field(dtype_kwargs={\"unit\": \"ns\", \"tz\": \"UTC\"})\r\n```\r\n\r\n#### Expected behavior\r\n\r\nmypy should show no errors\r\n\r\n#### Actual behaviour\r\n\r\nmypy produces the following errors\r\n\r\n```\r\ntest.py:8: error: Value of type variable \"GenericDtype\" of 
\"Series\" cannot be \"DatetimeTZDtype\"\r\ntest.py:9: error: Value of type variable \"GenericDtype\" of \"Series\" cannot be \"DatetimeTZDtype\"\r\n```\r\n\r\n#### Versions\r\n\r\n```\r\npandas==1.5.0\r\npandas-stubs==1.4.3.220807\r\npandera==0.13.3\r\nmypy==0.982\r\nmypy-extensions==0.4.3\r\n```\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"Common typing functionality.\"\"\"\n# pylint:disable=abstract-method,disable=too-many-ancestors\n\nimport inspect\nfrom typing import TYPE_CHECKING, Any, Generic, Optional, Type, TypeVar\n\nimport pandas as pd\nimport typing_inspect\n\nfrom .. import dtypes\nfrom ..engines import numpy_engine, pandas_engine\n\nBool = dtypes.Bool #: ``\"bool\"`` numpy dtype\nDate = dtypes.Date #: ``datetime.date`` object dtype\nDateTime = dtypes.DateTime #: ``\"datetime64[ns]\"`` numpy dtype\nDecimal = dtypes.Decimal #: ``decimal.Decimal`` object dtype\nTimedelta = dtypes.Timedelta #: ``\"timedelta64[ns]\"`` numpy dtype\nCategory = dtypes.Category #: pandas ``\"categorical\"`` datatype\nFloat = dtypes.Float #: ``\"float\"`` numpy dtype\nFloat16 = dtypes.Float16 #: ``\"float16\"`` numpy dtype\nFloat32 = dtypes.Float32 #: ``\"float32\"`` numpy dtype\nFloat64 = dtypes.Float64 #: ``\"float64\"`` numpy dtype\nInt = dtypes.Int #: ``\"int\"`` numpy dtype\nInt8 = dtypes.Int8 #: ``\"int8\"`` numpy dtype\nInt16 = dtypes.Int16 #: ``\"int16\"`` numpy dtype\nInt32 = dtypes.Int32 #: ``\"int32\"`` numpy dtype\nInt64 = dtypes.Int64 #: ``\"int64\"`` numpy dtype\nUInt8 = dtypes.UInt8 #: ``\"uint8\"`` numpy dtype\nUInt16 = dtypes.UInt16 #: ``\"uint16\"`` numpy dtype\nUInt32 = dtypes.UInt32 #: ``\"uint32\"`` numpy dtype\nUInt64 = dtypes.UInt64 #: ``\"uint64\"`` numpy dtype\nINT8 = pandas_engine.INT8 #: ``\"Int8\"`` pandas dtype:: pandas 0.24.0+\nINT16 = pandas_engine.INT16 #: ``\"Int16\"`` pandas dtype: pandas 0.24.0+\nINT32 = pandas_engine.INT32 #: ``\"Int32\"`` pandas dtype: pandas 0.24.0+\nINT64 = pandas_engine.INT64 #: ``\"Int64\"`` pandas dtype: pandas 0.24.0+\nUINT8 = pandas_engine.UINT8 #: ``\"UInt8\"`` pandas dtype:: pandas 0.24.0+\nUINT16 = pandas_engine.UINT16 #: ``\"UInt16\"`` pandas dtype: pandas 0.24.0+\nUINT32 = pandas_engine.UINT32 #: ``\"UInt32\"`` pandas dtype: pandas 0.24.0+\nUINT64 = pandas_engine.UINT64 #: ``\"UInt64\"`` pandas dtype: pandas 0.24.0+\nObject = numpy_engine.Object #: ``\"object\"`` numpy dtype\nString = dtypes.String #: ``\"str\"`` numpy dtype\n#: ``\"string\"`` pandas dtypes: pandas 1.0.0+. 
For <1.0.0, this enum will\n#: fall back on the str-as-object-array representation.\nSTRING = pandas_engine.STRING #: ``\"str\"`` numpy dtype\nBOOL = pandas_engine.BOOL #: ``\"str\"`` numpy dtype\n\ntry:\n Geometry = pandas_engine.Geometry # : ``\"geometry\"`` geopandas dtype\n GEOPANDAS_INSTALLED = True\nexcept AttributeError:\n GEOPANDAS_INSTALLED = False\n\nif GEOPANDAS_INSTALLED:\n GenericDtype = TypeVar( # type: ignore\n \"GenericDtype\",\n bool,\n int,\n str,\n float,\n pd.core.dtypes.base.ExtensionDtype,\n Bool,\n Date,\n DateTime,\n Decimal,\n Timedelta,\n Category,\n Float,\n Float16,\n Float32,\n Float64,\n Int,\n Int8,\n Int16,\n Int32,\n Int64,\n UInt8,\n UInt16,\n UInt32,\n UInt64,\n INT8,\n INT16,\n INT32,\n INT64,\n UINT8,\n UINT16,\n UINT32,\n UINT64,\n Object,\n String,\n STRING,\n Geometry,\n covariant=True,\n )\nelse:\n GenericDtype = TypeVar( # type: ignore\n \"GenericDtype\",\n bool,\n int,\n str,\n float,\n pd.core.dtypes.base.ExtensionDtype,\n Bool,\n Date,\n DateTime,\n Decimal,\n Timedelta,\n Category,\n Float,\n Float16,\n Float32,\n Float64,\n Int,\n Int8,\n Int16,\n Int32,\n Int64,\n UInt8,\n UInt16,\n UInt32,\n UInt64,\n INT8,\n INT16,\n INT32,\n INT64,\n UINT8,\n UINT16,\n UINT32,\n UINT64,\n Object,\n String,\n STRING,\n covariant=True,\n )\n\nSchema = TypeVar(\"Schema\", bound=\"SchemaModel\") # type: ignore\n\n\n# pylint:disable=invalid-name\nif TYPE_CHECKING:\n T = TypeVar(\"T\") # pragma: no cover\nelse:\n T = Schema\n\n\nclass DataFrameBase(Generic[T]):\n # pylint: disable=too-few-public-methods\n \"\"\"\n Pandera Dataframe base class for validating dataframes on\n initialization.\n \"\"\"\n\n default_dtype: Optional[Type] = None\n\n def __setattr__(self, name: str, value: Any) -> None:\n # pylint: disable=no-member\n object.__setattr__(self, name, value)\n if name == \"__orig_class__\":\n orig_class = getattr(self, \"__orig_class__\")\n class_args = getattr(orig_class, \"__args__\", None)\n if class_args is not None and any(\n x.__name__ == \"SchemaModel\"\n for x in inspect.getmro(class_args[0])\n ):\n schema_model = value.__args__[0]\n\n # prevent the double validation problem by preventing checks for\n # dataframes with a defined pandera.schema\n pandera_accessor = getattr(self, \"pandera\")\n if (\n pandera_accessor.schema is None\n or pandera_accessor.schema != schema_model.to_schema()\n ):\n pandera_accessor.add_schema(schema_model.to_schema())\n self.__dict__ = schema_model.validate(self).__dict__\n\n\n# pylint:disable=too-few-public-methods\nclass SeriesBase(Generic[GenericDtype]):\n \"\"\"Pandera Series base class to use for all pandas-like APIs.\"\"\"\n\n default_dtype: Optional[Type] = None\n\n def __get__(\n self, instance: object, owner: Type\n ) -> str: # pragma: no cover\n raise AttributeError(\"Series should resolve to Field-s\")\n\n\n# pylint:disable=too-few-public-methods\nclass IndexBase(Generic[GenericDtype]):\n \"\"\"Representation of pandas.Index, only used for type annotation.\n\n *new in 0.5.0*\n \"\"\"\n\n default_dtype: Optional[Type] = None\n\n def __get__(\n self, instance: object, owner: Type\n ) -> str: # pragma: no cover\n raise AttributeError(\"Indexes should resolve to pa.Index-s\")\n\n\nclass AnnotationInfo: # pylint:disable=too-few-public-methods\n \"\"\"Captures extra information about an annotation.\n\n Attributes:\n origin: The non-parameterized generic class.\n arg: The first generic type (SchemaModel does not support more than\n 1 argument).\n literal: Whether the annotation is a literal.\n optional: Whether the 
annotation is optional.\n raw_annotation: The raw annotation.\n metadata: Extra arguments passed to :data:`typing.Annotated`.\n \"\"\"\n\n def __init__(self, raw_annotation: Type) -> None:\n self._parse_annotation(raw_annotation)\n\n @property\n def is_generic_df(self) -> bool:\n \"\"\"True if the annotation is a DataFrameBase subclass.\"\"\"\n try:\n if self.origin is None:\n return False\n return issubclass(self.origin, DataFrameBase)\n except TypeError:\n return False\n\n def _parse_annotation(self, raw_annotation: Type) -> None:\n \"\"\"Parse key information from annotation.\n\n :param annotation: A subscripted type.\n :returns: Annotation\n \"\"\"\n self.raw_annotation = raw_annotation\n self.origin = self.arg = None\n\n self.optional = typing_inspect.is_optional_type(raw_annotation)\n if self.optional and typing_inspect.is_union_type(raw_annotation):\n # Annotated with Optional or Union[..., NoneType]\n # get_args -> (pandera.typing.Index[str], <class 'NoneType'>)\n raw_annotation = typing_inspect.get_args(raw_annotation)[0]\n\n self.origin = typing_inspect.get_origin(raw_annotation)\n # Replace empty tuple returned from get_args by None\n args = typing_inspect.get_args(raw_annotation) or None\n self.arg = args[0] if args else args\n\n self.metadata = getattr(self.arg, \"__metadata__\", None)\n if self.metadata:\n self.arg = typing_inspect.get_args(self.arg)[0]\n\n self.literal = typing_inspect.is_literal_type(self.arg)\n if self.literal:\n self.arg = typing_inspect.get_args(self.arg)[0]\n\n self.default_dtype = getattr(raw_annotation, \"default_dtype\", None)\n", "path": "pandera/typing/common.py"}], "after_files": [{"content": "\"\"\"Common typing functionality.\"\"\"\n# pylint:disable=abstract-method,disable=too-many-ancestors\n\nimport inspect\nfrom typing import TYPE_CHECKING, Any, Generic, Optional, Type, TypeVar, Union\n\nimport pandas as pd\nimport typing_inspect\n\nfrom .. 
import dtypes\nfrom ..engines import numpy_engine, pandas_engine\n\nBool = dtypes.Bool #: ``\"bool\"`` numpy dtype\nDate = dtypes.Date #: ``datetime.date`` object dtype\nDateTime = dtypes.DateTime #: ``\"datetime64[ns]\"`` numpy dtype\nDecimal = dtypes.Decimal #: ``decimal.Decimal`` object dtype\nTimedelta = dtypes.Timedelta #: ``\"timedelta64[ns]\"`` numpy dtype\nCategory = dtypes.Category #: pandas ``\"categorical\"`` datatype\nFloat = dtypes.Float #: ``\"float\"`` numpy dtype\nFloat16 = dtypes.Float16 #: ``\"float16\"`` numpy dtype\nFloat32 = dtypes.Float32 #: ``\"float32\"`` numpy dtype\nFloat64 = dtypes.Float64 #: ``\"float64\"`` numpy dtype\nInt = dtypes.Int #: ``\"int\"`` numpy dtype\nInt8 = dtypes.Int8 #: ``\"int8\"`` numpy dtype\nInt16 = dtypes.Int16 #: ``\"int16\"`` numpy dtype\nInt32 = dtypes.Int32 #: ``\"int32\"`` numpy dtype\nInt64 = dtypes.Int64 #: ``\"int64\"`` numpy dtype\nUInt8 = dtypes.UInt8 #: ``\"uint8\"`` numpy dtype\nUInt16 = dtypes.UInt16 #: ``\"uint16\"`` numpy dtype\nUInt32 = dtypes.UInt32 #: ``\"uint32\"`` numpy dtype\nUInt64 = dtypes.UInt64 #: ``\"uint64\"`` numpy dtype\nINT8 = pandas_engine.INT8 #: ``\"Int8\"`` pandas dtype:: pandas 0.24.0+\nINT16 = pandas_engine.INT16 #: ``\"Int16\"`` pandas dtype: pandas 0.24.0+\nINT32 = pandas_engine.INT32 #: ``\"Int32\"`` pandas dtype: pandas 0.24.0+\nINT64 = pandas_engine.INT64 #: ``\"Int64\"`` pandas dtype: pandas 0.24.0+\nUINT8 = pandas_engine.UINT8 #: ``\"UInt8\"`` pandas dtype:: pandas 0.24.0+\nUINT16 = pandas_engine.UINT16 #: ``\"UInt16\"`` pandas dtype: pandas 0.24.0+\nUINT32 = pandas_engine.UINT32 #: ``\"UInt32\"`` pandas dtype: pandas 0.24.0+\nUINT64 = pandas_engine.UINT64 #: ``\"UInt64\"`` pandas dtype: pandas 0.24.0+\nObject = numpy_engine.Object #: ``\"object\"`` numpy dtype\nString = dtypes.String #: ``\"str\"`` numpy dtype\n#: ``\"string\"`` pandas dtypes: pandas 1.0.0+. 
For <1.0.0, this enum will\n#: fall back on the str-as-object-array representation.\nSTRING = pandas_engine.STRING #: ``\"str\"`` numpy dtype\nBOOL = pandas_engine.BOOL #: ``\"str\"`` numpy dtype\n\ntry:\n Geometry = pandas_engine.Geometry # : ``\"geometry\"`` geopandas dtype\n GEOPANDAS_INSTALLED = True\nexcept AttributeError:\n GEOPANDAS_INSTALLED = False\n\nif GEOPANDAS_INSTALLED:\n GenericDtype = TypeVar( # type: ignore\n \"GenericDtype\",\n bound=Union[\n bool,\n int,\n str,\n float,\n pd.core.dtypes.base.ExtensionDtype,\n Bool,\n Date,\n DateTime,\n Decimal,\n Timedelta,\n Category,\n Float,\n Float16,\n Float32,\n Float64,\n Int,\n Int8,\n Int16,\n Int32,\n Int64,\n UInt8,\n UInt16,\n UInt32,\n UInt64,\n INT8,\n INT16,\n INT32,\n INT64,\n UINT8,\n UINT16,\n UINT32,\n UINT64,\n Object,\n String,\n STRING,\n Geometry,\n ],\n )\nelse:\n GenericDtype = TypeVar( # type: ignore\n \"GenericDtype\",\n bound=Union[\n bool,\n int,\n str,\n float,\n pd.core.dtypes.base.ExtensionDtype,\n Bool,\n Date,\n DateTime,\n Decimal,\n Timedelta,\n Category,\n Float,\n Float16,\n Float32,\n Float64,\n Int,\n Int8,\n Int16,\n Int32,\n Int64,\n UInt8,\n UInt16,\n UInt32,\n UInt64,\n INT8,\n INT16,\n INT32,\n INT64,\n UINT8,\n UINT16,\n UINT32,\n UINT64,\n Object,\n String,\n STRING,\n ],\n )\n\nSchema = TypeVar(\"Schema\", bound=\"SchemaModel\") # type: ignore\n\n\n# pylint:disable=invalid-name\nif TYPE_CHECKING:\n T = TypeVar(\"T\") # pragma: no cover\nelse:\n T = Schema\n\n\nclass DataFrameBase(Generic[T]):\n # pylint: disable=too-few-public-methods\n \"\"\"\n Pandera Dataframe base class for validating dataframes on\n initialization.\n \"\"\"\n\n default_dtype: Optional[Type] = None\n\n def __setattr__(self, name: str, value: Any) -> None:\n # pylint: disable=no-member\n object.__setattr__(self, name, value)\n if name == \"__orig_class__\":\n orig_class = getattr(self, \"__orig_class__\")\n class_args = getattr(orig_class, \"__args__\", None)\n if class_args is not None and any(\n x.__name__ == \"SchemaModel\"\n for x in inspect.getmro(class_args[0])\n ):\n schema_model = value.__args__[0]\n\n # prevent the double validation problem by preventing checks for\n # dataframes with a defined pandera.schema\n pandera_accessor = getattr(self, \"pandera\")\n if (\n pandera_accessor.schema is None\n or pandera_accessor.schema != schema_model.to_schema()\n ):\n pandera_accessor.add_schema(schema_model.to_schema())\n self.__dict__ = schema_model.validate(self).__dict__\n\n\n# pylint:disable=too-few-public-methods\nclass SeriesBase(Generic[GenericDtype]):\n \"\"\"Pandera Series base class to use for all pandas-like APIs.\"\"\"\n\n default_dtype: Optional[Type] = None\n\n def __get__(\n self, instance: object, owner: Type\n ) -> str: # pragma: no cover\n raise AttributeError(\"Series should resolve to Field-s\")\n\n\n# pylint:disable=too-few-public-methods\nclass IndexBase(Generic[GenericDtype]):\n \"\"\"Representation of pandas.Index, only used for type annotation.\n\n *new in 0.5.0*\n \"\"\"\n\n default_dtype: Optional[Type] = None\n\n def __get__(\n self, instance: object, owner: Type\n ) -> str: # pragma: no cover\n raise AttributeError(\"Indexes should resolve to pa.Index-s\")\n\n\nclass AnnotationInfo: # pylint:disable=too-few-public-methods\n \"\"\"Captures extra information about an annotation.\n\n Attributes:\n origin: The non-parameterized generic class.\n arg: The first generic type (SchemaModel does not support more than\n 1 argument).\n literal: Whether the annotation is a literal.\n optional: Whether 
the annotation is optional.\n raw_annotation: The raw annotation.\n metadata: Extra arguments passed to :data:`typing.Annotated`.\n \"\"\"\n\n def __init__(self, raw_annotation: Type) -> None:\n self._parse_annotation(raw_annotation)\n\n @property\n def is_generic_df(self) -> bool:\n \"\"\"True if the annotation is a DataFrameBase subclass.\"\"\"\n try:\n if self.origin is None:\n return False\n return issubclass(self.origin, DataFrameBase)\n except TypeError:\n return False\n\n def _parse_annotation(self, raw_annotation: Type) -> None:\n \"\"\"Parse key information from annotation.\n\n :param annotation: A subscripted type.\n :returns: Annotation\n \"\"\"\n self.raw_annotation = raw_annotation\n self.origin = self.arg = None\n\n self.optional = typing_inspect.is_optional_type(raw_annotation)\n if self.optional and typing_inspect.is_union_type(raw_annotation):\n # Annotated with Optional or Union[..., NoneType]\n # get_args -> (pandera.typing.Index[str], <class 'NoneType'>)\n raw_annotation = typing_inspect.get_args(raw_annotation)[0]\n\n self.origin = typing_inspect.get_origin(raw_annotation)\n # Replace empty tuple returned from get_args by None\n args = typing_inspect.get_args(raw_annotation) or None\n self.arg = args[0] if args else args\n\n self.metadata = getattr(self.arg, \"__metadata__\", None)\n if self.metadata:\n self.arg = typing_inspect.get_args(self.arg)[0]\n\n self.literal = typing_inspect.is_literal_type(self.arg)\n if self.literal:\n self.arg = typing_inspect.get_args(self.arg)[0]\n\n self.default_dtype = getattr(raw_annotation, \"default_dtype\", None)\n", "path": "pandera/typing/common.py"}]} | 3,453 | 971 |
gh_patches_debug_11643 | rasdani/github-patches | git_diff | ansible__molecule-2232 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ValueError: invalid literal for int() with base 10: 'post20190705'
It seems that the code in setup.py is broken, as it makes the wrong assumption that `setuptools.__version__` will always contain only integer components:
https://github.com/ansible/molecule/blob/master/setup.py#L36
```
RUNTIME_SETUPTOOLS_VERSION = tuple(map(int, setuptools.__version__.split('.')))
ValueError: invalid literal for int() with base 10: 'post20190705'
```
As seen here, the version string can contain a non-numeric component. The quick workaround is to only look at the first two elements.
--- END ISSUE ---
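A quick way to reproduce the failure mode and the proposed truncation; the exact version string below is a made-up example of the same shape as the one in the traceback:
```python
version = "41.0.0.post20190705"  # hypothetical dev build of setuptools

# Original parsing: every dot-separated component must be an int.
# tuple(map(int, version.split('.')))  # -> ValueError on 'post20190705'

# Workaround from the issue: compare only the first two components.
print(tuple(map(int, version.split('.')[:2])))  # (41, 0)
```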
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #! /usr/bin/env python
2 # Copyright (c) 2019 Red Hat, Inc.
3 # Copyright (c) 2015-2018 Cisco Systems, Inc.
4 #
5 # Permission is hereby granted, free of charge, to any person obtaining a copy
6 # of this software and associated documentation files (the "Software"), to
7 # deal in the Software without restriction, including without limitation the
8 # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
9 # sell copies of the Software, and to permit persons to whom the Software is
10 # furnished to do so, subject to the following conditions:
11 #
12 # The above copyright notice and this permission notice shall be included in
13 # all copies or substantial portions of the Software.
14 #
15 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
20 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
21 # DEALINGS IN THE SOFTWARE.
22 """Molecule distribution package setuptools installer."""
23
24 import setuptools
25
26 HAS_DIST_INFO_CMD = False
27 try:
28 import setuptools.command.dist_info
29
30 HAS_DIST_INFO_CMD = True
31 except ImportError:
32 """Setuptools version is too old."""
33
34
35 ALL_STRING_TYPES = tuple(map(type, ('', b'', u'')))
36 MIN_NATIVE_SETUPTOOLS_VERSION = 34, 4, 0
37 """Minimal setuptools having good read_configuration implementation."""
38
39 RUNTIME_SETUPTOOLS_VERSION = tuple(map(int, setuptools.__version__.split('.')))
40 """Setuptools imported now."""
41
42 READ_CONFIG_SHIM_NEEDED = RUNTIME_SETUPTOOLS_VERSION < MIN_NATIVE_SETUPTOOLS_VERSION
43
44
45 def str_if_nested_or_str(s):
46 """Turn input into a native string if possible."""
47 if isinstance(s, ALL_STRING_TYPES):
48 return str(s)
49 if isinstance(s, (list, tuple)):
50 return type(s)(map(str_if_nested_or_str, s))
51 if isinstance(s, (dict,)):
52 return stringify_dict_contents(s)
53 return s
54
55
56 def stringify_dict_contents(dct):
57 """Turn dict keys and values into native strings."""
58 return {str_if_nested_or_str(k): str_if_nested_or_str(v) for k, v in dct.items()}
59
60
61 if not READ_CONFIG_SHIM_NEEDED:
62 from setuptools.config import read_configuration, ConfigOptionsHandler
63 import setuptools.config
64 import setuptools.dist
65
66 # Set default value for 'use_scm_version'
67 setattr(setuptools.dist.Distribution, 'use_scm_version', False)
68
69 # Attach bool parser to 'use_scm_version' option
70 class ShimConfigOptionsHandler(ConfigOptionsHandler):
71 """Extension class for ConfigOptionsHandler."""
72
73 @property
74 def parsers(self):
75 """Return an option mapping with default data type parsers."""
76 _orig_parsers = super(ShimConfigOptionsHandler, self).parsers
77 return dict(use_scm_version=self._parse_bool, **_orig_parsers)
78
79 def parse_section_packages__find(self, section_options):
80 find_kwargs = super(
81 ShimConfigOptionsHandler, self
82 ).parse_section_packages__find(section_options)
83 return stringify_dict_contents(find_kwargs)
84
85 setuptools.config.ConfigOptionsHandler = ShimConfigOptionsHandler
86 else:
87 """This is a shim for setuptools<required."""
88 import functools
89 import io
90 import json
91 import sys
92 import warnings
93
94 try:
95 import setuptools.config
96
97 def filter_out_unknown_section(i):
98 def chi(self, *args, **kwargs):
99 i(self, *args, **kwargs)
100 self.sections = {
101 s: v for s, v in self.sections.items() if s != 'packages.find'
102 }
103
104 return chi
105
106 setuptools.config.ConfigHandler.__init__ = filter_out_unknown_section(
107 setuptools.config.ConfigHandler.__init__
108 )
109 except ImportError:
110 pass
111
112 def ignore_unknown_options(s):
113 @functools.wraps(s)
114 def sw(**attrs):
115 try:
116 ignore_warning_regex = (
117 r"Unknown distribution option: "
118 r"'(license_file|project_urls|python_requires)'"
119 )
120 warnings.filterwarnings(
121 'ignore',
122 message=ignore_warning_regex,
123 category=UserWarning,
124 module='distutils.dist',
125 )
126 return s(**attrs)
127 finally:
128 warnings.resetwarnings()
129
130 return sw
131
132 def parse_predicates(python_requires):
133 import itertools
134 import operator
135
136 sorted_operators_map = tuple(
137 sorted(
138 {
139 '>': operator.gt,
140 '<': operator.lt,
141 '>=': operator.ge,
142 '<=': operator.le,
143 '==': operator.eq,
144 '!=': operator.ne,
145 '': operator.eq,
146 }.items(),
147 key=lambda i: len(i[0]),
148 reverse=True,
149 )
150 )
151
152 def is_decimal(s):
153 return type(u'')(s).isdecimal()
154
155 conditions = map(str.strip, python_requires.split(','))
156 for c in conditions:
157 for op_sign, op_func in sorted_operators_map:
158 if not c.startswith(op_sign):
159 continue
160 raw_ver = itertools.takewhile(
161 is_decimal, c[len(op_sign) :].strip().split('.')
162 )
163 ver = tuple(map(int, raw_ver))
164 yield op_func, ver
165 break
166
167 def validate_required_python_or_fail(python_requires=None):
168 if python_requires is None:
169 return
170
171 python_version = sys.version_info
172 preds = parse_predicates(python_requires)
173 for op, v in preds:
174 py_ver_slug = python_version[: max(len(v), 3)]
175 condition_matches = op(py_ver_slug, v)
176 if not condition_matches:
177 raise RuntimeError(
178 "requires Python '{}' but the running Python is {}".format(
179 python_requires, '.'.join(map(str, python_version[:3]))
180 )
181 )
182
183 def verify_required_python_runtime(s):
184 @functools.wraps(s)
185 def sw(**attrs):
186 try:
187 validate_required_python_or_fail(attrs.get('python_requires'))
188 except RuntimeError as re:
189 sys.exit('{} {!s}'.format(attrs['name'], re))
190 return s(**attrs)
191
192 return sw
193
194 setuptools.setup = ignore_unknown_options(setuptools.setup)
195 setuptools.setup = verify_required_python_runtime(setuptools.setup)
196
197 try:
198 from configparser import ConfigParser, NoSectionError
199 except ImportError:
200 from ConfigParser import ConfigParser, NoSectionError
201
202 ConfigParser.read_file = ConfigParser.readfp
203
204 def maybe_read_files(d):
205 """Read files if the string starts with `file:` marker."""
206 FILE_FUNC_MARKER = 'file:'
207
208 d = d.strip()
209 if not d.startswith(FILE_FUNC_MARKER):
210 return d
211 descs = []
212 for fname in map(str.strip, str(d[len(FILE_FUNC_MARKER) :]).split(',')):
213 with io.open(fname, encoding='utf-8') as f:
214 descs.append(f.read())
215 return ''.join(descs)
216
217 def cfg_val_to_list(v):
218 """Turn config val to list and filter out empty lines."""
219 return list(filter(bool, map(str.strip, str(v).strip().splitlines())))
220
221 def cfg_val_to_dict(v):
222 """Turn config val to dict and filter out empty lines."""
223 return dict(
224 map(
225 lambda l: list(map(str.strip, l.split('=', 1))),
226 filter(bool, map(str.strip, str(v).strip().splitlines())),
227 )
228 )
229
230 def cfg_val_to_primitive(v):
231 """Parse primitive config val to appropriate data type."""
232 return json.loads(v.strip().lower())
233
234 def read_configuration(filepath):
235 """Read metadata and options from setup.cfg located at filepath."""
236 cfg = ConfigParser()
237 with io.open(filepath, encoding='utf-8') as f:
238 cfg.read_file(f)
239
240 md = dict(cfg.items('metadata'))
241 for list_key in 'classifiers', 'keywords', 'project_urls':
242 try:
243 md[list_key] = cfg_val_to_list(md[list_key])
244 except KeyError:
245 pass
246 try:
247 md['long_description'] = maybe_read_files(md['long_description'])
248 except KeyError:
249 pass
250 opt = dict(cfg.items('options'))
251 for list_key in 'include_package_data', 'use_scm_version', 'zip_safe':
252 try:
253 opt[list_key] = cfg_val_to_primitive(opt[list_key])
254 except KeyError:
255 pass
256 for list_key in 'scripts', 'install_requires', 'setup_requires':
257 try:
258 opt[list_key] = cfg_val_to_list(opt[list_key])
259 except KeyError:
260 pass
261 try:
262 opt['package_dir'] = cfg_val_to_dict(opt['package_dir'])
263 except KeyError:
264 pass
265 try:
266 opt_package_data = dict(cfg.items('options.package_data'))
267 if not opt_package_data.get('', '').strip():
268 opt_package_data[''] = opt_package_data['*']
269 del opt_package_data['*']
270 except (KeyError, NoSectionError):
271 opt_package_data = {}
272 try:
273 opt_extras_require = dict(cfg.items('options.extras_require'))
274 opt['extras_require'] = {}
275 for k, v in opt_extras_require.items():
276 opt['extras_require'][k] = cfg_val_to_list(v)
277 except NoSectionError:
278 pass
279 opt['package_data'] = {}
280 for k, v in opt_package_data.items():
281 opt['package_data'][k] = cfg_val_to_list(v)
282 try:
283 opt_exclude_package_data = dict(cfg.items('options.exclude_package_data'))
284 if (
285 not opt_exclude_package_data.get('', '').strip()
286 and '*' in opt_exclude_package_data
287 ):
288 opt_exclude_package_data[''] = opt_exclude_package_data['*']
289 del opt_exclude_package_data['*']
290 except NoSectionError:
291 pass
292 else:
293 opt['exclude_package_data'] = {}
294 for k, v in opt_exclude_package_data.items():
295 opt['exclude_package_data'][k] = cfg_val_to_list(v)
296 cur_pkgs = opt.get('packages', '').strip()
297 if '\n' in cur_pkgs:
298 opt['packages'] = cfg_val_to_list(opt['packages'])
299 elif cur_pkgs.startswith('find:'):
300 opt_packages_find = stringify_dict_contents(
301 dict(cfg.items('options.packages.find'))
302 )
303 opt['packages'] = setuptools.find_packages(**opt_packages_find)
304 return {'metadata': md, 'options': opt}
305
306
307 def cut_local_version_on_upload(version):
308 """Generate a PEP440 local version if uploading to PyPI."""
309 import os
310 import setuptools_scm.version # only present during setup time
311
312 IS_PYPI_UPLOAD = os.getenv('PYPI_UPLOAD') == 'true' # set in tox.ini
313 return (
314 ''
315 if IS_PYPI_UPLOAD
316 else setuptools_scm.version.get_local_node_and_date(version)
317 )
318
319
320 if HAS_DIST_INFO_CMD:
321
322 class patched_dist_info(setuptools.command.dist_info.dist_info):
323 def run(self):
324 self.egg_base = str_if_nested_or_str(self.egg_base)
325 return setuptools.command.dist_info.dist_info.run(self)
326
327
328 declarative_setup_params = read_configuration('setup.cfg')
329 """Declarative metadata and options as read by setuptools."""
330
331
332 setup_params = {}
333 """Explicit metadata for passing into setuptools.setup() call."""
334
335 setup_params = dict(setup_params, **declarative_setup_params['metadata'])
336 setup_params = dict(setup_params, **declarative_setup_params['options'])
337
338 if HAS_DIST_INFO_CMD:
339 setup_params['cmdclass'] = {'dist_info': patched_dist_info}
340
341 setup_params['use_scm_version'] = {'local_scheme': cut_local_version_on_upload}
342
343 # Patch incorrectly decoded package_dir option
344 # ``egg_info`` demands native strings failing with unicode under Python 2
345 # Ref https://github.com/pypa/setuptools/issues/1136
346 setup_params = stringify_dict_contents(setup_params)
347
348
349 __name__ == '__main__' and setuptools.setup(**setup_params)
350
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -33,10 +33,11 @@
ALL_STRING_TYPES = tuple(map(type, ('', b'', u'')))
-MIN_NATIVE_SETUPTOOLS_VERSION = 34, 4, 0
+MIN_NATIVE_SETUPTOOLS_VERSION = 34, 4
"""Minimal setuptools having good read_configuration implementation."""
-RUNTIME_SETUPTOOLS_VERSION = tuple(map(int, setuptools.__version__.split('.')))
+# Patch version can be a non integer value, like 'post20190705'
+RUNTIME_SETUPTOOLS_VERSION = tuple(map(int, setuptools.__version__.split('.')[:2]))
"""Setuptools imported now."""
READ_CONFIG_SHIM_NEEDED = RUNTIME_SETUPTOOLS_VERSION < MIN_NATIVE_SETUPTOOLS_VERSION
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -33,10 +33,11 @@\n \n \n ALL_STRING_TYPES = tuple(map(type, ('', b'', u'')))\n-MIN_NATIVE_SETUPTOOLS_VERSION = 34, 4, 0\n+MIN_NATIVE_SETUPTOOLS_VERSION = 34, 4\n \"\"\"Minimal setuptools having good read_configuration implementation.\"\"\"\n \n-RUNTIME_SETUPTOOLS_VERSION = tuple(map(int, setuptools.__version__.split('.')))\n+# Patch version can be a non integer value, like 'post20190705'\n+RUNTIME_SETUPTOOLS_VERSION = tuple(map(int, setuptools.__version__.split('.')[:2]))\n \"\"\"Setuptools imported now.\"\"\"\n \n READ_CONFIG_SHIM_NEEDED = RUNTIME_SETUPTOOLS_VERSION < MIN_NATIVE_SETUPTOOLS_VERSION\n", "issue": "ValueError: invalid literal for int() with base 10: 'post20190705'\nIt seems that code from setup.py is broken as it makes the wrong assumption that `setuptools.__version__.` will always return only integers:\r\n\r\nhttps://github.com/ansible/molecule/blob/master/setup.py#L36\r\n\r\n```\r\n RUNTIME_SETUPTOOLS_VERSION = tuple(map(int, setuptools.__version__.split('.')))\r\n ValueError: invalid literal for int() with base 10: 'post20190705'\r\n```\r\n\r\nAs seen here the minor version can be a non numeric value. The quick workaround is to only look at the fist two elements.\n", "before_files": [{"content": "#! /usr/bin/env python\n# Copyright (c) 2019 Red Hat, Inc.\n# Copyright (c) 2015-2018 Cisco Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to\n# deal in the Software without restriction, including without limitation the\n# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n# sell copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\"\"\"Molecule distribution package setuptools installer.\"\"\"\n\nimport setuptools\n\nHAS_DIST_INFO_CMD = False\ntry:\n import setuptools.command.dist_info\n\n HAS_DIST_INFO_CMD = True\nexcept ImportError:\n \"\"\"Setuptools version is too old.\"\"\"\n\n\nALL_STRING_TYPES = tuple(map(type, ('', b'', u'')))\nMIN_NATIVE_SETUPTOOLS_VERSION = 34, 4, 0\n\"\"\"Minimal setuptools having good read_configuration implementation.\"\"\"\n\nRUNTIME_SETUPTOOLS_VERSION = tuple(map(int, setuptools.__version__.split('.')))\n\"\"\"Setuptools imported now.\"\"\"\n\nREAD_CONFIG_SHIM_NEEDED = RUNTIME_SETUPTOOLS_VERSION < MIN_NATIVE_SETUPTOOLS_VERSION\n\n\ndef str_if_nested_or_str(s):\n \"\"\"Turn input into a native string if possible.\"\"\"\n if isinstance(s, ALL_STRING_TYPES):\n return str(s)\n if isinstance(s, (list, tuple)):\n return type(s)(map(str_if_nested_or_str, s))\n if isinstance(s, (dict,)):\n return stringify_dict_contents(s)\n return s\n\n\ndef stringify_dict_contents(dct):\n \"\"\"Turn dict keys and values into native strings.\"\"\"\n return {str_if_nested_or_str(k): str_if_nested_or_str(v) for k, v in dct.items()}\n\n\nif not READ_CONFIG_SHIM_NEEDED:\n from setuptools.config import read_configuration, ConfigOptionsHandler\n import setuptools.config\n import setuptools.dist\n\n # Set default value for 'use_scm_version'\n setattr(setuptools.dist.Distribution, 'use_scm_version', False)\n\n # Attach bool parser to 'use_scm_version' option\n class ShimConfigOptionsHandler(ConfigOptionsHandler):\n \"\"\"Extension class for ConfigOptionsHandler.\"\"\"\n\n @property\n def parsers(self):\n \"\"\"Return an option mapping with default data type parsers.\"\"\"\n _orig_parsers = super(ShimConfigOptionsHandler, self).parsers\n return dict(use_scm_version=self._parse_bool, **_orig_parsers)\n\n def parse_section_packages__find(self, section_options):\n find_kwargs = super(\n ShimConfigOptionsHandler, self\n ).parse_section_packages__find(section_options)\n return stringify_dict_contents(find_kwargs)\n\n setuptools.config.ConfigOptionsHandler = ShimConfigOptionsHandler\nelse:\n \"\"\"This is a shim for setuptools<required.\"\"\"\n import functools\n import io\n import json\n import sys\n import warnings\n\n try:\n import setuptools.config\n\n def filter_out_unknown_section(i):\n def chi(self, *args, **kwargs):\n i(self, *args, **kwargs)\n self.sections = {\n s: v for s, v in self.sections.items() if s != 'packages.find'\n }\n\n return chi\n\n setuptools.config.ConfigHandler.__init__ = filter_out_unknown_section(\n setuptools.config.ConfigHandler.__init__\n )\n except ImportError:\n pass\n\n def ignore_unknown_options(s):\n @functools.wraps(s)\n def sw(**attrs):\n try:\n ignore_warning_regex = (\n r\"Unknown distribution option: \"\n r\"'(license_file|project_urls|python_requires)'\"\n )\n warnings.filterwarnings(\n 'ignore',\n message=ignore_warning_regex,\n category=UserWarning,\n module='distutils.dist',\n )\n return s(**attrs)\n finally:\n warnings.resetwarnings()\n\n return sw\n\n def parse_predicates(python_requires):\n import itertools\n import operator\n\n sorted_operators_map = tuple(\n sorted(\n {\n '>': operator.gt,\n '<': operator.lt,\n '>=': operator.ge,\n '<=': operator.le,\n '==': operator.eq,\n '!=': operator.ne,\n '': 
operator.eq,\n }.items(),\n key=lambda i: len(i[0]),\n reverse=True,\n )\n )\n\n def is_decimal(s):\n return type(u'')(s).isdecimal()\n\n conditions = map(str.strip, python_requires.split(','))\n for c in conditions:\n for op_sign, op_func in sorted_operators_map:\n if not c.startswith(op_sign):\n continue\n raw_ver = itertools.takewhile(\n is_decimal, c[len(op_sign) :].strip().split('.')\n )\n ver = tuple(map(int, raw_ver))\n yield op_func, ver\n break\n\n def validate_required_python_or_fail(python_requires=None):\n if python_requires is None:\n return\n\n python_version = sys.version_info\n preds = parse_predicates(python_requires)\n for op, v in preds:\n py_ver_slug = python_version[: max(len(v), 3)]\n condition_matches = op(py_ver_slug, v)\n if not condition_matches:\n raise RuntimeError(\n \"requires Python '{}' but the running Python is {}\".format(\n python_requires, '.'.join(map(str, python_version[:3]))\n )\n )\n\n def verify_required_python_runtime(s):\n @functools.wraps(s)\n def sw(**attrs):\n try:\n validate_required_python_or_fail(attrs.get('python_requires'))\n except RuntimeError as re:\n sys.exit('{} {!s}'.format(attrs['name'], re))\n return s(**attrs)\n\n return sw\n\n setuptools.setup = ignore_unknown_options(setuptools.setup)\n setuptools.setup = verify_required_python_runtime(setuptools.setup)\n\n try:\n from configparser import ConfigParser, NoSectionError\n except ImportError:\n from ConfigParser import ConfigParser, NoSectionError\n\n ConfigParser.read_file = ConfigParser.readfp\n\n def maybe_read_files(d):\n \"\"\"Read files if the string starts with `file:` marker.\"\"\"\n FILE_FUNC_MARKER = 'file:'\n\n d = d.strip()\n if not d.startswith(FILE_FUNC_MARKER):\n return d\n descs = []\n for fname in map(str.strip, str(d[len(FILE_FUNC_MARKER) :]).split(',')):\n with io.open(fname, encoding='utf-8') as f:\n descs.append(f.read())\n return ''.join(descs)\n\n def cfg_val_to_list(v):\n \"\"\"Turn config val to list and filter out empty lines.\"\"\"\n return list(filter(bool, map(str.strip, str(v).strip().splitlines())))\n\n def cfg_val_to_dict(v):\n \"\"\"Turn config val to dict and filter out empty lines.\"\"\"\n return dict(\n map(\n lambda l: list(map(str.strip, l.split('=', 1))),\n filter(bool, map(str.strip, str(v).strip().splitlines())),\n )\n )\n\n def cfg_val_to_primitive(v):\n \"\"\"Parse primitive config val to appropriate data type.\"\"\"\n return json.loads(v.strip().lower())\n\n def read_configuration(filepath):\n \"\"\"Read metadata and options from setup.cfg located at filepath.\"\"\"\n cfg = ConfigParser()\n with io.open(filepath, encoding='utf-8') as f:\n cfg.read_file(f)\n\n md = dict(cfg.items('metadata'))\n for list_key in 'classifiers', 'keywords', 'project_urls':\n try:\n md[list_key] = cfg_val_to_list(md[list_key])\n except KeyError:\n pass\n try:\n md['long_description'] = maybe_read_files(md['long_description'])\n except KeyError:\n pass\n opt = dict(cfg.items('options'))\n for list_key in 'include_package_data', 'use_scm_version', 'zip_safe':\n try:\n opt[list_key] = cfg_val_to_primitive(opt[list_key])\n except KeyError:\n pass\n for list_key in 'scripts', 'install_requires', 'setup_requires':\n try:\n opt[list_key] = cfg_val_to_list(opt[list_key])\n except KeyError:\n pass\n try:\n opt['package_dir'] = cfg_val_to_dict(opt['package_dir'])\n except KeyError:\n pass\n try:\n opt_package_data = dict(cfg.items('options.package_data'))\n if not opt_package_data.get('', '').strip():\n opt_package_data[''] = opt_package_data['*']\n del 
opt_package_data['*']\n except (KeyError, NoSectionError):\n opt_package_data = {}\n try:\n opt_extras_require = dict(cfg.items('options.extras_require'))\n opt['extras_require'] = {}\n for k, v in opt_extras_require.items():\n opt['extras_require'][k] = cfg_val_to_list(v)\n except NoSectionError:\n pass\n opt['package_data'] = {}\n for k, v in opt_package_data.items():\n opt['package_data'][k] = cfg_val_to_list(v)\n try:\n opt_exclude_package_data = dict(cfg.items('options.exclude_package_data'))\n if (\n not opt_exclude_package_data.get('', '').strip()\n and '*' in opt_exclude_package_data\n ):\n opt_exclude_package_data[''] = opt_exclude_package_data['*']\n del opt_exclude_package_data['*']\n except NoSectionError:\n pass\n else:\n opt['exclude_package_data'] = {}\n for k, v in opt_exclude_package_data.items():\n opt['exclude_package_data'][k] = cfg_val_to_list(v)\n cur_pkgs = opt.get('packages', '').strip()\n if '\\n' in cur_pkgs:\n opt['packages'] = cfg_val_to_list(opt['packages'])\n elif cur_pkgs.startswith('find:'):\n opt_packages_find = stringify_dict_contents(\n dict(cfg.items('options.packages.find'))\n )\n opt['packages'] = setuptools.find_packages(**opt_packages_find)\n return {'metadata': md, 'options': opt}\n\n\ndef cut_local_version_on_upload(version):\n \"\"\"Generate a PEP440 local version if uploading to PyPI.\"\"\"\n import os\n import setuptools_scm.version # only present during setup time\n\n IS_PYPI_UPLOAD = os.getenv('PYPI_UPLOAD') == 'true' # set in tox.ini\n return (\n ''\n if IS_PYPI_UPLOAD\n else setuptools_scm.version.get_local_node_and_date(version)\n )\n\n\nif HAS_DIST_INFO_CMD:\n\n class patched_dist_info(setuptools.command.dist_info.dist_info):\n def run(self):\n self.egg_base = str_if_nested_or_str(self.egg_base)\n return setuptools.command.dist_info.dist_info.run(self)\n\n\ndeclarative_setup_params = read_configuration('setup.cfg')\n\"\"\"Declarative metadata and options as read by setuptools.\"\"\"\n\n\nsetup_params = {}\n\"\"\"Explicit metadata for passing into setuptools.setup() call.\"\"\"\n\nsetup_params = dict(setup_params, **declarative_setup_params['metadata'])\nsetup_params = dict(setup_params, **declarative_setup_params['options'])\n\nif HAS_DIST_INFO_CMD:\n setup_params['cmdclass'] = {'dist_info': patched_dist_info}\n\nsetup_params['use_scm_version'] = {'local_scheme': cut_local_version_on_upload}\n\n# Patch incorrectly decoded package_dir option\n# ``egg_info`` demands native strings failing with unicode under Python 2\n# Ref https://github.com/pypa/setuptools/issues/1136\nsetup_params = stringify_dict_contents(setup_params)\n\n\n__name__ == '__main__' and setuptools.setup(**setup_params)\n", "path": "setup.py"}], "after_files": [{"content": "#! 
/usr/bin/env python\n# Copyright (c) 2019 Red Hat, Inc.\n# Copyright (c) 2015-2018 Cisco Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to\n# deal in the Software without restriction, including without limitation the\n# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n# sell copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\"\"\"Molecule distribution package setuptools installer.\"\"\"\n\nimport setuptools\n\nHAS_DIST_INFO_CMD = False\ntry:\n import setuptools.command.dist_info\n\n HAS_DIST_INFO_CMD = True\nexcept ImportError:\n \"\"\"Setuptools version is too old.\"\"\"\n\n\nALL_STRING_TYPES = tuple(map(type, ('', b'', u'')))\nMIN_NATIVE_SETUPTOOLS_VERSION = 34, 4\n\"\"\"Minimal setuptools having good read_configuration implementation.\"\"\"\n\n# Patch version can be a non integer value, like 'post20190705'\nRUNTIME_SETUPTOOLS_VERSION = tuple(map(int, setuptools.__version__.split('.')[:2]))\n\"\"\"Setuptools imported now.\"\"\"\n\nREAD_CONFIG_SHIM_NEEDED = RUNTIME_SETUPTOOLS_VERSION < MIN_NATIVE_SETUPTOOLS_VERSION\n\n\ndef str_if_nested_or_str(s):\n \"\"\"Turn input into a native string if possible.\"\"\"\n if isinstance(s, ALL_STRING_TYPES):\n return str(s)\n if isinstance(s, (list, tuple)):\n return type(s)(map(str_if_nested_or_str, s))\n if isinstance(s, (dict,)):\n return stringify_dict_contents(s)\n return s\n\n\ndef stringify_dict_contents(dct):\n \"\"\"Turn dict keys and values into native strings.\"\"\"\n return {str_if_nested_or_str(k): str_if_nested_or_str(v) for k, v in dct.items()}\n\n\nif not READ_CONFIG_SHIM_NEEDED:\n from setuptools.config import read_configuration, ConfigOptionsHandler\n import setuptools.config\n import setuptools.dist\n\n # Set default value for 'use_scm_version'\n setattr(setuptools.dist.Distribution, 'use_scm_version', False)\n\n # Attach bool parser to 'use_scm_version' option\n class ShimConfigOptionsHandler(ConfigOptionsHandler):\n \"\"\"Extension class for ConfigOptionsHandler.\"\"\"\n\n @property\n def parsers(self):\n \"\"\"Return an option mapping with default data type parsers.\"\"\"\n _orig_parsers = super(ShimConfigOptionsHandler, self).parsers\n return dict(use_scm_version=self._parse_bool, **_orig_parsers)\n\n def parse_section_packages__find(self, section_options):\n find_kwargs = super(\n ShimConfigOptionsHandler, self\n ).parse_section_packages__find(section_options)\n return stringify_dict_contents(find_kwargs)\n\n setuptools.config.ConfigOptionsHandler = ShimConfigOptionsHandler\nelse:\n \"\"\"This is a shim for setuptools<required.\"\"\"\n import functools\n import io\n import json\n import sys\n import warnings\n\n try:\n import setuptools.config\n\n def filter_out_unknown_section(i):\n def 
chi(self, *args, **kwargs):\n i(self, *args, **kwargs)\n self.sections = {\n s: v for s, v in self.sections.items() if s != 'packages.find'\n }\n\n return chi\n\n setuptools.config.ConfigHandler.__init__ = filter_out_unknown_section(\n setuptools.config.ConfigHandler.__init__\n )\n except ImportError:\n pass\n\n def ignore_unknown_options(s):\n @functools.wraps(s)\n def sw(**attrs):\n try:\n ignore_warning_regex = (\n r\"Unknown distribution option: \"\n r\"'(license_file|project_urls|python_requires)'\"\n )\n warnings.filterwarnings(\n 'ignore',\n message=ignore_warning_regex,\n category=UserWarning,\n module='distutils.dist',\n )\n return s(**attrs)\n finally:\n warnings.resetwarnings()\n\n return sw\n\n def parse_predicates(python_requires):\n import itertools\n import operator\n\n sorted_operators_map = tuple(\n sorted(\n {\n '>': operator.gt,\n '<': operator.lt,\n '>=': operator.ge,\n '<=': operator.le,\n '==': operator.eq,\n '!=': operator.ne,\n '': operator.eq,\n }.items(),\n key=lambda i: len(i[0]),\n reverse=True,\n )\n )\n\n def is_decimal(s):\n return type(u'')(s).isdecimal()\n\n conditions = map(str.strip, python_requires.split(','))\n for c in conditions:\n for op_sign, op_func in sorted_operators_map:\n if not c.startswith(op_sign):\n continue\n raw_ver = itertools.takewhile(\n is_decimal, c[len(op_sign) :].strip().split('.')\n )\n ver = tuple(map(int, raw_ver))\n yield op_func, ver\n break\n\n def validate_required_python_or_fail(python_requires=None):\n if python_requires is None:\n return\n\n python_version = sys.version_info\n preds = parse_predicates(python_requires)\n for op, v in preds:\n py_ver_slug = python_version[: max(len(v), 3)]\n condition_matches = op(py_ver_slug, v)\n if not condition_matches:\n raise RuntimeError(\n \"requires Python '{}' but the running Python is {}\".format(\n python_requires, '.'.join(map(str, python_version[:3]))\n )\n )\n\n def verify_required_python_runtime(s):\n @functools.wraps(s)\n def sw(**attrs):\n try:\n validate_required_python_or_fail(attrs.get('python_requires'))\n except RuntimeError as re:\n sys.exit('{} {!s}'.format(attrs['name'], re))\n return s(**attrs)\n\n return sw\n\n setuptools.setup = ignore_unknown_options(setuptools.setup)\n setuptools.setup = verify_required_python_runtime(setuptools.setup)\n\n try:\n from configparser import ConfigParser, NoSectionError\n except ImportError:\n from ConfigParser import ConfigParser, NoSectionError\n\n ConfigParser.read_file = ConfigParser.readfp\n\n def maybe_read_files(d):\n \"\"\"Read files if the string starts with `file:` marker.\"\"\"\n FILE_FUNC_MARKER = 'file:'\n\n d = d.strip()\n if not d.startswith(FILE_FUNC_MARKER):\n return d\n descs = []\n for fname in map(str.strip, str(d[len(FILE_FUNC_MARKER) :]).split(',')):\n with io.open(fname, encoding='utf-8') as f:\n descs.append(f.read())\n return ''.join(descs)\n\n def cfg_val_to_list(v):\n \"\"\"Turn config val to list and filter out empty lines.\"\"\"\n return list(filter(bool, map(str.strip, str(v).strip().splitlines())))\n\n def cfg_val_to_dict(v):\n \"\"\"Turn config val to dict and filter out empty lines.\"\"\"\n return dict(\n map(\n lambda l: list(map(str.strip, l.split('=', 1))),\n filter(bool, map(str.strip, str(v).strip().splitlines())),\n )\n )\n\n def cfg_val_to_primitive(v):\n \"\"\"Parse primitive config val to appropriate data type.\"\"\"\n return json.loads(v.strip().lower())\n\n def read_configuration(filepath):\n \"\"\"Read metadata and options from setup.cfg located at filepath.\"\"\"\n cfg = 
ConfigParser()\n with io.open(filepath, encoding='utf-8') as f:\n cfg.read_file(f)\n\n md = dict(cfg.items('metadata'))\n for list_key in 'classifiers', 'keywords', 'project_urls':\n try:\n md[list_key] = cfg_val_to_list(md[list_key])\n except KeyError:\n pass\n try:\n md['long_description'] = maybe_read_files(md['long_description'])\n except KeyError:\n pass\n opt = dict(cfg.items('options'))\n for list_key in 'include_package_data', 'use_scm_version', 'zip_safe':\n try:\n opt[list_key] = cfg_val_to_primitive(opt[list_key])\n except KeyError:\n pass\n for list_key in 'scripts', 'install_requires', 'setup_requires':\n try:\n opt[list_key] = cfg_val_to_list(opt[list_key])\n except KeyError:\n pass\n try:\n opt['package_dir'] = cfg_val_to_dict(opt['package_dir'])\n except KeyError:\n pass\n try:\n opt_package_data = dict(cfg.items('options.package_data'))\n if not opt_package_data.get('', '').strip():\n opt_package_data[''] = opt_package_data['*']\n del opt_package_data['*']\n except (KeyError, NoSectionError):\n opt_package_data = {}\n try:\n opt_extras_require = dict(cfg.items('options.extras_require'))\n opt['extras_require'] = {}\n for k, v in opt_extras_require.items():\n opt['extras_require'][k] = cfg_val_to_list(v)\n except NoSectionError:\n pass\n opt['package_data'] = {}\n for k, v in opt_package_data.items():\n opt['package_data'][k] = cfg_val_to_list(v)\n try:\n opt_exclude_package_data = dict(cfg.items('options.exclude_package_data'))\n if (\n not opt_exclude_package_data.get('', '').strip()\n and '*' in opt_exclude_package_data\n ):\n opt_exclude_package_data[''] = opt_exclude_package_data['*']\n del opt_exclude_package_data['*']\n except NoSectionError:\n pass\n else:\n opt['exclude_package_data'] = {}\n for k, v in opt_exclude_package_data.items():\n opt['exclude_package_data'][k] = cfg_val_to_list(v)\n cur_pkgs = opt.get('packages', '').strip()\n if '\\n' in cur_pkgs:\n opt['packages'] = cfg_val_to_list(opt['packages'])\n elif cur_pkgs.startswith('find:'):\n opt_packages_find = stringify_dict_contents(\n dict(cfg.items('options.packages.find'))\n )\n opt['packages'] = setuptools.find_packages(**opt_packages_find)\n return {'metadata': md, 'options': opt}\n\n\ndef cut_local_version_on_upload(version):\n \"\"\"Generate a PEP440 local version if uploading to PyPI.\"\"\"\n import os\n import setuptools_scm.version # only present during setup time\n\n IS_PYPI_UPLOAD = os.getenv('PYPI_UPLOAD') == 'true' # set in tox.ini\n return (\n ''\n if IS_PYPI_UPLOAD\n else setuptools_scm.version.get_local_node_and_date(version)\n )\n\n\nif HAS_DIST_INFO_CMD:\n\n class patched_dist_info(setuptools.command.dist_info.dist_info):\n def run(self):\n self.egg_base = str_if_nested_or_str(self.egg_base)\n return setuptools.command.dist_info.dist_info.run(self)\n\n\ndeclarative_setup_params = read_configuration('setup.cfg')\n\"\"\"Declarative metadata and options as read by setuptools.\"\"\"\n\n\nsetup_params = {}\n\"\"\"Explicit metadata for passing into setuptools.setup() call.\"\"\"\n\nsetup_params = dict(setup_params, **declarative_setup_params['metadata'])\nsetup_params = dict(setup_params, **declarative_setup_params['options'])\n\nif HAS_DIST_INFO_CMD:\n setup_params['cmdclass'] = {'dist_info': patched_dist_info}\n\nsetup_params['use_scm_version'] = {'local_scheme': cut_local_version_on_upload}\n\n# Patch incorrectly decoded package_dir option\n# ``egg_info`` demands native strings failing with unicode under Python 2\n# Ref https://github.com/pypa/setuptools/issues/1136\nsetup_params = 
stringify_dict_contents(setup_params)\n\n\n__name__ == '__main__' and setuptools.setup(**setup_params)\n", "path": "setup.py"}]} | 4,003 | 195 |
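The patch above boils down to comparing only the major and minor components of the setuptools version, because the third component may not be numeric. A minimal, self-contained sketch of that behavior, using an assumed version string rather than a live setuptools install:

```python
# Hypothetical version string; in the real setup.py it comes from setuptools.__version__.
version_string = "41.0.1.post20190705"

MIN_NATIVE_SETUPTOOLS_VERSION = (34, 4)

# Pre-patch parsing: every dot-separated piece must be an int, so 'post20190705' raises.
try:
    tuple(map(int, version_string.split(".")))
except ValueError as exc:
    print("old parsing fails:", exc)

# Post-patch parsing: only major.minor are compared, which is enough for the check.
runtime_version = tuple(map(int, version_string.split(".")[:2]))
print(runtime_version >= MIN_NATIVE_SETUPTOOLS_VERSION)  # True for 41.0.x
```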
gh_patches_debug_18681 | rasdani/github-patches | git_diff | pytorch__vision-5583 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Extra dot in 0.12 docs
Looks like something similar to the previous release happened (see https://github.com/pytorch/vision/issues/4754).
The newly pushed docs are https://pytorch.org/vision/0.12./ instead of https://pytorch.org/vision/0.12 (extra dot at the end), and the version tag shows a lot of extra info:

@mattip is this something you could help us with, like you did in https://github.com/pytorch/vision/pull/4755?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/source/conf.py`
Content:
```
1 #!/usr/bin/env python3
2 #
3 # PyTorch documentation build configuration file, created by
4 # sphinx-quickstart on Fri Dec 23 13:31:47 2016.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 # If extensions (or modules to document with autodoc) are in another directory,
16 # add these directories to sys.path here. If the directory is relative to the
17 # documentation root, use os.path.abspath to make it absolute, like shown here.
18 #
19 # import os
20 # import sys
21 # sys.path.insert(0, os.path.abspath('.'))
22
23 import pytorch_sphinx_theme
24 import torchvision
25
26
27 # -- General configuration ------------------------------------------------
28
29 # Required version of sphinx is set from docs/requirements.txt
30
31 # Add any Sphinx extension module names here, as strings. They can be
32 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
33 # ones.
34 extensions = [
35 "sphinx.ext.autodoc",
36 "sphinx.ext.autosummary",
37 "sphinx.ext.doctest",
38 "sphinx.ext.intersphinx",
39 "sphinx.ext.todo",
40 "sphinx.ext.mathjax",
41 "sphinx.ext.napoleon",
42 "sphinx.ext.viewcode",
43 "sphinx.ext.duration",
44 "sphinx_gallery.gen_gallery",
45 "sphinx_copybutton",
46 ]
47
48 sphinx_gallery_conf = {
49 "examples_dirs": "../../gallery/", # path to your example scripts
50 "gallery_dirs": "auto_examples", # path to where to save gallery generated output
51 "backreferences_dir": "gen_modules/backreferences",
52 "doc_module": ("torchvision",),
53 }
54
55 napoleon_use_ivar = True
56 napoleon_numpy_docstring = False
57 napoleon_google_docstring = True
58
59
60 # Add any paths that contain templates here, relative to this directory.
61 templates_path = ["_templates"]
62
63 # The suffix(es) of source filenames.
64 # You can specify multiple suffix as a list of string:
65 #
66 source_suffix = {
67 ".rst": "restructuredtext",
68 }
69
70 # The master toctree document.
71 master_doc = "index"
72
73 # General information about the project.
74 project = "Torchvision"
75 copyright = "2017-present, Torch Contributors"
76 author = "Torch Contributors"
77
78 # The version info for the project you're documenting, acts as replacement for
79 # |version| and |release|, also used in various other places throughout the
80 # built documents.
81 #
82 # The short X.Y version.
83 # TODO: change to [:2] at v1.0
84 version = "main (" + torchvision.__version__ + " )"
85 # The full version, including alpha/beta/rc tags.
86 # TODO: verify this works as expected
87 release = "main"
88
89 # The language for content autogenerated by Sphinx. Refer to documentation
90 # for a list of supported languages.
91 #
92 # This is also used if you do content translation via gettext catalogs.
93 # Usually you set "language" from the command line for these cases.
94 language = None
95
96 # List of patterns, relative to source directory, that match files and
97 # directories to ignore when looking for source files.
98 # This patterns also effect to html_static_path and html_extra_path
99 exclude_patterns = []
100
101 # The name of the Pygments (syntax highlighting) style to use.
102 pygments_style = "sphinx"
103
104 # If true, `todo` and `todoList` produce output, else they produce nothing.
105 todo_include_todos = True
106
107
108 # -- Options for HTML output ----------------------------------------------
109
110 # The theme to use for HTML and HTML Help pages. See the documentation for
111 # a list of builtin themes.
112 #
113 html_theme = "pytorch_sphinx_theme"
114 html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
115
116 # Theme options are theme-specific and customize the look and feel of a theme
117 # further. For a list of options available for each theme, see the
118 # documentation.
119 #
120 html_theme_options = {
121 "collapse_navigation": False,
122 "display_version": True,
123 "logo_only": True,
124 "pytorch_project": "docs",
125 "navigation_with_keys": True,
126 "analytics_id": "UA-117752657-2",
127 }
128
129 html_logo = "_static/img/pytorch-logo-dark.svg"
130
131 # Add any paths that contain custom static files (such as style sheets) here,
132 # relative to this directory. They are copied after the builtin static files,
133 # so a file named "default.css" will overwrite the builtin "default.css".
134 html_static_path = ["_static"]
135
136 # TODO: remove this once https://github.com/pytorch/pytorch_sphinx_theme/issues/125 is fixed
137 html_css_files = [
138 "css/custom_torchvision.css",
139 ]
140
141 # -- Options for HTMLHelp output ------------------------------------------
142
143 # Output file base name for HTML help builder.
144 htmlhelp_basename = "PyTorchdoc"
145
146
147 autosummary_generate = True
148
149
150 # -- Options for LaTeX output ---------------------------------------------
151 latex_elements = {
152 # The paper size ('letterpaper' or 'a4paper').
153 #
154 # 'papersize': 'letterpaper',
155 # The font size ('10pt', '11pt' or '12pt').
156 #
157 # 'pointsize': '10pt',
158 # Additional stuff for the LaTeX preamble.
159 #
160 # 'preamble': '',
161 # Latex figure (float) alignment
162 #
163 # 'figure_align': 'htbp',
164 }
165
166
167 # Grouping the document tree into LaTeX files. List of tuples
168 # (source start file, target name, title,
169 # author, documentclass [howto, manual, or own class]).
170 latex_documents = [
171 (master_doc, "pytorch.tex", "torchvision Documentation", "Torch Contributors", "manual"),
172 ]
173
174
175 # -- Options for manual page output ---------------------------------------
176
177 # One entry per manual page. List of tuples
178 # (source start file, name, description, authors, manual section).
179 man_pages = [(master_doc, "torchvision", "torchvision Documentation", [author], 1)]
180
181
182 # -- Options for Texinfo output -------------------------------------------
183
184 # Grouping the document tree into Texinfo files. List of tuples
185 # (source start file, target name, title, author,
186 # dir menu entry, description, category)
187 texinfo_documents = [
188 (
189 master_doc,
190 "torchvision",
191 "torchvision Documentation",
192 author,
193 "torchvision",
194 "One line description of project.",
195 "Miscellaneous",
196 ),
197 ]
198
199
200 # Example configuration for intersphinx: refer to the Python standard library.
201 intersphinx_mapping = {
202 "python": ("https://docs.python.org/3/", None),
203 "torch": ("https://pytorch.org/docs/stable/", None),
204 "numpy": ("https://numpy.org/doc/stable/", None),
205 "PIL": ("https://pillow.readthedocs.io/en/stable/", None),
206 "matplotlib": ("https://matplotlib.org/stable/", None),
207 }
208
209 # -- A patch that prevents Sphinx from cross-referencing ivar tags -------
210 # See http://stackoverflow.com/a/41184353/3343043
211
212 from docutils import nodes
213 from sphinx import addnodes
214 from sphinx.util.docfields import TypedField
215
216
217 def patched_make_field(self, types, domain, items, **kw):
218 # `kw` catches `env=None` needed for newer sphinx while maintaining
219 # backwards compatibility when passed along further down!
220
221 # type: (list, unicode, tuple) -> nodes.field # noqa: F821
222 def handle_item(fieldarg, content):
223 par = nodes.paragraph()
224 par += addnodes.literal_strong("", fieldarg) # Patch: this line added
225 # par.extend(self.make_xrefs(self.rolename, domain, fieldarg,
226 # addnodes.literal_strong))
227 if fieldarg in types:
228 par += nodes.Text(" (")
229 # NOTE: using .pop() here to prevent a single type node to be
230 # inserted twice into the doctree, which leads to
231 # inconsistencies later when references are resolved
232 fieldtype = types.pop(fieldarg)
233 if len(fieldtype) == 1 and isinstance(fieldtype[0], nodes.Text):
234 typename = "".join(n.astext() for n in fieldtype)
235 typename = typename.replace("int", "python:int")
236 typename = typename.replace("long", "python:long")
237 typename = typename.replace("float", "python:float")
238 typename = typename.replace("type", "python:type")
239 par.extend(self.make_xrefs(self.typerolename, domain, typename, addnodes.literal_emphasis, **kw))
240 else:
241 par += fieldtype
242 par += nodes.Text(")")
243 par += nodes.Text(" -- ")
244 par += content
245 return par
246
247 fieldname = nodes.field_name("", self.label)
248 if len(items) == 1 and self.can_collapse:
249 fieldarg, content = items[0]
250 bodynode = handle_item(fieldarg, content)
251 else:
252 bodynode = self.list_type()
253 for fieldarg, content in items:
254 bodynode += nodes.list_item("", handle_item(fieldarg, content))
255 fieldbody = nodes.field_body("", bodynode)
256 return nodes.field("", fieldname, fieldbody)
257
258
259 TypedField.make_field = patched_make_field
260
261
262 def inject_minigalleries(app, what, name, obj, options, lines):
263 """Inject a minigallery into a docstring.
264
265 This avoids having to manually write the .. minigallery directive for every item we want a minigallery for,
266 as it would be easy to miss some.
267
268 This callback is called after the .. auto directives (like ..autoclass) have been processed,
269 and modifies the lines parameter inplace to add the .. minigallery that will show which examples
270 are using which object.
271
272 It's a bit hacky, but not *that* hacky when you consider that the recommended way is to do pretty much the same,
273 but instead with templates using autosummary (which we don't want to use):
274 (https://sphinx-gallery.github.io/stable/configuration.html#auto-documenting-your-api-with-links-to-examples)
275
276 For docs on autodoc-process-docstring, see the autodoc docs:
277 https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html
278 """
279
280 if what in ("class", "function"):
281 lines.append(f".. minigallery:: {name}")
282 lines.append(f" :add-heading: Examples using ``{name.split('.')[-1]}``:")
283 # avoid heading entirely to avoid warning. As a bonud it actually renders better
284 lines.append(" :heading-level: 9")
285 lines.append("\n")
286
287
288 def setup(app):
289 app.connect("autodoc-process-docstring", inject_minigalleries)
290
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -20,6 +20,8 @@
# import sys
# sys.path.insert(0, os.path.abspath('.'))
+import os
+
import pytorch_sphinx_theme
import torchvision
@@ -80,11 +82,16 @@
# built documents.
#
# The short X.Y version.
-# TODO: change to [:2] at v1.0
version = "main (" + torchvision.__version__ + " )"
# The full version, including alpha/beta/rc tags.
-# TODO: verify this works as expected
release = "main"
+VERSION = os.environ.get("VERSION", None)
+if VERSION:
+ # Turn 1.11.0aHASH into 1.11 (major.minor only)
+ version = ".".join(version.split(".")[:2])
+ html_title = " ".join((project, version, "documentation"))
+ release = version
+
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
| {"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -20,6 +20,8 @@\n # import sys\n # sys.path.insert(0, os.path.abspath('.'))\n \n+import os\n+\n import pytorch_sphinx_theme\n import torchvision\n \n@@ -80,11 +82,16 @@\n # built documents.\n #\n # The short X.Y version.\n-# TODO: change to [:2] at v1.0\n version = \"main (\" + torchvision.__version__ + \" )\"\n # The full version, including alpha/beta/rc tags.\n-# TODO: verify this works as expected\n release = \"main\"\n+VERSION = os.environ.get(\"VERSION\", None)\n+if VERSION:\n+ # Turn 1.11.0aHASH into 1.11 (major.minor only)\n+ version = \".\".join(version.split(\".\")[:2])\n+ html_title = \" \".join((project, version, \"documentation\"))\n+ release = version\n+\n \n # The language for content autogenerated by Sphinx. Refer to documentation\n # for a list of supported languages.\n", "issue": "Extra dot in 0.12 docs\nLooks like something similar to the previous release happened https://github.com/pytorch/vision/issues/4754\r\n\r\nThe newly pushed docs are https://pytorch.org/vision/0.12./ instead of https://pytorch.org/vision/0.12 (extra dot at the end), and the version tag shows a lot of extra info:\r\n\r\n\r\n\r\n\r\n@mattip is this something you could help us with, like you did in https://github.com/pytorch/vision/pull/4755 ?\n", "before_files": [{"content": "#!/usr/bin/env python3\n#\n# PyTorch documentation build configuration file, created by\n# sphinx-quickstart on Fri Dec 23 13:31:47 2016.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\nimport pytorch_sphinx_theme\nimport torchvision\n\n\n# -- General configuration ------------------------------------------------\n\n# Required version of sphinx is set from docs/requirements.txt\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.duration\",\n \"sphinx_gallery.gen_gallery\",\n \"sphinx_copybutton\",\n]\n\nsphinx_gallery_conf = {\n \"examples_dirs\": \"../../gallery/\", # path to your example scripts\n \"gallery_dirs\": \"auto_examples\", # path to where to save gallery generated output\n \"backreferences_dir\": \"gen_modules/backreferences\",\n \"doc_module\": (\"torchvision\",),\n}\n\nnapoleon_use_ivar = True\nnapoleon_numpy_docstring = False\nnapoleon_google_docstring = True\n\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\nsource_suffix = {\n \".rst\": \"restructuredtext\",\n}\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"Torchvision\"\ncopyright = \"2017-present, Torch Contributors\"\nauthor = \"Torch Contributors\"\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\n# TODO: change to [:2] at v1.0\nversion = \"main (\" + torchvision.__version__ + \" )\"\n# The full version, including alpha/beta/rc tags.\n# TODO: verify this works as expected\nrelease = \"main\"\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"pytorch_sphinx_theme\"\nhtml_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n \"collapse_navigation\": False,\n \"display_version\": True,\n \"logo_only\": True,\n \"pytorch_project\": \"docs\",\n \"navigation_with_keys\": True,\n \"analytics_id\": \"UA-117752657-2\",\n}\n\nhtml_logo = \"_static/img/pytorch-logo-dark.svg\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# TODO: remove this once https://github.com/pytorch/pytorch_sphinx_theme/issues/125 is fixed\nhtml_css_files = [\n \"css/custom_torchvision.css\",\n]\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"PyTorchdoc\"\n\n\nautosummary_generate = True\n\n\n# -- Options for LaTeX output ---------------------------------------------\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, \"pytorch.tex\", \"torchvision Documentation\", \"Torch Contributors\", \"manual\"),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"torchvision\", \"torchvision Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"torchvision\",\n \"torchvision Documentation\",\n author,\n \"torchvision\",\n \"One line description of project.\",\n \"Miscellaneous\",\n ),\n]\n\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3/\", None),\n \"torch\": (\"https://pytorch.org/docs/stable/\", None),\n \"numpy\": (\"https://numpy.org/doc/stable/\", None),\n \"PIL\": (\"https://pillow.readthedocs.io/en/stable/\", None),\n \"matplotlib\": (\"https://matplotlib.org/stable/\", None),\n}\n\n# -- A patch that prevents Sphinx from cross-referencing ivar tags -------\n# See http://stackoverflow.com/a/41184353/3343043\n\nfrom docutils import nodes\nfrom sphinx import addnodes\nfrom sphinx.util.docfields import TypedField\n\n\ndef patched_make_field(self, types, domain, items, **kw):\n # `kw` catches `env=None` needed for newer sphinx while maintaining\n # backwards compatibility when passed along further down!\n\n # type: (list, unicode, tuple) -> nodes.field # noqa: F821\n def handle_item(fieldarg, content):\n par = nodes.paragraph()\n par += addnodes.literal_strong(\"\", fieldarg) # Patch: this line added\n # par.extend(self.make_xrefs(self.rolename, domain, fieldarg,\n # addnodes.literal_strong))\n if fieldarg in types:\n par += nodes.Text(\" (\")\n # NOTE: using .pop() here to prevent a single type node to be\n # inserted twice into the doctree, which leads to\n # inconsistencies later when references are resolved\n fieldtype = types.pop(fieldarg)\n if len(fieldtype) == 1 and isinstance(fieldtype[0], nodes.Text):\n typename = \"\".join(n.astext() for n in fieldtype)\n typename = typename.replace(\"int\", \"python:int\")\n typename = typename.replace(\"long\", \"python:long\")\n typename = 
typename.replace(\"float\", \"python:float\")\n typename = typename.replace(\"type\", \"python:type\")\n par.extend(self.make_xrefs(self.typerolename, domain, typename, addnodes.literal_emphasis, **kw))\n else:\n par += fieldtype\n par += nodes.Text(\")\")\n par += nodes.Text(\" -- \")\n par += content\n return par\n\n fieldname = nodes.field_name(\"\", self.label)\n if len(items) == 1 and self.can_collapse:\n fieldarg, content = items[0]\n bodynode = handle_item(fieldarg, content)\n else:\n bodynode = self.list_type()\n for fieldarg, content in items:\n bodynode += nodes.list_item(\"\", handle_item(fieldarg, content))\n fieldbody = nodes.field_body(\"\", bodynode)\n return nodes.field(\"\", fieldname, fieldbody)\n\n\nTypedField.make_field = patched_make_field\n\n\ndef inject_minigalleries(app, what, name, obj, options, lines):\n \"\"\"Inject a minigallery into a docstring.\n\n This avoids having to manually write the .. minigallery directive for every item we want a minigallery for,\n as it would be easy to miss some.\n\n This callback is called after the .. auto directives (like ..autoclass) have been processed,\n and modifies the lines parameter inplace to add the .. minigallery that will show which examples\n are using which object.\n\n It's a bit hacky, but not *that* hacky when you consider that the recommended way is to do pretty much the same,\n but instead with templates using autosummary (which we don't want to use):\n (https://sphinx-gallery.github.io/stable/configuration.html#auto-documenting-your-api-with-links-to-examples)\n\n For docs on autodoc-process-docstring, see the autodoc docs:\n https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html\n \"\"\"\n\n if what in (\"class\", \"function\"):\n lines.append(f\".. minigallery:: {name}\")\n lines.append(f\" :add-heading: Examples using ``{name.split('.')[-1]}``:\")\n # avoid heading entirely to avoid warning. As a bonud it actually renders better\n lines.append(\" :heading-level: 9\")\n lines.append(\"\\n\")\n\n\ndef setup(app):\n app.connect(\"autodoc-process-docstring\", inject_minigalleries)\n", "path": "docs/source/conf.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n#\n# PyTorch documentation build configuration file, created by\n# sphinx-quickstart on Fri Dec 23 13:31:47 2016.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\nimport os\n\nimport pytorch_sphinx_theme\nimport torchvision\n\n\n# -- General configuration ------------------------------------------------\n\n# Required version of sphinx is set from docs/requirements.txt\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.duration\",\n \"sphinx_gallery.gen_gallery\",\n \"sphinx_copybutton\",\n]\n\nsphinx_gallery_conf = {\n \"examples_dirs\": \"../../gallery/\", # path to your example scripts\n \"gallery_dirs\": \"auto_examples\", # path to where to save gallery generated output\n \"backreferences_dir\": \"gen_modules/backreferences\",\n \"doc_module\": (\"torchvision\",),\n}\n\nnapoleon_use_ivar = True\nnapoleon_numpy_docstring = False\nnapoleon_google_docstring = True\n\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\nsource_suffix = {\n \".rst\": \"restructuredtext\",\n}\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"Torchvision\"\ncopyright = \"2017-present, Torch Contributors\"\nauthor = \"Torch Contributors\"\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = \"main (\" + torchvision.__version__ + \" )\"\n# The full version, including alpha/beta/rc tags.\nrelease = \"main\"\nVERSION = os.environ.get(\"VERSION\", None)\nif VERSION:\n # Turn 1.11.0aHASH into 1.11 (major.minor only)\n version = \".\".join(version.split(\".\")[:2])\n html_title = \" \".join((project, version, \"documentation\"))\n release = version\n\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"pytorch_sphinx_theme\"\nhtml_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n \"collapse_navigation\": False,\n \"display_version\": True,\n \"logo_only\": True,\n \"pytorch_project\": \"docs\",\n \"navigation_with_keys\": True,\n \"analytics_id\": \"UA-117752657-2\",\n}\n\nhtml_logo = \"_static/img/pytorch-logo-dark.svg\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# TODO: remove this once https://github.com/pytorch/pytorch_sphinx_theme/issues/125 is fixed\nhtml_css_files = [\n \"css/custom_torchvision.css\",\n]\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"PyTorchdoc\"\n\n\nautosummary_generate = True\n\n\n# -- Options for LaTeX output ---------------------------------------------\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, \"pytorch.tex\", \"torchvision Documentation\", \"Torch Contributors\", \"manual\"),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"torchvision\", \"torchvision Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"torchvision\",\n \"torchvision Documentation\",\n author,\n \"torchvision\",\n \"One line description of project.\",\n \"Miscellaneous\",\n ),\n]\n\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3/\", None),\n \"torch\": (\"https://pytorch.org/docs/stable/\", None),\n \"numpy\": (\"https://numpy.org/doc/stable/\", None),\n \"PIL\": (\"https://pillow.readthedocs.io/en/stable/\", None),\n \"matplotlib\": (\"https://matplotlib.org/stable/\", None),\n}\n\n# -- A patch that prevents Sphinx from cross-referencing ivar tags -------\n# See http://stackoverflow.com/a/41184353/3343043\n\nfrom docutils import nodes\nfrom sphinx import addnodes\nfrom sphinx.util.docfields import TypedField\n\n\ndef patched_make_field(self, types, domain, items, **kw):\n # `kw` catches `env=None` needed for newer sphinx while maintaining\n # backwards compatibility when passed along further down!\n\n # type: (list, unicode, tuple) -> nodes.field # noqa: F821\n def handle_item(fieldarg, content):\n par = nodes.paragraph()\n par += addnodes.literal_strong(\"\", fieldarg) # Patch: this line added\n # par.extend(self.make_xrefs(self.rolename, domain, fieldarg,\n # addnodes.literal_strong))\n if fieldarg in types:\n par += nodes.Text(\" (\")\n # NOTE: using .pop() here to prevent a single type node to be\n # inserted twice into the doctree, which leads to\n # inconsistencies later when references are resolved\n fieldtype = types.pop(fieldarg)\n if len(fieldtype) == 1 and isinstance(fieldtype[0], nodes.Text):\n typename = \"\".join(n.astext() for n in fieldtype)\n typename = typename.replace(\"int\", \"python:int\")\n typename = typename.replace(\"long\", \"python:long\")\n typename = 
typename.replace(\"float\", \"python:float\")\n typename = typename.replace(\"type\", \"python:type\")\n par.extend(self.make_xrefs(self.typerolename, domain, typename, addnodes.literal_emphasis, **kw))\n else:\n par += fieldtype\n par += nodes.Text(\")\")\n par += nodes.Text(\" -- \")\n par += content\n return par\n\n fieldname = nodes.field_name(\"\", self.label)\n if len(items) == 1 and self.can_collapse:\n fieldarg, content = items[0]\n bodynode = handle_item(fieldarg, content)\n else:\n bodynode = self.list_type()\n for fieldarg, content in items:\n bodynode += nodes.list_item(\"\", handle_item(fieldarg, content))\n fieldbody = nodes.field_body(\"\", bodynode)\n return nodes.field(\"\", fieldname, fieldbody)\n\n\nTypedField.make_field = patched_make_field\n\n\ndef inject_minigalleries(app, what, name, obj, options, lines):\n \"\"\"Inject a minigallery into a docstring.\n\n This avoids having to manually write the .. minigallery directive for every item we want a minigallery for,\n as it would be easy to miss some.\n\n This callback is called after the .. auto directives (like ..autoclass) have been processed,\n and modifies the lines parameter inplace to add the .. minigallery that will show which examples\n are using which object.\n\n It's a bit hacky, but not *that* hacky when you consider that the recommended way is to do pretty much the same,\n but instead with templates using autosummary (which we don't want to use):\n (https://sphinx-gallery.github.io/stable/configuration.html#auto-documenting-your-api-with-links-to-examples)\n\n For docs on autodoc-process-docstring, see the autodoc docs:\n https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html\n \"\"\"\n\n if what in (\"class\", \"function\"):\n lines.append(f\".. minigallery:: {name}\")\n lines.append(f\" :add-heading: Examples using ``{name.split('.')[-1]}``:\")\n # avoid heading entirely to avoid warning. As a bonud it actually renders better\n lines.append(\" :heading-level: 9\")\n lines.append(\"\\n\")\n\n\ndef setup(app):\n app.connect(\"autodoc-process-docstring\", inject_minigalleries)\n", "path": "docs/source/conf.py"}]} | 3,614 | 252 |
gh_patches_debug_35582 | rasdani/github-patches | git_diff | pyca__cryptography-2840 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Raise an error on openssl 0.9.8
With an env var (probably `CRYPTOGRAPHY_ALLOW_OPENSSL_098`) to allow disabling it for this one release.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cryptography/hazmat/bindings/openssl/binding.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import collections
8 import os
9 import threading
10 import types
11 import warnings
12
13 from cryptography import utils
14 from cryptography.exceptions import InternalError
15 from cryptography.hazmat.bindings._openssl import ffi, lib
16 from cryptography.hazmat.bindings.openssl._conditional import CONDITIONAL_NAMES
17
18 _OpenSSLError = collections.namedtuple("_OpenSSLError",
19 ["code", "lib", "func", "reason"])
20 _OpenSSLErrorWithText = collections.namedtuple(
21 "_OpenSSLErrorWithText", ["code", "lib", "func", "reason", "reason_text"]
22 )
23
24
25 def _consume_errors(lib):
26 errors = []
27 while True:
28 code = lib.ERR_get_error()
29 if code == 0:
30 break
31
32 err_lib = lib.ERR_GET_LIB(code)
33 err_func = lib.ERR_GET_FUNC(code)
34 err_reason = lib.ERR_GET_REASON(code)
35
36 errors.append(_OpenSSLError(code, err_lib, err_func, err_reason))
37
38 return errors
39
40
41 def _openssl_assert(lib, ok):
42 if not ok:
43 errors = _consume_errors(lib)
44 errors_with_text = []
45 for err in errors:
46 err_text_reason = ffi.string(
47 lib.ERR_error_string(err.code, ffi.NULL)
48 )
49 errors_with_text.append(
50 _OpenSSLErrorWithText(
51 err.code, err.lib, err.func, err.reason, err_text_reason
52 )
53 )
54
55 raise InternalError(
56 "Unknown OpenSSL error. This error is commonly encountered when "
57 "another library is not cleaning up the OpenSSL error stack. If "
58 "you are using cryptography with another library that uses "
59 "OpenSSL try disabling it before reporting a bug. Otherwise "
60 "please file an issue at https://github.com/pyca/cryptography/"
61 "issues with information on how to reproduce "
62 "this. ({0!r})".format(errors_with_text),
63 errors_with_text
64 )
65
66
67 def ffi_callback(signature, name, **kwargs):
68 """Callback dispatcher
69
70 The ffi_callback() dispatcher keeps callbacks compatible between dynamic
71 and static callbacks.
72 """
73 def wrapper(func):
74 if lib.Cryptography_STATIC_CALLBACKS:
75 # def_extern() returns a decorator that sets the internal
76 # function pointer and returns the original function unmodified.
77 ffi.def_extern(name=name, **kwargs)(func)
78 callback = getattr(lib, name)
79 else:
80 # callback() wraps the function in a cdata function.
81 callback = ffi.callback(signature, **kwargs)(func)
82 return callback
83 return wrapper
84
85
86 @ffi_callback("int (*)(unsigned char *, int)",
87 name="Cryptography_rand_bytes",
88 error=-1)
89 def _osrandom_rand_bytes(buf, size):
90 signed = ffi.cast("char *", buf)
91 result = os.urandom(size)
92 signed[0:size] = result
93 return 1
94
95
96 @ffi_callback("int (*)(void)", name="Cryptography_rand_status")
97 def _osrandom_rand_status():
98 return 1
99
100
101 def build_conditional_library(lib, conditional_names):
102 conditional_lib = types.ModuleType("lib")
103 excluded_names = set()
104 for condition, names in conditional_names.items():
105 if not getattr(lib, condition):
106 excluded_names |= set(names)
107
108 for attr in dir(lib):
109 if attr not in excluded_names:
110 setattr(conditional_lib, attr, getattr(lib, attr))
111
112 return conditional_lib
113
114
115 class Binding(object):
116 """
117 OpenSSL API wrapper.
118 """
119 lib = None
120 ffi = ffi
121 _lib_loaded = False
122 _locks = None
123 _lock_cb_handle = None
124 _init_lock = threading.Lock()
125 _lock_init_lock = threading.Lock()
126
127 _osrandom_engine_id = ffi.new("const char[]", b"osrandom")
128 _osrandom_engine_name = ffi.new("const char[]", b"osrandom_engine")
129 _osrandom_method = ffi.new(
130 "RAND_METHOD *",
131 dict(bytes=_osrandom_rand_bytes,
132 pseudorand=_osrandom_rand_bytes,
133 status=_osrandom_rand_status)
134 )
135
136 def __init__(self):
137 self._ensure_ffi_initialized()
138
139 @classmethod
140 def _register_osrandom_engine(cls):
141 _openssl_assert(cls.lib, cls.lib.ERR_peek_error() == 0)
142
143 engine = cls.lib.ENGINE_new()
144 _openssl_assert(cls.lib, engine != cls.ffi.NULL)
145 try:
146 result = cls.lib.ENGINE_set_id(engine, cls._osrandom_engine_id)
147 _openssl_assert(cls.lib, result == 1)
148 result = cls.lib.ENGINE_set_name(engine, cls._osrandom_engine_name)
149 _openssl_assert(cls.lib, result == 1)
150 result = cls.lib.ENGINE_set_RAND(engine, cls._osrandom_method)
151 _openssl_assert(cls.lib, result == 1)
152 result = cls.lib.ENGINE_add(engine)
153 if result != 1:
154 errors = _consume_errors(cls.lib)
155 _openssl_assert(
156 cls.lib,
157 errors[0].reason == cls.lib.ENGINE_R_CONFLICTING_ENGINE_ID
158 )
159
160 finally:
161 result = cls.lib.ENGINE_free(engine)
162 _openssl_assert(cls.lib, result == 1)
163
164 @classmethod
165 def _ensure_ffi_initialized(cls):
166 with cls._init_lock:
167 if not cls._lib_loaded:
168 cls.lib = build_conditional_library(lib, CONDITIONAL_NAMES)
169 cls._lib_loaded = True
170 # initialize the SSL library
171 cls.lib.SSL_library_init()
172 # adds all ciphers/digests for EVP
173 cls.lib.OpenSSL_add_all_algorithms()
174 # loads error strings for libcrypto and libssl functions
175 cls.lib.SSL_load_error_strings()
176 cls._register_osrandom_engine()
177
178 @classmethod
179 def init_static_locks(cls):
180 with cls._lock_init_lock:
181 cls._ensure_ffi_initialized()
182
183 if not cls._lock_cb_handle:
184 wrapper = ffi_callback(
185 "void(int, int, const char *, int)",
186 name="Cryptography_locking_cb",
187 )
188 cls._lock_cb_handle = wrapper(cls._lock_cb)
189
190 # Use Python's implementation if available, importing _ssl triggers
191 # the setup for this.
192 __import__("_ssl")
193
194 if cls.lib.CRYPTO_get_locking_callback() != cls.ffi.NULL:
195 return
196
197 # If nothing else has setup a locking callback already, we set up
198 # our own
199 num_locks = cls.lib.CRYPTO_num_locks()
200 cls._locks = [threading.Lock() for n in range(num_locks)]
201
202 cls.lib.CRYPTO_set_locking_callback(cls._lock_cb_handle)
203
204 @classmethod
205 def _lock_cb(cls, mode, n, file, line):
206 lock = cls._locks[n]
207
208 if mode & cls.lib.CRYPTO_LOCK:
209 lock.acquire()
210 elif mode & cls.lib.CRYPTO_UNLOCK:
211 lock.release()
212 else:
213 raise RuntimeError(
214 "Unknown lock mode {0}: lock={1}, file={2}, line={3}.".format(
215 mode, n, file, line
216 )
217 )
218
219
220 # OpenSSL is not thread safe until the locks are initialized. We call this
221 # method in module scope so that it executes with the import lock. On
222 # Pythons < 3.4 this import lock is a global lock, which can prevent a race
223 # condition registering the OpenSSL locks. On Python 3.4+ the import lock
224 # is per module so this approach will not work.
225 Binding.init_static_locks()
226
227 if Binding.lib.SSLeay() < 0x10000000:
228 warnings.warn(
229 "OpenSSL version 0.9.8 is no longer supported by the OpenSSL project, "
230 "please upgrade. The next version of cryptography will drop support "
231 "for it.",
232 utils.DeprecatedIn12
233 )
234 elif Binding.lib.SSLeay() < 0x10001000:
235 warnings.warn(
236 "OpenSSL versions less than 1.0.1 are no longer supported by the "
237 "OpenSSL project, please upgrade. A future version of cryptography "
238 "will drop support for these versions.",
239 DeprecationWarning
240 )
241
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cryptography/hazmat/bindings/openssl/binding.py b/src/cryptography/hazmat/bindings/openssl/binding.py
--- a/src/cryptography/hazmat/bindings/openssl/binding.py
+++ b/src/cryptography/hazmat/bindings/openssl/binding.py
@@ -10,7 +10,6 @@
import types
import warnings
-from cryptography import utils
from cryptography.exceptions import InternalError
from cryptography.hazmat.bindings._openssl import ffi, lib
from cryptography.hazmat.bindings.openssl._conditional import CONDITIONAL_NAMES
@@ -217,6 +216,30 @@
)
+def _verify_openssl_version(version):
+ if version < 0x10000000:
+ if os.environ.get("CRYPTOGRAPHY_ALLOW_OPENSSL_098"):
+ warnings.warn(
+ "OpenSSL version 0.9.8 is no longer supported by the OpenSSL "
+ "project, please upgrade. The next version of cryptography "
+ "will completely remove support for it.",
+ DeprecationWarning
+ )
+ else:
+ raise RuntimeError(
+ "You are linking against OpenSSL 0.9.8, which is no longer "
+ "support by the OpenSSL project. You need to upgrade to a "
+ "newer version of OpenSSL."
+ )
+ elif version < 0x10001000:
+ warnings.warn(
+ "OpenSSL versions less than 1.0.1 are no longer supported by the "
+ "OpenSSL project, please upgrade. A future version of "
+ "cryptography will drop support for these versions of OpenSSL.",
+ DeprecationWarning
+ )
+
+
# OpenSSL is not thread safe until the locks are initialized. We call this
# method in module scope so that it executes with the import lock. On
# Pythons < 3.4 this import lock is a global lock, which can prevent a race
@@ -224,17 +247,4 @@
# is per module so this approach will not work.
Binding.init_static_locks()
-if Binding.lib.SSLeay() < 0x10000000:
- warnings.warn(
- "OpenSSL version 0.9.8 is no longer supported by the OpenSSL project, "
- "please upgrade. The next version of cryptography will drop support "
- "for it.",
- utils.DeprecatedIn12
- )
-elif Binding.lib.SSLeay() < 0x10001000:
- warnings.warn(
- "OpenSSL versions less than 1.0.1 are no longer supported by the "
- "OpenSSL project, please upgrade. A future version of cryptography "
- "will drop support for these versions.",
- DeprecationWarning
- )
+_verify_openssl_version(Binding.lib.SSLeay())
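The diff above only hard-fails on 0.9.8 when `CRYPTOGRAPHY_ALLOW_OPENSSL_098` is unset. A minimal standalone sketch of that gating logic, assuming OpenSSL's usual hex version encoding (the helper name and the sample version constant below are illustrative, not taken from the library):
```
import os
import warnings

def _check_openssl_version(version, allow_098):
    # 0x10000000 is OpenSSL 1.0.0 and 0x10001000 is 1.0.1 in OpenSSL's hex scheme.
    if version < 0x10000000:
        if allow_098:
            warnings.warn("OpenSSL 0.9.8 support is deprecated", DeprecationWarning)
        else:
            raise RuntimeError("OpenSSL 0.9.8 is no longer supported")
    elif version < 0x10001000:
        warnings.warn("OpenSSL older than 1.0.1 is deprecated", DeprecationWarning)

allow = bool(os.environ.get("CRYPTOGRAPHY_ALLOW_OPENSSL_098"))
try:
    _check_openssl_version(0x009080ff, allow)  # an illustrative 0.9.8-series version number
except RuntimeError as exc:
    print("refusing to run:", exc)
```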
| {"golden_diff": "diff --git a/src/cryptography/hazmat/bindings/openssl/binding.py b/src/cryptography/hazmat/bindings/openssl/binding.py\n--- a/src/cryptography/hazmat/bindings/openssl/binding.py\n+++ b/src/cryptography/hazmat/bindings/openssl/binding.py\n@@ -10,7 +10,6 @@\n import types\n import warnings\n \n-from cryptography import utils\n from cryptography.exceptions import InternalError\n from cryptography.hazmat.bindings._openssl import ffi, lib\n from cryptography.hazmat.bindings.openssl._conditional import CONDITIONAL_NAMES\n@@ -217,6 +216,30 @@\n )\n \n \n+def _verify_openssl_version(version):\n+ if version < 0x10000000:\n+ if os.environ.get(\"CRYPTOGRAPHY_ALLOW_OPENSSL_098\"):\n+ warnings.warn(\n+ \"OpenSSL version 0.9.8 is no longer supported by the OpenSSL \"\n+ \"project, please upgrade. The next version of cryptography \"\n+ \"will completely remove support for it.\",\n+ DeprecationWarning\n+ )\n+ else:\n+ raise RuntimeError(\n+ \"You are linking against OpenSSL 0.9.8, which is no longer \"\n+ \"support by the OpenSSL project. You need to upgrade to a \"\n+ \"newer version of OpenSSL.\"\n+ )\n+ elif version < 0x10001000:\n+ warnings.warn(\n+ \"OpenSSL versions less than 1.0.1 are no longer supported by the \"\n+ \"OpenSSL project, please upgrade. A future version of \"\n+ \"cryptography will drop support for these versions of OpenSSL.\",\n+ DeprecationWarning\n+ )\n+\n+\n # OpenSSL is not thread safe until the locks are initialized. We call this\n # method in module scope so that it executes with the import lock. On\n # Pythons < 3.4 this import lock is a global lock, which can prevent a race\n@@ -224,17 +247,4 @@\n # is per module so this approach will not work.\n Binding.init_static_locks()\n \n-if Binding.lib.SSLeay() < 0x10000000:\n- warnings.warn(\n- \"OpenSSL version 0.9.8 is no longer supported by the OpenSSL project, \"\n- \"please upgrade. The next version of cryptography will drop support \"\n- \"for it.\",\n- utils.DeprecatedIn12\n- )\n-elif Binding.lib.SSLeay() < 0x10001000:\n- warnings.warn(\n- \"OpenSSL versions less than 1.0.1 are no longer supported by the \"\n- \"OpenSSL project, please upgrade. A future version of cryptography \"\n- \"will drop support for these versions.\",\n- DeprecationWarning\n- )\n+_verify_openssl_version(Binding.lib.SSLeay())\n", "issue": "Raise an error on openssl 0.9.8\nWith an env var (probably `CRYPTOGRAPHY_ALLOW_OPENSSL_098`) to allow disabling it for this one release.\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. 
See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport collections\nimport os\nimport threading\nimport types\nimport warnings\n\nfrom cryptography import utils\nfrom cryptography.exceptions import InternalError\nfrom cryptography.hazmat.bindings._openssl import ffi, lib\nfrom cryptography.hazmat.bindings.openssl._conditional import CONDITIONAL_NAMES\n\n_OpenSSLError = collections.namedtuple(\"_OpenSSLError\",\n [\"code\", \"lib\", \"func\", \"reason\"])\n_OpenSSLErrorWithText = collections.namedtuple(\n \"_OpenSSLErrorWithText\", [\"code\", \"lib\", \"func\", \"reason\", \"reason_text\"]\n)\n\n\ndef _consume_errors(lib):\n errors = []\n while True:\n code = lib.ERR_get_error()\n if code == 0:\n break\n\n err_lib = lib.ERR_GET_LIB(code)\n err_func = lib.ERR_GET_FUNC(code)\n err_reason = lib.ERR_GET_REASON(code)\n\n errors.append(_OpenSSLError(code, err_lib, err_func, err_reason))\n\n return errors\n\n\ndef _openssl_assert(lib, ok):\n if not ok:\n errors = _consume_errors(lib)\n errors_with_text = []\n for err in errors:\n err_text_reason = ffi.string(\n lib.ERR_error_string(err.code, ffi.NULL)\n )\n errors_with_text.append(\n _OpenSSLErrorWithText(\n err.code, err.lib, err.func, err.reason, err_text_reason\n )\n )\n\n raise InternalError(\n \"Unknown OpenSSL error. This error is commonly encountered when \"\n \"another library is not cleaning up the OpenSSL error stack. If \"\n \"you are using cryptography with another library that uses \"\n \"OpenSSL try disabling it before reporting a bug. Otherwise \"\n \"please file an issue at https://github.com/pyca/cryptography/\"\n \"issues with information on how to reproduce \"\n \"this. ({0!r})\".format(errors_with_text),\n errors_with_text\n )\n\n\ndef ffi_callback(signature, name, **kwargs):\n \"\"\"Callback dispatcher\n\n The ffi_callback() dispatcher keeps callbacks compatible between dynamic\n and static callbacks.\n \"\"\"\n def wrapper(func):\n if lib.Cryptography_STATIC_CALLBACKS:\n # def_extern() returns a decorator that sets the internal\n # function pointer and returns the original function unmodified.\n ffi.def_extern(name=name, **kwargs)(func)\n callback = getattr(lib, name)\n else:\n # callback() wraps the function in a cdata function.\n callback = ffi.callback(signature, **kwargs)(func)\n return callback\n return wrapper\n\n\n@ffi_callback(\"int (*)(unsigned char *, int)\",\n name=\"Cryptography_rand_bytes\",\n error=-1)\ndef _osrandom_rand_bytes(buf, size):\n signed = ffi.cast(\"char *\", buf)\n result = os.urandom(size)\n signed[0:size] = result\n return 1\n\n\n@ffi_callback(\"int (*)(void)\", name=\"Cryptography_rand_status\")\ndef _osrandom_rand_status():\n return 1\n\n\ndef build_conditional_library(lib, conditional_names):\n conditional_lib = types.ModuleType(\"lib\")\n excluded_names = set()\n for condition, names in conditional_names.items():\n if not getattr(lib, condition):\n excluded_names |= set(names)\n\n for attr in dir(lib):\n if attr not in excluded_names:\n setattr(conditional_lib, attr, getattr(lib, attr))\n\n return conditional_lib\n\n\nclass Binding(object):\n \"\"\"\n OpenSSL API wrapper.\n \"\"\"\n lib = None\n ffi = ffi\n _lib_loaded = False\n _locks = None\n _lock_cb_handle = None\n _init_lock = threading.Lock()\n _lock_init_lock = threading.Lock()\n\n _osrandom_engine_id = ffi.new(\"const char[]\", b\"osrandom\")\n _osrandom_engine_name = ffi.new(\"const char[]\", b\"osrandom_engine\")\n _osrandom_method = 
ffi.new(\n \"RAND_METHOD *\",\n dict(bytes=_osrandom_rand_bytes,\n pseudorand=_osrandom_rand_bytes,\n status=_osrandom_rand_status)\n )\n\n def __init__(self):\n self._ensure_ffi_initialized()\n\n @classmethod\n def _register_osrandom_engine(cls):\n _openssl_assert(cls.lib, cls.lib.ERR_peek_error() == 0)\n\n engine = cls.lib.ENGINE_new()\n _openssl_assert(cls.lib, engine != cls.ffi.NULL)\n try:\n result = cls.lib.ENGINE_set_id(engine, cls._osrandom_engine_id)\n _openssl_assert(cls.lib, result == 1)\n result = cls.lib.ENGINE_set_name(engine, cls._osrandom_engine_name)\n _openssl_assert(cls.lib, result == 1)\n result = cls.lib.ENGINE_set_RAND(engine, cls._osrandom_method)\n _openssl_assert(cls.lib, result == 1)\n result = cls.lib.ENGINE_add(engine)\n if result != 1:\n errors = _consume_errors(cls.lib)\n _openssl_assert(\n cls.lib,\n errors[0].reason == cls.lib.ENGINE_R_CONFLICTING_ENGINE_ID\n )\n\n finally:\n result = cls.lib.ENGINE_free(engine)\n _openssl_assert(cls.lib, result == 1)\n\n @classmethod\n def _ensure_ffi_initialized(cls):\n with cls._init_lock:\n if not cls._lib_loaded:\n cls.lib = build_conditional_library(lib, CONDITIONAL_NAMES)\n cls._lib_loaded = True\n # initialize the SSL library\n cls.lib.SSL_library_init()\n # adds all ciphers/digests for EVP\n cls.lib.OpenSSL_add_all_algorithms()\n # loads error strings for libcrypto and libssl functions\n cls.lib.SSL_load_error_strings()\n cls._register_osrandom_engine()\n\n @classmethod\n def init_static_locks(cls):\n with cls._lock_init_lock:\n cls._ensure_ffi_initialized()\n\n if not cls._lock_cb_handle:\n wrapper = ffi_callback(\n \"void(int, int, const char *, int)\",\n name=\"Cryptography_locking_cb\",\n )\n cls._lock_cb_handle = wrapper(cls._lock_cb)\n\n # Use Python's implementation if available, importing _ssl triggers\n # the setup for this.\n __import__(\"_ssl\")\n\n if cls.lib.CRYPTO_get_locking_callback() != cls.ffi.NULL:\n return\n\n # If nothing else has setup a locking callback already, we set up\n # our own\n num_locks = cls.lib.CRYPTO_num_locks()\n cls._locks = [threading.Lock() for n in range(num_locks)]\n\n cls.lib.CRYPTO_set_locking_callback(cls._lock_cb_handle)\n\n @classmethod\n def _lock_cb(cls, mode, n, file, line):\n lock = cls._locks[n]\n\n if mode & cls.lib.CRYPTO_LOCK:\n lock.acquire()\n elif mode & cls.lib.CRYPTO_UNLOCK:\n lock.release()\n else:\n raise RuntimeError(\n \"Unknown lock mode {0}: lock={1}, file={2}, line={3}.\".format(\n mode, n, file, line\n )\n )\n\n\n# OpenSSL is not thread safe until the locks are initialized. We call this\n# method in module scope so that it executes with the import lock. On\n# Pythons < 3.4 this import lock is a global lock, which can prevent a race\n# condition registering the OpenSSL locks. On Python 3.4+ the import lock\n# is per module so this approach will not work.\nBinding.init_static_locks()\n\nif Binding.lib.SSLeay() < 0x10000000:\n warnings.warn(\n \"OpenSSL version 0.9.8 is no longer supported by the OpenSSL project, \"\n \"please upgrade. The next version of cryptography will drop support \"\n \"for it.\",\n utils.DeprecatedIn12\n )\nelif Binding.lib.SSLeay() < 0x10001000:\n warnings.warn(\n \"OpenSSL versions less than 1.0.1 are no longer supported by the \"\n \"OpenSSL project, please upgrade. 
A future version of cryptography \"\n \"will drop support for these versions.\",\n DeprecationWarning\n )\n", "path": "src/cryptography/hazmat/bindings/openssl/binding.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport collections\nimport os\nimport threading\nimport types\nimport warnings\n\nfrom cryptography.exceptions import InternalError\nfrom cryptography.hazmat.bindings._openssl import ffi, lib\nfrom cryptography.hazmat.bindings.openssl._conditional import CONDITIONAL_NAMES\n\n_OpenSSLError = collections.namedtuple(\"_OpenSSLError\",\n [\"code\", \"lib\", \"func\", \"reason\"])\n_OpenSSLErrorWithText = collections.namedtuple(\n \"_OpenSSLErrorWithText\", [\"code\", \"lib\", \"func\", \"reason\", \"reason_text\"]\n)\n\n\ndef _consume_errors(lib):\n errors = []\n while True:\n code = lib.ERR_get_error()\n if code == 0:\n break\n\n err_lib = lib.ERR_GET_LIB(code)\n err_func = lib.ERR_GET_FUNC(code)\n err_reason = lib.ERR_GET_REASON(code)\n\n errors.append(_OpenSSLError(code, err_lib, err_func, err_reason))\n\n return errors\n\n\ndef _openssl_assert(lib, ok):\n if not ok:\n errors = _consume_errors(lib)\n errors_with_text = []\n for err in errors:\n err_text_reason = ffi.string(\n lib.ERR_error_string(err.code, ffi.NULL)\n )\n errors_with_text.append(\n _OpenSSLErrorWithText(\n err.code, err.lib, err.func, err.reason, err_text_reason\n )\n )\n\n raise InternalError(\n \"Unknown OpenSSL error. This error is commonly encountered when \"\n \"another library is not cleaning up the OpenSSL error stack. If \"\n \"you are using cryptography with another library that uses \"\n \"OpenSSL try disabling it before reporting a bug. Otherwise \"\n \"please file an issue at https://github.com/pyca/cryptography/\"\n \"issues with information on how to reproduce \"\n \"this. 
({0!r})\".format(errors_with_text),\n errors_with_text\n )\n\n\ndef ffi_callback(signature, name, **kwargs):\n \"\"\"Callback dispatcher\n\n The ffi_callback() dispatcher keeps callbacks compatible between dynamic\n and static callbacks.\n \"\"\"\n def wrapper(func):\n if lib.Cryptography_STATIC_CALLBACKS:\n # def_extern() returns a decorator that sets the internal\n # function pointer and returns the original function unmodified.\n ffi.def_extern(name=name, **kwargs)(func)\n callback = getattr(lib, name)\n else:\n # callback() wraps the function in a cdata function.\n callback = ffi.callback(signature, **kwargs)(func)\n return callback\n return wrapper\n\n\n@ffi_callback(\"int (*)(unsigned char *, int)\",\n name=\"Cryptography_rand_bytes\",\n error=-1)\ndef _osrandom_rand_bytes(buf, size):\n signed = ffi.cast(\"char *\", buf)\n result = os.urandom(size)\n signed[0:size] = result\n return 1\n\n\n@ffi_callback(\"int (*)(void)\", name=\"Cryptography_rand_status\")\ndef _osrandom_rand_status():\n return 1\n\n\ndef build_conditional_library(lib, conditional_names):\n conditional_lib = types.ModuleType(\"lib\")\n excluded_names = set()\n for condition, names in conditional_names.items():\n if not getattr(lib, condition):\n excluded_names |= set(names)\n\n for attr in dir(lib):\n if attr not in excluded_names:\n setattr(conditional_lib, attr, getattr(lib, attr))\n\n return conditional_lib\n\n\nclass Binding(object):\n \"\"\"\n OpenSSL API wrapper.\n \"\"\"\n lib = None\n ffi = ffi\n _lib_loaded = False\n _locks = None\n _lock_cb_handle = None\n _init_lock = threading.Lock()\n _lock_init_lock = threading.Lock()\n\n _osrandom_engine_id = ffi.new(\"const char[]\", b\"osrandom\")\n _osrandom_engine_name = ffi.new(\"const char[]\", b\"osrandom_engine\")\n _osrandom_method = ffi.new(\n \"RAND_METHOD *\",\n dict(bytes=_osrandom_rand_bytes,\n pseudorand=_osrandom_rand_bytes,\n status=_osrandom_rand_status)\n )\n\n def __init__(self):\n self._ensure_ffi_initialized()\n\n @classmethod\n def _register_osrandom_engine(cls):\n _openssl_assert(cls.lib, cls.lib.ERR_peek_error() == 0)\n\n engine = cls.lib.ENGINE_new()\n _openssl_assert(cls.lib, engine != cls.ffi.NULL)\n try:\n result = cls.lib.ENGINE_set_id(engine, cls._osrandom_engine_id)\n _openssl_assert(cls.lib, result == 1)\n result = cls.lib.ENGINE_set_name(engine, cls._osrandom_engine_name)\n _openssl_assert(cls.lib, result == 1)\n result = cls.lib.ENGINE_set_RAND(engine, cls._osrandom_method)\n _openssl_assert(cls.lib, result == 1)\n result = cls.lib.ENGINE_add(engine)\n if result != 1:\n errors = _consume_errors(cls.lib)\n _openssl_assert(\n cls.lib,\n errors[0].reason == cls.lib.ENGINE_R_CONFLICTING_ENGINE_ID\n )\n\n finally:\n result = cls.lib.ENGINE_free(engine)\n _openssl_assert(cls.lib, result == 1)\n\n @classmethod\n def _ensure_ffi_initialized(cls):\n with cls._init_lock:\n if not cls._lib_loaded:\n cls.lib = build_conditional_library(lib, CONDITIONAL_NAMES)\n cls._lib_loaded = True\n # initialize the SSL library\n cls.lib.SSL_library_init()\n # adds all ciphers/digests for EVP\n cls.lib.OpenSSL_add_all_algorithms()\n # loads error strings for libcrypto and libssl functions\n cls.lib.SSL_load_error_strings()\n cls._register_osrandom_engine()\n\n @classmethod\n def init_static_locks(cls):\n with cls._lock_init_lock:\n cls._ensure_ffi_initialized()\n\n if not cls._lock_cb_handle:\n wrapper = ffi_callback(\n \"void(int, int, const char *, int)\",\n name=\"Cryptography_locking_cb\",\n )\n cls._lock_cb_handle = wrapper(cls._lock_cb)\n\n # Use 
Python's implementation if available, importing _ssl triggers\n # the setup for this.\n __import__(\"_ssl\")\n\n if cls.lib.CRYPTO_get_locking_callback() != cls.ffi.NULL:\n return\n\n # If nothing else has setup a locking callback already, we set up\n # our own\n num_locks = cls.lib.CRYPTO_num_locks()\n cls._locks = [threading.Lock() for n in range(num_locks)]\n\n cls.lib.CRYPTO_set_locking_callback(cls._lock_cb_handle)\n\n @classmethod\n def _lock_cb(cls, mode, n, file, line):\n lock = cls._locks[n]\n\n if mode & cls.lib.CRYPTO_LOCK:\n lock.acquire()\n elif mode & cls.lib.CRYPTO_UNLOCK:\n lock.release()\n else:\n raise RuntimeError(\n \"Unknown lock mode {0}: lock={1}, file={2}, line={3}.\".format(\n mode, n, file, line\n )\n )\n\n\ndef _verify_openssl_version(version):\n if version < 0x10000000:\n if os.environ.get(\"CRYPTOGRAPHY_ALLOW_OPENSSL_098\"):\n warnings.warn(\n \"OpenSSL version 0.9.8 is no longer supported by the OpenSSL \"\n \"project, please upgrade. The next version of cryptography \"\n \"will completely remove support for it.\",\n DeprecationWarning\n )\n else:\n raise RuntimeError(\n \"You are linking against OpenSSL 0.9.8, which is no longer \"\n \"support by the OpenSSL project. You need to upgrade to a \"\n \"newer version of OpenSSL.\"\n )\n elif version < 0x10001000:\n warnings.warn(\n \"OpenSSL versions less than 1.0.1 are no longer supported by the \"\n \"OpenSSL project, please upgrade. A future version of \"\n \"cryptography will drop support for these versions of OpenSSL.\",\n DeprecationWarning\n )\n\n\n# OpenSSL is not thread safe until the locks are initialized. We call this\n# method in module scope so that it executes with the import lock. On\n# Pythons < 3.4 this import lock is a global lock, which can prevent a race\n# condition registering the OpenSSL locks. On Python 3.4+ the import lock\n# is per module so this approach will not work.\nBinding.init_static_locks()\n\n_verify_openssl_version(Binding.lib.SSLeay())\n", "path": "src/cryptography/hazmat/bindings/openssl/binding.py"}]} | 2,761 | 654 |
gh_patches_debug_15752 | rasdani/github-patches | git_diff | boto__boto-2598 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Route Tables: update describe_route_tables to support additional route types (VPC peering connection, NIC)
Routes can be created to network interfaces and VPC peering connections, however these routes are not reflected properly in the DescribeRouteTables response.
Doc here:
- http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeRouteTables.html
...however, the doc does not provide actual examples of the AWS response for these routes.
As such, I've repro'ed both route types and captured the AWS response XML here:
```
import boto, boto.vpc
conn = boto.vpc.connect_to_region('us-west-1')
vpc1 = conn.create_vpc("10.0.0.0/16");
vpc2 = conn.create_vpc("11.0.0.0/16");
route_table1 = conn.get_all_route_tables(filters={'association.main':'true','vpc-id':vpc1.id})[0]
# PCX route
vpc_pcx = conn.create_vpc_peering_connection(vpc1.id, vpc2.id)
conn.create_route(route_table1.id, "11.0.0.0/16", vpc_peering_connection_id=vpc_pcx.id)
# NIC route
subnet = conn.create_subnet(vpc1.id, "10.0.0.0/18")
eni = conn.create_network_interface(subnet.id)
conn.create_route(route_table1.id, "10.0.0.0/15", interface_id=eni.id)
# Describe route table
In [21]: route_table1 = conn.get_all_route_tables(filters={'association.main':'true','vpc-id':vpc1.id})[0]
2014-09-09 23:11:07,715 boto [DEBUG]:Method: GET
2014-09-09 23:11:07,715 boto [DEBUG]:Path: /
2014-09-09 23:11:07,715 boto [DEBUG]:Data:
2014-09-09 23:11:07,715 boto [DEBUG]:Headers: {}
2014-09-09 23:11:07,715 boto [DEBUG]:Host: ec2.us-west-1.amazonaws.com
2014-09-09 23:11:07,716 boto [DEBUG]:Port: 443
2014-09-09 23:11:07,716 boto [DEBUG]:Params: {'Filter.1.Name': 'association.main', 'Filter.2.Value.1': u'vpc-3f658b5a', 'Action': 'DescribeRouteTables', 'Version': '2014-05-01', 'Filter.2.Name': 'vpc-id', 'Filter.1.Value.1': 'true'}
2014-09-09 23:11:07,716 boto [DEBUG]:Token: None
2014-09-09 23:11:07,717 boto [DEBUG]:using _calc_signature_2
2014-09-09 23:11:07,717 boto [DEBUG]:query string: AWSAccessKeyId=AKIAI7ZA6HB42ZXF3LOA&Action=DescribeRouteTables&Filter.1.Name=association.main&Filter.1.Value.1=true&Filter.2.Name=vpc-id&Filter.2.Value.1=vpc-3f658b5a&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2014-09-09T23%3A11%3A07Z&Version=2014-05-01
2014-09-09 23:11:07,718 boto [DEBUG]:string_to_sign: GET
ec2.us-west-1.amazonaws.com
/
AWSAccessKeyId=AKIAI7ZA6HB42ZXF3LOA&Action=DescribeRouteTables&Filter.1.Name=association.main&Filter.1.Value.1=true&Filter.2.Name=vpc-id&Filter.2.Value.1=vpc-3f658b5a&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2014-09-09T23%3A11%3A07Z&Version=2014-05-01
2014-09-09 23:11:07,718 boto [DEBUG]:len(b64)=44
2014-09-09 23:11:07,718 boto [DEBUG]:base64 encoded digest: qAcNssrx85LkIJIUymuvM7dB1v527ogS8PGrsicg1f4=
2014-09-09 23:11:07,718 boto [DEBUG]:query_string: AWSAccessKeyId=AKIAI7ZA6HB42ZXF3LOA&Action=DescribeRouteTables&Filter.1.Name=association.main&Filter.1.Value.1=true&Filter.2.Name=vpc-id&Filter.2.Value.1=vpc-3f658b5a&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2014-09-09T23%3A11%3A07Z&Version=2014-05-01 Signature: qAcNssrx85LkIJIUymuvM7dB1v527ogS8PGrsicg1f4=
2014-09-09 23:11:07,718 boto [DEBUG]:Final headers: {'Content-Length': 0, 'Host': 'ec2.us-west-1.amazonaws.com', 'User-Agent': 'Boto/2.32.1 Python/2.6.5 Linux/2.6.32-45-server'}
2014-09-09 23:11:07,840 boto [DEBUG]:Response headers: [('transfer-encoding', 'chunked'), ('vary', 'Accept-Encoding'), ('server', 'AmazonEC2'), ('content-type', 'text/xml;charset=UTF-8'), ('date', 'Tue, 09 Sep 2014 23:11:07 GMT')]
2014-09-09 23:11:07,840 boto [DEBUG]:<?xml version="1.0" encoding="UTF-8"?>
<DescribeRouteTablesResponse xmlns="http://ec2.amazonaws.com/doc/2014-05-01/">
<requestId>e22ff1da-fb06-4e0f-bda6-8555c18f0af6</requestId>
<routeTableSet>
<item>
<routeTableId>rtb-a754bcc2</routeTableId>
<vpcId>vpc-3f658b5a</vpcId>
<routeSet>
<item>
<destinationCidrBlock>10.0.0.0/16</destinationCidrBlock>
<gatewayId>local</gatewayId>
<state>active</state>
<origin>CreateRouteTable</origin>
</item>
<item>
<destinationCidrBlock>11.0.0.0/16</destinationCidrBlock>
<vpcPeeringConnectionId>pcx-efc52b86</vpcPeeringConnectionId>
<state>blackhole</state>
<origin>CreateRoute</origin>
</item>
<item>
<destinationCidrBlock>10.0.0.0/15</destinationCidrBlock>
<networkInterfaceId>eni-884ec1d1</networkInterfaceId>
<state>blackhole</state>
<origin>CreateRoute</origin>
</item>
</routeSet>
<associationSet>
<item>
<routeTableAssociationId>rtbassoc-111ef574</routeTableAssociationId>
<routeTableId>rtb-a754bcc2</routeTableId>
<main>true</main>
</item>
</associationSet>
<propagatingVgwSet/>
<tagSet/>
</item>
</routeTableSet>
</DescribeRouteTablesResponse>
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `boto/vpc/routetable.py`
Content:
```
1 # Copyright (c) 2009-2010 Mitch Garnaat http://garnaat.org/
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a
4 # copy of this software and associated documentation files (the
5 # "Software"), to deal in the Software without restriction, including
6 # without limitation the rights to use, copy, modify, merge, publish, dis-
7 # tribute, sublicense, and/or sell copies of the Software, and to permit
8 # persons to whom the Software is furnished to do so, subject to the fol-
9 # lowing conditions:
10 #
11 # The above copyright notice and this permission notice shall be included
12 # in all copies or substantial portions of the Software.
13 #
14 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
15 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
16 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
17 # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
18 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
19 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
20 # IN THE SOFTWARE.
21
22 """
23 Represents a Route Table
24 """
25
26 from boto.ec2.ec2object import TaggedEC2Object
27 from boto.resultset import ResultSet
28
29 class RouteTable(TaggedEC2Object):
30
31 def __init__(self, connection=None):
32 super(RouteTable, self).__init__(connection)
33 self.id = None
34 self.vpc_id = None
35 self.routes = []
36 self.associations = []
37
38 def __repr__(self):
39 return 'RouteTable:%s' % self.id
40
41 def startElement(self, name, attrs, connection):
42 result = super(RouteTable, self).startElement(name, attrs, connection)
43
44 if result is not None:
45 # Parent found an interested element, just return it
46 return result
47
48 if name == 'routeSet':
49 self.routes = ResultSet([('item', Route)])
50 return self.routes
51 elif name == 'associationSet':
52 self.associations = ResultSet([('item', RouteAssociation)])
53 return self.associations
54 else:
55 return None
56
57 def endElement(self, name, value, connection):
58 if name == 'routeTableId':
59 self.id = value
60 elif name == 'vpcId':
61 self.vpc_id = value
62 else:
63 setattr(self, name, value)
64
65 class Route(object):
66 def __init__(self, connection=None):
67 self.destination_cidr_block = None
68 self.gateway_id = None
69 self.instance_id = None
70 self.state = None
71
72 def __repr__(self):
73 return 'Route:%s' % self.destination_cidr_block
74
75 def startElement(self, name, attrs, connection):
76 return None
77
78 def endElement(self, name, value, connection):
79 if name == 'destinationCidrBlock':
80 self.destination_cidr_block = value
81 elif name == 'gatewayId':
82 self.gateway_id = value
83 elif name == 'instanceId':
84 self.instance_id = value
85 elif name == 'state':
86 self.state = value
87
88 class RouteAssociation(object):
89 def __init__(self, connection=None):
90 self.id = None
91 self.route_table_id = None
92 self.subnet_id = None
93 self.main = False
94
95 def __repr__(self):
96 return 'RouteAssociation:%s' % self.id
97
98 def startElement(self, name, attrs, connection):
99 return None
100
101 def endElement(self, name, value, connection):
102 if name == 'routeTableAssociationId':
103 self.id = value
104 elif name == 'routeTableId':
105 self.route_table_id = value
106 elif name == 'subnetId':
107 self.subnet_id = value
108 elif name == 'main':
109 self.main = value == 'true'
110
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/boto/vpc/routetable.py b/boto/vpc/routetable.py
--- a/boto/vpc/routetable.py
+++ b/boto/vpc/routetable.py
@@ -67,6 +67,8 @@
self.destination_cidr_block = None
self.gateway_id = None
self.instance_id = None
+ self.interface_id = None
+ self.vpc_peering_connection_id = None
self.state = None
def __repr__(self):
@@ -82,6 +84,10 @@
self.gateway_id = value
elif name == 'instanceId':
self.instance_id = value
+ elif name == 'networkInterfaceId':
+ self.interface_id = value
+ elif name == 'vpcPeeringConnectionId':
+ self.vpc_peering_connection_id = value
elif name == 'state':
self.state = value
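With the two attributes added in the diff above, peering and network-interface routes become visible to callers without touching the raw XML. A rough usage sketch, assuming the patched `Route` class; the region and filters are placeholders:
```
import boto.vpc

conn = boto.vpc.connect_to_region('us-west-1')
table = conn.get_all_route_tables(filters={'association.main': 'true'})[0]
for route in table.routes:
    # gateway_id/instance_id were already parsed; interface_id and
    # vpc_peering_connection_id are new and stay None for routes that
    # target an internet gateway or an instance.
    print(route.destination_cidr_block,
          route.gateway_id,
          route.interface_id,
          route.vpc_peering_connection_id,
          route.state)
```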
| {"golden_diff": "diff --git a/boto/vpc/routetable.py b/boto/vpc/routetable.py\n--- a/boto/vpc/routetable.py\n+++ b/boto/vpc/routetable.py\n@@ -67,6 +67,8 @@\n self.destination_cidr_block = None\n self.gateway_id = None\n self.instance_id = None\n+ self.interface_id = None\n+ self.vpc_peering_connection_id = None\n self.state = None\n \n def __repr__(self):\n@@ -82,6 +84,10 @@\n self.gateway_id = value\n elif name == 'instanceId':\n self.instance_id = value\n+ elif name == 'networkInterfaceId':\n+ self.interface_id = value\n+ elif name == 'vpcPeeringConnectionId':\n+ self.vpc_peering_connection_id = value\n elif name == 'state':\n self.state = value\n", "issue": "Route Tables: update describe_route_tables to support additional route types (VPC peering connection, NIC)\nRoutes can be created to network interfaces and VPC peering connections, however these routes are not reflected properly in the DescribeRouteTables response.\n\nDoc here:\n- http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeRouteTables.html\n\n...however, the doc does not provide actual examples of the AWS response for these routes.\n\nAs such, I've repro'ed both route types and captured the AWS response XML here:\n\n```\nimport boto, boto.vpc\nconn = boto.vpc.connect_to_region('us-west-1')\n\nvpc1 = conn.create_vpc(\"10.0.0.0/16\");\nvpc2 = conn.create_vpc(\"11.0.0.0/16\");\nroute_table1 = conn.get_all_route_tables(filters={'association.main':'true','vpc-id':vpc1.id})[0]\n\n# PCX route\nvpc_pcx = conn.create_vpc_peering_connection(vpc1.id, vpc2.id)\nconn.create_route(route_table1.id, \"11.0.0.0/16\", vpc_peering_connection_id=vpc_pcx.id)\n\n# NIC route\nsubnet = conn.create_subnet(vpc1.id, \"10.0.0.0/18\")\neni = conn.create_network_interface(subnet.id)\nconn.create_route(route_table1.id, \"10.0.0.0/15\", interface_id=eni.id)\n\n# Describe route table\nIn [21]: route_table1 = conn.get_all_route_tables(filters={'association.main':'true','vpc-id':vpc1.id})[0]\n2014-09-09 23:11:07,715 boto [DEBUG]:Method: GET\n2014-09-09 23:11:07,715 boto [DEBUG]:Path: /\n2014-09-09 23:11:07,715 boto [DEBUG]:Data: \n2014-09-09 23:11:07,715 boto [DEBUG]:Headers: {}\n2014-09-09 23:11:07,715 boto [DEBUG]:Host: ec2.us-west-1.amazonaws.com\n2014-09-09 23:11:07,716 boto [DEBUG]:Port: 443\n2014-09-09 23:11:07,716 boto [DEBUG]:Params: {'Filter.1.Name': 'association.main', 'Filter.2.Value.1': u'vpc-3f658b5a', 'Action': 'DescribeRouteTables', 'Version': '2014-05-01', 'Filter.2.Name': 'vpc-id', 'Filter.1.Value.1': 'true'}\n2014-09-09 23:11:07,716 boto [DEBUG]:Token: None\n2014-09-09 23:11:07,717 boto [DEBUG]:using _calc_signature_2\n2014-09-09 23:11:07,717 boto [DEBUG]:query string: AWSAccessKeyId=AKIAI7ZA6HB42ZXF3LOA&Action=DescribeRouteTables&Filter.1.Name=association.main&Filter.1.Value.1=true&Filter.2.Name=vpc-id&Filter.2.Value.1=vpc-3f658b5a&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2014-09-09T23%3A11%3A07Z&Version=2014-05-01\n2014-09-09 23:11:07,718 boto [DEBUG]:string_to_sign: GET\nec2.us-west-1.amazonaws.com\n/\nAWSAccessKeyId=AKIAI7ZA6HB42ZXF3LOA&Action=DescribeRouteTables&Filter.1.Name=association.main&Filter.1.Value.1=true&Filter.2.Name=vpc-id&Filter.2.Value.1=vpc-3f658b5a&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2014-09-09T23%3A11%3A07Z&Version=2014-05-01\n2014-09-09 23:11:07,718 boto [DEBUG]:len(b64)=44\n2014-09-09 23:11:07,718 boto [DEBUG]:base64 encoded digest: qAcNssrx85LkIJIUymuvM7dB1v527ogS8PGrsicg1f4=\n2014-09-09 23:11:07,718 boto [DEBUG]:query_string: 
AWSAccessKeyId=AKIAI7ZA6HB42ZXF3LOA&Action=DescribeRouteTables&Filter.1.Name=association.main&Filter.1.Value.1=true&Filter.2.Name=vpc-id&Filter.2.Value.1=vpc-3f658b5a&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2014-09-09T23%3A11%3A07Z&Version=2014-05-01 Signature: qAcNssrx85LkIJIUymuvM7dB1v527ogS8PGrsicg1f4=\n2014-09-09 23:11:07,718 boto [DEBUG]:Final headers: {'Content-Length': 0, 'Host': 'ec2.us-west-1.amazonaws.com', 'User-Agent': 'Boto/2.32.1 Python/2.6.5 Linux/2.6.32-45-server'}\n2014-09-09 23:11:07,840 boto [DEBUG]:Response headers: [('transfer-encoding', 'chunked'), ('vary', 'Accept-Encoding'), ('server', 'AmazonEC2'), ('content-type', 'text/xml;charset=UTF-8'), ('date', 'Tue, 09 Sep 2014 23:11:07 GMT')]\n2014-09-09 23:11:07,840 boto [DEBUG]:<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<DescribeRouteTablesResponse xmlns=\"http://ec2.amazonaws.com/doc/2014-05-01/\">\n <requestId>e22ff1da-fb06-4e0f-bda6-8555c18f0af6</requestId>\n <routeTableSet>\n <item>\n <routeTableId>rtb-a754bcc2</routeTableId>\n <vpcId>vpc-3f658b5a</vpcId>\n <routeSet>\n <item>\n <destinationCidrBlock>10.0.0.0/16</destinationCidrBlock>\n <gatewayId>local</gatewayId>\n <state>active</state>\n <origin>CreateRouteTable</origin>\n </item>\n <item>\n <destinationCidrBlock>11.0.0.0/16</destinationCidrBlock>\n <vpcPeeringConnectionId>pcx-efc52b86</vpcPeeringConnectionId>\n <state>blackhole</state>\n <origin>CreateRoute</origin>\n </item>\n <item>\n <destinationCidrBlock>10.0.0.0/15</destinationCidrBlock>\n <networkInterfaceId>eni-884ec1d1</networkInterfaceId>\n <state>blackhole</state>\n <origin>CreateRoute</origin>\n </item>\n </routeSet>\n <associationSet>\n <item>\n <routeTableAssociationId>rtbassoc-111ef574</routeTableAssociationId>\n <routeTableId>rtb-a754bcc2</routeTableId>\n <main>true</main>\n </item>\n </associationSet>\n <propagatingVgwSet/>\n <tagSet/>\n </item>\n </routeTableSet>\n</DescribeRouteTablesResponse>\n```\n\n", "before_files": [{"content": "# Copyright (c) 2009-2010 Mitch Garnaat http://garnaat.org/\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n\n\"\"\"\nRepresents a Route Table\n\"\"\"\n\nfrom boto.ec2.ec2object import TaggedEC2Object\nfrom boto.resultset import ResultSet\n\nclass RouteTable(TaggedEC2Object):\n\n def __init__(self, connection=None):\n super(RouteTable, self).__init__(connection)\n self.id = None\n self.vpc_id = None\n self.routes = []\n self.associations = []\n\n def __repr__(self):\n return 'RouteTable:%s' % self.id\n\n def startElement(self, name, attrs, connection):\n result = super(RouteTable, self).startElement(name, attrs, connection)\n\n if result is not None:\n # Parent found an interested element, just return it\n return result\n\n if name == 'routeSet':\n self.routes = ResultSet([('item', Route)])\n return self.routes\n elif name == 'associationSet':\n self.associations = ResultSet([('item', RouteAssociation)])\n return self.associations\n else:\n return None\n\n def endElement(self, name, value, connection):\n if name == 'routeTableId':\n self.id = value\n elif name == 'vpcId':\n self.vpc_id = value\n else:\n setattr(self, name, value)\n\nclass Route(object):\n def __init__(self, connection=None):\n self.destination_cidr_block = None\n self.gateway_id = None\n self.instance_id = None\n self.state = None\n\n def __repr__(self):\n return 'Route:%s' % self.destination_cidr_block\n\n def startElement(self, name, attrs, connection):\n return None\n\n def endElement(self, name, value, connection):\n if name == 'destinationCidrBlock':\n self.destination_cidr_block = value\n elif name == 'gatewayId':\n self.gateway_id = value\n elif name == 'instanceId':\n self.instance_id = value\n elif name == 'state':\n self.state = value\n\nclass RouteAssociation(object):\n def __init__(self, connection=None):\n self.id = None\n self.route_table_id = None\n self.subnet_id = None\n self.main = False\n\n def __repr__(self):\n return 'RouteAssociation:%s' % self.id\n\n def startElement(self, name, attrs, connection):\n return None\n\n def endElement(self, name, value, connection):\n if name == 'routeTableAssociationId':\n self.id = value\n elif name == 'routeTableId':\n self.route_table_id = value\n elif name == 'subnetId':\n self.subnet_id = value\n elif name == 'main':\n self.main = value == 'true'\n", "path": "boto/vpc/routetable.py"}], "after_files": [{"content": "# Copyright (c) 2009-2010 Mitch Garnaat http://garnaat.org/\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n\n\"\"\"\nRepresents a Route Table\n\"\"\"\n\nfrom boto.ec2.ec2object import TaggedEC2Object\nfrom boto.resultset import ResultSet\n\nclass RouteTable(TaggedEC2Object):\n\n def __init__(self, connection=None):\n super(RouteTable, self).__init__(connection)\n self.id = None\n self.vpc_id = None\n self.routes = []\n self.associations = []\n\n def __repr__(self):\n return 'RouteTable:%s' % self.id\n\n def startElement(self, name, attrs, connection):\n result = super(RouteTable, self).startElement(name, attrs, connection)\n\n if result is not None:\n # Parent found an interested element, just return it\n return result\n\n if name == 'routeSet':\n self.routes = ResultSet([('item', Route)])\n return self.routes\n elif name == 'associationSet':\n self.associations = ResultSet([('item', RouteAssociation)])\n return self.associations\n else:\n return None\n\n def endElement(self, name, value, connection):\n if name == 'routeTableId':\n self.id = value\n elif name == 'vpcId':\n self.vpc_id = value\n else:\n setattr(self, name, value)\n\nclass Route(object):\n def __init__(self, connection=None):\n self.destination_cidr_block = None\n self.gateway_id = None\n self.instance_id = None\n self.interface_id = None\n self.vpc_peering_connection_id = None\n self.state = None\n\n def __repr__(self):\n return 'Route:%s' % self.destination_cidr_block\n\n def startElement(self, name, attrs, connection):\n return None\n\n def endElement(self, name, value, connection):\n if name == 'destinationCidrBlock':\n self.destination_cidr_block = value\n elif name == 'gatewayId':\n self.gateway_id = value\n elif name == 'instanceId':\n self.instance_id = value\n elif name == 'networkInterfaceId':\n self.interface_id = value\n elif name == 'vpcPeeringConnectionId':\n self.vpc_peering_connection_id = value\n elif name == 'state':\n self.state = value\n\nclass RouteAssociation(object):\n def __init__(self, connection=None):\n self.id = None\n self.route_table_id = None\n self.subnet_id = None\n self.main = False\n\n def __repr__(self):\n return 'RouteAssociation:%s' % self.id\n\n def startElement(self, name, attrs, connection):\n return None\n\n def endElement(self, name, value, connection):\n if name == 'routeTableAssociationId':\n self.id = value\n elif name == 'routeTableId':\n self.route_table_id = value\n elif name == 'subnetId':\n self.subnet_id = value\n elif name == 'main':\n self.main = value == 'true'\n", "path": "boto/vpc/routetable.py"}]} | 3,319 | 203 |
gh_patches_debug_27222 | rasdani/github-patches | git_diff | psychopy__psychopy-2835 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Working directory issues with 2020.1.1 (most likely related to the new run window)
See https://discourse.psychopy.org/t/working-directory-change-with-2020-1-and-later/11143 and https://discourse.psychopy.org/t/both-the-stim3d-py-and-trialhandler-py-programs-with-psychopy-2020-1-1-do-not-run-properly/11201
At least on Mac (have not tested on Windows), since upgrading to 2020.1.1, if you run a script, the working directory is set to /Applications/PsychoPy3.app/Contents/Resources, rather than the directory that contains the script. This completely breaks any script that uses relative paths for relevant files expecting the working directory to be the script's directory. This includes some of the demos (see second link above) and my PyHab project.
In the short run you can just add the following to the top of every file where it matters:
```
import os
os.chdir(os.path.dirname(os.path.realpath(__file__)))
```
but this is basically a kludge and bad for backwards-compatibility. Does it need to be this way or can we get the new running window to use the script's directory as the working directory by default?
--- END ISSUE ---
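The workaround quoted in the issue already sketches the core idea; a slightly more defensive variant (an illustrative sketch only, guarding against `__file__` being undefined in interactive sessions) could look like:
```
import os
import sys

try:
    # __file__ exists when the code runs from a script file ...
    script_dir = os.path.dirname(os.path.abspath(__file__))
except NameError:
    # ... but not in an interactive shell, so fall back to argv[0].
    script_dir = os.path.dirname(os.path.abspath(sys.argv[0]))

if script_dir:
    os.chdir(script_dir)
```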
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `psychopy/__init__.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 # Part of the PsychoPy library
5 # Copyright (C) 2002-2018 Jonathan Peirce (C) 2019 Open Science Tools Ltd.
6 # Distributed under the terms of the GNU General Public License (GPL).
7
8 # --------------------------------------------------------------------------
9 # This file is automatically generated during build (do not edit directly).
10 # --------------------------------------------------------------------------
11
12 import os
13 import sys
14
15 __version__ = '2020.1.2'
16 __license__ = 'GNU GPLv3 (or more recent equivalent)'
17 __author__ = 'Jonathan Peirce'
18 __author_email__ = '[email protected]'
19 __maintainer_email__ = '[email protected]'
20 __url__ = 'http://www.psychopy.org/'
21 __download_url__ = 'https://github.com/psychopy/psychopy/releases/'
22 __git_sha__ = 'n/a'
23 __build_platform__ = 'n/a'
24
25 __all__ = ["gui", "misc", "visual", "core",
26 "event", "data", "sound", "microphone"]
27
28 # for developers the following allows access to the current git sha from
29 # their repository
30 if __git_sha__ == 'n/a':
31 from subprocess import check_output, PIPE
32 # see if we're in a git repo and fetch from there
33 try:
34 thisFileLoc = os.path.split(__file__)[0]
35 output = check_output(['git', 'rev-parse', '--short', 'HEAD'],
36 cwd=thisFileLoc, stderr=PIPE)
37 except Exception:
38 output = False
39 if output:
40 __git_sha__ = output.strip() # remove final linefeed
41
42 # update preferences and the user paths
43 if 'installing' not in locals():
44 from psychopy.preferences import prefs
45 for pathName in prefs.general['paths']:
46 sys.path.append(pathName)
47
48 from psychopy.tools.versionchooser import useVersion, ensureMinimal
49
50 # import readline here to get around an issue with sounddevice
51 # issues GH-2230 GH-2344 GH-2662
52 try:
53 import readline
54 except ImportError:
55 pass # all that will happen is the stderr/stdout might get redirected
56
57
```
Path: `setupApp.py`
Content:
```
1 #!/usr/bin/env python
2 ################
3 # see notes at bottom for requirements
4 from __future__ import absolute_import, print_function
5 import glob
6 import os
7 import sys
8 from sys import platform
9 from distutils.core import setup
10 from pkg_resources import parse_version
11 # import versioneer
12 import psychopy
13 version = psychopy.__version__
14
15 # regenerate __init__.py only if we're in the source repos (not in a zip file)
16 try:
17 import createInitFile # won't exist in a sdist.zip
18 writeNewInit=True
19 except:
20 writeNewInit=False
21 if writeNewInit:
22 vStr = createInitFile.createInitFile(dist='bdist')
23
24 #define the extensions to compile if necess
25 packageData = []
26 requires = []
27
28 if platform != 'darwin':
29 raise RuntimeError("setupApp.py is only for building Mac Standalone bundle")
30
31 import bdist_mpkg
32 import py2app
33 resources = glob.glob('psychopy/app/Resources/*')
34 resources.append('/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pyconfig.h')
35 frameworks = ["libavbin.dylib", "/usr/lib/libxml2.2.dylib", #"libyaml.dylib",
36 "libevent.dylib", "libffi.dylib",
37 "libmp3lame.0.dylib",
38 "/usr/local/Cellar/glfw/3.2.1/lib/libglfw.3.2.dylib",
39 ]
40 opencvLibs = glob.glob(os.path.join(sys.exec_prefix, 'lib', 'libopencv*.2.4.dylib'))
41 frameworks.extend(opencvLibs)
42
43 import macholib
44 #print("~"*60 + "macholib version: "+macholib.__version__)
45
46 if parse_version(macholib.__version__) <= parse_version('1.7'):
47 print("Applying macholib patch...")
48 import macholib.dyld
49 import macholib.MachOGraph
50 dyld_find_1_7 = macholib.dyld.dyld_find
51 def dyld_find(name, loader=None, **kwargs):
52 #print("~"*60 + "calling alternate dyld_find")
53 if loader is not None:
54 kwargs['loader_path'] = loader
55 return dyld_find_1_7(name, **kwargs)
56 macholib.MachOGraph.dyld_find = dyld_find
57
58 includes = ['Tkinter', 'tkFileDialog',
59 'imp', 'subprocess', 'shlex',
60 'shelve', # for scipy.io
61 '_elementtree', 'pyexpat', # for openpyxl
62 'hid',
63 'pyo', 'greenlet', 'zmq', 'tornado',
64 'psutil', # for iohub
65 'tobii_research', # need tobii_research file and tobiiresearch pkg
66 'pysoundcard', 'soundfile', 'sounddevice', 'readline',
67 'hid',
68 'xlwt', # writes excel files for pandas
69 'vlc', # install with pip install python-vlc
70 'msgpack_numpy',
71 'configparser',
72 ]
73 packages = ['wx', 'psychopy',
74 'pyglet', 'pygame', 'pytz', 'OpenGL', 'glfw',
75 'scipy', 'matplotlib', 'lxml', 'xml', 'openpyxl',
76 'moviepy', 'imageio', 'imageio_ffmpeg',
77 '_sounddevice_data', '_soundfile_data',
78 'cffi','pycparser',
79 'PIL', # 'Image',
80 'objc', 'Quartz', 'AppKit', 'QTKit', 'Cocoa',
81 'Foundation', 'CoreFoundation',
82 'pkg_resources', # needed for objc
83 'pyolib',
84 'requests', 'certifi', 'cryptography',
85 'pyosf',
86 # for unit testing
87 'coverage',
88 # handy external science libs
89 'serial',
90 'egi', 'pylink', 'tobiiresearch',
91 'pyxid', 'pyxid2', 'ftd2xx', # ftd2xx is used by cedrus
92 'pandas', 'tables', # 'cython',
93 'msgpack', 'yaml', 'gevent', # for ioHub
94 # these aren't needed, but liked
95 'psychopy_ext', 'pyfilesec',
96 'bidi', 'arabic_reshaper', # for right-left language conversions
97 # for Py3 compatibility
98 'future', 'past', 'lib2to3',
99 'json_tricks', # allows saving arrays/dates in json
100 'git', 'gitlab',
101 'astunparse', 'esprima', # for translating/adapting py/JS
102 'pylsl', 'pygaze', 'smite',
103 'cv2',
104 'badapted', 'darc_toolbox', # adaptive methods from Ben Vincent
105 'questplus',
106 'metapensiero.pj', 'dukpy', 'macropy',
107 ]
108
109 if sys.version_info.major >= 3:
110 packages.extend(['PyQt5'])
111 else:
112 # not available or not working under Python3:
113 includes.extend(['UserString', 'ioLabs', 'FileDialog'])
114 packages.extend(['PyQt4', 'labjack', 'rusocsci'])
115 # is available but py2app can't seem to find it:
116 packages.extend(['OpenGL'])
117
118 setup(
119 app=['psychopy/app/psychopyApp.py'],
120 options=dict(py2app=dict(
121 includes=includes,
122 packages=packages,
123 excludes=['bsddb', 'jinja2', 'IPython','ipython_genutils','nbconvert',
124 'libsz.2.dylib',
125 # 'stringprep',
126 'functools32',
127 ], # anything we need to forcibly exclude?
128 resources=resources,
129 argv_emulation=True,
130 site_packages=True,
131 frameworks=frameworks,
132 iconfile='psychopy/app/Resources/psychopy.icns',
133 plist=dict(
134 CFBundleIconFile='psychopy.icns',
135 CFBundleName = "PsychoPy3",
136 CFBundleShortVersionString = version, # must be in X.X.X format
137 CFBundleGetInfoString = "PsychoPy3 "+version,
138 CFBundleExecutable = "PsychoPy3",
139 CFBundleIdentifier = "org.psychopy.PsychoPy3",
140 CFBundleLicense = "GNU GPLv3+",
141 CFBundleDocumentTypes=[dict(CFBundleTypeExtensions=['*'],
142 CFBundleTypeRole='Editor')],
143 LSEnvironment=dict(PATH="/usr/local/git/bin:/usr/local/bin:"
144 "/usr/local:/usr/bin:/usr/sbin"),
145 ),
146 )) # end of the options dict
147 )
148
149
150 # ugly hack for opencv2:
151 # As of opencv 2.4.5 the cv2.so binary used rpath to a fixed
152 # location to find libs and even more annoyingly it then appended
153 # 'lib' to the rpath as well. These were fine for the packaged
154 # framework python but the libs in an app bundle are different.
155 # So, create symlinks so they appear in the same place as in framework python
156 rpath = "dist/PsychoPy3.app/Contents/Resources/"
157 for libPath in opencvLibs:
158 libname = os.path.split(libPath)[-1]
159 realPath = "../../Frameworks/"+libname # relative path (w.r.t. the fake)
160 fakePath = os.path.join(rpath, "lib", libname)
161 os.symlink(realPath, fakePath)
162 # they even did this for Python lib itself, which is in diff location
163 realPath = "../Frameworks/Python.framework/Python" # relative to the fake path
164 fakePath = os.path.join(rpath, "Python")
165 os.symlink(realPath, fakePath)
166
167 if writeNewInit:
168 # remove unwanted info about this system post-build
169 createInitFile.createInitFile(dist=None)
170
171 # running testApp from within the app raises wx errors
172 # shutil.rmtree("dist/PsychoPy3.app/Contents/Resources/lib/python2.6/psychopy/tests/testTheApp")
173
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/psychopy/__init__.py b/psychopy/__init__.py
--- a/psychopy/__init__.py
+++ b/psychopy/__init__.py
@@ -12,7 +12,7 @@
import os
import sys
-__version__ = '2020.1.2'
+__version__ = '2020.1.3'
__license__ = 'GNU GPLv3 (or more recent equivalent)'
__author__ = 'Jonathan Peirce'
__author_email__ = '[email protected]'
diff --git a/setupApp.py b/setupApp.py
--- a/setupApp.py
+++ b/setupApp.py
@@ -134,10 +134,11 @@
CFBundleIconFile='psychopy.icns',
CFBundleName = "PsychoPy3",
CFBundleShortVersionString = version, # must be in X.X.X format
- CFBundleGetInfoString = "PsychoPy3 "+version,
+ CFBundleVersion = version,
CFBundleExecutable = "PsychoPy3",
CFBundleIdentifier = "org.psychopy.PsychoPy3",
CFBundleLicense = "GNU GPLv3+",
+ NSHumanReadableCopyright = "Open Science Tools Ltd.",
CFBundleDocumentTypes=[dict(CFBundleTypeExtensions=['*'],
CFBundleTypeRole='Editor')],
LSEnvironment=dict(PATH="/usr/local/git/bin:/usr/local/bin:"
| {"golden_diff": "diff --git a/psychopy/__init__.py b/psychopy/__init__.py\n--- a/psychopy/__init__.py\n+++ b/psychopy/__init__.py\n@@ -12,7 +12,7 @@\n import os\n import sys\n \n-__version__ = '2020.1.2'\n+__version__ = '2020.1.3'\n __license__ = 'GNU GPLv3 (or more recent equivalent)'\n __author__ = 'Jonathan Peirce'\n __author_email__ = '[email protected]'\ndiff --git a/setupApp.py b/setupApp.py\n--- a/setupApp.py\n+++ b/setupApp.py\n@@ -134,10 +134,11 @@\n CFBundleIconFile='psychopy.icns',\n CFBundleName = \"PsychoPy3\",\n CFBundleShortVersionString = version, # must be in X.X.X format\n- CFBundleGetInfoString = \"PsychoPy3 \"+version,\n+ CFBundleVersion = version,\n CFBundleExecutable = \"PsychoPy3\",\n CFBundleIdentifier = \"org.psychopy.PsychoPy3\",\n CFBundleLicense = \"GNU GPLv3+\",\n+ NSHumanReadableCopyright = \"Open Science Tools Ltd.\",\n CFBundleDocumentTypes=[dict(CFBundleTypeExtensions=['*'],\n CFBundleTypeRole='Editor')],\n LSEnvironment=dict(PATH=\"/usr/local/git/bin:/usr/local/bin:\"\n", "issue": "Working directory issues with 2020.1.1 (most likely related to the new run window)\nSee https://discourse.psychopy.org/t/working-directory-change-with-2020-1-and-later/11143 and https://discourse.psychopy.org/t/both-the-stim3d-py-and-trialhandler-py-programs-with-psychopy-2020-1-1-do-not-run-properly/11201\r\n\r\nAt least on Mac (have not tested on Windows), since upgrading to 2020.1.1, if you run a script, the working directory is set to /Applications/PsychoPy3.app/Contents/Resources, rather than the directory that contains the script. This completely breaks any script that uses relative paths for relevant files expecting the working directory to be the script's directory. This includes some of the demos (see second link above) and my PyHab project.\r\n\r\nIn the short run you can just add the following to the top of every file where it matters:\r\n\r\n```\r\nimport os\r\nos.chdir(os.path.dirname(os.path.realpath(__file__)))\r\n```\r\n\r\nbut this is basically a kludge and bad for backwards-compatibility. 
Does it need to be this way or can we get the new running window to use the script's directory as the working directory by default?\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n# Part of the PsychoPy library\n# Copyright (C) 2002-2018 Jonathan Peirce (C) 2019 Open Science Tools Ltd.\n# Distributed under the terms of the GNU General Public License (GPL).\n\n# --------------------------------------------------------------------------\n# This file is automatically generated during build (do not edit directly).\n# --------------------------------------------------------------------------\n\nimport os\nimport sys\n\n__version__ = '2020.1.2'\n__license__ = 'GNU GPLv3 (or more recent equivalent)'\n__author__ = 'Jonathan Peirce'\n__author_email__ = '[email protected]'\n__maintainer_email__ = '[email protected]'\n__url__ = 'http://www.psychopy.org/'\n__download_url__ = 'https://github.com/psychopy/psychopy/releases/'\n__git_sha__ = 'n/a'\n__build_platform__ = 'n/a'\n\n__all__ = [\"gui\", \"misc\", \"visual\", \"core\",\n \"event\", \"data\", \"sound\", \"microphone\"]\n\n# for developers the following allows access to the current git sha from\n# their repository\nif __git_sha__ == 'n/a':\n from subprocess import check_output, PIPE\n # see if we're in a git repo and fetch from there\n try:\n thisFileLoc = os.path.split(__file__)[0]\n output = check_output(['git', 'rev-parse', '--short', 'HEAD'],\n cwd=thisFileLoc, stderr=PIPE)\n except Exception:\n output = False\n if output:\n __git_sha__ = output.strip() # remove final linefeed\n\n# update preferences and the user paths\nif 'installing' not in locals():\n from psychopy.preferences import prefs\n for pathName in prefs.general['paths']:\n sys.path.append(pathName)\n\n from psychopy.tools.versionchooser import useVersion, ensureMinimal\n\n# import readline here to get around an issue with sounddevice\n# issues GH-2230 GH-2344 GH-2662\ntry:\n import readline\nexcept ImportError:\n pass # all that will happen is the stderr/stdout might get redirected\n\n", "path": "psychopy/__init__.py"}, {"content": "#!/usr/bin/env python\n################\n# see notes at bottom for requirements\nfrom __future__ import absolute_import, print_function\nimport glob\nimport os\nimport sys\nfrom sys import platform\nfrom distutils.core import setup\nfrom pkg_resources import parse_version\n# import versioneer\nimport psychopy\nversion = psychopy.__version__\n\n# regenerate __init__.py only if we're in the source repos (not in a zip file)\ntry:\n import createInitFile # won't exist in a sdist.zip\n writeNewInit=True\nexcept:\n writeNewInit=False\nif writeNewInit:\n vStr = createInitFile.createInitFile(dist='bdist')\n\n#define the extensions to compile if necess\npackageData = []\nrequires = []\n\nif platform != 'darwin':\n raise RuntimeError(\"setupApp.py is only for building Mac Standalone bundle\")\n\nimport bdist_mpkg\nimport py2app\nresources = glob.glob('psychopy/app/Resources/*')\nresources.append('/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pyconfig.h')\nframeworks = [\"libavbin.dylib\", \"/usr/lib/libxml2.2.dylib\", #\"libyaml.dylib\",\n \"libevent.dylib\", \"libffi.dylib\",\n \"libmp3lame.0.dylib\",\n \"/usr/local/Cellar/glfw/3.2.1/lib/libglfw.3.2.dylib\",\n ]\nopencvLibs = glob.glob(os.path.join(sys.exec_prefix, 'lib', 'libopencv*.2.4.dylib'))\nframeworks.extend(opencvLibs)\n\nimport macholib\n#print(\"~\"*60 + \"macholib version: \"+macholib.__version__)\n\nif parse_version(macholib.__version__) <= 
parse_version('1.7'):\n print(\"Applying macholib patch...\")\n import macholib.dyld\n import macholib.MachOGraph\n dyld_find_1_7 = macholib.dyld.dyld_find\n def dyld_find(name, loader=None, **kwargs):\n #print(\"~\"*60 + \"calling alternate dyld_find\")\n if loader is not None:\n kwargs['loader_path'] = loader\n return dyld_find_1_7(name, **kwargs)\n macholib.MachOGraph.dyld_find = dyld_find\n\nincludes = ['Tkinter', 'tkFileDialog',\n 'imp', 'subprocess', 'shlex',\n 'shelve', # for scipy.io\n '_elementtree', 'pyexpat', # for openpyxl\n 'hid',\n 'pyo', 'greenlet', 'zmq', 'tornado',\n 'psutil', # for iohub\n 'tobii_research', # need tobii_research file and tobiiresearch pkg\n 'pysoundcard', 'soundfile', 'sounddevice', 'readline',\n 'hid',\n 'xlwt', # writes excel files for pandas\n 'vlc', # install with pip install python-vlc\n 'msgpack_numpy',\n 'configparser',\n ]\npackages = ['wx', 'psychopy',\n 'pyglet', 'pygame', 'pytz', 'OpenGL', 'glfw',\n 'scipy', 'matplotlib', 'lxml', 'xml', 'openpyxl',\n 'moviepy', 'imageio', 'imageio_ffmpeg',\n '_sounddevice_data', '_soundfile_data',\n 'cffi','pycparser',\n 'PIL', # 'Image',\n 'objc', 'Quartz', 'AppKit', 'QTKit', 'Cocoa',\n 'Foundation', 'CoreFoundation',\n 'pkg_resources', # needed for objc\n 'pyolib',\n 'requests', 'certifi', 'cryptography',\n 'pyosf',\n # for unit testing\n 'coverage',\n # handy external science libs\n 'serial',\n 'egi', 'pylink', 'tobiiresearch',\n 'pyxid', 'pyxid2', 'ftd2xx', # ftd2xx is used by cedrus\n 'pandas', 'tables', # 'cython',\n 'msgpack', 'yaml', 'gevent', # for ioHub\n # these aren't needed, but liked\n 'psychopy_ext', 'pyfilesec',\n 'bidi', 'arabic_reshaper', # for right-left language conversions\n # for Py3 compatibility\n 'future', 'past', 'lib2to3',\n 'json_tricks', # allows saving arrays/dates in json\n 'git', 'gitlab',\n 'astunparse', 'esprima', # for translating/adapting py/JS\n 'pylsl', 'pygaze', 'smite',\n 'cv2',\n 'badapted', 'darc_toolbox', # adaptive methods from Ben Vincent\n 'questplus',\n 'metapensiero.pj', 'dukpy', 'macropy',\n ]\n\nif sys.version_info.major >= 3:\n packages.extend(['PyQt5'])\nelse:\n # not available or not working under Python3:\n includes.extend(['UserString', 'ioLabs', 'FileDialog'])\n packages.extend(['PyQt4', 'labjack', 'rusocsci'])\n # is available but py2app can't seem to find it:\n packages.extend(['OpenGL'])\n\nsetup(\n app=['psychopy/app/psychopyApp.py'],\n options=dict(py2app=dict(\n includes=includes,\n packages=packages,\n excludes=['bsddb', 'jinja2', 'IPython','ipython_genutils','nbconvert',\n 'libsz.2.dylib',\n # 'stringprep',\n 'functools32',\n ], # anything we need to forcibly exclude?\n resources=resources,\n argv_emulation=True,\n site_packages=True,\n frameworks=frameworks,\n iconfile='psychopy/app/Resources/psychopy.icns',\n plist=dict(\n CFBundleIconFile='psychopy.icns',\n CFBundleName = \"PsychoPy3\",\n CFBundleShortVersionString = version, # must be in X.X.X format\n CFBundleGetInfoString = \"PsychoPy3 \"+version,\n CFBundleExecutable = \"PsychoPy3\",\n CFBundleIdentifier = \"org.psychopy.PsychoPy3\",\n CFBundleLicense = \"GNU GPLv3+\",\n CFBundleDocumentTypes=[dict(CFBundleTypeExtensions=['*'],\n CFBundleTypeRole='Editor')],\n LSEnvironment=dict(PATH=\"/usr/local/git/bin:/usr/local/bin:\"\n \"/usr/local:/usr/bin:/usr/sbin\"),\n ),\n )) # end of the options dict\n)\n\n\n# ugly hack for opencv2:\n# As of opencv 2.4.5 the cv2.so binary used rpath to a fixed\n# location to find libs and even more annoyingly it then appended\n# 'lib' to the rpath as well. 
These were fine for the packaged\n# framework python but the libs in an app bundle are different.\n# So, create symlinks so they appear in the same place as in framework python\nrpath = \"dist/PsychoPy3.app/Contents/Resources/\"\nfor libPath in opencvLibs:\n libname = os.path.split(libPath)[-1]\n realPath = \"../../Frameworks/\"+libname # relative path (w.r.t. the fake)\n fakePath = os.path.join(rpath, \"lib\", libname)\n os.symlink(realPath, fakePath)\n# they even did this for Python lib itself, which is in diff location\nrealPath = \"../Frameworks/Python.framework/Python\" # relative to the fake path\nfakePath = os.path.join(rpath, \"Python\")\nos.symlink(realPath, fakePath)\n\nif writeNewInit:\n # remove unwanted info about this system post-build\n createInitFile.createInitFile(dist=None)\n\n# running testApp from within the app raises wx errors\n# shutil.rmtree(\"dist/PsychoPy3.app/Contents/Resources/lib/python2.6/psychopy/tests/testTheApp\")\n", "path": "setupApp.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n# Part of the PsychoPy library\n# Copyright (C) 2002-2018 Jonathan Peirce (C) 2019 Open Science Tools Ltd.\n# Distributed under the terms of the GNU General Public License (GPL).\n\n# --------------------------------------------------------------------------\n# This file is automatically generated during build (do not edit directly).\n# --------------------------------------------------------------------------\n\nimport os\nimport sys\n\n__version__ = '2020.1.3'\n__license__ = 'GNU GPLv3 (or more recent equivalent)'\n__author__ = 'Jonathan Peirce'\n__author_email__ = '[email protected]'\n__maintainer_email__ = '[email protected]'\n__url__ = 'http://www.psychopy.org/'\n__download_url__ = 'https://github.com/psychopy/psychopy/releases/'\n__git_sha__ = 'n/a'\n__build_platform__ = 'n/a'\n\n__all__ = [\"gui\", \"misc\", \"visual\", \"core\",\n \"event\", \"data\", \"sound\", \"microphone\"]\n\n# for developers the following allows access to the current git sha from\n# their repository\nif __git_sha__ == 'n/a':\n from subprocess import check_output, PIPE\n # see if we're in a git repo and fetch from there\n try:\n thisFileLoc = os.path.split(__file__)[0]\n output = check_output(['git', 'rev-parse', '--short', 'HEAD'],\n cwd=thisFileLoc, stderr=PIPE)\n except Exception:\n output = False\n if output:\n __git_sha__ = output.strip() # remove final linefeed\n\n# update preferences and the user paths\nif 'installing' not in locals():\n from psychopy.preferences import prefs\n for pathName in prefs.general['paths']:\n sys.path.append(pathName)\n\n from psychopy.tools.versionchooser import useVersion, ensureMinimal\n\n# import readline here to get around an issue with sounddevice\n# issues GH-2230 GH-2344 GH-2662\ntry:\n import readline\nexcept ImportError:\n pass # all that will happen is the stderr/stdout might get redirected\n\n", "path": "psychopy/__init__.py"}, {"content": "#!/usr/bin/env python\n################\n# see notes at bottom for requirements\nfrom __future__ import absolute_import, print_function\nimport glob\nimport os\nimport sys\nfrom sys import platform\nfrom distutils.core import setup\nfrom pkg_resources import parse_version\n# import versioneer\nimport psychopy\nversion = psychopy.__version__\n\n# regenerate __init__.py only if we're in the source repos (not in a zip file)\ntry:\n import createInitFile # won't exist in a sdist.zip\n writeNewInit=True\nexcept:\n writeNewInit=False\nif writeNewInit:\n vStr = 
createInitFile.createInitFile(dist='bdist')\n\n#define the extensions to compile if necess\npackageData = []\nrequires = []\n\nif platform != 'darwin':\n raise RuntimeError(\"setupApp.py is only for building Mac Standalone bundle\")\n\nimport bdist_mpkg\nimport py2app\nresources = glob.glob('psychopy/app/Resources/*')\nresources.append('/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pyconfig.h')\nframeworks = [\"libavbin.dylib\", \"/usr/lib/libxml2.2.dylib\", #\"libyaml.dylib\",\n \"libevent.dylib\", \"libffi.dylib\",\n \"libmp3lame.0.dylib\",\n \"/usr/local/Cellar/glfw/3.2.1/lib/libglfw.3.2.dylib\",\n ]\nopencvLibs = glob.glob(os.path.join(sys.exec_prefix, 'lib', 'libopencv*.2.4.dylib'))\nframeworks.extend(opencvLibs)\n\nimport macholib\n#print(\"~\"*60 + \"macholib version: \"+macholib.__version__)\n\nif parse_version(macholib.__version__) <= parse_version('1.7'):\n print(\"Applying macholib patch...\")\n import macholib.dyld\n import macholib.MachOGraph\n dyld_find_1_7 = macholib.dyld.dyld_find\n def dyld_find(name, loader=None, **kwargs):\n #print(\"~\"*60 + \"calling alternate dyld_find\")\n if loader is not None:\n kwargs['loader_path'] = loader\n return dyld_find_1_7(name, **kwargs)\n macholib.MachOGraph.dyld_find = dyld_find\n\nincludes = ['Tkinter', 'tkFileDialog',\n 'imp', 'subprocess', 'shlex',\n 'shelve', # for scipy.io\n '_elementtree', 'pyexpat', # for openpyxl\n 'hid',\n 'pyo', 'greenlet', 'zmq', 'tornado',\n 'psutil', # for iohub\n 'tobii_research', # need tobii_research file and tobiiresearch pkg\n 'pysoundcard', 'soundfile', 'sounddevice', 'readline',\n 'hid',\n 'xlwt', # writes excel files for pandas\n 'vlc', # install with pip install python-vlc\n 'msgpack_numpy',\n 'configparser',\n ]\npackages = ['wx', 'psychopy',\n 'pyglet', 'pygame', 'pytz', 'OpenGL', 'glfw',\n 'scipy', 'matplotlib', 'lxml', 'xml', 'openpyxl',\n 'moviepy', 'imageio', 'imageio_ffmpeg',\n '_sounddevice_data', '_soundfile_data',\n 'cffi','pycparser',\n 'PIL', # 'Image',\n 'objc', 'Quartz', 'AppKit', 'QTKit', 'Cocoa',\n 'Foundation', 'CoreFoundation',\n 'pkg_resources', # needed for objc\n 'pyolib',\n 'requests', 'certifi', 'cryptography',\n 'pyosf',\n # for unit testing\n 'coverage',\n # handy external science libs\n 'serial',\n 'egi', 'pylink', 'tobiiresearch',\n 'pyxid', 'pyxid2', 'ftd2xx', # ftd2xx is used by cedrus\n 'pandas', 'tables', # 'cython',\n 'msgpack', 'yaml', 'gevent', # for ioHub\n # these aren't needed, but liked\n 'psychopy_ext', 'pyfilesec',\n 'bidi', 'arabic_reshaper', # for right-left language conversions\n # for Py3 compatibility\n 'future', 'past', 'lib2to3',\n 'json_tricks', # allows saving arrays/dates in json\n 'git', 'gitlab',\n 'astunparse', 'esprima', # for translating/adapting py/JS\n 'pylsl', 'pygaze', 'smite',\n 'cv2',\n 'badapted', 'darc_toolbox', # adaptive methods from Ben Vincent\n 'questplus',\n 'metapensiero.pj', 'dukpy', 'macropy',\n ]\n\nif sys.version_info.major >= 3:\n packages.extend(['PyQt5'])\nelse:\n # not available or not working under Python3:\n includes.extend(['UserString', 'ioLabs', 'FileDialog'])\n packages.extend(['PyQt4', 'labjack', 'rusocsci'])\n # is available but py2app can't seem to find it:\n packages.extend(['OpenGL'])\n\nsetup(\n app=['psychopy/app/psychopyApp.py'],\n options=dict(py2app=dict(\n includes=includes,\n packages=packages,\n excludes=['bsddb', 'jinja2', 'IPython','ipython_genutils','nbconvert',\n 'libsz.2.dylib',\n # 'stringprep',\n 'functools32',\n ], # anything we need to forcibly exclude?\n 
resources=resources,\n argv_emulation=True,\n site_packages=True,\n frameworks=frameworks,\n iconfile='psychopy/app/Resources/psychopy.icns',\n plist=dict(\n CFBundleIconFile='psychopy.icns',\n CFBundleName = \"PsychoPy3\",\n CFBundleShortVersionString = version, # must be in X.X.X format\n CFBundleVersion = version,\n CFBundleExecutable = \"PsychoPy3\",\n CFBundleIdentifier = \"org.psychopy.PsychoPy3\",\n CFBundleLicense = \"GNU GPLv3+\",\n NSHumanReadableCopyright = \"Open Science Tools Ltd.\",\n CFBundleDocumentTypes=[dict(CFBundleTypeExtensions=['*'],\n CFBundleTypeRole='Editor')],\n LSEnvironment=dict(PATH=\"/usr/local/git/bin:/usr/local/bin:\"\n \"/usr/local:/usr/bin:/usr/sbin\"),\n ),\n )) # end of the options dict\n)\n\n\n# ugly hack for opencv2:\n# As of opencv 2.4.5 the cv2.so binary used rpath to a fixed\n# location to find libs and even more annoyingly it then appended\n# 'lib' to the rpath as well. These were fine for the packaged\n# framework python but the libs in an app bundle are different.\n# So, create symlinks so they appear in the same place as in framework python\nrpath = \"dist/PsychoPy3.app/Contents/Resources/\"\nfor libPath in opencvLibs:\n libname = os.path.split(libPath)[-1]\n realPath = \"../../Frameworks/\"+libname # relative path (w.r.t. the fake)\n fakePath = os.path.join(rpath, \"lib\", libname)\n os.symlink(realPath, fakePath)\n# they even did this for Python lib itself, which is in diff location\nrealPath = \"../Frameworks/Python.framework/Python\" # relative to the fake path\nfakePath = os.path.join(rpath, \"Python\")\nos.symlink(realPath, fakePath)\n\nif writeNewInit:\n # remove unwanted info about this system post-build\n createInitFile.createInitFile(dist=None)\n\n# running testApp from within the app raises wx errors\n# shutil.rmtree(\"dist/PsychoPy3.app/Contents/Resources/lib/python2.6/psychopy/tests/testTheApp\")\n", "path": "setupApp.py"}]} | 3,332 | 328 |
gh_patches_debug_4846 | rasdani/github-patches | git_diff | carpentries__amy-438 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Event's start date must be earlier than its end date
We should enforce that the start date is earlier than the end date.
--- END ISSUE ---
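For context, the validation the issue asks for is a standard cross-field check. The sketch below is hypothetical and not part of the repository files that follow; it assumes a plain Django `Form` with `start` and `end` `DateField`s (the form name `EventDatesForm` is made up), while the project itself uses a `ModelForm` (`EventForm`) shown in `workshops/forms.py` below.

```python
# Minimal sketch of a start/end cross-field check in a Django form.
# Hypothetical: EventDatesForm is not a class from the repository.
from django import forms


class EventDatesForm(forms.Form):
    start = forms.DateField(required=False)
    end = forms.DateField(required=False)

    def clean(self):
        cleaned_data = super().clean()
        start = cleaned_data.get('start')
        end = cleaned_data.get('end')
        # Only compare when both dates were supplied.
        if start and end and end < start:
            raise forms.ValidationError('End date must not be earlier than start date.')
        return cleaned_data
```

In a `ModelForm`, the same check can instead live in a field-level `clean_end()` method so the error message is attached to the `end` field.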
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `workshops/forms.py`
Content:
```
1 from django import forms
2 from django.forms import HiddenInput, CheckboxSelectMultiple
3
4 from crispy_forms.helper import FormHelper
5 from crispy_forms.layout import Layout, Div, HTML, Submit, Field
6 from crispy_forms.bootstrap import FormActions
7 from django_countries.fields import CountryField
8 from selectable import forms as selectable
9
10 from workshops.models import (
11 Award, Event, Lesson, Person, Task, KnowledgeDomain,
12 )
13 from workshops import lookups
14
15 INSTRUCTORS_NUM = 10 # how many instrutors to return from a search by default
16
17 AUTOCOMPLETE_HELP_TEXT = (
18 "Autocomplete field; type characters to view available options, "
19 "then select desired item from list."
20 )
21
22 DATE_HELP_TEXT = "Select date using widget, or enter in YYYY-MM-DD format."
23
24
25 class BootstrapHelper(FormHelper):
26 form_class = 'form-horizontal'
27 label_class = 'col-lg-2'
28 field_class = 'col-lg-8'
29
30 def __init__(self, form=None):
31 super().__init__(form)
32
33 self.attrs['role'] = 'form'
34 self.inputs.append(Submit('submit', 'Submit'))
35
36
37 class BootstrapHelperWithAdd(BootstrapHelper):
38 def __init__(self, form=None):
39 super().__init__(form)
40
41 self.inputs[-1] = Submit('submit', 'Add')
42
43
44 class BootstrapHelperFilter(FormHelper):
45 form_method = 'get'
46
47 def __init__(self, form=None):
48 super().__init__(form)
49 self.attrs['role'] = 'form'
50 self.inputs.append(Submit('', 'Submit'))
51
52
53 bootstrap_helper = BootstrapHelper()
54 bootstrap_helper_with_add = BootstrapHelperWithAdd()
55 bootstrap_helper_filter = BootstrapHelperFilter()
56
57
58 class InstructorsForm(forms.Form):
59 '''Represent instructor matching form.'''
60
61 latitude = forms.FloatField(label='Latitude',
62 min_value=-90.0,
63 max_value=90.0,
64 required=False)
65 longitude = forms.FloatField(label='Longitude',
66 min_value=-180.0,
67 max_value=180.0,
68 required=False)
69 airport = selectable.AutoCompleteSelectField(
70 lookup_class=lookups.AirportLookup,
71 label='Airport',
72 required=False,
73 widget=selectable.AutoComboboxSelectWidget(
74 lookup_class=lookups.AirportLookup,
75 ),
76 )
77
78 country = CountryField().formfield(required=False)
79
80 lessons = forms.ModelMultipleChoiceField(queryset=Lesson.objects.all(),
81 widget=CheckboxSelectMultiple(),
82 required=False)
83
84 def __init__(self, *args, **kwargs):
85 '''Build checkboxes for qualifications dynamically.'''
86 super(InstructorsForm, self).__init__(*args, **kwargs)
87 self.helper = FormHelper(self)
88 self.helper.form_class = 'form-inline'
89 self.helper.form_method = 'get'
90 self.helper.layout = Layout(
91 Div(
92 Div(
93 'latitude',
94 'longitude',
95 css_class='panel-body'
96 ),
97 css_class='panel panel-default ',
98 ),
99 HTML('<p>OR</p>'),
100 Div(
101 Div(
102 'airport',
103 css_class='panel-body'
104 ),
105 css_class='panel panel-default ',
106 ),
107 HTML('<p>OR</p>'),
108 Div(
109 Div(
110 'country',
111 css_class='panel-body'
112 ),
113 css_class='panel panel-default ',
114 ),
115 'lessons',
116 FormActions(
117 Submit('submit', 'Submit'),
118 ),
119 )
120
121 def clean(self):
122 cleaned_data = super(InstructorsForm, self).clean()
123 airport = cleaned_data.get('airport')
124 lat = cleaned_data.get('latitude')
125 long = cleaned_data.get('longitude')
126 country = cleaned_data.get('country')
127
128 sum = bool(airport) + bool(lat and long) + bool(country)
129 # user can specify only one: either airport, or lat&long, or country
130 if sum != 1:
131 raise forms.ValidationError('Must specify an airport, or latitude'
132 ' and longitude, or a country.')
133 return cleaned_data
134
135
136 class PersonBulkAddForm(forms.Form):
137 '''Represent CSV upload form for bulk adding people.'''
138
139 file = forms.FileField()
140
141
142 class SearchForm(forms.Form):
143 '''Represent general searching form.'''
144
145 term = forms.CharField(label='term',
146 max_length=100)
147 in_sites = forms.BooleanField(label='in sites',
148 required=False,
149 initial=True)
150 in_events = forms.BooleanField(label='in events',
151 required=False,
152 initial=True)
153 in_persons = forms.BooleanField(label='in persons',
154 required=False,
155 initial=True)
156 in_airports = forms.BooleanField(label='in airports',
157 required=False,
158 initial=True)
159
160
161 class DebriefForm(forms.Form):
162 '''Represent general debrief form.'''
163 begin_date = forms.DateField(
164 label='Begin date as YYYY-MM-DD',
165 input_formats=['%Y-%m-%d', ]
166 )
167 end_date = forms.DateField(
168 label='End date as YYYY-MD-DD',
169 input_formats=['%Y-%m-%d', ]
170 )
171
172
173 class EventForm(forms.ModelForm):
174
175 site = selectable.AutoCompleteSelectField(
176 lookup_class=lookups.SiteLookup,
177 label='Site',
178 required=True,
179 help_text=AUTOCOMPLETE_HELP_TEXT,
180 widget=selectable.AutoComboboxSelectWidget,
181 )
182
183 organizer = selectable.AutoCompleteSelectField(
184 lookup_class=lookups.SiteLookup,
185 label='Organizer',
186 required=False,
187 help_text=AUTOCOMPLETE_HELP_TEXT,
188 widget=selectable.AutoComboboxSelectWidget,
189 )
190
191 def __init__(self, *args, **kwargs):
192 super().__init__(*args, **kwargs)
193 self.fields['start'].help_text = DATE_HELP_TEXT
194 self.fields['end'].help_text = DATE_HELP_TEXT
195
196 def clean_slug(self):
197 # Ensure slug is not an integer value for Event.get_by_ident
198 data = self.cleaned_data['slug']
199
200 try:
201 int(data)
202 except ValueError:
203 pass
204 else:
205 raise forms.ValidationError("Slug must not be an integer-value.")
206
207 return data
208
209 class Meta:
210 model = Event
211 # reorder fields, don't display 'deleted' field
212 fields = ('slug', 'start', 'end', 'site', 'organizer',
213 'tags', 'url', 'reg_key', 'admin_fee', 'invoiced',
214 'attendance', 'notes')
215
216 class Media:
217 # thanks to this, {{ form.media }} in the template will generate
218 # a <link href=""> (for CSS files) or <script src=""> (for JS files)
219 js = ('calendar_popup.js', )
220
221
222 class TaskForm(forms.ModelForm):
223
224 person = selectable.AutoCompleteSelectField(
225 lookup_class=lookups.PersonLookup,
226 label='Person',
227 required=True,
228 help_text=AUTOCOMPLETE_HELP_TEXT,
229 widget=selectable.AutoComboboxSelectWidget,
230 )
231
232 class Meta:
233 model = Task
234 fields = '__all__'
235 widgets = {'event': HiddenInput}
236
237
238 class TaskFullForm(TaskForm):
239
240 event = selectable.AutoCompleteSelectField(
241 lookup_class=lookups.EventLookup,
242 label='Event',
243 required=True,
244 help_text=AUTOCOMPLETE_HELP_TEXT,
245 widget=selectable.AutoComboboxSelectWidget,
246 )
247
248 class Meta:
249 model = Task
250 fields = '__all__'
251
252
253 class PersonForm(forms.ModelForm):
254
255 airport = selectable.AutoCompleteSelectField(
256 lookup_class=lookups.AirportLookup,
257 label='Airport',
258 required=False,
259 help_text=AUTOCOMPLETE_HELP_TEXT,
260 widget=selectable.AutoComboboxSelectWidget,
261 )
262
263 lessons = forms.ModelMultipleChoiceField(required=False,
264 queryset=Lesson.objects.all())
265
266 domains = forms.ModelMultipleChoiceField(
267 required=False, queryset=KnowledgeDomain.objects.all()
268 )
269
270 class Meta:
271 model = Person
272 # don't display the 'password', 'user_permissions',
273 # 'groups' or 'is_superuser' fields
274 # + reorder fields
275 fields = ['username', 'personal', 'middle', 'family', 'may_contact',
276 'email', 'gender', 'airport', 'affiliation', 'github',
277 'twitter', 'url', 'notes', 'lessons', 'domains']
278
279
280 class PersonPermissionsForm(forms.ModelForm):
281 class Meta:
282 model = Person
283 # only display 'user_permissions', 'groups' and `is_superuser` fields
284 fields = [
285 'is_superuser',
286 'user_permissions',
287 'groups',
288 ]
289
290
291 class PersonMergeForm(forms.Form):
292
293 person_from = selectable.AutoCompleteSelectField(
294 lookup_class=lookups.PersonLookup,
295 label='Person From',
296 required=True,
297 help_text=AUTOCOMPLETE_HELP_TEXT,
298 widget=selectable.AutoComboboxSelectWidget,
299 )
300
301 person_to = selectable.AutoCompleteSelectField(
302 lookup_class=lookups.PersonLookup,
303 label='Person To',
304 required=True,
305 help_text=AUTOCOMPLETE_HELP_TEXT,
306 widget=selectable.AutoComboboxSelectWidget,
307 )
308
309
310 class BadgeAwardForm(forms.ModelForm):
311
312 person = selectable.AutoCompleteSelectField(
313 lookup_class=lookups.PersonLookup,
314 label='Person',
315 required=True,
316 help_text=AUTOCOMPLETE_HELP_TEXT,
317 widget=selectable.AutoComboboxSelectWidget,
318 )
319
320 event = selectable.AutoCompleteSelectField(
321 lookup_class=lookups.EventLookup,
322 label='Event',
323 required=False,
324 help_text=AUTOCOMPLETE_HELP_TEXT,
325 widget=selectable.AutoComboboxSelectWidget,
326 )
327
328 class Meta:
329 model = Award
330 fields = '__all__'
331 widgets = {'badge': HiddenInput}
332
333
334 class PersonAwardForm(forms.ModelForm):
335
336 event = selectable.AutoCompleteSelectField(
337 lookup_class=lookups.EventLookup,
338 label='Event',
339 required=False,
340 help_text=AUTOCOMPLETE_HELP_TEXT,
341 widget=selectable.AutoComboboxSelectWidget,
342 )
343
344 class Meta:
345 model = Award
346 fields = '__all__'
347 widgets = {'person': HiddenInput}
348
349
350 class PersonTaskForm(forms.ModelForm):
351
352 event = selectable.AutoCompleteSelectField(
353 lookup_class=lookups.EventLookup,
354 label='Event',
355 required=True,
356 help_text=AUTOCOMPLETE_HELP_TEXT,
357 widget=selectable.AutoComboboxSelectWidget,
358 )
359
360 class Meta:
361 model = Task
362 fields = '__all__'
363 widgets = {'person': HiddenInput}
364
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/workshops/forms.py b/workshops/forms.py
--- a/workshops/forms.py
+++ b/workshops/forms.py
@@ -206,6 +206,15 @@
return data
+ def clean_end(self):
+ """Ensure end >= start."""
+ start = self.cleaned_data['start']
+ end = self.cleaned_data['end']
+
+ if start and end and end < start:
+ raise forms.ValidationError('Must not be earlier than start date.')
+ return end
+
class Meta:
model = Event
# reorder fields, don't display 'deleted' field
| {"golden_diff": "diff --git a/workshops/forms.py b/workshops/forms.py\n--- a/workshops/forms.py\n+++ b/workshops/forms.py\n@@ -206,6 +206,15 @@\n \n return data\n \n+ def clean_end(self):\n+ \"\"\"Ensure end >= start.\"\"\"\n+ start = self.cleaned_data['start']\n+ end = self.cleaned_data['end']\n+\n+ if start and end and end < start:\n+ raise forms.ValidationError('Must not be earlier than start date.')\n+ return end\n+\n class Meta:\n model = Event\n # reorder fields, don't display 'deleted' field\n", "issue": "Event's start date must be earlier than it's end date\nWe should enforce that the start date is early that the end date.\n\n", "before_files": [{"content": "from django import forms\nfrom django.forms import HiddenInput, CheckboxSelectMultiple\n\nfrom crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import Layout, Div, HTML, Submit, Field\nfrom crispy_forms.bootstrap import FormActions\nfrom django_countries.fields import CountryField\nfrom selectable import forms as selectable\n\nfrom workshops.models import (\n Award, Event, Lesson, Person, Task, KnowledgeDomain,\n)\nfrom workshops import lookups\n\nINSTRUCTORS_NUM = 10 # how many instrutors to return from a search by default\n\nAUTOCOMPLETE_HELP_TEXT = (\n \"Autocomplete field; type characters to view available options, \"\n \"then select desired item from list.\"\n)\n\nDATE_HELP_TEXT = \"Select date using widget, or enter in YYYY-MM-DD format.\"\n\n\nclass BootstrapHelper(FormHelper):\n form_class = 'form-horizontal'\n label_class = 'col-lg-2'\n field_class = 'col-lg-8'\n\n def __init__(self, form=None):\n super().__init__(form)\n\n self.attrs['role'] = 'form'\n self.inputs.append(Submit('submit', 'Submit'))\n\n\nclass BootstrapHelperWithAdd(BootstrapHelper):\n def __init__(self, form=None):\n super().__init__(form)\n\n self.inputs[-1] = Submit('submit', 'Add')\n\n\nclass BootstrapHelperFilter(FormHelper):\n form_method = 'get'\n\n def __init__(self, form=None):\n super().__init__(form)\n self.attrs['role'] = 'form'\n self.inputs.append(Submit('', 'Submit'))\n\n\nbootstrap_helper = BootstrapHelper()\nbootstrap_helper_with_add = BootstrapHelperWithAdd()\nbootstrap_helper_filter = BootstrapHelperFilter()\n\n\nclass InstructorsForm(forms.Form):\n '''Represent instructor matching form.'''\n\n latitude = forms.FloatField(label='Latitude',\n min_value=-90.0,\n max_value=90.0,\n required=False)\n longitude = forms.FloatField(label='Longitude',\n min_value=-180.0,\n max_value=180.0,\n required=False)\n airport = selectable.AutoCompleteSelectField(\n lookup_class=lookups.AirportLookup,\n label='Airport',\n required=False,\n widget=selectable.AutoComboboxSelectWidget(\n lookup_class=lookups.AirportLookup,\n ),\n )\n\n country = CountryField().formfield(required=False)\n\n lessons = forms.ModelMultipleChoiceField(queryset=Lesson.objects.all(),\n widget=CheckboxSelectMultiple(),\n required=False)\n\n def __init__(self, *args, **kwargs):\n '''Build checkboxes for qualifications dynamically.'''\n super(InstructorsForm, self).__init__(*args, **kwargs)\n self.helper = FormHelper(self)\n self.helper.form_class = 'form-inline'\n self.helper.form_method = 'get'\n self.helper.layout = Layout(\n Div(\n Div(\n 'latitude',\n 'longitude',\n css_class='panel-body'\n ),\n css_class='panel panel-default ',\n ),\n HTML('<p>OR</p>'),\n Div(\n Div(\n 'airport',\n css_class='panel-body'\n ),\n css_class='panel panel-default ',\n ),\n HTML('<p>OR</p>'),\n Div(\n Div(\n 'country',\n css_class='panel-body'\n ),\n css_class='panel panel-default 
',\n ),\n 'lessons',\n FormActions(\n Submit('submit', 'Submit'),\n ),\n )\n\n def clean(self):\n cleaned_data = super(InstructorsForm, self).clean()\n airport = cleaned_data.get('airport')\n lat = cleaned_data.get('latitude')\n long = cleaned_data.get('longitude')\n country = cleaned_data.get('country')\n\n sum = bool(airport) + bool(lat and long) + bool(country)\n # user can specify only one: either airport, or lat&long, or country\n if sum != 1:\n raise forms.ValidationError('Must specify an airport, or latitude'\n ' and longitude, or a country.')\n return cleaned_data\n\n\nclass PersonBulkAddForm(forms.Form):\n '''Represent CSV upload form for bulk adding people.'''\n\n file = forms.FileField()\n\n\nclass SearchForm(forms.Form):\n '''Represent general searching form.'''\n\n term = forms.CharField(label='term',\n max_length=100)\n in_sites = forms.BooleanField(label='in sites',\n required=False,\n initial=True)\n in_events = forms.BooleanField(label='in events',\n required=False,\n initial=True)\n in_persons = forms.BooleanField(label='in persons',\n required=False,\n initial=True)\n in_airports = forms.BooleanField(label='in airports',\n required=False,\n initial=True)\n\n\nclass DebriefForm(forms.Form):\n '''Represent general debrief form.'''\n begin_date = forms.DateField(\n label='Begin date as YYYY-MM-DD',\n input_formats=['%Y-%m-%d', ]\n )\n end_date = forms.DateField(\n label='End date as YYYY-MD-DD',\n input_formats=['%Y-%m-%d', ]\n )\n\n\nclass EventForm(forms.ModelForm):\n\n site = selectable.AutoCompleteSelectField(\n lookup_class=lookups.SiteLookup,\n label='Site',\n required=True,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n organizer = selectable.AutoCompleteSelectField(\n lookup_class=lookups.SiteLookup,\n label='Organizer',\n required=False,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields['start'].help_text = DATE_HELP_TEXT\n self.fields['end'].help_text = DATE_HELP_TEXT\n\n def clean_slug(self):\n # Ensure slug is not an integer value for Event.get_by_ident\n data = self.cleaned_data['slug']\n\n try:\n int(data)\n except ValueError:\n pass\n else:\n raise forms.ValidationError(\"Slug must not be an integer-value.\")\n\n return data\n\n class Meta:\n model = Event\n # reorder fields, don't display 'deleted' field\n fields = ('slug', 'start', 'end', 'site', 'organizer',\n 'tags', 'url', 'reg_key', 'admin_fee', 'invoiced',\n 'attendance', 'notes')\n\n class Media:\n # thanks to this, {{ form.media }} in the template will generate\n # a <link href=\"\"> (for CSS files) or <script src=\"\"> (for JS files)\n js = ('calendar_popup.js', )\n\n\nclass TaskForm(forms.ModelForm):\n\n person = selectable.AutoCompleteSelectField(\n lookup_class=lookups.PersonLookup,\n label='Person',\n required=True,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n class Meta:\n model = Task\n fields = '__all__'\n widgets = {'event': HiddenInput}\n\n\nclass TaskFullForm(TaskForm):\n\n event = selectable.AutoCompleteSelectField(\n lookup_class=lookups.EventLookup,\n label='Event',\n required=True,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n class Meta:\n model = Task\n fields = '__all__'\n\n\nclass PersonForm(forms.ModelForm):\n\n airport = selectable.AutoCompleteSelectField(\n lookup_class=lookups.AirportLookup,\n label='Airport',\n 
required=False,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n lessons = forms.ModelMultipleChoiceField(required=False,\n queryset=Lesson.objects.all())\n\n domains = forms.ModelMultipleChoiceField(\n required=False, queryset=KnowledgeDomain.objects.all()\n )\n\n class Meta:\n model = Person\n # don't display the 'password', 'user_permissions',\n # 'groups' or 'is_superuser' fields\n # + reorder fields\n fields = ['username', 'personal', 'middle', 'family', 'may_contact',\n 'email', 'gender', 'airport', 'affiliation', 'github',\n 'twitter', 'url', 'notes', 'lessons', 'domains']\n\n\nclass PersonPermissionsForm(forms.ModelForm):\n class Meta:\n model = Person\n # only display 'user_permissions', 'groups' and `is_superuser` fields\n fields = [\n 'is_superuser',\n 'user_permissions',\n 'groups',\n ]\n\n\nclass PersonMergeForm(forms.Form):\n\n person_from = selectable.AutoCompleteSelectField(\n lookup_class=lookups.PersonLookup,\n label='Person From',\n required=True,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n person_to = selectable.AutoCompleteSelectField(\n lookup_class=lookups.PersonLookup,\n label='Person To',\n required=True,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n\nclass BadgeAwardForm(forms.ModelForm):\n\n person = selectable.AutoCompleteSelectField(\n lookup_class=lookups.PersonLookup,\n label='Person',\n required=True,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n event = selectable.AutoCompleteSelectField(\n lookup_class=lookups.EventLookup,\n label='Event',\n required=False,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n class Meta:\n model = Award\n fields = '__all__'\n widgets = {'badge': HiddenInput}\n\n\nclass PersonAwardForm(forms.ModelForm):\n\n event = selectable.AutoCompleteSelectField(\n lookup_class=lookups.EventLookup,\n label='Event',\n required=False,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n class Meta:\n model = Award\n fields = '__all__'\n widgets = {'person': HiddenInput}\n\n\nclass PersonTaskForm(forms.ModelForm):\n\n event = selectable.AutoCompleteSelectField(\n lookup_class=lookups.EventLookup,\n label='Event',\n required=True,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n class Meta:\n model = Task\n fields = '__all__'\n widgets = {'person': HiddenInput}\n", "path": "workshops/forms.py"}], "after_files": [{"content": "from django import forms\nfrom django.forms import HiddenInput, CheckboxSelectMultiple\n\nfrom crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import Layout, Div, HTML, Submit, Field\nfrom crispy_forms.bootstrap import FormActions\nfrom django_countries.fields import CountryField\nfrom selectable import forms as selectable\n\nfrom workshops.models import (\n Award, Event, Lesson, Person, Task, KnowledgeDomain,\n)\nfrom workshops import lookups\n\nINSTRUCTORS_NUM = 10 # how many instrutors to return from a search by default\n\nAUTOCOMPLETE_HELP_TEXT = (\n \"Autocomplete field; type characters to view available options, \"\n \"then select desired item from list.\"\n)\n\nDATE_HELP_TEXT = \"Select date using widget, or enter in YYYY-MM-DD format.\"\n\n\nclass BootstrapHelper(FormHelper):\n form_class = 'form-horizontal'\n label_class = 'col-lg-2'\n field_class = 'col-lg-8'\n\n def __init__(self, form=None):\n 
super().__init__(form)\n\n self.attrs['role'] = 'form'\n self.inputs.append(Submit('submit', 'Submit'))\n\n\nclass BootstrapHelperWithAdd(BootstrapHelper):\n def __init__(self, form=None):\n super().__init__(form)\n\n self.inputs[-1] = Submit('submit', 'Add')\n\n\nclass BootstrapHelperFilter(FormHelper):\n form_method = 'get'\n\n def __init__(self, form=None):\n super().__init__(form)\n self.attrs['role'] = 'form'\n self.inputs.append(Submit('', 'Submit'))\n\n\nbootstrap_helper = BootstrapHelper()\nbootstrap_helper_with_add = BootstrapHelperWithAdd()\nbootstrap_helper_filter = BootstrapHelperFilter()\n\n\nclass InstructorsForm(forms.Form):\n '''Represent instructor matching form.'''\n\n latitude = forms.FloatField(label='Latitude',\n min_value=-90.0,\n max_value=90.0,\n required=False)\n longitude = forms.FloatField(label='Longitude',\n min_value=-180.0,\n max_value=180.0,\n required=False)\n airport = selectable.AutoCompleteSelectField(\n lookup_class=lookups.AirportLookup,\n label='Airport',\n required=False,\n widget=selectable.AutoComboboxSelectWidget(\n lookup_class=lookups.AirportLookup,\n ),\n )\n\n country = CountryField().formfield(required=False)\n\n lessons = forms.ModelMultipleChoiceField(queryset=Lesson.objects.all(),\n widget=CheckboxSelectMultiple(),\n required=False)\n\n def __init__(self, *args, **kwargs):\n '''Build checkboxes for qualifications dynamically.'''\n super(InstructorsForm, self).__init__(*args, **kwargs)\n self.helper = FormHelper(self)\n self.helper.form_class = 'form-inline'\n self.helper.form_method = 'get'\n self.helper.layout = Layout(\n Div(\n Div(\n 'latitude',\n 'longitude',\n css_class='panel-body'\n ),\n css_class='panel panel-default ',\n ),\n HTML('<p>OR</p>'),\n Div(\n Div(\n 'airport',\n css_class='panel-body'\n ),\n css_class='panel panel-default ',\n ),\n HTML('<p>OR</p>'),\n Div(\n Div(\n 'country',\n css_class='panel-body'\n ),\n css_class='panel panel-default ',\n ),\n 'lessons',\n FormActions(\n Submit('submit', 'Submit'),\n ),\n )\n\n def clean(self):\n cleaned_data = super(InstructorsForm, self).clean()\n airport = cleaned_data.get('airport')\n lat = cleaned_data.get('latitude')\n long = cleaned_data.get('longitude')\n country = cleaned_data.get('country')\n\n sum = bool(airport) + bool(lat and long) + bool(country)\n # user can specify only one: either airport, or lat&long, or country\n if sum != 1:\n raise forms.ValidationError('Must specify an airport, or latitude'\n ' and longitude, or a country.')\n return cleaned_data\n\n\nclass PersonBulkAddForm(forms.Form):\n '''Represent CSV upload form for bulk adding people.'''\n\n file = forms.FileField()\n\n\nclass SearchForm(forms.Form):\n '''Represent general searching form.'''\n\n term = forms.CharField(label='term',\n max_length=100)\n in_sites = forms.BooleanField(label='in sites',\n required=False,\n initial=True)\n in_events = forms.BooleanField(label='in events',\n required=False,\n initial=True)\n in_persons = forms.BooleanField(label='in persons',\n required=False,\n initial=True)\n in_airports = forms.BooleanField(label='in airports',\n required=False,\n initial=True)\n\n\nclass DebriefForm(forms.Form):\n '''Represent general debrief form.'''\n begin_date = forms.DateField(\n label='Begin date as YYYY-MM-DD',\n input_formats=['%Y-%m-%d', ]\n )\n end_date = forms.DateField(\n label='End date as YYYY-MD-DD',\n input_formats=['%Y-%m-%d', ]\n )\n\n\nclass EventForm(forms.ModelForm):\n\n site = selectable.AutoCompleteSelectField(\n lookup_class=lookups.SiteLookup,\n label='Site',\n 
required=True,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n organizer = selectable.AutoCompleteSelectField(\n lookup_class=lookups.SiteLookup,\n label='Organizer',\n required=False,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields['start'].help_text = DATE_HELP_TEXT\n self.fields['end'].help_text = DATE_HELP_TEXT\n\n def clean_slug(self):\n # Ensure slug is not an integer value for Event.get_by_ident\n data = self.cleaned_data['slug']\n\n try:\n int(data)\n except ValueError:\n pass\n else:\n raise forms.ValidationError(\"Slug must not be an integer-value.\")\n\n return data\n\n def clean_end(self):\n \"\"\"Ensure end >= start.\"\"\"\n start = self.cleaned_data['start']\n end = self.cleaned_data['end']\n\n if start and end and end < start:\n raise forms.ValidationError('Must not be earlier than start date.')\n return end\n\n class Meta:\n model = Event\n # reorder fields, don't display 'deleted' field\n fields = ('slug', 'start', 'end', 'site', 'organizer',\n 'tags', 'url', 'reg_key', 'admin_fee', 'invoiced',\n 'attendance', 'notes')\n\n class Media:\n # thanks to this, {{ form.media }} in the template will generate\n # a <link href=\"\"> (for CSS files) or <script src=\"\"> (for JS files)\n js = ('calendar_popup.js', )\n\n\nclass TaskForm(forms.ModelForm):\n\n person = selectable.AutoCompleteSelectField(\n lookup_class=lookups.PersonLookup,\n label='Person',\n required=True,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n class Meta:\n model = Task\n fields = '__all__'\n widgets = {'event': HiddenInput}\n\n\nclass TaskFullForm(TaskForm):\n\n event = selectable.AutoCompleteSelectField(\n lookup_class=lookups.EventLookup,\n label='Event',\n required=True,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n class Meta:\n model = Task\n fields = '__all__'\n\n\nclass PersonForm(forms.ModelForm):\n\n airport = selectable.AutoCompleteSelectField(\n lookup_class=lookups.AirportLookup,\n label='Airport',\n required=False,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n lessons = forms.ModelMultipleChoiceField(required=False,\n queryset=Lesson.objects.all())\n\n domains = forms.ModelMultipleChoiceField(\n required=False, queryset=KnowledgeDomain.objects.all()\n )\n\n class Meta:\n model = Person\n # don't display the 'password', 'user_permissions',\n # 'groups' or 'is_superuser' fields\n # + reorder fields\n fields = ['username', 'personal', 'middle', 'family', 'may_contact',\n 'email', 'gender', 'airport', 'affiliation', 'github',\n 'twitter', 'url', 'notes', 'lessons', 'domains']\n\n\nclass PersonPermissionsForm(forms.ModelForm):\n class Meta:\n model = Person\n # only display 'user_permissions', 'groups' and `is_superuser` fields\n fields = [\n 'is_superuser',\n 'user_permissions',\n 'groups',\n ]\n\n\nclass PersonMergeForm(forms.Form):\n\n person_from = selectable.AutoCompleteSelectField(\n lookup_class=lookups.PersonLookup,\n label='Person From',\n required=True,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n person_to = selectable.AutoCompleteSelectField(\n lookup_class=lookups.PersonLookup,\n label='Person To',\n required=True,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n\nclass BadgeAwardForm(forms.ModelForm):\n\n 
person = selectable.AutoCompleteSelectField(\n lookup_class=lookups.PersonLookup,\n label='Person',\n required=True,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n event = selectable.AutoCompleteSelectField(\n lookup_class=lookups.EventLookup,\n label='Event',\n required=False,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n class Meta:\n model = Award\n fields = '__all__'\n widgets = {'badge': HiddenInput}\n\n\nclass PersonAwardForm(forms.ModelForm):\n\n event = selectable.AutoCompleteSelectField(\n lookup_class=lookups.EventLookup,\n label='Event',\n required=False,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n class Meta:\n model = Award\n fields = '__all__'\n widgets = {'person': HiddenInput}\n\n\nclass PersonTaskForm(forms.ModelForm):\n\n event = selectable.AutoCompleteSelectField(\n lookup_class=lookups.EventLookup,\n label='Event',\n required=True,\n help_text=AUTOCOMPLETE_HELP_TEXT,\n widget=selectable.AutoComboboxSelectWidget,\n )\n\n class Meta:\n model = Task\n fields = '__all__'\n widgets = {'person': HiddenInput}\n", "path": "workshops/forms.py"}]} | 3,543 | 137 |
gh_patches_debug_7041 | rasdani/github-patches | git_diff | mozilla__bugbug-2806 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change needsdiagnosis model to consider moved milestone as `needsdiagnosis = True`
We have observed that the needsdiagnosis model classifies certain issues that potentially need diagnosis as `needsdiagnosis = False`. While this is expected, I think it might be getting worse, as the issues data is unbalanced and has many more data points for `needsdiagnosis = False`. We've started a discussion in https://github.com/mozilla/webcompat-team-okrs/issues/256
It's worth mentioning that in the [recent sample of 22 issues](https://docs.google.com/spreadsheets/d/1F9vcSpLQ_hNBeZinsytGXlfXpJLW6vh7C0BJYtd9hIY/edit?pli=1#gid=1640243023) most of the issues that looked like false negatives didn't end up needing diagnosis (not reproducible, out of the scope of the project, etc.), so they're true negatives (prediction was correct for 21 of them). We'll continue tracking them to get a more representative sample.
As an example, issues that looked like they needed diagnosis but, in the end, did not (prediction was correct):
https://github.com/webcompat/web-bugs/issues/100746
https://github.com/webcompat/web-bugs/issues/100676
https://github.com/webcompat/web-bugs/issues/100687
Issues that are false negatives (prediction was incorrect):
https://github.com/webcompat/web-bugs/issues/100495
https://github.com/webcompat/web-bugs/issues/100645
I was thinking of including an additional set of issues that will contribute to the `needsdiagnosis = True` pool. We have recently added a `moved` [milestone](https://github.com/webcompat/web-bugs/issues?q=is%3Aissue+milestone%3Amoved+is%3Aclosed). These issues often don't need diagnosis and are moved to Bugzilla or elsewhere, but their content should contribute to `needsdiagnosis = True` rather than `False`.
--- END ISSUE ---
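For context, the change the issue suggests amounts to treating a `moved` milestone the same way as a `needsdiagnosis` milestone when labels are built. The sketch below is hypothetical and independent of the repository code that follows; `label_issue` and `POSITIVE_MILESTONES` are made-up names, and class `0` mirrors the convention used by the model's `get_labels()` (0 = moved to needsdiagnosis).

```python
# Hypothetical sketch: count both "needsdiagnosis" and "moved" milestones as
# positive ("needs diagnosis") examples when labelling issues.
POSITIVE_MILESTONES = {"needsdiagnosis", "moved"}  # assumption: both map to class 0


def label_issue(issue: dict) -> int:
    """Return 0 if any event milestoned the issue as needsdiagnosis/moved, else 1."""
    for event in issue.get("events", []):
        if (
            event.get("event") == "milestoned"
            and event.get("milestone", {}).get("title") in POSITIVE_MILESTONES
        ):
            return 0
    return 1


# Toy usage: an issue that was milestoned as "moved" is labelled 0.
toy_issue = {"number": 1, "events": [{"event": "milestoned", "milestone": {"title": "moved"}}]}
assert label_issue(toy_issue) == 0
```

This keeps the unbalanced-data concern in view: issues closed as `moved` add positive examples without changing how unmilestoned issues are labelled.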
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bugbug/models/needsdiagnosis.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import logging
7
8 import xgboost
9 from sklearn.compose import ColumnTransformer
10 from sklearn.pipeline import Pipeline
11
12 from bugbug import feature_cleanup, issue_features, utils
13 from bugbug.model import IssueModel
14
15 logger = logging.getLogger(__name__)
16
17
18 class NeedsDiagnosisModel(IssueModel):
19 def __init__(self, lemmatization=False):
20 IssueModel.__init__(
21 self, owner="webcompat", repo="web-bugs", lemmatization=lemmatization
22 )
23
24 self.calculate_importance = False
25
26 feature_extractors = []
27
28 cleanup_functions = [
29 feature_cleanup.fileref(),
30 feature_cleanup.url(),
31 feature_cleanup.synonyms(),
32 ]
33
34 self.extraction_pipeline = Pipeline(
35 [
36 (
37 "issue_extractor",
38 issue_features.IssueExtractor(
39 feature_extractors, cleanup_functions, rollback=True
40 ),
41 ),
42 (
43 "union",
44 ColumnTransformer(
45 [
46 ("title", self.text_vectorizer(min_df=0.0001), "title"),
47 (
48 "first_comment",
49 self.text_vectorizer(min_df=0.0001),
50 "first_comment",
51 ),
52 ]
53 ),
54 ),
55 ]
56 )
57
58 self.clf = xgboost.XGBClassifier(n_jobs=utils.get_physical_cpu_count())
59 self.clf.set_params(predictor="cpu_predictor")
60
61 def get_labels(self):
62 classes = {}
63
64 for issue in self.github.get_issues():
65 # Skip issues with empty title or body
66 if issue["title"] is None or issue["body"] is None:
67 continue
68
69 # Skip issues that are not moderated yet as they don't have a meaningful title or body
70 if issue["title"] == "In the moderation queue.":
71 continue
72
73 for event in issue["events"]:
74 if (
75 event["event"] == "milestoned"
76 and event["milestone"]["title"] == "needsdiagnosis"
77 ):
78 classes[issue["number"]] = 0
79
80 if issue["number"] not in classes:
81 classes[issue["number"]] = 1
82
83 logger.info(
84 f"{sum(1 for label in classes.values() if label == 1)} issues have not been moved to needsdiagnosis"
85 )
86 logger.info(
87 f"{sum(1 for label in classes.values() if label == 0)} issues have been moved to needsdiagnosis"
88 )
89
90 return classes, [0, 1]
91
92 def get_feature_names(self):
93 return self.extraction_pipeline.named_steps["union"].get_feature_names()
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bugbug/models/needsdiagnosis.py b/bugbug/models/needsdiagnosis.py
--- a/bugbug/models/needsdiagnosis.py
+++ b/bugbug/models/needsdiagnosis.py
@@ -71,9 +71,9 @@
continue
for event in issue["events"]:
- if (
- event["event"] == "milestoned"
- and event["milestone"]["title"] == "needsdiagnosis"
+ if event["event"] == "milestoned" and (
+ event["milestone"]["title"] == "needsdiagnosis"
+ or event["milestone"]["title"] == "moved"
):
classes[issue["number"]] = 0
| {"golden_diff": "diff --git a/bugbug/models/needsdiagnosis.py b/bugbug/models/needsdiagnosis.py\n--- a/bugbug/models/needsdiagnosis.py\n+++ b/bugbug/models/needsdiagnosis.py\n@@ -71,9 +71,9 @@\n continue\n \n for event in issue[\"events\"]:\n- if (\n- event[\"event\"] == \"milestoned\"\n- and event[\"milestone\"][\"title\"] == \"needsdiagnosis\"\n+ if event[\"event\"] == \"milestoned\" and (\n+ event[\"milestone\"][\"title\"] == \"needsdiagnosis\"\n+ or event[\"milestone\"][\"title\"] == \"moved\"\n ):\n classes[issue[\"number\"]] = 0\n", "issue": "Change needsdiagnosis model to consider moved milestone as `needsdiagnosis = True`\nWe have observed that needsdiagnosis model classifies certain issues that potentially need diagnosis as `needsdiagnosis = False`. While this is expected, I think it might be getting worse, as the issues data is unbalanced and has much more data points for `needsdiagnosis = False`. We've started a discussion in https://github.com/mozilla/webcompat-team-okrs/issues/256 \r\n\r\nIt's worth mentioning that in the [recent sample of 22 issues](https://docs.google.com/spreadsheets/d/1F9vcSpLQ_hNBeZinsytGXlfXpJLW6vh7C0BJYtd9hIY/edit?pli=1#gid=1640243023) most of the issues that looked like false negatives didn't end up needing diagnosis (not reproducible, out of the scope of the project, etc.), so they're true negatives (prediction was correct for 21 of them). We'll continue tracking them to get a more representative sample.\r\n\r\nAs an example, issues that looked like they need diagnosis, but in the end, they didn't (prediction was correct):\r\nhttps://github.com/webcompat/web-bugs/issues/100746\r\nhttps://github.com/webcompat/web-bugs/issues/100676\r\nhttps://github.com/webcompat/web-bugs/issues/100687\r\n\r\nIssues that are false negatives (prediction was incorrect): \r\nhttps://github.com/webcompat/web-bugs/issues/100495\r\nhttps://github.com/webcompat/web-bugs/issues/100645\r\n\r\nI was thinking of including an additional set of issues that will contribute to `needsdiagnosis = True` pool. We have recently added a `moved` [milestone](https://github.com/webcompat/web-bugs/issues?q=is%3Aissue+milestone%3Amoved+is%3Aclosed). These issues often don't need diagnosis and are moved to bugzilla or elsewhere, but their content should be contributed to `needsdiagnosis = True` rather than false. \r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. 
If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport logging\n\nimport xgboost\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.pipeline import Pipeline\n\nfrom bugbug import feature_cleanup, issue_features, utils\nfrom bugbug.model import IssueModel\n\nlogger = logging.getLogger(__name__)\n\n\nclass NeedsDiagnosisModel(IssueModel):\n def __init__(self, lemmatization=False):\n IssueModel.__init__(\n self, owner=\"webcompat\", repo=\"web-bugs\", lemmatization=lemmatization\n )\n\n self.calculate_importance = False\n\n feature_extractors = []\n\n cleanup_functions = [\n feature_cleanup.fileref(),\n feature_cleanup.url(),\n feature_cleanup.synonyms(),\n ]\n\n self.extraction_pipeline = Pipeline(\n [\n (\n \"issue_extractor\",\n issue_features.IssueExtractor(\n feature_extractors, cleanup_functions, rollback=True\n ),\n ),\n (\n \"union\",\n ColumnTransformer(\n [\n (\"title\", self.text_vectorizer(min_df=0.0001), \"title\"),\n (\n \"first_comment\",\n self.text_vectorizer(min_df=0.0001),\n \"first_comment\",\n ),\n ]\n ),\n ),\n ]\n )\n\n self.clf = xgboost.XGBClassifier(n_jobs=utils.get_physical_cpu_count())\n self.clf.set_params(predictor=\"cpu_predictor\")\n\n def get_labels(self):\n classes = {}\n\n for issue in self.github.get_issues():\n # Skip issues with empty title or body\n if issue[\"title\"] is None or issue[\"body\"] is None:\n continue\n\n # Skip issues that are not moderated yet as they don't have a meaningful title or body\n if issue[\"title\"] == \"In the moderation queue.\":\n continue\n\n for event in issue[\"events\"]:\n if (\n event[\"event\"] == \"milestoned\"\n and event[\"milestone\"][\"title\"] == \"needsdiagnosis\"\n ):\n classes[issue[\"number\"]] = 0\n\n if issue[\"number\"] not in classes:\n classes[issue[\"number\"]] = 1\n\n logger.info(\n f\"{sum(1 for label in classes.values() if label == 1)} issues have not been moved to needsdiagnosis\"\n )\n logger.info(\n f\"{sum(1 for label in classes.values() if label == 0)} issues have been moved to needsdiagnosis\"\n )\n\n return classes, [0, 1]\n\n def get_feature_names(self):\n return self.extraction_pipeline.named_steps[\"union\"].get_feature_names()\n", "path": "bugbug/models/needsdiagnosis.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. 
If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport logging\n\nimport xgboost\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.pipeline import Pipeline\n\nfrom bugbug import feature_cleanup, issue_features, utils\nfrom bugbug.model import IssueModel\n\nlogger = logging.getLogger(__name__)\n\n\nclass NeedsDiagnosisModel(IssueModel):\n def __init__(self, lemmatization=False):\n IssueModel.__init__(\n self, owner=\"webcompat\", repo=\"web-bugs\", lemmatization=lemmatization\n )\n\n self.calculate_importance = False\n\n feature_extractors = []\n\n cleanup_functions = [\n feature_cleanup.fileref(),\n feature_cleanup.url(),\n feature_cleanup.synonyms(),\n ]\n\n self.extraction_pipeline = Pipeline(\n [\n (\n \"issue_extractor\",\n issue_features.IssueExtractor(\n feature_extractors, cleanup_functions, rollback=True\n ),\n ),\n (\n \"union\",\n ColumnTransformer(\n [\n (\"title\", self.text_vectorizer(min_df=0.0001), \"title\"),\n (\n \"first_comment\",\n self.text_vectorizer(min_df=0.0001),\n \"first_comment\",\n ),\n ]\n ),\n ),\n ]\n )\n\n self.clf = xgboost.XGBClassifier(n_jobs=utils.get_physical_cpu_count())\n self.clf.set_params(predictor=\"cpu_predictor\")\n\n def get_labels(self):\n classes = {}\n\n for issue in self.github.get_issues():\n # Skip issues with empty title or body\n if issue[\"title\"] is None or issue[\"body\"] is None:\n continue\n\n # Skip issues that are not moderated yet as they don't have a meaningful title or body\n if issue[\"title\"] == \"In the moderation queue.\":\n continue\n\n for event in issue[\"events\"]:\n if event[\"event\"] == \"milestoned\" and (\n event[\"milestone\"][\"title\"] == \"needsdiagnosis\"\n or event[\"milestone\"][\"title\"] == \"moved\"\n ):\n classes[issue[\"number\"]] = 0\n\n if issue[\"number\"] not in classes:\n classes[issue[\"number\"]] = 1\n\n logger.info(\n f\"{sum(1 for label in classes.values() if label == 1)} issues have not been moved to needsdiagnosis\"\n )\n logger.info(\n f\"{sum(1 for label in classes.values() if label == 0)} issues have been moved to needsdiagnosis\"\n )\n\n return classes, [0, 1]\n\n def get_feature_names(self):\n return self.extraction_pipeline.named_steps[\"union\"].get_feature_names()\n", "path": "bugbug/models/needsdiagnosis.py"}]} | 1,521 | 160 |
gh_patches_debug_6599 | rasdani/github-patches | git_diff | svthalia__concrexit-2585 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AttributeError: 'Event' object has no attribute 'number_regs'
Sentry Issue: [CONCREXIT-HC](https://sentry.io/organizations/thalia/issues/3639420824/?referrer=github_integration)
```
AttributeError: 'Event' object has no attribute 'number_regs'
(11 additional frame(s) were not displayed)
...
File "rest_framework/serializers.py", line 253, in data
self._data = self.to_representation(self.instance)
File "rest_framework/serializers.py", line 522, in to_representation
ret[field.field_name] = field.to_representation(attribute)
File "rest_framework/serializers.py", line 522, in to_representation
ret[field.field_name] = field.to_representation(attribute)
File "rest_framework/fields.py", line 1838, in to_representation
return method(value)
File "events/api/v2/serializers/event.py", line 86, in _num_participants
participant_count = instance.number_regs
```
--- END ISSUE ---
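The traceback shows the serializer reading `instance.number_regs`, an annotation that is evidently not attached to every `Event` object that reaches `_num_participants`. As a rough sketch of the safer approach (the `participants` relation name is taken from the patch later in this entry, not from independent knowledge of the codebase):
```python
def num_participants(event):
    # Hypothetical sketch: count registrations via the relation instead of the
    # `number_regs` annotation, which is not attached to every Event instance
    # that reaches the serializer (hence the AttributeError in the traceback).
    return event.participants.count()
```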
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/events/api/v2/serializers/event.py`
Content:
```
1 from rest_framework import serializers
2
3 from activemembers.api.v2.serializers.member_group import MemberGroupSerializer
4 from announcements.api.v2.serializers import SlideSerializer
5 from documents.api.v2.serializers.document import DocumentSerializer
6 from events import services
7 from events.api.v2.serializers.event_registration import EventRegistrationSerializer
8 from events.models import Event, EventRegistration
9 from payments.api.v2.serializers.payment_amount import PaymentAmountSerializer
10 from thaliawebsite.api.v2.serializers import CleanedHTMLSerializer
11 from thaliawebsite.api.v2.serializers.cleaned_model_serializer import (
12 CleanedModelSerializer,
13 )
14 from utils.snippets import create_google_maps_url
15
16
17 class EventSerializer(CleanedModelSerializer):
18 """Serializer for events."""
19
20 class Meta:
21 model = Event
22 fields = (
23 "pk",
24 "title",
25 "description",
26 "caption",
27 "start",
28 "end",
29 "category",
30 "registration_start",
31 "registration_end",
32 "cancel_deadline",
33 "optional_registrations",
34 "location",
35 "price",
36 "fine",
37 "num_participants",
38 "max_participants",
39 "no_registration_message",
40 "cancel_too_late_message",
41 "has_fields",
42 "food_event",
43 "maps_url",
44 "user_permissions",
45 "user_registration",
46 "organisers",
47 "slide",
48 "documents",
49 )
50
51 description = CleanedHTMLSerializer()
52 organisers = MemberGroupSerializer(many=True)
53 user_registration = serializers.SerializerMethodField("_user_registration")
54 num_participants = serializers.SerializerMethodField("_num_participants")
55 maps_url = serializers.SerializerMethodField("_maps_url")
56 price = PaymentAmountSerializer()
57 fine = PaymentAmountSerializer()
58 slide = SlideSerializer()
59 documents = DocumentSerializer(many=True)
60 user_permissions = serializers.SerializerMethodField("_user_permissions")
61
62 def _user_registration(self, instance):
63 try:
64 if self.context["request"].member:
65 reg = instance.eventregistration_set.get(
66 member=self.context["request"].member
67 )
68 return EventRegistrationSerializer(
69 reg,
70 context=self.context,
71 fields=(
72 "pk",
73 "present",
74 "queue_position",
75 "is_cancelled",
76 "is_late_cancellation",
77 "date",
78 "payment",
79 ),
80 ).data
81 except EventRegistration.DoesNotExist:
82 pass
83 return None
84
85 def _num_participants(self, instance):
86 participant_count = instance.number_regs
87 if instance.max_participants and participant_count > instance.max_participants:
88 return instance.max_participants
89 return participant_count
90
91 def _user_permissions(self, instance):
92 member = self.context["request"].member
93 return services.event_permissions(member, instance)
94
95 def _maps_url(self, instance):
96 return create_google_maps_url(instance.map_location, zoom=13, size="450x250")
97
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/events/api/v2/serializers/event.py b/website/events/api/v2/serializers/event.py
--- a/website/events/api/v2/serializers/event.py
+++ b/website/events/api/v2/serializers/event.py
@@ -83,10 +83,7 @@
return None
def _num_participants(self, instance):
- participant_count = instance.number_regs
- if instance.max_participants and participant_count > instance.max_participants:
- return instance.max_participants
- return participant_count
+ return instance.participants.count()
def _user_permissions(self, instance):
member = self.context["request"].member
| {"golden_diff": "diff --git a/website/events/api/v2/serializers/event.py b/website/events/api/v2/serializers/event.py\n--- a/website/events/api/v2/serializers/event.py\n+++ b/website/events/api/v2/serializers/event.py\n@@ -83,10 +83,7 @@\n return None\n \n def _num_participants(self, instance):\n- participant_count = instance.number_regs\n- if instance.max_participants and participant_count > instance.max_participants:\n- return instance.max_participants\n- return participant_count\n+ return instance.participants.count()\n \n def _user_permissions(self, instance):\n member = self.context[\"request\"].member\n", "issue": "AttributeError: 'Event' object has no attribute 'number_regs'\nSentry Issue: [CONCREXIT-HC](https://sentry.io/organizations/thalia/issues/3639420824/?referrer=github_integration)\n\n```\nAttributeError: 'Event' object has no attribute 'number_regs'\n(11 additional frame(s) were not displayed)\n...\n File \"rest_framework/serializers.py\", line 253, in data\n self._data = self.to_representation(self.instance)\n File \"rest_framework/serializers.py\", line 522, in to_representation\n ret[field.field_name] = field.to_representation(attribute)\n File \"rest_framework/serializers.py\", line 522, in to_representation\n ret[field.field_name] = field.to_representation(attribute)\n File \"rest_framework/fields.py\", line 1838, in to_representation\n return method(value)\n File \"events/api/v2/serializers/event.py\", line 86, in _num_participants\n participant_count = instance.number_regs\n```\n", "before_files": [{"content": "from rest_framework import serializers\n\nfrom activemembers.api.v2.serializers.member_group import MemberGroupSerializer\nfrom announcements.api.v2.serializers import SlideSerializer\nfrom documents.api.v2.serializers.document import DocumentSerializer\nfrom events import services\nfrom events.api.v2.serializers.event_registration import EventRegistrationSerializer\nfrom events.models import Event, EventRegistration\nfrom payments.api.v2.serializers.payment_amount import PaymentAmountSerializer\nfrom thaliawebsite.api.v2.serializers import CleanedHTMLSerializer\nfrom thaliawebsite.api.v2.serializers.cleaned_model_serializer import (\n CleanedModelSerializer,\n)\nfrom utils.snippets import create_google_maps_url\n\n\nclass EventSerializer(CleanedModelSerializer):\n \"\"\"Serializer for events.\"\"\"\n\n class Meta:\n model = Event\n fields = (\n \"pk\",\n \"title\",\n \"description\",\n \"caption\",\n \"start\",\n \"end\",\n \"category\",\n \"registration_start\",\n \"registration_end\",\n \"cancel_deadline\",\n \"optional_registrations\",\n \"location\",\n \"price\",\n \"fine\",\n \"num_participants\",\n \"max_participants\",\n \"no_registration_message\",\n \"cancel_too_late_message\",\n \"has_fields\",\n \"food_event\",\n \"maps_url\",\n \"user_permissions\",\n \"user_registration\",\n \"organisers\",\n \"slide\",\n \"documents\",\n )\n\n description = CleanedHTMLSerializer()\n organisers = MemberGroupSerializer(many=True)\n user_registration = serializers.SerializerMethodField(\"_user_registration\")\n num_participants = serializers.SerializerMethodField(\"_num_participants\")\n maps_url = serializers.SerializerMethodField(\"_maps_url\")\n price = PaymentAmountSerializer()\n fine = PaymentAmountSerializer()\n slide = SlideSerializer()\n documents = DocumentSerializer(many=True)\n user_permissions = serializers.SerializerMethodField(\"_user_permissions\")\n\n def _user_registration(self, instance):\n try:\n if self.context[\"request\"].member:\n reg = 
instance.eventregistration_set.get(\n member=self.context[\"request\"].member\n )\n return EventRegistrationSerializer(\n reg,\n context=self.context,\n fields=(\n \"pk\",\n \"present\",\n \"queue_position\",\n \"is_cancelled\",\n \"is_late_cancellation\",\n \"date\",\n \"payment\",\n ),\n ).data\n except EventRegistration.DoesNotExist:\n pass\n return None\n\n def _num_participants(self, instance):\n participant_count = instance.number_regs\n if instance.max_participants and participant_count > instance.max_participants:\n return instance.max_participants\n return participant_count\n\n def _user_permissions(self, instance):\n member = self.context[\"request\"].member\n return services.event_permissions(member, instance)\n\n def _maps_url(self, instance):\n return create_google_maps_url(instance.map_location, zoom=13, size=\"450x250\")\n", "path": "website/events/api/v2/serializers/event.py"}], "after_files": [{"content": "from rest_framework import serializers\n\nfrom activemembers.api.v2.serializers.member_group import MemberGroupSerializer\nfrom announcements.api.v2.serializers import SlideSerializer\nfrom documents.api.v2.serializers.document import DocumentSerializer\nfrom events import services\nfrom events.api.v2.serializers.event_registration import EventRegistrationSerializer\nfrom events.models import Event, EventRegistration\nfrom payments.api.v2.serializers.payment_amount import PaymentAmountSerializer\nfrom thaliawebsite.api.v2.serializers import CleanedHTMLSerializer\nfrom thaliawebsite.api.v2.serializers.cleaned_model_serializer import (\n CleanedModelSerializer,\n)\nfrom utils.snippets import create_google_maps_url\n\n\nclass EventSerializer(CleanedModelSerializer):\n \"\"\"Serializer for events.\"\"\"\n\n class Meta:\n model = Event\n fields = (\n \"pk\",\n \"title\",\n \"description\",\n \"caption\",\n \"start\",\n \"end\",\n \"category\",\n \"registration_start\",\n \"registration_end\",\n \"cancel_deadline\",\n \"optional_registrations\",\n \"location\",\n \"price\",\n \"fine\",\n \"num_participants\",\n \"max_participants\",\n \"no_registration_message\",\n \"cancel_too_late_message\",\n \"has_fields\",\n \"food_event\",\n \"maps_url\",\n \"user_permissions\",\n \"user_registration\",\n \"organisers\",\n \"slide\",\n \"documents\",\n )\n\n description = CleanedHTMLSerializer()\n organisers = MemberGroupSerializer(many=True)\n user_registration = serializers.SerializerMethodField(\"_user_registration\")\n num_participants = serializers.SerializerMethodField(\"_num_participants\")\n maps_url = serializers.SerializerMethodField(\"_maps_url\")\n price = PaymentAmountSerializer()\n fine = PaymentAmountSerializer()\n slide = SlideSerializer()\n documents = DocumentSerializer(many=True)\n user_permissions = serializers.SerializerMethodField(\"_user_permissions\")\n\n def _user_registration(self, instance):\n try:\n if self.context[\"request\"].member:\n reg = instance.eventregistration_set.get(\n member=self.context[\"request\"].member\n )\n return EventRegistrationSerializer(\n reg,\n context=self.context,\n fields=(\n \"pk\",\n \"present\",\n \"queue_position\",\n \"is_cancelled\",\n \"is_late_cancellation\",\n \"date\",\n \"payment\",\n ),\n ).data\n except EventRegistration.DoesNotExist:\n pass\n return None\n\n def _num_participants(self, instance):\n return instance.participants.count()\n\n def _user_permissions(self, instance):\n member = self.context[\"request\"].member\n return services.event_permissions(member, instance)\n\n def _maps_url(self, instance):\n return 
create_google_maps_url(instance.map_location, zoom=13, size=\"450x250\")\n", "path": "website/events/api/v2/serializers/event.py"}]} | 1,301 | 150 |
gh_patches_debug_2279 | rasdani/github-patches | git_diff | geopandas__geopandas-648 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Descartes dependency
In the docs, geopandas lists descartes and matplotlib as optional dependencies. However, descartes is listed as an install_requires in the setup.py.
One of the two should be updated. I'd prefer to be able to pip install geopandas without installing matplotlib.
--- END ISSUE ---
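For context, a common way to keep plotting support optional at the packaging level (a generic sketch, not necessarily what the maintainers chose; the patch in this entry simply drops `descartes` from `install_requires`) is an `extras_require` group:
```python
from setuptools import setup

# Illustrative setup.py excerpt only, not the project's actual configuration:
# plotting libraries stay out of install_requires and are installed on demand
# with `pip install geopandas[plotting]`.
setup(
    name="geopandas",
    install_requires=["pandas", "shapely", "fiona", "pyproj"],
    extras_require={"plotting": ["matplotlib", "descartes"]},
)
```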
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env/python
2 """Installation script
3
4 """
5
6 import os
7
8 try:
9 from setuptools import setup
10 except ImportError:
11 from distutils.core import setup
12
13 import versioneer
14
15 LONG_DESCRIPTION = """GeoPandas is a project to add support for geographic data to
16 `pandas`_ objects.
17
18 The goal of GeoPandas is to make working with geospatial data in
19 python easier. It combines the capabilities of `pandas`_ and `shapely`_,
20 providing geospatial operations in pandas and a high-level interface
21 to multiple geometries to shapely. GeoPandas enables you to easily do
22 operations in python that would otherwise require a spatial database
23 such as PostGIS.
24
25 .. _pandas: http://pandas.pydata.org
26 .. _shapely: http://toblerity.github.io/shapely
27 """
28
29 if os.environ.get('READTHEDOCS', False) == 'True':
30 INSTALL_REQUIRES = []
31 else:
32 INSTALL_REQUIRES = ['pandas', 'shapely', 'fiona', 'descartes', 'pyproj']
33
34 # get all data dirs in the datasets module
35 data_files = []
36
37 for item in os.listdir("geopandas/datasets"):
38 if not item.startswith('__'):
39 if os.path.isdir(os.path.join("geopandas/datasets/", item)):
40 data_files.append(os.path.join("datasets", item, '*'))
41 elif item.endswith('.zip'):
42 data_files.append(os.path.join("datasets", item))
43
44
45 setup(name='geopandas',
46 version=versioneer.get_version(),
47 description='Geographic pandas extensions',
48 license='BSD',
49 author='GeoPandas contributors',
50 author_email='[email protected]',
51 url='http://geopandas.org',
52 long_description=LONG_DESCRIPTION,
53 packages=['geopandas', 'geopandas.io', 'geopandas.tools',
54 'geopandas.datasets',
55 'geopandas.tests', 'geopandas.tools.tests'],
56 package_data={'geopandas': data_files},
57 install_requires=INSTALL_REQUIRES,
58 cmdclass=versioneer.get_cmdclass())
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -29,7 +29,7 @@
if os.environ.get('READTHEDOCS', False) == 'True':
INSTALL_REQUIRES = []
else:
- INSTALL_REQUIRES = ['pandas', 'shapely', 'fiona', 'descartes', 'pyproj']
+ INSTALL_REQUIRES = ['pandas', 'shapely', 'fiona', 'pyproj']
# get all data dirs in the datasets module
data_files = []
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -29,7 +29,7 @@\n if os.environ.get('READTHEDOCS', False) == 'True':\n INSTALL_REQUIRES = []\n else:\n- INSTALL_REQUIRES = ['pandas', 'shapely', 'fiona', 'descartes', 'pyproj']\n+ INSTALL_REQUIRES = ['pandas', 'shapely', 'fiona', 'pyproj']\n \n # get all data dirs in the datasets module\n data_files = []\n", "issue": "Descartes dependency\nIn the docs, geopandas lists descartes and matplotlib as optional dependencies. However, descartes is listed as an install_requires in the setup.py.\r\n\r\nOne of the two should be updated. I'd prefer to be able to pip install geopandas without installing matplotlib.\n", "before_files": [{"content": "#!/usr/bin/env/python\n\"\"\"Installation script\n\n\"\"\"\n\nimport os\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nimport versioneer\n\nLONG_DESCRIPTION = \"\"\"GeoPandas is a project to add support for geographic data to\n`pandas`_ objects.\n\nThe goal of GeoPandas is to make working with geospatial data in\npython easier. It combines the capabilities of `pandas`_ and `shapely`_,\nproviding geospatial operations in pandas and a high-level interface\nto multiple geometries to shapely. GeoPandas enables you to easily do\noperations in python that would otherwise require a spatial database\nsuch as PostGIS.\n\n.. _pandas: http://pandas.pydata.org\n.. _shapely: http://toblerity.github.io/shapely\n\"\"\"\n\nif os.environ.get('READTHEDOCS', False) == 'True':\n INSTALL_REQUIRES = []\nelse:\n INSTALL_REQUIRES = ['pandas', 'shapely', 'fiona', 'descartes', 'pyproj']\n\n# get all data dirs in the datasets module\ndata_files = []\n\nfor item in os.listdir(\"geopandas/datasets\"):\n if not item.startswith('__'):\n if os.path.isdir(os.path.join(\"geopandas/datasets/\", item)):\n data_files.append(os.path.join(\"datasets\", item, '*'))\n elif item.endswith('.zip'):\n data_files.append(os.path.join(\"datasets\", item))\n\n\nsetup(name='geopandas',\n version=versioneer.get_version(),\n description='Geographic pandas extensions',\n license='BSD',\n author='GeoPandas contributors',\n author_email='[email protected]',\n url='http://geopandas.org',\n long_description=LONG_DESCRIPTION,\n packages=['geopandas', 'geopandas.io', 'geopandas.tools',\n 'geopandas.datasets',\n 'geopandas.tests', 'geopandas.tools.tests'],\n package_data={'geopandas': data_files},\n install_requires=INSTALL_REQUIRES,\n cmdclass=versioneer.get_cmdclass())\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env/python\n\"\"\"Installation script\n\n\"\"\"\n\nimport os\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nimport versioneer\n\nLONG_DESCRIPTION = \"\"\"GeoPandas is a project to add support for geographic data to\n`pandas`_ objects.\n\nThe goal of GeoPandas is to make working with geospatial data in\npython easier. It combines the capabilities of `pandas`_ and `shapely`_,\nproviding geospatial operations in pandas and a high-level interface\nto multiple geometries to shapely. GeoPandas enables you to easily do\noperations in python that would otherwise require a spatial database\nsuch as PostGIS.\n\n.. _pandas: http://pandas.pydata.org\n.. 
_shapely: http://toblerity.github.io/shapely\n\"\"\"\n\nif os.environ.get('READTHEDOCS', False) == 'True':\n INSTALL_REQUIRES = []\nelse:\n INSTALL_REQUIRES = ['pandas', 'shapely', 'fiona', 'pyproj']\n\n# get all data dirs in the datasets module\ndata_files = []\n\nfor item in os.listdir(\"geopandas/datasets\"):\n if not item.startswith('__'):\n if os.path.isdir(os.path.join(\"geopandas/datasets/\", item)):\n data_files.append(os.path.join(\"datasets\", item, '*'))\n elif item.endswith('.zip'):\n data_files.append(os.path.join(\"datasets\", item))\n\n\nsetup(name='geopandas',\n version=versioneer.get_version(),\n description='Geographic pandas extensions',\n license='BSD',\n author='GeoPandas contributors',\n author_email='[email protected]',\n url='http://geopandas.org',\n long_description=LONG_DESCRIPTION,\n packages=['geopandas', 'geopandas.io', 'geopandas.tools',\n 'geopandas.datasets',\n 'geopandas.tests', 'geopandas.tools.tests'],\n package_data={'geopandas': data_files},\n install_requires=INSTALL_REQUIRES,\n cmdclass=versioneer.get_cmdclass())\n", "path": "setup.py"}]} | 888 | 123 |
gh_patches_debug_35062 | rasdani/github-patches | git_diff | TabbycatDebate__tabbycat-1406 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reporting of total speaks in emails uses integers rather than floats
This was a bug report to the FB page, just reposting here for better tracking and if anyone else wants to look at it. Haven't tried to reproduce/look at the code, but the report seems clear.
In the (default?) email template that is sent to adjs, the line *Proposition: Team A (Loss - X total speaks)* reports *X total speaks* as an integer, which can lead to incorrect totals in formats with half points. 
--- END ISSUE ---
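The underlying mechanics are plain Python string formatting: a `%d` placeholder truncates non-integral totals, while `%s` (or locale-aware formatting, as the patch later in this entry uses) preserves them. A tiny illustration with a made-up total:
```python
total_speaks = 150.5  # a total that includes a half point

print("Loss - %d total speaks" % total_speaks)  # "Loss - 150 total speaks" (half point lost)
print("Loss - %s total speaks" % total_speaks)  # "Loss - 150.5 total speaks"
```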
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tabbycat/notifications/utils.py`
Content:
```
1 """Email generator functions
2
3 These functions assemble the necessary arguments to be parsed in email templates
4 to be sent to relevant parties. All these functions return a tuple with the first
5 element being a context dictionary with the available variables to be parsed in
6 the message. The second element is the Person object. All these functions are
7 called by NotificationQueueConsumer, which inserts the variables into a message,
8 using the participant object to fetch their email address and to record.
9
10 Objects should be fetched from the database here as it is an asyncronous process,
11 thus the object itself cannot be passed.
12 """
13
14 from django.utils.safestring import mark_safe
15 from django.utils.translation import gettext as _
16
17 from adjallocation.allocation import AdjudicatorAllocation
18 from draw.models import Debate
19 from results.result import BaseConsensusDebateResultWithSpeakers, DebateResult, VotingDebateResult
20 from results.utils import side_and_position_names
21 from options.utils import use_team_code_names
22 from participants.models import Person
23 from participants.prefetch import populate_win_counts
24 from tournaments.models import Round, Tournament
25
26
27 adj_position_names = {
28 AdjudicatorAllocation.POSITION_CHAIR: _("the chair"),
29 AdjudicatorAllocation.POSITION_ONLY: _("the only"),
30 AdjudicatorAllocation.POSITION_PANELLIST: _("a panellist"),
31 AdjudicatorAllocation.POSITION_TRAINEE: _("a trainee"),
32 }
33
34
35 def _assemble_panel(adjs):
36 adj_string = []
37 for adj, pos in adjs:
38 adj_string.append("%s (%s)" % (adj.name, adj_position_names[pos]))
39
40 return ", ".join(adj_string)
41
42
43 def adjudicator_assignment_email_generator(to, url, round_id):
44 emails = []
45 round = Round.objects.get(id=round_id)
46 tournament = round.tournament
47 draw = round.debate_set_with_prefetches(speakers=False).all()
48 use_codes = use_team_code_names(tournament, False)
49
50 for debate in draw:
51 matchup = debate.matchup_codes if use_codes else debate.matchup
52 context = {
53 'ROUND': round.name,
54 'VENUE': debate.venue.display_name if debate.venue is not None else _("TBA"),
55 'PANEL': _assemble_panel(debate.adjudicators.with_positions()),
56 'DRAW': matchup
57 }
58
59 for adj, pos in debate.adjudicators.with_positions():
60 try:
61 to.remove(adj.id)
62 except ValueError:
63 continue
64
65 context_user = context.copy()
66 context_user['USER'] = adj.name
67 context_user['POSITION'] = adj_position_names[pos]
68
69 if adj.url_key:
70 context_user['URL'] = url + adj.url_key + '/'
71
72 emails.append((context_user, adj))
73
74 return emails
75
76
77 def randomized_url_email_generator(to, url, tournament_id):
78 emails = []
79 tournament = Tournament.objects.get(id=tournament_id)
80
81 for instance in tournament.participants:
82 try:
83 to.remove(instance.id)
84 except ValueError:
85 continue
86 url_ind = url + instance.url_key + '/'
87
88 variables = {'USER': instance.name, 'URL': url_ind, 'KEY': instance.url_key, 'TOURN': str(tournament)}
89
90 emails.append((variables, instance))
91
92 return emails
93
94
95 def ballots_email_generator(to, debate_id):
96 emails = []
97 debate = Debate.objects.get(id=debate_id)
98 tournament = debate.round.tournament
99 results = DebateResult(debate.confirmed_ballot)
100 round_name = _("%(tournament)s %(round)s @ %(room)s") % {'tournament': str(tournament),
101 'round': debate.round.name, 'room': debate.venue.name}
102
103 use_codes = use_team_code_names(debate.round.tournament, False)
104
105 def _create_ballot(result, scoresheet):
106 ballot = "<ul>"
107
108 for side, (side_name, pos_names) in zip(tournament.sides, side_and_position_names(tournament)):
109 side_string = ""
110 if tournament.pref('teams_in_debate') == 'bp':
111 side_string += _("<li>%(side)s: %(team)s (%(points)d points with %(speaks)d total speaks)")
112 points = 4 - scoresheet.rank(side)
113 else:
114 side_string += _("<li>%(side)s: %(team)s (%(points)s - %(speaks)d total speaks)")
115 points = _("Win") if side == scoresheet.winner() else _("Loss")
116
117 ballot += side_string % {
118 'side': side_name,
119 'team': result.debateteams[side].team.code_name if use_codes else result.debateteams[side].team.short_name,
120 'speaks': scoresheet.get_total(side),
121 'points': points
122 }
123
124 ballot += "<ul>"
125
126 for pos, pos_name in zip(tournament.positions, pos_names):
127 ballot += _("<li>%(pos)s: %(speaker)s (%(score)s)</li>") % {
128 'pos': pos_name,
129 'speaker': result.get_speaker(side, pos).name,
130 'score': scoresheet.get_score(side, pos)
131 }
132
133 ballot += "</ul></li>"
134
135 ballot += "</ul>"
136
137 return mark_safe(ballot)
138
139 if isinstance(results, VotingDebateResult):
140 for (adj, ballot) in results.scoresheets.items():
141 if adj.email is None:
142 continue
143
144 context = {'DEBATE': round_name, 'USER': adj.name, 'SCORES': _create_ballot(results, ballot)}
145 emails.append((context, adj))
146 elif isinstance(results, BaseConsensusDebateResultWithSpeakers):
147 context = {'DEBATE': round_name, 'SCORES': _create_ballot(results, results.scoresheet)}
148
149 for adj in debate.debateadjudicator_set.all():
150 if adj.adjudicator.email is None:
151 continue
152
153 context_user = context.copy()
154 context_user['USER'] = adj.adjudicator.name
155
156 emails.append((context_user, adj.adjudicator))
157
158 return emails
159
160
161 def standings_email_generator(to, url, round_id):
162 emails = []
163 round = Round.objects.get(id=round_id)
164 tournament = round.tournament
165
166 teams = round.active_teams.prefetch_related('speaker_set')
167 populate_win_counts(teams)
168
169 context = {
170 'TOURN': str(tournament),
171 'ROUND': round.name,
172 'URL': url if tournament.pref('public_team_standings') else ""
173 }
174
175 for team in teams:
176 context_team = context.copy()
177 context_team['POINTS'] = str(team.points_count)
178 context_team['TEAM'] = team.short_name
179
180 for speaker in team.speaker_set.all():
181 try:
182 to.remove(speaker.id)
183 except ValueError:
184 continue
185
186 context_user = context_team.copy()
187 context_user['USER'] = speaker.name
188
189 emails.append((context_user, speaker))
190
191 return emails
192
193
194 def motion_release_email_generator(to, round_id):
195 emails = []
196 round = Round.objects.get(id=round_id)
197
198 def _create_motion_list():
199 motion_list = "<ul>"
200 for motion in round.motion_set.all():
201 motion_list += _("<li>%(text)s (%(ref)s)</li>") % {'text': motion.text, 'ref': motion.reference}
202
203 if motion.info_slide:
204 motion_list += " %s\n" % (motion.info_slide)
205
206 motion_list += "</ul>"
207
208 return mark_safe(motion_list)
209
210 context = {
211 'TOURN': str(round.tournament),
212 'ROUND': round.name,
213 'MOTIONS': _create_motion_list()
214 }
215
216 people = Person.objects.filter(id__in=to)
217 for person in people:
218 context_user = context.copy()
219 context_user['USER'] = person.name
220
221 emails.append((context_user, person))
222
223 return emails
224
225
226 def team_speaker_email_generator(to, tournament_id):
227 emails = []
228 tournament = Tournament.objects.get(id=tournament_id)
229
230 for team in tournament.team_set.all().prefetch_related('speaker_set', 'break_categories').select_related('institution'):
231 context = {
232 'TOURN': str(tournament),
233 'SHORT': team.short_name,
234 'LONG': team.long_name,
235 'CODE': team.code_name,
236 'BREAK': _(", ").join([breakq.name for breakq in team.break_categories.all()]),
237 'SPEAKERS': _(", ").join([p.name for p in team.speaker_set.all()]),
238 'INSTITUTION': str(team.institution),
239 'EMOJI': team.emoji
240 }
241
242 for speaker in team.speakers:
243 try:
244 to.remove(speaker.id)
245 except ValueError:
246 continue
247
248 context_user = context.copy()
249 context_user['USER'] = speaker.name
250
251 emails.append((context_user, speaker))
252
253 return emails
254
255
256 def team_draw_email_generator(to, url, round_id):
257 emails = []
258 round = Round.objects.get(id=round_id)
259 tournament = round.tournament
260 draw = round.debate_set_with_prefetches(speakers=True).all()
261 use_codes = use_team_code_names(tournament, False)
262
263 for debate in draw:
264 matchup = debate.matchup_codes if use_codes else debate.matchup
265 context = {
266 'ROUND': round.name,
267 'VENUE': debate.venue.name,
268 'PANEL': _assemble_panel(debate.adjudicators.with_positions()),
269 'DRAW': matchup
270 }
271
272 for dt in debate.debateteam_set.all():
273 context_team = context.copy()
274 context_team['TEAM'] = dt.team.code_name if use_codes else dt.team.short_name
275 context_team['SIDE'] = dt.get_side_name(tournament=tournament)
276
277 for speaker in dt.team.speakers:
278 try:
279 to.remove(speaker.id)
280 except ValueError:
281 continue
282
283 context_user = context_team.copy()
284 context_user['USER'] = speaker.name
285
286 emails.append((context_user, speaker))
287
288 return emails
289
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tabbycat/notifications/utils.py b/tabbycat/notifications/utils.py
--- a/tabbycat/notifications/utils.py
+++ b/tabbycat/notifications/utils.py
@@ -11,6 +11,7 @@
thus the object itself cannot be passed.
"""
+from django.utils import formats
from django.utils.safestring import mark_safe
from django.utils.translation import gettext as _
@@ -108,16 +109,16 @@
for side, (side_name, pos_names) in zip(tournament.sides, side_and_position_names(tournament)):
side_string = ""
if tournament.pref('teams_in_debate') == 'bp':
- side_string += _("<li>%(side)s: %(team)s (%(points)d points with %(speaks)d total speaks)")
+ side_string += _("<li>%(side)s: %(team)s (%(points)d points with %(speaks)s total speaks)")
points = 4 - scoresheet.rank(side)
else:
- side_string += _("<li>%(side)s: %(team)s (%(points)s - %(speaks)d total speaks)")
+ side_string += _("<li>%(side)s: %(team)s (%(points)s - %(speaks)s total speaks)")
points = _("Win") if side == scoresheet.winner() else _("Loss")
ballot += side_string % {
'side': side_name,
'team': result.debateteams[side].team.code_name if use_codes else result.debateteams[side].team.short_name,
- 'speaks': scoresheet.get_total(side),
+ 'speaks': formats.localize(scoresheet.get_total(side)),
'points': points
}
@@ -127,7 +128,7 @@
ballot += _("<li>%(pos)s: %(speaker)s (%(score)s)</li>") % {
'pos': pos_name,
'speaker': result.get_speaker(side, pos).name,
- 'score': scoresheet.get_score(side, pos)
+ 'score': formats.localize(scoresheet.get_score(side, pos))
}
ballot += "</ul></li>"
| {"golden_diff": "diff --git a/tabbycat/notifications/utils.py b/tabbycat/notifications/utils.py\n--- a/tabbycat/notifications/utils.py\n+++ b/tabbycat/notifications/utils.py\n@@ -11,6 +11,7 @@\n thus the object itself cannot be passed.\n \"\"\"\n \n+from django.utils import formats\n from django.utils.safestring import mark_safe\n from django.utils.translation import gettext as _\n \n@@ -108,16 +109,16 @@\n for side, (side_name, pos_names) in zip(tournament.sides, side_and_position_names(tournament)):\n side_string = \"\"\n if tournament.pref('teams_in_debate') == 'bp':\n- side_string += _(\"<li>%(side)s: %(team)s (%(points)d points with %(speaks)d total speaks)\")\n+ side_string += _(\"<li>%(side)s: %(team)s (%(points)d points with %(speaks)s total speaks)\")\n points = 4 - scoresheet.rank(side)\n else:\n- side_string += _(\"<li>%(side)s: %(team)s (%(points)s - %(speaks)d total speaks)\")\n+ side_string += _(\"<li>%(side)s: %(team)s (%(points)s - %(speaks)s total speaks)\")\n points = _(\"Win\") if side == scoresheet.winner() else _(\"Loss\")\n \n ballot += side_string % {\n 'side': side_name,\n 'team': result.debateteams[side].team.code_name if use_codes else result.debateteams[side].team.short_name,\n- 'speaks': scoresheet.get_total(side),\n+ 'speaks': formats.localize(scoresheet.get_total(side)),\n 'points': points\n }\n \n@@ -127,7 +128,7 @@\n ballot += _(\"<li>%(pos)s: %(speaker)s (%(score)s)</li>\") % {\n 'pos': pos_name,\n 'speaker': result.get_speaker(side, pos).name,\n- 'score': scoresheet.get_score(side, pos)\n+ 'score': formats.localize(scoresheet.get_score(side, pos))\n }\n \n ballot += \"</ul></li>\"\n", "issue": "Reporting of total speaks in emails uses integers rather than floats\nThis was a bug report to the FB page, just reposting here for better tracking and if anyone else wants to look at it. Haven't tried to reproduce/look at the code but the report seems clear\r\n\r\nIn the (default?) email template that is sent to adjs the line *Proposition: Team A (Loss - X total speaks)* reports *X total speaks* as an integer which can lead to incorrect totals in formats with half points. \n", "before_files": [{"content": "\"\"\"Email generator functions\n\nThese functions assemble the necessary arguments to be parsed in email templates\nto be sent to relevant parties. All these functions return a tuple with the first\nelement being a context dictionary with the available variables to be parsed in\nthe message. The second element is the Person object. 
All these functions are\ncalled by NotificationQueueConsumer, which inserts the variables into a message,\nusing the participant object to fetch their email address and to record.\n\nObjects should be fetched from the database here as it is an asyncronous process,\nthus the object itself cannot be passed.\n\"\"\"\n\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import gettext as _\n\nfrom adjallocation.allocation import AdjudicatorAllocation\nfrom draw.models import Debate\nfrom results.result import BaseConsensusDebateResultWithSpeakers, DebateResult, VotingDebateResult\nfrom results.utils import side_and_position_names\nfrom options.utils import use_team_code_names\nfrom participants.models import Person\nfrom participants.prefetch import populate_win_counts\nfrom tournaments.models import Round, Tournament\n\n\nadj_position_names = {\n AdjudicatorAllocation.POSITION_CHAIR: _(\"the chair\"),\n AdjudicatorAllocation.POSITION_ONLY: _(\"the only\"),\n AdjudicatorAllocation.POSITION_PANELLIST: _(\"a panellist\"),\n AdjudicatorAllocation.POSITION_TRAINEE: _(\"a trainee\"),\n}\n\n\ndef _assemble_panel(adjs):\n adj_string = []\n for adj, pos in adjs:\n adj_string.append(\"%s (%s)\" % (adj.name, adj_position_names[pos]))\n\n return \", \".join(adj_string)\n\n\ndef adjudicator_assignment_email_generator(to, url, round_id):\n emails = []\n round = Round.objects.get(id=round_id)\n tournament = round.tournament\n draw = round.debate_set_with_prefetches(speakers=False).all()\n use_codes = use_team_code_names(tournament, False)\n\n for debate in draw:\n matchup = debate.matchup_codes if use_codes else debate.matchup\n context = {\n 'ROUND': round.name,\n 'VENUE': debate.venue.display_name if debate.venue is not None else _(\"TBA\"),\n 'PANEL': _assemble_panel(debate.adjudicators.with_positions()),\n 'DRAW': matchup\n }\n\n for adj, pos in debate.adjudicators.with_positions():\n try:\n to.remove(adj.id)\n except ValueError:\n continue\n\n context_user = context.copy()\n context_user['USER'] = adj.name\n context_user['POSITION'] = adj_position_names[pos]\n\n if adj.url_key:\n context_user['URL'] = url + adj.url_key + '/'\n\n emails.append((context_user, adj))\n\n return emails\n\n\ndef randomized_url_email_generator(to, url, tournament_id):\n emails = []\n tournament = Tournament.objects.get(id=tournament_id)\n\n for instance in tournament.participants:\n try:\n to.remove(instance.id)\n except ValueError:\n continue\n url_ind = url + instance.url_key + '/'\n\n variables = {'USER': instance.name, 'URL': url_ind, 'KEY': instance.url_key, 'TOURN': str(tournament)}\n\n emails.append((variables, instance))\n\n return emails\n\n\ndef ballots_email_generator(to, debate_id):\n emails = []\n debate = Debate.objects.get(id=debate_id)\n tournament = debate.round.tournament\n results = DebateResult(debate.confirmed_ballot)\n round_name = _(\"%(tournament)s %(round)s @ %(room)s\") % {'tournament': str(tournament),\n 'round': debate.round.name, 'room': debate.venue.name}\n\n use_codes = use_team_code_names(debate.round.tournament, False)\n\n def _create_ballot(result, scoresheet):\n ballot = \"<ul>\"\n\n for side, (side_name, pos_names) in zip(tournament.sides, side_and_position_names(tournament)):\n side_string = \"\"\n if tournament.pref('teams_in_debate') == 'bp':\n side_string += _(\"<li>%(side)s: %(team)s (%(points)d points with %(speaks)d total speaks)\")\n points = 4 - scoresheet.rank(side)\n else:\n side_string += _(\"<li>%(side)s: %(team)s (%(points)s - %(speaks)d total 
speaks)\")\n points = _(\"Win\") if side == scoresheet.winner() else _(\"Loss\")\n\n ballot += side_string % {\n 'side': side_name,\n 'team': result.debateteams[side].team.code_name if use_codes else result.debateteams[side].team.short_name,\n 'speaks': scoresheet.get_total(side),\n 'points': points\n }\n\n ballot += \"<ul>\"\n\n for pos, pos_name in zip(tournament.positions, pos_names):\n ballot += _(\"<li>%(pos)s: %(speaker)s (%(score)s)</li>\") % {\n 'pos': pos_name,\n 'speaker': result.get_speaker(side, pos).name,\n 'score': scoresheet.get_score(side, pos)\n }\n\n ballot += \"</ul></li>\"\n\n ballot += \"</ul>\"\n\n return mark_safe(ballot)\n\n if isinstance(results, VotingDebateResult):\n for (adj, ballot) in results.scoresheets.items():\n if adj.email is None:\n continue\n\n context = {'DEBATE': round_name, 'USER': adj.name, 'SCORES': _create_ballot(results, ballot)}\n emails.append((context, adj))\n elif isinstance(results, BaseConsensusDebateResultWithSpeakers):\n context = {'DEBATE': round_name, 'SCORES': _create_ballot(results, results.scoresheet)}\n\n for adj in debate.debateadjudicator_set.all():\n if adj.adjudicator.email is None:\n continue\n\n context_user = context.copy()\n context_user['USER'] = adj.adjudicator.name\n\n emails.append((context_user, adj.adjudicator))\n\n return emails\n\n\ndef standings_email_generator(to, url, round_id):\n emails = []\n round = Round.objects.get(id=round_id)\n tournament = round.tournament\n\n teams = round.active_teams.prefetch_related('speaker_set')\n populate_win_counts(teams)\n\n context = {\n 'TOURN': str(tournament),\n 'ROUND': round.name,\n 'URL': url if tournament.pref('public_team_standings') else \"\"\n }\n\n for team in teams:\n context_team = context.copy()\n context_team['POINTS'] = str(team.points_count)\n context_team['TEAM'] = team.short_name\n\n for speaker in team.speaker_set.all():\n try:\n to.remove(speaker.id)\n except ValueError:\n continue\n\n context_user = context_team.copy()\n context_user['USER'] = speaker.name\n\n emails.append((context_user, speaker))\n\n return emails\n\n\ndef motion_release_email_generator(to, round_id):\n emails = []\n round = Round.objects.get(id=round_id)\n\n def _create_motion_list():\n motion_list = \"<ul>\"\n for motion in round.motion_set.all():\n motion_list += _(\"<li>%(text)s (%(ref)s)</li>\") % {'text': motion.text, 'ref': motion.reference}\n\n if motion.info_slide:\n motion_list += \" %s\\n\" % (motion.info_slide)\n\n motion_list += \"</ul>\"\n\n return mark_safe(motion_list)\n\n context = {\n 'TOURN': str(round.tournament),\n 'ROUND': round.name,\n 'MOTIONS': _create_motion_list()\n }\n\n people = Person.objects.filter(id__in=to)\n for person in people:\n context_user = context.copy()\n context_user['USER'] = person.name\n\n emails.append((context_user, person))\n\n return emails\n\n\ndef team_speaker_email_generator(to, tournament_id):\n emails = []\n tournament = Tournament.objects.get(id=tournament_id)\n\n for team in tournament.team_set.all().prefetch_related('speaker_set', 'break_categories').select_related('institution'):\n context = {\n 'TOURN': str(tournament),\n 'SHORT': team.short_name,\n 'LONG': team.long_name,\n 'CODE': team.code_name,\n 'BREAK': _(\", \").join([breakq.name for breakq in team.break_categories.all()]),\n 'SPEAKERS': _(\", \").join([p.name for p in team.speaker_set.all()]),\n 'INSTITUTION': str(team.institution),\n 'EMOJI': team.emoji\n }\n\n for speaker in team.speakers:\n try:\n to.remove(speaker.id)\n except ValueError:\n continue\n\n context_user = 
context.copy()\n context_user['USER'] = speaker.name\n\n emails.append((context_user, speaker))\n\n return emails\n\n\ndef team_draw_email_generator(to, url, round_id):\n emails = []\n round = Round.objects.get(id=round_id)\n tournament = round.tournament\n draw = round.debate_set_with_prefetches(speakers=True).all()\n use_codes = use_team_code_names(tournament, False)\n\n for debate in draw:\n matchup = debate.matchup_codes if use_codes else debate.matchup\n context = {\n 'ROUND': round.name,\n 'VENUE': debate.venue.name,\n 'PANEL': _assemble_panel(debate.adjudicators.with_positions()),\n 'DRAW': matchup\n }\n\n for dt in debate.debateteam_set.all():\n context_team = context.copy()\n context_team['TEAM'] = dt.team.code_name if use_codes else dt.team.short_name\n context_team['SIDE'] = dt.get_side_name(tournament=tournament)\n\n for speaker in dt.team.speakers:\n try:\n to.remove(speaker.id)\n except ValueError:\n continue\n\n context_user = context_team.copy()\n context_user['USER'] = speaker.name\n\n emails.append((context_user, speaker))\n\n return emails\n", "path": "tabbycat/notifications/utils.py"}], "after_files": [{"content": "\"\"\"Email generator functions\n\nThese functions assemble the necessary arguments to be parsed in email templates\nto be sent to relevant parties. All these functions return a tuple with the first\nelement being a context dictionary with the available variables to be parsed in\nthe message. The second element is the Person object. All these functions are\ncalled by NotificationQueueConsumer, which inserts the variables into a message,\nusing the participant object to fetch their email address and to record.\n\nObjects should be fetched from the database here as it is an asyncronous process,\nthus the object itself cannot be passed.\n\"\"\"\n\nfrom django.utils import formats\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import gettext as _\n\nfrom adjallocation.allocation import AdjudicatorAllocation\nfrom draw.models import Debate\nfrom results.result import BaseConsensusDebateResultWithSpeakers, DebateResult, VotingDebateResult\nfrom results.utils import side_and_position_names\nfrom options.utils import use_team_code_names\nfrom participants.models import Person\nfrom participants.prefetch import populate_win_counts\nfrom tournaments.models import Round, Tournament\n\n\nadj_position_names = {\n AdjudicatorAllocation.POSITION_CHAIR: _(\"the chair\"),\n AdjudicatorAllocation.POSITION_ONLY: _(\"the only\"),\n AdjudicatorAllocation.POSITION_PANELLIST: _(\"a panellist\"),\n AdjudicatorAllocation.POSITION_TRAINEE: _(\"a trainee\"),\n}\n\n\ndef _assemble_panel(adjs):\n adj_string = []\n for adj, pos in adjs:\n adj_string.append(\"%s (%s)\" % (adj.name, adj_position_names[pos]))\n\n return \", \".join(adj_string)\n\n\ndef adjudicator_assignment_email_generator(to, url, round_id):\n emails = []\n round = Round.objects.get(id=round_id)\n tournament = round.tournament\n draw = round.debate_set_with_prefetches(speakers=False).all()\n use_codes = use_team_code_names(tournament, False)\n\n for debate in draw:\n matchup = debate.matchup_codes if use_codes else debate.matchup\n context = {\n 'ROUND': round.name,\n 'VENUE': debate.venue.display_name if debate.venue is not None else _(\"TBA\"),\n 'PANEL': _assemble_panel(debate.adjudicators.with_positions()),\n 'DRAW': matchup\n }\n\n for adj, pos in debate.adjudicators.with_positions():\n try:\n to.remove(adj.id)\n except ValueError:\n continue\n\n context_user = context.copy()\n 
context_user['USER'] = adj.name\n context_user['POSITION'] = adj_position_names[pos]\n\n if adj.url_key:\n context_user['URL'] = url + adj.url_key + '/'\n\n emails.append((context_user, adj))\n\n return emails\n\n\ndef randomized_url_email_generator(to, url, tournament_id):\n emails = []\n tournament = Tournament.objects.get(id=tournament_id)\n\n for instance in tournament.participants:\n try:\n to.remove(instance.id)\n except ValueError:\n continue\n url_ind = url + instance.url_key + '/'\n\n variables = {'USER': instance.name, 'URL': url_ind, 'KEY': instance.url_key, 'TOURN': str(tournament)}\n\n emails.append((variables, instance))\n\n return emails\n\n\ndef ballots_email_generator(to, debate_id):\n emails = []\n debate = Debate.objects.get(id=debate_id)\n tournament = debate.round.tournament\n results = DebateResult(debate.confirmed_ballot)\n round_name = _(\"%(tournament)s %(round)s @ %(room)s\") % {'tournament': str(tournament),\n 'round': debate.round.name, 'room': debate.venue.name}\n\n use_codes = use_team_code_names(debate.round.tournament, False)\n\n def _create_ballot(result, scoresheet):\n ballot = \"<ul>\"\n\n for side, (side_name, pos_names) in zip(tournament.sides, side_and_position_names(tournament)):\n side_string = \"\"\n if tournament.pref('teams_in_debate') == 'bp':\n side_string += _(\"<li>%(side)s: %(team)s (%(points)d points with %(speaks)s total speaks)\")\n points = 4 - scoresheet.rank(side)\n else:\n side_string += _(\"<li>%(side)s: %(team)s (%(points)s - %(speaks)s total speaks)\")\n points = _(\"Win\") if side == scoresheet.winner() else _(\"Loss\")\n\n ballot += side_string % {\n 'side': side_name,\n 'team': result.debateteams[side].team.code_name if use_codes else result.debateteams[side].team.short_name,\n 'speaks': formats.localize(scoresheet.get_total(side)),\n 'points': points\n }\n\n ballot += \"<ul>\"\n\n for pos, pos_name in zip(tournament.positions, pos_names):\n ballot += _(\"<li>%(pos)s: %(speaker)s (%(score)s)</li>\") % {\n 'pos': pos_name,\n 'speaker': result.get_speaker(side, pos).name,\n 'score': formats.localize(scoresheet.get_score(side, pos))\n }\n\n ballot += \"</ul></li>\"\n\n ballot += \"</ul>\"\n\n return mark_safe(ballot)\n\n if isinstance(results, VotingDebateResult):\n for (adj, ballot) in results.scoresheets.items():\n if adj.email is None:\n continue\n\n context = {'DEBATE': round_name, 'USER': adj.name, 'SCORES': _create_ballot(results, ballot)}\n emails.append((context, adj))\n elif isinstance(results, BaseConsensusDebateResultWithSpeakers):\n context = {'DEBATE': round_name, 'SCORES': _create_ballot(results, results.scoresheet)}\n\n for adj in debate.debateadjudicator_set.all():\n if adj.adjudicator.email is None:\n continue\n\n context_user = context.copy()\n context_user['USER'] = adj.adjudicator.name\n\n emails.append((context_user, adj.adjudicator))\n\n return emails\n\n\ndef standings_email_generator(to, url, round_id):\n emails = []\n round = Round.objects.get(id=round_id)\n tournament = round.tournament\n\n teams = round.active_teams.prefetch_related('speaker_set')\n populate_win_counts(teams)\n\n context = {\n 'TOURN': str(tournament),\n 'ROUND': round.name,\n 'URL': url if tournament.pref('public_team_standings') else \"\"\n }\n\n for team in teams:\n context_team = context.copy()\n context_team['POINTS'] = str(team.points_count)\n context_team['TEAM'] = team.short_name\n\n for speaker in team.speaker_set.all():\n try:\n to.remove(speaker.id)\n except ValueError:\n continue\n\n context_user = context_team.copy()\n 
context_user['USER'] = speaker.name\n\n emails.append((context_user, speaker))\n\n return emails\n\n\ndef motion_release_email_generator(to, round_id):\n emails = []\n round = Round.objects.get(id=round_id)\n\n def _create_motion_list():\n motion_list = \"<ul>\"\n for motion in round.motion_set.all():\n motion_list += _(\"<li>%(text)s (%(ref)s)</li>\") % {'text': motion.text, 'ref': motion.reference}\n\n if motion.info_slide:\n motion_list += \" %s\\n\" % (motion.info_slide)\n\n motion_list += \"</ul>\"\n\n return mark_safe(motion_list)\n\n context = {\n 'TOURN': str(round.tournament),\n 'ROUND': round.name,\n 'MOTIONS': _create_motion_list()\n }\n\n people = Person.objects.filter(id__in=to)\n for person in people:\n context_user = context.copy()\n context_user['USER'] = person.name\n\n emails.append((context_user, person))\n\n return emails\n\n\ndef team_speaker_email_generator(to, tournament_id):\n emails = []\n tournament = Tournament.objects.get(id=tournament_id)\n\n for team in tournament.team_set.all().prefetch_related('speaker_set', 'break_categories').select_related('institution'):\n context = {\n 'TOURN': str(tournament),\n 'SHORT': team.short_name,\n 'LONG': team.long_name,\n 'CODE': team.code_name,\n 'BREAK': _(\", \").join([breakq.name for breakq in team.break_categories.all()]),\n 'SPEAKERS': _(\", \").join([p.name for p in team.speaker_set.all()]),\n 'INSTITUTION': str(team.institution),\n 'EMOJI': team.emoji\n }\n\n for speaker in team.speakers:\n try:\n to.remove(speaker.id)\n except ValueError:\n continue\n\n context_user = context.copy()\n context_user['USER'] = speaker.name\n\n emails.append((context_user, speaker))\n\n return emails\n\n\ndef team_draw_email_generator(to, url, round_id):\n emails = []\n round = Round.objects.get(id=round_id)\n tournament = round.tournament\n draw = round.debate_set_with_prefetches(speakers=True).all()\n use_codes = use_team_code_names(tournament, False)\n\n for debate in draw:\n matchup = debate.matchup_codes if use_codes else debate.matchup\n context = {\n 'ROUND': round.name,\n 'VENUE': debate.venue.name,\n 'PANEL': _assemble_panel(debate.adjudicators.with_positions()),\n 'DRAW': matchup\n }\n\n for dt in debate.debateteam_set.all():\n context_team = context.copy()\n context_team['TEAM'] = dt.team.code_name if use_codes else dt.team.short_name\n context_team['SIDE'] = dt.get_side_name(tournament=tournament)\n\n for speaker in dt.team.speakers:\n try:\n to.remove(speaker.id)\n except ValueError:\n continue\n\n context_user = context_team.copy()\n context_user['USER'] = speaker.name\n\n emails.append((context_user, speaker))\n\n return emails\n", "path": "tabbycat/notifications/utils.py"}]} | 3,297 | 474 |
gh_patches_debug_38231 | rasdani/github-patches | git_diff | pyro-ppl__pyro-365 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect result from Delta's batch_log_pdf
It looks like there may be a bug in Delta's `batch_log_pdf` method. When the value we're computing the log prob of doesn't match the parameter, I expect `batch_log_pdf` to return `-inf`, but it doesn't. For example:
```
x = Variable(torch.Tensor([[1.0]]))
y = Variable(torch.Tensor([[2.0]]))
# This is OK, it returns zero as expected:
print(Delta(x).batch_log_pdf(x))
# Here I expect -inf, but get 2.5500e+08
print(Delta(x).batch_log_pdf(y))
```
`log_pdf` works as expected.
(This isn't high priority for me.)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyro/distributions/__init__.py`
Content:
```
1 # abstract base class
2 from pyro.distributions.bernoulli import Bernoulli
3 from pyro.distributions.beta import Beta
4 from pyro.distributions.categorical import Categorical
5 from pyro.distributions.cauchy import Cauchy
6 from pyro.distributions.half_cauchy import HalfCauchy
7 from pyro.distributions.delta import Delta
8 from pyro.distributions.distribution import Distribution # noqa: F401
9 # specific distributions
10 from pyro.distributions.diag_normal import DiagNormal
11 from pyro.distributions.dirichlet import Dirichlet
12 from pyro.distributions.exponential import Exponential
13 from pyro.distributions.gamma import Gamma
14 from pyro.distributions.log_normal import LogNormal
15 from pyro.distributions.multinomial import Multinomial
16 from pyro.distributions.poisson import Poisson
17 from pyro.distributions.random_primitive import RandomPrimitive
18 from pyro.distributions.uniform import Uniform
19
20 # function aliases
21 diagnormal = DiagNormal()
22 lognormal = RandomPrimitive(LogNormal)
23 categorical = Categorical()
24 bernoulli = RandomPrimitive(Bernoulli)
25 beta = RandomPrimitive(Beta)
26 delta = Delta()
27 exponential = RandomPrimitive(Exponential)
28 gamma = RandomPrimitive(Gamma)
29 multinomial = RandomPrimitive(Multinomial)
30 poisson = RandomPrimitive(Poisson)
31 uniform = RandomPrimitive(Uniform)
32 dirichlet = RandomPrimitive(Dirichlet)
33 cauchy = RandomPrimitive(Cauchy)
34 halfcauchy = RandomPrimitive(HalfCauchy)
35
```
Path: `pyro/distributions/delta.py`
Content:
```
1 import torch
2 from torch.autograd import Variable
3
4 from pyro.distributions.distribution import Distribution
5
6
7 class Delta(Distribution):
8 """
9 :param v: support element *(any)*
10
11 Discrete distribution that assigns probability one to the single element in
12 its support. Delta distribution parameterized by a random choice should not
13 be used with MCMC based inference, as doing so produces incorrect results.
14 """
15 enumerable = True
16
17 def _sanitize_input(self, v):
18 if v is not None:
19 # stateless distribution
20 return v
21 elif self.v is not None:
22 # stateful distribution
23 return self.v
24 else:
25 raise ValueError("Parameter(s) were None")
26
27 def __init__(self, v=None, batch_size=1, *args, **kwargs):
28 """
29 Params:
30 `v` - value
31 """
32 self.v = v
33 if v is not None:
34 if v.dim() == 1 and batch_size > 1:
35 self.v = v.expand(v, v.size(0))
36 super(Delta, self).__init__(*args, **kwargs)
37
38 def sample(self, v=None):
39 v = self._sanitize_input(v)
40 if isinstance(v, Variable):
41 return v
42 return Variable(v)
43
44 def batch_log_pdf(self, x, v=None, batch_size=1):
45 v = self._sanitize_input(v)
46 if x.dim == 1:
47 x = x.expand(batch_size, x.size(0))
48 return (torch.eq(x, v.expand_as(x)) - 1).float() * 999999
49
50 def log_pdf(self, x, v=None, *args, **kwargs):
51 v = self._sanitize_input(v)
52 if torch.equal(x.data, v.data.expand_as(x.data)):
53 return Variable(torch.zeros(1).type_as(v.data))
54 return Variable(torch.Tensor([-float("inf")]).type_as(v.data))
55
56 def support(self, v=None):
57 """
58 Returns the delta distribution's support, as a tensor along the first dimension.
59
60 :param v: torch variable where each element of the tensor represents the point at
61 which the delta distribution is concentrated.
62 :return: torch variable enumerating the support of the delta distribution.
63 :rtype: torch.autograd.Variable.
64 """
65 v = self._sanitize_input(v)
66 # univariate case
67 return Variable(v.data)
68
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pyro/distributions/__init__.py b/pyro/distributions/__init__.py
--- a/pyro/distributions/__init__.py
+++ b/pyro/distributions/__init__.py
@@ -23,7 +23,7 @@
categorical = Categorical()
bernoulli = RandomPrimitive(Bernoulli)
beta = RandomPrimitive(Beta)
-delta = Delta()
+delta = RandomPrimitive(Delta)
exponential = RandomPrimitive(Exponential)
gamma = RandomPrimitive(Gamma)
multinomial = RandomPrimitive(Multinomial)
diff --git a/pyro/distributions/delta.py b/pyro/distributions/delta.py
--- a/pyro/distributions/delta.py
+++ b/pyro/distributions/delta.py
@@ -14,44 +14,40 @@
"""
enumerable = True
- def _sanitize_input(self, v):
- if v is not None:
- # stateless distribution
- return v
- elif self.v is not None:
- # stateful distribution
- return self.v
- else:
- raise ValueError("Parameter(s) were None")
-
- def __init__(self, v=None, batch_size=1, *args, **kwargs):
+ def __init__(self, v, batch_size=None, *args, **kwargs):
"""
Params:
`v` - value
"""
self.v = v
- if v is not None:
- if v.dim() == 1 and batch_size > 1:
- self.v = v.expand(v, v.size(0))
+ if not isinstance(self.v, Variable):
+ self.v = Variable(self.v)
+ if v.dim() == 1 and batch_size is not None:
+ self.v = v.expand(v, v.size(0))
super(Delta, self).__init__(*args, **kwargs)
- def sample(self, v=None):
- v = self._sanitize_input(v)
- if isinstance(v, Variable):
- return v
- return Variable(v)
+ def batch_shape(self, x=None):
+ event_dim = 1
+ v = self.v
+ if x is not None and x.size() != v.size():
+ v = self.v.expand_as(x)
+ return v.size()[:-event_dim]
+
+ def event_shape(self):
+ event_dim = 1
+ return self.v.size()[-event_dim:]
+
+ def shape(self, x=None):
+ return self.batch_shape(x) + self.event_shape()
- def batch_log_pdf(self, x, v=None, batch_size=1):
- v = self._sanitize_input(v)
- if x.dim == 1:
- x = x.expand(batch_size, x.size(0))
- return (torch.eq(x, v.expand_as(x)) - 1).float() * 999999
+ def sample(self):
+ return self.v
- def log_pdf(self, x, v=None, *args, **kwargs):
- v = self._sanitize_input(v)
- if torch.equal(x.data, v.data.expand_as(x.data)):
- return Variable(torch.zeros(1).type_as(v.data))
- return Variable(torch.Tensor([-float("inf")]).type_as(v.data))
+ def batch_log_pdf(self, x):
+ v = self.v
+ if x.size() != v.size():
+ v = v.expand_as(x)
+ return torch.sum(torch.eq(x, v).float().log(), -1)
def support(self, v=None):
"""
@@ -62,6 +58,4 @@
:return: torch variable enumerating the support of the delta distribution.
:rtype: torch.autograd.Variable.
"""
- v = self._sanitize_input(v)
- # univariate case
- return Variable(v.data)
+ return Variable(self.v.data)
| {"golden_diff": "diff --git a/pyro/distributions/__init__.py b/pyro/distributions/__init__.py\n--- a/pyro/distributions/__init__.py\n+++ b/pyro/distributions/__init__.py\n@@ -23,7 +23,7 @@\n categorical = Categorical()\n bernoulli = RandomPrimitive(Bernoulli)\n beta = RandomPrimitive(Beta)\n-delta = Delta()\n+delta = RandomPrimitive(Delta)\n exponential = RandomPrimitive(Exponential)\n gamma = RandomPrimitive(Gamma)\n multinomial = RandomPrimitive(Multinomial)\ndiff --git a/pyro/distributions/delta.py b/pyro/distributions/delta.py\n--- a/pyro/distributions/delta.py\n+++ b/pyro/distributions/delta.py\n@@ -14,44 +14,40 @@\n \"\"\"\n enumerable = True\n \n- def _sanitize_input(self, v):\n- if v is not None:\n- # stateless distribution\n- return v\n- elif self.v is not None:\n- # stateful distribution\n- return self.v\n- else:\n- raise ValueError(\"Parameter(s) were None\")\n-\n- def __init__(self, v=None, batch_size=1, *args, **kwargs):\n+ def __init__(self, v, batch_size=None, *args, **kwargs):\n \"\"\"\n Params:\n `v` - value\n \"\"\"\n self.v = v\n- if v is not None:\n- if v.dim() == 1 and batch_size > 1:\n- self.v = v.expand(v, v.size(0))\n+ if not isinstance(self.v, Variable):\n+ self.v = Variable(self.v)\n+ if v.dim() == 1 and batch_size is not None:\n+ self.v = v.expand(v, v.size(0))\n super(Delta, self).__init__(*args, **kwargs)\n \n- def sample(self, v=None):\n- v = self._sanitize_input(v)\n- if isinstance(v, Variable):\n- return v\n- return Variable(v)\n+ def batch_shape(self, x=None):\n+ event_dim = 1\n+ v = self.v\n+ if x is not None and x.size() != v.size():\n+ v = self.v.expand_as(x)\n+ return v.size()[:-event_dim]\n+\n+ def event_shape(self):\n+ event_dim = 1\n+ return self.v.size()[-event_dim:]\n+\n+ def shape(self, x=None):\n+ return self.batch_shape(x) + self.event_shape()\n \n- def batch_log_pdf(self, x, v=None, batch_size=1):\n- v = self._sanitize_input(v)\n- if x.dim == 1:\n- x = x.expand(batch_size, x.size(0))\n- return (torch.eq(x, v.expand_as(x)) - 1).float() * 999999\n+ def sample(self):\n+ return self.v\n \n- def log_pdf(self, x, v=None, *args, **kwargs):\n- v = self._sanitize_input(v)\n- if torch.equal(x.data, v.data.expand_as(x.data)):\n- return Variable(torch.zeros(1).type_as(v.data))\n- return Variable(torch.Tensor([-float(\"inf\")]).type_as(v.data))\n+ def batch_log_pdf(self, x):\n+ v = self.v\n+ if x.size() != v.size():\n+ v = v.expand_as(x)\n+ return torch.sum(torch.eq(x, v).float().log(), -1)\n \n def support(self, v=None):\n \"\"\"\n@@ -62,6 +58,4 @@\n :return: torch variable enumerating the support of the delta distribution.\n :rtype: torch.autograd.Variable.\n \"\"\"\n- v = self._sanitize_input(v)\n- # univariate case\n- return Variable(v.data)\n+ return Variable(self.v.data)\n", "issue": "Incorrect result from Delta's batch_log_pdf\nIt looks like there may be a bug in Delta's `batch_log_pdf` method. When the value we're computing the log prob of doesn't match the parameter I expect `batch_log_pdf` to return `-inf` but it doesn't. 
For example:\r\n\r\n```\r\nx = Variable(torch.Tensor([[1.0]]))\r\ny = Variable(torch.Tensor([[2.0]]))\r\n\r\n# This is OK, it returns zero as expected:\r\nprint(Delta(x).batch_log_pdf(x))\r\n\r\n# Here I expect -inf, but get 2.5500e+08\r\nprint(Delta(x).batch_log_pdf(y))\r\n```\r\n\r\n`log_pdf` works as expected.\r\n\r\n(This isn't high priority for me.)\n", "before_files": [{"content": "# abstract base class\nfrom pyro.distributions.bernoulli import Bernoulli\nfrom pyro.distributions.beta import Beta\nfrom pyro.distributions.categorical import Categorical\nfrom pyro.distributions.cauchy import Cauchy\nfrom pyro.distributions.half_cauchy import HalfCauchy\nfrom pyro.distributions.delta import Delta\nfrom pyro.distributions.distribution import Distribution # noqa: F401\n# specific distributions\nfrom pyro.distributions.diag_normal import DiagNormal\nfrom pyro.distributions.dirichlet import Dirichlet\nfrom pyro.distributions.exponential import Exponential\nfrom pyro.distributions.gamma import Gamma\nfrom pyro.distributions.log_normal import LogNormal\nfrom pyro.distributions.multinomial import Multinomial\nfrom pyro.distributions.poisson import Poisson\nfrom pyro.distributions.random_primitive import RandomPrimitive\nfrom pyro.distributions.uniform import Uniform\n\n# function aliases\ndiagnormal = DiagNormal()\nlognormal = RandomPrimitive(LogNormal)\ncategorical = Categorical()\nbernoulli = RandomPrimitive(Bernoulli)\nbeta = RandomPrimitive(Beta)\ndelta = Delta()\nexponential = RandomPrimitive(Exponential)\ngamma = RandomPrimitive(Gamma)\nmultinomial = RandomPrimitive(Multinomial)\npoisson = RandomPrimitive(Poisson)\nuniform = RandomPrimitive(Uniform)\ndirichlet = RandomPrimitive(Dirichlet)\ncauchy = RandomPrimitive(Cauchy)\nhalfcauchy = RandomPrimitive(HalfCauchy)\n", "path": "pyro/distributions/__init__.py"}, {"content": "import torch\nfrom torch.autograd import Variable\n\nfrom pyro.distributions.distribution import Distribution\n\n\nclass Delta(Distribution):\n \"\"\"\n :param v: support element *(any)*\n\n Discrete distribution that assigns probability one to the single element in\n its support. 
Delta distribution parameterized by a random choice should not\n be used with MCMC based inference, as doing so produces incorrect results.\n \"\"\"\n enumerable = True\n\n def _sanitize_input(self, v):\n if v is not None:\n # stateless distribution\n return v\n elif self.v is not None:\n # stateful distribution\n return self.v\n else:\n raise ValueError(\"Parameter(s) were None\")\n\n def __init__(self, v=None, batch_size=1, *args, **kwargs):\n \"\"\"\n Params:\n `v` - value\n \"\"\"\n self.v = v\n if v is not None:\n if v.dim() == 1 and batch_size > 1:\n self.v = v.expand(v, v.size(0))\n super(Delta, self).__init__(*args, **kwargs)\n\n def sample(self, v=None):\n v = self._sanitize_input(v)\n if isinstance(v, Variable):\n return v\n return Variable(v)\n\n def batch_log_pdf(self, x, v=None, batch_size=1):\n v = self._sanitize_input(v)\n if x.dim == 1:\n x = x.expand(batch_size, x.size(0))\n return (torch.eq(x, v.expand_as(x)) - 1).float() * 999999\n\n def log_pdf(self, x, v=None, *args, **kwargs):\n v = self._sanitize_input(v)\n if torch.equal(x.data, v.data.expand_as(x.data)):\n return Variable(torch.zeros(1).type_as(v.data))\n return Variable(torch.Tensor([-float(\"inf\")]).type_as(v.data))\n\n def support(self, v=None):\n \"\"\"\n Returns the delta distribution's support, as a tensor along the first dimension.\n\n :param v: torch variable where each element of the tensor represents the point at\n which the delta distribution is concentrated.\n :return: torch variable enumerating the support of the delta distribution.\n :rtype: torch.autograd.Variable.\n \"\"\"\n v = self._sanitize_input(v)\n # univariate case\n return Variable(v.data)\n", "path": "pyro/distributions/delta.py"}], "after_files": [{"content": "# abstract base class\nfrom pyro.distributions.bernoulli import Bernoulli\nfrom pyro.distributions.beta import Beta\nfrom pyro.distributions.categorical import Categorical\nfrom pyro.distributions.cauchy import Cauchy\nfrom pyro.distributions.half_cauchy import HalfCauchy\nfrom pyro.distributions.delta import Delta\nfrom pyro.distributions.distribution import Distribution # noqa: F401\n# specific distributions\nfrom pyro.distributions.diag_normal import DiagNormal\nfrom pyro.distributions.dirichlet import Dirichlet\nfrom pyro.distributions.exponential import Exponential\nfrom pyro.distributions.gamma import Gamma\nfrom pyro.distributions.log_normal import LogNormal\nfrom pyro.distributions.multinomial import Multinomial\nfrom pyro.distributions.poisson import Poisson\nfrom pyro.distributions.random_primitive import RandomPrimitive\nfrom pyro.distributions.uniform import Uniform\n\n# function aliases\ndiagnormal = DiagNormal()\nlognormal = RandomPrimitive(LogNormal)\ncategorical = Categorical()\nbernoulli = RandomPrimitive(Bernoulli)\nbeta = RandomPrimitive(Beta)\ndelta = RandomPrimitive(Delta)\nexponential = RandomPrimitive(Exponential)\ngamma = RandomPrimitive(Gamma)\nmultinomial = RandomPrimitive(Multinomial)\npoisson = RandomPrimitive(Poisson)\nuniform = RandomPrimitive(Uniform)\ndirichlet = RandomPrimitive(Dirichlet)\ncauchy = RandomPrimitive(Cauchy)\nhalfcauchy = RandomPrimitive(HalfCauchy)\n", "path": "pyro/distributions/__init__.py"}, {"content": "import torch\nfrom torch.autograd import Variable\n\nfrom pyro.distributions.distribution import Distribution\n\n\nclass Delta(Distribution):\n \"\"\"\n :param v: support element *(any)*\n\n Discrete distribution that assigns probability one to the single element in\n its support. 
Delta distribution parameterized by a random choice should not\n be used with MCMC based inference, as doing so produces incorrect results.\n \"\"\"\n enumerable = True\n\n def __init__(self, v, batch_size=None, *args, **kwargs):\n \"\"\"\n Params:\n `v` - value\n \"\"\"\n self.v = v\n if not isinstance(self.v, Variable):\n self.v = Variable(self.v)\n if v.dim() == 1 and batch_size is not None:\n self.v = v.expand(v, v.size(0))\n super(Delta, self).__init__(*args, **kwargs)\n\n def batch_shape(self, x=None):\n event_dim = 1\n v = self.v\n if x is not None and x.size() != v.size():\n v = self.v.expand_as(x)\n return v.size()[:-event_dim]\n\n def event_shape(self):\n event_dim = 1\n return self.v.size()[-event_dim:]\n\n def shape(self, x=None):\n return self.batch_shape(x) + self.event_shape()\n\n def sample(self):\n return self.v\n\n def batch_log_pdf(self, x):\n v = self.v\n if x.size() != v.size():\n v = v.expand_as(x)\n return torch.sum(torch.eq(x, v).float().log(), -1)\n\n def support(self, v=None):\n \"\"\"\n Returns the delta distribution's support, as a tensor along the first dimension.\n\n :param v: torch variable where each element of the tensor represents the point at\n which the delta distribution is concentrated.\n :return: torch variable enumerating the support of the delta distribution.\n :rtype: torch.autograd.Variable.\n \"\"\"\n return Variable(self.v.data)\n", "path": "pyro/distributions/delta.py"}]} | 1,472 | 859 |
gh_patches_debug_7945 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-4253 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `colossalai/booster/mixed_precision/mixed_precision_base.py`
Content:
```
1 from abc import ABC, abstractmethod
2 from typing import Callable, Optional, Tuple
3
4 import torch.nn as nn
5 from torch.optim import Optimizer
6
7 from colossalai.interface import OptimizerWrapper
8
9
10 class MixedPrecision(ABC):
11 """
12 An abstract class for mixed precision training.
13 """
14
15 @abstractmethod
16 def configure(self,
17 model: nn.Module,
18 optimizer: Optional[Optimizer] = None,
19 criterion: Optional[Callable] = None,
20 ) -> Tuple[nn.Module, OptimizerWrapper, Callable]:
21 # TODO: implement this method
22 pass
23
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/colossalai/booster/mixed_precision/mixed_precision_base.py b/colossalai/booster/mixed_precision/mixed_precision_base.py
--- a/colossalai/booster/mixed_precision/mixed_precision_base.py
+++ b/colossalai/booster/mixed_precision/mixed_precision_base.py
@@ -13,10 +13,11 @@
"""
@abstractmethod
- def configure(self,
- model: nn.Module,
- optimizer: Optional[Optimizer] = None,
- criterion: Optional[Callable] = None,
- ) -> Tuple[nn.Module, OptimizerWrapper, Callable]:
+ def configure(
+ self,
+ model: nn.Module,
+ optimizer: Optional[Optimizer] = None,
+ criterion: Optional[Callable] = None,
+ ) -> Tuple[nn.Module, OptimizerWrapper, Callable]:
# TODO: implement this method
pass
| {"golden_diff": "diff --git a/colossalai/booster/mixed_precision/mixed_precision_base.py b/colossalai/booster/mixed_precision/mixed_precision_base.py\n--- a/colossalai/booster/mixed_precision/mixed_precision_base.py\n+++ b/colossalai/booster/mixed_precision/mixed_precision_base.py\n@@ -13,10 +13,11 @@\n \"\"\"\n \n @abstractmethod\n- def configure(self,\n- model: nn.Module,\n- optimizer: Optional[Optimizer] = None,\n- criterion: Optional[Callable] = None,\n- ) -> Tuple[nn.Module, OptimizerWrapper, Callable]:\n+ def configure(\n+ self,\n+ model: nn.Module,\n+ optimizer: Optional[Optimizer] = None,\n+ criterion: Optional[Callable] = None,\n+ ) -> Tuple[nn.Module, OptimizerWrapper, Callable]:\n # TODO: implement this method\n pass\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "from abc import ABC, abstractmethod\nfrom typing import Callable, Optional, Tuple\n\nimport torch.nn as nn\nfrom torch.optim import Optimizer\n\nfrom colossalai.interface import OptimizerWrapper\n\n\nclass MixedPrecision(ABC):\n \"\"\"\n An abstract class for mixed precision training.\n \"\"\"\n\n @abstractmethod\n def configure(self,\n model: nn.Module,\n optimizer: Optional[Optimizer] = None,\n criterion: Optional[Callable] = None,\n ) -> Tuple[nn.Module, OptimizerWrapper, Callable]:\n # TODO: implement this method\n pass\n", "path": "colossalai/booster/mixed_precision/mixed_precision_base.py"}], "after_files": [{"content": "from abc import ABC, abstractmethod\nfrom typing import Callable, Optional, Tuple\n\nimport torch.nn as nn\nfrom torch.optim import Optimizer\n\nfrom colossalai.interface import OptimizerWrapper\n\n\nclass MixedPrecision(ABC):\n \"\"\"\n An abstract class for mixed precision training.\n \"\"\"\n\n @abstractmethod\n def configure(\n self,\n model: nn.Module,\n optimizer: Optional[Optimizer] = None,\n criterion: Optional[Callable] = None,\n ) -> Tuple[nn.Module, OptimizerWrapper, Callable]:\n # TODO: implement this method\n pass\n", "path": "colossalai/booster/mixed_precision/mixed_precision_base.py"}]} | 452 | 207 |
gh_patches_debug_16262 | rasdani/github-patches | git_diff | web2py__web2py-1459 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
using web2py with Passenger
I'm using web2py on [Dreamhost](https://www.dreamhost.com/) with the recommended [Phusion Passenger](https://www.phusionpassenger.com/) as the deployment web server. I configured the web2py [handlers/wsgihandler.py](https://github.com/web2py/web2py/blob/master/handlers/wsgihandler.py) as recommended by Dreamhost users (adding lines to launch python in a virtual environment), creating passenger_wsgi.py. With just these edits, Passenger reported a timeout error from the application (web2py).
The Passenger [debugging startup page](https://github.com/phusion/passenger/wiki/Debugging-application-startup-problems) notes that Passenger uses stdout for communication with the application and assumes that it is not "closed, overwritten, or redirected" by the application.
Web2py overwrites stdout with stderr on line 31 in wsgihandler.py. When I comment out this line, web2py (and Passenger) start up just fine and seem to work, although I haven't done much testing beyond startup.
So, my question is, is this line necessary? If so, is there a fix to make this setup work properly? If web2py requires both file descriptors pointing to the same place, maybe set stderr = stdout instead?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `handlers/wsgihandler.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 This file is part of the web2py Web Framework
6 Copyrighted by Massimo Di Pierro <[email protected]>
7 License: LGPLv3 (http://www.gnu.org/licenses/lgpl.html)
8
9
10 This is a WSGI handler for Apache
11 Requires apache+mod_wsgi.
12
13 In httpd.conf put something like:
14
15 LoadModule wsgi_module modules/mod_wsgi.so
16 WSGIScriptAlias / /path/to/wsgihandler.py
17
18 """
19
20 # change these parameters as required
21 LOGGING = False
22 SOFTCRON = False
23
24 import sys
25 import os
26
27 path = os.path.dirname(os.path.abspath(__file__))
28 os.chdir(path)
29
30 if not os.path.isdir('applications'):
31 raise RuntimeError('Running from the wrong folder')
32
33 sys.path = [path] + [p for p in sys.path if not p == path]
34
35 sys.stdout = sys.stderr
36
37 import gluon.main
38
39 if LOGGING:
40 application = gluon.main.appfactory(wsgiapp=gluon.main.wsgibase,
41 logfilename='httpserver.log',
42 profiler_dir=None)
43 else:
44 application = gluon.main.wsgibase
45
46 if SOFTCRON:
47 from gluon.settings import global_settings
48 global_settings.web2py_crontype = 'soft'
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/handlers/wsgihandler.py b/handlers/wsgihandler.py
--- a/handlers/wsgihandler.py
+++ b/handlers/wsgihandler.py
@@ -7,22 +7,16 @@
License: LGPLv3 (http://www.gnu.org/licenses/lgpl.html)
-This is a WSGI handler for Apache
-Requires apache+mod_wsgi.
-
-In httpd.conf put something like:
-
- LoadModule wsgi_module modules/mod_wsgi.so
- WSGIScriptAlias / /path/to/wsgihandler.py
-
+This is a WSGI handler
"""
+import sys
+import os
+
# change these parameters as required
LOGGING = False
SOFTCRON = False
-import sys
-import os
path = os.path.dirname(os.path.abspath(__file__))
os.chdir(path)
@@ -32,8 +26,6 @@
sys.path = [path] + [p for p in sys.path if not p == path]
-sys.stdout = sys.stderr
-
import gluon.main
if LOGGING:
| {"golden_diff": "diff --git a/handlers/wsgihandler.py b/handlers/wsgihandler.py\n--- a/handlers/wsgihandler.py\n+++ b/handlers/wsgihandler.py\n@@ -7,22 +7,16 @@\n License: LGPLv3 (http://www.gnu.org/licenses/lgpl.html)\n \n \n-This is a WSGI handler for Apache\n-Requires apache+mod_wsgi.\n-\n-In httpd.conf put something like:\n-\n- LoadModule wsgi_module modules/mod_wsgi.so\n- WSGIScriptAlias / /path/to/wsgihandler.py\n-\n+This is a WSGI handler\n \"\"\"\n \n+import sys\n+import os\n+\n # change these parameters as required\n LOGGING = False\n SOFTCRON = False\n \n-import sys\n-import os\n \n path = os.path.dirname(os.path.abspath(__file__))\n os.chdir(path)\n@@ -32,8 +26,6 @@\n \n sys.path = [path] + [p for p in sys.path if not p == path]\n \n-sys.stdout = sys.stderr\n-\n import gluon.main\n \n if LOGGING:\n", "issue": "using web2py with Passenger\nI'm using web2py on [Dreamhost ](https://www.dreamhost.com/)with the recommended [Phusion Passenger](https://www.phusionpassenger.com/) as the deployment web server. I configured the web2py [handlers/wsgihandler.py](https://github.com/web2py/web2py/blob/master/handlers/wsgihandler.py) as recommended by Dreamhost users (adding lines to launch python in a virtual environment), creating passenger_wsgi.py. With just these edits, Passenger reported a timeout error from the application (web2py). \n\nThe Passenger [debugging startup page](https://github.com/phusion/passenger/wiki/Debugging-application-startup-problems) notes that Passenger uses stdout for communication with the application and assumes that it is not \"closed, overwritten, or redirected\" by the application. \n\nWeb2py overwrites stdout with stderr on line 31 in wsgihandler.py. When I comment out this line, web2py (and Passenger) startup just fine and seem to work, although I haven't done much testing beyond startup.\n\nSo, my question is, is this line necessary? If so, is there a fix to make this setup work properly? 
If web2py requires both file descriptors pointing to the same place, maybe set stderr = stdout instead?\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nThis file is part of the web2py Web Framework\nCopyrighted by Massimo Di Pierro <[email protected]>\nLicense: LGPLv3 (http://www.gnu.org/licenses/lgpl.html)\n\n\nThis is a WSGI handler for Apache\nRequires apache+mod_wsgi.\n\nIn httpd.conf put something like:\n\n LoadModule wsgi_module modules/mod_wsgi.so\n WSGIScriptAlias / /path/to/wsgihandler.py\n\n\"\"\"\n\n# change these parameters as required\nLOGGING = False\nSOFTCRON = False\n\nimport sys\nimport os\n\npath = os.path.dirname(os.path.abspath(__file__))\nos.chdir(path)\n\nif not os.path.isdir('applications'):\n raise RuntimeError('Running from the wrong folder')\n\nsys.path = [path] + [p for p in sys.path if not p == path]\n\nsys.stdout = sys.stderr\n\nimport gluon.main\n\nif LOGGING:\n application = gluon.main.appfactory(wsgiapp=gluon.main.wsgibase,\n logfilename='httpserver.log',\n profiler_dir=None)\nelse:\n application = gluon.main.wsgibase\n\nif SOFTCRON:\n from gluon.settings import global_settings\n global_settings.web2py_crontype = 'soft'\n", "path": "handlers/wsgihandler.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nThis file is part of the web2py Web Framework\nCopyrighted by Massimo Di Pierro <[email protected]>\nLicense: LGPLv3 (http://www.gnu.org/licenses/lgpl.html)\n\n\nThis is a WSGI handler\n\"\"\"\n\nimport sys\nimport os\n\n# change these parameters as required\nLOGGING = False\nSOFTCRON = False\n\n\npath = os.path.dirname(os.path.abspath(__file__))\nos.chdir(path)\n\nif not os.path.isdir('applications'):\n raise RuntimeError('Running from the wrong folder')\n\nsys.path = [path] + [p for p in sys.path if not p == path]\n\nimport gluon.main\n\nif LOGGING:\n application = gluon.main.appfactory(wsgiapp=gluon.main.wsgibase,\n logfilename='httpserver.log',\n profiler_dir=None)\nelse:\n application = gluon.main.wsgibase\n\nif SOFTCRON:\n from gluon.settings import global_settings\n global_settings.web2py_crontype = 'soft'\n", "path": "handlers/wsgihandler.py"}]} | 932 | 243 |
gh_patches_debug_24420 | rasdani/github-patches | git_diff | nautobot__nautobot-1381 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
nautobot-scheduler can't apply celery_backend_cleanup_1
<!--
NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.
This form is only for reporting reproducible bugs. If you need assistance
with Nautobot installation, or if you have a general question, please start a
discussion instead: https://github.com/nautobot/nautobot/discussions
Please describe the environment in which you are running Nautobot. Be sure
that you are running an unmodified instance of the latest stable release
before submitting a bug report, and that any plugins have been disabled.
-->
### Environment
* Python version: 3.9
* Nautobot version: 1.2.5
<!--
Describe in detail the exact steps that someone else can take to reproduce
this bug using the current stable release of Nautobot. Begin with the
creation of any necessary database objects and call out every operation
being performed explicitly. If reporting a bug in the REST API, be sure to
reconstruct the raw HTTP request(s) being made: Don't rely on a client
library such as pynautobot.
-->
### Steps to Reproduce
1. Run the scheduler as a systemd service
2.
3.
<!-- What did you expect to happen? -->
### Expected Behavior
Internal jobs run without an error
<!-- What happened instead? -->
### Observed Behavior
```
Feb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: [2022-02-08 04:00:00,000: INFO/MainProcess] Scheduler: Sending due task celery.backend_cleanup_1 (celery.backend_cleanup)
Feb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: [2022-02-08 04:00:00,001: ERROR/MainProcess] Message Error: Couldn't apply scheduled task celery.backend_cleanup_1: 'str' object has no attribute 'items'
Feb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: [' File "/opt/nautobot/bin/nautobot-server", line 8, in <module>\n sys.exit(main())\n', ' File "/opt/nautobot/lib/python3.9/site-packages/nautobot/cor>
Feb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: Traceback (most recent call last):
Feb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: File "/opt/nautobot/lib/python3.9/site-packages/celery/beat.py", line 402, in apply_async
Feb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: entry_kwargs = _evaluate_entry_kwargs(entry.kwargs)
Feb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: File "/opt/nautobot/lib/python3.9/site-packages/celery/beat.py", line 220, in _evaluate_entry_kwargs
Feb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: for k, v in entry_kwargs.items()
Feb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: AttributeError: 'str' object has no attribute 'items'
```
I see this error every day.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nautobot/core/celery/schedulers.py`
Content:
```
1 import logging
2
3 from celery import current_app
4 from django_celery_beat.schedulers import ModelEntry, DatabaseScheduler
5
6 from nautobot.extras.models import ScheduledJob, ScheduledJobs
7
8
9 logger = logging.getLogger(__name__)
10
11
12 class NautobotScheduleEntry(ModelEntry):
13 """
14 Nautobot variant of the django-celery-beat ModelEntry which uses the
15 nautobot.extras.models.ScheduledJob model
16 """
17
18 def __init__(self, model, app=None):
19 """Initialize the model entry."""
20 self.app = app or current_app._get_current_object()
21 self.name = "{}_{}".format(model.name, model.pk)
22 self.task = model.task
23 self.args = model.args
24 self.kwargs = model.kwargs
25 try:
26 self.schedule = model.schedule
27 except model.DoesNotExist:
28 logger.error(
29 "Disabling schedule %s that was removed from database",
30 self.name,
31 )
32 self._disable(model)
33
34 self.options = {}
35 if model.queue:
36 self.options["queue"] = model.queue
37
38 self.options["headers"] = {}
39 self.total_run_count = model.total_run_count
40 self.model = model
41
42 if not model.last_run_at:
43 model.last_run_at = self._default_now()
44
45 self.last_run_at = model.last_run_at
46
47
48 class NautobotDatabaseScheduler(DatabaseScheduler):
49 """
50 Nautobot variant of the django-celery-beat DatabaseScheduler which uses the
51 nautobot.extras.models.ScheduledJob model
52 """
53
54 Entry = NautobotScheduleEntry
55 Model = ScheduledJob
56 Changes = ScheduledJobs
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nautobot/core/celery/schedulers.py b/nautobot/core/celery/schedulers.py
--- a/nautobot/core/celery/schedulers.py
+++ b/nautobot/core/celery/schedulers.py
@@ -2,6 +2,7 @@
from celery import current_app
from django_celery_beat.schedulers import ModelEntry, DatabaseScheduler
+from kombu.utils.json import loads
from nautobot.extras.models import ScheduledJob, ScheduledJobs
@@ -20,8 +21,14 @@
self.app = app or current_app._get_current_object()
self.name = "{}_{}".format(model.name, model.pk)
self.task = model.task
- self.args = model.args
- self.kwargs = model.kwargs
+ try:
+ # Nautobot scheduled jobs pass args/kwargs as constructed objects,
+ # but Celery built-in jobs such as celery.backend_cleanup pass them as JSON to be parsed
+ self.args = model.args if isinstance(model.args, (tuple, list)) else loads(model.args or "[]")
+ self.kwargs = model.kwargs if isinstance(model.kwargs, dict) else loads(model.kwargs or "{}")
+ except (TypeError, ValueError) as exc:
+ logger.exception("Removing schedule %s for argument deserialization error: %s", self.name, exc)
+ self._disable(model)
try:
self.schedule = model.schedule
except model.DoesNotExist:
| {"golden_diff": "diff --git a/nautobot/core/celery/schedulers.py b/nautobot/core/celery/schedulers.py\n--- a/nautobot/core/celery/schedulers.py\n+++ b/nautobot/core/celery/schedulers.py\n@@ -2,6 +2,7 @@\n \n from celery import current_app\n from django_celery_beat.schedulers import ModelEntry, DatabaseScheduler\n+from kombu.utils.json import loads\n \n from nautobot.extras.models import ScheduledJob, ScheduledJobs\n \n@@ -20,8 +21,14 @@\n self.app = app or current_app._get_current_object()\n self.name = \"{}_{}\".format(model.name, model.pk)\n self.task = model.task\n- self.args = model.args\n- self.kwargs = model.kwargs\n+ try:\n+ # Nautobot scheduled jobs pass args/kwargs as constructed objects,\n+ # but Celery built-in jobs such as celery.backend_cleanup pass them as JSON to be parsed\n+ self.args = model.args if isinstance(model.args, (tuple, list)) else loads(model.args or \"[]\")\n+ self.kwargs = model.kwargs if isinstance(model.kwargs, dict) else loads(model.kwargs or \"{}\")\n+ except (TypeError, ValueError) as exc:\n+ logger.exception(\"Removing schedule %s for argument deserialization error: %s\", self.name, exc)\n+ self._disable(model)\n try:\n self.schedule = model.schedule\n except model.DoesNotExist:\n", "issue": "nautobot-scheduler can't apply celery_backend_cleanup_1\n<!--\r\n NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.\r\n\r\n This form is only for reporting reproducible bugs. If you need assistance\r\n with Nautobot installation, or if you have a general question, please start a\r\n discussion instead: https://github.com/nautobot/nautobot/discussions\r\n\r\n Please describe the environment in which you are running Nautobot. Be sure\r\n that you are running an unmodified instance of the latest stable release\r\n before submitting a bug report, and that any plugins have been disabled.\r\n-->\r\n### Environment\r\n* Python version: 3.9\r\n* Nautobot version: 1.2.5\r\n\r\n<!--\r\n Describe in detail the exact steps that someone else can take to reproduce\r\n this bug using the current stable release of Nautobot. Begin with the\r\n creation of any necessary database objects and call out every operation\r\n being performed explicitly. If reporting a bug in the REST API, be sure to\r\n reconstruct the raw HTTP request(s) being made: Don't rely on a client\r\n library such as pynautobot.\r\n-->\r\n### Steps to Reproduce\r\n1. Run the scheduler as systemd-service\r\n2.\r\n3.\r\n\r\n<!-- What did you expect to happen? -->\r\n### Expected Behavior\r\nInternal jobs run without an error\r\n\r\n<!-- What happened instead? 
-->\r\n### Observed Behavior\r\n```\r\nFeb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: [2022-02-08 04:00:00,000: INFO/MainProcess] Scheduler: Sending due task celery.backend_cleanup_1 (celery.backend_cleanup)\r\nFeb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: [2022-02-08 04:00:00,001: ERROR/MainProcess] Message Error: Couldn't apply scheduled task celery.backend_cleanup_1: 'str' object has no attribute 'items'\r\nFeb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: [' File \"/opt/nautobot/bin/nautobot-server\", line 8, in <module>\\n sys.exit(main())\\n', ' File \"/opt/nautobot/lib/python3.9/site-packages/nautobot/cor>\r\nFeb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: Traceback (most recent call last):\r\nFeb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: File \"/opt/nautobot/lib/python3.9/site-packages/celery/beat.py\", line 402, in apply_async\r\nFeb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: entry_kwargs = _evaluate_entry_kwargs(entry.kwargs)\r\nFeb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: File \"/opt/nautobot/lib/python3.9/site-packages/celery/beat.py\", line 220, in _evaluate_entry_kwargs\r\nFeb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: for k, v in entry_kwargs.items()\r\nFeb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: AttributeError: 'str' object has no attribute 'items'\r\n```\r\n\r\nI see this error for every day.\n", "before_files": [{"content": "import logging\n\nfrom celery import current_app\nfrom django_celery_beat.schedulers import ModelEntry, DatabaseScheduler\n\nfrom nautobot.extras.models import ScheduledJob, ScheduledJobs\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass NautobotScheduleEntry(ModelEntry):\n \"\"\"\n Nautobot variant of the django-celery-beat ModelEntry which uses the\n nautobot.extras.models.ScheduledJob model\n \"\"\"\n\n def __init__(self, model, app=None):\n \"\"\"Initialize the model entry.\"\"\"\n self.app = app or current_app._get_current_object()\n self.name = \"{}_{}\".format(model.name, model.pk)\n self.task = model.task\n self.args = model.args\n self.kwargs = model.kwargs\n try:\n self.schedule = model.schedule\n except model.DoesNotExist:\n logger.error(\n \"Disabling schedule %s that was removed from database\",\n self.name,\n )\n self._disable(model)\n\n self.options = {}\n if model.queue:\n self.options[\"queue\"] = model.queue\n\n self.options[\"headers\"] = {}\n self.total_run_count = model.total_run_count\n self.model = model\n\n if not model.last_run_at:\n model.last_run_at = self._default_now()\n\n self.last_run_at = model.last_run_at\n\n\nclass NautobotDatabaseScheduler(DatabaseScheduler):\n \"\"\"\n Nautobot variant of the django-celery-beat DatabaseScheduler which uses the\n nautobot.extras.models.ScheduledJob model\n \"\"\"\n\n Entry = NautobotScheduleEntry\n Model = ScheduledJob\n Changes = ScheduledJobs\n", "path": "nautobot/core/celery/schedulers.py"}], "after_files": [{"content": "import logging\n\nfrom celery import current_app\nfrom django_celery_beat.schedulers import ModelEntry, DatabaseScheduler\nfrom kombu.utils.json import loads\n\nfrom nautobot.extras.models import ScheduledJob, ScheduledJobs\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass NautobotScheduleEntry(ModelEntry):\n \"\"\"\n Nautobot variant of the django-celery-beat ModelEntry which uses the\n nautobot.extras.models.ScheduledJob model\n \"\"\"\n\n def __init__(self, model, app=None):\n \"\"\"Initialize the model entry.\"\"\"\n self.app = app or current_app._get_current_object()\n self.name 
= \"{}_{}\".format(model.name, model.pk)\n self.task = model.task\n try:\n # Nautobot scheduled jobs pass args/kwargs as constructed objects,\n # but Celery built-in jobs such as celery.backend_cleanup pass them as JSON to be parsed\n self.args = model.args if isinstance(model.args, (tuple, list)) else loads(model.args or \"[]\")\n self.kwargs = model.kwargs if isinstance(model.kwargs, dict) else loads(model.kwargs or \"{}\")\n except (TypeError, ValueError) as exc:\n logger.exception(\"Removing schedule %s for argument deserialization error: %s\", self.name, exc)\n self._disable(model)\n try:\n self.schedule = model.schedule\n except model.DoesNotExist:\n logger.error(\n \"Disabling schedule %s that was removed from database\",\n self.name,\n )\n self._disable(model)\n\n self.options = {}\n if model.queue:\n self.options[\"queue\"] = model.queue\n\n self.options[\"headers\"] = {}\n self.total_run_count = model.total_run_count\n self.model = model\n\n if not model.last_run_at:\n model.last_run_at = self._default_now()\n\n self.last_run_at = model.last_run_at\n\n\nclass NautobotDatabaseScheduler(DatabaseScheduler):\n \"\"\"\n Nautobot variant of the django-celery-beat DatabaseScheduler which uses the\n nautobot.extras.models.ScheduledJob model\n \"\"\"\n\n Entry = NautobotScheduleEntry\n Model = ScheduledJob\n Changes = ScheduledJobs\n", "path": "nautobot/core/celery/schedulers.py"}]} | 1,615 | 314 |
gh_patches_debug_7072 | rasdani/github-patches | git_diff | StackStorm__st2-4407 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CLI ERROR: 'unicodeescape' codec can't decode bytes in position 238-239: truncated \UXXXXXXXX escape
##### SUMMARY
Running a command from the vSphere pack that executes a command on a Windows guest and returns a string is successful, and I can see it in the st2 GUI, but the command-line response indicates a problem handling the data.
##### ISSUE TYPE
- Bug Report
##### STACKSTORM VERSION
st2 3.0dev (5377ebb), on Python 2.7.6
##### OS / ENVIRONMENT / INSTALL METHOD
st2-docker with ST2_IMAGE_TAG=dev
##### STEPS TO REPRODUCE
Requires a working vSphere environment.
Install the vSphere pack with guest extensions from my branch:
```
# st2 pack install https://github.com/jeking3/stackstorm-vsphere.git=guestops
```
Run a command against a guest, for example:
```
root@b9bab223e4fa:/# st2 run vsphere.guest_dir_create vm_id=vm-45 username=Administrator password=******** prefix=issue_ suffix=_test
.
ERROR: 'unicodeescape' codec can't decode bytes in position 238-239: truncated \UXXXXXXXX escape
```
##### EXPECTED RESULTS
The action output in the GUI is reasonable:
```
{
"result": "C:\\Users\\ADMINI~1\\AppData\\Local\\Temp\\jking_vmware20_test",
"exit_code": 0,
"stderr": "/opt/stackstorm/st2/lib/python2.7/site-packages/cryptography/hazmat/primitives/constant_time.py:26: CryptographyDeprecationWarning: Support for your Python version is deprecated. The next version of cryptography will remove support. Please upgrade to a 2.7.x release that supports hmac.compare_digest as soon as possible.\n utils.DeprecatedIn23,\n",
"stdout": ""
}
```
However the CLI choked.
##### ACTUAL RESULTS
```
ERROR: 'unicodeescape' codec can't decode bytes in position 238-239: truncated \UXXXXXXXX escape
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `st2client/st2client/formatters/execution.py`
Content:
```
1 # Licensed to the StackStorm, Inc ('StackStorm') under one or more
2 # contributor license agreements. See the NOTICE file distributed with
3 # this work for additional information regarding copyright ownership.
4 # The ASF licenses this file to You under the Apache License, Version 2.0
5 # (the "License"); you may not use this file except in compliance with
6 # the License. You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 from __future__ import absolute_import
17
18 import ast
19 import logging
20 import struct
21
22 import yaml
23
24 from st2client import formatters
25 from st2client.config import get_config
26 from st2client.utils import jsutil
27 from st2client.utils import strutil
28 from st2client.utils.color import DisplayColors
29 from st2client.utils import schema
30 import six
31
32
33 LOG = logging.getLogger(__name__)
34
35 PLATFORM_MAXINT = 2 ** (struct.Struct('i').size * 8 - 1) - 1
36
37
38 def _print_bordered(text):
39 lines = text.split('\n')
40 width = max(len(s) for s in lines) + 2
41 res = ['\n+' + '-' * width + '+']
42 for s in lines:
43 res.append('| ' + (s + ' ' * width)[:width - 2] + ' |')
44 res.append('+' + '-' * width + '+')
45 return '\n'.join(res)
46
47
48 class ExecutionResult(formatters.Formatter):
49
50 @classmethod
51 def format(cls, entry, *args, **kwargs):
52 attrs = kwargs.get('attributes', [])
53 attribute_transform_functions = kwargs.get('attribute_transform_functions', {})
54 key = kwargs.get('key', None)
55 if key:
56 output = jsutil.get_value(entry.result, key)
57 else:
58 # drop entry to the dict so that jsutil can operate
59 entry = vars(entry)
60 output = ''
61 for attr in attrs:
62 value = jsutil.get_value(entry, attr)
63 value = strutil.strip_carriage_returns(strutil.unescape(value))
64 # TODO: This check is inherently flawed since it will crash st2client
65 # if the leading character is objectish start and last character is objectish
66 # end but the string isn't supposed to be a object. Try/Except will catch
67 # this for now, but this should be improved.
68 if (isinstance(value, six.string_types) and len(value) > 0 and
69 value[0] in ['{', '['] and value[len(value) - 1] in ['}', ']']):
70 try:
71 new_value = ast.literal_eval(value)
72 except:
73 new_value = value
74 if type(new_value) in [dict, list]:
75 value = new_value
76 if type(value) in [dict, list]:
77 # 1. To get a nice overhang indent get safe_dump to generate output with
78 # the attribute key and then remove the attribute key from the string.
79 # 2. Drop the trailing newline
80 # 3. Set width to maxint so pyyaml does not split text. Anything longer
81 # and likely we will see other issues like storage :P.
82 formatted_value = yaml.safe_dump({attr: value},
83 default_flow_style=False,
84 width=PLATFORM_MAXINT,
85 indent=2)[len(attr) + 2:-1]
86 value = ('\n' if isinstance(value, dict) else '') + formatted_value
87 value = strutil.dedupe_newlines(value)
88
89 # transform the value of our attribute so things like 'status'
90 # and 'timestamp' are formatted nicely
91 transform_function = attribute_transform_functions.get(attr,
92 lambda value: value)
93 value = transform_function(value=value)
94
95 output += ('\n' if output else '') + '%s: %s' % \
96 (DisplayColors.colorize(attr, DisplayColors.BLUE), value)
97
98 output_schema = entry.get('action', {}).get('output_schema')
99 schema_check = get_config()['general']['silence_schema_output']
100 if not output_schema and kwargs.get('with_schema'):
101 rendered_schema = {
102 'output_schema': schema.render_output_schema_from_output(entry['result'])
103 }
104
105 rendered_schema = yaml.safe_dump(rendered_schema, default_flow_style=False)
106 output += '\n'
107 output += _print_bordered(
108 "Based on the action output the following inferred schema was built:"
109 "\n\n"
110 "%s" % rendered_schema
111 )
112 elif not output_schema and not schema_check:
113 output += (
114 "\n\n** This action does not have an output_schema. "
115 "Run again with --with-schema to see a suggested schema."
116 )
117
118 if six.PY3:
119 return strutil.unescape(str(output))
120 else:
121 # Assume Python 2
122 return strutil.unescape(str(output)).decode('unicode_escape').encode('utf-8')
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/st2client/st2client/formatters/execution.py b/st2client/st2client/formatters/execution.py
--- a/st2client/st2client/formatters/execution.py
+++ b/st2client/st2client/formatters/execution.py
@@ -119,4 +119,9 @@
return strutil.unescape(str(output))
else:
# Assume Python 2
- return strutil.unescape(str(output)).decode('unicode_escape').encode('utf-8')
+ try:
+ result = strutil.unescape(str(output)).decode('unicode_escape').encode('utf-8')
+ except UnicodeDecodeError:
+ # String contains a value which is not an unicode escape sequence, ignore the error
+ result = strutil.unescape(str(output))
+ return result
| {"golden_diff": "diff --git a/st2client/st2client/formatters/execution.py b/st2client/st2client/formatters/execution.py\n--- a/st2client/st2client/formatters/execution.py\n+++ b/st2client/st2client/formatters/execution.py\n@@ -119,4 +119,9 @@\n return strutil.unescape(str(output))\n else:\n # Assume Python 2\n- return strutil.unescape(str(output)).decode('unicode_escape').encode('utf-8')\n+ try:\n+ result = strutil.unescape(str(output)).decode('unicode_escape').encode('utf-8')\n+ except UnicodeDecodeError:\n+ # String contains a value which is not an unicode escape sequence, ignore the error\n+ result = strutil.unescape(str(output))\n+ return result\n", "issue": "CLI ERROR: 'unicodeescape' codec can't decode bytes in position 238-239: truncated \\UXXXXXXXX escape\n##### SUMMARY\r\n\r\nRunning a command from the vSphere pack that executes a command on a windows guest and returns a string is successful, and I can see it in the st2 GUI, but the command line response indicates a problem handling the data.\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### STACKSTORM VERSION\r\nst2 3.0dev (5377ebb), on Python 2.7.6\r\n\r\n##### OS / ENVIRONMENT / INSTALL METHOD\r\nst2-docker with ST2_IMAGE_TAG=dev\r\n\r\n##### STEPS TO REPRODUCE\r\nRequires a working vSphere environment.\r\nInstall the vSphere pack with guest extensions from my branch:\r\n```\r\n# st2 pack install https://github.com/jeking3/stackstorm-vsphere.git=guestops\r\n```\r\nRun a command against a guest, for example:\r\n```\r\nroot@b9bab223e4fa:/# st2 run vsphere.guest_dir_create vm_id=vm-45 username=Administrator password=******** prefix=issue_ suffix=_test\r\n.\r\nERROR: 'unicodeescape' codec can't decode bytes in position 238-239: truncated \\UXXXXXXXX escape\r\n```\r\n\r\n##### EXPECTED RESULTS\r\nThe action output in the GUI is reasonable:\r\n```\r\n{\r\n \"result\": \"C:\\\\Users\\\\ADMINI~1\\\\AppData\\\\Local\\\\Temp\\\\jking_vmware20_test\",\r\n \"exit_code\": 0,\r\n \"stderr\": \"/opt/stackstorm/st2/lib/python2.7/site-packages/cryptography/hazmat/primitives/constant_time.py:26: CryptographyDeprecationWarning: Support for your Python version is deprecated. The next version of cryptography will remove support. Please upgrade to a 2.7.x release that supports hmac.compare_digest as soon as possible.\\n utils.DeprecatedIn23,\\n\",\r\n \"stdout\": \"\"\r\n}\r\n```\r\nHowever the CLI choked.\r\n\r\n##### ACTUAL RESULTS\r\n```\r\nERROR: 'unicodeescape' codec can't decode bytes in position 238-239: truncated \\UXXXXXXXX escape\r\n```\r\n\n", "before_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nimport ast\nimport logging\nimport struct\n\nimport yaml\n\nfrom st2client import formatters\nfrom st2client.config import get_config\nfrom st2client.utils import jsutil\nfrom st2client.utils import strutil\nfrom st2client.utils.color import DisplayColors\nfrom st2client.utils import schema\nimport six\n\n\nLOG = logging.getLogger(__name__)\n\nPLATFORM_MAXINT = 2 ** (struct.Struct('i').size * 8 - 1) - 1\n\n\ndef _print_bordered(text):\n lines = text.split('\\n')\n width = max(len(s) for s in lines) + 2\n res = ['\\n+' + '-' * width + '+']\n for s in lines:\n res.append('| ' + (s + ' ' * width)[:width - 2] + ' |')\n res.append('+' + '-' * width + '+')\n return '\\n'.join(res)\n\n\nclass ExecutionResult(formatters.Formatter):\n\n @classmethod\n def format(cls, entry, *args, **kwargs):\n attrs = kwargs.get('attributes', [])\n attribute_transform_functions = kwargs.get('attribute_transform_functions', {})\n key = kwargs.get('key', None)\n if key:\n output = jsutil.get_value(entry.result, key)\n else:\n # drop entry to the dict so that jsutil can operate\n entry = vars(entry)\n output = ''\n for attr in attrs:\n value = jsutil.get_value(entry, attr)\n value = strutil.strip_carriage_returns(strutil.unescape(value))\n # TODO: This check is inherently flawed since it will crash st2client\n # if the leading character is objectish start and last character is objectish\n # end but the string isn't supposed to be a object. Try/Except will catch\n # this for now, but this should be improved.\n if (isinstance(value, six.string_types) and len(value) > 0 and\n value[0] in ['{', '['] and value[len(value) - 1] in ['}', ']']):\n try:\n new_value = ast.literal_eval(value)\n except:\n new_value = value\n if type(new_value) in [dict, list]:\n value = new_value\n if type(value) in [dict, list]:\n # 1. To get a nice overhang indent get safe_dump to generate output with\n # the attribute key and then remove the attribute key from the string.\n # 2. Drop the trailing newline\n # 3. Set width to maxint so pyyaml does not split text. 
Anything longer\n # and likely we will see other issues like storage :P.\n formatted_value = yaml.safe_dump({attr: value},\n default_flow_style=False,\n width=PLATFORM_MAXINT,\n indent=2)[len(attr) + 2:-1]\n value = ('\\n' if isinstance(value, dict) else '') + formatted_value\n value = strutil.dedupe_newlines(value)\n\n # transform the value of our attribute so things like 'status'\n # and 'timestamp' are formatted nicely\n transform_function = attribute_transform_functions.get(attr,\n lambda value: value)\n value = transform_function(value=value)\n\n output += ('\\n' if output else '') + '%s: %s' % \\\n (DisplayColors.colorize(attr, DisplayColors.BLUE), value)\n\n output_schema = entry.get('action', {}).get('output_schema')\n schema_check = get_config()['general']['silence_schema_output']\n if not output_schema and kwargs.get('with_schema'):\n rendered_schema = {\n 'output_schema': schema.render_output_schema_from_output(entry['result'])\n }\n\n rendered_schema = yaml.safe_dump(rendered_schema, default_flow_style=False)\n output += '\\n'\n output += _print_bordered(\n \"Based on the action output the following inferred schema was built:\"\n \"\\n\\n\"\n \"%s\" % rendered_schema\n )\n elif not output_schema and not schema_check:\n output += (\n \"\\n\\n** This action does not have an output_schema. \"\n \"Run again with --with-schema to see a suggested schema.\"\n )\n\n if six.PY3:\n return strutil.unescape(str(output))\n else:\n # Assume Python 2\n return strutil.unescape(str(output)).decode('unicode_escape').encode('utf-8')\n", "path": "st2client/st2client/formatters/execution.py"}], "after_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nimport ast\nimport logging\nimport struct\n\nimport yaml\n\nfrom st2client import formatters\nfrom st2client.config import get_config\nfrom st2client.utils import jsutil\nfrom st2client.utils import strutil\nfrom st2client.utils.color import DisplayColors\nfrom st2client.utils import schema\nimport six\n\n\nLOG = logging.getLogger(__name__)\n\nPLATFORM_MAXINT = 2 ** (struct.Struct('i').size * 8 - 1) - 1\n\n\ndef _print_bordered(text):\n lines = text.split('\\n')\n width = max(len(s) for s in lines) + 2\n res = ['\\n+' + '-' * width + '+']\n for s in lines:\n res.append('| ' + (s + ' ' * width)[:width - 2] + ' |')\n res.append('+' + '-' * width + '+')\n return '\\n'.join(res)\n\n\nclass ExecutionResult(formatters.Formatter):\n\n @classmethod\n def format(cls, entry, *args, **kwargs):\n attrs = kwargs.get('attributes', [])\n attribute_transform_functions = kwargs.get('attribute_transform_functions', {})\n key = kwargs.get('key', None)\n if key:\n output = jsutil.get_value(entry.result, key)\n else:\n # drop entry to the dict so that jsutil can operate\n entry = vars(entry)\n output = ''\n for attr in attrs:\n value = jsutil.get_value(entry, attr)\n value = strutil.strip_carriage_returns(strutil.unescape(value))\n # TODO: This check is inherently flawed since it will crash st2client\n # if the leading character is objectish start and last character is objectish\n # end but the string isn't supposed to be a object. Try/Except will catch\n # this for now, but this should be improved.\n if (isinstance(value, six.string_types) and len(value) > 0 and\n value[0] in ['{', '['] and value[len(value) - 1] in ['}', ']']):\n try:\n new_value = ast.literal_eval(value)\n except:\n new_value = value\n if type(new_value) in [dict, list]:\n value = new_value\n if type(value) in [dict, list]:\n # 1. To get a nice overhang indent get safe_dump to generate output with\n # the attribute key and then remove the attribute key from the string.\n # 2. Drop the trailing newline\n # 3. Set width to maxint so pyyaml does not split text. 
Anything longer\n # and likely we will see other issues like storage :P.\n formatted_value = yaml.safe_dump({attr: value},\n default_flow_style=False,\n width=PLATFORM_MAXINT,\n indent=2)[len(attr) + 2:-1]\n value = ('\\n' if isinstance(value, dict) else '') + formatted_value\n value = strutil.dedupe_newlines(value)\n\n # transform the value of our attribute so things like 'status'\n # and 'timestamp' are formatted nicely\n transform_function = attribute_transform_functions.get(attr,\n lambda value: value)\n value = transform_function(value=value)\n\n output += ('\\n' if output else '') + '%s: %s' % \\\n (DisplayColors.colorize(attr, DisplayColors.BLUE), value)\n\n output_schema = entry.get('action', {}).get('output_schema')\n schema_check = get_config()['general']['silence_schema_output']\n if not output_schema and kwargs.get('with_schema'):\n rendered_schema = {\n 'output_schema': schema.render_output_schema_from_output(entry['result'])\n }\n\n rendered_schema = yaml.safe_dump(rendered_schema, default_flow_style=False)\n output += '\\n'\n output += _print_bordered(\n \"Based on the action output the following inferred schema was built:\"\n \"\\n\\n\"\n \"%s\" % rendered_schema\n )\n elif not output_schema and not schema_check:\n output += (\n \"\\n\\n** This action does not have an output_schema. \"\n \"Run again with --with-schema to see a suggested schema.\"\n )\n\n if six.PY3:\n return strutil.unescape(str(output))\n else:\n # Assume Python 2\n try:\n result = strutil.unescape(str(output)).decode('unicode_escape').encode('utf-8')\n except UnicodeDecodeError:\n # String contains a value which is not an unicode escape sequence, ignore the error\n result = strutil.unescape(str(output))\n return result\n", "path": "st2client/st2client/formatters/execution.py"}]} | 2,119 | 178 |
gh_patches_debug_26198 | rasdani/github-patches | git_diff | fossasia__open-event-server-5215 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update populate_db.py to add Paypal sandbox keys.
**Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Update populate_db.py to add Paypal sandbox keys.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `populate_db.py`
Content:
```
1 from app import current_app
2 from app.models import db
3 from app.api.helpers.db import get_or_create # , save_to_db
4 from envparse import env
5
6 # Admin message settings
7 from app.api.helpers.system_mails import MAILS
8 from app.models.message_setting import MessageSettings
9
10 # Event Role-Service Permissions
11 from app.models.role import Role
12 from app.models.service import Service
13 from app.models.permission import Permission
14
15 from app.models.track import Track
16 from app.models.session import Session
17 from app.models.speaker import Speaker
18 from app.models.sponsor import Sponsor
19 from app.models.microlocation import Microlocation
20
21 from app.models.user import ORGANIZER, COORGANIZER, TRACK_ORGANIZER, MODERATOR, ATTENDEE, REGISTRAR
22
23 # Admin Panel Permissions
24 from app.models.panel_permission import PanelPermission
25 from app.models.custom_system_role import CustomSysRole
26
27 from app.models.setting import Setting
28 from app.models.image_size import ImageSizes
29 from app.models.module import Module
30
31 # EventTopic
32 from app.models.event_topic import EventTopic
33
34 # EventType
35 from app.models.event_type import EventType
36
37 # EventLocation
38 from app.models.event_location import EventLocation
39
40 # User Permissions
41 from app.models.user_permission import UserPermission
42 SALES = 'sales'
43
44
45 def create_roles():
46 get_or_create(Role, name=ORGANIZER, title_name='Organizer')
47 get_or_create(Role, name=COORGANIZER, title_name='Co-organizer')
48 get_or_create(Role, name=TRACK_ORGANIZER, title_name='Track Organizer')
49 get_or_create(Role, name=MODERATOR, title_name='Moderator')
50 get_or_create(Role, name=ATTENDEE, title_name='Attendee')
51 get_or_create(Role, name=REGISTRAR, title_name='Registrar')
52
53
54 def create_services():
55 track = Track.get_service_name()
56 session = Session.get_service_name()
57 speaker = Speaker.get_service_name()
58 sponsor = Sponsor.get_service_name()
59 microlocation = Microlocation.get_service_name()
60
61 get_or_create(Service, name=track)
62 get_or_create(Service, name=session)
63 get_or_create(Service, name=speaker)
64 get_or_create(Service, name=sponsor)
65 get_or_create(Service, name=microlocation)
66
67
68 def create_settings():
69 get_or_create(Setting, app_name='Open Event')
70
71 if current_app.config['DEVELOPMENT']:
72 # get the stripe keys from the env file and save it in the settings.
73 env.read_envfile()
74 stripe_secret_key = env('STRIPE_SECRET_KEY', default=None)
75 stripe_publishable_key = env('STRIPE_PUBLISHABLE_KEY', default=None)
76 stripe_client_id = env('STRIPE_CLIENT_ID', default=None)
77 fb_client_id = env('FACEBOOK_CLIENT_ID', default=None)
78 fb_client_secret = env('FACEBOOK_CLIENT_SECRET', default=None)
79 google_client_id = env('GOOGLE_CLIENT_ID', default=None)
80 google_client_secret = env('GOOGLE_CLIENT_SECRET', default=None)
81 tw_consumer_key = env('TWITTER_CONSUMER_KEY', default=None)
82 tw_consumer_secret = env('TWITTER_CONSUMER_SECRET', default=None)
83 in_client_id = env('INSTAGRAM_CLIENT_ID', default=None)
84 in_client_secret = env('INSTAGRAM_CLIENT_SECRET', default=None)
85
86 setting, _ = get_or_create(Setting, app_name='Open Event')
87 setting.stripe_client_id = stripe_client_id
88 setting.stripe_publishable_key = stripe_publishable_key
89 setting.stripe_secret_key = stripe_secret_key
90 setting.fb_client_id = fb_client_id
91 setting.fb_client_secret = fb_client_secret
92 setting.google_client_id = google_client_id
93 setting.google_client_secret = google_client_secret
94 setting.tw_consumer_key = tw_consumer_key
95 setting.tw_consumer_secret = tw_consumer_secret
96 setting.in_client_id = in_client_id
97 setting.in_client_secret = in_client_secret
98 db.session.add(setting)
99 db.session.commit()
100
101
102 def create_event_image_sizes():
103 get_or_create(
104 ImageSizes, type='event-image', full_width=1300,
105 full_height=500, full_aspect=True, full_quality=80,
106 icon_width=75, icon_height=30, icon_aspect=True,
107 icon_quality=80, thumbnail_width=500, thumbnail_height=200,
108 thumbnail_aspect=True, thumbnail_quality=80, logo_width=500,
109 logo_height=200
110 )
111
112
113 def create_speaker_image_sizes():
114 get_or_create(
115 ImageSizes, type='speaker-image', icon_size_width_height=35, icon_size_quality=80,
116 small_size_width_height=50, small_size_quality=80,
117 thumbnail_size_width_height=500, thumbnail_quality=80
118 )
119
120
121 def create_modules():
122 get_or_create(Module, donation_include=False)
123
124
125 def create_event_topics():
126 event_topic = ['Health & Wellness', 'Home & Lifestyle',
127 'Charity & Causes', 'Other', 'Religion & Spirituality',
128 'Community & Culture', 'Government & Politics',
129 'Government & Politics', 'Auto, Boat & Air',
130 'Travel & Outdoor', 'Hobbies & Special Interest',
131 'Sports & Fitness', 'Business & Professional',
132 'Music', 'Seasonal & Holiday',
133 'Film, Media & Entertainment', 'Family & Education',
134 'Science & Technology', 'Performing & Visual Arts',
135 'Food & Drink', 'Family & Education']
136 for topic in event_topic:
137 get_or_create(EventTopic, name=topic)
138
139
140 def create_event_types():
141 event_type = ['Camp, Treat & Retreat', 'Dinner or Gala',
142 'Other', 'Concert or Performance', 'Conference',
143 'Seminar or Talk', 'Convention',
144 'Festival or Fair', 'Tour',
145 'Screening', 'Game or Competition',
146 'Party or Social Gathering', 'Race or Endurance Event',
147 'Meeting or Networking Event', 'Attraction',
148 'Class, Training, or Workshop', 'Appearance or Signing',
149 'Tournament', 'Rally']
150 for type_ in event_type:
151 get_or_create(EventType, name=type_)
152
153
154 def create_event_locations():
155 event_location = ['India', 'Singapore', 'Berlin', 'New York', 'Hong Kong']
156 for loc_ in event_location:
157 get_or_create(EventLocation, name=loc_)
158
159
160 def create_permissions():
161 orgr = Role.query.get(1)
162 coorgr = Role.query.get(2)
163 track_orgr = Role.query.get(3)
164 mod = Role.query.get(4)
165 attend = Role.query.get(5)
166 regist = Role.query.get(6)
167
168 track = Service.query.get(1)
169 session = Service.query.get(2)
170 speaker = Service.query.get(3)
171 sponsor = Service.query.get(4)
172 microlocation = Service.query.get(5)
173
174 # For ORGANIZER
175 # All four permissions set to True
176 services = [track, session, speaker, sponsor, microlocation]
177 roles = [attend, regist]
178 for service in services:
179 perm, _ = get_or_create(Permission, role=orgr, service=service)
180 db.session.add(perm)
181
182 # For COORGANIZER
183 for service in services:
184 perm, _ = get_or_create(Permission, role=coorgr, service=service)
185 perm.can_create, perm.can_delete = False, False
186 db.session.add(perm)
187
188 # For TRACK_ORGANIZER
189 for service in services:
190 perm, _ = get_or_create(Permission, role=track_orgr, service=service)
191 if not service == track:
192 perm.can_create, perm.can_update, perm.can_delete = False, False, False
193 db.session.add(perm)
194
195 # For MODERATOR
196 for service in services:
197 perm, _ = get_or_create(Permission, role=mod, service=service)
198 perm.can_create, perm.can_update, perm.can_delete = False, False, False
199 db.session.add(perm)
200
201 # For ATTENDEE and REGISTRAR
202 services = [track, session, speaker, sponsor, microlocation]
203 roles = [attend, regist]
204 for role in roles:
205 for service in services:
206 perm, _ = get_or_create(Permission, role=role, service=service)
207 perm.can_create, perm.can_update, perm.can_delete = False, False, False
208 db.session.add(perm)
209
210
211 def create_custom_sys_roles():
212 role, _ = get_or_create(CustomSysRole, name='Sales Admin')
213 db.session.add(role)
214 role, _ = get_or_create(CustomSysRole, name='Marketer')
215 db.session.add(role)
216
217
218 def create_panel_permissions():
219 sales_admin = CustomSysRole.query.filter_by(name='Sales Admin').first()
220 perm, _ = get_or_create(PanelPermission, panel_name=SALES, role=sales_admin)
221 db.session.add(perm)
222 marketer = CustomSysRole.query.filter_by(name='Marketer').first()
223 perm, _ = get_or_create(PanelPermission, panel_name=SALES, role=marketer)
224 db.session.add(perm)
225
226
227 def create_user_permissions():
228 # Publish Event
229 user_perm, _ = get_or_create(UserPermission, name='publish_event',
230 description='Publish event (make event live)')
231 user_perm.verified_user = True
232 db.session.add(user_perm)
233
234 # Create Event
235 user_perm, _ = get_or_create(UserPermission, name='create_event',
236 description='Create event')
237 user_perm.verified_user, user_perm.unverified_user = True, False
238 db.session.add(user_perm)
239
240
241 def create_admin_message_settings():
242 default_mails = ["Next Event",
243 "Session Schedule Change",
244 "User email",
245 "Invitation For Papers",
246 "After Event",
247 "Ticket(s) Purchased",
248 "Session Accept or Reject",
249 "Event Published",
250 "Event Export Failed",
251 "Event Exported",
252 "Event Role Invitation",
253 "New Session Proposal"]
254 for mail in MAILS:
255 if mail in default_mails:
256 get_or_create(MessageSettings, action=mail, mail_status=True,
257 notification_status=True, user_control_status=True)
258 else:
259 get_or_create(
260 MessageSettings, action=mail, mail_status=False,
261 notification_status=False, user_control_status=False
262 )
263
264
265 def populate():
266 """
267 Create defined Roles, Services and Permissions.
268 """
269 print('Creating roles...')
270 create_roles()
271 print('Creating services...')
272 create_services()
273 print('Creating permissions...')
274 create_permissions()
275 print('Creating custom system roles...')
276 create_custom_sys_roles()
277 print('Creating admin panel permissions...')
278 create_panel_permissions()
279 print('Creating user permissions...')
280 create_user_permissions()
281 print('Creating settings...')
282 create_settings()
283 print('Creating modules...')
284 create_modules()
285 print('Creating event image size...')
286 create_event_image_sizes()
287 print('Creating speaker image size...')
288 create_speaker_image_sizes()
289 print('Creating Event Topics...')
290 create_event_topics()
291 print('Creating Event Types...')
292 create_event_types()
293 print('Creating Event Locations...')
294 create_event_locations()
295 print('Creating admin message settings...')
296 create_admin_message_settings()
297
298
299 def populate_without_print():
300 """
301 Create defined Roles, Services and Permissions.
302 """
303 create_roles()
304 create_services()
305 create_permissions()
306 create_custom_sys_roles()
307 create_panel_permissions()
308 create_user_permissions()
309 create_settings()
310 create_modules()
311 create_event_image_sizes()
312 create_speaker_image_sizes()
313 create_event_topics()
314 create_event_types()
315 create_event_locations()
316 create_admin_message_settings()
317
318 db.session.commit()
319
320
321 if __name__ == '__main__':
322 with current_app.app_context():
323 populate()
324
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/populate_db.py b/populate_db.py
--- a/populate_db.py
+++ b/populate_db.py
@@ -74,6 +74,8 @@
stripe_secret_key = env('STRIPE_SECRET_KEY', default=None)
stripe_publishable_key = env('STRIPE_PUBLISHABLE_KEY', default=None)
stripe_client_id = env('STRIPE_CLIENT_ID', default=None)
+ paypal_sandbox_client = env('PAYPAL_SANDBOX_CLIENT', default=None)
+ paypal_sandbox_secret = env('PAYPAL_SANDBOX_SECRET', default=None)
fb_client_id = env('FACEBOOK_CLIENT_ID', default=None)
fb_client_secret = env('FACEBOOK_CLIENT_SECRET', default=None)
google_client_id = env('GOOGLE_CLIENT_ID', default=None)
@@ -87,6 +89,8 @@
setting.stripe_client_id = stripe_client_id
setting.stripe_publishable_key = stripe_publishable_key
setting.stripe_secret_key = stripe_secret_key
+ setting.paypal_sandbox_client = paypal_sandbox_client
+ setting.paypal_sandbox_secret = paypal_sandbox_secret
setting.fb_client_id = fb_client_id
setting.fb_client_secret = fb_client_secret
setting.google_client_id = google_client_id
| {"golden_diff": "diff --git a/populate_db.py b/populate_db.py\n--- a/populate_db.py\n+++ b/populate_db.py\n@@ -74,6 +74,8 @@\n stripe_secret_key = env('STRIPE_SECRET_KEY', default=None)\n stripe_publishable_key = env('STRIPE_PUBLISHABLE_KEY', default=None)\n stripe_client_id = env('STRIPE_CLIENT_ID', default=None)\n+ paypal_sandbox_client = env('PAYPAL_SANDBOX_CLIENT', default=None)\n+ paypal_sandbox_secret = env('PAYPAL_SANDBOX_SECRET', default=None)\n fb_client_id = env('FACEBOOK_CLIENT_ID', default=None)\n fb_client_secret = env('FACEBOOK_CLIENT_SECRET', default=None)\n google_client_id = env('GOOGLE_CLIENT_ID', default=None)\n@@ -87,6 +89,8 @@\n setting.stripe_client_id = stripe_client_id\n setting.stripe_publishable_key = stripe_publishable_key\n setting.stripe_secret_key = stripe_secret_key\n+ setting.paypal_sandbox_client = paypal_sandbox_client\n+ setting.paypal_sandbox_secret = paypal_sandbox_secret\n setting.fb_client_id = fb_client_id\n setting.fb_client_secret = fb_client_secret\n setting.google_client_id = google_client_id\n", "issue": "Update populate_db.py to add Paypal sandbox keys.\n**Is your feature request related to a problem? Please describe.**\r\n<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->\r\nUpdate populate_db.py to add Paypal sandbox keys.\r\n\n", "before_files": [{"content": "from app import current_app\nfrom app.models import db\nfrom app.api.helpers.db import get_or_create # , save_to_db\nfrom envparse import env\n\n# Admin message settings\nfrom app.api.helpers.system_mails import MAILS\nfrom app.models.message_setting import MessageSettings\n\n# Event Role-Service Permissions\nfrom app.models.role import Role\nfrom app.models.service import Service\nfrom app.models.permission import Permission\n\nfrom app.models.track import Track\nfrom app.models.session import Session\nfrom app.models.speaker import Speaker\nfrom app.models.sponsor import Sponsor\nfrom app.models.microlocation import Microlocation\n\nfrom app.models.user import ORGANIZER, COORGANIZER, TRACK_ORGANIZER, MODERATOR, ATTENDEE, REGISTRAR\n\n# Admin Panel Permissions\nfrom app.models.panel_permission import PanelPermission\nfrom app.models.custom_system_role import CustomSysRole\n\nfrom app.models.setting import Setting\nfrom app.models.image_size import ImageSizes\nfrom app.models.module import Module\n\n# EventTopic\nfrom app.models.event_topic import EventTopic\n\n# EventType\nfrom app.models.event_type import EventType\n\n# EventLocation\nfrom app.models.event_location import EventLocation\n\n# User Permissions\nfrom app.models.user_permission import UserPermission\nSALES = 'sales'\n\n\ndef create_roles():\n get_or_create(Role, name=ORGANIZER, title_name='Organizer')\n get_or_create(Role, name=COORGANIZER, title_name='Co-organizer')\n get_or_create(Role, name=TRACK_ORGANIZER, title_name='Track Organizer')\n get_or_create(Role, name=MODERATOR, title_name='Moderator')\n get_or_create(Role, name=ATTENDEE, title_name='Attendee')\n get_or_create(Role, name=REGISTRAR, title_name='Registrar')\n\n\ndef create_services():\n track = Track.get_service_name()\n session = Session.get_service_name()\n speaker = Speaker.get_service_name()\n sponsor = Sponsor.get_service_name()\n microlocation = Microlocation.get_service_name()\n\n get_or_create(Service, name=track)\n get_or_create(Service, name=session)\n get_or_create(Service, name=speaker)\n get_or_create(Service, name=sponsor)\n get_or_create(Service, name=microlocation)\n\n\ndef 
create_settings():\n get_or_create(Setting, app_name='Open Event')\n\n if current_app.config['DEVELOPMENT']:\n # get the stripe keys from the env file and save it in the settings.\n env.read_envfile()\n stripe_secret_key = env('STRIPE_SECRET_KEY', default=None)\n stripe_publishable_key = env('STRIPE_PUBLISHABLE_KEY', default=None)\n stripe_client_id = env('STRIPE_CLIENT_ID', default=None)\n fb_client_id = env('FACEBOOK_CLIENT_ID', default=None)\n fb_client_secret = env('FACEBOOK_CLIENT_SECRET', default=None)\n google_client_id = env('GOOGLE_CLIENT_ID', default=None)\n google_client_secret = env('GOOGLE_CLIENT_SECRET', default=None)\n tw_consumer_key = env('TWITTER_CONSUMER_KEY', default=None)\n tw_consumer_secret = env('TWITTER_CONSUMER_SECRET', default=None)\n in_client_id = env('INSTAGRAM_CLIENT_ID', default=None)\n in_client_secret = env('INSTAGRAM_CLIENT_SECRET', default=None)\n\n setting, _ = get_or_create(Setting, app_name='Open Event')\n setting.stripe_client_id = stripe_client_id\n setting.stripe_publishable_key = stripe_publishable_key\n setting.stripe_secret_key = stripe_secret_key\n setting.fb_client_id = fb_client_id\n setting.fb_client_secret = fb_client_secret\n setting.google_client_id = google_client_id\n setting.google_client_secret = google_client_secret\n setting.tw_consumer_key = tw_consumer_key\n setting.tw_consumer_secret = tw_consumer_secret\n setting.in_client_id = in_client_id\n setting.in_client_secret = in_client_secret\n db.session.add(setting)\n db.session.commit()\n\n\ndef create_event_image_sizes():\n get_or_create(\n ImageSizes, type='event-image', full_width=1300,\n full_height=500, full_aspect=True, full_quality=80,\n icon_width=75, icon_height=30, icon_aspect=True,\n icon_quality=80, thumbnail_width=500, thumbnail_height=200,\n thumbnail_aspect=True, thumbnail_quality=80, logo_width=500,\n logo_height=200\n )\n\n\ndef create_speaker_image_sizes():\n get_or_create(\n ImageSizes, type='speaker-image', icon_size_width_height=35, icon_size_quality=80,\n small_size_width_height=50, small_size_quality=80,\n thumbnail_size_width_height=500, thumbnail_quality=80\n )\n\n\ndef create_modules():\n get_or_create(Module, donation_include=False)\n\n\ndef create_event_topics():\n event_topic = ['Health & Wellness', 'Home & Lifestyle',\n 'Charity & Causes', 'Other', 'Religion & Spirituality',\n 'Community & Culture', 'Government & Politics',\n 'Government & Politics', 'Auto, Boat & Air',\n 'Travel & Outdoor', 'Hobbies & Special Interest',\n 'Sports & Fitness', 'Business & Professional',\n 'Music', 'Seasonal & Holiday',\n 'Film, Media & Entertainment', 'Family & Education',\n 'Science & Technology', 'Performing & Visual Arts',\n 'Food & Drink', 'Family & Education']\n for topic in event_topic:\n get_or_create(EventTopic, name=topic)\n\n\ndef create_event_types():\n event_type = ['Camp, Treat & Retreat', 'Dinner or Gala',\n 'Other', 'Concert or Performance', 'Conference',\n 'Seminar or Talk', 'Convention',\n 'Festival or Fair', 'Tour',\n 'Screening', 'Game or Competition',\n 'Party or Social Gathering', 'Race or Endurance Event',\n 'Meeting or Networking Event', 'Attraction',\n 'Class, Training, or Workshop', 'Appearance or Signing',\n 'Tournament', 'Rally']\n for type_ in event_type:\n get_or_create(EventType, name=type_)\n\n\ndef create_event_locations():\n event_location = ['India', 'Singapore', 'Berlin', 'New York', 'Hong Kong']\n for loc_ in event_location:\n get_or_create(EventLocation, name=loc_)\n\n\ndef create_permissions():\n orgr = Role.query.get(1)\n coorgr = 
Role.query.get(2)\n track_orgr = Role.query.get(3)\n mod = Role.query.get(4)\n attend = Role.query.get(5)\n regist = Role.query.get(6)\n\n track = Service.query.get(1)\n session = Service.query.get(2)\n speaker = Service.query.get(3)\n sponsor = Service.query.get(4)\n microlocation = Service.query.get(5)\n\n # For ORGANIZER\n # All four permissions set to True\n services = [track, session, speaker, sponsor, microlocation]\n roles = [attend, regist]\n for service in services:\n perm, _ = get_or_create(Permission, role=orgr, service=service)\n db.session.add(perm)\n\n # For COORGANIZER\n for service in services:\n perm, _ = get_or_create(Permission, role=coorgr, service=service)\n perm.can_create, perm.can_delete = False, False\n db.session.add(perm)\n\n # For TRACK_ORGANIZER\n for service in services:\n perm, _ = get_or_create(Permission, role=track_orgr, service=service)\n if not service == track:\n perm.can_create, perm.can_update, perm.can_delete = False, False, False\n db.session.add(perm)\n\n # For MODERATOR\n for service in services:\n perm, _ = get_or_create(Permission, role=mod, service=service)\n perm.can_create, perm.can_update, perm.can_delete = False, False, False\n db.session.add(perm)\n\n # For ATTENDEE and REGISTRAR\n services = [track, session, speaker, sponsor, microlocation]\n roles = [attend, regist]\n for role in roles:\n for service in services:\n perm, _ = get_or_create(Permission, role=role, service=service)\n perm.can_create, perm.can_update, perm.can_delete = False, False, False\n db.session.add(perm)\n\n\ndef create_custom_sys_roles():\n role, _ = get_or_create(CustomSysRole, name='Sales Admin')\n db.session.add(role)\n role, _ = get_or_create(CustomSysRole, name='Marketer')\n db.session.add(role)\n\n\ndef create_panel_permissions():\n sales_admin = CustomSysRole.query.filter_by(name='Sales Admin').first()\n perm, _ = get_or_create(PanelPermission, panel_name=SALES, role=sales_admin)\n db.session.add(perm)\n marketer = CustomSysRole.query.filter_by(name='Marketer').first()\n perm, _ = get_or_create(PanelPermission, panel_name=SALES, role=marketer)\n db.session.add(perm)\n\n\ndef create_user_permissions():\n # Publish Event\n user_perm, _ = get_or_create(UserPermission, name='publish_event',\n description='Publish event (make event live)')\n user_perm.verified_user = True\n db.session.add(user_perm)\n\n # Create Event\n user_perm, _ = get_or_create(UserPermission, name='create_event',\n description='Create event')\n user_perm.verified_user, user_perm.unverified_user = True, False\n db.session.add(user_perm)\n\n\ndef create_admin_message_settings():\n default_mails = [\"Next Event\",\n \"Session Schedule Change\",\n \"User email\",\n \"Invitation For Papers\",\n \"After Event\",\n \"Ticket(s) Purchased\",\n \"Session Accept or Reject\",\n \"Event Published\",\n \"Event Export Failed\",\n \"Event Exported\",\n \"Event Role Invitation\",\n \"New Session Proposal\"]\n for mail in MAILS:\n if mail in default_mails:\n get_or_create(MessageSettings, action=mail, mail_status=True,\n notification_status=True, user_control_status=True)\n else:\n get_or_create(\n MessageSettings, action=mail, mail_status=False,\n notification_status=False, user_control_status=False\n )\n\n\ndef populate():\n \"\"\"\n Create defined Roles, Services and Permissions.\n \"\"\"\n print('Creating roles...')\n create_roles()\n print('Creating services...')\n create_services()\n print('Creating permissions...')\n create_permissions()\n print('Creating custom system roles...')\n 
create_custom_sys_roles()\n print('Creating admin panel permissions...')\n create_panel_permissions()\n print('Creating user permissions...')\n create_user_permissions()\n print('Creating settings...')\n create_settings()\n print('Creating modules...')\n create_modules()\n print('Creating event image size...')\n create_event_image_sizes()\n print('Creating speaker image size...')\n create_speaker_image_sizes()\n print('Creating Event Topics...')\n create_event_topics()\n print('Creating Event Types...')\n create_event_types()\n print('Creating Event Locations...')\n create_event_locations()\n print('Creating admin message settings...')\n create_admin_message_settings()\n\n\ndef populate_without_print():\n \"\"\"\n Create defined Roles, Services and Permissions.\n \"\"\"\n create_roles()\n create_services()\n create_permissions()\n create_custom_sys_roles()\n create_panel_permissions()\n create_user_permissions()\n create_settings()\n create_modules()\n create_event_image_sizes()\n create_speaker_image_sizes()\n create_event_topics()\n create_event_types()\n create_event_locations()\n create_admin_message_settings()\n\n db.session.commit()\n\n\nif __name__ == '__main__':\n with current_app.app_context():\n populate()\n", "path": "populate_db.py"}], "after_files": [{"content": "from app import current_app\nfrom app.models import db\nfrom app.api.helpers.db import get_or_create # , save_to_db\nfrom envparse import env\n\n# Admin message settings\nfrom app.api.helpers.system_mails import MAILS\nfrom app.models.message_setting import MessageSettings\n\n# Event Role-Service Permissions\nfrom app.models.role import Role\nfrom app.models.service import Service\nfrom app.models.permission import Permission\n\nfrom app.models.track import Track\nfrom app.models.session import Session\nfrom app.models.speaker import Speaker\nfrom app.models.sponsor import Sponsor\nfrom app.models.microlocation import Microlocation\n\nfrom app.models.user import ORGANIZER, COORGANIZER, TRACK_ORGANIZER, MODERATOR, ATTENDEE, REGISTRAR\n\n# Admin Panel Permissions\nfrom app.models.panel_permission import PanelPermission\nfrom app.models.custom_system_role import CustomSysRole\n\nfrom app.models.setting import Setting\nfrom app.models.image_size import ImageSizes\nfrom app.models.module import Module\n\n# EventTopic\nfrom app.models.event_topic import EventTopic\n\n# EventType\nfrom app.models.event_type import EventType\n\n# EventLocation\nfrom app.models.event_location import EventLocation\n\n# User Permissions\nfrom app.models.user_permission import UserPermission\nSALES = 'sales'\n\n\ndef create_roles():\n get_or_create(Role, name=ORGANIZER, title_name='Organizer')\n get_or_create(Role, name=COORGANIZER, title_name='Co-organizer')\n get_or_create(Role, name=TRACK_ORGANIZER, title_name='Track Organizer')\n get_or_create(Role, name=MODERATOR, title_name='Moderator')\n get_or_create(Role, name=ATTENDEE, title_name='Attendee')\n get_or_create(Role, name=REGISTRAR, title_name='Registrar')\n\n\ndef create_services():\n track = Track.get_service_name()\n session = Session.get_service_name()\n speaker = Speaker.get_service_name()\n sponsor = Sponsor.get_service_name()\n microlocation = Microlocation.get_service_name()\n\n get_or_create(Service, name=track)\n get_or_create(Service, name=session)\n get_or_create(Service, name=speaker)\n get_or_create(Service, name=sponsor)\n get_or_create(Service, name=microlocation)\n\n\ndef create_settings():\n get_or_create(Setting, app_name='Open Event')\n\n if 
current_app.config['DEVELOPMENT']:\n # get the stripe keys from the env file and save it in the settings.\n env.read_envfile()\n stripe_secret_key = env('STRIPE_SECRET_KEY', default=None)\n stripe_publishable_key = env('STRIPE_PUBLISHABLE_KEY', default=None)\n stripe_client_id = env('STRIPE_CLIENT_ID', default=None)\n paypal_sandbox_client = env('PAYPAL_SANDBOX_CLIENT', default=None)\n paypal_sandbox_secret = env('PAYPAL_SANDBOX_SECRET', default=None)\n fb_client_id = env('FACEBOOK_CLIENT_ID', default=None)\n fb_client_secret = env('FACEBOOK_CLIENT_SECRET', default=None)\n google_client_id = env('GOOGLE_CLIENT_ID', default=None)\n google_client_secret = env('GOOGLE_CLIENT_SECRET', default=None)\n tw_consumer_key = env('TWITTER_CONSUMER_KEY', default=None)\n tw_consumer_secret = env('TWITTER_CONSUMER_SECRET', default=None)\n in_client_id = env('INSTAGRAM_CLIENT_ID', default=None)\n in_client_secret = env('INSTAGRAM_CLIENT_SECRET', default=None)\n\n setting, _ = get_or_create(Setting, app_name='Open Event')\n setting.stripe_client_id = stripe_client_id\n setting.stripe_publishable_key = stripe_publishable_key\n setting.stripe_secret_key = stripe_secret_key\n setting.paypal_sandbox_client = paypal_sandbox_client\n setting.paypal_sandbox_secret = paypal_sandbox_secret\n setting.fb_client_id = fb_client_id\n setting.fb_client_secret = fb_client_secret\n setting.google_client_id = google_client_id\n setting.google_client_secret = google_client_secret\n setting.tw_consumer_key = tw_consumer_key\n setting.tw_consumer_secret = tw_consumer_secret\n setting.in_client_id = in_client_id\n setting.in_client_secret = in_client_secret\n db.session.add(setting)\n db.session.commit()\n\n\ndef create_event_image_sizes():\n get_or_create(\n ImageSizes, type='event-image', full_width=1300,\n full_height=500, full_aspect=True, full_quality=80,\n icon_width=75, icon_height=30, icon_aspect=True,\n icon_quality=80, thumbnail_width=500, thumbnail_height=200,\n thumbnail_aspect=True, thumbnail_quality=80, logo_width=500,\n logo_height=200\n )\n\n\ndef create_speaker_image_sizes():\n get_or_create(\n ImageSizes, type='speaker-image', icon_size_width_height=35, icon_size_quality=80,\n small_size_width_height=50, small_size_quality=80,\n thumbnail_size_width_height=500, thumbnail_quality=80\n )\n\n\ndef create_modules():\n get_or_create(Module, donation_include=False)\n\n\ndef create_event_topics():\n event_topic = ['Health & Wellness', 'Home & Lifestyle',\n 'Charity & Causes', 'Other', 'Religion & Spirituality',\n 'Community & Culture', 'Government & Politics',\n 'Government & Politics', 'Auto, Boat & Air',\n 'Travel & Outdoor', 'Hobbies & Special Interest',\n 'Sports & Fitness', 'Business & Professional',\n 'Music', 'Seasonal & Holiday',\n 'Film, Media & Entertainment', 'Family & Education',\n 'Science & Technology', 'Performing & Visual Arts',\n 'Food & Drink', 'Family & Education']\n for topic in event_topic:\n get_or_create(EventTopic, name=topic)\n\n\ndef create_event_types():\n event_type = ['Camp, Treat & Retreat', 'Dinner or Gala',\n 'Other', 'Concert or Performance', 'Conference',\n 'Seminar or Talk', 'Convention',\n 'Festival or Fair', 'Tour',\n 'Screening', 'Game or Competition',\n 'Party or Social Gathering', 'Race or Endurance Event',\n 'Meeting or Networking Event', 'Attraction',\n 'Class, Training, or Workshop', 'Appearance or Signing',\n 'Tournament', 'Rally']\n for type_ in event_type:\n get_or_create(EventType, name=type_)\n\n\ndef create_event_locations():\n event_location = ['India', 'Singapore', 
'Berlin', 'New York', 'Hong Kong']\n for loc_ in event_location:\n get_or_create(EventLocation, name=loc_)\n\n\ndef create_permissions():\n orgr = Role.query.get(1)\n coorgr = Role.query.get(2)\n track_orgr = Role.query.get(3)\n mod = Role.query.get(4)\n attend = Role.query.get(5)\n regist = Role.query.get(6)\n\n track = Service.query.get(1)\n session = Service.query.get(2)\n speaker = Service.query.get(3)\n sponsor = Service.query.get(4)\n microlocation = Service.query.get(5)\n\n # For ORGANIZER\n # All four permissions set to True\n services = [track, session, speaker, sponsor, microlocation]\n roles = [attend, regist]\n for service in services:\n perm, _ = get_or_create(Permission, role=orgr, service=service)\n db.session.add(perm)\n\n # For COORGANIZER\n for service in services:\n perm, _ = get_or_create(Permission, role=coorgr, service=service)\n perm.can_create, perm.can_delete = False, False\n db.session.add(perm)\n\n # For TRACK_ORGANIZER\n for service in services:\n perm, _ = get_or_create(Permission, role=track_orgr, service=service)\n if not service == track:\n perm.can_create, perm.can_update, perm.can_delete = False, False, False\n db.session.add(perm)\n\n # For MODERATOR\n for service in services:\n perm, _ = get_or_create(Permission, role=mod, service=service)\n perm.can_create, perm.can_update, perm.can_delete = False, False, False\n db.session.add(perm)\n\n # For ATTENDEE and REGISTRAR\n services = [track, session, speaker, sponsor, microlocation]\n roles = [attend, regist]\n for role in roles:\n for service in services:\n perm, _ = get_or_create(Permission, role=role, service=service)\n perm.can_create, perm.can_update, perm.can_delete = False, False, False\n db.session.add(perm)\n\n\ndef create_custom_sys_roles():\n role, _ = get_or_create(CustomSysRole, name='Sales Admin')\n db.session.add(role)\n role, _ = get_or_create(CustomSysRole, name='Marketer')\n db.session.add(role)\n\n\ndef create_panel_permissions():\n sales_admin = CustomSysRole.query.filter_by(name='Sales Admin').first()\n perm, _ = get_or_create(PanelPermission, panel_name=SALES, role=sales_admin)\n db.session.add(perm)\n marketer = CustomSysRole.query.filter_by(name='Marketer').first()\n perm, _ = get_or_create(PanelPermission, panel_name=SALES, role=marketer)\n db.session.add(perm)\n\n\ndef create_user_permissions():\n # Publish Event\n user_perm, _ = get_or_create(UserPermission, name='publish_event',\n description='Publish event (make event live)')\n user_perm.verified_user = True\n db.session.add(user_perm)\n\n # Create Event\n user_perm, _ = get_or_create(UserPermission, name='create_event',\n description='Create event')\n user_perm.verified_user, user_perm.unverified_user = True, False\n db.session.add(user_perm)\n\n\ndef create_admin_message_settings():\n default_mails = [\"Next Event\",\n \"Session Schedule Change\",\n \"User email\",\n \"Invitation For Papers\",\n \"After Event\",\n \"Ticket(s) Purchased\",\n \"Session Accept or Reject\",\n \"Event Published\",\n \"Event Export Failed\",\n \"Event Exported\",\n \"Event Role Invitation\",\n \"New Session Proposal\"]\n for mail in MAILS:\n if mail in default_mails:\n get_or_create(MessageSettings, action=mail, mail_status=True,\n notification_status=True, user_control_status=True)\n else:\n get_or_create(\n MessageSettings, action=mail, mail_status=False,\n notification_status=False, user_control_status=False\n )\n\n\ndef populate():\n \"\"\"\n Create defined Roles, Services and Permissions.\n \"\"\"\n print('Creating roles...')\n create_roles()\n 
print('Creating services...')\n create_services()\n print('Creating permissions...')\n create_permissions()\n print('Creating custom system roles...')\n create_custom_sys_roles()\n print('Creating admin panel permissions...')\n create_panel_permissions()\n print('Creating user permissions...')\n create_user_permissions()\n print('Creating settings...')\n create_settings()\n print('Creating modules...')\n create_modules()\n print('Creating event image size...')\n create_event_image_sizes()\n print('Creating speaker image size...')\n create_speaker_image_sizes()\n print('Creating Event Topics...')\n create_event_topics()\n print('Creating Event Types...')\n create_event_types()\n print('Creating Event Locations...')\n create_event_locations()\n print('Creating admin message settings...')\n create_admin_message_settings()\n\n\ndef populate_without_print():\n \"\"\"\n Create defined Roles, Services and Permissions.\n \"\"\"\n create_roles()\n create_services()\n create_permissions()\n create_custom_sys_roles()\n create_panel_permissions()\n create_user_permissions()\n create_settings()\n create_modules()\n create_event_image_sizes()\n create_speaker_image_sizes()\n create_event_topics()\n create_event_types()\n create_event_locations()\n create_admin_message_settings()\n\n db.session.commit()\n\n\nif __name__ == '__main__':\n with current_app.app_context():\n populate()\n", "path": "populate_db.py"}]} | 3,757 | 279 |
gh_patches_debug_34635 | rasdani/github-patches | git_diff | pypi__warehouse-9983 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Vulnerability reporting task is not resilient to deleted releases
Currently attempting to publish a vulnerability report for a notification that includes release versions that have been deleted (or never existed) causes an error and prevents the entire report from being stored.
Example: https://sentry.io/share/issue/69362937025d4524be01918a80f830d1/
If the release is not found, we should just skip that version and continue storing the vulnerability report. If *none* of the versions are found, we should error.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/integrations/vulnerabilities/utils.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13
14 from sqlalchemy import func, orm
15 from sqlalchemy.orm.exc import NoResultFound
16
17 from warehouse.integrations import vulnerabilities
18 from warehouse.integrations.vulnerabilities.models import VulnerabilityRecord
19 from warehouse.packaging.models import Project, Release
20
21
22 def _get_project(request, vuln_report: vulnerabilities.VulnerabilityReportRequest):
23 return (
24 request.db.query(Project)
25 .filter(
26 Project.normalized_name == func.normalize_pep426_name(vuln_report.project)
27 )
28 .one()
29 )
30
31
32 def _get_release(request, project: Project, version):
33 return (
34 request.db.query(Release)
35 .filter(Release.project_id == project.id)
36 .filter(Release.version == version)
37 .one()
38 )
39
40
41 def _get_vuln_record(
42 request, vuln_report: vulnerabilities.VulnerabilityReportRequest, origin
43 ):
44 return (
45 request.db.query(VulnerabilityRecord)
46 .filter(VulnerabilityRecord.id == vuln_report.vulnerability_id)
47 .filter(VulnerabilityRecord.source == origin)
48 .options(orm.joinedload(VulnerabilityRecord.releases))
49 .one()
50 )
51
52
53 def _delete_vuln_record(request, vuln_record: VulnerabilityRecord):
54 request.db.delete(vuln_record)
55
56
57 def _add_vuln_record(request, vuln_record: VulnerabilityRecord):
58 request.db.add(vuln_record)
59
60
61 def _analyze_vulnerability(request, vulnerability_report, origin, metrics):
62 try:
63 report = vulnerabilities.VulnerabilityReportRequest.from_api_request(
64 request=vulnerability_report
65 )
66 except vulnerabilities.InvalidVulnerabilityReportRequest as exc:
67 metrics.increment(
68 f"warehouse.vulnerabilities.error.{exc.reason}", tags=[f"origin:{origin}"]
69 )
70 raise
71
72 metrics.increment("warehouse.vulnerabilities.valid", tags=[f"origin:{origin}"])
73
74 try:
75 vulnerability_record = _get_vuln_record(request, report, origin)
76
77 if not report.versions:
78 # No versions indicates the vulnerability is no longer considered
79 # valid, so delete it.
80 _delete_vuln_record(request, vulnerability_record)
81 return
82
83 except NoResultFound:
84 if not report.versions:
85 return
86
87 vulnerability_record = VulnerabilityRecord(
88 id=report.vulnerability_id,
89 source=origin,
90 link=report.advisory_link,
91 aliases=report.aliases,
92 )
93 _add_vuln_record(request, vulnerability_record)
94
95 try:
96 project = _get_project(request, report)
97 except NoResultFound:
98 metrics.increment(
99 "warehouse.vulnerabilities.error.project_not_found",
100 tags=[f"origin:{origin}"],
101 )
102 raise
103
104 for version in report.versions:
105 try:
106 release = _get_release(request, project, version)
107 except NoResultFound:
108 metrics.increment(
109 "warehouse.vulnerabilities.error.release_not_found",
110 tags=[f"origin:{origin}"],
111 )
112 raise
113
114 if release not in vulnerability_record.releases:
115 vulnerability_record.releases.append(release)
116
117 # Unassociate any releases that no longer apply.
118 for release in list(vulnerability_record.releases):
119 if release.version not in report.versions:
120 vulnerability_record.releases.remove(release)
121
122
123 def analyze_vulnerability(request, vulnerability_report, origin, metrics):
124 metrics.increment("warehouse.vulnerabilities.received", tags=[f"origin:{origin}"])
125 try:
126 with metrics.timed(
127 "warehouse.vulnerabilities.analysis", tags=[f"origin:{origin}"]
128 ):
129 _analyze_vulnerability(
130 request=request,
131 vulnerability_report=vulnerability_report,
132 origin=origin,
133 metrics=metrics,
134 )
135
136 metrics.increment(
137 "warehouse.vulnerabilities.processed", tags=[f"origin:{origin}"]
138 )
139 except (vulnerabilities.InvalidVulnerabilityReportRequest, NoResultFound):
140 raise
141 except Exception:
142 metrics.increment(
143 "warehouse.vulnerabilities.error.unknown", tags=[f"origin:{origin}"]
144 )
145 raise
146
147
148 def analyze_vulnerabilities(request, vulnerability_reports, origin, metrics):
149 from warehouse.integrations.vulnerabilities import tasks
150
151 if not isinstance(vulnerability_reports, list):
152 metrics.increment(
153 "warehouse.vulnerabilities.error.format", tags=[f"origin:{origin}"]
154 )
155 raise vulnerabilities.InvalidVulnerabilityReportRequest(
156 "Invalid format: payload is not a list", "format"
157 )
158
159 for vulnerability_report in vulnerability_reports:
160 request.task(tasks.analyze_vulnerability_task).delay(
161 vulnerability_report=vulnerability_report,
162 origin=origin,
163 )
164
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/warehouse/integrations/vulnerabilities/utils.py b/warehouse/integrations/vulnerabilities/utils.py
--- a/warehouse/integrations/vulnerabilities/utils.py
+++ b/warehouse/integrations/vulnerabilities/utils.py
@@ -10,7 +10,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-
+from pyramid.httpexceptions import HTTPBadRequest
from sqlalchemy import func, orm
from sqlalchemy.orm.exc import NoResultFound
@@ -101,19 +101,26 @@
)
raise
+ found_releases = False # by now, we don't have any release found
+
for version in report.versions:
try:
release = _get_release(request, project, version)
+ found_releases = True # at least one release found
except NoResultFound:
metrics.increment(
"warehouse.vulnerabilities.error.release_not_found",
tags=[f"origin:{origin}"],
)
- raise
+ continue # skip that release
if release not in vulnerability_record.releases:
vulnerability_record.releases.append(release)
+ if not found_releases:
+ # no releases found, then raise an exception
+ raise HTTPBadRequest("None of the releases were found")
+
# Unassociate any releases that no longer apply.
for release in list(vulnerability_record.releases):
if release.version not in report.versions:
@@ -136,7 +143,11 @@
metrics.increment(
"warehouse.vulnerabilities.processed", tags=[f"origin:{origin}"]
)
- except (vulnerabilities.InvalidVulnerabilityReportRequest, NoResultFound):
+ except (
+ vulnerabilities.InvalidVulnerabilityReportRequest,
+ NoResultFound,
+ HTTPBadRequest,
+ ):
raise
except Exception:
metrics.increment(
| {"golden_diff": "diff --git a/warehouse/integrations/vulnerabilities/utils.py b/warehouse/integrations/vulnerabilities/utils.py\n--- a/warehouse/integrations/vulnerabilities/utils.py\n+++ b/warehouse/integrations/vulnerabilities/utils.py\n@@ -10,7 +10,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-\n+from pyramid.httpexceptions import HTTPBadRequest\n from sqlalchemy import func, orm\n from sqlalchemy.orm.exc import NoResultFound\n \n@@ -101,19 +101,26 @@\n )\n raise\n \n+ found_releases = False # by now, we don't have any release found\n+\n for version in report.versions:\n try:\n release = _get_release(request, project, version)\n+ found_releases = True # at least one release found\n except NoResultFound:\n metrics.increment(\n \"warehouse.vulnerabilities.error.release_not_found\",\n tags=[f\"origin:{origin}\"],\n )\n- raise\n+ continue # skip that release\n \n if release not in vulnerability_record.releases:\n vulnerability_record.releases.append(release)\n \n+ if not found_releases:\n+ # no releases found, then raise an exception\n+ raise HTTPBadRequest(\"None of the releases were found\")\n+\n # Unassociate any releases that no longer apply.\n for release in list(vulnerability_record.releases):\n if release.version not in report.versions:\n@@ -136,7 +143,11 @@\n metrics.increment(\n \"warehouse.vulnerabilities.processed\", tags=[f\"origin:{origin}\"]\n )\n- except (vulnerabilities.InvalidVulnerabilityReportRequest, NoResultFound):\n+ except (\n+ vulnerabilities.InvalidVulnerabilityReportRequest,\n+ NoResultFound,\n+ HTTPBadRequest,\n+ ):\n raise\n except Exception:\n metrics.increment(\n", "issue": "Vulnerability reporting task is not resilient to deleted releases\nCurrently attempting to publish a vulnerability report for a notification that includes release versions that have been deleted (or never existed) causes an error and prevents the entire report from being stored.\r\n\r\nExample: https://sentry.io/share/issue/69362937025d4524be01918a80f830d1/\r\n\r\nIf the release is not found, we should just skip that version and continue storing the vulnerability report. 
If *none* of the versions are found, we should error.\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom sqlalchemy import func, orm\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom warehouse.integrations import vulnerabilities\nfrom warehouse.integrations.vulnerabilities.models import VulnerabilityRecord\nfrom warehouse.packaging.models import Project, Release\n\n\ndef _get_project(request, vuln_report: vulnerabilities.VulnerabilityReportRequest):\n return (\n request.db.query(Project)\n .filter(\n Project.normalized_name == func.normalize_pep426_name(vuln_report.project)\n )\n .one()\n )\n\n\ndef _get_release(request, project: Project, version):\n return (\n request.db.query(Release)\n .filter(Release.project_id == project.id)\n .filter(Release.version == version)\n .one()\n )\n\n\ndef _get_vuln_record(\n request, vuln_report: vulnerabilities.VulnerabilityReportRequest, origin\n):\n return (\n request.db.query(VulnerabilityRecord)\n .filter(VulnerabilityRecord.id == vuln_report.vulnerability_id)\n .filter(VulnerabilityRecord.source == origin)\n .options(orm.joinedload(VulnerabilityRecord.releases))\n .one()\n )\n\n\ndef _delete_vuln_record(request, vuln_record: VulnerabilityRecord):\n request.db.delete(vuln_record)\n\n\ndef _add_vuln_record(request, vuln_record: VulnerabilityRecord):\n request.db.add(vuln_record)\n\n\ndef _analyze_vulnerability(request, vulnerability_report, origin, metrics):\n try:\n report = vulnerabilities.VulnerabilityReportRequest.from_api_request(\n request=vulnerability_report\n )\n except vulnerabilities.InvalidVulnerabilityReportRequest as exc:\n metrics.increment(\n f\"warehouse.vulnerabilities.error.{exc.reason}\", tags=[f\"origin:{origin}\"]\n )\n raise\n\n metrics.increment(\"warehouse.vulnerabilities.valid\", tags=[f\"origin:{origin}\"])\n\n try:\n vulnerability_record = _get_vuln_record(request, report, origin)\n\n if not report.versions:\n # No versions indicates the vulnerability is no longer considered\n # valid, so delete it.\n _delete_vuln_record(request, vulnerability_record)\n return\n\n except NoResultFound:\n if not report.versions:\n return\n\n vulnerability_record = VulnerabilityRecord(\n id=report.vulnerability_id,\n source=origin,\n link=report.advisory_link,\n aliases=report.aliases,\n )\n _add_vuln_record(request, vulnerability_record)\n\n try:\n project = _get_project(request, report)\n except NoResultFound:\n metrics.increment(\n \"warehouse.vulnerabilities.error.project_not_found\",\n tags=[f\"origin:{origin}\"],\n )\n raise\n\n for version in report.versions:\n try:\n release = _get_release(request, project, version)\n except NoResultFound:\n metrics.increment(\n \"warehouse.vulnerabilities.error.release_not_found\",\n tags=[f\"origin:{origin}\"],\n )\n raise\n\n if release not in vulnerability_record.releases:\n vulnerability_record.releases.append(release)\n\n # Unassociate any releases that no longer apply.\n for release in list(vulnerability_record.releases):\n if release.version not in 
report.versions:\n vulnerability_record.releases.remove(release)\n\n\ndef analyze_vulnerability(request, vulnerability_report, origin, metrics):\n metrics.increment(\"warehouse.vulnerabilities.received\", tags=[f\"origin:{origin}\"])\n try:\n with metrics.timed(\n \"warehouse.vulnerabilities.analysis\", tags=[f\"origin:{origin}\"]\n ):\n _analyze_vulnerability(\n request=request,\n vulnerability_report=vulnerability_report,\n origin=origin,\n metrics=metrics,\n )\n\n metrics.increment(\n \"warehouse.vulnerabilities.processed\", tags=[f\"origin:{origin}\"]\n )\n except (vulnerabilities.InvalidVulnerabilityReportRequest, NoResultFound):\n raise\n except Exception:\n metrics.increment(\n \"warehouse.vulnerabilities.error.unknown\", tags=[f\"origin:{origin}\"]\n )\n raise\n\n\ndef analyze_vulnerabilities(request, vulnerability_reports, origin, metrics):\n from warehouse.integrations.vulnerabilities import tasks\n\n if not isinstance(vulnerability_reports, list):\n metrics.increment(\n \"warehouse.vulnerabilities.error.format\", tags=[f\"origin:{origin}\"]\n )\n raise vulnerabilities.InvalidVulnerabilityReportRequest(\n \"Invalid format: payload is not a list\", \"format\"\n )\n\n for vulnerability_report in vulnerability_reports:\n request.task(tasks.analyze_vulnerability_task).delay(\n vulnerability_report=vulnerability_report,\n origin=origin,\n )\n", "path": "warehouse/integrations/vulnerabilities/utils.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom pyramid.httpexceptions import HTTPBadRequest\nfrom sqlalchemy import func, orm\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom warehouse.integrations import vulnerabilities\nfrom warehouse.integrations.vulnerabilities.models import VulnerabilityRecord\nfrom warehouse.packaging.models import Project, Release\n\n\ndef _get_project(request, vuln_report: vulnerabilities.VulnerabilityReportRequest):\n return (\n request.db.query(Project)\n .filter(\n Project.normalized_name == func.normalize_pep426_name(vuln_report.project)\n )\n .one()\n )\n\n\ndef _get_release(request, project: Project, version):\n return (\n request.db.query(Release)\n .filter(Release.project_id == project.id)\n .filter(Release.version == version)\n .one()\n )\n\n\ndef _get_vuln_record(\n request, vuln_report: vulnerabilities.VulnerabilityReportRequest, origin\n):\n return (\n request.db.query(VulnerabilityRecord)\n .filter(VulnerabilityRecord.id == vuln_report.vulnerability_id)\n .filter(VulnerabilityRecord.source == origin)\n .options(orm.joinedload(VulnerabilityRecord.releases))\n .one()\n )\n\n\ndef _delete_vuln_record(request, vuln_record: VulnerabilityRecord):\n request.db.delete(vuln_record)\n\n\ndef _add_vuln_record(request, vuln_record: VulnerabilityRecord):\n request.db.add(vuln_record)\n\n\ndef _analyze_vulnerability(request, vulnerability_report, origin, metrics):\n try:\n report = vulnerabilities.VulnerabilityReportRequest.from_api_request(\n request=vulnerability_report\n )\n except 
vulnerabilities.InvalidVulnerabilityReportRequest as exc:\n metrics.increment(\n f\"warehouse.vulnerabilities.error.{exc.reason}\", tags=[f\"origin:{origin}\"]\n )\n raise\n\n metrics.increment(\"warehouse.vulnerabilities.valid\", tags=[f\"origin:{origin}\"])\n\n try:\n vulnerability_record = _get_vuln_record(request, report, origin)\n\n if not report.versions:\n # No versions indicates the vulnerability is no longer considered\n # valid, so delete it.\n _delete_vuln_record(request, vulnerability_record)\n return\n\n except NoResultFound:\n if not report.versions:\n return\n\n vulnerability_record = VulnerabilityRecord(\n id=report.vulnerability_id,\n source=origin,\n link=report.advisory_link,\n aliases=report.aliases,\n )\n _add_vuln_record(request, vulnerability_record)\n\n try:\n project = _get_project(request, report)\n except NoResultFound:\n metrics.increment(\n \"warehouse.vulnerabilities.error.project_not_found\",\n tags=[f\"origin:{origin}\"],\n )\n raise\n\n found_releases = False # by now, we don't have any release found\n\n for version in report.versions:\n try:\n release = _get_release(request, project, version)\n found_releases = True # at least one release found\n except NoResultFound:\n metrics.increment(\n \"warehouse.vulnerabilities.error.release_not_found\",\n tags=[f\"origin:{origin}\"],\n )\n continue # skip that release\n\n if release not in vulnerability_record.releases:\n vulnerability_record.releases.append(release)\n\n if not found_releases:\n # no releases found, then raise an exception\n raise HTTPBadRequest(\"None of the releases were found\")\n\n # Unassociate any releases that no longer apply.\n for release in list(vulnerability_record.releases):\n if release.version not in report.versions:\n vulnerability_record.releases.remove(release)\n\n\ndef analyze_vulnerability(request, vulnerability_report, origin, metrics):\n metrics.increment(\"warehouse.vulnerabilities.received\", tags=[f\"origin:{origin}\"])\n try:\n with metrics.timed(\n \"warehouse.vulnerabilities.analysis\", tags=[f\"origin:{origin}\"]\n ):\n _analyze_vulnerability(\n request=request,\n vulnerability_report=vulnerability_report,\n origin=origin,\n metrics=metrics,\n )\n\n metrics.increment(\n \"warehouse.vulnerabilities.processed\", tags=[f\"origin:{origin}\"]\n )\n except (\n vulnerabilities.InvalidVulnerabilityReportRequest,\n NoResultFound,\n HTTPBadRequest,\n ):\n raise\n except Exception:\n metrics.increment(\n \"warehouse.vulnerabilities.error.unknown\", tags=[f\"origin:{origin}\"]\n )\n raise\n\n\ndef analyze_vulnerabilities(request, vulnerability_reports, origin, metrics):\n from warehouse.integrations.vulnerabilities import tasks\n\n if not isinstance(vulnerability_reports, list):\n metrics.increment(\n \"warehouse.vulnerabilities.error.format\", tags=[f\"origin:{origin}\"]\n )\n raise vulnerabilities.InvalidVulnerabilityReportRequest(\n \"Invalid format: payload is not a list\", \"format\"\n )\n\n for vulnerability_report in vulnerability_reports:\n request.task(tasks.analyze_vulnerability_task).delay(\n vulnerability_report=vulnerability_report,\n origin=origin,\n )\n", "path": "warehouse/integrations/vulnerabilities/utils.py"}]} | 1,901 | 418 |
gh_patches_debug_22182 | rasdani/github-patches | git_diff | StackStorm__st2-3260 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Config.schema's are not applied if there are no configs in /opt/stackstorm/configs/
Hi,
This is an odd one: I just built a new 2.2 dev instance to finish up stackstorm-networking_utils#1 and I've found that the following schema for `networking_utils`:
```yaml
isp_db:
description: "Path to ISP GeoIP2 database"
type: "string"
required: true
default: "/opt/geoip2/GeoIP2-ISP.mmdb"
city_db:
description: "Path to City GeoIP2 database"
type: "string"
required: true
default: "/opt/geoip2/GeoLite2-City.mmdb"
```
is only applied if at least one config file exists in `/opt/stackstorm/configs/` (for any pack). Without this I was getting key errors on `isp_db` in my `geoip` action.
--- END ISSUE ---
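To make the expectation concrete: with the schema above and no config file on disk, the loader should still hand the action the schema defaults. A minimal illustration of that expected behaviour (a hypothetical helper expression, not StackStorm's real API):
```python
# Hypothetical illustration: required defaults from the schema should survive even
# when the pack has no config file at all in /opt/stackstorm/configs/.
schema = {
    "isp_db": {"type": "string", "required": True,
               "default": "/opt/geoip2/GeoIP2-ISP.mmdb"},
    "city_db": {"type": "string", "required": True,
                "default": "/opt/geoip2/GeoLite2-City.mmdb"},
}
config = {}  # nothing in /opt/stackstorm/configs/

resolved = {key: config.get(key, spec["default"])
            for key, spec in schema.items() if spec.get("required")}
print(resolved["isp_db"])   # /opt/geoip2/GeoIP2-ISP.mmdb  (no KeyError)
```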
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `st2common/st2common/util/config_loader.py`
Content:
```
1 # Licensed to the StackStorm, Inc ('StackStorm') under one or more
2 # contributor license agreements. See the NOTICE file distributed with
3 # this work for additional information regarding copyright ownership.
4 # The ASF licenses this file to You under the Apache License, Version 2.0
5 # (the "License"); you may not use this file except in compliance with
6 # the License. You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import copy
17
18 import six
19
20 from oslo_config import cfg
21
22 from st2common import log as logging
23 from st2common.persistence.pack import ConfigSchema
24 from st2common.persistence.pack import Config
25 from st2common.content import utils as content_utils
26 from st2common.util import jinja as jinja_utils
27 from st2common.util.templating import render_template_with_system_and_user_context
28 from st2common.util.config_parser import ContentPackConfigParser
29 from st2common.exceptions.db import StackStormDBObjectNotFoundError
30
31 __all__ = [
32 'ContentPackConfigLoader'
33 ]
34
35 LOG = logging.getLogger(__name__)
36
37
38 class ContentPackConfigLoader(object):
39 """
40 Class which loads and resolves all the config values and returns a dictionary of resolved values
41 which can be passed to the resource.
42
43 It loads and resolves values in the following order:
44
45 1. Static values from <pack path>/config.yaml file
46 2. Dynamic and or static values from /opt/stackstorm/configs/<pack name>.yaml file.
47
48 Values are merged from left to right which means values from "<pack name>.yaml" file have
49 precedence and override values from pack local config file.
50 """
51
52 def __init__(self, pack_name, user=None):
53 self.pack_name = pack_name
54 self.user = user or cfg.CONF.system_user.user
55
56 self.pack_path = content_utils.get_pack_base_path(pack_name=pack_name)
57 self._config_parser = ContentPackConfigParser(pack_name=pack_name)
58
59 def get_config(self):
60 result = {}
61
62 # 1. Retrieve values from pack local config.yaml file
63 config = self._config_parser.get_config()
64
65 if config:
66 config = config.config or {}
67 result.update(config)
68
69 # Retrieve corresponding ConfigDB and ConfigSchemaDB object
70 # Note: ConfigSchemaDB is optional right now. If it doesn't exist, we assume every value
71 # is of a type string
72 try:
73 config_db = Config.get_by_pack(value=self.pack_name)
74 except StackStormDBObjectNotFoundError:
75 # Corresponding pack config doesn't exist, return early
76 return result
77
78 try:
79 config_schema_db = ConfigSchema.get_by_pack(value=self.pack_name)
80 except StackStormDBObjectNotFoundError:
81 config_schema_db = None
82
83 # 2. Retrieve values from "global" pack config file (if available) and resolve them if
84 # necessary
85 config = self._get_values_for_config(config_schema_db=config_schema_db,
86 config_db=config_db)
87 result.update(config)
88
89 return result
90
91 def _get_values_for_config(self, config_schema_db, config_db):
92 schema_values = getattr(config_schema_db, 'attributes', {})
93 config_values = getattr(config_db, 'values', {})
94
95 config = copy.deepcopy(config_values)
96
97 # Assign dynamic config values based on the values in the datastore
98 config = self._assign_dynamic_config_values(schema=schema_values, config=config)
99
100 # If config_schema is available we do a second pass and set default values for required
101 # items which values are not provided / available in the config itself
102 config = self._assign_default_values(schema=schema_values, config=config)
103 return config
104
105 def _assign_dynamic_config_values(self, schema, config, parent_keys=None):
106 """
107 Assign dynamic config value for a particular config item if the ite utilizes a Jinja
108 expression for dynamic config values.
109
110 Note: This method mutates config argument in place.
111
112 :rtype: ``dict``
113 """
114 parent_keys = parent_keys or []
115
116 for config_item_key, config_item_value in six.iteritems(config):
117 schema_item = schema.get(config_item_key, {})
118 is_dictionary = isinstance(config_item_value, dict)
119
120 # Inspect nested object properties
121 if is_dictionary:
122 parent_keys += [config_item_key]
123 self._assign_dynamic_config_values(schema=schema_item.get('properties', {}),
124 config=config[config_item_key],
125 parent_keys=parent_keys)
126 else:
127 is_jinja_expression = jinja_utils.is_jinja_expression(value=config_item_value)
128
129 if is_jinja_expression:
130 # Resolve / render the Jinja template expression
131 full_config_item_key = '.'.join(parent_keys + [config_item_key])
132 value = self._get_datastore_value_for_expression(key=full_config_item_key,
133 value=config_item_value,
134 config_schema_item=schema_item)
135
136 config[config_item_key] = value
137 else:
138 # Static value, no resolution needed
139 config[config_item_key] = config_item_value
140
141 return config
142
143 def _assign_default_values(self, schema, config):
144 """
145 Assign default values for particular config if default values are provided in the config
146 schema and a value is not specified in the config.
147
148 Note: This method mutates config argument in place.
149
150 :rtype: ``dict``
151 """
152 for schema_item_key, schema_item in six.iteritems(schema):
153 default_value = schema_item.get('default', None)
154 is_required = schema_item.get('required', False)
155 is_object = schema_item.get('type', None) == 'object'
156 has_properties = schema_item.get('properties', None)
157
158 if is_required and default_value and not config.get(schema_item_key, None):
159 config[schema_item_key] = default_value
160
161 # Inspect nested object properties
162 if is_object and has_properties:
163 if not config.get(schema_item_key, None):
164 config[schema_item_key] = {}
165
166 self._assign_default_values(schema=schema_item['properties'],
167 config=config[schema_item_key])
168
169 return config
170
171 def _get_datastore_value_for_expression(self, key, value, config_schema_item=None):
172 """
173 Retrieve datastore value by first resolving the datastore expression and then retrieving
174 the value from the datastore.
175
176 :param key: Full path to the config item key (e.g. "token" / "auth.settings.token", etc.)
177 """
178 from st2common.services.config import deserialize_key_value
179
180 config_schema_item = config_schema_item or {}
181 secret = config_schema_item.get('secret', False)
182
183 try:
184 value = render_template_with_system_and_user_context(value=value,
185 user=self.user)
186 except Exception as e:
187 # Throw a more user-friendly exception on failed render
188 exc_class = type(e)
189 original_msg = str(e)
190 msg = ('Failed to render dynamic configuration value for key "%s" with value '
191 '"%s" for pack "%s" config: %s ' % (key, value, self.pack_name, original_msg))
192 raise exc_class(msg)
193
194 if value:
195 # Deserialize the value
196 value = deserialize_key_value(value=value, secret=secret)
197 else:
198 value = None
199
200 return value
201
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/st2common/st2common/util/config_loader.py b/st2common/st2common/util/config_loader.py
--- a/st2common/st2common/util/config_loader.py
+++ b/st2common/st2common/util/config_loader.py
@@ -20,6 +20,7 @@
from oslo_config import cfg
from st2common import log as logging
+from st2common.models.db.pack import ConfigDB
from st2common.persistence.pack import ConfigSchema
from st2common.persistence.pack import Config
from st2common.content import utils as content_utils
@@ -72,8 +73,10 @@
try:
config_db = Config.get_by_pack(value=self.pack_name)
except StackStormDBObjectNotFoundError:
- # Corresponding pack config doesn't exist, return early
- return result
+ # Corresponding pack config doesn't exist. We set config_db to an empty config so
+ # that the default values from config schema are still correctly applied even if
+ # pack doesn't contain a config.
+ config_db = ConfigDB(pack=self.pack_name, values={})
try:
config_schema_db = ConfigSchema.get_by_pack(value=self.pack_name)
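In short, the fix stops `get_config()` from returning early when no `/opt/stackstorm/configs/<pack>.yaml` exists and instead passes an empty `ConfigDB` through, so the later `_assign_default_values()` pass can still inject the schema defaults. A compressed, self-contained imitation of the before/after behaviour (stand-in class and simplified defaults loop, not the real StackStorm models):
```python
# Stand-in for st2common's ConfigDB, used only to contrast the old early return
# with the patched fall-through; the defaults loop mirrors _assign_default_values().
class ConfigDBStub:
    def __init__(self, pack, values=None):
        self.pack = pack
        self.values = values or {}

SCHEMA = {"isp_db": {"required": True, "default": "/opt/geoip2/GeoIP2-ISP.mmdb"}}

def get_config(config_db, early_return_when_missing):
    if config_db is None:
        if early_return_when_missing:
            return {}  # old behaviour: schema defaults are never applied
        config_db = ConfigDBStub(pack="networking_utils", values={})  # patched behaviour
    values = dict(config_db.values)
    for key, spec in SCHEMA.items():
        if spec.get("required") and spec.get("default") and not values.get(key):
            values[key] = spec["default"]
    return values

print(get_config(None, early_return_when_missing=True))   # {} -> later KeyError on 'isp_db'
print(get_config(None, early_return_when_missing=False))  # {'isp_db': '/opt/geoip2/GeoIP2-ISP.mmdb'}
```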
| {"golden_diff": "diff --git a/st2common/st2common/util/config_loader.py b/st2common/st2common/util/config_loader.py\n--- a/st2common/st2common/util/config_loader.py\n+++ b/st2common/st2common/util/config_loader.py\n@@ -20,6 +20,7 @@\n from oslo_config import cfg\n \n from st2common import log as logging\n+from st2common.models.db.pack import ConfigDB\n from st2common.persistence.pack import ConfigSchema\n from st2common.persistence.pack import Config\n from st2common.content import utils as content_utils\n@@ -72,8 +73,10 @@\n try:\n config_db = Config.get_by_pack(value=self.pack_name)\n except StackStormDBObjectNotFoundError:\n- # Corresponding pack config doesn't exist, return early\n- return result\n+ # Corresponding pack config doesn't exist. We set config_db to an empty config so\n+ # that the default values from config schema are still correctly applied even if\n+ # pack doesn't contain a config.\n+ config_db = ConfigDB(pack=self.pack_name, values={})\n \n try:\n config_schema_db = ConfigSchema.get_by_pack(value=self.pack_name)\n", "issue": "Config.schema's are not applied if there are no configs in /opt/stackstorm/configs/\nHi,\r\n\r\nThis a odd one, just built a new 2.2 dev instance to finish up stackstorm-networking_utils#1 and I've found that the following schema for `networking_utils`:\r\n\r\n```yaml\r\nisp_db:\r\n description: \"Path to ISP GeoIP2 database\"\r\n type: \"string\"\r\n required: true\r\n default: \"/opt/geoip2/GeoIP2-ISP.mmdb\"\r\ncity_db:\r\n description: \"Path to City GeoIP2 database\"\r\n type: \"string\"\r\n required: true\r\n default: \"/opt/geoip2/GeoLite2-City.mmdb\"\r\n```\r\n\r\nIs only applied if at least 1 config file exists in `/opt/stackstorm/configs/` (for any pack). Without this I was key errors on `isp_db` in my `geoip` action.\n", "before_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport copy\n\nimport six\n\nfrom oslo_config import cfg\n\nfrom st2common import log as logging\nfrom st2common.persistence.pack import ConfigSchema\nfrom st2common.persistence.pack import Config\nfrom st2common.content import utils as content_utils\nfrom st2common.util import jinja as jinja_utils\nfrom st2common.util.templating import render_template_with_system_and_user_context\nfrom st2common.util.config_parser import ContentPackConfigParser\nfrom st2common.exceptions.db import StackStormDBObjectNotFoundError\n\n__all__ = [\n 'ContentPackConfigLoader'\n]\n\nLOG = logging.getLogger(__name__)\n\n\nclass ContentPackConfigLoader(object):\n \"\"\"\n Class which loads and resolves all the config values and returns a dictionary of resolved values\n which can be passed to the resource.\n\n It loads and resolves values in the following order:\n\n 1. Static values from <pack path>/config.yaml file\n 2. 
Dynamic and or static values from /opt/stackstorm/configs/<pack name>.yaml file.\n\n Values are merged from left to right which means values from \"<pack name>.yaml\" file have\n precedence and override values from pack local config file.\n \"\"\"\n\n def __init__(self, pack_name, user=None):\n self.pack_name = pack_name\n self.user = user or cfg.CONF.system_user.user\n\n self.pack_path = content_utils.get_pack_base_path(pack_name=pack_name)\n self._config_parser = ContentPackConfigParser(pack_name=pack_name)\n\n def get_config(self):\n result = {}\n\n # 1. Retrieve values from pack local config.yaml file\n config = self._config_parser.get_config()\n\n if config:\n config = config.config or {}\n result.update(config)\n\n # Retrieve corresponding ConfigDB and ConfigSchemaDB object\n # Note: ConfigSchemaDB is optional right now. If it doesn't exist, we assume every value\n # is of a type string\n try:\n config_db = Config.get_by_pack(value=self.pack_name)\n except StackStormDBObjectNotFoundError:\n # Corresponding pack config doesn't exist, return early\n return result\n\n try:\n config_schema_db = ConfigSchema.get_by_pack(value=self.pack_name)\n except StackStormDBObjectNotFoundError:\n config_schema_db = None\n\n # 2. Retrieve values from \"global\" pack config file (if available) and resolve them if\n # necessary\n config = self._get_values_for_config(config_schema_db=config_schema_db,\n config_db=config_db)\n result.update(config)\n\n return result\n\n def _get_values_for_config(self, config_schema_db, config_db):\n schema_values = getattr(config_schema_db, 'attributes', {})\n config_values = getattr(config_db, 'values', {})\n\n config = copy.deepcopy(config_values)\n\n # Assign dynamic config values based on the values in the datastore\n config = self._assign_dynamic_config_values(schema=schema_values, config=config)\n\n # If config_schema is available we do a second pass and set default values for required\n # items which values are not provided / available in the config itself\n config = self._assign_default_values(schema=schema_values, config=config)\n return config\n\n def _assign_dynamic_config_values(self, schema, config, parent_keys=None):\n \"\"\"\n Assign dynamic config value for a particular config item if the ite utilizes a Jinja\n expression for dynamic config values.\n\n Note: This method mutates config argument in place.\n\n :rtype: ``dict``\n \"\"\"\n parent_keys = parent_keys or []\n\n for config_item_key, config_item_value in six.iteritems(config):\n schema_item = schema.get(config_item_key, {})\n is_dictionary = isinstance(config_item_value, dict)\n\n # Inspect nested object properties\n if is_dictionary:\n parent_keys += [config_item_key]\n self._assign_dynamic_config_values(schema=schema_item.get('properties', {}),\n config=config[config_item_key],\n parent_keys=parent_keys)\n else:\n is_jinja_expression = jinja_utils.is_jinja_expression(value=config_item_value)\n\n if is_jinja_expression:\n # Resolve / render the Jinja template expression\n full_config_item_key = '.'.join(parent_keys + [config_item_key])\n value = self._get_datastore_value_for_expression(key=full_config_item_key,\n value=config_item_value,\n config_schema_item=schema_item)\n\n config[config_item_key] = value\n else:\n # Static value, no resolution needed\n config[config_item_key] = config_item_value\n\n return config\n\n def _assign_default_values(self, schema, config):\n \"\"\"\n Assign default values for particular config if default values are provided in the config\n schema and a value is not 
specified in the config.\n\n Note: This method mutates config argument in place.\n\n :rtype: ``dict``\n \"\"\"\n for schema_item_key, schema_item in six.iteritems(schema):\n default_value = schema_item.get('default', None)\n is_required = schema_item.get('required', False)\n is_object = schema_item.get('type', None) == 'object'\n has_properties = schema_item.get('properties', None)\n\n if is_required and default_value and not config.get(schema_item_key, None):\n config[schema_item_key] = default_value\n\n # Inspect nested object properties\n if is_object and has_properties:\n if not config.get(schema_item_key, None):\n config[schema_item_key] = {}\n\n self._assign_default_values(schema=schema_item['properties'],\n config=config[schema_item_key])\n\n return config\n\n def _get_datastore_value_for_expression(self, key, value, config_schema_item=None):\n \"\"\"\n Retrieve datastore value by first resolving the datastore expression and then retrieving\n the value from the datastore.\n\n :param key: Full path to the config item key (e.g. \"token\" / \"auth.settings.token\", etc.)\n \"\"\"\n from st2common.services.config import deserialize_key_value\n\n config_schema_item = config_schema_item or {}\n secret = config_schema_item.get('secret', False)\n\n try:\n value = render_template_with_system_and_user_context(value=value,\n user=self.user)\n except Exception as e:\n # Throw a more user-friendly exception on failed render\n exc_class = type(e)\n original_msg = str(e)\n msg = ('Failed to render dynamic configuration value for key \"%s\" with value '\n '\"%s\" for pack \"%s\" config: %s ' % (key, value, self.pack_name, original_msg))\n raise exc_class(msg)\n\n if value:\n # Deserialize the value\n value = deserialize_key_value(value=value, secret=secret)\n else:\n value = None\n\n return value\n", "path": "st2common/st2common/util/config_loader.py"}], "after_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport copy\n\nimport six\n\nfrom oslo_config import cfg\n\nfrom st2common import log as logging\nfrom st2common.models.db.pack import ConfigDB\nfrom st2common.persistence.pack import ConfigSchema\nfrom st2common.persistence.pack import Config\nfrom st2common.content import utils as content_utils\nfrom st2common.util import jinja as jinja_utils\nfrom st2common.util.templating import render_template_with_system_and_user_context\nfrom st2common.util.config_parser import ContentPackConfigParser\nfrom st2common.exceptions.db import StackStormDBObjectNotFoundError\n\n__all__ = [\n 'ContentPackConfigLoader'\n]\n\nLOG = logging.getLogger(__name__)\n\n\nclass ContentPackConfigLoader(object):\n \"\"\"\n Class which loads and resolves all the config values and returns a dictionary of resolved values\n which can be passed to the resource.\n\n It loads and resolves values in the following order:\n\n 1. Static values from <pack path>/config.yaml file\n 2. Dynamic and or static values from /opt/stackstorm/configs/<pack name>.yaml file.\n\n Values are merged from left to right which means values from \"<pack name>.yaml\" file have\n precedence and override values from pack local config file.\n \"\"\"\n\n def __init__(self, pack_name, user=None):\n self.pack_name = pack_name\n self.user = user or cfg.CONF.system_user.user\n\n self.pack_path = content_utils.get_pack_base_path(pack_name=pack_name)\n self._config_parser = ContentPackConfigParser(pack_name=pack_name)\n\n def get_config(self):\n result = {}\n\n # 1. Retrieve values from pack local config.yaml file\n config = self._config_parser.get_config()\n\n if config:\n config = config.config or {}\n result.update(config)\n\n # Retrieve corresponding ConfigDB and ConfigSchemaDB object\n # Note: ConfigSchemaDB is optional right now. If it doesn't exist, we assume every value\n # is of a type string\n try:\n config_db = Config.get_by_pack(value=self.pack_name)\n except StackStormDBObjectNotFoundError:\n # Corresponding pack config doesn't exist. We set config_db to an empty config so\n # that the default values from config schema are still correctly applied even if\n # pack doesn't contain a config.\n config_db = ConfigDB(pack=self.pack_name, values={})\n\n try:\n config_schema_db = ConfigSchema.get_by_pack(value=self.pack_name)\n except StackStormDBObjectNotFoundError:\n config_schema_db = None\n\n # 2. 
Retrieve values from \"global\" pack config file (if available) and resolve them if\n # necessary\n config = self._get_values_for_config(config_schema_db=config_schema_db,\n config_db=config_db)\n result.update(config)\n\n return result\n\n def _get_values_for_config(self, config_schema_db, config_db):\n schema_values = getattr(config_schema_db, 'attributes', {})\n config_values = getattr(config_db, 'values', {})\n\n config = copy.deepcopy(config_values)\n\n # Assign dynamic config values based on the values in the datastore\n config = self._assign_dynamic_config_values(schema=schema_values, config=config)\n\n # If config_schema is available we do a second pass and set default values for required\n # items which values are not provided / available in the config itself\n config = self._assign_default_values(schema=schema_values, config=config)\n return config\n\n def _assign_dynamic_config_values(self, schema, config, parent_keys=None):\n \"\"\"\n Assign dynamic config value for a particular config item if the ite utilizes a Jinja\n expression for dynamic config values.\n\n Note: This method mutates config argument in place.\n\n :rtype: ``dict``\n \"\"\"\n parent_keys = parent_keys or []\n\n for config_item_key, config_item_value in six.iteritems(config):\n schema_item = schema.get(config_item_key, {})\n is_dictionary = isinstance(config_item_value, dict)\n\n # Inspect nested object properties\n if is_dictionary:\n parent_keys += [config_item_key]\n self._assign_dynamic_config_values(schema=schema_item.get('properties', {}),\n config=config[config_item_key],\n parent_keys=parent_keys)\n else:\n is_jinja_expression = jinja_utils.is_jinja_expression(value=config_item_value)\n\n if is_jinja_expression:\n # Resolve / render the Jinja template expression\n full_config_item_key = '.'.join(parent_keys + [config_item_key])\n value = self._get_datastore_value_for_expression(key=full_config_item_key,\n value=config_item_value,\n config_schema_item=schema_item)\n\n config[config_item_key] = value\n else:\n # Static value, no resolution needed\n config[config_item_key] = config_item_value\n\n return config\n\n def _assign_default_values(self, schema, config):\n \"\"\"\n Assign default values for particular config if default values are provided in the config\n schema and a value is not specified in the config.\n\n Note: This method mutates config argument in place.\n\n :rtype: ``dict``\n \"\"\"\n for schema_item_key, schema_item in six.iteritems(schema):\n default_value = schema_item.get('default', None)\n is_required = schema_item.get('required', False)\n is_object = schema_item.get('type', None) == 'object'\n has_properties = schema_item.get('properties', None)\n\n if is_required and default_value and not config.get(schema_item_key, None):\n config[schema_item_key] = default_value\n\n # Inspect nested object properties\n if is_object and has_properties:\n if not config.get(schema_item_key, None):\n config[schema_item_key] = {}\n\n self._assign_default_values(schema=schema_item['properties'],\n config=config[schema_item_key])\n\n return config\n\n def _get_datastore_value_for_expression(self, key, value, config_schema_item=None):\n \"\"\"\n Retrieve datastore value by first resolving the datastore expression and then retrieving\n the value from the datastore.\n\n :param key: Full path to the config item key (e.g. 
\"token\" / \"auth.settings.token\", etc.)\n \"\"\"\n from st2common.services.config import deserialize_key_value\n\n config_schema_item = config_schema_item or {}\n secret = config_schema_item.get('secret', False)\n\n try:\n value = render_template_with_system_and_user_context(value=value,\n user=self.user)\n except Exception as e:\n # Throw a more user-friendly exception on failed render\n exc_class = type(e)\n original_msg = str(e)\n msg = ('Failed to render dynamic configuration value for key \"%s\" with value '\n '\"%s\" for pack \"%s\" config: %s ' % (key, value, self.pack_name, original_msg))\n raise exc_class(msg)\n\n if value:\n # Deserialize the value\n value = deserialize_key_value(value=value, secret=secret)\n else:\n value = None\n\n return value\n", "path": "st2common/st2common/util/config_loader.py"}]} | 2,594 | 258 |
gh_patches_debug_30029 | rasdani/github-patches | git_diff | vllm-project__vllm-2750 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Minor] Fix test_cache.py CI test failure
While working on https://github.com/vllm-project/vllm/pull/2677, I noticed there was a test failure that came from a race condition caused by merging https://github.com/vllm-project/vllm/pull/2279 and https://github.com/vllm-project/vllm/pull/2616 "at the same time".
--- END ISSUE ---
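The helper that the eventual fix touches, `create_kv_caches_with_random` in `vllm/utils.py`, is listed in full below. A minimal smoke-test style call — hypothetical argument values, assuming vLLM and PyTorch are installed, running on CPU so no GPU is needed — looks like this:
```python
# Hypothetical smoke test of the helper defined in vllm/utils.py below; the
# argument values are arbitrary and chosen so the call works without a GPU.
from vllm.utils import create_kv_caches_with_random  # assumes vLLM is installed

key_caches, value_caches = create_kv_caches_with_random(
    num_blocks=2, block_size=16, num_layers=1, num_heads=4, head_size=64,
    cache_dtype="float", model_dtype="float", seed=0, device="cpu")

print(len(key_caches), len(value_caches))  # 1 1
print(key_caches[0].shape)                 # torch.Size([2, 4, 16, 16, 4])
print(value_caches[0].shape)               # torch.Size([2, 4, 64, 16])
```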
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vllm/utils.py`
Content:
```
1 import enum
2 import os
3 import socket
4 import subprocess
5 import uuid
6 from platform import uname
7 from typing import List, Tuple, Union
8 from packaging.version import parse, Version
9
10 import psutil
11 import torch
12 import asyncio
13 from functools import partial
14 from typing import (
15 Awaitable,
16 Callable,
17 TypeVar,
18 )
19 from collections import OrderedDict
20 from typing import Any, Hashable, Optional
21
22 from vllm.logger import init_logger
23
24 T = TypeVar("T")
25 logger = init_logger(__name__)
26
27 STR_DTYPE_TO_TORCH_DTYPE = {
28 "half": torch.half,
29 "bfloat16": torch.bfloat16,
30 "float": torch.float,
31 "fp8_e5m2": torch.uint8,
32 }
33
34
35 class Device(enum.Enum):
36 GPU = enum.auto()
37 CPU = enum.auto()
38
39
40 class Counter:
41
42 def __init__(self, start: int = 0) -> None:
43 self.counter = start
44
45 def __next__(self) -> int:
46 i = self.counter
47 self.counter += 1
48 return i
49
50 def reset(self) -> None:
51 self.counter = 0
52
53
54 class LRUCache:
55
56 def __init__(self, capacity: int):
57 self.cache = OrderedDict()
58 self.capacity = capacity
59
60 def __contains__(self, key: Hashable) -> bool:
61 return key in self.cache
62
63 def __len__(self) -> int:
64 return len(self.cache)
65
66 def __getitem__(self, key: Hashable) -> Any:
67 return self.get(key)
68
69 def __setitem__(self, key: Hashable, value: Any) -> None:
70 self.put(key, value)
71
72 def __delitem__(self, key: Hashable) -> None:
73 self.pop(key)
74
75 def touch(self, key: Hashable) -> None:
76 self.cache.move_to_end(key)
77
78 def get(self, key: Hashable, default_value: Optional[Any] = None) -> int:
79 if key in self.cache:
80 value = self.cache[key]
81 self.cache.move_to_end(key)
82 else:
83 value = default_value
84 return value
85
86 def put(self, key: Hashable, value: Any) -> None:
87 self.cache[key] = value
88 self.cache.move_to_end(key)
89 self._remove_old_if_needed()
90
91 def _on_remove(self, key: Hashable, value: Any):
92 pass
93
94 def remove_oldest(self):
95 if not self.cache:
96 return
97 key, value = self.cache.popitem(last=False)
98 self._on_remove(key, value)
99
100 def _remove_old_if_needed(self) -> None:
101 while len(self.cache) > self.capacity:
102 self.remove_oldest()
103
104 def pop(self, key: int, default_value: Optional[Any] = None) -> Any:
105 run_on_remove = key in self.cache
106 value = self.cache.pop(key, default_value)
107 if run_on_remove:
108 self._on_remove(key, value)
109 return value
110
111 def clear(self):
112 while len(self.cache) > 0:
113 self.remove_oldest()
114 self.cache.clear()
115
116
117 def is_hip() -> bool:
118 return torch.version.hip is not None
119
120
121 def get_max_shared_memory_bytes(gpu: int = 0) -> int:
122 """Returns the maximum shared memory per thread block in bytes."""
123 # NOTE: This import statement should be executed lazily since
124 # the Neuron-X backend does not have the `cuda_utils` module.
125 from vllm._C import cuda_utils
126
127 max_shared_mem = cuda_utils.get_max_shared_memory_per_block_device_attribute(
128 gpu)
129 # value 0 will cause MAX_SEQ_LEN become negative and test_attention.py will fail
130 assert max_shared_mem > 0, "max_shared_mem can not be zero"
131 return int(max_shared_mem)
132
133
134 def get_cpu_memory() -> int:
135 """Returns the total CPU memory of the node in bytes."""
136 return psutil.virtual_memory().total
137
138
139 def random_uuid() -> str:
140 return str(uuid.uuid4().hex)
141
142
143 def in_wsl() -> bool:
144 # Reference: https://github.com/microsoft/WSL/issues/4071
145 return "microsoft" in " ".join(uname()).lower()
146
147
148 def make_async(func: Callable[..., T]) -> Callable[..., Awaitable[T]]:
149 """Take a blocking function, and run it on in an executor thread.
150
151 This function prevents the blocking function from blocking the
152 asyncio event loop.
153 The code in this function needs to be thread safe.
154 """
155
156 def _async_wrapper(*args, **kwargs) -> asyncio.Future:
157 loop = asyncio.get_event_loop()
158 p_func = partial(func, *args, **kwargs)
159 return loop.run_in_executor(executor=None, func=p_func)
160
161 return _async_wrapper
162
163
164 def get_ip() -> str:
165 s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
166 s.connect(("8.8.8.8", 80)) # Doesn't need to be reachable
167 return s.getsockname()[0]
168
169
170 def get_distributed_init_method(ip: str, port: int) -> str:
171 return f"tcp://{ip}:{port}"
172
173
174 def get_open_port() -> int:
175 with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
176 s.bind(("", 0))
177 return s.getsockname()[1]
178
179
180 def set_cuda_visible_devices(device_ids: List[int]) -> None:
181 os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(map(str, device_ids))
182
183
184 def get_nvcc_cuda_version() -> Version:
185 cuda_home = os.environ.get('CUDA_HOME')
186 if not cuda_home:
187 cuda_home = '/usr/local/cuda'
188 logger.info(
189 f'CUDA_HOME is not found in the environment. Using {cuda_home} as CUDA_HOME.'
190 )
191 nvcc_output = subprocess.check_output([cuda_home + "/bin/nvcc", "-V"],
192 universal_newlines=True)
193 output = nvcc_output.split()
194 release_idx = output.index("release") + 1
195 nvcc_cuda_version = parse(output[release_idx].split(",")[0])
196 return nvcc_cuda_version
197
198
199 def _generate_random_fp8_e5m2(
200 tensor: torch.tensor,
201 low: float,
202 high: float,
203 ) -> None:
204 # NOTE(zhaoyang): Due to NaN and Inf representation for fp8 data type,
205 # it may occur Inf or NaN if we directly use torch.randint
206 # to generate random data for fp8 data.
207 # For example, s.11111.00 in fp8e5m2 format repesents Inf.
208 # | E4M3 | E5M2
209 #-----|-------------|-------------------
210 # Inf | N/A | s.11111.00
211 # NaN | s.1111.111 | s.11111.{01,10,11}
212 from vllm._C import cache_ops
213 tensor_tmp = torch.empty_like(tensor, dtype=torch.float16)
214 tensor_tmp.uniform_(low, high)
215 cache_ops.convert_fp8_e5m2(tensor_tmp, tensor)
216 del tensor_tmp
217
218
219 def create_kv_caches_with_random(
220 num_blocks: int,
221 block_size: int,
222 num_layers: int,
223 num_heads: int,
224 head_size: int,
225 cache_dtype: Optional[Union[str, torch.dtype]],
226 model_dtype: Optional[Union[str, torch.dtype]] = None,
227 seed: Optional[int] = 0,
228 device: Optional[str] = "cuda",
229 ) -> Tuple[List[torch.Tensor], List[torch.Tensor]]:
230 torch.random.manual_seed(seed)
231 if torch.cuda.is_available():
232 torch.cuda.manual_seed(seed)
233
234 if isinstance(cache_dtype, str):
235 if cache_dtype == "auto":
236 if isinstance(model_dtype, str):
237 torch_dtype = STR_DTYPE_TO_TORCH_DTYPE[model_dtype]
238 elif isinstance(model_dtype, torch.dtype):
239 torch_dtype = model_dtype
240 else:
241 raise ValueError(f"Invalid model dtype: {model_dtype}")
242 elif cache_dtype in ["half", "bfloat16", "float"]:
243 torch_dtype = STR_DTYPE_TO_TORCH_DTYPE[cache_dtype]
244 elif cache_dtype == "fp8_e5m2":
245 torch_dtype = torch.uint8
246 else:
247 raise ValueError(f"Invalid kv cache dtype: {cache_dtype}")
248 elif isinstance(cache_dtype, torch.dtype):
249 torch_dtype = cache_dtype
250 else:
251 raise ValueError(f"Invalid kv cache dtype: {cache_dtype}")
252
253 scale = head_size**-0.5
254 x = 16 // torch.tensor([], dtype=torch_dtype).element_size()
255 key_cache_shape = (num_blocks, num_heads, head_size // x, block_size, x)
256 key_caches = []
257 for _ in range(num_layers):
258 key_cache = torch.empty(size=key_cache_shape,
259 dtype=torch_dtype,
260 device=device)
261 if cache_dtype in ["auto", "half", "bfloat16", "float"]:
262 key_cache.uniform_(-scale, scale)
263 elif cache_dtype == 'fp8_e5m2':
264 _generate_random_fp8_e5m2(key_cache, -scale, scale)
265 key_caches.append(key_cache)
266
267 value_cache_shape = (num_blocks, num_heads, head_size, block_size)
268 value_caches = []
269 for _ in range(num_layers):
270 value_cache = torch.empty(size=value_cache_shape,
271 dtype=torch_dtype,
272 device=device)
273 if cache_dtype in ["auto", "half", "bfloat16", "float"]:
274 value_cache.uniform_(-scale, scale)
275 elif cache_dtype == 'fp8_e5m2':
276 _generate_random_fp8_e5m2(value_cache, -scale, scale)
277 value_caches.append(value_cache)
278 return key_caches, value_caches
279
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vllm/utils.py b/vllm/utils.py
--- a/vllm/utils.py
+++ b/vllm/utils.py
@@ -258,10 +258,13 @@
key_cache = torch.empty(size=key_cache_shape,
dtype=torch_dtype,
device=device)
- if cache_dtype in ["auto", "half", "bfloat16", "float"]:
- key_cache.uniform_(-scale, scale)
- elif cache_dtype == 'fp8_e5m2':
+ if cache_dtype == 'fp8_e5m2':
_generate_random_fp8_e5m2(key_cache, -scale, scale)
+ elif torch_dtype in [torch.half, torch.bfloat16, torch.float]:
+ key_cache.uniform_(-scale, scale)
+ else:
+ raise ValueError(
+ f"Does not support key cache of type {cache_dtype}")
key_caches.append(key_cache)
value_cache_shape = (num_blocks, num_heads, head_size, block_size)
@@ -270,9 +273,12 @@
value_cache = torch.empty(size=value_cache_shape,
dtype=torch_dtype,
device=device)
- if cache_dtype in ["auto", "half", "bfloat16", "float"]:
- value_cache.uniform_(-scale, scale)
- elif cache_dtype == 'fp8_e5m2':
+ if cache_dtype == 'fp8_e5m2':
_generate_random_fp8_e5m2(value_cache, -scale, scale)
+ elif torch_dtype in [torch.half, torch.bfloat16, torch.float]:
+ value_cache.uniform_(-scale, scale)
+ else:
+ raise ValueError(
+ f"Does not support value cache of type {cache_dtype}")
value_caches.append(value_cache)
return key_caches, value_caches
| {"golden_diff": "diff --git a/vllm/utils.py b/vllm/utils.py\n--- a/vllm/utils.py\n+++ b/vllm/utils.py\n@@ -258,10 +258,13 @@\n key_cache = torch.empty(size=key_cache_shape,\n dtype=torch_dtype,\n device=device)\n- if cache_dtype in [\"auto\", \"half\", \"bfloat16\", \"float\"]:\n- key_cache.uniform_(-scale, scale)\n- elif cache_dtype == 'fp8_e5m2':\n+ if cache_dtype == 'fp8_e5m2':\n _generate_random_fp8_e5m2(key_cache, -scale, scale)\n+ elif torch_dtype in [torch.half, torch.bfloat16, torch.float]:\n+ key_cache.uniform_(-scale, scale)\n+ else:\n+ raise ValueError(\n+ f\"Does not support key cache of type {cache_dtype}\")\n key_caches.append(key_cache)\n \n value_cache_shape = (num_blocks, num_heads, head_size, block_size)\n@@ -270,9 +273,12 @@\n value_cache = torch.empty(size=value_cache_shape,\n dtype=torch_dtype,\n device=device)\n- if cache_dtype in [\"auto\", \"half\", \"bfloat16\", \"float\"]:\n- value_cache.uniform_(-scale, scale)\n- elif cache_dtype == 'fp8_e5m2':\n+ if cache_dtype == 'fp8_e5m2':\n _generate_random_fp8_e5m2(value_cache, -scale, scale)\n+ elif torch_dtype in [torch.half, torch.bfloat16, torch.float]:\n+ value_cache.uniform_(-scale, scale)\n+ else:\n+ raise ValueError(\n+ f\"Does not support value cache of type {cache_dtype}\")\n value_caches.append(value_cache)\n return key_caches, value_caches\n", "issue": "[Minor] Fix test_cache.py CI test failure\nWhile working on https://github.com/vllm-project/vllm/pull/2677, I noticed there was a test failure that came from a race condition from merging https://github.com/vllm-project/vllm/pull/2279 and https://github.com/vllm-project/vllm/pull/2616 \"at the same time\"\n", "before_files": [{"content": "import enum\nimport os\nimport socket\nimport subprocess\nimport uuid\nfrom platform import uname\nfrom typing import List, Tuple, Union\nfrom packaging.version import parse, Version\n\nimport psutil\nimport torch\nimport asyncio\nfrom functools import partial\nfrom typing import (\n Awaitable,\n Callable,\n TypeVar,\n)\nfrom collections import OrderedDict\nfrom typing import Any, Hashable, Optional\n\nfrom vllm.logger import init_logger\n\nT = TypeVar(\"T\")\nlogger = init_logger(__name__)\n\nSTR_DTYPE_TO_TORCH_DTYPE = {\n \"half\": torch.half,\n \"bfloat16\": torch.bfloat16,\n \"float\": torch.float,\n \"fp8_e5m2\": torch.uint8,\n}\n\n\nclass Device(enum.Enum):\n GPU = enum.auto()\n CPU = enum.auto()\n\n\nclass Counter:\n\n def __init__(self, start: int = 0) -> None:\n self.counter = start\n\n def __next__(self) -> int:\n i = self.counter\n self.counter += 1\n return i\n\n def reset(self) -> None:\n self.counter = 0\n\n\nclass LRUCache:\n\n def __init__(self, capacity: int):\n self.cache = OrderedDict()\n self.capacity = capacity\n\n def __contains__(self, key: Hashable) -> bool:\n return key in self.cache\n\n def __len__(self) -> int:\n return len(self.cache)\n\n def __getitem__(self, key: Hashable) -> Any:\n return self.get(key)\n\n def __setitem__(self, key: Hashable, value: Any) -> None:\n self.put(key, value)\n\n def __delitem__(self, key: Hashable) -> None:\n self.pop(key)\n\n def touch(self, key: Hashable) -> None:\n self.cache.move_to_end(key)\n\n def get(self, key: Hashable, default_value: Optional[Any] = None) -> int:\n if key in self.cache:\n value = self.cache[key]\n self.cache.move_to_end(key)\n else:\n value = default_value\n return value\n\n def put(self, key: Hashable, value: Any) -> None:\n self.cache[key] = value\n self.cache.move_to_end(key)\n self._remove_old_if_needed()\n\n def _on_remove(self, key: 
Hashable, value: Any):\n pass\n\n def remove_oldest(self):\n if not self.cache:\n return\n key, value = self.cache.popitem(last=False)\n self._on_remove(key, value)\n\n def _remove_old_if_needed(self) -> None:\n while len(self.cache) > self.capacity:\n self.remove_oldest()\n\n def pop(self, key: int, default_value: Optional[Any] = None) -> Any:\n run_on_remove = key in self.cache\n value = self.cache.pop(key, default_value)\n if run_on_remove:\n self._on_remove(key, value)\n return value\n\n def clear(self):\n while len(self.cache) > 0:\n self.remove_oldest()\n self.cache.clear()\n\n\ndef is_hip() -> bool:\n return torch.version.hip is not None\n\n\ndef get_max_shared_memory_bytes(gpu: int = 0) -> int:\n \"\"\"Returns the maximum shared memory per thread block in bytes.\"\"\"\n # NOTE: This import statement should be executed lazily since\n # the Neuron-X backend does not have the `cuda_utils` module.\n from vllm._C import cuda_utils\n\n max_shared_mem = cuda_utils.get_max_shared_memory_per_block_device_attribute(\n gpu)\n # value 0 will cause MAX_SEQ_LEN become negative and test_attention.py will fail\n assert max_shared_mem > 0, \"max_shared_mem can not be zero\"\n return int(max_shared_mem)\n\n\ndef get_cpu_memory() -> int:\n \"\"\"Returns the total CPU memory of the node in bytes.\"\"\"\n return psutil.virtual_memory().total\n\n\ndef random_uuid() -> str:\n return str(uuid.uuid4().hex)\n\n\ndef in_wsl() -> bool:\n # Reference: https://github.com/microsoft/WSL/issues/4071\n return \"microsoft\" in \" \".join(uname()).lower()\n\n\ndef make_async(func: Callable[..., T]) -> Callable[..., Awaitable[T]]:\n \"\"\"Take a blocking function, and run it on in an executor thread.\n\n This function prevents the blocking function from blocking the\n asyncio event loop.\n The code in this function needs to be thread safe.\n \"\"\"\n\n def _async_wrapper(*args, **kwargs) -> asyncio.Future:\n loop = asyncio.get_event_loop()\n p_func = partial(func, *args, **kwargs)\n return loop.run_in_executor(executor=None, func=p_func)\n\n return _async_wrapper\n\n\ndef get_ip() -> str:\n s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n s.connect((\"8.8.8.8\", 80)) # Doesn't need to be reachable\n return s.getsockname()[0]\n\n\ndef get_distributed_init_method(ip: str, port: int) -> str:\n return f\"tcp://{ip}:{port}\"\n\n\ndef get_open_port() -> int:\n with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:\n s.bind((\"\", 0))\n return s.getsockname()[1]\n\n\ndef set_cuda_visible_devices(device_ids: List[int]) -> None:\n os.environ[\"CUDA_VISIBLE_DEVICES\"] = \",\".join(map(str, device_ids))\n\n\ndef get_nvcc_cuda_version() -> Version:\n cuda_home = os.environ.get('CUDA_HOME')\n if not cuda_home:\n cuda_home = '/usr/local/cuda'\n logger.info(\n f'CUDA_HOME is not found in the environment. 
Using {cuda_home} as CUDA_HOME.'\n )\n nvcc_output = subprocess.check_output([cuda_home + \"/bin/nvcc\", \"-V\"],\n universal_newlines=True)\n output = nvcc_output.split()\n release_idx = output.index(\"release\") + 1\n nvcc_cuda_version = parse(output[release_idx].split(\",\")[0])\n return nvcc_cuda_version\n\n\ndef _generate_random_fp8_e5m2(\n tensor: torch.tensor,\n low: float,\n high: float,\n) -> None:\n # NOTE(zhaoyang): Due to NaN and Inf representation for fp8 data type,\n # it may occur Inf or NaN if we directly use torch.randint\n # to generate random data for fp8 data.\n # For example, s.11111.00 in fp8e5m2 format repesents Inf.\n # | E4M3 | E5M2\n #-----|-------------|-------------------\n # Inf | N/A | s.11111.00\n # NaN | s.1111.111 | s.11111.{01,10,11}\n from vllm._C import cache_ops\n tensor_tmp = torch.empty_like(tensor, dtype=torch.float16)\n tensor_tmp.uniform_(low, high)\n cache_ops.convert_fp8_e5m2(tensor_tmp, tensor)\n del tensor_tmp\n\n\ndef create_kv_caches_with_random(\n num_blocks: int,\n block_size: int,\n num_layers: int,\n num_heads: int,\n head_size: int,\n cache_dtype: Optional[Union[str, torch.dtype]],\n model_dtype: Optional[Union[str, torch.dtype]] = None,\n seed: Optional[int] = 0,\n device: Optional[str] = \"cuda\",\n) -> Tuple[List[torch.Tensor], List[torch.Tensor]]:\n torch.random.manual_seed(seed)\n if torch.cuda.is_available():\n torch.cuda.manual_seed(seed)\n\n if isinstance(cache_dtype, str):\n if cache_dtype == \"auto\":\n if isinstance(model_dtype, str):\n torch_dtype = STR_DTYPE_TO_TORCH_DTYPE[model_dtype]\n elif isinstance(model_dtype, torch.dtype):\n torch_dtype = model_dtype\n else:\n raise ValueError(f\"Invalid model dtype: {model_dtype}\")\n elif cache_dtype in [\"half\", \"bfloat16\", \"float\"]:\n torch_dtype = STR_DTYPE_TO_TORCH_DTYPE[cache_dtype]\n elif cache_dtype == \"fp8_e5m2\":\n torch_dtype = torch.uint8\n else:\n raise ValueError(f\"Invalid kv cache dtype: {cache_dtype}\")\n elif isinstance(cache_dtype, torch.dtype):\n torch_dtype = cache_dtype\n else:\n raise ValueError(f\"Invalid kv cache dtype: {cache_dtype}\")\n\n scale = head_size**-0.5\n x = 16 // torch.tensor([], dtype=torch_dtype).element_size()\n key_cache_shape = (num_blocks, num_heads, head_size // x, block_size, x)\n key_caches = []\n for _ in range(num_layers):\n key_cache = torch.empty(size=key_cache_shape,\n dtype=torch_dtype,\n device=device)\n if cache_dtype in [\"auto\", \"half\", \"bfloat16\", \"float\"]:\n key_cache.uniform_(-scale, scale)\n elif cache_dtype == 'fp8_e5m2':\n _generate_random_fp8_e5m2(key_cache, -scale, scale)\n key_caches.append(key_cache)\n\n value_cache_shape = (num_blocks, num_heads, head_size, block_size)\n value_caches = []\n for _ in range(num_layers):\n value_cache = torch.empty(size=value_cache_shape,\n dtype=torch_dtype,\n device=device)\n if cache_dtype in [\"auto\", \"half\", \"bfloat16\", \"float\"]:\n value_cache.uniform_(-scale, scale)\n elif cache_dtype == 'fp8_e5m2':\n _generate_random_fp8_e5m2(value_cache, -scale, scale)\n value_caches.append(value_cache)\n return key_caches, value_caches\n", "path": "vllm/utils.py"}], "after_files": [{"content": "import enum\nimport os\nimport socket\nimport subprocess\nimport uuid\nfrom platform import uname\nfrom typing import List, Tuple, Union\nfrom packaging.version import parse, Version\n\nimport psutil\nimport torch\nimport asyncio\nfrom functools import partial\nfrom typing import (\n Awaitable,\n Callable,\n TypeVar,\n)\nfrom collections import OrderedDict\nfrom typing import Any, 
Hashable, Optional\n\nfrom vllm.logger import init_logger\n\nT = TypeVar(\"T\")\nlogger = init_logger(__name__)\n\nSTR_DTYPE_TO_TORCH_DTYPE = {\n \"half\": torch.half,\n \"bfloat16\": torch.bfloat16,\n \"float\": torch.float,\n \"fp8_e5m2\": torch.uint8,\n}\n\n\nclass Device(enum.Enum):\n GPU = enum.auto()\n CPU = enum.auto()\n\n\nclass Counter:\n\n def __init__(self, start: int = 0) -> None:\n self.counter = start\n\n def __next__(self) -> int:\n i = self.counter\n self.counter += 1\n return i\n\n def reset(self) -> None:\n self.counter = 0\n\n\nclass LRUCache:\n\n def __init__(self, capacity: int):\n self.cache = OrderedDict()\n self.capacity = capacity\n\n def __contains__(self, key: Hashable) -> bool:\n return key in self.cache\n\n def __len__(self) -> int:\n return len(self.cache)\n\n def __getitem__(self, key: Hashable) -> Any:\n return self.get(key)\n\n def __setitem__(self, key: Hashable, value: Any) -> None:\n self.put(key, value)\n\n def __delitem__(self, key: Hashable) -> None:\n self.pop(key)\n\n def touch(self, key: Hashable) -> None:\n self.cache.move_to_end(key)\n\n def get(self, key: Hashable, default_value: Optional[Any] = None) -> int:\n if key in self.cache:\n value = self.cache[key]\n self.cache.move_to_end(key)\n else:\n value = default_value\n return value\n\n def put(self, key: Hashable, value: Any) -> None:\n self.cache[key] = value\n self.cache.move_to_end(key)\n self._remove_old_if_needed()\n\n def _on_remove(self, key: Hashable, value: Any):\n pass\n\n def remove_oldest(self):\n if not self.cache:\n return\n key, value = self.cache.popitem(last=False)\n self._on_remove(key, value)\n\n def _remove_old_if_needed(self) -> None:\n while len(self.cache) > self.capacity:\n self.remove_oldest()\n\n def pop(self, key: int, default_value: Optional[Any] = None) -> Any:\n run_on_remove = key in self.cache\n value = self.cache.pop(key, default_value)\n if run_on_remove:\n self._on_remove(key, value)\n return value\n\n def clear(self):\n while len(self.cache) > 0:\n self.remove_oldest()\n self.cache.clear()\n\n\ndef is_hip() -> bool:\n return torch.version.hip is not None\n\n\ndef get_max_shared_memory_bytes(gpu: int = 0) -> int:\n \"\"\"Returns the maximum shared memory per thread block in bytes.\"\"\"\n # NOTE: This import statement should be executed lazily since\n # the Neuron-X backend does not have the `cuda_utils` module.\n from vllm._C import cuda_utils\n\n max_shared_mem = cuda_utils.get_max_shared_memory_per_block_device_attribute(\n gpu)\n # value 0 will cause MAX_SEQ_LEN become negative and test_attention.py will fail\n assert max_shared_mem > 0, \"max_shared_mem can not be zero\"\n return int(max_shared_mem)\n\n\ndef get_cpu_memory() -> int:\n \"\"\"Returns the total CPU memory of the node in bytes.\"\"\"\n return psutil.virtual_memory().total\n\n\ndef random_uuid() -> str:\n return str(uuid.uuid4().hex)\n\n\ndef in_wsl() -> bool:\n # Reference: https://github.com/microsoft/WSL/issues/4071\n return \"microsoft\" in \" \".join(uname()).lower()\n\n\ndef make_async(func: Callable[..., T]) -> Callable[..., Awaitable[T]]:\n \"\"\"Take a blocking function, and run it on in an executor thread.\n\n This function prevents the blocking function from blocking the\n asyncio event loop.\n The code in this function needs to be thread safe.\n \"\"\"\n\n def _async_wrapper(*args, **kwargs) -> asyncio.Future:\n loop = asyncio.get_event_loop()\n p_func = partial(func, *args, **kwargs)\n return loop.run_in_executor(executor=None, func=p_func)\n\n return _async_wrapper\n\n\ndef 
get_ip() -> str:\n s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n s.connect((\"8.8.8.8\", 80)) # Doesn't need to be reachable\n return s.getsockname()[0]\n\n\ndef get_distributed_init_method(ip: str, port: int) -> str:\n return f\"tcp://{ip}:{port}\"\n\n\ndef get_open_port() -> int:\n with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:\n s.bind((\"\", 0))\n return s.getsockname()[1]\n\n\ndef set_cuda_visible_devices(device_ids: List[int]) -> None:\n os.environ[\"CUDA_VISIBLE_DEVICES\"] = \",\".join(map(str, device_ids))\n\n\ndef get_nvcc_cuda_version() -> Version:\n cuda_home = os.environ.get('CUDA_HOME')\n if not cuda_home:\n cuda_home = '/usr/local/cuda'\n logger.info(\n f'CUDA_HOME is not found in the environment. Using {cuda_home} as CUDA_HOME.'\n )\n nvcc_output = subprocess.check_output([cuda_home + \"/bin/nvcc\", \"-V\"],\n universal_newlines=True)\n output = nvcc_output.split()\n release_idx = output.index(\"release\") + 1\n nvcc_cuda_version = parse(output[release_idx].split(\",\")[0])\n return nvcc_cuda_version\n\n\ndef _generate_random_fp8_e5m2(\n tensor: torch.tensor,\n low: float,\n high: float,\n) -> None:\n # NOTE(zhaoyang): Due to NaN and Inf representation for fp8 data type,\n # it may occur Inf or NaN if we directly use torch.randint\n # to generate random data for fp8 data.\n # For example, s.11111.00 in fp8e5m2 format repesents Inf.\n # | E4M3 | E5M2\n #-----|-------------|-------------------\n # Inf | N/A | s.11111.00\n # NaN | s.1111.111 | s.11111.{01,10,11}\n from vllm._C import cache_ops\n tensor_tmp = torch.empty_like(tensor, dtype=torch.float16)\n tensor_tmp.uniform_(low, high)\n cache_ops.convert_fp8_e5m2(tensor_tmp, tensor)\n del tensor_tmp\n\n\ndef create_kv_caches_with_random(\n num_blocks: int,\n block_size: int,\n num_layers: int,\n num_heads: int,\n head_size: int,\n cache_dtype: Optional[Union[str, torch.dtype]],\n model_dtype: Optional[Union[str, torch.dtype]] = None,\n seed: Optional[int] = 0,\n device: Optional[str] = \"cuda\",\n) -> Tuple[List[torch.Tensor], List[torch.Tensor]]:\n torch.random.manual_seed(seed)\n if torch.cuda.is_available():\n torch.cuda.manual_seed(seed)\n\n if isinstance(cache_dtype, str):\n if cache_dtype == \"auto\":\n if isinstance(model_dtype, str):\n torch_dtype = STR_DTYPE_TO_TORCH_DTYPE[model_dtype]\n elif isinstance(model_dtype, torch.dtype):\n torch_dtype = model_dtype\n else:\n raise ValueError(f\"Invalid model dtype: {model_dtype}\")\n elif cache_dtype in [\"half\", \"bfloat16\", \"float\"]:\n torch_dtype = STR_DTYPE_TO_TORCH_DTYPE[cache_dtype]\n elif cache_dtype == \"fp8_e5m2\":\n torch_dtype = torch.uint8\n else:\n raise ValueError(f\"Invalid kv cache dtype: {cache_dtype}\")\n elif isinstance(cache_dtype, torch.dtype):\n torch_dtype = cache_dtype\n else:\n raise ValueError(f\"Invalid kv cache dtype: {cache_dtype}\")\n\n scale = head_size**-0.5\n x = 16 // torch.tensor([], dtype=torch_dtype).element_size()\n key_cache_shape = (num_blocks, num_heads, head_size // x, block_size, x)\n key_caches = []\n for _ in range(num_layers):\n key_cache = torch.empty(size=key_cache_shape,\n dtype=torch_dtype,\n device=device)\n if cache_dtype == 'fp8_e5m2':\n _generate_random_fp8_e5m2(key_cache, -scale, scale)\n elif torch_dtype in [torch.half, torch.bfloat16, torch.float]:\n key_cache.uniform_(-scale, scale)\n else:\n raise ValueError(\n f\"Does not support key cache of type {cache_dtype}\")\n key_caches.append(key_cache)\n\n value_cache_shape = (num_blocks, num_heads, head_size, block_size)\n value_caches = []\n for 
_ in range(num_layers):\n value_cache = torch.empty(size=value_cache_shape,\n dtype=torch_dtype,\n device=device)\n if cache_dtype == 'fp8_e5m2':\n _generate_random_fp8_e5m2(value_cache, -scale, scale)\n elif torch_dtype in [torch.half, torch.bfloat16, torch.float]:\n value_cache.uniform_(-scale, scale)\n else:\n raise ValueError(\n f\"Does not support value cache of type {cache_dtype}\")\n value_caches.append(value_cache)\n return key_caches, value_caches\n", "path": "vllm/utils.py"}]} | 3,275 | 414 |
gh_patches_debug_52105 | rasdani/github-patches | git_diff | sktime__sktime-3566 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] check_estimator tests_to_exclude doesn't exclude the specified tests
**Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
Specifying a list of tests in the `tests_to_exclude` variable in `check_estimator` doesn't exclude the specified tests.
**To Reproduce**
<!--
Add a Minimal, Complete, and Verifiable example (for more details, see e.g. https://stackoverflow.com/help/mcve
If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com
-->
```python
from sktime.utils.estimator_checks import check_estimator
from sktime.forecasting.naive import NaiveForecaster
log = check_estimator(
NaiveForecaster, return_exceptions=True, tests_to_exclude=["test_predict_residuals"]
)
should_be_empty = [key for key in log.keys() if "test_predict_residuals" in key]
assert len(should_be_empty) == 0
```
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
Any tests in `tests_to_exclude` should actually be excluded!
(FYI for whoever takes this on. The problem is somewhere [here](https://github.com/sktime/sktime/blob/fe6f43c2e06e586c2dd16102aef5eef4bd4b6ce6/sktime/tests/test_all_estimators.py#L378))
**Versions**
<details>
<!--
Please run the following code snippet and paste the output here:
from sktime import show_versions; show_versions()
-->
sktime==0.13.4
python==3.8.5
</details>
<!-- Thanks for contributing! -->
--- END ISSUE ---
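To make the expected behaviour concrete, the reproduction above can be read as a regression check (editor's sketch, not part of the original report; it assumes a sktime build in which the exclusion arguments are honoured by every internal `run_tests` call):

```python
from sktime.forecasting.naive import NaiveForecaster
from sktime.utils.estimator_checks import check_estimator

# Once tests_to_exclude is applied to the scitype-level suite as well,
# no test_predict_residuals fixtures should appear in the returned log.
log = check_estimator(
    NaiveForecaster,
    return_exceptions=True,
    tests_to_exclude=["test_predict_residuals"],
)
assert not [key for key in log if "test_predict_residuals" in key]
```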
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sktime/utils/estimator_checks.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Estimator checker for extension."""
3
4 __author__ = ["fkiraly"]
5 __all__ = ["check_estimator"]
6
7 from inspect import isclass
8
9
10 def check_estimator(
11 estimator,
12 return_exceptions=True,
13 tests_to_run=None,
14 fixtures_to_run=None,
15 verbose=True,
16 tests_to_exclude=None,
17 fixtures_to_exclude=None,
18 ):
19 """Run all tests on one single estimator.
20
21 Tests that are run on estimator:
22 all tests in test_all_estimators
23 all interface compatibility tests from the module of estimator's scitype
24 for example, test_all_forecasters if estimator is a forecaster
25
26 Parameters
27 ----------
28 estimator : estimator class or estimator instance
29 return_exception : bool, optional, default=True
30 whether to return exceptions/failures, or raise them
31 if True: returns exceptions in results
32 if False: raises exceptions as they occur
33 tests_to_run : str or list of str, optional. Default = run all tests.
34 Names (test/function name string) of tests to run.
35 sub-sets tests that are run to the tests given here.
36 fixtures_to_run : str or list of str, optional. Default = run all tests.
37 pytest test-fixture combination codes, which test-fixture combinations to run.
38 sub-sets tests and fixtures to run to the list given here.
39 If both tests_to_run and fixtures_to_run are provided, runs the *union*,
40 i.e., all test-fixture combinations for tests in tests_to_run,
41 plus all test-fixture combinations in fixtures_to_run.
42 verbose : str, optional, default=True.
43 whether to print out informative summary of tests run.
44 tests_to_exclude : str or list of str, names of tests to exclude. default = None
45 removes tests that should not be run, after subsetting via tests_to_run.
46 fixtures_to_exclude : str or list of str, fixtures to exclude. default = None
47 removes test-fixture combinations that should not be run.
48 This is done after subsetting via fixtures_to_run.
49
50 Returns
51 -------
52 results : dict of results of the tests in self
53 keys are test/fixture strings, identical as in pytest, e.g., test[fixture]
54 entries are the string "PASSED" if the test passed,
55 or the exception raised if the test did not pass
56 returned only if all tests pass, or return_exceptions=True
57
58 Raises
59 ------
60 if return_exception=False, raises any exception produced by the tests directly
61
62 Examples
63 --------
64 >>> from sktime.forecasting.theta import ThetaForecaster
65 >>> from sktime.utils.estimator_checks import check_estimator
66 >>> results = check_estimator(ThetaForecaster, tests_to_run="test_pred_int_tag")
67 All tests PASSED!
68 >>> check_estimator(
69 ... ThetaForecaster, fixtures_to_run="test_score[ThetaForecaster-y:1cols-fh=1]"
70 ... )
71 All tests PASSED!
72 {'test_score[ThetaForecaster-y:1cols-fh=1]': 'PASSED'}
73 """
74 from sktime.base import BaseEstimator
75 from sktime.classification.early_classification.tests.test_all_early_classifiers import ( # noqa E501
76 TestAllEarlyClassifiers,
77 )
78 from sktime.classification.tests.test_all_classifiers import TestAllClassifiers
79 from sktime.dists_kernels.tests.test_all_dist_kernels import (
80 TestAllPairwiseTransformers,
81 TestAllPanelTransformers,
82 )
83 from sktime.forecasting.tests.test_all_forecasters import TestAllForecasters
84 from sktime.registry import scitype
85 from sktime.regression.tests.test_all_regressors import TestAllRegressors
86 from sktime.tests.test_all_estimators import TestAllEstimators, TestAllObjects
87 from sktime.transformations.tests.test_all_transformers import TestAllTransformers
88
89 testclass_dict = dict()
90 testclass_dict["classifier"] = TestAllClassifiers
91 testclass_dict["early_classifier"] = TestAllEarlyClassifiers
92 testclass_dict["forecaster"] = TestAllForecasters
93 testclass_dict["regressor"] = TestAllRegressors
94 testclass_dict["transformer"] = TestAllTransformers
95 testclass_dict["transformer-pairwise"] = TestAllPairwiseTransformers
96 testclass_dict["transformer-pairwise-panel"] = TestAllPanelTransformers
97
98 results = TestAllObjects().run_tests(
99 estimator=estimator,
100 return_exceptions=return_exceptions,
101 tests_to_run=tests_to_run,
102 fixtures_to_run=fixtures_to_run,
103 tests_to_exclude=tests_to_exclude,
104 fixtures_to_exclude=fixtures_to_exclude,
105 )
106
107 def is_estimator(obj):
108 """Return whether obj is an estimator class or estimator object."""
109 if isclass(obj):
110 return issubclass(obj, BaseEstimator)
111 else:
112 return isinstance(obj, BaseEstimator)
113
114 if is_estimator(estimator):
115 results_estimator = TestAllEstimators().run_tests(
116 estimator=estimator,
117 return_exceptions=return_exceptions,
118 tests_to_run=tests_to_run,
119 fixtures_to_run=fixtures_to_run,
120 tests_to_exclude=tests_to_exclude,
121 fixtures_to_exclude=fixtures_to_exclude,
122 )
123 results.update(results_estimator)
124
125 try:
126 scitype_of_estimator = scitype(estimator)
127 except Exception:
128 scitype_of_estimator = ""
129
130 if scitype_of_estimator in testclass_dict.keys():
131 results_scitype = testclass_dict[scitype_of_estimator]().run_tests(
132 estimator=estimator,
133 return_exceptions=return_exceptions,
134 tests_to_run=tests_to_run,
135 fixtures_to_run=fixtures_to_run,
136 )
137 results.update(results_scitype)
138
139 failed_tests = [key for key in results.keys() if results[key] != "PASSED"]
140 if len(failed_tests) > 0:
141 msg = failed_tests
142 msg = ["FAILED: " + x for x in msg]
143 msg = "\n".join(msg)
144 else:
145 msg = "All tests PASSED!"
146
147 if verbose:
148 # printing is an intended feature, for console usage and interactive debugging
149 print(msg) # noqa T001
150
151 return results
152
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sktime/utils/estimator_checks.py b/sktime/utils/estimator_checks.py
--- a/sktime/utils/estimator_checks.py
+++ b/sktime/utils/estimator_checks.py
@@ -133,6 +133,8 @@
return_exceptions=return_exceptions,
tests_to_run=tests_to_run,
fixtures_to_run=fixtures_to_run,
+ tests_to_exclude=tests_to_exclude,
+ fixtures_to_exclude=fixtures_to_exclude,
)
results.update(results_scitype)
| {"golden_diff": "diff --git a/sktime/utils/estimator_checks.py b/sktime/utils/estimator_checks.py\n--- a/sktime/utils/estimator_checks.py\n+++ b/sktime/utils/estimator_checks.py\n@@ -133,6 +133,8 @@\n return_exceptions=return_exceptions,\n tests_to_run=tests_to_run,\n fixtures_to_run=fixtures_to_run,\n+ tests_to_exclude=tests_to_exclude,\n+ fixtures_to_exclude=fixtures_to_exclude,\n )\n results.update(results_scitype)\n", "issue": "[BUG] check_estimator tests_to_exclude doesn't exclude the specified tests\n**Describe the bug**\r\n<!--\r\nA clear and concise description of what the bug is.\r\n-->\r\nSpecifying a list of tests in the `tests_to_exclude` variable in `check_estimator` doesn't exclude the specified tests.\r\n\r\n**To Reproduce**\r\n<!--\r\nAdd a Minimal, Complete, and Verifiable example (for more details, see e.g. https://stackoverflow.com/help/mcve\r\n\r\nIf the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com\r\n-->\r\n\r\n```python\r\nfrom sktime.utils.estimator_checks import check_estimator\r\nfrom sktime.forecasting.naive import NaiveForecaster\r\n\r\nlog = check_estimator(\r\n NaiveForecaster, return_exceptions=True, tests_to_exclude=[\"test_predict_residuals\"]\r\n)\r\n\r\nshould_be_empty = [key for key in log.keys() if \"test_predict_residuals\" in key]\r\n\r\nassert len(should_be_empty) == 0\r\n```\r\n\r\n**Expected behavior**\r\n<!--\r\nA clear and concise description of what you expected to happen.\r\n-->\r\nAny tests in the tests_to_excluded should be actually excluded!\r\n(FYI for whoever takes this on. The problem is somewhere [here](https://github.com/sktime/sktime/blob/fe6f43c2e06e586c2dd16102aef5eef4bd4b6ce6/sktime/tests/test_all_estimators.py#L378))\r\n\r\n**Versions**\r\n<details>\r\n\r\n<!--\r\nPlease run the following code snippet and paste the output here:\r\n \r\nfrom sktime import show_versions; show_versions()\r\n-->\r\nsktime==0.13.4 \r\npython==3.8.5\r\n</details> \r\n\r\n<!-- Thanks for contributing! -->\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Estimator checker for extension.\"\"\"\n\n__author__ = [\"fkiraly\"]\n__all__ = [\"check_estimator\"]\n\nfrom inspect import isclass\n\n\ndef check_estimator(\n estimator,\n return_exceptions=True,\n tests_to_run=None,\n fixtures_to_run=None,\n verbose=True,\n tests_to_exclude=None,\n fixtures_to_exclude=None,\n):\n \"\"\"Run all tests on one single estimator.\n\n Tests that are run on estimator:\n all tests in test_all_estimators\n all interface compatibility tests from the module of estimator's scitype\n for example, test_all_forecasters if estimator is a forecaster\n\n Parameters\n ----------\n estimator : estimator class or estimator instance\n return_exception : bool, optional, default=True\n whether to return exceptions/failures, or raise them\n if True: returns exceptions in results\n if False: raises exceptions as they occur\n tests_to_run : str or list of str, optional. Default = run all tests.\n Names (test/function name string) of tests to run.\n sub-sets tests that are run to the tests given here.\n fixtures_to_run : str or list of str, optional. 
Default = run all tests.\n pytest test-fixture combination codes, which test-fixture combinations to run.\n sub-sets tests and fixtures to run to the list given here.\n If both tests_to_run and fixtures_to_run are provided, runs the *union*,\n i.e., all test-fixture combinations for tests in tests_to_run,\n plus all test-fixture combinations in fixtures_to_run.\n verbose : str, optional, default=True.\n whether to print out informative summary of tests run.\n tests_to_exclude : str or list of str, names of tests to exclude. default = None\n removes tests that should not be run, after subsetting via tests_to_run.\n fixtures_to_exclude : str or list of str, fixtures to exclude. default = None\n removes test-fixture combinations that should not be run.\n This is done after subsetting via fixtures_to_run.\n\n Returns\n -------\n results : dict of results of the tests in self\n keys are test/fixture strings, identical as in pytest, e.g., test[fixture]\n entries are the string \"PASSED\" if the test passed,\n or the exception raised if the test did not pass\n returned only if all tests pass, or return_exceptions=True\n\n Raises\n ------\n if return_exception=False, raises any exception produced by the tests directly\n\n Examples\n --------\n >>> from sktime.forecasting.theta import ThetaForecaster\n >>> from sktime.utils.estimator_checks import check_estimator\n >>> results = check_estimator(ThetaForecaster, tests_to_run=\"test_pred_int_tag\")\n All tests PASSED!\n >>> check_estimator(\n ... ThetaForecaster, fixtures_to_run=\"test_score[ThetaForecaster-y:1cols-fh=1]\"\n ... )\n All tests PASSED!\n {'test_score[ThetaForecaster-y:1cols-fh=1]': 'PASSED'}\n \"\"\"\n from sktime.base import BaseEstimator\n from sktime.classification.early_classification.tests.test_all_early_classifiers import ( # noqa E501\n TestAllEarlyClassifiers,\n )\n from sktime.classification.tests.test_all_classifiers import TestAllClassifiers\n from sktime.dists_kernels.tests.test_all_dist_kernels import (\n TestAllPairwiseTransformers,\n TestAllPanelTransformers,\n )\n from sktime.forecasting.tests.test_all_forecasters import TestAllForecasters\n from sktime.registry import scitype\n from sktime.regression.tests.test_all_regressors import TestAllRegressors\n from sktime.tests.test_all_estimators import TestAllEstimators, TestAllObjects\n from sktime.transformations.tests.test_all_transformers import TestAllTransformers\n\n testclass_dict = dict()\n testclass_dict[\"classifier\"] = TestAllClassifiers\n testclass_dict[\"early_classifier\"] = TestAllEarlyClassifiers\n testclass_dict[\"forecaster\"] = TestAllForecasters\n testclass_dict[\"regressor\"] = TestAllRegressors\n testclass_dict[\"transformer\"] = TestAllTransformers\n testclass_dict[\"transformer-pairwise\"] = TestAllPairwiseTransformers\n testclass_dict[\"transformer-pairwise-panel\"] = TestAllPanelTransformers\n\n results = TestAllObjects().run_tests(\n estimator=estimator,\n return_exceptions=return_exceptions,\n tests_to_run=tests_to_run,\n fixtures_to_run=fixtures_to_run,\n tests_to_exclude=tests_to_exclude,\n fixtures_to_exclude=fixtures_to_exclude,\n )\n\n def is_estimator(obj):\n \"\"\"Return whether obj is an estimator class or estimator object.\"\"\"\n if isclass(obj):\n return issubclass(obj, BaseEstimator)\n else:\n return isinstance(obj, BaseEstimator)\n\n if is_estimator(estimator):\n results_estimator = TestAllEstimators().run_tests(\n estimator=estimator,\n return_exceptions=return_exceptions,\n tests_to_run=tests_to_run,\n 
fixtures_to_run=fixtures_to_run,\n tests_to_exclude=tests_to_exclude,\n fixtures_to_exclude=fixtures_to_exclude,\n )\n results.update(results_estimator)\n\n try:\n scitype_of_estimator = scitype(estimator)\n except Exception:\n scitype_of_estimator = \"\"\n\n if scitype_of_estimator in testclass_dict.keys():\n results_scitype = testclass_dict[scitype_of_estimator]().run_tests(\n estimator=estimator,\n return_exceptions=return_exceptions,\n tests_to_run=tests_to_run,\n fixtures_to_run=fixtures_to_run,\n )\n results.update(results_scitype)\n\n failed_tests = [key for key in results.keys() if results[key] != \"PASSED\"]\n if len(failed_tests) > 0:\n msg = failed_tests\n msg = [\"FAILED: \" + x for x in msg]\n msg = \"\\n\".join(msg)\n else:\n msg = \"All tests PASSED!\"\n\n if verbose:\n # printing is an intended feature, for console usage and interactive debugging\n print(msg) # noqa T001\n\n return results\n", "path": "sktime/utils/estimator_checks.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Estimator checker for extension.\"\"\"\n\n__author__ = [\"fkiraly\"]\n__all__ = [\"check_estimator\"]\n\nfrom inspect import isclass\n\n\ndef check_estimator(\n estimator,\n return_exceptions=True,\n tests_to_run=None,\n fixtures_to_run=None,\n verbose=True,\n tests_to_exclude=None,\n fixtures_to_exclude=None,\n):\n \"\"\"Run all tests on one single estimator.\n\n Tests that are run on estimator:\n all tests in test_all_estimators\n all interface compatibility tests from the module of estimator's scitype\n for example, test_all_forecasters if estimator is a forecaster\n\n Parameters\n ----------\n estimator : estimator class or estimator instance\n return_exception : bool, optional, default=True\n whether to return exceptions/failures, or raise them\n if True: returns exceptions in results\n if False: raises exceptions as they occur\n tests_to_run : str or list of str, optional. Default = run all tests.\n Names (test/function name string) of tests to run.\n sub-sets tests that are run to the tests given here.\n fixtures_to_run : str or list of str, optional. Default = run all tests.\n pytest test-fixture combination codes, which test-fixture combinations to run.\n sub-sets tests and fixtures to run to the list given here.\n If both tests_to_run and fixtures_to_run are provided, runs the *union*,\n i.e., all test-fixture combinations for tests in tests_to_run,\n plus all test-fixture combinations in fixtures_to_run.\n verbose : str, optional, default=True.\n whether to print out informative summary of tests run.\n tests_to_exclude : str or list of str, names of tests to exclude. default = None\n removes tests that should not be run, after subsetting via tests_to_run.\n fixtures_to_exclude : str or list of str, fixtures to exclude. 
default = None\n removes test-fixture combinations that should not be run.\n This is done after subsetting via fixtures_to_run.\n\n Returns\n -------\n results : dict of results of the tests in self\n keys are test/fixture strings, identical as in pytest, e.g., test[fixture]\n entries are the string \"PASSED\" if the test passed,\n or the exception raised if the test did not pass\n returned only if all tests pass, or return_exceptions=True\n\n Raises\n ------\n if return_exception=False, raises any exception produced by the tests directly\n\n Examples\n --------\n >>> from sktime.forecasting.theta import ThetaForecaster\n >>> from sktime.utils.estimator_checks import check_estimator\n >>> results = check_estimator(ThetaForecaster, tests_to_run=\"test_pred_int_tag\")\n All tests PASSED!\n >>> check_estimator(\n ... ThetaForecaster, fixtures_to_run=\"test_score[ThetaForecaster-y:1cols-fh=1]\"\n ... )\n All tests PASSED!\n {'test_score[ThetaForecaster-y:1cols-fh=1]': 'PASSED'}\n \"\"\"\n from sktime.base import BaseEstimator\n from sktime.classification.early_classification.tests.test_all_early_classifiers import ( # noqa E501\n TestAllEarlyClassifiers,\n )\n from sktime.classification.tests.test_all_classifiers import TestAllClassifiers\n from sktime.dists_kernels.tests.test_all_dist_kernels import (\n TestAllPairwiseTransformers,\n TestAllPanelTransformers,\n )\n from sktime.forecasting.tests.test_all_forecasters import TestAllForecasters\n from sktime.registry import scitype\n from sktime.regression.tests.test_all_regressors import TestAllRegressors\n from sktime.tests.test_all_estimators import TestAllEstimators, TestAllObjects\n from sktime.transformations.tests.test_all_transformers import TestAllTransformers\n\n testclass_dict = dict()\n testclass_dict[\"classifier\"] = TestAllClassifiers\n testclass_dict[\"early_classifier\"] = TestAllEarlyClassifiers\n testclass_dict[\"forecaster\"] = TestAllForecasters\n testclass_dict[\"regressor\"] = TestAllRegressors\n testclass_dict[\"transformer\"] = TestAllTransformers\n testclass_dict[\"transformer-pairwise\"] = TestAllPairwiseTransformers\n testclass_dict[\"transformer-pairwise-panel\"] = TestAllPanelTransformers\n\n results = TestAllObjects().run_tests(\n estimator=estimator,\n return_exceptions=return_exceptions,\n tests_to_run=tests_to_run,\n fixtures_to_run=fixtures_to_run,\n tests_to_exclude=tests_to_exclude,\n fixtures_to_exclude=fixtures_to_exclude,\n )\n\n def is_estimator(obj):\n \"\"\"Return whether obj is an estimator class or estimator object.\"\"\"\n if isclass(obj):\n return issubclass(obj, BaseEstimator)\n else:\n return isinstance(obj, BaseEstimator)\n\n if is_estimator(estimator):\n results_estimator = TestAllEstimators().run_tests(\n estimator=estimator,\n return_exceptions=return_exceptions,\n tests_to_run=tests_to_run,\n fixtures_to_run=fixtures_to_run,\n tests_to_exclude=tests_to_exclude,\n fixtures_to_exclude=fixtures_to_exclude,\n )\n results.update(results_estimator)\n\n try:\n scitype_of_estimator = scitype(estimator)\n except Exception:\n scitype_of_estimator = \"\"\n\n if scitype_of_estimator in testclass_dict.keys():\n results_scitype = testclass_dict[scitype_of_estimator]().run_tests(\n estimator=estimator,\n return_exceptions=return_exceptions,\n tests_to_run=tests_to_run,\n fixtures_to_run=fixtures_to_run,\n tests_to_exclude=tests_to_exclude,\n fixtures_to_exclude=fixtures_to_exclude,\n )\n results.update(results_scitype)\n\n failed_tests = [key for key in results.keys() if results[key] != \"PASSED\"]\n if 
len(failed_tests) > 0:\n msg = failed_tests\n msg = [\"FAILED: \" + x for x in msg]\n msg = \"\\n\".join(msg)\n else:\n msg = \"All tests PASSED!\"\n\n if verbose:\n # printing is an intended feature, for console usage and interactive debugging\n print(msg) # noqa T001\n\n return results\n", "path": "sktime/utils/estimator_checks.py"}]} | 2,359 | 117 |
gh_patches_debug_35991 | rasdani/github-patches | git_diff | ansible__ansible-modules-core-2590 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ini_file module: brackets in key break idempotence
Issue Type: Bug Report
Ansible Version: ansible-playbook 2.0.0 (devel 5d645f3a8b) last updated 2015/11/13 13:27:13 (GMT +200)
Environment: Ubuntu 15.04
Summary: When using ini_file for modifying /etc/php5/fpm/pool.d/www.conf it will not recognize strings which include "[" or "]" in the key and will append them again to the file. Using Ansible 1.9.4 this works just as expected.
Example: You want to set
```
php_admin_value[sendmail_path]: "/usr/sbin/sendmail -t -i -f [email protected]"
```
using this hash:
``` yaml
php_admin_value[sendmail_path]:
value: '"/usr/sbin/sendmail -t -i -f [email protected]"'
```
On the first run Ansible will append this string to the file just as expected. On the second run it will append it again, even though it's already present. When removing the brackets it will only be inserted once.
Removing the brackets from the value will work as expected (not changed on the second run).
--- END ISSUE ---
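As a brief illustration of the root cause (editor's sketch using the key from the report, not part of the original issue): `do_ini` in the module below builds its regexes directly from the raw option name, so the brackets in a key like `php_admin_value[sendmail_path]` are parsed as a character class and the existing line is never matched — which is why the patch escapes the key with `re.escape` before matching.

```python
import re

option = "php_admin_value[sendmail_path]"
line = 'php_admin_value[sendmail_path] = "/usr/sbin/sendmail -t -i"\n'

# Unescaped, as in the current module: "[sendmail_path]" becomes a
# character class, the literal "[" never matches, the option looks absent
# and gets appended again on every run.
print(re.match('%s *=' % option, line))             # None
# Escaped, the key is matched literally and the run stays idempotent.
print(re.match('%s *=' % re.escape(option), line))  # <re.Match object ...>
```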
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `files/ini_file.py`
Content:
```
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # (c) 2012, Jan-Piet Mens <jpmens () gmail.com>
5 # (c) 2015, Ales Nosek <anosek.nosek () gmail.com>
6 #
7 # This file is part of Ansible
8 #
9 # Ansible is free software: you can redistribute it and/or modify
10 # it under the terms of the GNU General Public License as published by
11 # the Free Software Foundation, either version 3 of the License, or
12 # (at your option) any later version.
13 #
14 # Ansible is distributed in the hope that it will be useful,
15 # but WITHOUT ANY WARRANTY; without even the implied warranty of
16 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
17 # GNU General Public License for more details.
18 #
19 # You should have received a copy of the GNU General Public License
20 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
21 #
22
23 DOCUMENTATION = '''
24 ---
25 module: ini_file
26 short_description: Tweak settings in INI files
27 extends_documentation_fragment: files
28 description:
29 - Manage (add, remove, change) individual settings in an INI-style file without having
30 to manage the file as a whole with, say, M(template) or M(assemble). Adds missing
31 sections if they don't exist.
32 - Before version 2.0, comments are discarded when the source file is read, and therefore will not show up in the destination file.
33 version_added: "0.9"
34 options:
35 dest:
36 description:
37 - Path to the INI-style file; this file is created if required
38 required: true
39 default: null
40 section:
41 description:
42 - Section name in INI file. This is added if C(state=present) automatically when
43 a single value is being set.
44 required: true
45 default: null
46 option:
47 description:
48 - if set (required for changing a I(value)), this is the name of the option.
49 - May be omitted if adding/removing a whole I(section).
50 required: false
51 default: null
52 value:
53 description:
54 - the string value to be associated with an I(option). May be omitted when removing an I(option).
55 required: false
56 default: null
57 backup:
58 description:
59 - Create a backup file including the timestamp information so you can get
60 the original file back if you somehow clobbered it incorrectly.
61 required: false
62 default: "no"
63 choices: [ "yes", "no" ]
64 others:
65 description:
66 - all arguments accepted by the M(file) module also work here
67 required: false
68 state:
69 description:
70 - If set to C(absent) the option or section will be removed if present instead of created.
71 required: false
72 default: "present"
73 choices: [ "present", "absent" ]
74 notes:
75 - While it is possible to add an I(option) without specifying a I(value), this makes
76 no sense.
77 - A section named C(default) cannot be added by the module, but if it exists, individual
78 options within the section can be updated. (This is a limitation of Python's I(ConfigParser).)
79 Either use M(template) to create a base INI file with a C([default]) section, or use
80 M(lineinfile) to add the missing line.
81 requirements: [ ConfigParser ]
82 author:
83 - "Jan-Piet Mens (@jpmens)"
84 - "Ales Nosek (@noseka1)"
85 '''
86
87 EXAMPLES = '''
88 # Ensure "fav=lemonade is in section "[drinks]" in specified file
89 - ini_file: dest=/etc/conf section=drinks option=fav value=lemonade mode=0600 backup=yes
90
91 - ini_file: dest=/etc/anotherconf
92 section=drinks
93 option=temperature
94 value=cold
95 backup=yes
96 '''
97
98 import ConfigParser
99 import sys
100 import os
101
102 # ==============================================================
103 # do_ini
104
105 def do_ini(module, filename, section=None, option=None, value=None, state='present', backup=False):
106
107
108 if not os.path.exists(filename):
109 try:
110 open(filename,'w').close()
111 except:
112 module.fail_json(msg="Destination file %s not writable" % filename)
113 ini_file = open(filename, 'r')
114 try:
115 ini_lines = ini_file.readlines()
116 # append a fake section line to simplify the logic
117 ini_lines.append('[')
118 finally:
119 ini_file.close()
120
121 within_section = not section
122 section_start = 0
123 changed = False
124
125 for index, line in enumerate(ini_lines):
126 if line.startswith('[%s]' % section):
127 within_section = True
128 section_start = index
129 elif line.startswith('['):
130 if within_section:
131 if state == 'present':
132 # insert missing option line at the end of the section
133 ini_lines.insert(index, '%s = %s\n' % (option, value))
134 changed = True
135 elif state == 'absent' and not option:
136 # remove the entire section
137 del ini_lines[section_start:index]
138 changed = True
139 break
140 else:
141 if within_section and option:
142 if state == 'present':
143 # change the existing option line
144 if re.match('%s *=' % option, line) \
145 or re.match('# *%s *=' % option, line) \
146 or re.match('; *%s *=' % option, line):
147 newline = '%s = %s\n' % (option, value)
148 changed = ini_lines[index] != newline
149 ini_lines[index] = newline
150 if changed:
151 # remove all possible option occurences from the rest of the section
152 index = index + 1
153 while index < len(ini_lines):
154 line = ini_lines[index]
155 if line.startswith('['):
156 break
157 if re.match('%s *=' % option, line):
158 del ini_lines[index]
159 else:
160 index = index + 1
161 break
162 else:
163 # comment out the existing option line
164 if re.match('%s *=' % option, line):
165 ini_lines[index] = '#%s' % ini_lines[index]
166 changed = True
167 break
168
169 # remove the fake section line
170 del ini_lines[-1:]
171
172 if not within_section and option and state == 'present':
173 ini_lines.append('[%s]\n' % section)
174 ini_lines.append('%s = %s\n' % (option, value))
175 changed = True
176
177
178 if changed and not module.check_mode:
179 if backup:
180 module.backup_local(filename)
181 ini_file = open(filename, 'w')
182 try:
183 ini_file.writelines(ini_lines)
184 finally:
185 ini_file.close()
186
187 return changed
188
189 # ==============================================================
190 # main
191
192 def main():
193
194 module = AnsibleModule(
195 argument_spec = dict(
196 dest = dict(required=True),
197 section = dict(required=True),
198 option = dict(required=False),
199 value = dict(required=False),
200 backup = dict(default='no', type='bool'),
201 state = dict(default='present', choices=['present', 'absent'])
202 ),
203 add_file_common_args = True,
204 supports_check_mode = True
205 )
206
207 info = dict()
208
209 dest = os.path.expanduser(module.params['dest'])
210 section = module.params['section']
211 option = module.params['option']
212 value = module.params['value']
213 state = module.params['state']
214 backup = module.params['backup']
215
216 changed = do_ini(module, dest, section, option, value, state, backup)
217
218 file_args = module.load_file_common_arguments(module.params)
219 changed = module.set_fs_attributes_if_different(file_args, changed)
220
221 # Mission complete
222 module.exit_json(dest=dest, changed=changed, msg="OK")
223
224 # import module snippets
225 from ansible.module_utils.basic import *
226 if __name__ == '__main__':
227 main()
228
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/files/ini_file.py b/files/ini_file.py
--- a/files/ini_file.py
+++ b/files/ini_file.py
@@ -99,6 +99,22 @@
import sys
import os
+# ==============================================================
+# match_opt
+
+def match_opt(option, line):
+ option = re.escape(option)
+ return re.match('%s *=' % option, line) \
+ or re.match('# *%s *=' % option, line) \
+ or re.match('; *%s *=' % option, line)
+
+# ==============================================================
+# match_active_opt
+
+def match_active_opt(option, line):
+ option = re.escape(option)
+ return re.match('%s *=' % option, line)
+
# ==============================================================
# do_ini
@@ -141,9 +157,7 @@
if within_section and option:
if state == 'present':
# change the existing option line
- if re.match('%s *=' % option, line) \
- or re.match('# *%s *=' % option, line) \
- or re.match('; *%s *=' % option, line):
+ if match_opt(option, line):
newline = '%s = %s\n' % (option, value)
changed = ini_lines[index] != newline
ini_lines[index] = newline
@@ -154,14 +168,14 @@
line = ini_lines[index]
if line.startswith('['):
break
- if re.match('%s *=' % option, line):
+ if match_active_opt(option, line):
del ini_lines[index]
else:
index = index + 1
break
else:
# comment out the existing option line
- if re.match('%s *=' % option, line):
+ if match_active_opt(option, line):
ini_lines[index] = '#%s' % ini_lines[index]
changed = True
break
| {"golden_diff": "diff --git a/files/ini_file.py b/files/ini_file.py\n--- a/files/ini_file.py\n+++ b/files/ini_file.py\n@@ -99,6 +99,22 @@\n import sys\n import os\n \n+# ==============================================================\n+# match_opt\n+\n+def match_opt(option, line):\n+ option = re.escape(option)\n+ return re.match('%s *=' % option, line) \\\n+ or re.match('# *%s *=' % option, line) \\\n+ or re.match('; *%s *=' % option, line)\n+\n+# ==============================================================\n+# match_active_opt\n+\n+def match_active_opt(option, line):\n+ option = re.escape(option)\n+ return re.match('%s *=' % option, line)\n+\n # ==============================================================\n # do_ini\n \n@@ -141,9 +157,7 @@\n if within_section and option:\n if state == 'present':\n # change the existing option line\n- if re.match('%s *=' % option, line) \\\n- or re.match('# *%s *=' % option, line) \\\n- or re.match('; *%s *=' % option, line):\n+ if match_opt(option, line):\n newline = '%s = %s\\n' % (option, value)\n changed = ini_lines[index] != newline\n ini_lines[index] = newline\n@@ -154,14 +168,14 @@\n line = ini_lines[index]\n if line.startswith('['):\n break\n- if re.match('%s *=' % option, line):\n+ if match_active_opt(option, line):\n del ini_lines[index]\n else:\n index = index + 1\n break\n else:\n # comment out the existing option line\n- if re.match('%s *=' % option, line):\n+ if match_active_opt(option, line):\n ini_lines[index] = '#%s' % ini_lines[index]\n changed = True\n break\n", "issue": "ini_file module: bracklets in key break idempotence\nIssue Type: Bug Report\nAnsible Version: ansible-playbook 2.0.0 (devel 5d645f3a8b) last updated 2015/11/13 13:27:13 (GMT +200)\nEnvironment: Ubuntu 15.04\nSummary: When using ini_file for modifying /etc/php5/fpm/pool.d/www.conf it will not recognize strings which include \"[\" or \"]\" in key and append it again to the file. Using Ansible 1.9.4 this works just as expected.\n\nExample: You want to set\n\n```\nphp_admin_value[sendmail_path]: \"/usr/sbin/sendmail -t -i -f [email protected]\"\n```\n\nusing this hash:\n\n``` yaml\nphp_admin_value[sendmail_path]:\n value: '\"/usr/sbin/sendmail -t -i -f [email protected]\"'\n```\n\nOn first run Ansible will append this string to the file just as expected. On second run it will append it again, even though it's already present. When removing the bracklets it will only be inserted once.\n\nRemoving the bracklets from value will work as expected (not changed one second run)\n\nini_file module: bracklets in key break idempotence\nIssue Type: Bug Report\nAnsible Version: ansible-playbook 2.0.0 (devel 5d645f3a8b) last updated 2015/11/13 13:27:13 (GMT +200)\nEnvironment: Ubuntu 15.04\nSummary: When using ini_file for modifying /etc/php5/fpm/pool.d/www.conf it will not recognize strings which include \"[\" or \"]\" in key and append it again to the file. Using Ansible 1.9.4 this works just as expected.\n\nExample: You want to set\n\n```\nphp_admin_value[sendmail_path]: \"/usr/sbin/sendmail -t -i -f [email protected]\"\n```\n\nusing this hash:\n\n``` yaml\nphp_admin_value[sendmail_path]:\n value: '\"/usr/sbin/sendmail -t -i -f [email protected]\"'\n```\n\nOn first run Ansible will append this string to the file just as expected. On second run it will append it again, even though it's already present. 
When removing the bracklets it will only be inserted once.\n\nRemoving the bracklets from value will work as expected (not changed one second run)\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2012, Jan-Piet Mens <jpmens () gmail.com>\n# (c) 2015, Ales Nosek <anosek.nosek () gmail.com>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n#\n\nDOCUMENTATION = '''\n---\nmodule: ini_file\nshort_description: Tweak settings in INI files\nextends_documentation_fragment: files\ndescription:\n - Manage (add, remove, change) individual settings in an INI-style file without having\n to manage the file as a whole with, say, M(template) or M(assemble). Adds missing\n sections if they don't exist.\n - Before version 2.0, comments are discarded when the source file is read, and therefore will not show up in the destination file.\nversion_added: \"0.9\"\noptions:\n dest:\n description:\n - Path to the INI-style file; this file is created if required\n required: true\n default: null\n section:\n description:\n - Section name in INI file. This is added if C(state=present) automatically when\n a single value is being set.\n required: true\n default: null\n option:\n description:\n - if set (required for changing a I(value)), this is the name of the option.\n - May be omitted if adding/removing a whole I(section).\n required: false\n default: null\n value:\n description:\n - the string value to be associated with an I(option). May be omitted when removing an I(option).\n required: false\n default: null\n backup:\n description:\n - Create a backup file including the timestamp information so you can get\n the original file back if you somehow clobbered it incorrectly.\n required: false\n default: \"no\"\n choices: [ \"yes\", \"no\" ]\n others:\n description:\n - all arguments accepted by the M(file) module also work here\n required: false\n state:\n description:\n - If set to C(absent) the option or section will be removed if present instead of created.\n required: false\n default: \"present\"\n choices: [ \"present\", \"absent\" ]\nnotes:\n - While it is possible to add an I(option) without specifying a I(value), this makes\n no sense.\n - A section named C(default) cannot be added by the module, but if it exists, individual\n options within the section can be updated. 
(This is a limitation of Python's I(ConfigParser).)\n Either use M(template) to create a base INI file with a C([default]) section, or use\n M(lineinfile) to add the missing line.\nrequirements: [ ConfigParser ]\nauthor:\n - \"Jan-Piet Mens (@jpmens)\"\n - \"Ales Nosek (@noseka1)\"\n'''\n\nEXAMPLES = '''\n# Ensure \"fav=lemonade is in section \"[drinks]\" in specified file\n- ini_file: dest=/etc/conf section=drinks option=fav value=lemonade mode=0600 backup=yes\n\n- ini_file: dest=/etc/anotherconf\n section=drinks\n option=temperature\n value=cold\n backup=yes\n'''\n\nimport ConfigParser\nimport sys\nimport os\n\n# ==============================================================\n# do_ini\n\ndef do_ini(module, filename, section=None, option=None, value=None, state='present', backup=False):\n\n\n if not os.path.exists(filename):\n try:\n open(filename,'w').close()\n except:\n module.fail_json(msg=\"Destination file %s not writable\" % filename)\n ini_file = open(filename, 'r')\n try:\n ini_lines = ini_file.readlines()\n # append a fake section line to simplify the logic\n ini_lines.append('[')\n finally:\n ini_file.close()\n\n within_section = not section\n section_start = 0\n changed = False\n\n for index, line in enumerate(ini_lines):\n if line.startswith('[%s]' % section):\n within_section = True\n section_start = index\n elif line.startswith('['):\n if within_section:\n if state == 'present':\n # insert missing option line at the end of the section\n ini_lines.insert(index, '%s = %s\\n' % (option, value))\n changed = True\n elif state == 'absent' and not option:\n # remove the entire section\n del ini_lines[section_start:index]\n changed = True\n break\n else:\n if within_section and option:\n if state == 'present':\n # change the existing option line\n if re.match('%s *=' % option, line) \\\n or re.match('# *%s *=' % option, line) \\\n or re.match('; *%s *=' % option, line):\n newline = '%s = %s\\n' % (option, value)\n changed = ini_lines[index] != newline\n ini_lines[index] = newline\n if changed:\n # remove all possible option occurences from the rest of the section\n index = index + 1\n while index < len(ini_lines):\n line = ini_lines[index]\n if line.startswith('['):\n break\n if re.match('%s *=' % option, line):\n del ini_lines[index]\n else:\n index = index + 1\n break\n else:\n # comment out the existing option line\n if re.match('%s *=' % option, line):\n ini_lines[index] = '#%s' % ini_lines[index]\n changed = True\n break\n\n # remove the fake section line\n del ini_lines[-1:]\n\n if not within_section and option and state == 'present':\n ini_lines.append('[%s]\\n' % section)\n ini_lines.append('%s = %s\\n' % (option, value))\n changed = True\n\n\n if changed and not module.check_mode:\n if backup:\n module.backup_local(filename)\n ini_file = open(filename, 'w')\n try:\n ini_file.writelines(ini_lines)\n finally:\n ini_file.close()\n\n return changed\n\n# ==============================================================\n# main\n\ndef main():\n\n module = AnsibleModule(\n argument_spec = dict(\n dest = dict(required=True),\n section = dict(required=True),\n option = dict(required=False),\n value = dict(required=False),\n backup = dict(default='no', type='bool'),\n state = dict(default='present', choices=['present', 'absent'])\n ),\n add_file_common_args = True,\n supports_check_mode = True\n )\n\n info = dict()\n\n dest = os.path.expanduser(module.params['dest'])\n section = module.params['section']\n option = module.params['option']\n value = module.params['value']\n state = 
module.params['state']\n backup = module.params['backup']\n\n changed = do_ini(module, dest, section, option, value, state, backup)\n\n file_args = module.load_file_common_arguments(module.params)\n changed = module.set_fs_attributes_if_different(file_args, changed)\n\n # Mission complete\n module.exit_json(dest=dest, changed=changed, msg=\"OK\")\n\n# import module snippets\nfrom ansible.module_utils.basic import *\nif __name__ == '__main__':\n main()\n", "path": "files/ini_file.py"}], "after_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2012, Jan-Piet Mens <jpmens () gmail.com>\n# (c) 2015, Ales Nosek <anosek.nosek () gmail.com>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n#\n\nDOCUMENTATION = '''\n---\nmodule: ini_file\nshort_description: Tweak settings in INI files\nextends_documentation_fragment: files\ndescription:\n - Manage (add, remove, change) individual settings in an INI-style file without having\n to manage the file as a whole with, say, M(template) or M(assemble). Adds missing\n sections if they don't exist.\n - Before version 2.0, comments are discarded when the source file is read, and therefore will not show up in the destination file.\nversion_added: \"0.9\"\noptions:\n dest:\n description:\n - Path to the INI-style file; this file is created if required\n required: true\n default: null\n section:\n description:\n - Section name in INI file. This is added if C(state=present) automatically when\n a single value is being set.\n required: true\n default: null\n option:\n description:\n - if set (required for changing a I(value)), this is the name of the option.\n - May be omitted if adding/removing a whole I(section).\n required: false\n default: null\n value:\n description:\n - the string value to be associated with an I(option). May be omitted when removing an I(option).\n required: false\n default: null\n backup:\n description:\n - Create a backup file including the timestamp information so you can get\n the original file back if you somehow clobbered it incorrectly.\n required: false\n default: \"no\"\n choices: [ \"yes\", \"no\" ]\n others:\n description:\n - all arguments accepted by the M(file) module also work here\n required: false\n state:\n description:\n - If set to C(absent) the option or section will be removed if present instead of created.\n required: false\n default: \"present\"\n choices: [ \"present\", \"absent\" ]\nnotes:\n - While it is possible to add an I(option) without specifying a I(value), this makes\n no sense.\n - A section named C(default) cannot be added by the module, but if it exists, individual\n options within the section can be updated. 
(This is a limitation of Python's I(ConfigParser).)\n Either use M(template) to create a base INI file with a C([default]) section, or use\n M(lineinfile) to add the missing line.\nrequirements: [ ConfigParser ]\nauthor:\n - \"Jan-Piet Mens (@jpmens)\"\n - \"Ales Nosek (@noseka1)\"\n'''\n\nEXAMPLES = '''\n# Ensure \"fav=lemonade is in section \"[drinks]\" in specified file\n- ini_file: dest=/etc/conf section=drinks option=fav value=lemonade mode=0600 backup=yes\n\n- ini_file: dest=/etc/anotherconf\n section=drinks\n option=temperature\n value=cold\n backup=yes\n'''\n\nimport ConfigParser\nimport sys\nimport os\n\n# ==============================================================\n# match_opt\n\ndef match_opt(option, line):\n option = re.escape(option)\n return re.match('%s *=' % option, line) \\\n or re.match('# *%s *=' % option, line) \\\n or re.match('; *%s *=' % option, line)\n\n# ==============================================================\n# match_active_opt\n\ndef match_active_opt(option, line):\n option = re.escape(option)\n return re.match('%s *=' % option, line)\n\n# ==============================================================\n# do_ini\n\ndef do_ini(module, filename, section=None, option=None, value=None, state='present', backup=False):\n\n\n if not os.path.exists(filename):\n try:\n open(filename,'w').close()\n except:\n module.fail_json(msg=\"Destination file %s not writable\" % filename)\n ini_file = open(filename, 'r')\n try:\n ini_lines = ini_file.readlines()\n # append a fake section line to simplify the logic\n ini_lines.append('[')\n finally:\n ini_file.close()\n\n within_section = not section\n section_start = 0\n changed = False\n\n for index, line in enumerate(ini_lines):\n if line.startswith('[%s]' % section):\n within_section = True\n section_start = index\n elif line.startswith('['):\n if within_section:\n if state == 'present':\n # insert missing option line at the end of the section\n ini_lines.insert(index, '%s = %s\\n' % (option, value))\n changed = True\n elif state == 'absent' and not option:\n # remove the entire section\n del ini_lines[section_start:index]\n changed = True\n break\n else:\n if within_section and option:\n if state == 'present':\n # change the existing option line\n if match_opt(option, line):\n newline = '%s = %s\\n' % (option, value)\n changed = ini_lines[index] != newline\n ini_lines[index] = newline\n if changed:\n # remove all possible option occurences from the rest of the section\n index = index + 1\n while index < len(ini_lines):\n line = ini_lines[index]\n if line.startswith('['):\n break\n if match_active_opt(option, line):\n del ini_lines[index]\n else:\n index = index + 1\n break\n else:\n # comment out the existing option line\n if match_active_opt(option, line):\n ini_lines[index] = '#%s' % ini_lines[index]\n changed = True\n break\n\n # remove the fake section line\n del ini_lines[-1:]\n\n if not within_section and option and state == 'present':\n ini_lines.append('[%s]\\n' % section)\n ini_lines.append('%s = %s\\n' % (option, value))\n changed = True\n\n\n if changed and not module.check_mode:\n if backup:\n module.backup_local(filename)\n ini_file = open(filename, 'w')\n try:\n ini_file.writelines(ini_lines)\n finally:\n ini_file.close()\n\n return changed\n\n# ==============================================================\n# main\n\ndef main():\n\n module = AnsibleModule(\n argument_spec = dict(\n dest = dict(required=True),\n section = dict(required=True),\n option = dict(required=False),\n value = dict(required=False),\n 
backup = dict(default='no', type='bool'),\n state = dict(default='present', choices=['present', 'absent'])\n ),\n add_file_common_args = True,\n supports_check_mode = True\n )\n\n info = dict()\n\n dest = os.path.expanduser(module.params['dest'])\n section = module.params['section']\n option = module.params['option']\n value = module.params['value']\n state = module.params['state']\n backup = module.params['backup']\n\n changed = do_ini(module, dest, section, option, value, state, backup)\n\n file_args = module.load_file_common_arguments(module.params)\n changed = module.set_fs_attributes_if_different(file_args, changed)\n\n # Mission complete\n module.exit_json(dest=dest, changed=changed, msg=\"OK\")\n\n# import module snippets\nfrom ansible.module_utils.basic import *\nif __name__ == '__main__':\n main()\n", "path": "files/ini_file.py"}]} | 3,145 | 435 |
gh_patches_debug_601 | rasdani/github-patches | git_diff | pex-tool__pex-1288 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.35
On the docket:
+ [x] Ensure venv pex does not enter a re-exec loop. #1286
+ [x] Improve resolve error information. #1287
+ [x] Expose Pex tools via a pex-tools console script. #1279
+ [x] Fix auto-created `--venv` core scripts. (#1278)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.34"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.34"
+__version__ = "2.1.35"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.34\"\n+__version__ = \"2.1.35\"\n", "issue": "Release 2.1.35\nOn the docket:\r\n+ [x] Ensure venv pex does not enter a re-exec loop. #1286\r\n+ [x] Improve resolve error information. #1287 \r\n+ [x] Expose Pex tools via a pex-tools console script. #1279\r\n+ [x] Fix auto-created `--venv` core scripts. (#1278)\r\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.34\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.35\"\n", "path": "pex/version.py"}]} | 406 | 96 |
gh_patches_debug_17947 | rasdani/github-patches | git_diff | conda__conda-build-690 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Entry point pattern doesn't match entrypoints:with.dots
Hi,
Could you please have a look at this. I define an entry point as
``` python
entry_points={
'console_scripts': [
'poultry = poultry.main:dispatcher.dispatch',
],
},
```
https://github.com/dimazest/poultry/blob/e0f39277f0a219a4d3cd461b69ce5dd4422fc9dd/setup.py#L62
which entry point pattern at https://github.com/conda/conda-build/blob/master/conda_build/scripts.py#L27 doesn't match.
Thanks.
--- END ISSUE ---
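The mismatch is easy to see in isolation. Below is a minimal sketch (assuming only Python's built-in `re` module) comparing the pattern currently at line 28 of `conda_build/scripts.py` with a variant whose function group also accepts dotted attribute paths:

```python
import re

# Pattern currently in conda_build/scripts.py: the function group (\w+) cannot contain dots.
old_pat = re.compile(r'\s*([\w\-\.]+)\s*=\s*([\w.]+):(\w+)\s*$')
# Widened variant: ([\w.]+) also accepts dotted attributes such as dispatcher.dispatch.
new_pat = re.compile(r'\s*([\w\-\.]+)\s*=\s*([\w.]+):([\w.]+)\s*$')

item = 'poultry = poultry.main:dispatcher.dispatch'
print(old_pat.match(item))           # None, so iter_entry_points() exits with an error
print(new_pat.match(item).groups())  # ('poultry', 'poultry.main', 'dispatcher.dispatch')
```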
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda_build/scripts.py`
Content:
```
1 '''
2 Module for creating entry points and scripts for PyPI packages.
3 '''
4
5 from __future__ import absolute_import, division, print_function
6
7 import re
8 import os
9 import sys
10 import shutil
11 from os.path import dirname, isdir, join
12
13 import conda.config as cc
14
15 from conda_build.config import config
16
17
18 PY_TMPL = """\
19 if __name__ == '__main__':
20 import sys
21 from %s import %s
22
23 sys.exit(%s())
24 """
25
26 bin_dirname = 'Scripts' if sys.platform == 'win32' else 'bin'
27
28 entry_pat = re.compile('\s*([\w\-\.]+)\s*=\s*([\w.]+):(\w+)\s*$')
29
30
31 def iter_entry_points(items):
32 for item in items:
33 m = entry_pat.match(item)
34 if m is None:
35 sys.exit("Error cound not match entry point: %r" % item)
36 yield m.groups()
37
38
39 def create_entry_point(path, module, func):
40 pyscript = PY_TMPL % (module, func, func)
41 if sys.platform == 'win32':
42 with open(path + '-script.py', 'w') as fo:
43 fo.write(pyscript)
44 shutil.copyfile(join(dirname(__file__), 'cli-%d.exe' % cc.bits),
45 path + '.exe')
46 else:
47 with open(path, 'w') as fo:
48 fo.write('#!%s\n' % config.build_python)
49 fo.write(pyscript)
50 os.chmod(path, int('755', 8))
51
52
53 def create_entry_points(items):
54 if not items:
55 return
56 bin_dir = join(config.build_prefix, bin_dirname)
57 if not isdir(bin_dir):
58 os.mkdir(bin_dir)
59 for cmd, module, func in iter_entry_points(items):
60 create_entry_point(join(bin_dir, cmd), module, func)
61
62
63 def prepend_bin_path(env, prefix, prepend_prefix=False):
64 env['PATH'] = join(prefix, bin_dirname) + os.pathsep + env['PATH']
65 if sys.platform == "win32":
66 env['PATH'] = join(prefix, "Library", "bin") + os.pathsep + env['PATH']
67 if prepend_prefix:
68 env['PATH'] = prefix + os.pathsep + env['PATH']
69 return env
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conda_build/scripts.py b/conda_build/scripts.py
--- a/conda_build/scripts.py
+++ b/conda_build/scripts.py
@@ -18,14 +18,14 @@
PY_TMPL = """\
if __name__ == '__main__':
import sys
- from %s import %s
+ import %(module)s
- sys.exit(%s())
+ sys.exit(%(module)s.%(func)s())
"""
bin_dirname = 'Scripts' if sys.platform == 'win32' else 'bin'
-entry_pat = re.compile('\s*([\w\-\.]+)\s*=\s*([\w.]+):(\w+)\s*$')
+entry_pat = re.compile('\s*([\w\-\.]+)\s*=\s*([\w.]+):([\w.]+)\s*$')
def iter_entry_points(items):
@@ -37,7 +37,7 @@
def create_entry_point(path, module, func):
- pyscript = PY_TMPL % (module, func, func)
+ pyscript = PY_TMPL % {'module': module, 'func': func}
if sys.platform == 'win32':
with open(path + '-script.py', 'w') as fo:
fo.write(pyscript)
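
Substituting the reporter's entry point into the reworked `PY_TMPL` above (`module='poultry.main'`, `func='dispatcher.dispatch'`) yields a wrapper along these lines; it is shown only to illustrate why a plain `import %(module)s` plus attribute access handles dotted functions:

```python
if __name__ == '__main__':
    import sys
    import poultry.main

    sys.exit(poultry.main.dispatcher.dispatch())
```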
| {"golden_diff": "diff --git a/conda_build/scripts.py b/conda_build/scripts.py\n--- a/conda_build/scripts.py\n+++ b/conda_build/scripts.py\n@@ -18,14 +18,14 @@\n PY_TMPL = \"\"\"\\\n if __name__ == '__main__':\n import sys\n- from %s import %s\n+ import %(module)s\n \n- sys.exit(%s())\n+ sys.exit(%(module)s.%(func)s())\n \"\"\"\n \n bin_dirname = 'Scripts' if sys.platform == 'win32' else 'bin'\n \n-entry_pat = re.compile('\\s*([\\w\\-\\.]+)\\s*=\\s*([\\w.]+):(\\w+)\\s*$')\n+entry_pat = re.compile('\\s*([\\w\\-\\.]+)\\s*=\\s*([\\w.]+):([\\w.]+)\\s*$')\n \n \n def iter_entry_points(items):\n@@ -37,7 +37,7 @@\n \n \n def create_entry_point(path, module, func):\n- pyscript = PY_TMPL % (module, func, func)\n+ pyscript = PY_TMPL % {'module': module, 'func': func}\n if sys.platform == 'win32':\n with open(path + '-script.py', 'w') as fo:\n fo.write(pyscript)\n", "issue": "Entry point pattern doesn't match entrypoints:with.dots\nHi,\n\nCould you please have a look to this. I define entry point as\n\n``` python\nentry_points={\n 'console_scripts': [\n 'poultry = poultry.main:dispatcher.dispatch',\n ],\n},\n```\n\nhttps://github.com/dimazest/poultry/blob/e0f39277f0a219a4d3cd461b69ce5dd4422fc9dd/setup.py#L62\n\nwhich entry point pattern at https://github.com/conda/conda-build/blob/master/conda_build/scripts.py#L27 doesn't match.\n\nThanks.\n\n", "before_files": [{"content": "'''\nModule for creating entry points and scripts for PyPI packages.\n'''\n\nfrom __future__ import absolute_import, division, print_function\n\nimport re\nimport os\nimport sys\nimport shutil\nfrom os.path import dirname, isdir, join\n\nimport conda.config as cc\n\nfrom conda_build.config import config\n\n\nPY_TMPL = \"\"\"\\\nif __name__ == '__main__':\n import sys\n from %s import %s\n\n sys.exit(%s())\n\"\"\"\n\nbin_dirname = 'Scripts' if sys.platform == 'win32' else 'bin'\n\nentry_pat = re.compile('\\s*([\\w\\-\\.]+)\\s*=\\s*([\\w.]+):(\\w+)\\s*$')\n\n\ndef iter_entry_points(items):\n for item in items:\n m = entry_pat.match(item)\n if m is None:\n sys.exit(\"Error cound not match entry point: %r\" % item)\n yield m.groups()\n\n\ndef create_entry_point(path, module, func):\n pyscript = PY_TMPL % (module, func, func)\n if sys.platform == 'win32':\n with open(path + '-script.py', 'w') as fo:\n fo.write(pyscript)\n shutil.copyfile(join(dirname(__file__), 'cli-%d.exe' % cc.bits),\n path + '.exe')\n else:\n with open(path, 'w') as fo:\n fo.write('#!%s\\n' % config.build_python)\n fo.write(pyscript)\n os.chmod(path, int('755', 8))\n\n\ndef create_entry_points(items):\n if not items:\n return\n bin_dir = join(config.build_prefix, bin_dirname)\n if not isdir(bin_dir):\n os.mkdir(bin_dir)\n for cmd, module, func in iter_entry_points(items):\n create_entry_point(join(bin_dir, cmd), module, func)\n\n\ndef prepend_bin_path(env, prefix, prepend_prefix=False):\n env['PATH'] = join(prefix, bin_dirname) + os.pathsep + env['PATH']\n if sys.platform == \"win32\":\n env['PATH'] = join(prefix, \"Library\", \"bin\") + os.pathsep + env['PATH']\n if prepend_prefix:\n env['PATH'] = prefix + os.pathsep + env['PATH']\n return env\n", "path": "conda_build/scripts.py"}], "after_files": [{"content": "'''\nModule for creating entry points and scripts for PyPI packages.\n'''\n\nfrom __future__ import absolute_import, division, print_function\n\nimport re\nimport os\nimport sys\nimport shutil\nfrom os.path import dirname, isdir, join\n\nimport conda.config as cc\n\nfrom conda_build.config import config\n\n\nPY_TMPL = \"\"\"\\\nif __name__ == '__main__':\n 
import sys\n import %(module)s\n\n sys.exit(%(module)s.%(func)s())\n\"\"\"\n\nbin_dirname = 'Scripts' if sys.platform == 'win32' else 'bin'\n\nentry_pat = re.compile('\\s*([\\w\\-\\.]+)\\s*=\\s*([\\w.]+):([\\w.]+)\\s*$')\n\n\ndef iter_entry_points(items):\n for item in items:\n m = entry_pat.match(item)\n if m is None:\n sys.exit(\"Error cound not match entry point: %r\" % item)\n yield m.groups()\n\n\ndef create_entry_point(path, module, func):\n pyscript = PY_TMPL % {'module': module, 'func': func}\n if sys.platform == 'win32':\n with open(path + '-script.py', 'w') as fo:\n fo.write(pyscript)\n shutil.copyfile(join(dirname(__file__), 'cli-%d.exe' % cc.bits),\n path + '.exe')\n else:\n with open(path, 'w') as fo:\n fo.write('#!%s\\n' % config.build_python)\n fo.write(pyscript)\n os.chmod(path, int('755', 8))\n\n\ndef create_entry_points(items):\n if not items:\n return\n bin_dir = join(config.build_prefix, bin_dirname)\n if not isdir(bin_dir):\n os.mkdir(bin_dir)\n for cmd, module, func in iter_entry_points(items):\n create_entry_point(join(bin_dir, cmd), module, func)\n\n\ndef prepend_bin_path(env, prefix, prepend_prefix=False):\n env['PATH'] = join(prefix, bin_dirname) + os.pathsep + env['PATH']\n if sys.platform == \"win32\":\n env['PATH'] = join(prefix, \"Library\", \"bin\") + os.pathsep + env['PATH']\n if prepend_prefix:\n env['PATH'] = prefix + os.pathsep + env['PATH']\n return env\n", "path": "conda_build/scripts.py"}]} | 1,039 | 283 |
gh_patches_debug_22050 | rasdani/github-patches | git_diff | kartoza__prj.app-301 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
In the pending approval menu, only show menu items that have pending approvals
The **Pending approval** menu should be filtered to show only items in the active project where there are pending approvals.
--- END ISSUE ---
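One way to support that filtering is to expose per-project booleans in the navigation context. A minimal sketch of the check follows; the model imports and `unapproved_objects` managers are taken from the patch further down, and `.exists()` keeps each flag to a single cheap EXISTS query:

```python
from base.models import Version
from changes.models import Category, SponsorshipLevel, SponsorshipPeriod


def pending_approval_flags(project):
    """Booleans the navigation template can use to hide empty 'Pending approval' items."""
    return {
        'has_pending_versions':
            Version.unapproved_objects.filter(project=project).exists(),
        'has_pending_categories':
            Category.unapproved_objects.filter(project=project).exists(),
        'has_pending_sponsor_lvl':
            SponsorshipLevel.unapproved_objects.filter(project=project).exists(),
        'has_pending_sponsor_period':
            SponsorshipPeriod.unapproved_objects.filter(project=project).exists(),
    }
```

The navigation template can then skip any "Pending approval" entry whose flag is False.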
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django_project/core/custom_middleware.py`
Content:
```
1 # coding=utf-8
2 # flake8: noqa
3 """
4 core.custom_middleware
5 """
6 from base.models import Project
7
8
9 class NavContextMiddleware(object):
10 """
11 Adds the required navigation variables to each response
12 """
13
14 def __init__(self):
15 pass
16
17 @staticmethod
18 def process_template_response(request, response):
19 """
20 Add 'the_project', 'the_entry', 'the_version' to context for the
21 navigation.
22
23 Justification: To make the navigation functional, we need to know
24 which Project (or Version, Committee etc) the current context
25 relates to. This is required for URLs. Rather than include lots of
26 if/else in the navigation template, it seems cleaner to add the
27 above variables to the context here.
28
29 :param request: Http Request obj
30 :param response: Http Response obj
31 :return: context :rtype: dict
32 """
33 context = response.context_data
34
35 if context.get('project', None):
36 context['the_project'] = context.get('project')
37 else:
38 if request.user.is_staff:
39 context['the_projects'] = Project.objects.all()
40 else:
41 context['the_projects'] = Project.approved_objects.filter(
42 private=False
43 )
44
45 if context.get('version', None):
46 context['the_version'] = context.get('version')
47 context['the_project'] = context.get('version').project
48
49 if context.get('committee', None):
50 context['the_committee'] = context.get('committee')
51 context['the_project'] = context.get('committee').project
52
53 if context.get('ballot', None):
54 context['the_committee'] = context.get('ballot').committee
55 context['the_project'] = context.get('ballot').committee.project
56
57 if context.get('category', None):
58 context['the_project'] = context.get('category').project
59
60 if context.get('ballots', None):
61 try:
62 context['the_project'] = \
63 context.get('ballots')[0].committee.project
64 except (KeyError, IndexError):
65 pass
66
67 if context.get('entry', None):
68 context['the_entry'] = context.get('entry')
69 context['the_version'] = context.get('entry').version
70 context['the_project'] = context.get('entry').version.project
71
72 if context.get('committees', None):
73 try:
74 context['the_project'] = context.get('committees')[0].project
75 except (KeyError, IndexError):
76 pass
77
78 if context.get('versions', None):
79 try:
80 context['the_project'] = context.get('versions')[0].project
81 except (KeyError, IndexError):
82 pass
83
84 if context.get('entries', None):
85 try:
86 context['the_version'] = context.get('entries')[0].version
87 context['the_project'] = \
88 context.get('entries')[0].version.project
89 except (KeyError, IndexError):
90 pass
91
92 if context.get('categories', None):
93 try:
94 context['the_project'] = \
95 context.get('categories')[0].project
96 except (KeyError, IndexError):
97 pass
98
99 return response
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django_project/core/custom_middleware.py b/django_project/core/custom_middleware.py
--- a/django_project/core/custom_middleware.py
+++ b/django_project/core/custom_middleware.py
@@ -3,7 +3,8 @@
"""
core.custom_middleware
"""
-from base.models import Project
+from base.models import Project, Version
+from changes.models import Category, SponsorshipLevel, SponsorshipPeriod
class NavContextMiddleware(object):
@@ -34,6 +35,15 @@
if context.get('project', None):
context['the_project'] = context.get('project')
+ context['has_pending_versions'] = Version.unapproved_objects.filter(
+ project=context.get('project')).exists()
+ context['has_pending_categories'] = Category.unapproved_objects.filter(
+ project=context.get('project')).exists()
+ context['has_pending_sponsor_lvl'] = SponsorshipLevel.unapproved_objects.filter(
+ project=context.get('project')).exists()
+ context['has_pending_sponsor_period'] = SponsorshipPeriod.unapproved_objects.filter(
+ project=context.get('project')).exists()
+
else:
if request.user.is_staff:
context['the_projects'] = Project.objects.all()
| {"golden_diff": "diff --git a/django_project/core/custom_middleware.py b/django_project/core/custom_middleware.py\n--- a/django_project/core/custom_middleware.py\n+++ b/django_project/core/custom_middleware.py\n@@ -3,7 +3,8 @@\n \"\"\"\n core.custom_middleware\n \"\"\"\n-from base.models import Project\n+from base.models import Project, Version\n+from changes.models import Category, SponsorshipLevel, SponsorshipPeriod\n \n \n class NavContextMiddleware(object):\n@@ -34,6 +35,15 @@\n \n if context.get('project', None):\n context['the_project'] = context.get('project')\n+ context['has_pending_versions'] = Version.unapproved_objects.filter(\n+ project=context.get('project')).exists()\n+ context['has_pending_categories'] = Category.unapproved_objects.filter(\n+ project=context.get('project')).exists()\n+ context['has_pending_sponsor_lvl'] = SponsorshipLevel.unapproved_objects.filter(\n+ project=context.get('project')).exists()\n+ context['has_pending_sponsor_period'] = SponsorshipPeriod.unapproved_objects.filter(\n+ project=context.get('project')).exists()\n+\n else:\n if request.user.is_staff:\n context['the_projects'] = Project.objects.all()\n", "issue": "In the pending approval menu, only show menu items that have pending approvals\nThe **Pending approval** menu should be filtered to show only items in the active project where there are pending approvals.\n\n", "before_files": [{"content": "# coding=utf-8\n# flake8: noqa\n\"\"\"\ncore.custom_middleware\n\"\"\"\nfrom base.models import Project\n\n\nclass NavContextMiddleware(object):\n \"\"\"\n Adds the required navigation variables to each response\n \"\"\"\n\n def __init__(self):\n pass\n\n @staticmethod\n def process_template_response(request, response):\n \"\"\"\n Add 'the_project', 'the_entry', 'the_version' to context for the\n navigation.\n\n Justification: To make the navigation functional, we need to know\n which Project (or Version, Committee etc) the current context\n relates to. This is required for URLs. 
Rather than include lots of\n if/else in the navigation template, it seems cleaner to add the\n above variables to the context here.\n\n :param request: Http Request obj\n :param response: Http Response obj\n :return: context :rtype: dict\n \"\"\"\n context = response.context_data\n\n if context.get('project', None):\n context['the_project'] = context.get('project')\n else:\n if request.user.is_staff:\n context['the_projects'] = Project.objects.all()\n else:\n context['the_projects'] = Project.approved_objects.filter(\n private=False\n )\n\n if context.get('version', None):\n context['the_version'] = context.get('version')\n context['the_project'] = context.get('version').project\n\n if context.get('committee', None):\n context['the_committee'] = context.get('committee')\n context['the_project'] = context.get('committee').project\n\n if context.get('ballot', None):\n context['the_committee'] = context.get('ballot').committee\n context['the_project'] = context.get('ballot').committee.project\n\n if context.get('category', None):\n context['the_project'] = context.get('category').project\n\n if context.get('ballots', None):\n try:\n context['the_project'] = \\\n context.get('ballots')[0].committee.project\n except (KeyError, IndexError):\n pass\n\n if context.get('entry', None):\n context['the_entry'] = context.get('entry')\n context['the_version'] = context.get('entry').version\n context['the_project'] = context.get('entry').version.project\n\n if context.get('committees', None):\n try:\n context['the_project'] = context.get('committees')[0].project\n except (KeyError, IndexError):\n pass\n\n if context.get('versions', None):\n try:\n context['the_project'] = context.get('versions')[0].project\n except (KeyError, IndexError):\n pass\n\n if context.get('entries', None):\n try:\n context['the_version'] = context.get('entries')[0].version\n context['the_project'] = \\\n context.get('entries')[0].version.project\n except (KeyError, IndexError):\n pass\n\n if context.get('categories', None):\n try:\n context['the_project'] = \\\n context.get('categories')[0].project\n except (KeyError, IndexError):\n pass\n\n return response\n", "path": "django_project/core/custom_middleware.py"}], "after_files": [{"content": "# coding=utf-8\n# flake8: noqa\n\"\"\"\ncore.custom_middleware\n\"\"\"\nfrom base.models import Project, Version\nfrom changes.models import Category, SponsorshipLevel, SponsorshipPeriod\n\n\nclass NavContextMiddleware(object):\n \"\"\"\n Adds the required navigation variables to each response\n \"\"\"\n\n def __init__(self):\n pass\n\n @staticmethod\n def process_template_response(request, response):\n \"\"\"\n Add 'the_project', 'the_entry', 'the_version' to context for the\n navigation.\n\n Justification: To make the navigation functional, we need to know\n which Project (or Version, Committee etc) the current context\n relates to. This is required for URLs. 
Rather than include lots of\n if/else in the navigation template, it seems cleaner to add the\n above variables to the context here.\n\n :param request: Http Request obj\n :param response: Http Response obj\n :return: context :rtype: dict\n \"\"\"\n context = response.context_data\n\n if context.get('project', None):\n context['the_project'] = context.get('project')\n context['has_pending_versions'] = Version.unapproved_objects.filter(\n project=context.get('project')).exists()\n context['has_pending_categories'] = Category.unapproved_objects.filter(\n project=context.get('project')).exists()\n context['has_pending_sponsor_lvl'] = SponsorshipLevel.unapproved_objects.filter(\n project=context.get('project')).exists()\n context['has_pending_sponsor_period'] = SponsorshipPeriod.unapproved_objects.filter(\n project=context.get('project')).exists()\n\n else:\n if request.user.is_staff:\n context['the_projects'] = Project.objects.all()\n else:\n context['the_projects'] = Project.approved_objects.filter(\n private=False\n )\n\n if context.get('version', None):\n context['the_version'] = context.get('version')\n context['the_project'] = context.get('version').project\n\n if context.get('committee', None):\n context['the_committee'] = context.get('committee')\n context['the_project'] = context.get('committee').project\n\n if context.get('ballot', None):\n context['the_committee'] = context.get('ballot').committee\n context['the_project'] = context.get('ballot').committee.project\n\n if context.get('category', None):\n context['the_project'] = context.get('category').project\n\n if context.get('ballots', None):\n try:\n context['the_project'] = \\\n context.get('ballots')[0].committee.project\n except (KeyError, IndexError):\n pass\n\n if context.get('entry', None):\n context['the_entry'] = context.get('entry')\n context['the_version'] = context.get('entry').version\n context['the_project'] = context.get('entry').version.project\n\n if context.get('committees', None):\n try:\n context['the_project'] = context.get('committees')[0].project\n except (KeyError, IndexError):\n pass\n\n if context.get('versions', None):\n try:\n context['the_project'] = context.get('versions')[0].project\n except (KeyError, IndexError):\n pass\n\n if context.get('entries', None):\n try:\n context['the_version'] = context.get('entries')[0].version\n context['the_project'] = \\\n context.get('entries')[0].version.project\n except (KeyError, IndexError):\n pass\n\n if context.get('categories', None):\n try:\n context['the_project'] = \\\n context.get('categories')[0].project\n except (KeyError, IndexError):\n pass\n\n return response\n", "path": "django_project/core/custom_middleware.py"}]} | 1,186 | 263 |
gh_patches_debug_18 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-2012 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Title of the Colombia page should be "Colombia Country Page" or alike
Right now the title of the [Colombia country page](https://data.hdx.rwlabs.org/group/col) is "Colombia crisis page". I think it should read "Colombia Country Page" or similar. Any ideas?

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext-hdx_theme/ckanext/hdx_theme/version.py`
Content:
```
1 hdx_version = 'v0.5.10'
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py
+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
@@ -1 +1 @@
-hdx_version = 'v0.5.10'
+hdx_version = 'v0.5.11'
| {"golden_diff": "diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n@@ -1 +1 @@\n-hdx_version = 'v0.5.10'\n+hdx_version = 'v0.5.11'\n", "issue": "Title of the Colombia page should be \"Colombia Country Page\" or alike\nRight now the title of the [Colombia country page](https://data.hdx.rwlabs.org/group/col) is \"Colombia crisis page\". I think it should read \"Colombia Country Page\" or similar. Any ideas? \n\n\n\n", "before_files": [{"content": "hdx_version = 'v0.5.10'\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}], "after_files": [{"content": "hdx_version = 'v0.5.11'\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}]} | 437 | 108 |
gh_patches_debug_19637 | rasdani/github-patches | git_diff | iterative__dvc-4390 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Issues with dvc get and dvc get-url after 1.0
Hi,
I've been excited to explore DVC as I think it can offer a lot for some of the things I'm working on. I've had a couple of issues with `dvc get` and `dvc get-url` that I *think* are probably bugs. I've put them together to avoid creating lots of issues; apologies if you'd rather they were separate, I can move them into different threads.
I am testing as of v1.1.1.
## dvc get does not recreate directories correctly
After v1.0, `dvc get` pointed at a directory does not result in correctly placed files. All of the files are placed in the root, rather than in subdirectories. I've only tested this with a git repository that is not a DVC repository. It worked in v0.94, but does not work correctly in v1.0 or v1.1. `dvc import` works correctly.
The following example gets [this directory](https://github.com/explosion/projects/tree/master/nel-emerson/scripts):
```
$ dvc get https://github.com/explosion/projects nel-emerson
$ tree nel-emerson/
nel-emerson/
├── README.md
├── __init__.py
├── el_recipe.py
├── el_tutorial.py
├── emerson_annotated_text.jsonl
├── emerson_input_text.txt
├── entities.csv
├── nel_schema.png
├── notebook_video.ipynb
└── requirements.txt
0 directories, 10 files
```
And here's `dvc import`, which works correctly:
```
$ dvc import https://github.com/explosion/projects nel-emerson
$ tree nel-emerson/
nel-emerson/
├── README.md
├── __init__.py
├── input
│ └── entities.csv
├── prodigy
│ ├── emerson_annotated_text.jsonl
│ └── emerson_input_text.txt
├── requirements.txt
└── scripts
├── el_recipe.py
├── el_tutorial.py
├── nel_schema.png
└── notebook_video.ipynb
3 directories, 10 files
```
## dvc get-url fails on URLs with weak ETAGs
I understand why `import-url` needs the strong ETag, but I don't think it makes sense for `get-url`, so I suspect this is a bug. If so it's a regression after v1.0, as it didn't happen in v0.94.
```
$ dvc get-url https://raw.githubusercontent.com/explosion/projects/master/nel-emerson/requirements.txt
ERROR: failed to get 'https://raw.githubusercontent.com/explosion/projects/master/nel-emerson/requirements.txt' - Weak ETags are not supported. (Etag: 'W/"24c3b7f31292b04ac8fa9938c6e14baed94a089944652e8abf7dbb9459ae5b56"', URL: 'https://raw.githubusercontent.com/explosion/projects/master/nel-emerson/requirements.txt')
```
## dvc get-url does not follow redirects
This is maybe more of a feature request than a bug report, although if it's expected behaviour, I think at least the error message needs to be different. I also think it's an important feature, as it's pretty common to have an https URL sitting in front of an AWS bucket or other file storage, as direct storage URLs don't allow much flexibility for moving data.
There's also a specific, very common scenario where this comes up: binary data such as trained models are often attached to Github releases as release artefacts. These artefacts are addressed as URLs that redirect to S3. For instance:
```
$ dvc get-url https://github.com/explosion/spacy-models/releases/download/ro_core_news_sm-2.3.1/ro_core_news_sm-2.3.1.tar.gz
ERROR: failed to get 'https://github.com/explosion/spacy-models/releases/download/ro_core_news_sm-2.3.1/ro_core_news_sm-2.3.1.tar.gz' - dependency 'https://github.com/explosion/spacy-models/releases/download/ro_core_news_sm-2.3.1/ro_core_news_sm-2.3.1.tar.gz' does not exist
```
While `wget` works fine:
```
wget https://github.com/explosion/spacy-models/releases/download/ro_core_news_sm-2.3.1/ro_core_news_sm-2.3.1.tar.gz
--2020-06-29 19:29:31-- https://github.com/explosion/spacy-models/releases/download/ro_core_news_sm-2.3.1/ro_core_news_sm-2.3.1.tar.gz
Resolving github.com (github.com)... 140.82.118.4
Connecting to github.com (github.com)|140.82.118.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
...
Saving to: ‘ro_core_news_sm-2.3.1.tar.gz
```
--- END ISSUE ---
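On the weak-ETag point, the rejected validator is simply an RFC 7232 weak ETag. The sketch below uses the header value from the error message; the `W/` prefix test is illustrative rather than DVC's actual check:

```python
def is_weak_etag(etag):
    # RFC 7232: weak validators carry a W/ prefix and only promise semantic
    # equivalence, not the byte-for-byte identity a content hash needs.
    return etag is not None and etag.startswith('W/')


etag = 'W/"24c3b7f31292b04ac8fa9938c6e14baed94a089944652e8abf7dbb9459ae5b56"'
print(is_weak_etag(etag))  # True, hence "Weak ETags are not supported."
```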
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/tree/http.py`
Content:
```
1 import logging
2 import os.path
3 import threading
4
5 from funcy import cached_property, memoize, wrap_prop, wrap_with
6
7 import dvc.prompt as prompt
8 from dvc.exceptions import DvcException, HTTPError
9 from dvc.path_info import HTTPURLInfo
10 from dvc.progress import Tqdm
11 from dvc.scheme import Schemes
12
13 from .base import BaseTree
14
15 logger = logging.getLogger(__name__)
16
17
18 @wrap_with(threading.Lock())
19 @memoize
20 def ask_password(host, user):
21 return prompt.password(
22 "Enter a password for "
23 "host '{host}' user '{user}'".format(host=host, user=user)
24 )
25
26
27 class HTTPTree(BaseTree): # pylint:disable=abstract-method
28 scheme = Schemes.HTTP
29 PATH_CLS = HTTPURLInfo
30 PARAM_CHECKSUM = "etag"
31 CAN_TRAVERSE = False
32
33 SESSION_RETRIES = 5
34 SESSION_BACKOFF_FACTOR = 0.1
35 REQUEST_TIMEOUT = 10
36 CHUNK_SIZE = 2 ** 16
37
38 def __init__(self, repo, config):
39 super().__init__(repo, config)
40
41 url = config.get("url")
42 if url:
43 self.path_info = self.PATH_CLS(url)
44 user = config.get("user", None)
45 if user:
46 self.path_info.user = user
47 else:
48 self.path_info = None
49
50 self.auth = config.get("auth", None)
51 self.custom_auth_header = config.get("custom_auth_header", None)
52 self.password = config.get("password", None)
53 self.ask_password = config.get("ask_password", False)
54 self.headers = {}
55
56 def _auth_method(self, path_info=None):
57 from requests.auth import HTTPBasicAuth, HTTPDigestAuth
58
59 if path_info is None:
60 path_info = self.path_info
61
62 if self.auth:
63 if self.ask_password and self.password is None:
64 host, user = path_info.host, path_info.user
65 self.password = ask_password(host, user)
66 if self.auth == "basic":
67 return HTTPBasicAuth(path_info.user, self.password)
68 if self.auth == "digest":
69 return HTTPDigestAuth(path_info.user, self.password)
70 if self.auth == "custom" and self.custom_auth_header:
71 self.headers.update({self.custom_auth_header: self.password})
72 return None
73
74 @wrap_prop(threading.Lock())
75 @cached_property
76 def _session(self):
77 import requests
78 from requests.adapters import HTTPAdapter
79 from urllib3.util.retry import Retry
80
81 session = requests.Session()
82
83 retries = Retry(
84 total=self.SESSION_RETRIES,
85 backoff_factor=self.SESSION_BACKOFF_FACTOR,
86 )
87
88 session.mount("http://", HTTPAdapter(max_retries=retries))
89 session.mount("https://", HTTPAdapter(max_retries=retries))
90
91 return session
92
93 def request(self, method, url, **kwargs):
94 import requests
95
96 kwargs.setdefault("allow_redirects", True)
97 kwargs.setdefault("timeout", self.REQUEST_TIMEOUT)
98
99 try:
100 res = self._session.request(
101 method,
102 url,
103 auth=self._auth_method(),
104 headers=self.headers,
105 **kwargs,
106 )
107
108 redirect_no_location = (
109 kwargs["allow_redirects"]
110 and res.status_code in (301, 302)
111 and "location" not in res.headers
112 )
113
114 if redirect_no_location:
115 # AWS s3 doesn't like to add a location header to its redirects
116 # from https://s3.amazonaws.com/<bucket name>/* type URLs.
117 # This should be treated as an error
118 raise requests.exceptions.RequestException
119
120 return res
121
122 except requests.exceptions.RequestException:
123 raise DvcException(f"could not perform a {method} request")
124
125 def exists(self, path_info, use_dvcignore=True):
126 return bool(self.request("HEAD", path_info.url))
127
128 def get_file_hash(self, path_info):
129 url = path_info.url
130 headers = self.request("HEAD", url).headers
131 etag = headers.get("ETag") or headers.get("Content-MD5")
132
133 if not etag:
134 raise DvcException(
135 "could not find an ETag or "
136 "Content-MD5 header for '{url}'".format(url=url)
137 )
138
139 return etag
140
141 def _download(self, from_info, to_file, name=None, no_progress_bar=False):
142 response = self.request("GET", from_info.url, stream=True)
143 if response.status_code != 200:
144 raise HTTPError(response.status_code, response.reason)
145 with open(to_file, "wb") as fd:
146 with Tqdm.wrapattr(
147 fd,
148 "write",
149 total=None
150 if no_progress_bar
151 else self._content_length(response),
152 leave=False,
153 desc=from_info.url if name is None else name,
154 disable=no_progress_bar,
155 ) as fd_wrapped:
156 for chunk in response.iter_content(chunk_size=self.CHUNK_SIZE):
157 fd_wrapped.write(chunk)
158
159 def _upload(self, from_file, to_info, name=None, no_progress_bar=False):
160 def chunks():
161 with open(from_file, "rb") as fd:
162 with Tqdm.wrapattr(
163 fd,
164 "read",
165 total=None
166 if no_progress_bar
167 else os.path.getsize(from_file),
168 leave=False,
169 desc=to_info.url if name is None else name,
170 disable=no_progress_bar,
171 ) as fd_wrapped:
172 while True:
173 chunk = fd_wrapped.read(self.CHUNK_SIZE)
174 if not chunk:
175 break
176 yield chunk
177
178 response = self.request("POST", to_info.url, data=chunks())
179 if response.status_code not in (200, 201):
180 raise HTTPError(response.status_code, response.reason)
181
182 @staticmethod
183 def _content_length(response):
184 res = response.headers.get("Content-Length")
185 return int(res) if res else None
186
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dvc/tree/http.py b/dvc/tree/http.py
--- a/dvc/tree/http.py
+++ b/dvc/tree/http.py
@@ -122,12 +122,27 @@
except requests.exceptions.RequestException:
raise DvcException(f"could not perform a {method} request")
+ def _head(self, url):
+ response = self.request("HEAD", url)
+ if response.ok:
+ return response
+
+ # Sometimes servers are configured to forbid HEAD requests
+ # Context: https://github.com/iterative/dvc/issues/4131
+ with self.request("GET", url, stream=True) as r:
+ if r.ok:
+ return r
+
+ return response
+
def exists(self, path_info, use_dvcignore=True):
- return bool(self.request("HEAD", path_info.url))
+ return bool(self._head(path_info.url))
def get_file_hash(self, path_info):
url = path_info.url
- headers = self.request("HEAD", url).headers
+
+ headers = self._head(url).headers
+
etag = headers.get("ETag") or headers.get("Content-MD5")
if not etag:
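
Taken on its own, the probing pattern the patch introduces looks roughly like this. It is a sketch written directly against `requests`, with the tree's auth, custom headers and retry session omitted:

```python
import requests


def probe(url, timeout=10):
    """HEAD first, then fall back to a streamed GET for servers that forbid HEAD."""
    response = requests.head(url, allow_redirects=True, timeout=timeout)
    if response.ok:
        return response
    with requests.get(url, stream=True, allow_redirects=True, timeout=timeout) as r:
        if r.ok:
            return r
    return response


# exists() then boils down to probe(url).ok, and get_file_hash() reads probe(url).headers.
```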
| {"golden_diff": "diff --git a/dvc/tree/http.py b/dvc/tree/http.py\n--- a/dvc/tree/http.py\n+++ b/dvc/tree/http.py\n@@ -122,12 +122,27 @@\n except requests.exceptions.RequestException:\n raise DvcException(f\"could not perform a {method} request\")\n \n+ def _head(self, url):\n+ response = self.request(\"HEAD\", url)\n+ if response.ok:\n+ return response\n+\n+ # Sometimes servers are configured to forbid HEAD requests\n+ # Context: https://github.com/iterative/dvc/issues/4131\n+ with self.request(\"GET\", url, stream=True) as r:\n+ if r.ok:\n+ return r\n+\n+ return response\n+\n def exists(self, path_info, use_dvcignore=True):\n- return bool(self.request(\"HEAD\", path_info.url))\n+ return bool(self._head(path_info.url))\n \n def get_file_hash(self, path_info):\n url = path_info.url\n- headers = self.request(\"HEAD\", url).headers\n+\n+ headers = self._head(url).headers\n+\n etag = headers.get(\"ETag\") or headers.get(\"Content-MD5\")\n \n if not etag:\n", "issue": "Issues with dvc get and dvc get-url after 1.0\nHi,\r\n\r\nI've been excited to explore DVC as I think it can offer a lot for some of the things I'm working on. I've had a couple of issues with `dvc get` and `dvc get-url` that I *think* are probably bugs. I've put them together to avoid creating lots of issues; apologies if you'd rather they were separate, I can move them into different threads.\r\n\r\nI am testing as of v1.1.1.\r\n\r\n## dvc get does not recreate directories correctly\r\n\r\nAfter v1.0 `dvc get` pointed to a directory does not result in correctly placed files. All of the files are placed in the root, rathe than subdirectories. I've only tested this with a git repository that is not a DVC repository. It worked in v0.94, but does not work correctly in v1.0 or v1.1. `dvc import` works correctly.\r\n\r\nThe following example gets [this directory](https://github.com/explosion/projects/tree/master/nel-emerson/scripts):\r\n\r\n```\r\n$ dvc get https://github.com/explosion/projects nel-emerson\r\n$ tree nel-emerson/\r\nnel-emerson/\r\n\u251c\u2500\u2500 README.md\r\n\u251c\u2500\u2500 __init__.py\r\n\u251c\u2500\u2500 el_recipe.py\r\n\u251c\u2500\u2500 el_tutorial.py\r\n\u251c\u2500\u2500 emerson_annotated_text.jsonl\r\n\u251c\u2500\u2500 emerson_input_text.txt\r\n\u251c\u2500\u2500 entities.csv\r\n\u251c\u2500\u2500 nel_schema.png\r\n\u251c\u2500\u2500 notebook_video.ipynb\r\n\u2514\u2500\u2500 requirements.txt\r\n0 directories, 10 files\r\n```\r\n\r\nAnd here's `dvc import`, which works correctly:\r\n\r\n```\r\n$ dvc import https://github.com/explosion/projects nel-emerson\r\n$ tree nel-emerson/\r\nnel-emerson/\r\n\u251c\u2500\u2500 README.md\r\n\u251c\u2500\u2500 __init__.py\r\n\u251c\u2500\u2500 input\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 entities.csv\r\n\u251c\u2500\u2500 prodigy\r\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 emerson_annotated_text.jsonl\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 emerson_input_text.txt\r\n\u251c\u2500\u2500 requirements.txt\r\n\u2514\u2500\u2500 scripts\r\n \u251c\u2500\u2500 el_recipe.py\r\n \u251c\u2500\u2500 el_tutorial.py\r\n \u251c\u2500\u2500 nel_schema.png\r\n \u2514\u2500\u2500 notebook_video.ipynb\r\n3 directories, 10 files\r\n```\r\n\r\n## dvc get-url fails on URLs with weak ETAGs\r\n\r\nI understand why `import-url` needs the strong ETag, but I don't think it makes sense for `get-url`, so I suspect this is a bug. 
If so it's a regression after v1.0, as it didn't happen in v0.94.\r\n\r\n```\r\n$ dvc get-url https://raw.githubusercontent.com/explosion/projects/master/nel-emerson/requirements.txt\r\nERROR: failed to get 'https://raw.githubusercontent.com/explosion/projects/master/nel-emerson/requirements.txt' - Weak ETags are not supported. (Etag: 'W/\"24c3b7f31292b04ac8fa9938c6e14baed94a089944652e8abf7dbb9459ae5b56\"', URL: 'https://raw.githubusercontent.com/explosion/projects/master/nel-emerson/requirements.txt')\r\n```\r\n\r\n## dvc get-url does not follow redirects\r\n\r\nThis is maybe more of a feature request than a bug report, although if it's expected behaviour, I think at least the error message needs to be different. I also think it's an important feature, as it's pretty common to have an https URL sitting in front of an AWS bucket or other file storage, as direct storage URLs don't allow much flexibility for moving data.\r\n\r\nThere's also a specific, very common scenario where this comes up: binary data such as trained models are often attached to Github releases as release artefacts. These artefacts are addressed as URLs that redirect to S3. For instance:\r\n\r\n```\r\n$ dvc get-url https://github.com/explosion/spacy-models/releases/download/ro_core_news_sm-2.3.1/ro_core_news_sm-2.3.1.tar.gz\r\nERROR: failed to get 'https://github.com/explosion/spacy-models/releases/download/ro_core_news_sm-2.3.1/ro_core_news_sm-2.3.1.tar.gz' - dependency 'https://github.com/explosion/spacy-models/releases/download/ro_core_news_sm-2.3.1/ro_core_news_sm-2.3.1.tar.gz' does not exist\r\n```\r\n\r\nWhile `wget` works fine:\r\n\r\n```\r\nwget https://github.com/explosion/spacy-models/releases/download/ro_core_news_sm-2.3.1/ro_core_news_sm-2.3.1.tar.gz\r\n--2020-06-29 19:29:31-- https://github.com/explosion/spacy-models/releases/download/ro_core_news_sm-2.3.1/ro_core_news_sm-2.3.1.tar.gz\r\nResolving github.com (github.com)... 140.82.118.4\r\nConnecting to github.com (github.com)|140.82.118.4|:443... connected.\r\nHTTP request sent, awaiting response... 
302 Found\r\n...\r\nSaving to: \u2018ro_core_news_sm-2.3.1.tar.gz\r\n```\r\n\r\n\n", "before_files": [{"content": "import logging\nimport os.path\nimport threading\n\nfrom funcy import cached_property, memoize, wrap_prop, wrap_with\n\nimport dvc.prompt as prompt\nfrom dvc.exceptions import DvcException, HTTPError\nfrom dvc.path_info import HTTPURLInfo\nfrom dvc.progress import Tqdm\nfrom dvc.scheme import Schemes\n\nfrom .base import BaseTree\n\nlogger = logging.getLogger(__name__)\n\n\n@wrap_with(threading.Lock())\n@memoize\ndef ask_password(host, user):\n return prompt.password(\n \"Enter a password for \"\n \"host '{host}' user '{user}'\".format(host=host, user=user)\n )\n\n\nclass HTTPTree(BaseTree): # pylint:disable=abstract-method\n scheme = Schemes.HTTP\n PATH_CLS = HTTPURLInfo\n PARAM_CHECKSUM = \"etag\"\n CAN_TRAVERSE = False\n\n SESSION_RETRIES = 5\n SESSION_BACKOFF_FACTOR = 0.1\n REQUEST_TIMEOUT = 10\n CHUNK_SIZE = 2 ** 16\n\n def __init__(self, repo, config):\n super().__init__(repo, config)\n\n url = config.get(\"url\")\n if url:\n self.path_info = self.PATH_CLS(url)\n user = config.get(\"user\", None)\n if user:\n self.path_info.user = user\n else:\n self.path_info = None\n\n self.auth = config.get(\"auth\", None)\n self.custom_auth_header = config.get(\"custom_auth_header\", None)\n self.password = config.get(\"password\", None)\n self.ask_password = config.get(\"ask_password\", False)\n self.headers = {}\n\n def _auth_method(self, path_info=None):\n from requests.auth import HTTPBasicAuth, HTTPDigestAuth\n\n if path_info is None:\n path_info = self.path_info\n\n if self.auth:\n if self.ask_password and self.password is None:\n host, user = path_info.host, path_info.user\n self.password = ask_password(host, user)\n if self.auth == \"basic\":\n return HTTPBasicAuth(path_info.user, self.password)\n if self.auth == \"digest\":\n return HTTPDigestAuth(path_info.user, self.password)\n if self.auth == \"custom\" and self.custom_auth_header:\n self.headers.update({self.custom_auth_header: self.password})\n return None\n\n @wrap_prop(threading.Lock())\n @cached_property\n def _session(self):\n import requests\n from requests.adapters import HTTPAdapter\n from urllib3.util.retry import Retry\n\n session = requests.Session()\n\n retries = Retry(\n total=self.SESSION_RETRIES,\n backoff_factor=self.SESSION_BACKOFF_FACTOR,\n )\n\n session.mount(\"http://\", HTTPAdapter(max_retries=retries))\n session.mount(\"https://\", HTTPAdapter(max_retries=retries))\n\n return session\n\n def request(self, method, url, **kwargs):\n import requests\n\n kwargs.setdefault(\"allow_redirects\", True)\n kwargs.setdefault(\"timeout\", self.REQUEST_TIMEOUT)\n\n try:\n res = self._session.request(\n method,\n url,\n auth=self._auth_method(),\n headers=self.headers,\n **kwargs,\n )\n\n redirect_no_location = (\n kwargs[\"allow_redirects\"]\n and res.status_code in (301, 302)\n and \"location\" not in res.headers\n )\n\n if redirect_no_location:\n # AWS s3 doesn't like to add a location header to its redirects\n # from https://s3.amazonaws.com/<bucket name>/* type URLs.\n # This should be treated as an error\n raise requests.exceptions.RequestException\n\n return res\n\n except requests.exceptions.RequestException:\n raise DvcException(f\"could not perform a {method} request\")\n\n def exists(self, path_info, use_dvcignore=True):\n return bool(self.request(\"HEAD\", path_info.url))\n\n def get_file_hash(self, path_info):\n url = path_info.url\n headers = self.request(\"HEAD\", url).headers\n etag = 
headers.get(\"ETag\") or headers.get(\"Content-MD5\")\n\n if not etag:\n raise DvcException(\n \"could not find an ETag or \"\n \"Content-MD5 header for '{url}'\".format(url=url)\n )\n\n return etag\n\n def _download(self, from_info, to_file, name=None, no_progress_bar=False):\n response = self.request(\"GET\", from_info.url, stream=True)\n if response.status_code != 200:\n raise HTTPError(response.status_code, response.reason)\n with open(to_file, \"wb\") as fd:\n with Tqdm.wrapattr(\n fd,\n \"write\",\n total=None\n if no_progress_bar\n else self._content_length(response),\n leave=False,\n desc=from_info.url if name is None else name,\n disable=no_progress_bar,\n ) as fd_wrapped:\n for chunk in response.iter_content(chunk_size=self.CHUNK_SIZE):\n fd_wrapped.write(chunk)\n\n def _upload(self, from_file, to_info, name=None, no_progress_bar=False):\n def chunks():\n with open(from_file, \"rb\") as fd:\n with Tqdm.wrapattr(\n fd,\n \"read\",\n total=None\n if no_progress_bar\n else os.path.getsize(from_file),\n leave=False,\n desc=to_info.url if name is None else name,\n disable=no_progress_bar,\n ) as fd_wrapped:\n while True:\n chunk = fd_wrapped.read(self.CHUNK_SIZE)\n if not chunk:\n break\n yield chunk\n\n response = self.request(\"POST\", to_info.url, data=chunks())\n if response.status_code not in (200, 201):\n raise HTTPError(response.status_code, response.reason)\n\n @staticmethod\n def _content_length(response):\n res = response.headers.get(\"Content-Length\")\n return int(res) if res else None\n", "path": "dvc/tree/http.py"}], "after_files": [{"content": "import logging\nimport os.path\nimport threading\n\nfrom funcy import cached_property, memoize, wrap_prop, wrap_with\n\nimport dvc.prompt as prompt\nfrom dvc.exceptions import DvcException, HTTPError\nfrom dvc.path_info import HTTPURLInfo\nfrom dvc.progress import Tqdm\nfrom dvc.scheme import Schemes\n\nfrom .base import BaseTree\n\nlogger = logging.getLogger(__name__)\n\n\n@wrap_with(threading.Lock())\n@memoize\ndef ask_password(host, user):\n return prompt.password(\n \"Enter a password for \"\n \"host '{host}' user '{user}'\".format(host=host, user=user)\n )\n\n\nclass HTTPTree(BaseTree): # pylint:disable=abstract-method\n scheme = Schemes.HTTP\n PATH_CLS = HTTPURLInfo\n PARAM_CHECKSUM = \"etag\"\n CAN_TRAVERSE = False\n\n SESSION_RETRIES = 5\n SESSION_BACKOFF_FACTOR = 0.1\n REQUEST_TIMEOUT = 10\n CHUNK_SIZE = 2 ** 16\n\n def __init__(self, repo, config):\n super().__init__(repo, config)\n\n url = config.get(\"url\")\n if url:\n self.path_info = self.PATH_CLS(url)\n user = config.get(\"user\", None)\n if user:\n self.path_info.user = user\n else:\n self.path_info = None\n\n self.auth = config.get(\"auth\", None)\n self.custom_auth_header = config.get(\"custom_auth_header\", None)\n self.password = config.get(\"password\", None)\n self.ask_password = config.get(\"ask_password\", False)\n self.headers = {}\n\n def _auth_method(self, path_info=None):\n from requests.auth import HTTPBasicAuth, HTTPDigestAuth\n\n if path_info is None:\n path_info = self.path_info\n\n if self.auth:\n if self.ask_password and self.password is None:\n host, user = path_info.host, path_info.user\n self.password = ask_password(host, user)\n if self.auth == \"basic\":\n return HTTPBasicAuth(path_info.user, self.password)\n if self.auth == \"digest\":\n return HTTPDigestAuth(path_info.user, self.password)\n if self.auth == \"custom\" and self.custom_auth_header:\n self.headers.update({self.custom_auth_header: self.password})\n return None\n\n 
@wrap_prop(threading.Lock())\n @cached_property\n def _session(self):\n import requests\n from requests.adapters import HTTPAdapter\n from urllib3.util.retry import Retry\n\n session = requests.Session()\n\n retries = Retry(\n total=self.SESSION_RETRIES,\n backoff_factor=self.SESSION_BACKOFF_FACTOR,\n )\n\n session.mount(\"http://\", HTTPAdapter(max_retries=retries))\n session.mount(\"https://\", HTTPAdapter(max_retries=retries))\n\n return session\n\n def request(self, method, url, **kwargs):\n import requests\n\n kwargs.setdefault(\"allow_redirects\", True)\n kwargs.setdefault(\"timeout\", self.REQUEST_TIMEOUT)\n\n try:\n res = self._session.request(\n method,\n url,\n auth=self._auth_method(),\n headers=self.headers,\n **kwargs,\n )\n\n redirect_no_location = (\n kwargs[\"allow_redirects\"]\n and res.status_code in (301, 302)\n and \"location\" not in res.headers\n )\n\n if redirect_no_location:\n # AWS s3 doesn't like to add a location header to its redirects\n # from https://s3.amazonaws.com/<bucket name>/* type URLs.\n # This should be treated as an error\n raise requests.exceptions.RequestException\n\n return res\n\n except requests.exceptions.RequestException:\n raise DvcException(f\"could not perform a {method} request\")\n\n def _head(self, url):\n response = self.request(\"HEAD\", url)\n if response.ok:\n return response\n\n # Sometimes servers are configured to forbid HEAD requests\n # Context: https://github.com/iterative/dvc/issues/4131\n with self.request(\"GET\", url, stream=True) as r:\n if r.ok:\n return r\n\n return response\n\n def exists(self, path_info, use_dvcignore=True):\n return bool(self._head(path_info.url))\n\n def get_file_hash(self, path_info):\n url = path_info.url\n\n headers = self._head(url).headers\n\n etag = headers.get(\"ETag\") or headers.get(\"Content-MD5\")\n\n if not etag:\n raise DvcException(\n \"could not find an ETag or \"\n \"Content-MD5 header for '{url}'\".format(url=url)\n )\n\n return etag\n\n def _download(self, from_info, to_file, name=None, no_progress_bar=False):\n response = self.request(\"GET\", from_info.url, stream=True)\n if response.status_code != 200:\n raise HTTPError(response.status_code, response.reason)\n with open(to_file, \"wb\") as fd:\n with Tqdm.wrapattr(\n fd,\n \"write\",\n total=None\n if no_progress_bar\n else self._content_length(response),\n leave=False,\n desc=from_info.url if name is None else name,\n disable=no_progress_bar,\n ) as fd_wrapped:\n for chunk in response.iter_content(chunk_size=self.CHUNK_SIZE):\n fd_wrapped.write(chunk)\n\n def _upload(self, from_file, to_info, name=None, no_progress_bar=False):\n def chunks():\n with open(from_file, \"rb\") as fd:\n with Tqdm.wrapattr(\n fd,\n \"read\",\n total=None\n if no_progress_bar\n else os.path.getsize(from_file),\n leave=False,\n desc=to_info.url if name is None else name,\n disable=no_progress_bar,\n ) as fd_wrapped:\n while True:\n chunk = fd_wrapped.read(self.CHUNK_SIZE)\n if not chunk:\n break\n yield chunk\n\n response = self.request(\"POST\", to_info.url, data=chunks())\n if response.status_code not in (200, 201):\n raise HTTPError(response.status_code, response.reason)\n\n @staticmethod\n def _content_length(response):\n res = response.headers.get(\"Content-Length\")\n return int(res) if res else None\n", "path": "dvc/tree/http.py"}]} | 3,173 | 279 |
gh_patches_debug_8740 | rasdani/github-patches | git_diff | ansible__ansible-modules-extras-2047 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pkg5_publisher throws KeyError with unexpected output from `pkg publisher -Ftsv`
##### Issue Type:
Bug Report
##### Ansible Version:
```
$ ansible --version
ansible 2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = None
```
##### Summary:
I have a system with the output of publishers listed like so:
```
$ pkg publisher
PUBLISHER TYPE STATUS URI
delphix.com origin online file:///data/ips_repo/
ms.omniti.com origin online file:///data/ips_repo/
omnios origin online file:///data/ips_repo/
on-nightly origin online file:///net/pharos/export/pkg/dlpx-os-gate/nightly/repo.redist/
openindiana.org
```
When I try to use the `pkg5_publisher` with the system configured like this, I get the following Python exception:
```
$ sudo python /tmp/ansible/ansible-tmp-1442291882.82-184265772797279/pkg5_publisher
Traceback (most recent call last):
File "/tmp/ansible/ansible-tmp-1442291882.82-184265772797279/pkg5_publisher", line 1902, in <module>
main()
File "/tmp/ansible/ansible-tmp-1442291882.82-184265772797279/pkg5_publisher", line 95, in main
modify_publisher(module, module.params)
File "/tmp/ansible/ansible-tmp-1442291882.82-184265772797279/pkg5_publisher", line 102, in modify_publisher
existing = get_publishers(module)
File "/tmp/ansible/ansible-tmp-1442291882.82-184265772797279/pkg5_publisher", line 184, in get_publishers
publishers[name][values['type']].append(values['uri'])
KeyError: ''
```
Looking at the source, `pkg5_publisher` tries to parse each line from the output of `pkg publisher -Ftsv` and assumes there will be a value for each column. Obviously this is not a valid assumption to make, as my system is proof this isn't guaranteed to be true, so it'd be nice if the module were able to handle this situation gracefully.
If I manually remove this publisher from the system with:
```
$ sudo pkg unset-publisher openindiana.org
$ pkg publisher -Ftsv
PUBLISHER STICKY PREFERRED ENABLED TYPE STATUS URI
delphix.com true true true origin online file:///data/ips_repo/
ms.omniti.com true false true origin online file:///data/ips_repo/
omnios true false true origin online file:///data/ips_repo/
on-nightly true false true origin online file:///net/pharos/export/pkg/dlpx-os-gate/nightly/repo.redist/
```
then the `pkg5_publisher` module will work, but the whole point of using this module is to remove the need to manually configure the system.
To make matters worse, I can't simply use:
```
- name: Remove openindiana.org Publisher
command: pkg unset-publisher openindiana.org
```
prior to using `pkg5_publisher` because that command will fail if the publisher doesn't exist (e.g. on the second invocation of the playbook or role, after the publisher was removed the first time).
--- END ISSUE ---
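The `KeyError: ''` from the traceback can be reproduced in a few lines. This is illustrative only: the column names follow the `-Ftsv` header shown above, the exact short row emitted for `openindiana.org` is an assumption, and the dict-of-lists shape mirrors `publishers[name][values['type']].append(values['uri'])`:

```python
header = ['name', 'sticky', 'preferred', 'enabled', 'type', 'status', 'uri']
rows = [
    "omnios\ttrue\tfalse\ttrue\torigin\tonline\tfile:///data/ips_repo/",
    "openindiana.org\ttrue\tfalse\ttrue\t\t\t",  # publisher with no origin or mirror
]

publishers = {}
for line in rows:
    values = dict(zip(header, line.split("\t")))
    name = values['name']
    publishers.setdefault(name, {'origin': [], 'mirror': []})
    # The short row leaves values['type'] == '', so this lookup raises KeyError: ''
    publishers[name][values['type']].append(values['uri'])
```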
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `packaging/os/pkg5_publisher.py`
Content:
```
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # Copyright 2014 Peter Oliver <[email protected]>
5 #
6 # This program is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # This program is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with this program. If not, see <http://www.gnu.org/licenses/>.
18
19 DOCUMENTATION = '''
20 ---
21 module: pkg5_publisher
22 author: "Peter Oliver (@mavit)"
23 short_description: Manages Solaris 11 Image Packaging System publishers
24 version_added: 1.9
25 description:
26 - IPS packages are the native packages in Solaris 11 and higher.
27 - This modules will configure which publishers a client will download IPS
28 packages from.
29 options:
30 name:
31 description:
32 - The publisher's name.
33 required: true
34 aliases: [ publisher ]
35 state:
36 description:
37 - Whether to ensure that a publisher is present or absent.
38 required: false
39 default: present
40 choices: [ present, absent ]
41 sticky:
42 description:
43 - Packages installed from a sticky repository can only receive updates
44 from that repository.
45 required: false
46 default: null
47 choices: [ true, false ]
48 enabled:
49 description:
50 - Is the repository enabled or disabled?
51 required: false
52 default: null
53 choices: [ true, false ]
54 origin:
55 description:
56 - A path or URL to the repository.
57 - Multiple values may be provided.
58 required: false
59 default: null
60 mirror:
61 description:
62 - A path or URL to the repository mirror.
63 - Multiple values may be provided.
64 required: false
65 default: null
66 '''
67 EXAMPLES = '''
68 # Fetch packages for the solaris publisher direct from Oracle:
69 - pkg5_publisher: name=solaris sticky=true origin=https://pkg.oracle.com/solaris/support/
70
71 # Configure a publisher for locally-produced packages:
72 - pkg5_publisher: name=site origin=https://pkg.example.com/site/
73 '''
74
75 def main():
76 module = AnsibleModule(
77 argument_spec=dict(
78 name=dict(required=True, aliases=['publisher']),
79 state=dict(default='present', choices=['present', 'absent']),
80 sticky=dict(type='bool'),
81 enabled=dict(type='bool'),
82 # search_after=dict(),
83 # search_before=dict(),
84 origin=dict(type='list'),
85 mirror=dict(type='list'),
86 )
87 )
88
89 for option in ['origin', 'mirror']:
90 if module.params[option] == ['']:
91 module.params[option] = []
92
93 if module.params['state'] == 'present':
94 modify_publisher(module, module.params)
95 else:
96 unset_publisher(module, module.params['name'])
97
98
99 def modify_publisher(module, params):
100 name = params['name']
101 existing = get_publishers(module)
102
103 if name in existing:
104 for option in ['origin', 'mirror', 'sticky', 'enabled']:
105 if params[option] != None:
106 if params[option] != existing[name][option]:
107 return set_publisher(module, params)
108 else:
109 return set_publisher(module, params)
110
111 module.exit_json()
112
113
114 def set_publisher(module, params):
115 name = params['name']
116 args = []
117
118 if params['origin'] != None:
119 args.append('--remove-origin=*')
120 args.extend(['--add-origin=' + u for u in params['origin']])
121 if params['mirror'] != None:
122 args.append('--remove-mirror=*')
123 args.extend(['--add-mirror=' + u for u in params['mirror']])
124
125 if params['sticky'] != None and params['sticky']:
126 args.append('--sticky')
127 elif params['sticky'] != None:
128 args.append('--non-sticky')
129
130 if params['enabled'] != None and params['enabled']:
131 args.append('--enable')
132 elif params['enabled'] != None:
133 args.append('--disable')
134
135 rc, out, err = module.run_command(
136 ["pkg", "set-publisher"] + args + [name],
137 check_rc=True
138 )
139 response = {
140 'rc': rc,
141 'results': [out],
142 'msg': err,
143 'changed': True,
144 }
145 module.exit_json(**response)
146
147
148 def unset_publisher(module, publisher):
149 if not publisher in get_publishers(module):
150 module.exit_json()
151
152 rc, out, err = module.run_command(
153 ["pkg", "unset-publisher", publisher],
154 check_rc=True
155 )
156 response = {
157 'rc': rc,
158 'results': [out],
159 'msg': err,
160 'changed': True,
161 }
162 module.exit_json(**response)
163
164
165 def get_publishers(module):
166 rc, out, err = module.run_command(["pkg", "publisher", "-Ftsv"], True)
167
168 lines = out.splitlines()
169 keys = lines.pop(0).lower().split("\t")
170
171 publishers = {}
172 for line in lines:
173 values = dict(zip(keys, map(unstringify, line.split("\t"))))
174 name = values['publisher']
175
176 if not name in publishers:
177 publishers[name] = dict(
178 (k, values[k]) for k in ['sticky', 'enabled']
179 )
180 publishers[name]['origin'] = []
181 publishers[name]['mirror'] = []
182
183 publishers[name][values['type']].append(values['uri'])
184
185 return publishers
186
187
188 def unstringify(val):
189 if val == "-":
190 return None
191 elif val == "true":
192 return True
193 elif val == "false":
194 return False
195 else:
196 return val
197
198
199 from ansible.module_utils.basic import *
200 main()
201
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/packaging/os/pkg5_publisher.py b/packaging/os/pkg5_publisher.py
--- a/packaging/os/pkg5_publisher.py
+++ b/packaging/os/pkg5_publisher.py
@@ -180,13 +180,14 @@
publishers[name]['origin'] = []
publishers[name]['mirror'] = []
- publishers[name][values['type']].append(values['uri'])
+ if values['type'] is not None:
+ publishers[name][values['type']].append(values['uri'])
return publishers
def unstringify(val):
- if val == "-":
+ if val == "-" or val == '':
return None
elif val == "true":
return True
| {"golden_diff": "diff --git a/packaging/os/pkg5_publisher.py b/packaging/os/pkg5_publisher.py\n--- a/packaging/os/pkg5_publisher.py\n+++ b/packaging/os/pkg5_publisher.py\n@@ -180,13 +180,14 @@\n publishers[name]['origin'] = []\n publishers[name]['mirror'] = []\n \n- publishers[name][values['type']].append(values['uri'])\n+ if values['type'] is not None:\n+ publishers[name][values['type']].append(values['uri'])\n \n return publishers\n \n \n def unstringify(val):\n- if val == \"-\":\n+ if val == \"-\" or val == '':\n return None\n elif val == \"true\":\n return True\n", "issue": "pkg5_publisher throws KeyError with unexpected output from `pkg publisher -Ftsv`\n##### Issue Type:\n\nBug Report\n##### Ansible Version:\n\n```\n$ ansible --version\nansible 2.0.0\n config file = /etc/ansible/ansible.cfg\n configured module search path = None\n```\n##### Summary:\n\nI have a system with the output of publishers listed like so:\n\n```\n$ pkg publisher\nPUBLISHER TYPE STATUS URI\ndelphix.com origin online file:///data/ips_repo/\nms.omniti.com origin online file:///data/ips_repo/\nomnios origin online file:///data/ips_repo/\non-nightly origin online file:///net/pharos/export/pkg/dlpx-os-gate/nightly/repo.redist/\nopenindiana.org \n```\n\nWhen I try to use the `pkg5_publisher` with the system configured like this, I get the following Python exception:\n\n```\n$ sudo python /tmp/ansible/ansible-tmp-1442291882.82-184265772797279/pkg5_publisher\nTraceback (most recent call last):\n File \"/tmp/ansible/ansible-tmp-1442291882.82-184265772797279/pkg5_publisher\", line 1902, in <module>\n main()\n File \"/tmp/ansible/ansible-tmp-1442291882.82-184265772797279/pkg5_publisher\", line 95, in main\n modify_publisher(module, module.params)\n File \"/tmp/ansible/ansible-tmp-1442291882.82-184265772797279/pkg5_publisher\", line 102, in modify_publisher\n existing = get_publishers(module)\n File \"/tmp/ansible/ansible-tmp-1442291882.82-184265772797279/pkg5_publisher\", line 184, in get_publishers\n publishers[name][values['type']].append(values['uri'])\nKeyError: ''\n```\n\nLooking at the source, `pkg5_publisher` tries to parse each line from the output of `pkg publisher -Ftsv` and assumes there will be a value for each column. Obviously this is not a valid assumption to make, as my system is proof this isn't guaranteed to be true, so it'd be nice of the module was able to handle this situation gracefully.\n\nIf I manually remove this publisher from the system with:\n\n```\n$ sudo pkg unset-publisher openindiana.org\n$ pkg publisher -Ftsv\nPUBLISHER STICKY PREFERRED ENABLED TYPE STATUS URI\ndelphix.com true true true origin online file:///data/ips_repo/\nms.omniti.com true false true origin online file:///data/ips_repo/\nomnios true false true origin online file:///data/ips_repo/\non-nightly true false true origin online file:///net/pharos/export/pkg/dlpx-os-gate/nightly/repo.redist/\n```\n\nthen the `pkg5_publisher` module will work, but the whole point of using this module is to remove the need to manually configure the system.\n\nTo make matters worse, I can't simply use:\n\n```\n- name: Remove openindiana.org Publisher\n command: pkg unset-publisher openindiana.org\n```\n\nprior to using `pkg5_publisher` because that command will fail if the publisher doesn't exist (e.g. 
on the second invocation of the playbook or role, after the publisher was removed the first time).\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright 2014 Peter Oliver <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. If not, see <http://www.gnu.org/licenses/>.\n\nDOCUMENTATION = '''\n---\nmodule: pkg5_publisher\nauthor: \"Peter Oliver (@mavit)\"\nshort_description: Manages Solaris 11 Image Packaging System publishers\nversion_added: 1.9\ndescription:\n - IPS packages are the native packages in Solaris 11 and higher.\n - This modules will configure which publishers a client will download IPS\n packages from.\noptions:\n name:\n description:\n - The publisher's name.\n required: true\n aliases: [ publisher ]\n state:\n description:\n - Whether to ensure that a publisher is present or absent.\n required: false\n default: present\n choices: [ present, absent ]\n sticky:\n description:\n - Packages installed from a sticky repository can only receive updates\n from that repository.\n required: false\n default: null\n choices: [ true, false ]\n enabled:\n description:\n - Is the repository enabled or disabled?\n required: false\n default: null\n choices: [ true, false ]\n origin:\n description:\n - A path or URL to the repository.\n - Multiple values may be provided.\n required: false\n default: null\n mirror:\n description:\n - A path or URL to the repository mirror.\n - Multiple values may be provided.\n required: false\n default: null\n'''\nEXAMPLES = '''\n# Fetch packages for the solaris publisher direct from Oracle:\n- pkg5_publisher: name=solaris sticky=true origin=https://pkg.oracle.com/solaris/support/\n\n# Configure a publisher for locally-produced packages:\n- pkg5_publisher: name=site origin=https://pkg.example.com/site/\n'''\n\ndef main():\n module = AnsibleModule(\n argument_spec=dict(\n name=dict(required=True, aliases=['publisher']),\n state=dict(default='present', choices=['present', 'absent']),\n sticky=dict(type='bool'),\n enabled=dict(type='bool'),\n # search_after=dict(),\n # search_before=dict(),\n origin=dict(type='list'),\n mirror=dict(type='list'),\n )\n )\n\n for option in ['origin', 'mirror']:\n if module.params[option] == ['']:\n module.params[option] = []\n\n if module.params['state'] == 'present':\n modify_publisher(module, module.params)\n else:\n unset_publisher(module, module.params['name'])\n\n\ndef modify_publisher(module, params):\n name = params['name']\n existing = get_publishers(module)\n\n if name in existing:\n for option in ['origin', 'mirror', 'sticky', 'enabled']:\n if params[option] != None:\n if params[option] != existing[name][option]:\n return set_publisher(module, params)\n else:\n return set_publisher(module, params)\n\n module.exit_json()\n\n\ndef set_publisher(module, params):\n name = params['name']\n args = []\n\n if params['origin'] != None:\n args.append('--remove-origin=*')\n args.extend(['--add-origin=' + u for u in 
params['origin']])\n if params['mirror'] != None:\n args.append('--remove-mirror=*')\n args.extend(['--add-mirror=' + u for u in params['mirror']])\n\n if params['sticky'] != None and params['sticky']:\n args.append('--sticky')\n elif params['sticky'] != None:\n args.append('--non-sticky')\n\n if params['enabled'] != None and params['enabled']:\n args.append('--enable')\n elif params['enabled'] != None:\n args.append('--disable')\n\n rc, out, err = module.run_command(\n [\"pkg\", \"set-publisher\"] + args + [name],\n check_rc=True\n )\n response = {\n 'rc': rc,\n 'results': [out],\n 'msg': err,\n 'changed': True,\n }\n module.exit_json(**response)\n\n\ndef unset_publisher(module, publisher):\n if not publisher in get_publishers(module):\n module.exit_json()\n\n rc, out, err = module.run_command(\n [\"pkg\", \"unset-publisher\", publisher],\n check_rc=True\n )\n response = {\n 'rc': rc,\n 'results': [out],\n 'msg': err,\n 'changed': True,\n }\n module.exit_json(**response)\n\n\ndef get_publishers(module):\n rc, out, err = module.run_command([\"pkg\", \"publisher\", \"-Ftsv\"], True)\n\n lines = out.splitlines()\n keys = lines.pop(0).lower().split(\"\\t\")\n\n publishers = {}\n for line in lines:\n values = dict(zip(keys, map(unstringify, line.split(\"\\t\"))))\n name = values['publisher']\n\n if not name in publishers:\n publishers[name] = dict(\n (k, values[k]) for k in ['sticky', 'enabled']\n )\n publishers[name]['origin'] = []\n publishers[name]['mirror'] = []\n\n publishers[name][values['type']].append(values['uri'])\n\n return publishers\n\n\ndef unstringify(val):\n if val == \"-\":\n return None\n elif val == \"true\":\n return True\n elif val == \"false\":\n return False\n else:\n return val\n\n\nfrom ansible.module_utils.basic import *\nmain()\n", "path": "packaging/os/pkg5_publisher.py"}], "after_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright 2014 Peter Oliver <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. 
If not, see <http://www.gnu.org/licenses/>.\n\nDOCUMENTATION = '''\n---\nmodule: pkg5_publisher\nauthor: \"Peter Oliver (@mavit)\"\nshort_description: Manages Solaris 11 Image Packaging System publishers\nversion_added: 1.9\ndescription:\n - IPS packages are the native packages in Solaris 11 and higher.\n - This modules will configure which publishers a client will download IPS\n packages from.\noptions:\n name:\n description:\n - The publisher's name.\n required: true\n aliases: [ publisher ]\n state:\n description:\n - Whether to ensure that a publisher is present or absent.\n required: false\n default: present\n choices: [ present, absent ]\n sticky:\n description:\n - Packages installed from a sticky repository can only receive updates\n from that repository.\n required: false\n default: null\n choices: [ true, false ]\n enabled:\n description:\n - Is the repository enabled or disabled?\n required: false\n default: null\n choices: [ true, false ]\n origin:\n description:\n - A path or URL to the repository.\n - Multiple values may be provided.\n required: false\n default: null\n mirror:\n description:\n - A path or URL to the repository mirror.\n - Multiple values may be provided.\n required: false\n default: null\n'''\nEXAMPLES = '''\n# Fetch packages for the solaris publisher direct from Oracle:\n- pkg5_publisher: name=solaris sticky=true origin=https://pkg.oracle.com/solaris/support/\n\n# Configure a publisher for locally-produced packages:\n- pkg5_publisher: name=site origin=https://pkg.example.com/site/\n'''\n\ndef main():\n module = AnsibleModule(\n argument_spec=dict(\n name=dict(required=True, aliases=['publisher']),\n state=dict(default='present', choices=['present', 'absent']),\n sticky=dict(type='bool'),\n enabled=dict(type='bool'),\n # search_after=dict(),\n # search_before=dict(),\n origin=dict(type='list'),\n mirror=dict(type='list'),\n )\n )\n\n for option in ['origin', 'mirror']:\n if module.params[option] == ['']:\n module.params[option] = []\n\n if module.params['state'] == 'present':\n modify_publisher(module, module.params)\n else:\n unset_publisher(module, module.params['name'])\n\n\ndef modify_publisher(module, params):\n name = params['name']\n existing = get_publishers(module)\n\n if name in existing:\n for option in ['origin', 'mirror', 'sticky', 'enabled']:\n if params[option] != None:\n if params[option] != existing[name][option]:\n return set_publisher(module, params)\n else:\n return set_publisher(module, params)\n\n module.exit_json()\n\n\ndef set_publisher(module, params):\n name = params['name']\n args = []\n\n if params['origin'] != None:\n args.append('--remove-origin=*')\n args.extend(['--add-origin=' + u for u in params['origin']])\n if params['mirror'] != None:\n args.append('--remove-mirror=*')\n args.extend(['--add-mirror=' + u for u in params['mirror']])\n\n if params['sticky'] != None and params['sticky']:\n args.append('--sticky')\n elif params['sticky'] != None:\n args.append('--non-sticky')\n\n if params['enabled'] != None and params['enabled']:\n args.append('--enable')\n elif params['enabled'] != None:\n args.append('--disable')\n\n rc, out, err = module.run_command(\n [\"pkg\", \"set-publisher\"] + args + [name],\n check_rc=True\n )\n response = {\n 'rc': rc,\n 'results': [out],\n 'msg': err,\n 'changed': True,\n }\n module.exit_json(**response)\n\n\ndef unset_publisher(module, publisher):\n if not publisher in get_publishers(module):\n module.exit_json()\n\n rc, out, err = module.run_command(\n [\"pkg\", \"unset-publisher\", publisher],\n 
check_rc=True\n )\n response = {\n 'rc': rc,\n 'results': [out],\n 'msg': err,\n 'changed': True,\n }\n module.exit_json(**response)\n\n\ndef get_publishers(module):\n rc, out, err = module.run_command([\"pkg\", \"publisher\", \"-Ftsv\"], True)\n\n lines = out.splitlines()\n keys = lines.pop(0).lower().split(\"\\t\")\n\n publishers = {}\n for line in lines:\n values = dict(zip(keys, map(unstringify, line.split(\"\\t\"))))\n name = values['publisher']\n\n if not name in publishers:\n publishers[name] = dict(\n (k, values[k]) for k in ['sticky', 'enabled']\n )\n publishers[name]['origin'] = []\n publishers[name]['mirror'] = []\n\n if values['type'] is not None:\n publishers[name][values['type']].append(values['uri'])\n\n return publishers\n\n\ndef unstringify(val):\n if val == \"-\" or val == '':\n return None\n elif val == \"true\":\n return True\n elif val == \"false\":\n return False\n else:\n return val\n\n\nfrom ansible.module_utils.basic import *\nmain()\n", "path": "packaging/os/pkg5_publisher.py"}]} | 2,922 | 160 |
gh_patches_debug_43071 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1917 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: rctcbc_gov_uk returning extra incorrect dates
### I Have A Problem With:
A specific source
### What's Your Problem
I've been using this integration for a while without issue.
I've been away for a few weeks, and came home to find 3 entries for each collection each week, when there should only be one.
See attached screenshot.
The Wednesday entries are the only valid ones. Not sure why they're being duplicated though. :-/
<img width="1184" alt="Screenshot 2024-03-28 at 03 44 55" src="https://github.com/mampfes/hacs_waste_collection_schedule/assets/63560223/7cecf6a6-9ee5-42cb-875f-50d91877ceeb">
### Source (if relevant)
rctcbc_gov_uk
### Logs
```Shell
no relevant logs
```
### Relevant Configuration
```YAML
waste_collection_schedule:
sources:
- name: rctcbc_gov_uk
args:
uprn: "200003766278"
calendar_title: "Bin Collection"
```
### Checklist Source Error
- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [X] Checked that the website of your service provider is still working
- [X] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py`
Content:
```
1 from datetime import datetime
2
3 import requests
4 from bs4 import BeautifulSoup
5 from waste_collection_schedule import Collection
6
7 TITLE = "Rhondda Cynon Taf County Borough Council"
8 DESCRIPTION = "Source for rctcbc.gov.uk services for Rhondda Cynon Taf County Borough Council, Wales, UK"
9 URL = "rctcbc.gov.uk"
10 TEST_CASES = {
11 "Test_001": {"uprn": "10024274791"},
12 "Test_002": {"uprn": "100100718352"},
13 "Test_003": {"uprn": 100100733093},
14 }
15 ICON_MAP = {
16 "BLACK BAGS": "mdi:trash-can",
17 "RECYCLING": "mdi:recycle",
18 "FOOD WASTE": "mdi:food",
19 "GARDEN WASTE": "mdi:leaf",
20 }
21
22
23 class Source:
24 def __init__(self, uprn):
25 self._uprn = str(uprn)
26
27 def fetch(self):
28 s = requests.Session()
29 # website appears to display ~4 months worth of collections, so iterate through those pages
30 entries = []
31 for month in range(0, 4):
32 r = s.get(
33 f"https://www.rctcbc.gov.uk/EN/Resident/RecyclingandWaste/RecyclingandWasteCollectionDays.aspx?uprn={self._uprn}&month={month}"
34 )
35 soup = BeautifulSoup(r.text, "html.parser")
36 calendar_month = soup.find("div", {"class": "calendar-month"})
37 calendar_day = soup.find_all(
38 "div", {"class": "card-body card-body-padding"}
39 )
40 for day in calendar_day:
41 pickups = day.find_all("a")
42 if len(pickups) != 0:
43 d = day.find("div", {"class": "card-title"})
44 dt = d.text.strip() + " " + calendar_month.text.strip()
45 for pickup in pickups:
46 entries.append(
47 Collection(
48 date=datetime.strptime(
49 dt,
50 "%d %B %Y",
51 ).date(),
52 t=pickup.text,
53 icon=ICON_MAP.get(pickup.text.upper()),
54 )
55 )
56
57 return entries
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py
@@ -1,8 +1,8 @@
from datetime import datetime
import requests
-from bs4 import BeautifulSoup
-from waste_collection_schedule import Collection
+from bs4 import BeautifulSoup, Tag
+from waste_collection_schedule import Collection # type: ignore[attr-defined]
TITLE = "Rhondda Cynon Taf County Borough Council"
DESCRIPTION = "Source for rctcbc.gov.uk services for Rhondda Cynon Taf County Borough Council, Wales, UK"
@@ -24,34 +24,73 @@
def __init__(self, uprn):
self._uprn = str(uprn)
- def fetch(self):
+ def extract_collections(self, calendar: Tag | BeautifulSoup) -> list[Collection]:
+ calendar_month = calendar.find("div", {"class": "calendar-month"})
+ if not calendar_month or not isinstance(calendar_month, Tag):
+ return []
+ month = calendar_month.text.strip()
+ calendar_days = calendar.find_all(
+ "div", {"class": "card-body card-body-padding"}
+ )
+
+ entries = []
+ for day in calendar_days:
+ pickups = day.find_all("a")
+ if len(pickups) != 0:
+ d = day.find("div", {"class": "card-title"})
+ if not d or not isinstance(d, Tag):
+ continue
+ dt = d.text.strip() + " " + month
+ for pickup in pickups:
+ entries.append(
+ Collection(
+ date=datetime.strptime(
+ dt,
+ "%d %B %Y",
+ ).date(),
+ t=pickup.text,
+ icon=ICON_MAP.get(pickup.text.upper()),
+ )
+ )
+ return entries
+
+ def extract_from_printable_calendar(
+ self, soup: BeautifulSoup
+ ) -> list[Collection] | None:
+ entries = []
+ printable_calendar = soup.find("div", {"class": "printableCalendar"})
+ if not printable_calendar or not isinstance(printable_calendar, Tag):
+ return None
+
+ calendars = printable_calendar.find_all(
+ "div", {"class": "calendar-wrap onlyPrint"}
+ )
+ if not calendars:
+ return None
+
+ for calendar in calendars:
+ if not calendar or not isinstance(calendar, Tag):
+ continue
+ entries += self.extract_collections(calendar)
+ return entries or None
+
+ def fetch(self) -> list[Collection]:
s = requests.Session()
# website appears to display ~4 months worth of collections, so iterate through those pages
- entries = []
+ entries: list[Collection] = []
for month in range(0, 4):
r = s.get(
f"https://www.rctcbc.gov.uk/EN/Resident/RecyclingandWaste/RecyclingandWasteCollectionDays.aspx?uprn={self._uprn}&month={month}"
)
soup = BeautifulSoup(r.text, "html.parser")
- calendar_month = soup.find("div", {"class": "calendar-month"})
- calendar_day = soup.find_all(
- "div", {"class": "card-body card-body-padding"}
- )
- for day in calendar_day:
- pickups = day.find_all("a")
- if len(pickups) != 0:
- d = day.find("div", {"class": "card-title"})
- dt = d.text.strip() + " " + calendar_month.text.strip()
- for pickup in pickups:
- entries.append(
- Collection(
- date=datetime.strptime(
- dt,
- "%d %B %Y",
- ).date(),
- t=pickup.text,
- icon=ICON_MAP.get(pickup.text.upper()),
- )
- )
+ printable_calendar_entries = self.extract_from_printable_calendar(soup)
+ if printable_calendar_entries:
+ return printable_calendar_entries
+
+ # OLD METHOD IF THEY EVER REMOVE THE PRINTABLE CALENDAR AGAIN:
+ calendar = soup.find("div", {"class": "monthlyCalendar"}) or soup
+ if not isinstance(calendar, Tag):
+ continue
+ entries += self.extract_collections(calendar)
return entries
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py\n@@ -1,8 +1,8 @@\n from datetime import datetime\n \n import requests\n-from bs4 import BeautifulSoup\n-from waste_collection_schedule import Collection\n+from bs4 import BeautifulSoup, Tag\n+from waste_collection_schedule import Collection # type: ignore[attr-defined]\n \n TITLE = \"Rhondda Cynon Taf County Borough Council\"\n DESCRIPTION = \"Source for rctcbc.gov.uk services for Rhondda Cynon Taf County Borough Council, Wales, UK\"\n@@ -24,34 +24,73 @@\n def __init__(self, uprn):\n self._uprn = str(uprn)\n \n- def fetch(self):\n+ def extract_collections(self, calendar: Tag | BeautifulSoup) -> list[Collection]:\n+ calendar_month = calendar.find(\"div\", {\"class\": \"calendar-month\"})\n+ if not calendar_month or not isinstance(calendar_month, Tag):\n+ return []\n+ month = calendar_month.text.strip()\n+ calendar_days = calendar.find_all(\n+ \"div\", {\"class\": \"card-body card-body-padding\"}\n+ )\n+\n+ entries = []\n+ for day in calendar_days:\n+ pickups = day.find_all(\"a\")\n+ if len(pickups) != 0:\n+ d = day.find(\"div\", {\"class\": \"card-title\"})\n+ if not d or not isinstance(d, Tag):\n+ continue\n+ dt = d.text.strip() + \" \" + month\n+ for pickup in pickups:\n+ entries.append(\n+ Collection(\n+ date=datetime.strptime(\n+ dt,\n+ \"%d %B %Y\",\n+ ).date(),\n+ t=pickup.text,\n+ icon=ICON_MAP.get(pickup.text.upper()),\n+ )\n+ )\n+ return entries\n+\n+ def extract_from_printable_calendar(\n+ self, soup: BeautifulSoup\n+ ) -> list[Collection] | None:\n+ entries = []\n+ printable_calendar = soup.find(\"div\", {\"class\": \"printableCalendar\"})\n+ if not printable_calendar or not isinstance(printable_calendar, Tag):\n+ return None\n+\n+ calendars = printable_calendar.find_all(\n+ \"div\", {\"class\": \"calendar-wrap onlyPrint\"}\n+ )\n+ if not calendars:\n+ return None\n+\n+ for calendar in calendars:\n+ if not calendar or not isinstance(calendar, Tag):\n+ continue\n+ entries += self.extract_collections(calendar)\n+ return entries or None\n+\n+ def fetch(self) -> list[Collection]:\n s = requests.Session()\n # website appears to display ~4 months worth of collections, so iterate through those pages\n- entries = []\n+ entries: list[Collection] = []\n for month in range(0, 4):\n r = s.get(\n f\"https://www.rctcbc.gov.uk/EN/Resident/RecyclingandWaste/RecyclingandWasteCollectionDays.aspx?uprn={self._uprn}&month={month}\"\n )\n soup = BeautifulSoup(r.text, \"html.parser\")\n- calendar_month = soup.find(\"div\", {\"class\": \"calendar-month\"})\n- calendar_day = soup.find_all(\n- \"div\", {\"class\": \"card-body card-body-padding\"}\n- )\n- for day in calendar_day:\n- pickups = day.find_all(\"a\")\n- if len(pickups) != 0:\n- d = day.find(\"div\", {\"class\": \"card-title\"})\n- dt = d.text.strip() + \" \" + calendar_month.text.strip()\n- for pickup in pickups:\n- entries.append(\n- Collection(\n- date=datetime.strptime(\n- dt,\n- \"%d %B %Y\",\n- ).date(),\n- t=pickup.text,\n- icon=ICON_MAP.get(pickup.text.upper()),\n- )\n- )\n+ printable_calendar_entries = self.extract_from_printable_calendar(soup)\n+ if printable_calendar_entries:\n+ return printable_calendar_entries\n+\n+ 
# OLD METHOD IF THEY EVER REMOVE THE PRINTABLE CALENDAR AGAIN:\n+ calendar = soup.find(\"div\", {\"class\": \"monthlyCalendar\"}) or soup\n+ if not isinstance(calendar, Tag):\n+ continue\n+ entries += self.extract_collections(calendar)\n \n return entries\n", "issue": "[Bug]: rctcbc_gov_uk returning extra incorrect dates \n### I Have A Problem With:\n\nA specific source\n\n### What's Your Problem\n\nI've been using this integration for a while without issue.\r\nI've been away for a few weeks, and come home to find 3 entries for each entry for each week, when there should only be one.\r\n\r\nSee attached screenshot.\r\nThe Wednesday entries are the only valid ones. Not sure why they're being duplicated though. :-/\r\n\r\n<img width=\"1184\" alt=\"Screenshot 2024-03-28 at 03 44 55\" src=\"https://github.com/mampfes/hacs_waste_collection_schedule/assets/63560223/7cecf6a6-9ee5-42cb-875f-50d91877ceeb\">\r\n\n\n### Source (if relevant)\n\nrctcbc_gov_uk\n\n### Logs\n\n```Shell\nno relevant logs\n```\n\n\n### Relevant Configuration\n\n```YAML\nwaste_collection_schedule:\r\n sources:\r\n - name: rctcbc_gov_uk\r\n args:\r\n uprn: \"200003766278\"\r\n calendar_title: \"Bin Collection\"\n```\n\n\n### Checklist Source Error\n\n- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\n- [X] Checked that the website of your service provider is still working\n- [X] Tested my attributes on the service provider website (if possible)\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\n\n### Checklist Sensor Error\n\n- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\n\n### Required\n\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "from datetime import datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection\n\nTITLE = \"Rhondda Cynon Taf County Borough Council\"\nDESCRIPTION = \"Source for rctcbc.gov.uk services for Rhondda Cynon Taf County Borough Council, Wales, UK\"\nURL = \"rctcbc.gov.uk\"\nTEST_CASES = {\n \"Test_001\": {\"uprn\": \"10024274791\"},\n \"Test_002\": {\"uprn\": \"100100718352\"},\n \"Test_003\": {\"uprn\": 100100733093},\n}\nICON_MAP = {\n \"BLACK BAGS\": \"mdi:trash-can\",\n \"RECYCLING\": \"mdi:recycle\",\n \"FOOD WASTE\": \"mdi:food\",\n \"GARDEN WASTE\": \"mdi:leaf\",\n}\n\n\nclass Source:\n def __init__(self, uprn):\n self._uprn = str(uprn)\n\n def fetch(self):\n s = requests.Session()\n # website appears to display ~4 months worth of collections, so iterate through those pages\n entries = []\n for month in range(0, 4):\n r = s.get(\n f\"https://www.rctcbc.gov.uk/EN/Resident/RecyclingandWaste/RecyclingandWasteCollectionDays.aspx?uprn={self._uprn}&month={month}\"\n )\n soup = BeautifulSoup(r.text, \"html.parser\")\n calendar_month = soup.find(\"div\", {\"class\": \"calendar-month\"})\n calendar_day = soup.find_all(\n \"div\", {\"class\": \"card-body card-body-padding\"}\n )\n for day in calendar_day:\n pickups = day.find_all(\"a\")\n if len(pickups) != 0:\n d = day.find(\"div\", 
{\"class\": \"card-title\"})\n dt = d.text.strip() + \" \" + calendar_month.text.strip()\n for pickup in pickups:\n entries.append(\n Collection(\n date=datetime.strptime(\n dt,\n \"%d %B %Y\",\n ).date(),\n t=pickup.text,\n icon=ICON_MAP.get(pickup.text.upper()),\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py"}], "after_files": [{"content": "from datetime import datetime\n\nimport requests\nfrom bs4 import BeautifulSoup, Tag\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Rhondda Cynon Taf County Borough Council\"\nDESCRIPTION = \"Source for rctcbc.gov.uk services for Rhondda Cynon Taf County Borough Council, Wales, UK\"\nURL = \"rctcbc.gov.uk\"\nTEST_CASES = {\n \"Test_001\": {\"uprn\": \"10024274791\"},\n \"Test_002\": {\"uprn\": \"100100718352\"},\n \"Test_003\": {\"uprn\": 100100733093},\n}\nICON_MAP = {\n \"BLACK BAGS\": \"mdi:trash-can\",\n \"RECYCLING\": \"mdi:recycle\",\n \"FOOD WASTE\": \"mdi:food\",\n \"GARDEN WASTE\": \"mdi:leaf\",\n}\n\n\nclass Source:\n def __init__(self, uprn):\n self._uprn = str(uprn)\n\n def extract_collections(self, calendar: Tag | BeautifulSoup) -> list[Collection]:\n calendar_month = calendar.find(\"div\", {\"class\": \"calendar-month\"})\n if not calendar_month or not isinstance(calendar_month, Tag):\n return []\n month = calendar_month.text.strip()\n calendar_days = calendar.find_all(\n \"div\", {\"class\": \"card-body card-body-padding\"}\n )\n\n entries = []\n for day in calendar_days:\n pickups = day.find_all(\"a\")\n if len(pickups) != 0:\n d = day.find(\"div\", {\"class\": \"card-title\"})\n if not d or not isinstance(d, Tag):\n continue\n dt = d.text.strip() + \" \" + month\n for pickup in pickups:\n entries.append(\n Collection(\n date=datetime.strptime(\n dt,\n \"%d %B %Y\",\n ).date(),\n t=pickup.text,\n icon=ICON_MAP.get(pickup.text.upper()),\n )\n )\n return entries\n\n def extract_from_printable_calendar(\n self, soup: BeautifulSoup\n ) -> list[Collection] | None:\n entries = []\n printable_calendar = soup.find(\"div\", {\"class\": \"printableCalendar\"})\n if not printable_calendar or not isinstance(printable_calendar, Tag):\n return None\n\n calendars = printable_calendar.find_all(\n \"div\", {\"class\": \"calendar-wrap onlyPrint\"}\n )\n if not calendars:\n return None\n\n for calendar in calendars:\n if not calendar or not isinstance(calendar, Tag):\n continue\n entries += self.extract_collections(calendar)\n return entries or None\n\n def fetch(self) -> list[Collection]:\n s = requests.Session()\n # website appears to display ~4 months worth of collections, so iterate through those pages\n entries: list[Collection] = []\n for month in range(0, 4):\n r = s.get(\n f\"https://www.rctcbc.gov.uk/EN/Resident/RecyclingandWaste/RecyclingandWasteCollectionDays.aspx?uprn={self._uprn}&month={month}\"\n )\n soup = BeautifulSoup(r.text, \"html.parser\")\n printable_calendar_entries = self.extract_from_printable_calendar(soup)\n if printable_calendar_entries:\n return printable_calendar_entries\n\n # OLD METHOD IF THEY EVER REMOVE THE PRINTABLE CALENDAR AGAIN:\n calendar = soup.find(\"div\", {\"class\": \"monthlyCalendar\"}) or soup\n if not isinstance(calendar, Tag):\n continue\n entries += self.extract_collections(calendar)\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py"}]} | 1,371 | 1,013 |
gh_patches_debug_9834 | rasdani/github-patches | git_diff | PaddlePaddle__models-350 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deep Speech Model will core dump when using nvidia-docker
The log is below. It seems that the same memory is being operated on by both C++ and Python.
```text
root@53ac34831359:/ds2/examples/librispeech# bash run_train.sh
----------- Configuration Arguments -----------
augment_conf_path: conf/augmentation.config
batch_size: 64
dev_manifest: data/librispeech/manifest.dev
init_model_path: None
is_local: 1
learning_rate: 0.0005
max_duration: 27.0
mean_std_path: data/librispeech/mean_std.npz
min_duration: 0.0
num_conv_layers: 2
num_iter_print: 100
num_passes: 50
num_proc_data: 12
num_rnn_layers: 3
output_model_dir: ./checkpoints/libri
rnn_layer_size: 2048
share_rnn_weights: 1
shuffle_method: batch_shuffle_clipped
specgram_type: linear
train_manifest: data/librispeech/manifest.train
trainer_count: 1
use_gpu: 1
use_gru: 0
use_sortagrad: 1
vocab_path: data/librispeech/vocab.txt
------------------------------------------------
I0917 06:55:44.676560 8826 Util.cpp:166] commandline: --use_gpu=1 --trainer_count=1
[INFO 2017-09-17 06:55:53,917 layers.py:2539] output for __conv_0__: c = 32, h = 81, w = 54, size = 139968
[INFO 2017-09-17 06:55:53,918 layers.py:3062] output for __batch_norm_0__: c = 32, h = 81, w = 54, size = 139968
[INFO 2017-09-17 06:55:53,919 layers.py:2539] output for __conv_1__: c = 32, h = 41, w = 54, size = 70848
[INFO 2017-09-17 06:55:53,920 layers.py:3062] output for __batch_norm_1__: c = 32, h = 41, w = 54, size = 70848
/ds2/checkpoints/libri
I0917 06:55:53.998572 8826 GradientMachine.cpp:85] Initing parameters..
I0917 06:55:55.796345 8826 GradientMachine.cpp:92] Init parameters done.
...........*** Aborted at 1505631395 (unix time) try "date -d @1505631395" if you are using GNU date ***
PC: @ 0x0 (unknown)
*** SIGSEGV (@0x50) received by PID 8826 (TID 0x7f646527a700) from PID 80; stack trace: ***
@ 0x7f656ec1f390 (unknown)
@ 0x7f656ee3773c (unknown)
@ 0x7f656ee40851 (unknown)
@ 0x7f656ee3b564 (unknown)
@ 0x7f656ee3fda9 (unknown)
@ 0x7f656e98756d (unknown)
@ 0x7f656ee3b564 (unknown)
@ 0x7f656e987624 __libc_dlopen_mode
@ 0x7f656e959a45 (unknown)
@ 0x7f656ec1ca99 __pthread_once_slow
@ 0x7f656e959b64 backtrace
@ 0x7f656c945ec3 check_callers.part.0
@ 0x7f656c946546 can_elide_temp_unary
@ 0x7f656c930f33 array_power
@ 0x55372c PyNumber_Power
@ 0x4c6050 PyEval_EvalFrameEx
@ 0x4c2765 PyEval_EvalCodeEx
@ 0x4ca8d1 PyEval_EvalFrameEx
@ 0x4c2765 PyEval_EvalCodeEx
@ 0x4ca099 PyEval_EvalFrameEx
@ 0x4c2765 PyEval_EvalCodeEx
@ 0x4ca8d1 PyEval_EvalFrameEx
@ 0x4c2765 PyEval_EvalCodeEx
@ 0x4ca099 PyEval_EvalFrameEx
@ 0x4c2765 PyEval_EvalCodeEx
@ 0x4ca8d1 PyEval_EvalFrameEx
@ 0x4c2765 PyEval_EvalCodeEx
@ 0x4ca8d1 PyEval_EvalFrameEx
@ 0x4c2765 PyEval_EvalCodeEx
@ 0x4ca8d1 PyEval_EvalFrameEx
@ 0x4c2765 PyEval_EvalCodeEx
@ 0x4de8b8 (unknown)
```
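Reading the trace bottom-up, `array_power` -> `can_elide_temp_unary` -> `check_callers` -> `backtrace`/`__libc_dlopen_mode` points at NumPy's temporary-elision check for the `**` operator rather than at the FFT itself, and it is that libc path which appears to fault inside the container. Treating that as the suspected trigger (an assumption, not a confirmed root cause), a common workaround is to keep the absolute value and the squaring as two separate statements, so the intermediate array is bound to a name and the elision fast path is not taken; that is also the shape of the patch further below:
```python
# Sketch of the workaround; the array shapes are arbitrary stand-ins for the real spectrogram.
import numpy as np

windows = np.random.rand(320, 99)            # stand-in for the strided frame matrix
weighting = np.hanning(320)[:, None]
fft = np.fft.rfft(windows * weighting, axis=0)

# Single expression that goes through the power/temporary-elision path:
#     fft = np.absolute(fft) ** 2
# Split into two statements instead:
mag = np.absolute(fft)                       # materialise the magnitude first
power = mag ** 2                             # then square it separately
```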
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `deep_speech_2/data_utils/featurizer/audio_featurizer.py`
Content:
```
1 """Contains the audio featurizer class."""
2 from __future__ import absolute_import
3 from __future__ import division
4 from __future__ import print_function
5
6 import numpy as np
7 from data_utils.utility import read_manifest
8 from data_utils.audio import AudioSegment
9 from python_speech_features import mfcc
10 from python_speech_features import delta
11
12
13 class AudioFeaturizer(object):
14 """Audio featurizer, for extracting features from audio contents of
15 AudioSegment or SpeechSegment.
16
17 Currently, it supports feature types of linear spectrogram and mfcc.
18
19 :param specgram_type: Specgram feature type. Options: 'linear'.
20 :type specgram_type: str
21 :param stride_ms: Striding size (in milliseconds) for generating frames.
22 :type stride_ms: float
23 :param window_ms: Window size (in milliseconds) for generating frames.
24 :type window_ms: float
25 :param max_freq: When specgram_type is 'linear', only FFT bins
26 corresponding to frequencies between [0, max_freq] are
27 returned; when specgram_type is 'mfcc', max_feq is the
28 highest band edge of mel filters.
29 :types max_freq: None|float
30 :param target_sample_rate: Audio are resampled (if upsampling or
31 downsampling is allowed) to this before
32 extracting spectrogram features.
33 :type target_sample_rate: float
34 :param use_dB_normalization: Whether to normalize the audio to a certain
35 decibels before extracting the features.
36 :type use_dB_normalization: bool
37 :param target_dB: Target audio decibels for normalization.
38 :type target_dB: float
39 """
40
41 def __init__(self,
42 specgram_type='linear',
43 stride_ms=10.0,
44 window_ms=20.0,
45 max_freq=None,
46 target_sample_rate=16000,
47 use_dB_normalization=True,
48 target_dB=-20):
49 self._specgram_type = specgram_type
50 self._stride_ms = stride_ms
51 self._window_ms = window_ms
52 self._max_freq = max_freq
53 self._target_sample_rate = target_sample_rate
54 self._use_dB_normalization = use_dB_normalization
55 self._target_dB = target_dB
56
57 def featurize(self,
58 audio_segment,
59 allow_downsampling=True,
60 allow_upsampling=True):
61 """Extract audio features from AudioSegment or SpeechSegment.
62
63 :param audio_segment: Audio/speech segment to extract features from.
64 :type audio_segment: AudioSegment|SpeechSegment
65 :param allow_downsampling: Whether to allow audio downsampling before
66 featurizing.
67 :type allow_downsampling: bool
68 :param allow_upsampling: Whether to allow audio upsampling before
69 featurizing.
70 :type allow_upsampling: bool
71 :return: Spectrogram audio feature in 2darray.
72 :rtype: ndarray
73 :raises ValueError: If audio sample rate is not supported.
74 """
75 # upsampling or downsampling
76 if ((audio_segment.sample_rate > self._target_sample_rate and
77 allow_downsampling) or
78 (audio_segment.sample_rate < self._target_sample_rate and
79 allow_upsampling)):
80 audio_segment.resample(self._target_sample_rate)
81 if audio_segment.sample_rate != self._target_sample_rate:
82 raise ValueError("Audio sample rate is not supported. "
83 "Turn allow_downsampling or allow up_sampling on.")
84 # decibel normalization
85 if self._use_dB_normalization:
86 audio_segment.normalize(target_db=self._target_dB)
87 # extract spectrogram
88 return self._compute_specgram(audio_segment.samples,
89 audio_segment.sample_rate)
90
91 def _compute_specgram(self, samples, sample_rate):
92 """Extract various audio features."""
93 if self._specgram_type == 'linear':
94 return self._compute_linear_specgram(
95 samples, sample_rate, self._stride_ms, self._window_ms,
96 self._max_freq)
97 elif self._specgram_type == 'mfcc':
98 return self._compute_mfcc(samples, sample_rate, self._stride_ms,
99 self._window_ms, self._max_freq)
100 else:
101 raise ValueError("Unknown specgram_type %s. "
102 "Supported values: linear." % self._specgram_type)
103
104 def _compute_linear_specgram(self,
105 samples,
106 sample_rate,
107 stride_ms=10.0,
108 window_ms=20.0,
109 max_freq=None,
110 eps=1e-14):
111 """Compute the linear spectrogram from FFT energy."""
112 if max_freq is None:
113 max_freq = sample_rate / 2
114 if max_freq > sample_rate / 2:
115 raise ValueError("max_freq must be greater than half of "
116 "sample rate.")
117 if stride_ms > window_ms:
118 raise ValueError("Stride size must not be greater than "
119 "window size.")
120 stride_size = int(0.001 * sample_rate * stride_ms)
121 window_size = int(0.001 * sample_rate * window_ms)
122 specgram, freqs = self._specgram_real(
123 samples,
124 window_size=window_size,
125 stride_size=stride_size,
126 sample_rate=sample_rate)
127 ind = np.where(freqs <= max_freq)[0][-1] + 1
128 return np.log(specgram[:ind, :] + eps)
129
130 def _specgram_real(self, samples, window_size, stride_size, sample_rate):
131 """Compute the spectrogram for samples from a real signal."""
132 # extract strided windows
133 truncate_size = (len(samples) - window_size) % stride_size
134 samples = samples[:len(samples) - truncate_size]
135 nshape = (window_size, (len(samples) - window_size) // stride_size + 1)
136 nstrides = (samples.strides[0], samples.strides[0] * stride_size)
137 windows = np.lib.stride_tricks.as_strided(
138 samples, shape=nshape, strides=nstrides)
139 assert np.all(
140 windows[:, 1] == samples[stride_size:(stride_size + window_size)])
141 # window weighting, squared Fast Fourier Transform (fft), scaling
142 weighting = np.hanning(window_size)[:, None]
143 fft = np.fft.rfft(windows * weighting, axis=0)
144 fft = np.absolute(fft)**2
145 scale = np.sum(weighting**2) * sample_rate
146 fft[1:-1, :] *= (2.0 / scale)
147 fft[(0, -1), :] /= scale
148 # prepare fft frequency list
149 freqs = float(sample_rate) / window_size * np.arange(fft.shape[0])
150 return fft, freqs
151
152 def _compute_mfcc(self,
153 samples,
154 sample_rate,
155 stride_ms=10.0,
156 window_ms=20.0,
157 max_freq=None):
158 """Compute mfcc from samples."""
159 if max_freq is None:
160 max_freq = sample_rate / 2
161 if max_freq > sample_rate / 2:
162 raise ValueError("max_freq must not be greater than half of "
163 "sample rate.")
164 if stride_ms > window_ms:
165 raise ValueError("Stride size must not be greater than "
166 "window size.")
167 # compute the 13 cepstral coefficients, and the first one is replaced
168 # by log(frame energy)
169 mfcc_feat = mfcc(
170 signal=samples,
171 samplerate=sample_rate,
172 winlen=0.001 * window_ms,
173 winstep=0.001 * stride_ms,
174 highfreq=max_freq)
175 # Deltas
176 d_mfcc_feat = delta(mfcc_feat, 2)
177 # Deltas-Deltas
178 dd_mfcc_feat = delta(d_mfcc_feat, 2)
179 # transpose
180 mfcc_feat = np.transpose(mfcc_feat)
181 d_mfcc_feat = np.transpose(d_mfcc_feat)
182 dd_mfcc_feat = np.transpose(dd_mfcc_feat)
183 # concat above three features
184 concat_mfcc_feat = np.concatenate(
185 (mfcc_feat, d_mfcc_feat, dd_mfcc_feat))
186 return concat_mfcc_feat
187
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/deep_speech_2/data_utils/featurizer/audio_featurizer.py b/deep_speech_2/data_utils/featurizer/audio_featurizer.py
--- a/deep_speech_2/data_utils/featurizer/audio_featurizer.py
+++ b/deep_speech_2/data_utils/featurizer/audio_featurizer.py
@@ -141,7 +141,8 @@
# window weighting, squared Fast Fourier Transform (fft), scaling
weighting = np.hanning(window_size)[:, None]
fft = np.fft.rfft(windows * weighting, axis=0)
- fft = np.absolute(fft)**2
+ fft = np.absolute(fft)
+ fft = fft**2
scale = np.sum(weighting**2) * sample_rate
fft[1:-1, :] *= (2.0 / scale)
fft[(0, -1), :] /= scale
| {"golden_diff": "diff --git a/deep_speech_2/data_utils/featurizer/audio_featurizer.py b/deep_speech_2/data_utils/featurizer/audio_featurizer.py\n--- a/deep_speech_2/data_utils/featurizer/audio_featurizer.py\n+++ b/deep_speech_2/data_utils/featurizer/audio_featurizer.py\n@@ -141,7 +141,8 @@\n # window weighting, squared Fast Fourier Transform (fft), scaling\n weighting = np.hanning(window_size)[:, None]\n fft = np.fft.rfft(windows * weighting, axis=0)\n- fft = np.absolute(fft)**2\n+ fft = np.absolute(fft)\n+ fft = fft**2\n scale = np.sum(weighting**2) * sample_rate\n fft[1:-1, :] *= (2.0 / scale)\n fft[(0, -1), :] /= scale\n", "issue": "Deep Speech Model will core dump when using nvidia-docker\nThe log as below. It seems that the same memory is operated by C++ and Python.\r\n\r\n\r\n```text\r\nroot@53ac34831359:/ds2/examples/librispeech# bash run_train.sh\r\n----------- Configuration Arguments -----------\r\naugment_conf_path: conf/augmentation.config\r\nbatch_size: 64\r\ndev_manifest: data/librispeech/manifest.dev\r\ninit_model_path: None\r\nis_local: 1\r\nlearning_rate: 0.0005\r\nmax_duration: 27.0\r\nmean_std_path: data/librispeech/mean_std.npz\r\nmin_duration: 0.0\r\nnum_conv_layers: 2\r\nnum_iter_print: 100\r\nnum_passes: 50\r\nnum_proc_data: 12\r\nnum_rnn_layers: 3\r\noutput_model_dir: ./checkpoints/libri\r\nrnn_layer_size: 2048\r\nshare_rnn_weights: 1\r\nshuffle_method: batch_shuffle_clipped\r\nspecgram_type: linear\r\ntrain_manifest: data/librispeech/manifest.train\r\ntrainer_count: 1\r\nuse_gpu: 1\r\nuse_gru: 0\r\nuse_sortagrad: 1\r\nvocab_path: data/librispeech/vocab.txt\r\n------------------------------------------------\r\nI0917 06:55:44.676560 8826 Util.cpp:166] commandline: --use_gpu=1 --trainer_count=1\r\n[INFO 2017-09-17 06:55:53,917 layers.py:2539] output for __conv_0__: c = 32, h = 81, w = 54, size = 139968\r\n[INFO 2017-09-17 06:55:53,918 layers.py:3062] output for __batch_norm_0__: c = 32, h = 81, w = 54, size = 139968\r\n[INFO 2017-09-17 06:55:53,919 layers.py:2539] output for __conv_1__: c = 32, h = 41, w = 54, size = 70848\r\n[INFO 2017-09-17 06:55:53,920 layers.py:3062] output for __batch_norm_1__: c = 32, h = 41, w = 54, size = 70848\r\n/ds2/checkpoints/libri\r\nI0917 06:55:53.998572 8826 GradientMachine.cpp:85] Initing parameters..\r\nI0917 06:55:55.796345 8826 GradientMachine.cpp:92] Init parameters done.\r\n...........*** Aborted at 1505631395 (unix time) try \"date -d @1505631395\" if you are using GNU date ***\r\nPC: @ 0x0 (unknown)\r\n*** SIGSEGV (@0x50) received by PID 8826 (TID 0x7f646527a700) from PID 80; stack trace: ***\r\n @ 0x7f656ec1f390 (unknown)\r\n @ 0x7f656ee3773c (unknown)\r\n @ 0x7f656ee40851 (unknown)\r\n @ 0x7f656ee3b564 (unknown)\r\n @ 0x7f656ee3fda9 (unknown)\r\n @ 0x7f656e98756d (unknown)\r\n @ 0x7f656ee3b564 (unknown)\r\n @ 0x7f656e987624 __libc_dlopen_mode\r\n @ 0x7f656e959a45 (unknown)\r\n @ 0x7f656ec1ca99 __pthread_once_slow\r\n @ 0x7f656e959b64 backtrace\r\n @ 0x7f656c945ec3 check_callers.part.0\r\n @ 0x7f656c946546 can_elide_temp_unary\r\n @ 0x7f656c930f33 array_power\r\n @ 0x55372c PyNumber_Power\r\n @ 0x4c6050 PyEval_EvalFrameEx\r\n @ 0x4c2765 PyEval_EvalCodeEx\r\n @ 0x4ca8d1 PyEval_EvalFrameEx\r\n @ 0x4c2765 PyEval_EvalCodeEx\r\n @ 0x4ca099 PyEval_EvalFrameEx\r\n @ 0x4c2765 PyEval_EvalCodeEx\r\n @ 0x4ca8d1 PyEval_EvalFrameEx\r\n @ 0x4c2765 PyEval_EvalCodeEx\r\n @ 0x4ca099 PyEval_EvalFrameEx\r\n @ 0x4c2765 PyEval_EvalCodeEx\r\n @ 0x4ca8d1 PyEval_EvalFrameEx\r\n @ 0x4c2765 PyEval_EvalCodeEx\r\n @ 0x4ca8d1 PyEval_EvalFrameEx\r\n @ 
0x4c2765 PyEval_EvalCodeEx\r\n @ 0x4ca8d1 PyEval_EvalFrameEx\r\n @ 0x4c2765 PyEval_EvalCodeEx\r\n @ 0x4de8b8 (unknown)\r\n```\n", "before_files": [{"content": "\"\"\"Contains the audio featurizer class.\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nfrom data_utils.utility import read_manifest\nfrom data_utils.audio import AudioSegment\nfrom python_speech_features import mfcc\nfrom python_speech_features import delta\n\n\nclass AudioFeaturizer(object):\n \"\"\"Audio featurizer, for extracting features from audio contents of\n AudioSegment or SpeechSegment.\n\n Currently, it supports feature types of linear spectrogram and mfcc.\n\n :param specgram_type: Specgram feature type. Options: 'linear'.\n :type specgram_type: str\n :param stride_ms: Striding size (in milliseconds) for generating frames.\n :type stride_ms: float\n :param window_ms: Window size (in milliseconds) for generating frames.\n :type window_ms: float\n :param max_freq: When specgram_type is 'linear', only FFT bins\n corresponding to frequencies between [0, max_freq] are\n returned; when specgram_type is 'mfcc', max_feq is the\n highest band edge of mel filters.\n :types max_freq: None|float\n :param target_sample_rate: Audio are resampled (if upsampling or\n downsampling is allowed) to this before\n extracting spectrogram features.\n :type target_sample_rate: float\n :param use_dB_normalization: Whether to normalize the audio to a certain\n decibels before extracting the features.\n :type use_dB_normalization: bool\n :param target_dB: Target audio decibels for normalization.\n :type target_dB: float\n \"\"\"\n\n def __init__(self,\n specgram_type='linear',\n stride_ms=10.0,\n window_ms=20.0,\n max_freq=None,\n target_sample_rate=16000,\n use_dB_normalization=True,\n target_dB=-20):\n self._specgram_type = specgram_type\n self._stride_ms = stride_ms\n self._window_ms = window_ms\n self._max_freq = max_freq\n self._target_sample_rate = target_sample_rate\n self._use_dB_normalization = use_dB_normalization\n self._target_dB = target_dB\n\n def featurize(self,\n audio_segment,\n allow_downsampling=True,\n allow_upsampling=True):\n \"\"\"Extract audio features from AudioSegment or SpeechSegment.\n\n :param audio_segment: Audio/speech segment to extract features from.\n :type audio_segment: AudioSegment|SpeechSegment\n :param allow_downsampling: Whether to allow audio downsampling before\n featurizing.\n :type allow_downsampling: bool\n :param allow_upsampling: Whether to allow audio upsampling before\n featurizing.\n :type allow_upsampling: bool\n :return: Spectrogram audio feature in 2darray.\n :rtype: ndarray\n :raises ValueError: If audio sample rate is not supported.\n \"\"\"\n # upsampling or downsampling\n if ((audio_segment.sample_rate > self._target_sample_rate and\n allow_downsampling) or\n (audio_segment.sample_rate < self._target_sample_rate and\n allow_upsampling)):\n audio_segment.resample(self._target_sample_rate)\n if audio_segment.sample_rate != self._target_sample_rate:\n raise ValueError(\"Audio sample rate is not supported. 
\"\n \"Turn allow_downsampling or allow up_sampling on.\")\n # decibel normalization\n if self._use_dB_normalization:\n audio_segment.normalize(target_db=self._target_dB)\n # extract spectrogram\n return self._compute_specgram(audio_segment.samples,\n audio_segment.sample_rate)\n\n def _compute_specgram(self, samples, sample_rate):\n \"\"\"Extract various audio features.\"\"\"\n if self._specgram_type == 'linear':\n return self._compute_linear_specgram(\n samples, sample_rate, self._stride_ms, self._window_ms,\n self._max_freq)\n elif self._specgram_type == 'mfcc':\n return self._compute_mfcc(samples, sample_rate, self._stride_ms,\n self._window_ms, self._max_freq)\n else:\n raise ValueError(\"Unknown specgram_type %s. \"\n \"Supported values: linear.\" % self._specgram_type)\n\n def _compute_linear_specgram(self,\n samples,\n sample_rate,\n stride_ms=10.0,\n window_ms=20.0,\n max_freq=None,\n eps=1e-14):\n \"\"\"Compute the linear spectrogram from FFT energy.\"\"\"\n if max_freq is None:\n max_freq = sample_rate / 2\n if max_freq > sample_rate / 2:\n raise ValueError(\"max_freq must be greater than half of \"\n \"sample rate.\")\n if stride_ms > window_ms:\n raise ValueError(\"Stride size must not be greater than \"\n \"window size.\")\n stride_size = int(0.001 * sample_rate * stride_ms)\n window_size = int(0.001 * sample_rate * window_ms)\n specgram, freqs = self._specgram_real(\n samples,\n window_size=window_size,\n stride_size=stride_size,\n sample_rate=sample_rate)\n ind = np.where(freqs <= max_freq)[0][-1] + 1\n return np.log(specgram[:ind, :] + eps)\n\n def _specgram_real(self, samples, window_size, stride_size, sample_rate):\n \"\"\"Compute the spectrogram for samples from a real signal.\"\"\"\n # extract strided windows\n truncate_size = (len(samples) - window_size) % stride_size\n samples = samples[:len(samples) - truncate_size]\n nshape = (window_size, (len(samples) - window_size) // stride_size + 1)\n nstrides = (samples.strides[0], samples.strides[0] * stride_size)\n windows = np.lib.stride_tricks.as_strided(\n samples, shape=nshape, strides=nstrides)\n assert np.all(\n windows[:, 1] == samples[stride_size:(stride_size + window_size)])\n # window weighting, squared Fast Fourier Transform (fft), scaling\n weighting = np.hanning(window_size)[:, None]\n fft = np.fft.rfft(windows * weighting, axis=0)\n fft = np.absolute(fft)**2\n scale = np.sum(weighting**2) * sample_rate\n fft[1:-1, :] *= (2.0 / scale)\n fft[(0, -1), :] /= scale\n # prepare fft frequency list\n freqs = float(sample_rate) / window_size * np.arange(fft.shape[0])\n return fft, freqs\n\n def _compute_mfcc(self,\n samples,\n sample_rate,\n stride_ms=10.0,\n window_ms=20.0,\n max_freq=None):\n \"\"\"Compute mfcc from samples.\"\"\"\n if max_freq is None:\n max_freq = sample_rate / 2\n if max_freq > sample_rate / 2:\n raise ValueError(\"max_freq must not be greater than half of \"\n \"sample rate.\")\n if stride_ms > window_ms:\n raise ValueError(\"Stride size must not be greater than \"\n \"window size.\")\n # compute the 13 cepstral coefficients, and the first one is replaced\n # by log(frame energy)\n mfcc_feat = mfcc(\n signal=samples,\n samplerate=sample_rate,\n winlen=0.001 * window_ms,\n winstep=0.001 * stride_ms,\n highfreq=max_freq)\n # Deltas\n d_mfcc_feat = delta(mfcc_feat, 2)\n # Deltas-Deltas\n dd_mfcc_feat = delta(d_mfcc_feat, 2)\n # transpose\n mfcc_feat = np.transpose(mfcc_feat)\n d_mfcc_feat = np.transpose(d_mfcc_feat)\n dd_mfcc_feat = np.transpose(dd_mfcc_feat)\n # concat above three features\n 
concat_mfcc_feat = np.concatenate(\n (mfcc_feat, d_mfcc_feat, dd_mfcc_feat))\n return concat_mfcc_feat\n", "path": "deep_speech_2/data_utils/featurizer/audio_featurizer.py"}], "after_files": [{"content": "\"\"\"Contains the audio featurizer class.\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nfrom data_utils.utility import read_manifest\nfrom data_utils.audio import AudioSegment\nfrom python_speech_features import mfcc\nfrom python_speech_features import delta\n\n\nclass AudioFeaturizer(object):\n \"\"\"Audio featurizer, for extracting features from audio contents of\n AudioSegment or SpeechSegment.\n\n Currently, it supports feature types of linear spectrogram and mfcc.\n\n :param specgram_type: Specgram feature type. Options: 'linear'.\n :type specgram_type: str\n :param stride_ms: Striding size (in milliseconds) for generating frames.\n :type stride_ms: float\n :param window_ms: Window size (in milliseconds) for generating frames.\n :type window_ms: float\n :param max_freq: When specgram_type is 'linear', only FFT bins\n corresponding to frequencies between [0, max_freq] are\n returned; when specgram_type is 'mfcc', max_feq is the\n highest band edge of mel filters.\n :types max_freq: None|float\n :param target_sample_rate: Audio are resampled (if upsampling or\n downsampling is allowed) to this before\n extracting spectrogram features.\n :type target_sample_rate: float\n :param use_dB_normalization: Whether to normalize the audio to a certain\n decibels before extracting the features.\n :type use_dB_normalization: bool\n :param target_dB: Target audio decibels for normalization.\n :type target_dB: float\n \"\"\"\n\n def __init__(self,\n specgram_type='linear',\n stride_ms=10.0,\n window_ms=20.0,\n max_freq=None,\n target_sample_rate=16000,\n use_dB_normalization=True,\n target_dB=-20):\n self._specgram_type = specgram_type\n self._stride_ms = stride_ms\n self._window_ms = window_ms\n self._max_freq = max_freq\n self._target_sample_rate = target_sample_rate\n self._use_dB_normalization = use_dB_normalization\n self._target_dB = target_dB\n\n def featurize(self,\n audio_segment,\n allow_downsampling=True,\n allow_upsampling=True):\n \"\"\"Extract audio features from AudioSegment or SpeechSegment.\n\n :param audio_segment: Audio/speech segment to extract features from.\n :type audio_segment: AudioSegment|SpeechSegment\n :param allow_downsampling: Whether to allow audio downsampling before\n featurizing.\n :type allow_downsampling: bool\n :param allow_upsampling: Whether to allow audio upsampling before\n featurizing.\n :type allow_upsampling: bool\n :return: Spectrogram audio feature in 2darray.\n :rtype: ndarray\n :raises ValueError: If audio sample rate is not supported.\n \"\"\"\n # upsampling or downsampling\n if ((audio_segment.sample_rate > self._target_sample_rate and\n allow_downsampling) or\n (audio_segment.sample_rate < self._target_sample_rate and\n allow_upsampling)):\n audio_segment.resample(self._target_sample_rate)\n if audio_segment.sample_rate != self._target_sample_rate:\n raise ValueError(\"Audio sample rate is not supported. 
\"\n \"Turn allow_downsampling or allow up_sampling on.\")\n # decibel normalization\n if self._use_dB_normalization:\n audio_segment.normalize(target_db=self._target_dB)\n # extract spectrogram\n return self._compute_specgram(audio_segment.samples,\n audio_segment.sample_rate)\n\n def _compute_specgram(self, samples, sample_rate):\n \"\"\"Extract various audio features.\"\"\"\n if self._specgram_type == 'linear':\n return self._compute_linear_specgram(\n samples, sample_rate, self._stride_ms, self._window_ms,\n self._max_freq)\n elif self._specgram_type == 'mfcc':\n return self._compute_mfcc(samples, sample_rate, self._stride_ms,\n self._window_ms, self._max_freq)\n else:\n raise ValueError(\"Unknown specgram_type %s. \"\n \"Supported values: linear.\" % self._specgram_type)\n\n def _compute_linear_specgram(self,\n samples,\n sample_rate,\n stride_ms=10.0,\n window_ms=20.0,\n max_freq=None,\n eps=1e-14):\n \"\"\"Compute the linear spectrogram from FFT energy.\"\"\"\n if max_freq is None:\n max_freq = sample_rate / 2\n if max_freq > sample_rate / 2:\n raise ValueError(\"max_freq must be greater than half of \"\n \"sample rate.\")\n if stride_ms > window_ms:\n raise ValueError(\"Stride size must not be greater than \"\n \"window size.\")\n stride_size = int(0.001 * sample_rate * stride_ms)\n window_size = int(0.001 * sample_rate * window_ms)\n specgram, freqs = self._specgram_real(\n samples,\n window_size=window_size,\n stride_size=stride_size,\n sample_rate=sample_rate)\n ind = np.where(freqs <= max_freq)[0][-1] + 1\n return np.log(specgram[:ind, :] + eps)\n\n def _specgram_real(self, samples, window_size, stride_size, sample_rate):\n \"\"\"Compute the spectrogram for samples from a real signal.\"\"\"\n # extract strided windows\n truncate_size = (len(samples) - window_size) % stride_size\n samples = samples[:len(samples) - truncate_size]\n nshape = (window_size, (len(samples) - window_size) // stride_size + 1)\n nstrides = (samples.strides[0], samples.strides[0] * stride_size)\n windows = np.lib.stride_tricks.as_strided(\n samples, shape=nshape, strides=nstrides)\n assert np.all(\n windows[:, 1] == samples[stride_size:(stride_size + window_size)])\n # window weighting, squared Fast Fourier Transform (fft), scaling\n weighting = np.hanning(window_size)[:, None]\n fft = np.fft.rfft(windows * weighting, axis=0)\n fft = np.absolute(fft)\n fft = fft**2\n scale = np.sum(weighting**2) * sample_rate\n fft[1:-1, :] *= (2.0 / scale)\n fft[(0, -1), :] /= scale\n # prepare fft frequency list\n freqs = float(sample_rate) / window_size * np.arange(fft.shape[0])\n return fft, freqs\n\n def _compute_mfcc(self,\n samples,\n sample_rate,\n stride_ms=10.0,\n window_ms=20.0,\n max_freq=None):\n \"\"\"Compute mfcc from samples.\"\"\"\n if max_freq is None:\n max_freq = sample_rate / 2\n if max_freq > sample_rate / 2:\n raise ValueError(\"max_freq must not be greater than half of \"\n \"sample rate.\")\n if stride_ms > window_ms:\n raise ValueError(\"Stride size must not be greater than \"\n \"window size.\")\n # compute the 13 cepstral coefficients, and the first one is replaced\n # by log(frame energy)\n mfcc_feat = mfcc(\n signal=samples,\n samplerate=sample_rate,\n winlen=0.001 * window_ms,\n winstep=0.001 * stride_ms,\n highfreq=max_freq)\n # Deltas\n d_mfcc_feat = delta(mfcc_feat, 2)\n # Deltas-Deltas\n dd_mfcc_feat = delta(d_mfcc_feat, 2)\n # transpose\n mfcc_feat = np.transpose(mfcc_feat)\n d_mfcc_feat = np.transpose(d_mfcc_feat)\n dd_mfcc_feat = np.transpose(dd_mfcc_feat)\n # concat above three 
features\n concat_mfcc_feat = np.concatenate(\n (mfcc_feat, d_mfcc_feat, dd_mfcc_feat))\n return concat_mfcc_feat\n", "path": "deep_speech_2/data_utils/featurizer/audio_featurizer.py"}]} | 3,933 | 206 |
gh_patches_debug_36601 | rasdani/github-patches | git_diff | nipy__nipype-1489 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Shim warning when using IPython >= 4.0.0
One of our C-PAC users came across the following deprecation warning during their pipeline run:
> /clusterdata/apps/anaconda/lib/python2.7/site-packages/IPython/kernel/__init__.py:13: ShimWarning: The `IPython.kernel` package has been deprecated. You should import from ipykernel or jupyter_client instead.
>
> "You should import from ipykernel or jupyter_client instead.", ShimWarning)
>
> /clusterdata/apps/anaconda/lib/python2.7/site-packages/IPython/parallel.py:13: ShimWarning: The `IPython.parallel` package has been deprecated. You should import from ipyparallel instead.
>
> "You should import from ipyparallel instead.", ShimWarning)
I was able to determine that this is because, starting in IPython 4.0.0, the IPython developers have chosen to separate IPython.parallel and IPython.kernel into separate packages. Is there some way that nipype could take this into account by determining if the IPython version number is >= 4.0.0, stopping execution if so (and the packages are not installed), and warning users to download these two packages (or some other appropriate behavior)? If the packages are installed, the new import statements could be used instead.
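Something along these lines is what I have in mind — just a sketch reusing the plugin's existing `IPython_not_loaded` flag; the fallback import and the flag handling are illustrative, not tested against nipype:

```
IPython_not_loaded = False
try:
    import ipyparallel as parallel        # new home of IPython.parallel (IPython >= 4.0)
except ImportError:
    try:
        from IPython import parallel      # old location; emits ShimWarning on IPython 4.x
    except ImportError:
        IPython_not_loaded = True         # neither package available; warn the user
```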
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nipype/pipeline/plugins/ipython.py`
Content:
```
1 # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
2 # vi: set ft=python sts=4 ts=4 sw=4 et:
3 """Parallel workflow execution via IPython controller
4 """
5
6 from future import standard_library
7 standard_library.install_aliases()
8
9 from pickle import dumps
10
11 import sys
12
13 IPython_not_loaded = False
14 try:
15 from IPython import __version__ as IPyversion
16 from ipyparallel.error import TimeoutError
17 except:
18 IPython_not_loaded = True
19
20 from .base import (DistributedPluginBase, logger, report_crash)
21
22
23 def execute_task(pckld_task, node_config, updatehash):
24 from socket import gethostname
25 from traceback import format_exc
26 from nipype import config, logging
27 traceback = None
28 result = None
29 import os
30 cwd = os.getcwd()
31 try:
32 config.update_config(node_config)
33 logging.update_logging(config)
34 from pickle import loads
35 task = loads(pckld_task)
36 result = task.run(updatehash=updatehash)
37 except:
38 traceback = format_exc()
39 result = task.result
40 os.chdir(cwd)
41 return result, traceback, gethostname()
42
43
44 class IPythonPlugin(DistributedPluginBase):
45 """Execute workflow with ipython
46 """
47
48 def __init__(self, plugin_args=None):
49 if IPython_not_loaded:
50 raise ImportError('ipyparallel could not be imported')
51 super(IPythonPlugin, self).__init__(plugin_args=plugin_args)
52 self.iparallel = None
53 self.taskclient = None
54 self.taskmap = {}
55 self._taskid = 0
56
57 def run(self, graph, config, updatehash=False):
58 """Executes a pre-defined pipeline is distributed approaches
59 based on IPython's ipyparallel processing interface
60 """
61 # retrieve clients again
62 try:
63 name = 'ipyparallel'
64 __import__(name)
65 self.iparallel = sys.modules[name]
66 except ImportError:
67 raise ImportError("Ipython kernel not found. Parallel execution "
68 "will be unavailable")
69 try:
70 self.taskclient = self.iparallel.Client()
71 except Exception as e:
72 if isinstance(e, TimeoutError):
73 raise Exception("No IPython clients found.")
74 if isinstance(e, IOError):
75 raise Exception("ipcluster/ipcontroller has not been started")
76 if isinstance(e, ValueError):
77 raise Exception("Ipython kernel not installed")
78 raise e
79 return super(IPythonPlugin, self).run(graph, config, updatehash=updatehash)
80
81 def _get_result(self, taskid):
82 if taskid not in self.taskmap:
83 raise ValueError('Task %d not in pending list' % taskid)
84 if self.taskmap[taskid].ready():
85 result, traceback, hostname = self.taskmap[taskid].get()
86 result_out = dict(result=None, traceback=None)
87 result_out['result'] = result
88 result_out['traceback'] = traceback
89 result_out['hostname'] = hostname
90 return result_out
91 else:
92 return None
93
94 def _submit_job(self, node, updatehash=False):
95 pckld_node = dumps(node, 2)
96 result_object = self.taskclient.load_balanced_view().apply(execute_task,
97 pckld_node,
98 node.config,
99 updatehash)
100 self._taskid += 1
101 self.taskmap[self._taskid] = result_object
102 return self._taskid
103
104 def _report_crash(self, node, result=None):
105 if result and result['traceback']:
106 node._result = result['result']
107 node._traceback = result['traceback']
108 return report_crash(node,
109 traceback=result['traceback'])
110 else:
111 return report_crash(node)
112
113 def _clear_task(self, taskid):
114 if IPyversion >= '0.11':
115 logger.debug("Clearing id: %d" % taskid)
116 self.taskclient.purge_results(self.taskmap[taskid])
117 del self.taskmap[taskid]
118
```
Path: `nipype/pipeline/plugins/ipythonx.py`
Content:
```
1 # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
2 # vi: set ft=python sts=4 ts=4 sw=4 et:
3 """Parallel workflow execution via IPython controller
4 """
5
6 import sys
7
8 IPython_not_loaded = False
9 try:
10 from IPython import __version__ as IPyversion
11 from IPython.kernel.contexts import ConnectionRefusedError
12 except:
13 IPython_not_loaded = True
14
15
16 from .base import (DistributedPluginBase, logger, report_crash)
17
18
19 class IPythonXPlugin(DistributedPluginBase):
20 """Execute workflow with ipython
21 """
22
23 def __init__(self, plugin_args=None):
24 if IPython_not_loaded:
25 raise ImportError('ipyparallel could not be imported')
26 super(IPythonXPlugin, self).__init__(plugin_args=plugin_args)
27 self.ipyclient = None
28 self.taskclient = None
29
30 def run(self, graph, config, updatehash=False):
31 """Executes a pre-defined pipeline is distributed approaches
32 based on IPython's ipyparallel processing interface
33 """
34 # retrieve clients again
35 try:
36 name = 'IPython.kernel.client'
37 __import__(name)
38 self.ipyclient = sys.modules[name]
39 except ImportError:
40 raise ImportError("Ipython kernel not found. Parallel execution "
41 "will be unavailable")
42 try:
43 self.taskclient = self.ipyclient.TaskClient()
44 except Exception as e:
45 if isinstance(e, ConnectionRefusedError):
46 raise Exception("No IPython clients found.")
47 if isinstance(e, ValueError):
48 raise Exception("Ipython kernel not installed")
49 return super(IPythonXPlugin, self).run(graph, config, updatehash=updatehash)
50
51 def _get_result(self, taskid):
52 return self.taskclient.get_task_result(taskid, block=False)
53
54 def _submit_job(self, node, updatehash=False):
55 cmdstr = """import sys
56 from traceback import format_exception
57 traceback=None
58 result=None
59 try:
60 result = task.run(updatehash=updatehash)
61 except:
62 etype, eval, etr = sys.exc_info()
63 traceback = format_exception(etype,eval,etr)
64 result = task.result
65 """
66 task = self.ipyclient.StringTask(cmdstr,
67 push=dict(task=node,
68 updatehash=updatehash),
69 pull=['result', 'traceback'])
70 return self.taskclient.run(task, block=False)
71
72 def _report_crash(self, node, result=None):
73 if result and result['traceback']:
74 node._result = result['result']
75 node._traceback = result['traceback']
76 return report_crash(node,
77 traceback=result['traceback'])
78 else:
79 return report_crash(node)
80
81 def _clear_task(self, taskid):
82 if IPyversion >= '0.10.1':
83 logger.debug("Clearing id: %d" % taskid)
84 self.taskclient.clear(taskid)
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nipype/pipeline/plugins/ipython.py b/nipype/pipeline/plugins/ipython.py
--- a/nipype/pipeline/plugins/ipython.py
+++ b/nipype/pipeline/plugins/ipython.py
@@ -5,7 +5,6 @@
from future import standard_library
standard_library.install_aliases()
-
from pickle import dumps
import sys
@@ -47,7 +46,7 @@
def __init__(self, plugin_args=None):
if IPython_not_loaded:
- raise ImportError('ipyparallel could not be imported')
+ raise ImportError('Please install ipyparallel to use this plugin.')
super(IPythonPlugin, self).__init__(plugin_args=plugin_args)
self.iparallel = None
self.taskclient = None
@@ -64,7 +63,7 @@
__import__(name)
self.iparallel = sys.modules[name]
except ImportError:
- raise ImportError("Ipython kernel not found. Parallel execution "
+ raise ImportError("ipyparallel not found. Parallel execution "
"will be unavailable")
try:
self.taskclient = self.iparallel.Client()
diff --git a/nipype/pipeline/plugins/ipythonx.py b/nipype/pipeline/plugins/ipythonx.py
--- a/nipype/pipeline/plugins/ipythonx.py
+++ b/nipype/pipeline/plugins/ipythonx.py
@@ -5,11 +5,13 @@
import sys
+from ...interfaces.base import LooseVersion
IPython_not_loaded = False
try:
from IPython import __version__ as IPyversion
- from IPython.kernel.contexts import ConnectionRefusedError
-except:
+ if LooseVersion(IPyversion) < LooseVersion('0.11'):
+ from IPython.kernel.contexts import ConnectionRefusedError
+except ImportError:
IPython_not_loaded = True
@@ -21,6 +23,12 @@
"""
def __init__(self, plugin_args=None):
+ if LooseVersion(IPyversion) > LooseVersion('0.10.1'):
+ raise EnvironmentError(('The IPythonX plugin can only be used with'
+ ' older IPython versions. Please use the '
+ 'IPython plugin instead.'
+ ))
+ DeprecationWarning('This plugin will be deprecated as of version 0.13')
if IPython_not_loaded:
raise ImportError('ipyparallel could not be imported')
super(IPythonXPlugin, self).__init__(plugin_args=plugin_args)
| {"golden_diff": "diff --git a/nipype/pipeline/plugins/ipython.py b/nipype/pipeline/plugins/ipython.py\n--- a/nipype/pipeline/plugins/ipython.py\n+++ b/nipype/pipeline/plugins/ipython.py\n@@ -5,7 +5,6 @@\n \n from future import standard_library\n standard_library.install_aliases()\n-\n from pickle import dumps\n \n import sys\n@@ -47,7 +46,7 @@\n \n def __init__(self, plugin_args=None):\n if IPython_not_loaded:\n- raise ImportError('ipyparallel could not be imported')\n+ raise ImportError('Please install ipyparallel to use this plugin.')\n super(IPythonPlugin, self).__init__(plugin_args=plugin_args)\n self.iparallel = None\n self.taskclient = None\n@@ -64,7 +63,7 @@\n __import__(name)\n self.iparallel = sys.modules[name]\n except ImportError:\n- raise ImportError(\"Ipython kernel not found. Parallel execution \"\n+ raise ImportError(\"ipyparallel not found. Parallel execution \"\n \"will be unavailable\")\n try:\n self.taskclient = self.iparallel.Client()\ndiff --git a/nipype/pipeline/plugins/ipythonx.py b/nipype/pipeline/plugins/ipythonx.py\n--- a/nipype/pipeline/plugins/ipythonx.py\n+++ b/nipype/pipeline/plugins/ipythonx.py\n@@ -5,11 +5,13 @@\n \n import sys\n \n+from ...interfaces.base import LooseVersion\n IPython_not_loaded = False\n try:\n from IPython import __version__ as IPyversion\n- from IPython.kernel.contexts import ConnectionRefusedError\n-except:\n+ if LooseVersion(IPyversion) < LooseVersion('0.11'):\n+ from IPython.kernel.contexts import ConnectionRefusedError\n+except ImportError:\n IPython_not_loaded = True\n \n \n@@ -21,6 +23,12 @@\n \"\"\"\n \n def __init__(self, plugin_args=None):\n+ if LooseVersion(IPyversion) > LooseVersion('0.10.1'):\n+ raise EnvironmentError(('The IPythonX plugin can only be used with'\n+ ' older IPython versions. Please use the '\n+ 'IPython plugin instead.'\n+ ))\n+ DeprecationWarning('This plugin will be deprecated as of version 0.13')\n if IPython_not_loaded:\n raise ImportError('ipyparallel could not be imported')\n super(IPythonXPlugin, self).__init__(plugin_args=plugin_args)\n", "issue": "Shim warning when using IPython >= 4.0.0\nOne of our C-PAC users came across the following deprecation warning during their pipeline run:\n\n> /clusterdata/apps/anaconda/lib/python2.7/site-packages/IPython/kernel/**init**.py:13: ShimWarning: The `IPython.kernel` package has been deprecated. You should import from ipykernel or jupyter_client instead.\n> \n> \"You should import from ipykernel or jupyter_client instead.\", ShimWarning)\n> \n> /clusterdata/apps/anaconda/lib/python2.7/site-packages/IPython/parallel.py:13: ShimWarning: The `IPython.parallel` package has been deprecated. You should import from ipyparallel instead.\n> \n> \"You should import from ipyparallel instead.\", ShimWarning)\n\nI was able to determine that this is because, starting in IPython 4.0.0, the IPython developers have chosen to separate IPython.parallel and IPython.kernel into separate packages. Is there some way that nipype could take this into account by determining if the IPython version number is >= 4.0.0, stopping execution if so (and the packages are not installed), and warning users to download these two packages (or some other appropriate behavior)? 
If the packages are installed, the new import statements could be used instead.\n\nShim warning when using IPython >= 4.0.0\nOne of our C-PAC users came across the following deprecation warning during their pipeline run:\n\n> /clusterdata/apps/anaconda/lib/python2.7/site-packages/IPython/kernel/**init**.py:13: ShimWarning: The `IPython.kernel` package has been deprecated. You should import from ipykernel or jupyter_client instead.\n> \n> \"You should import from ipykernel or jupyter_client instead.\", ShimWarning)\n> \n> /clusterdata/apps/anaconda/lib/python2.7/site-packages/IPython/parallel.py:13: ShimWarning: The `IPython.parallel` package has been deprecated. You should import from ipyparallel instead.\n> \n> \"You should import from ipyparallel instead.\", ShimWarning)\n\nI was able to determine that this is because, starting in IPython 4.0.0, the IPython developers have chosen to separate IPython.parallel and IPython.kernel into separate packages. Is there some way that nipype could take this into account by determining if the IPython version number is >= 4.0.0, stopping execution if so (and the packages are not installed), and warning users to download these two packages (or some other appropriate behavior)? If the packages are installed, the new import statements could be used instead.\n\n", "before_files": [{"content": "# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n\"\"\"Parallel workflow execution via IPython controller\n\"\"\"\n\nfrom future import standard_library\nstandard_library.install_aliases()\n\nfrom pickle import dumps\n\nimport sys\n\nIPython_not_loaded = False\ntry:\n from IPython import __version__ as IPyversion\n from ipyparallel.error import TimeoutError\nexcept:\n IPython_not_loaded = True\n\nfrom .base import (DistributedPluginBase, logger, report_crash)\n\n\ndef execute_task(pckld_task, node_config, updatehash):\n from socket import gethostname\n from traceback import format_exc\n from nipype import config, logging\n traceback = None\n result = None\n import os\n cwd = os.getcwd()\n try:\n config.update_config(node_config)\n logging.update_logging(config)\n from pickle import loads\n task = loads(pckld_task)\n result = task.run(updatehash=updatehash)\n except:\n traceback = format_exc()\n result = task.result\n os.chdir(cwd)\n return result, traceback, gethostname()\n\n\nclass IPythonPlugin(DistributedPluginBase):\n \"\"\"Execute workflow with ipython\n \"\"\"\n\n def __init__(self, plugin_args=None):\n if IPython_not_loaded:\n raise ImportError('ipyparallel could not be imported')\n super(IPythonPlugin, self).__init__(plugin_args=plugin_args)\n self.iparallel = None\n self.taskclient = None\n self.taskmap = {}\n self._taskid = 0\n\n def run(self, graph, config, updatehash=False):\n \"\"\"Executes a pre-defined pipeline is distributed approaches\n based on IPython's ipyparallel processing interface\n \"\"\"\n # retrieve clients again\n try:\n name = 'ipyparallel'\n __import__(name)\n self.iparallel = sys.modules[name]\n except ImportError:\n raise ImportError(\"Ipython kernel not found. 
Parallel execution \"\n \"will be unavailable\")\n try:\n self.taskclient = self.iparallel.Client()\n except Exception as e:\n if isinstance(e, TimeoutError):\n raise Exception(\"No IPython clients found.\")\n if isinstance(e, IOError):\n raise Exception(\"ipcluster/ipcontroller has not been started\")\n if isinstance(e, ValueError):\n raise Exception(\"Ipython kernel not installed\")\n raise e\n return super(IPythonPlugin, self).run(graph, config, updatehash=updatehash)\n\n def _get_result(self, taskid):\n if taskid not in self.taskmap:\n raise ValueError('Task %d not in pending list' % taskid)\n if self.taskmap[taskid].ready():\n result, traceback, hostname = self.taskmap[taskid].get()\n result_out = dict(result=None, traceback=None)\n result_out['result'] = result\n result_out['traceback'] = traceback\n result_out['hostname'] = hostname\n return result_out\n else:\n return None\n\n def _submit_job(self, node, updatehash=False):\n pckld_node = dumps(node, 2)\n result_object = self.taskclient.load_balanced_view().apply(execute_task,\n pckld_node,\n node.config,\n updatehash)\n self._taskid += 1\n self.taskmap[self._taskid] = result_object\n return self._taskid\n\n def _report_crash(self, node, result=None):\n if result and result['traceback']:\n node._result = result['result']\n node._traceback = result['traceback']\n return report_crash(node,\n traceback=result['traceback'])\n else:\n return report_crash(node)\n\n def _clear_task(self, taskid):\n if IPyversion >= '0.11':\n logger.debug(\"Clearing id: %d\" % taskid)\n self.taskclient.purge_results(self.taskmap[taskid])\n del self.taskmap[taskid]\n", "path": "nipype/pipeline/plugins/ipython.py"}, {"content": "# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n\"\"\"Parallel workflow execution via IPython controller\n\"\"\"\n\nimport sys\n\nIPython_not_loaded = False\ntry:\n from IPython import __version__ as IPyversion\n from IPython.kernel.contexts import ConnectionRefusedError\nexcept:\n IPython_not_loaded = True\n\n\nfrom .base import (DistributedPluginBase, logger, report_crash)\n\n\nclass IPythonXPlugin(DistributedPluginBase):\n \"\"\"Execute workflow with ipython\n \"\"\"\n\n def __init__(self, plugin_args=None):\n if IPython_not_loaded:\n raise ImportError('ipyparallel could not be imported')\n super(IPythonXPlugin, self).__init__(plugin_args=plugin_args)\n self.ipyclient = None\n self.taskclient = None\n\n def run(self, graph, config, updatehash=False):\n \"\"\"Executes a pre-defined pipeline is distributed approaches\n based on IPython's ipyparallel processing interface\n \"\"\"\n # retrieve clients again\n try:\n name = 'IPython.kernel.client'\n __import__(name)\n self.ipyclient = sys.modules[name]\n except ImportError:\n raise ImportError(\"Ipython kernel not found. 
Parallel execution \"\n \"will be unavailable\")\n try:\n self.taskclient = self.ipyclient.TaskClient()\n except Exception as e:\n if isinstance(e, ConnectionRefusedError):\n raise Exception(\"No IPython clients found.\")\n if isinstance(e, ValueError):\n raise Exception(\"Ipython kernel not installed\")\n return super(IPythonXPlugin, self).run(graph, config, updatehash=updatehash)\n\n def _get_result(self, taskid):\n return self.taskclient.get_task_result(taskid, block=False)\n\n def _submit_job(self, node, updatehash=False):\n cmdstr = \"\"\"import sys\nfrom traceback import format_exception\ntraceback=None\nresult=None\ntry:\n result = task.run(updatehash=updatehash)\nexcept:\n etype, eval, etr = sys.exc_info()\n traceback = format_exception(etype,eval,etr)\n result = task.result\n\"\"\"\n task = self.ipyclient.StringTask(cmdstr,\n push=dict(task=node,\n updatehash=updatehash),\n pull=['result', 'traceback'])\n return self.taskclient.run(task, block=False)\n\n def _report_crash(self, node, result=None):\n if result and result['traceback']:\n node._result = result['result']\n node._traceback = result['traceback']\n return report_crash(node,\n traceback=result['traceback'])\n else:\n return report_crash(node)\n\n def _clear_task(self, taskid):\n if IPyversion >= '0.10.1':\n logger.debug(\"Clearing id: %d\" % taskid)\n self.taskclient.clear(taskid)\n", "path": "nipype/pipeline/plugins/ipythonx.py"}], "after_files": [{"content": "# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n\"\"\"Parallel workflow execution via IPython controller\n\"\"\"\n\nfrom future import standard_library\nstandard_library.install_aliases()\nfrom pickle import dumps\n\nimport sys\n\nIPython_not_loaded = False\ntry:\n from IPython import __version__ as IPyversion\n from ipyparallel.error import TimeoutError\nexcept:\n IPython_not_loaded = True\n\nfrom .base import (DistributedPluginBase, logger, report_crash)\n\n\ndef execute_task(pckld_task, node_config, updatehash):\n from socket import gethostname\n from traceback import format_exc\n from nipype import config, logging\n traceback = None\n result = None\n import os\n cwd = os.getcwd()\n try:\n config.update_config(node_config)\n logging.update_logging(config)\n from pickle import loads\n task = loads(pckld_task)\n result = task.run(updatehash=updatehash)\n except:\n traceback = format_exc()\n result = task.result\n os.chdir(cwd)\n return result, traceback, gethostname()\n\n\nclass IPythonPlugin(DistributedPluginBase):\n \"\"\"Execute workflow with ipython\n \"\"\"\n\n def __init__(self, plugin_args=None):\n if IPython_not_loaded:\n raise ImportError('Please install ipyparallel to use this plugin.')\n super(IPythonPlugin, self).__init__(plugin_args=plugin_args)\n self.iparallel = None\n self.taskclient = None\n self.taskmap = {}\n self._taskid = 0\n\n def run(self, graph, config, updatehash=False):\n \"\"\"Executes a pre-defined pipeline is distributed approaches\n based on IPython's ipyparallel processing interface\n \"\"\"\n # retrieve clients again\n try:\n name = 'ipyparallel'\n __import__(name)\n self.iparallel = sys.modules[name]\n except ImportError:\n raise ImportError(\"ipyparallel not found. 
Parallel execution \"\n \"will be unavailable\")\n try:\n self.taskclient = self.iparallel.Client()\n except Exception as e:\n if isinstance(e, TimeoutError):\n raise Exception(\"No IPython clients found.\")\n if isinstance(e, IOError):\n raise Exception(\"ipcluster/ipcontroller has not been started\")\n if isinstance(e, ValueError):\n raise Exception(\"Ipython kernel not installed\")\n raise e\n return super(IPythonPlugin, self).run(graph, config, updatehash=updatehash)\n\n def _get_result(self, taskid):\n if taskid not in self.taskmap:\n raise ValueError('Task %d not in pending list' % taskid)\n if self.taskmap[taskid].ready():\n result, traceback, hostname = self.taskmap[taskid].get()\n result_out = dict(result=None, traceback=None)\n result_out['result'] = result\n result_out['traceback'] = traceback\n result_out['hostname'] = hostname\n return result_out\n else:\n return None\n\n def _submit_job(self, node, updatehash=False):\n pckld_node = dumps(node, 2)\n result_object = self.taskclient.load_balanced_view().apply(execute_task,\n pckld_node,\n node.config,\n updatehash)\n self._taskid += 1\n self.taskmap[self._taskid] = result_object\n return self._taskid\n\n def _report_crash(self, node, result=None):\n if result and result['traceback']:\n node._result = result['result']\n node._traceback = result['traceback']\n return report_crash(node,\n traceback=result['traceback'])\n else:\n return report_crash(node)\n\n def _clear_task(self, taskid):\n if IPyversion >= '0.11':\n logger.debug(\"Clearing id: %d\" % taskid)\n self.taskclient.purge_results(self.taskmap[taskid])\n del self.taskmap[taskid]\n", "path": "nipype/pipeline/plugins/ipython.py"}, {"content": "# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n\"\"\"Parallel workflow execution via IPython controller\n\"\"\"\n\nimport sys\n\nfrom ...interfaces.base import LooseVersion\nIPython_not_loaded = False\ntry:\n from IPython import __version__ as IPyversion\n if LooseVersion(IPyversion) < LooseVersion('0.11'):\n from IPython.kernel.contexts import ConnectionRefusedError\nexcept ImportError:\n IPython_not_loaded = True\n\n\nfrom .base import (DistributedPluginBase, logger, report_crash)\n\n\nclass IPythonXPlugin(DistributedPluginBase):\n \"\"\"Execute workflow with ipython\n \"\"\"\n\n def __init__(self, plugin_args=None):\n if LooseVersion(IPyversion) > LooseVersion('0.10.1'):\n raise EnvironmentError(('The IPythonX plugin can only be used with'\n ' older IPython versions. Please use the '\n 'IPython plugin instead.'\n ))\n DeprecationWarning('This plugin will be deprecated as of version 0.13')\n if IPython_not_loaded:\n raise ImportError('ipyparallel could not be imported')\n super(IPythonXPlugin, self).__init__(plugin_args=plugin_args)\n self.ipyclient = None\n self.taskclient = None\n\n def run(self, graph, config, updatehash=False):\n \"\"\"Executes a pre-defined pipeline is distributed approaches\n based on IPython's ipyparallel processing interface\n \"\"\"\n # retrieve clients again\n try:\n name = 'IPython.kernel.client'\n __import__(name)\n self.ipyclient = sys.modules[name]\n except ImportError:\n raise ImportError(\"Ipython kernel not found. 
Parallel execution \"\n \"will be unavailable\")\n try:\n self.taskclient = self.ipyclient.TaskClient()\n except Exception as e:\n if isinstance(e, ConnectionRefusedError):\n raise Exception(\"No IPython clients found.\")\n if isinstance(e, ValueError):\n raise Exception(\"Ipython kernel not installed\")\n return super(IPythonXPlugin, self).run(graph, config, updatehash=updatehash)\n\n def _get_result(self, taskid):\n return self.taskclient.get_task_result(taskid, block=False)\n\n def _submit_job(self, node, updatehash=False):\n cmdstr = \"\"\"import sys\nfrom traceback import format_exception\ntraceback=None\nresult=None\ntry:\n result = task.run(updatehash=updatehash)\nexcept:\n etype, eval, etr = sys.exc_info()\n traceback = format_exception(etype,eval,etr)\n result = task.result\n\"\"\"\n task = self.ipyclient.StringTask(cmdstr,\n push=dict(task=node,\n updatehash=updatehash),\n pull=['result', 'traceback'])\n return self.taskclient.run(task, block=False)\n\n def _report_crash(self, node, result=None):\n if result and result['traceback']:\n node._result = result['result']\n node._traceback = result['traceback']\n return report_crash(node,\n traceback=result['traceback'])\n else:\n return report_crash(node)\n\n def _clear_task(self, taskid):\n if IPyversion >= '0.10.1':\n logger.debug(\"Clearing id: %d\" % taskid)\n self.taskclient.clear(taskid)\n", "path": "nipype/pipeline/plugins/ipythonx.py"}]} | 2,807 | 566 |
gh_patches_debug_9314 | rasdani/github-patches | git_diff | ansible__ansible-lint-110 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
False positive detecting OctalPermissionsRule
When trying to set setgid permissions, the rule reports a false positive:
```
[ANSIBLE0008] Octal file permissions must contain leading zero
mode: 02775
```
Looks like the regex requires exactly 3 digits, which is not always correct.
```
# At least an indent, "mode:", optional whitespace, any digits, EOL
mode_regex = re.compile(r'^\s+mode:\s*[0-9]+\s*$')
# Same as above, but with a leading zero before three digits
valid_mode_regex = re.compile(r'^\s+mode:\s*0[0-7]{3}\s*$')
```
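For illustration, a quick standalone check (plain `re`, outside the rule) reproduces the mismatch and shows that widening the quantifier to `{3,4}` would accept setuid/setgid/sticky modes:

```
import re

line = "    mode: 02775"
mode_regex = re.compile(r'^\s+mode:\s*[0-9]+\s*$')
valid_mode_regex = re.compile(r'^\s+mode:\s*0[0-7]{3}\s*$')   # current: leading zero + exactly 3 octal digits
relaxed_regex = re.compile(r'^\s+mode:\s*0[0-7]{3,4}\s*$')    # also allows a setuid/setgid/sticky digit

print(bool(mode_regex.match(line)))        # True  -> the rule inspects this line
print(bool(valid_mode_regex.match(line)))  # False -> flagged, i.e. the false positive
print(bool(relaxed_regex.match(line)))     # True  -> would be accepted
```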
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/ansiblelint/rules/OctalPermissionsRule.py`
Content:
```
1 # Copyright (c) 2013-2014 Will Thames <[email protected]>
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to deal
5 # in the Software without restriction, including without limitation the rights
6 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
7 # copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
18 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
19 # THE SOFTWARE.
20
21 from ansiblelint import AnsibleLintRule
22 import re
23
24
25 class OctalPermissionsRule(AnsibleLintRule):
26 id = 'ANSIBLE0008'
27 shortdesc = 'Octal file permissions must contain leading zero'
28 description = 'Numeric file permissions without leading zero can behave' + \
29 'in unexpected ways. See ' + \
30 'http://docs.ansible.com/ansible/file_module.html'
31 tags = ['formatting']
32
33 # At least an indent, "mode:", optional whitespace, any digits, EOL
34 mode_regex = re.compile(r'^\s+mode:\s*[0-9]+\s*$')
35 # Same as above, but with a leading zero before three digits
36 valid_mode_regex = re.compile(r'^\s+mode:\s*0[0-7]{3}\s*$')
37
38 def match(self, file, line):
39 if re.match(self.mode_regex, line):
40 return not re.match(self.valid_mode_regex, line)
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/ansiblelint/rules/OctalPermissionsRule.py b/lib/ansiblelint/rules/OctalPermissionsRule.py
--- a/lib/ansiblelint/rules/OctalPermissionsRule.py
+++ b/lib/ansiblelint/rules/OctalPermissionsRule.py
@@ -33,7 +33,7 @@
# At least an indent, "mode:", optional whitespace, any digits, EOL
mode_regex = re.compile(r'^\s+mode:\s*[0-9]+\s*$')
# Same as above, but with a leading zero before three digits
- valid_mode_regex = re.compile(r'^\s+mode:\s*0[0-7]{3}\s*$')
+ valid_mode_regex = re.compile(r'^\s+mode:\s*0[0-7]{3,4}\s*$')
def match(self, file, line):
if re.match(self.mode_regex, line):
| {"golden_diff": "diff --git a/lib/ansiblelint/rules/OctalPermissionsRule.py b/lib/ansiblelint/rules/OctalPermissionsRule.py\n--- a/lib/ansiblelint/rules/OctalPermissionsRule.py\n+++ b/lib/ansiblelint/rules/OctalPermissionsRule.py\n@@ -33,7 +33,7 @@\n # At least an indent, \"mode:\", optional whitespace, any digits, EOL\n mode_regex = re.compile(r'^\\s+mode:\\s*[0-9]+\\s*$')\n # Same as above, but with a leading zero before three digits\n- valid_mode_regex = re.compile(r'^\\s+mode:\\s*0[0-7]{3}\\s*$')\n+ valid_mode_regex = re.compile(r'^\\s+mode:\\s*0[0-7]{3,4}\\s*$')\n \n def match(self, file, line):\n if re.match(self.mode_regex, line):\n", "issue": "False positive detecting OctalPermissionsRule\nWhen trying to setgid permissions\n\n```\n[ANSIBLE0008] Octal file permissions must contain leading zero\nmode: 02775\n```\n\nLooks like the regex requires exactly 3 digits, which is not always correct.\n\n```\n# At least an indent, \"mode:\", optional whitespace, any digits, EOL\nmode_regex = re.compile(r'^\\s+mode:\\s*[0-9]+\\s*$')\n# Same as above, but with a leading zero before three digits\nvalid_mode_regex = re.compile(r'^\\s+mode:\\s*0[0-7]{3}\\s*$')\n```\n\n", "before_files": [{"content": "# Copyright (c) 2013-2014 Will Thames <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n\nfrom ansiblelint import AnsibleLintRule\nimport re\n\n\nclass OctalPermissionsRule(AnsibleLintRule):\n id = 'ANSIBLE0008'\n shortdesc = 'Octal file permissions must contain leading zero'\n description = 'Numeric file permissions without leading zero can behave' + \\\n 'in unexpected ways. 
See ' + \\\n 'http://docs.ansible.com/ansible/file_module.html'\n tags = ['formatting']\n\n # At least an indent, \"mode:\", optional whitespace, any digits, EOL\n mode_regex = re.compile(r'^\\s+mode:\\s*[0-9]+\\s*$')\n # Same as above, but with a leading zero before three digits\n valid_mode_regex = re.compile(r'^\\s+mode:\\s*0[0-7]{3}\\s*$')\n\n def match(self, file, line):\n if re.match(self.mode_regex, line):\n return not re.match(self.valid_mode_regex, line)\n", "path": "lib/ansiblelint/rules/OctalPermissionsRule.py"}], "after_files": [{"content": "# Copyright (c) 2013-2014 Will Thames <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n\nfrom ansiblelint import AnsibleLintRule\nimport re\n\n\nclass OctalPermissionsRule(AnsibleLintRule):\n id = 'ANSIBLE0008'\n shortdesc = 'Octal file permissions must contain leading zero'\n description = 'Numeric file permissions without leading zero can behave' + \\\n 'in unexpected ways. See ' + \\\n 'http://docs.ansible.com/ansible/file_module.html'\n tags = ['formatting']\n\n # At least an indent, \"mode:\", optional whitespace, any digits, EOL\n mode_regex = re.compile(r'^\\s+mode:\\s*[0-9]+\\s*$')\n # Same as above, but with a leading zero before three digits\n valid_mode_regex = re.compile(r'^\\s+mode:\\s*0[0-7]{3,4}\\s*$')\n\n def match(self, file, line):\n if re.match(self.mode_regex, line):\n return not re.match(self.valid_mode_regex, line)\n", "path": "lib/ansiblelint/rules/OctalPermissionsRule.py"}]} | 939 | 198 |
gh_patches_debug_10929 | rasdani/github-patches | git_diff | crytic__slither-403 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix embark and etherscan GH actions
- Embark has been failing on all our GitHub Actions workflows (including crytic-compile's) for about a month. The tests work locally, but it seems that some recent change in the latest Embark makes it fail on CI
- Etherscan now limits the number of requests per IP, which makes the CI fail from time to time. We need to add an API key to prevent this
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup, find_packages
2
3 setup(
4 name='slither-analyzer',
5 description='Slither is a Solidity static analysis framework written in Python 3.',
6 url='https://github.com/crytic/slither',
7 author='Trail of Bits',
8 version='0.6.9',
9 packages=find_packages(),
10 python_requires='>=3.6',
11 install_requires=['prettytable>=0.7.2',
12 'pysha3>=1.0.2',
13 'crytic-compile>=0.1.6'],
14 # 'crytic-compile'],
15 # dependency_links=['git+https://github.com/crytic/crytic-compile.git@master#egg=crytic-compile'],
16 license='AGPL-3.0',
17 long_description=open('README.md').read(),
18 entry_points={
19 'console_scripts': [
20 'slither = slither.__main__:main',
21 'slither-check-upgradeability = slither.tools.upgradeability.__main__:main',
22 'slither-find-paths = slither.tools.possible_paths.__main__:main',
23 'slither-simil = slither.tools.similarity.__main__:main',
24 'slither-flat = slither.tools.flattening.__main__:main',
25 'slither-format = slither.tools.slither_format.__main__:main',
26 'slither-check-erc = slither.tools.erc_conformance.__main__:main',
27 'slither-check-kspec = slither.tools.kspec_coverage.__main__:main'
28 ]
29 }
30 )
31
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -10,9 +10,9 @@
python_requires='>=3.6',
install_requires=['prettytable>=0.7.2',
'pysha3>=1.0.2',
- 'crytic-compile>=0.1.6'],
-# 'crytic-compile'],
-# dependency_links=['git+https://github.com/crytic/crytic-compile.git@master#egg=crytic-compile'],
+# 'crytic-compile>=0.1.6'],
+ 'crytic-compile'],
+ dependency_links=['git+https://github.com/crytic/crytic-compile.git@dev#egg=crytic-compile'],
license='AGPL-3.0',
long_description=open('README.md').read(),
entry_points={
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -10,9 +10,9 @@\n python_requires='>=3.6',\n install_requires=['prettytable>=0.7.2',\n 'pysha3>=1.0.2',\n- 'crytic-compile>=0.1.6'],\n-# 'crytic-compile'],\n-# dependency_links=['git+https://github.com/crytic/crytic-compile.git@master#egg=crytic-compile'],\n+# 'crytic-compile>=0.1.6'],\n+ 'crytic-compile'],\n+ dependency_links=['git+https://github.com/crytic/crytic-compile.git@dev#egg=crytic-compile'],\n license='AGPL-3.0',\n long_description=open('README.md').read(),\n entry_points={\n", "issue": "Fix embark and etherscan GH actions\n- Embark fails on all our github actions (including crytic-compile) since ~1 month. The tests work locally, but it seems that some recent changes in the latest embark makes it fails on a CI\r\n- Etherscan has now a constraint on the number of request per IP, which makes the CI fails from time to time. We need to add an API key to prevent it\nFix embark and etherscan GH actions\n- Embark fails on all our github actions (including crytic-compile) since ~1 month. The tests work locally, but it seems that some recent changes in the latest embark makes it fails on a CI\r\n- Etherscan has now a constraint on the number of request per IP, which makes the CI fails from time to time. We need to add an API key to prevent it\n", "before_files": [{"content": "from setuptools import setup, find_packages\n\nsetup(\n name='slither-analyzer',\n description='Slither is a Solidity static analysis framework written in Python 3.',\n url='https://github.com/crytic/slither',\n author='Trail of Bits',\n version='0.6.9',\n packages=find_packages(),\n python_requires='>=3.6',\n install_requires=['prettytable>=0.7.2',\n 'pysha3>=1.0.2',\n 'crytic-compile>=0.1.6'],\n# 'crytic-compile'],\n# dependency_links=['git+https://github.com/crytic/crytic-compile.git@master#egg=crytic-compile'],\n license='AGPL-3.0',\n long_description=open('README.md').read(),\n entry_points={\n 'console_scripts': [\n 'slither = slither.__main__:main',\n 'slither-check-upgradeability = slither.tools.upgradeability.__main__:main',\n 'slither-find-paths = slither.tools.possible_paths.__main__:main',\n 'slither-simil = slither.tools.similarity.__main__:main',\n 'slither-flat = slither.tools.flattening.__main__:main',\n 'slither-format = slither.tools.slither_format.__main__:main',\n 'slither-check-erc = slither.tools.erc_conformance.__main__:main',\n 'slither-check-kspec = slither.tools.kspec_coverage.__main__:main'\n ]\n }\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup, find_packages\n\nsetup(\n name='slither-analyzer',\n description='Slither is a Solidity static analysis framework written in Python 3.',\n url='https://github.com/crytic/slither',\n author='Trail of Bits',\n version='0.6.9',\n packages=find_packages(),\n python_requires='>=3.6',\n install_requires=['prettytable>=0.7.2',\n 'pysha3>=1.0.2',\n# 'crytic-compile>=0.1.6'],\n 'crytic-compile'],\n dependency_links=['git+https://github.com/crytic/crytic-compile.git@dev#egg=crytic-compile'],\n license='AGPL-3.0',\n long_description=open('README.md').read(),\n entry_points={\n 'console_scripts': [\n 'slither = slither.__main__:main',\n 'slither-check-upgradeability = slither.tools.upgradeability.__main__:main',\n 'slither-find-paths = slither.tools.possible_paths.__main__:main',\n 'slither-simil = slither.tools.similarity.__main__:main',\n 'slither-flat = slither.tools.flattening.__main__:main',\n 'slither-format = 
slither.tools.slither_format.__main__:main',\n 'slither-check-erc = slither.tools.erc_conformance.__main__:main',\n 'slither-check-kspec = slither.tools.kspec_coverage.__main__:main'\n ]\n }\n)\n", "path": "setup.py"}]} | 822 | 196 |
gh_patches_debug_28153 | rasdani/github-patches | git_diff | qutebrowser__qutebrowser-4222 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use URL patterns for content.host_blocking.whitelist
For historical reasons, the `content.host_blocking.whitelist` setting uses [fnmatch](https://docs.python.org/3/library/fnmatch.html) (see `is_whitelisted_host` in `browser/adblock.py`).
Now that we have [URL patterns](https://github.com/qutebrowser/qutebrowser/blob/master/qutebrowser/utils/urlmatch.py), it should use those instead.
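A rough sketch of what the check could become — I'm assuming `urlmatch.UrlPattern` can be built from the configured string and has a `matches()` method taking a QUrl; the real change (e.g. parsing the patterns once when the config is read) may look different:

```
from qutebrowser.config import config
from qutebrowser.utils import urlmatch


def _is_whitelisted_url(url):
    """Check if the given QUrl is on the adblock whitelist."""
    for pattern in config.val.content.host_blocking.whitelist or []:
        # Assumption: entries are URL pattern strings like "*://example.com/*"
        if urlmatch.UrlPattern(pattern).matches(url):
            return True
    return False
```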
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qutebrowser/browser/adblock.py`
Content:
```
1 # vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:
2
3 # Copyright 2014-2018 Florian Bruhin (The Compiler) <[email protected]>
4 #
5 # This file is part of qutebrowser.
6 #
7 # qutebrowser is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # qutebrowser is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.
19
20 """Functions related to ad blocking."""
21
22 import io
23 import os.path
24 import functools
25 import posixpath
26 import zipfile
27 import fnmatch
28
29 from qutebrowser.browser import downloads
30 from qutebrowser.config import config
31 from qutebrowser.utils import objreg, standarddir, log, message
32 from qutebrowser.commands import cmdutils
33
34
35 def guess_zip_filename(zf):
36 """Guess which file to use inside a zip file.
37
38 Args:
39 zf: A ZipFile instance.
40 """
41 files = zf.namelist()
42 if len(files) == 1:
43 return files[0]
44 else:
45 for e in files:
46 if posixpath.splitext(e)[0].lower() == 'hosts':
47 return e
48 raise FileNotFoundError("No hosts file found in zip")
49
50
51 def get_fileobj(byte_io):
52 """Get a usable file object to read the hosts file from."""
53 byte_io.seek(0) # rewind downloaded file
54 if zipfile.is_zipfile(byte_io):
55 byte_io.seek(0) # rewind what zipfile.is_zipfile did
56 zf = zipfile.ZipFile(byte_io)
57 filename = guess_zip_filename(zf)
58 byte_io = zf.open(filename, mode='r')
59 else:
60 byte_io.seek(0) # rewind what zipfile.is_zipfile did
61 return byte_io
62
63
64 def is_whitelisted_host(host):
65 """Check if the given host is on the adblock whitelist.
66
67 Args:
68 host: The host of the request as string.
69 """
70 for pattern in config.val.content.host_blocking.whitelist:
71 if fnmatch.fnmatch(host, pattern.lower()):
72 return True
73 return False
74
75
76 class FakeDownload:
77
78 """A download stub to use on_download_finished with local files."""
79
80 def __init__(self, fileobj):
81 self.basename = os.path.basename(fileobj.name)
82 self.fileobj = fileobj
83 self.successful = True
84
85
86 class HostBlocker:
87
88 """Manage blocked hosts based from /etc/hosts-like files.
89
90 Attributes:
91 _blocked_hosts: A set of blocked hosts.
92 _config_blocked_hosts: A set of blocked hosts from ~/.config.
93 _in_progress: The DownloadItems which are currently downloading.
94 _done_count: How many files have been read successfully.
95 _local_hosts_file: The path to the blocked-hosts file.
96 _config_hosts_file: The path to a blocked-hosts in ~/.config
97 """
98
99 def __init__(self):
100 self._blocked_hosts = set()
101 self._config_blocked_hosts = set()
102 self._in_progress = []
103 self._done_count = 0
104
105 data_dir = standarddir.data()
106 self._local_hosts_file = os.path.join(data_dir, 'blocked-hosts')
107 self._update_files()
108
109 config_dir = standarddir.config()
110 self._config_hosts_file = os.path.join(config_dir, 'blocked-hosts')
111
112 config.instance.changed.connect(self._update_files)
113
114 def is_blocked(self, url):
115 """Check if the given URL (as QUrl) is blocked."""
116 if not config.val.content.host_blocking.enabled:
117 return False
118 host = url.host()
119 return ((host in self._blocked_hosts or
120 host in self._config_blocked_hosts) and
121 not is_whitelisted_host(host))
122
123 def _read_hosts_file(self, filename, target):
124 """Read hosts from the given filename.
125
126 Args:
127 filename: The file to read.
128 target: The set to store the hosts in.
129
130 Return:
131 True if a read was attempted, False otherwise
132 """
133 if not os.path.exists(filename):
134 return False
135
136 try:
137 with open(filename, 'r', encoding='utf-8') as f:
138 for line in f:
139 target.add(line.strip())
140 except (OSError, UnicodeDecodeError):
141 log.misc.exception("Failed to read host blocklist!")
142
143 return True
144
145 def read_hosts(self):
146 """Read hosts from the existing blocked-hosts file."""
147 self._blocked_hosts = set()
148
149 self._read_hosts_file(self._config_hosts_file,
150 self._config_blocked_hosts)
151
152 found = self._read_hosts_file(self._local_hosts_file,
153 self._blocked_hosts)
154
155 if not found:
156 args = objreg.get('args')
157 if (config.val.content.host_blocking.lists and
158 args.basedir is None and
159 config.val.content.host_blocking.enabled):
160 message.info("Run :adblock-update to get adblock lists.")
161
162 @cmdutils.register(instance='host-blocker')
163 def adblock_update(self):
164 """Update the adblock block lists.
165
166 This updates `~/.local/share/qutebrowser/blocked-hosts` with downloaded
167 host lists and re-reads `~/.config/qutebrowser/blocked-hosts`.
168 """
169 self._read_hosts_file(self._config_hosts_file,
170 self._config_blocked_hosts)
171 self._blocked_hosts = set()
172 self._done_count = 0
173 download_manager = objreg.get('qtnetwork-download-manager')
174 for url in config.val.content.host_blocking.lists:
175 if url.scheme() == 'file':
176 filename = url.toLocalFile()
177 try:
178 fileobj = open(filename, 'rb')
179 except OSError as e:
180 message.error("adblock: Error while reading {}: {}".format(
181 filename, e.strerror))
182 continue
183 download = FakeDownload(fileobj)
184 self._in_progress.append(download)
185 self.on_download_finished(download)
186 else:
187 fobj = io.BytesIO()
188 fobj.name = 'adblock: ' + url.host()
189 target = downloads.FileObjDownloadTarget(fobj)
190 download = download_manager.get(url, target=target,
191 auto_remove=True)
192 self._in_progress.append(download)
193 download.finished.connect(
194 functools.partial(self.on_download_finished, download))
195
196 def _parse_line(self, line):
197 """Parse a line from a host file.
198
199 Args:
200 line: The bytes object to parse.
201
202 Returns:
203 True if parsing succeeded, False otherwise.
204 """
205 if line.startswith(b'#'):
206 # Ignoring comments early so we don't have to care about
207 # encoding errors in them.
208 return True
209
210 try:
211 line = line.decode('utf-8')
212 except UnicodeDecodeError:
213 log.misc.error("Failed to decode: {!r}".format(line))
214 return False
215
216 # Remove comments
217 try:
218 hash_idx = line.index('#')
219 line = line[:hash_idx]
220 except ValueError:
221 pass
222
223 line = line.strip()
224 # Skip empty lines
225 if not line:
226 return True
227
228 parts = line.split()
229 if len(parts) == 1:
230 # "one host per line" format
231 hosts = [parts[0]]
232 else:
233 # /etc/hosts format
234 hosts = parts[1:]
235
236 for host in hosts:
237 if ('.' in host and
238 not host.endswith('.localdomain') and
239 host != '0.0.0.0'):
240 self._blocked_hosts.add(host)
241
242 return True
243
244 def _merge_file(self, byte_io):
245 """Read and merge host files.
246
247 Args:
248 byte_io: The BytesIO object of the completed download.
249
250 Return:
251 A set of the merged hosts.
252 """
253 error_count = 0
254 line_count = 0
255 try:
256 f = get_fileobj(byte_io)
257 except (OSError, zipfile.BadZipFile, zipfile.LargeZipFile,
258 LookupError) as e:
259 message.error("adblock: Error while reading {}: {} - {}".format(
260 byte_io.name, e.__class__.__name__, e))
261 return
262
263 for line in f:
264 line_count += 1
265 ok = self._parse_line(line)
266 if not ok:
267 error_count += 1
268
269 log.misc.debug("{}: read {} lines".format(byte_io.name, line_count))
270 if error_count > 0:
271 message.error("adblock: {} read errors for {}".format(
272 error_count, byte_io.name))
273
274 def on_lists_downloaded(self):
275 """Install block lists after files have been downloaded."""
276 with open(self._local_hosts_file, 'w', encoding='utf-8') as f:
277 for host in sorted(self._blocked_hosts):
278 f.write(host + '\n')
279 message.info("adblock: Read {} hosts from {} sources.".format(
280 len(self._blocked_hosts), self._done_count))
281
282 @config.change_filter('content.host_blocking.lists')
283 def _update_files(self):
284 """Update files when the config changed."""
285 if not config.val.content.host_blocking.lists:
286 try:
287 os.remove(self._local_hosts_file)
288 except FileNotFoundError:
289 pass
290 except OSError as e:
291 log.misc.exception("Failed to delete hosts file: {}".format(e))
292
293 def on_download_finished(self, download):
294 """Check if all downloads are finished and if so, trigger reading.
295
296 Arguments:
297 download: The finished DownloadItem.
298 """
299 self._in_progress.remove(download)
300 if download.successful:
301 self._done_count += 1
302 try:
303 self._merge_file(download.fileobj)
304 finally:
305 download.fileobj.close()
306 if not self._in_progress:
307 try:
308 self.on_lists_downloaded()
309 except OSError:
310 log.misc.exception("Failed to write host block list!")
311
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/qutebrowser/browser/adblock.py b/qutebrowser/browser/adblock.py
--- a/qutebrowser/browser/adblock.py
+++ b/qutebrowser/browser/adblock.py
@@ -24,11 +24,10 @@
import functools
import posixpath
import zipfile
-import fnmatch
from qutebrowser.browser import downloads
from qutebrowser.config import config
-from qutebrowser.utils import objreg, standarddir, log, message
+from qutebrowser.utils import objreg, standarddir, log, message, urlmatch
from qutebrowser.commands import cmdutils
@@ -61,14 +60,14 @@
return byte_io
-def is_whitelisted_host(host):
- """Check if the given host is on the adblock whitelist.
+def is_whitelisted_url(url):
+ """Check if the given url is on the adblock whitelist.
Args:
- host: The host of the request as string.
+ url: The url to check in QUrl form.
"""
for pattern in config.val.content.host_blocking.whitelist:
- if fnmatch.fnmatch(host, pattern.lower()):
+ if pattern.matches(url):
return True
return False
@@ -118,7 +117,7 @@
host = url.host()
return ((host in self._blocked_hosts or
host in self._config_blocked_hosts) and
- not is_whitelisted_host(host))
+ not is_whitelisted_url(url))
def _read_hosts_file(self, filename, target):
"""Read hosts from the given filename.
| {"golden_diff": "diff --git a/qutebrowser/browser/adblock.py b/qutebrowser/browser/adblock.py\n--- a/qutebrowser/browser/adblock.py\n+++ b/qutebrowser/browser/adblock.py\n@@ -24,11 +24,10 @@\n import functools\n import posixpath\n import zipfile\n-import fnmatch\n \n from qutebrowser.browser import downloads\n from qutebrowser.config import config\n-from qutebrowser.utils import objreg, standarddir, log, message\n+from qutebrowser.utils import objreg, standarddir, log, message, urlmatch\n from qutebrowser.commands import cmdutils\n \n \n@@ -61,14 +60,14 @@\n return byte_io\n \n \n-def is_whitelisted_host(host):\n- \"\"\"Check if the given host is on the adblock whitelist.\n+def is_whitelisted_url(url):\n+ \"\"\"Check if the given url is on the adblock whitelist.\n \n Args:\n- host: The host of the request as string.\n+ url: The url to check in QUrl form.\n \"\"\"\n for pattern in config.val.content.host_blocking.whitelist:\n- if fnmatch.fnmatch(host, pattern.lower()):\n+ if pattern.matches(url):\n return True\n return False\n \n@@ -118,7 +117,7 @@\n host = url.host()\n return ((host in self._blocked_hosts or\n host in self._config_blocked_hosts) and\n- not is_whitelisted_host(host))\n+ not is_whitelisted_url(url))\n \n def _read_hosts_file(self, filename, target):\n \"\"\"Read hosts from the given filename.\n", "issue": "Use URL patterns for content.host_blocking.whitelist\nFor historical reasons, the `content.host_blocking.whitelist` setting uses [fnmatch](https://docs.python.org/3/library/fnmatch.html) (see `is_whitelisted_host` in `browser/adblock.py`).\r\n\r\nNow that we have [URL patterns](https://github.com/qutebrowser/qutebrowser/blob/master/qutebrowser/utils/urlmatch.py), it should use those instead.\nUse URL patterns for content.host_blocking.whitelist\nFor historical reasons, the `content.host_blocking.whitelist` setting uses [fnmatch](https://docs.python.org/3/library/fnmatch.html) (see `is_whitelisted_host` in `browser/adblock.py`).\r\n\r\nNow that we have [URL patterns](https://github.com/qutebrowser/qutebrowser/blob/master/qutebrowser/utils/urlmatch.py), it should use those instead.\n", "before_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2014-2018 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. 
If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Functions related to ad blocking.\"\"\"\n\nimport io\nimport os.path\nimport functools\nimport posixpath\nimport zipfile\nimport fnmatch\n\nfrom qutebrowser.browser import downloads\nfrom qutebrowser.config import config\nfrom qutebrowser.utils import objreg, standarddir, log, message\nfrom qutebrowser.commands import cmdutils\n\n\ndef guess_zip_filename(zf):\n \"\"\"Guess which file to use inside a zip file.\n\n Args:\n zf: A ZipFile instance.\n \"\"\"\n files = zf.namelist()\n if len(files) == 1:\n return files[0]\n else:\n for e in files:\n if posixpath.splitext(e)[0].lower() == 'hosts':\n return e\n raise FileNotFoundError(\"No hosts file found in zip\")\n\n\ndef get_fileobj(byte_io):\n \"\"\"Get a usable file object to read the hosts file from.\"\"\"\n byte_io.seek(0) # rewind downloaded file\n if zipfile.is_zipfile(byte_io):\n byte_io.seek(0) # rewind what zipfile.is_zipfile did\n zf = zipfile.ZipFile(byte_io)\n filename = guess_zip_filename(zf)\n byte_io = zf.open(filename, mode='r')\n else:\n byte_io.seek(0) # rewind what zipfile.is_zipfile did\n return byte_io\n\n\ndef is_whitelisted_host(host):\n \"\"\"Check if the given host is on the adblock whitelist.\n\n Args:\n host: The host of the request as string.\n \"\"\"\n for pattern in config.val.content.host_blocking.whitelist:\n if fnmatch.fnmatch(host, pattern.lower()):\n return True\n return False\n\n\nclass FakeDownload:\n\n \"\"\"A download stub to use on_download_finished with local files.\"\"\"\n\n def __init__(self, fileobj):\n self.basename = os.path.basename(fileobj.name)\n self.fileobj = fileobj\n self.successful = True\n\n\nclass HostBlocker:\n\n \"\"\"Manage blocked hosts based from /etc/hosts-like files.\n\n Attributes:\n _blocked_hosts: A set of blocked hosts.\n _config_blocked_hosts: A set of blocked hosts from ~/.config.\n _in_progress: The DownloadItems which are currently downloading.\n _done_count: How many files have been read successfully.\n _local_hosts_file: The path to the blocked-hosts file.\n _config_hosts_file: The path to a blocked-hosts in ~/.config\n \"\"\"\n\n def __init__(self):\n self._blocked_hosts = set()\n self._config_blocked_hosts = set()\n self._in_progress = []\n self._done_count = 0\n\n data_dir = standarddir.data()\n self._local_hosts_file = os.path.join(data_dir, 'blocked-hosts')\n self._update_files()\n\n config_dir = standarddir.config()\n self._config_hosts_file = os.path.join(config_dir, 'blocked-hosts')\n\n config.instance.changed.connect(self._update_files)\n\n def is_blocked(self, url):\n \"\"\"Check if the given URL (as QUrl) is blocked.\"\"\"\n if not config.val.content.host_blocking.enabled:\n return False\n host = url.host()\n return ((host in self._blocked_hosts or\n host in self._config_blocked_hosts) and\n not is_whitelisted_host(host))\n\n def _read_hosts_file(self, filename, target):\n \"\"\"Read hosts from the given filename.\n\n Args:\n filename: The file to read.\n target: The set to store the hosts in.\n\n Return:\n True if a read was attempted, False otherwise\n \"\"\"\n if not os.path.exists(filename):\n return False\n\n try:\n with open(filename, 'r', encoding='utf-8') as f:\n for line in f:\n target.add(line.strip())\n except (OSError, UnicodeDecodeError):\n log.misc.exception(\"Failed to read host blocklist!\")\n\n return True\n\n def read_hosts(self):\n \"\"\"Read hosts from the existing blocked-hosts file.\"\"\"\n self._blocked_hosts = set()\n\n self._read_hosts_file(self._config_hosts_file,\n 
self._config_blocked_hosts)\n\n found = self._read_hosts_file(self._local_hosts_file,\n self._blocked_hosts)\n\n if not found:\n args = objreg.get('args')\n if (config.val.content.host_blocking.lists and\n args.basedir is None and\n config.val.content.host_blocking.enabled):\n message.info(\"Run :adblock-update to get adblock lists.\")\n\n @cmdutils.register(instance='host-blocker')\n def adblock_update(self):\n \"\"\"Update the adblock block lists.\n\n This updates `~/.local/share/qutebrowser/blocked-hosts` with downloaded\n host lists and re-reads `~/.config/qutebrowser/blocked-hosts`.\n \"\"\"\n self._read_hosts_file(self._config_hosts_file,\n self._config_blocked_hosts)\n self._blocked_hosts = set()\n self._done_count = 0\n download_manager = objreg.get('qtnetwork-download-manager')\n for url in config.val.content.host_blocking.lists:\n if url.scheme() == 'file':\n filename = url.toLocalFile()\n try:\n fileobj = open(filename, 'rb')\n except OSError as e:\n message.error(\"adblock: Error while reading {}: {}\".format(\n filename, e.strerror))\n continue\n download = FakeDownload(fileobj)\n self._in_progress.append(download)\n self.on_download_finished(download)\n else:\n fobj = io.BytesIO()\n fobj.name = 'adblock: ' + url.host()\n target = downloads.FileObjDownloadTarget(fobj)\n download = download_manager.get(url, target=target,\n auto_remove=True)\n self._in_progress.append(download)\n download.finished.connect(\n functools.partial(self.on_download_finished, download))\n\n def _parse_line(self, line):\n \"\"\"Parse a line from a host file.\n\n Args:\n line: The bytes object to parse.\n\n Returns:\n True if parsing succeeded, False otherwise.\n \"\"\"\n if line.startswith(b'#'):\n # Ignoring comments early so we don't have to care about\n # encoding errors in them.\n return True\n\n try:\n line = line.decode('utf-8')\n except UnicodeDecodeError:\n log.misc.error(\"Failed to decode: {!r}\".format(line))\n return False\n\n # Remove comments\n try:\n hash_idx = line.index('#')\n line = line[:hash_idx]\n except ValueError:\n pass\n\n line = line.strip()\n # Skip empty lines\n if not line:\n return True\n\n parts = line.split()\n if len(parts) == 1:\n # \"one host per line\" format\n hosts = [parts[0]]\n else:\n # /etc/hosts format\n hosts = parts[1:]\n\n for host in hosts:\n if ('.' 
in host and\n not host.endswith('.localdomain') and\n host != '0.0.0.0'):\n self._blocked_hosts.add(host)\n\n return True\n\n def _merge_file(self, byte_io):\n \"\"\"Read and merge host files.\n\n Args:\n byte_io: The BytesIO object of the completed download.\n\n Return:\n A set of the merged hosts.\n \"\"\"\n error_count = 0\n line_count = 0\n try:\n f = get_fileobj(byte_io)\n except (OSError, zipfile.BadZipFile, zipfile.LargeZipFile,\n LookupError) as e:\n message.error(\"adblock: Error while reading {}: {} - {}\".format(\n byte_io.name, e.__class__.__name__, e))\n return\n\n for line in f:\n line_count += 1\n ok = self._parse_line(line)\n if not ok:\n error_count += 1\n\n log.misc.debug(\"{}: read {} lines\".format(byte_io.name, line_count))\n if error_count > 0:\n message.error(\"adblock: {} read errors for {}\".format(\n error_count, byte_io.name))\n\n def on_lists_downloaded(self):\n \"\"\"Install block lists after files have been downloaded.\"\"\"\n with open(self._local_hosts_file, 'w', encoding='utf-8') as f:\n for host in sorted(self._blocked_hosts):\n f.write(host + '\\n')\n message.info(\"adblock: Read {} hosts from {} sources.\".format(\n len(self._blocked_hosts), self._done_count))\n\n @config.change_filter('content.host_blocking.lists')\n def _update_files(self):\n \"\"\"Update files when the config changed.\"\"\"\n if not config.val.content.host_blocking.lists:\n try:\n os.remove(self._local_hosts_file)\n except FileNotFoundError:\n pass\n except OSError as e:\n log.misc.exception(\"Failed to delete hosts file: {}\".format(e))\n\n def on_download_finished(self, download):\n \"\"\"Check if all downloads are finished and if so, trigger reading.\n\n Arguments:\n download: The finished DownloadItem.\n \"\"\"\n self._in_progress.remove(download)\n if download.successful:\n self._done_count += 1\n try:\n self._merge_file(download.fileobj)\n finally:\n download.fileobj.close()\n if not self._in_progress:\n try:\n self.on_lists_downloaded()\n except OSError:\n log.misc.exception(\"Failed to write host block list!\")\n", "path": "qutebrowser/browser/adblock.py"}], "after_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2014-2018 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. 
If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Functions related to ad blocking.\"\"\"\n\nimport io\nimport os.path\nimport functools\nimport posixpath\nimport zipfile\n\nfrom qutebrowser.browser import downloads\nfrom qutebrowser.config import config\nfrom qutebrowser.utils import objreg, standarddir, log, message, urlmatch\nfrom qutebrowser.commands import cmdutils\n\n\ndef guess_zip_filename(zf):\n \"\"\"Guess which file to use inside a zip file.\n\n Args:\n zf: A ZipFile instance.\n \"\"\"\n files = zf.namelist()\n if len(files) == 1:\n return files[0]\n else:\n for e in files:\n if posixpath.splitext(e)[0].lower() == 'hosts':\n return e\n raise FileNotFoundError(\"No hosts file found in zip\")\n\n\ndef get_fileobj(byte_io):\n \"\"\"Get a usable file object to read the hosts file from.\"\"\"\n byte_io.seek(0) # rewind downloaded file\n if zipfile.is_zipfile(byte_io):\n byte_io.seek(0) # rewind what zipfile.is_zipfile did\n zf = zipfile.ZipFile(byte_io)\n filename = guess_zip_filename(zf)\n byte_io = zf.open(filename, mode='r')\n else:\n byte_io.seek(0) # rewind what zipfile.is_zipfile did\n return byte_io\n\n\ndef is_whitelisted_url(url):\n \"\"\"Check if the given url is on the adblock whitelist.\n\n Args:\n url: The url to check in QUrl form.\n \"\"\"\n for pattern in config.val.content.host_blocking.whitelist:\n if pattern.matches(url):\n return True\n return False\n\n\nclass FakeDownload:\n\n \"\"\"A download stub to use on_download_finished with local files.\"\"\"\n\n def __init__(self, fileobj):\n self.basename = os.path.basename(fileobj.name)\n self.fileobj = fileobj\n self.successful = True\n\n\nclass HostBlocker:\n\n \"\"\"Manage blocked hosts based from /etc/hosts-like files.\n\n Attributes:\n _blocked_hosts: A set of blocked hosts.\n _config_blocked_hosts: A set of blocked hosts from ~/.config.\n _in_progress: The DownloadItems which are currently downloading.\n _done_count: How many files have been read successfully.\n _local_hosts_file: The path to the blocked-hosts file.\n _config_hosts_file: The path to a blocked-hosts in ~/.config\n \"\"\"\n\n def __init__(self):\n self._blocked_hosts = set()\n self._config_blocked_hosts = set()\n self._in_progress = []\n self._done_count = 0\n\n data_dir = standarddir.data()\n self._local_hosts_file = os.path.join(data_dir, 'blocked-hosts')\n self._update_files()\n\n config_dir = standarddir.config()\n self._config_hosts_file = os.path.join(config_dir, 'blocked-hosts')\n\n config.instance.changed.connect(self._update_files)\n\n def is_blocked(self, url):\n \"\"\"Check if the given URL (as QUrl) is blocked.\"\"\"\n if not config.val.content.host_blocking.enabled:\n return False\n host = url.host()\n return ((host in self._blocked_hosts or\n host in self._config_blocked_hosts) and\n not is_whitelisted_url(url))\n\n def _read_hosts_file(self, filename, target):\n \"\"\"Read hosts from the given filename.\n\n Args:\n filename: The file to read.\n target: The set to store the hosts in.\n\n Return:\n True if a read was attempted, False otherwise\n \"\"\"\n if not os.path.exists(filename):\n return False\n\n try:\n with open(filename, 'r', encoding='utf-8') as f:\n for line in f:\n target.add(line.strip())\n except (OSError, UnicodeDecodeError):\n log.misc.exception(\"Failed to read host blocklist!\")\n\n return True\n\n def read_hosts(self):\n \"\"\"Read hosts from the existing blocked-hosts file.\"\"\"\n self._blocked_hosts = set()\n\n self._read_hosts_file(self._config_hosts_file,\n self._config_blocked_hosts)\n\n found = 
self._read_hosts_file(self._local_hosts_file,\n self._blocked_hosts)\n\n if not found:\n args = objreg.get('args')\n if (config.val.content.host_blocking.lists and\n args.basedir is None and\n config.val.content.host_blocking.enabled):\n message.info(\"Run :adblock-update to get adblock lists.\")\n\n @cmdutils.register(instance='host-blocker')\n def adblock_update(self):\n \"\"\"Update the adblock block lists.\n\n This updates `~/.local/share/qutebrowser/blocked-hosts` with downloaded\n host lists and re-reads `~/.config/qutebrowser/blocked-hosts`.\n \"\"\"\n self._read_hosts_file(self._config_hosts_file,\n self._config_blocked_hosts)\n self._blocked_hosts = set()\n self._done_count = 0\n download_manager = objreg.get('qtnetwork-download-manager')\n for url in config.val.content.host_blocking.lists:\n if url.scheme() == 'file':\n filename = url.toLocalFile()\n try:\n fileobj = open(filename, 'rb')\n except OSError as e:\n message.error(\"adblock: Error while reading {}: {}\".format(\n filename, e.strerror))\n continue\n download = FakeDownload(fileobj)\n self._in_progress.append(download)\n self.on_download_finished(download)\n else:\n fobj = io.BytesIO()\n fobj.name = 'adblock: ' + url.host()\n target = downloads.FileObjDownloadTarget(fobj)\n download = download_manager.get(url, target=target,\n auto_remove=True)\n self._in_progress.append(download)\n download.finished.connect(\n functools.partial(self.on_download_finished, download))\n\n def _parse_line(self, line):\n \"\"\"Parse a line from a host file.\n\n Args:\n line: The bytes object to parse.\n\n Returns:\n True if parsing succeeded, False otherwise.\n \"\"\"\n if line.startswith(b'#'):\n # Ignoring comments early so we don't have to care about\n # encoding errors in them.\n return True\n\n try:\n line = line.decode('utf-8')\n except UnicodeDecodeError:\n log.misc.error(\"Failed to decode: {!r}\".format(line))\n return False\n\n # Remove comments\n try:\n hash_idx = line.index('#')\n line = line[:hash_idx]\n except ValueError:\n pass\n\n line = line.strip()\n # Skip empty lines\n if not line:\n return True\n\n parts = line.split()\n if len(parts) == 1:\n # \"one host per line\" format\n hosts = [parts[0]]\n else:\n # /etc/hosts format\n hosts = parts[1:]\n\n for host in hosts:\n if ('.' 
in host and\n not host.endswith('.localdomain') and\n host != '0.0.0.0'):\n self._blocked_hosts.add(host)\n\n return True\n\n def _merge_file(self, byte_io):\n \"\"\"Read and merge host files.\n\n Args:\n byte_io: The BytesIO object of the completed download.\n\n Return:\n A set of the merged hosts.\n \"\"\"\n error_count = 0\n line_count = 0\n try:\n f = get_fileobj(byte_io)\n except (OSError, zipfile.BadZipFile, zipfile.LargeZipFile,\n LookupError) as e:\n message.error(\"adblock: Error while reading {}: {} - {}\".format(\n byte_io.name, e.__class__.__name__, e))\n return\n\n for line in f:\n line_count += 1\n ok = self._parse_line(line)\n if not ok:\n error_count += 1\n\n log.misc.debug(\"{}: read {} lines\".format(byte_io.name, line_count))\n if error_count > 0:\n message.error(\"adblock: {} read errors for {}\".format(\n error_count, byte_io.name))\n\n def on_lists_downloaded(self):\n \"\"\"Install block lists after files have been downloaded.\"\"\"\n with open(self._local_hosts_file, 'w', encoding='utf-8') as f:\n for host in sorted(self._blocked_hosts):\n f.write(host + '\\n')\n message.info(\"adblock: Read {} hosts from {} sources.\".format(\n len(self._blocked_hosts), self._done_count))\n\n @config.change_filter('content.host_blocking.lists')\n def _update_files(self):\n \"\"\"Update files when the config changed.\"\"\"\n if not config.val.content.host_blocking.lists:\n try:\n os.remove(self._local_hosts_file)\n except FileNotFoundError:\n pass\n except OSError as e:\n log.misc.exception(\"Failed to delete hosts file: {}\".format(e))\n\n def on_download_finished(self, download):\n \"\"\"Check if all downloads are finished and if so, trigger reading.\n\n Arguments:\n download: The finished DownloadItem.\n \"\"\"\n self._in_progress.remove(download)\n if download.successful:\n self._done_count += 1\n try:\n self._merge_file(download.fileobj)\n finally:\n download.fileobj.close()\n if not self._in_progress:\n try:\n self.on_lists_downloaded()\n except OSError:\n log.misc.exception(\"Failed to write host block list!\")\n", "path": "qutebrowser/browser/adblock.py"}]} | 3,508 | 351 |
gh_patches_debug_57101 | rasdani/github-patches | git_diff | sbi-dev__sbi-422 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BoxUniform should automatically convert to float32
I lost count of how many times I searched for a bug related to this... ;)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sbi/utils/torchutils.py`
Content:
```
1 # This file is part of sbi, a toolkit for simulation-based inference. sbi is licensed
2 # under the Affero General Public License v3, see <https://www.gnu.org/licenses/>.
3
4 """Various PyTorch utility functions."""
5
6 import warnings
7 from typing import Union
8
9 import numpy as np
10 import torch
11 from torch import Tensor, float32
12 from torch.distributions import Independent, Uniform
13
14 from sbi import utils as utils
15 from sbi.types import Array, OneOrMore, ScalarFloat
16
17
18 def process_device(device: str) -> str:
19 """Set and return the default device to cpu or gpu."""
20
21 if not device == "cpu":
22 if device == "gpu":
23 device = "cuda"
24 try:
25 torch.zeros(1).to(device)
26 warnings.warn(
27 """GPU was selected as a device for training the neural network. Note
28 that we expect **no** significant speed ups in training for the
29 default architectures we provide. Using the GPU will be effective
30 only for large neural networks with operations that are fast on the
31 GPU, e.g., for a CNN or RNN `embedding_net`."""
32 )
33 except (RuntimeError, AssertionError):
34 warnings.warn(f"Device {device} not available, falling back to CPU.")
35 device = "cpu"
36
37 return device
38
39
40 def tile(x, n):
41 if not utils.is_positive_int(n):
42 raise TypeError("Argument `n` must be a positive integer.")
43 x_ = x.reshape(-1)
44 x_ = x_.repeat(n)
45 x_ = x_.reshape(n, -1)
46 x_ = x_.transpose(1, 0)
47 x_ = x_.reshape(-1)
48 return x_
49
50
51 def sum_except_batch(x, num_batch_dims=1):
52 """Sums all elements of `x` except for the first `num_batch_dims` dimensions."""
53 if not utils.is_nonnegative_int(num_batch_dims):
54 raise TypeError("Number of batch dimensions must be a non-negative integer.")
55 reduce_dims = list(range(num_batch_dims, x.ndimension()))
56 return torch.sum(x, dim=reduce_dims)
57
58
59 def split_leading_dim(x, shape):
60 """Reshapes the leading dim of `x` to have the given shape."""
61 new_shape = torch.Size(shape) + x.shape[1:]
62 return torch.reshape(x, new_shape)
63
64
65 def merge_leading_dims(x, num_dims):
66 """Reshapes the tensor `x` such that the first `num_dims` dimensions are merged to one."""
67 if not utils.is_positive_int(num_dims):
68 raise TypeError("Number of leading dims must be a positive integer.")
69 if num_dims > x.dim():
70 raise ValueError(
71 "Number of leading dims can't be greater than total number of dims."
72 )
73 new_shape = torch.Size([-1]) + x.shape[num_dims:]
74 return torch.reshape(x, new_shape)
75
76
77 def repeat_rows(x, num_reps):
78 """Each row of tensor `x` is repeated `num_reps` times along leading dimension."""
79 if not utils.is_positive_int(num_reps):
80 raise TypeError("Number of repetitions must be a positive integer.")
81 shape = x.shape
82 x = x.unsqueeze(1)
83 x = x.expand(shape[0], num_reps, *shape[1:])
84 return merge_leading_dims(x, num_dims=2)
85
86
87 def tensor2numpy(x):
88 return x.detach().cpu().numpy()
89
90
91 def logabsdet(x):
92 """Returns the log absolute determinant of square matrix x."""
93 # Note: torch.logdet() only works for positive determinant.
94 _, res = torch.slogdet(x)
95 return res
96
97
98 def random_orthogonal(size):
99 """
100 Returns a random orthogonal matrix as a 2-dim tensor of shape [size, size].
101 """
102
103 # Use the QR decomposition of a random Gaussian matrix.
104 x = torch.randn(size, size)
105 q, _ = torch.qr(x)
106 return q
107
108
109 def get_num_parameters(model):
110 """
111 Returns the number of trainable parameters in a model of type nets.Module
112 :param model: nets.Module containing trainable parameters
113 :return: number of trainable parameters in model
114 """
115 num_parameters = 0
116 for parameter in model.parameters():
117 num_parameters += torch.numel(parameter)
118 return num_parameters
119
120
121 def create_alternating_binary_mask(features, even=True):
122 """
123 Creates a binary mask of a given dimension which alternates its masking.
124
125 :param features: Dimension of mask.
126 :param even: If True, even values are assigned 1s, odd 0s. If False, vice versa.
127 :return: Alternating binary mask of type torch.Tensor.
128 """
129 mask = torch.zeros(features).byte()
130 start = 0 if even else 1
131 mask[start::2] += 1
132 return mask
133
134
135 def create_mid_split_binary_mask(features):
136 """
137 Creates a binary mask of a given dimension which splits its masking at the midpoint.
138
139 :param features: Dimension of mask.
140 :return: Binary mask split at midpoint of type torch.Tensor
141 """
142 mask = torch.zeros(features).byte()
143 midpoint = features // 2 if features % 2 == 0 else features // 2 + 1
144 mask[:midpoint] += 1
145 return mask
146
147
148 def create_random_binary_mask(features):
149 """
150 Creates a random binary mask of a given dimension with half of its entries
151 randomly set to 1s.
152
153 :param features: Dimension of mask.
154 :return: Binary mask with half of its entries set to 1s, of type torch.Tensor.
155 """
156 mask = torch.zeros(features).byte()
157 weights = torch.ones(features).float()
158 num_samples = features // 2 if features % 2 == 0 else features // 2 + 1
159 indices = torch.multinomial(
160 input=weights, num_samples=num_samples, replacement=False
161 )
162 mask[indices] += 1
163 return mask
164
165
166 def searchsorted(bin_locations, inputs, eps=1e-6):
167 bin_locations[..., -1] += eps
168 return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1
169
170
171 def cbrt(x):
172 """Cube root. Equivalent to torch.pow(x, 1/3), but numerically stable."""
173 return torch.sign(x) * torch.exp(torch.log(torch.abs(x)) / 3.0)
174
175
176 def get_temperature(max_value, bound=1 - 1e-3):
177 """
178 For a dataset with max value 'max_value', returns the temperature such that
179
180 sigmoid(temperature * max_value) = bound.
181
182 If temperature is greater than 1, returns 1.
183
184 :param max_value:
185 :param bound:
186 :return:
187 """
188 max_value = torch.Tensor([max_value])
189 bound = torch.Tensor([bound])
190 temperature = min(-(1 / max_value) * (torch.log1p(-bound) - torch.log(bound)), 1)
191 return temperature
192
193
194 def gaussian_kde_log_eval(samples, query):
195 N, D = samples.shape[0], samples.shape[-1]
196 std = N ** (-1 / (D + 4))
197 precision = (1 / (std ** 2)) * torch.eye(D)
198 a = query - samples
199 b = a @ precision
200 c = -0.5 * torch.sum(a * b, dim=-1)
201 d = -np.log(N) - (D / 2) * np.log(2 * np.pi) - D * np.log(std)
202 c += d
203 return torch.logsumexp(c, dim=-1)
204
205
206 class BoxUniform(Independent):
207 def __init__(
208 self, low: ScalarFloat, high: ScalarFloat, reinterpreted_batch_ndims: int = 1,
209 ):
210 """Multidimensional uniform distribution defined on a box.
211
212 A `Uniform` distribution initialized with e.g. a parameter vector low or high of
213 length 3 will result in a /batch/ dimension of length 3. A log_prob evaluation
214 will then output three numbers, one for each of the independent Uniforms in
215 the batch. Instead, a `BoxUniform` initialized in the same way has three
216 /event/ dimensions, and returns a scalar log_prob corresponding to whether
217 the evaluated point is in the box defined by low and high or outside.
218
219 Refer to torch.distributions.Uniform and torch.distributions.Independent for
220 further documentation.
221
222 Args:
223 low: lower range (inclusive).
224 high: upper range (exclusive).
225 reinterpreted_batch_ndims (int): the number of batch dims to
226 reinterpret as event dims.
227 """
228
229 super().__init__(Uniform(low=low, high=high), reinterpreted_batch_ndims)
230
231
232 def ensure_theta_batched(theta: Tensor) -> Tensor:
233 r"""
234 Return parameter set theta that has a batch dimension, i.e. has shape
235 (1, shape_of_single_theta)
236
237 Args:
238 theta: parameters $\theta$, of shape (n) or (1,n)
239 Returns:
240 Batched parameter set $\theta$
241 """
242
243 # => ensure theta has shape (1, dim_parameter)
244 if theta.ndim == 1:
245 theta = theta.unsqueeze(0)
246
247 return theta
248
249
250 def ensure_x_batched(x: Tensor) -> Tensor:
251 """
252 Return simulation output x that has a batch dimension, i.e. has shape
253 (1, shape_of_single_x).
254
255 Args:
256 x: simulation output of shape (n) or (1,n).
257 Returns:
258 Batched simulation output x.
259 """
260
261 # ensure x has shape (1, shape_of_single_x). If shape[0] > 1, we assume that
262 # the batch-dimension is missing, even though ndim might be >1 (e.g. for images)
263 if x.shape[0] > 1 or x.ndim == 1:
264 x = x.unsqueeze(0)
265
266 return x
267
268
269 def atleast_2d_many(*arys: Array) -> OneOrMore[Tensor]:
270 """Return tensors with at least dimension 2.
271
272 Tensors or arrays of dimension 0 or 1 will get additional dimension(s) prepended.
273
274 Returns:
275 Tensor or list of tensors all with dimension >= 2.
276 """
277 if len(arys) == 1:
278 arr = arys[0]
279 if isinstance(arr, np.ndarray):
280 arr = torch.from_numpy(arr)
281 return atleast_2d(arr)
282 else:
283 return [atleast_2d_many(arr) for arr in arys]
284
285
286 def atleast_2d(t: Tensor) -> Tensor:
287 return t if t.ndim >= 2 else t.reshape(1, -1)
288
289
290 def maybe_add_batch_dim_to_size(s: torch.Size) -> torch.Size:
291 """
292 Take a torch.Size and add a batch dimension to it if dimensionality of size is 1.
293
294 (N) -> (1,N)
295 (1,N) -> (1,N)
296 (N,M) -> (N,M)
297 (1,N,M) -> (1,N,M)
298
299 Args:
300 s: Input size, possibly without batch dimension.
301
302 Returns: Batch size.
303
304 """
305 return s if len(s) >= 2 else torch.Size([1]) + s
306
307
308 def atleast_2d_float32_tensor(arr: Union[Tensor, np.ndarray]) -> Tensor:
309 return atleast_2d(torch.as_tensor(arr, dtype=float32))
310
311
312 def batched_first_of_batch(t: Tensor) -> Tensor:
313 """
314 Takes in a tensor of shape (N, M) and outputs tensor of shape (1,M).
315 """
316 return t[:1]
317
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sbi/utils/torchutils.py b/sbi/utils/torchutils.py
--- a/sbi/utils/torchutils.py
+++ b/sbi/utils/torchutils.py
@@ -226,7 +226,13 @@
reinterpret as event dims.
"""
- super().__init__(Uniform(low=low, high=high), reinterpreted_batch_ndims)
+ super().__init__(
+ Uniform(
+ low=torch.as_tensor(low, dtype=torch.float32),
+ high=torch.as_tensor(high, dtype=torch.float32),
+ ),
+ reinterpreted_batch_ndims,
+ )
def ensure_theta_batched(theta: Tensor) -> Tensor:
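With that change in place, whatever is passed for `low` and `high` is converted to `float32`, so samples come back as `float32` as well. A quick illustrative check (the float64 NumPy input is only a stand-in for the kind of input that previously leaked float64 downstream):

```python
import numpy as np
import torch
from sbi.utils.torchutils import BoxUniform

prior = BoxUniform(low=np.zeros(2), high=np.ones(2))  # float64 inputs
theta = prior.sample((10,))
assert theta.dtype == torch.float32  # coerced by the patched constructor
```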
| {"golden_diff": "diff --git a/sbi/utils/torchutils.py b/sbi/utils/torchutils.py\n--- a/sbi/utils/torchutils.py\n+++ b/sbi/utils/torchutils.py\n@@ -226,7 +226,13 @@\n reinterpret as event dims.\n \"\"\"\n \n- super().__init__(Uniform(low=low, high=high), reinterpreted_batch_ndims)\n+ super().__init__(\n+ Uniform(\n+ low=torch.as_tensor(low, dtype=torch.float32),\n+ high=torch.as_tensor(high, dtype=torch.float32),\n+ ),\n+ reinterpreted_batch_ndims,\n+ )\n \n \n def ensure_theta_batched(theta: Tensor) -> Tensor:\n", "issue": "BoxUniform should automatically convert to float32\nI lost count of how many times I searched for a bug related to this... ;)\n", "before_files": [{"content": "# This file is part of sbi, a toolkit for simulation-based inference. sbi is licensed\n# under the Affero General Public License v3, see <https://www.gnu.org/licenses/>.\n\n\"\"\"Various PyTorch utility functions.\"\"\"\n\nimport warnings\nfrom typing import Union\n\nimport numpy as np\nimport torch\nfrom torch import Tensor, float32\nfrom torch.distributions import Independent, Uniform\n\nfrom sbi import utils as utils\nfrom sbi.types import Array, OneOrMore, ScalarFloat\n\n\ndef process_device(device: str) -> str:\n \"\"\"Set and return the default device to cpu or gpu.\"\"\"\n\n if not device == \"cpu\":\n if device == \"gpu\":\n device = \"cuda\"\n try:\n torch.zeros(1).to(device)\n warnings.warn(\n \"\"\"GPU was selected as a device for training the neural network. Note\n that we expect **no** significant speed ups in training for the\n default architectures we provide. Using the GPU will be effective\n only for large neural networks with operations that are fast on the\n GPU, e.g., for a CNN or RNN `embedding_net`.\"\"\"\n )\n except (RuntimeError, AssertionError):\n warnings.warn(f\"Device {device} not available, falling back to CPU.\")\n device = \"cpu\"\n\n return device\n\n\ndef tile(x, n):\n if not utils.is_positive_int(n):\n raise TypeError(\"Argument `n` must be a positive integer.\")\n x_ = x.reshape(-1)\n x_ = x_.repeat(n)\n x_ = x_.reshape(n, -1)\n x_ = x_.transpose(1, 0)\n x_ = x_.reshape(-1)\n return x_\n\n\ndef sum_except_batch(x, num_batch_dims=1):\n \"\"\"Sums all elements of `x` except for the first `num_batch_dims` dimensions.\"\"\"\n if not utils.is_nonnegative_int(num_batch_dims):\n raise TypeError(\"Number of batch dimensions must be a non-negative integer.\")\n reduce_dims = list(range(num_batch_dims, x.ndimension()))\n return torch.sum(x, dim=reduce_dims)\n\n\ndef split_leading_dim(x, shape):\n \"\"\"Reshapes the leading dim of `x` to have the given shape.\"\"\"\n new_shape = torch.Size(shape) + x.shape[1:]\n return torch.reshape(x, new_shape)\n\n\ndef merge_leading_dims(x, num_dims):\n \"\"\"Reshapes the tensor `x` such that the first `num_dims` dimensions are merged to one.\"\"\"\n if not utils.is_positive_int(num_dims):\n raise TypeError(\"Number of leading dims must be a positive integer.\")\n if num_dims > x.dim():\n raise ValueError(\n \"Number of leading dims can't be greater than total number of dims.\"\n )\n new_shape = torch.Size([-1]) + x.shape[num_dims:]\n return torch.reshape(x, new_shape)\n\n\ndef repeat_rows(x, num_reps):\n \"\"\"Each row of tensor `x` is repeated `num_reps` times along leading dimension.\"\"\"\n if not utils.is_positive_int(num_reps):\n raise TypeError(\"Number of repetitions must be a positive integer.\")\n shape = x.shape\n x = x.unsqueeze(1)\n x = x.expand(shape[0], num_reps, *shape[1:])\n return merge_leading_dims(x, num_dims=2)\n\n\ndef 
tensor2numpy(x):\n return x.detach().cpu().numpy()\n\n\ndef logabsdet(x):\n \"\"\"Returns the log absolute determinant of square matrix x.\"\"\"\n # Note: torch.logdet() only works for positive determinant.\n _, res = torch.slogdet(x)\n return res\n\n\ndef random_orthogonal(size):\n \"\"\"\n Returns a random orthogonal matrix as a 2-dim tensor of shape [size, size].\n \"\"\"\n\n # Use the QR decomposition of a random Gaussian matrix.\n x = torch.randn(size, size)\n q, _ = torch.qr(x)\n return q\n\n\ndef get_num_parameters(model):\n \"\"\"\n Returns the number of trainable parameters in a model of type nets.Module\n :param model: nets.Module containing trainable parameters\n :return: number of trainable parameters in model\n \"\"\"\n num_parameters = 0\n for parameter in model.parameters():\n num_parameters += torch.numel(parameter)\n return num_parameters\n\n\ndef create_alternating_binary_mask(features, even=True):\n \"\"\"\n Creates a binary mask of a given dimension which alternates its masking.\n\n :param features: Dimension of mask.\n :param even: If True, even values are assigned 1s, odd 0s. If False, vice versa.\n :return: Alternating binary mask of type torch.Tensor.\n \"\"\"\n mask = torch.zeros(features).byte()\n start = 0 if even else 1\n mask[start::2] += 1\n return mask\n\n\ndef create_mid_split_binary_mask(features):\n \"\"\"\n Creates a binary mask of a given dimension which splits its masking at the midpoint.\n\n :param features: Dimension of mask.\n :return: Binary mask split at midpoint of type torch.Tensor\n \"\"\"\n mask = torch.zeros(features).byte()\n midpoint = features // 2 if features % 2 == 0 else features // 2 + 1\n mask[:midpoint] += 1\n return mask\n\n\ndef create_random_binary_mask(features):\n \"\"\"\n Creates a random binary mask of a given dimension with half of its entries\n randomly set to 1s.\n\n :param features: Dimension of mask.\n :return: Binary mask with half of its entries set to 1s, of type torch.Tensor.\n \"\"\"\n mask = torch.zeros(features).byte()\n weights = torch.ones(features).float()\n num_samples = features // 2 if features % 2 == 0 else features // 2 + 1\n indices = torch.multinomial(\n input=weights, num_samples=num_samples, replacement=False\n )\n mask[indices] += 1\n return mask\n\n\ndef searchsorted(bin_locations, inputs, eps=1e-6):\n bin_locations[..., -1] += eps\n return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1\n\n\ndef cbrt(x):\n \"\"\"Cube root. 
Equivalent to torch.pow(x, 1/3), but numerically stable.\"\"\"\n return torch.sign(x) * torch.exp(torch.log(torch.abs(x)) / 3.0)\n\n\ndef get_temperature(max_value, bound=1 - 1e-3):\n \"\"\"\n For a dataset with max value 'max_value', returns the temperature such that\n\n sigmoid(temperature * max_value) = bound.\n\n If temperature is greater than 1, returns 1.\n\n :param max_value:\n :param bound:\n :return:\n \"\"\"\n max_value = torch.Tensor([max_value])\n bound = torch.Tensor([bound])\n temperature = min(-(1 / max_value) * (torch.log1p(-bound) - torch.log(bound)), 1)\n return temperature\n\n\ndef gaussian_kde_log_eval(samples, query):\n N, D = samples.shape[0], samples.shape[-1]\n std = N ** (-1 / (D + 4))\n precision = (1 / (std ** 2)) * torch.eye(D)\n a = query - samples\n b = a @ precision\n c = -0.5 * torch.sum(a * b, dim=-1)\n d = -np.log(N) - (D / 2) * np.log(2 * np.pi) - D * np.log(std)\n c += d\n return torch.logsumexp(c, dim=-1)\n\n\nclass BoxUniform(Independent):\n def __init__(\n self, low: ScalarFloat, high: ScalarFloat, reinterpreted_batch_ndims: int = 1,\n ):\n \"\"\"Multidimensional uniform distribution defined on a box.\n\n A `Uniform` distribution initialized with e.g. a parameter vector low or high of\n length 3 will result in a /batch/ dimension of length 3. A log_prob evaluation\n will then output three numbers, one for each of the independent Uniforms in\n the batch. Instead, a `BoxUniform` initialized in the same way has three\n /event/ dimensions, and returns a scalar log_prob corresponding to whether\n the evaluated point is in the box defined by low and high or outside.\n\n Refer to torch.distributions.Uniform and torch.distributions.Independent for\n further documentation.\n\n Args:\n low: lower range (inclusive).\n high: upper range (exclusive).\n reinterpreted_batch_ndims (int): the number of batch dims to\n reinterpret as event dims.\n \"\"\"\n\n super().__init__(Uniform(low=low, high=high), reinterpreted_batch_ndims)\n\n\ndef ensure_theta_batched(theta: Tensor) -> Tensor:\n r\"\"\"\n Return parameter set theta that has a batch dimension, i.e. has shape\n (1, shape_of_single_theta)\n\n Args:\n theta: parameters $\\theta$, of shape (n) or (1,n)\n Returns:\n Batched parameter set $\\theta$\n \"\"\"\n\n # => ensure theta has shape (1, dim_parameter)\n if theta.ndim == 1:\n theta = theta.unsqueeze(0)\n\n return theta\n\n\ndef ensure_x_batched(x: Tensor) -> Tensor:\n \"\"\"\n Return simulation output x that has a batch dimension, i.e. has shape\n (1, shape_of_single_x).\n\n Args:\n x: simulation output of shape (n) or (1,n).\n Returns:\n Batched simulation output x.\n \"\"\"\n\n # ensure x has shape (1, shape_of_single_x). If shape[0] > 1, we assume that\n # the batch-dimension is missing, even though ndim might be >1 (e.g. 
for images)\n if x.shape[0] > 1 or x.ndim == 1:\n x = x.unsqueeze(0)\n\n return x\n\n\ndef atleast_2d_many(*arys: Array) -> OneOrMore[Tensor]:\n \"\"\"Return tensors with at least dimension 2.\n\n Tensors or arrays of dimension 0 or 1 will get additional dimension(s) prepended.\n\n Returns:\n Tensor or list of tensors all with dimension >= 2.\n \"\"\"\n if len(arys) == 1:\n arr = arys[0]\n if isinstance(arr, np.ndarray):\n arr = torch.from_numpy(arr)\n return atleast_2d(arr)\n else:\n return [atleast_2d_many(arr) for arr in arys]\n\n\ndef atleast_2d(t: Tensor) -> Tensor:\n return t if t.ndim >= 2 else t.reshape(1, -1)\n\n\ndef maybe_add_batch_dim_to_size(s: torch.Size) -> torch.Size:\n \"\"\"\n Take a torch.Size and add a batch dimension to it if dimensionality of size is 1.\n\n (N) -> (1,N)\n (1,N) -> (1,N)\n (N,M) -> (N,M)\n (1,N,M) -> (1,N,M)\n\n Args:\n s: Input size, possibly without batch dimension.\n\n Returns: Batch size.\n\n \"\"\"\n return s if len(s) >= 2 else torch.Size([1]) + s\n\n\ndef atleast_2d_float32_tensor(arr: Union[Tensor, np.ndarray]) -> Tensor:\n return atleast_2d(torch.as_tensor(arr, dtype=float32))\n\n\ndef batched_first_of_batch(t: Tensor) -> Tensor:\n \"\"\"\n Takes in a tensor of shape (N, M) and outputs tensor of shape (1,M).\n \"\"\"\n return t[:1]\n", "path": "sbi/utils/torchutils.py"}], "after_files": [{"content": "# This file is part of sbi, a toolkit for simulation-based inference. sbi is licensed\n# under the Affero General Public License v3, see <https://www.gnu.org/licenses/>.\n\n\"\"\"Various PyTorch utility functions.\"\"\"\n\nimport warnings\nfrom typing import Union\n\nimport numpy as np\nimport torch\nfrom torch import Tensor, float32\nfrom torch.distributions import Independent, Uniform\n\nfrom sbi import utils as utils\nfrom sbi.types import Array, OneOrMore, ScalarFloat\n\n\ndef process_device(device: str) -> str:\n \"\"\"Set and return the default device to cpu or gpu.\"\"\"\n\n if not device == \"cpu\":\n if device == \"gpu\":\n device = \"cuda\"\n try:\n torch.zeros(1).to(device)\n warnings.warn(\n \"\"\"GPU was selected as a device for training the neural network. Note\n that we expect **no** significant speed ups in training for the\n default architectures we provide. 
Using the GPU will be effective\n only for large neural networks with operations that are fast on the\n GPU, e.g., for a CNN or RNN `embedding_net`.\"\"\"\n )\n except (RuntimeError, AssertionError):\n warnings.warn(f\"Device {device} not available, falling back to CPU.\")\n device = \"cpu\"\n\n return device\n\n\ndef tile(x, n):\n if not utils.is_positive_int(n):\n raise TypeError(\"Argument `n` must be a positive integer.\")\n x_ = x.reshape(-1)\n x_ = x_.repeat(n)\n x_ = x_.reshape(n, -1)\n x_ = x_.transpose(1, 0)\n x_ = x_.reshape(-1)\n return x_\n\n\ndef sum_except_batch(x, num_batch_dims=1):\n \"\"\"Sums all elements of `x` except for the first `num_batch_dims` dimensions.\"\"\"\n if not utils.is_nonnegative_int(num_batch_dims):\n raise TypeError(\"Number of batch dimensions must be a non-negative integer.\")\n reduce_dims = list(range(num_batch_dims, x.ndimension()))\n return torch.sum(x, dim=reduce_dims)\n\n\ndef split_leading_dim(x, shape):\n \"\"\"Reshapes the leading dim of `x` to have the given shape.\"\"\"\n new_shape = torch.Size(shape) + x.shape[1:]\n return torch.reshape(x, new_shape)\n\n\ndef merge_leading_dims(x, num_dims):\n \"\"\"Reshapes the tensor `x` such that the first `num_dims` dimensions are merged to one.\"\"\"\n if not utils.is_positive_int(num_dims):\n raise TypeError(\"Number of leading dims must be a positive integer.\")\n if num_dims > x.dim():\n raise ValueError(\n \"Number of leading dims can't be greater than total number of dims.\"\n )\n new_shape = torch.Size([-1]) + x.shape[num_dims:]\n return torch.reshape(x, new_shape)\n\n\ndef repeat_rows(x, num_reps):\n \"\"\"Each row of tensor `x` is repeated `num_reps` times along leading dimension.\"\"\"\n if not utils.is_positive_int(num_reps):\n raise TypeError(\"Number of repetitions must be a positive integer.\")\n shape = x.shape\n x = x.unsqueeze(1)\n x = x.expand(shape[0], num_reps, *shape[1:])\n return merge_leading_dims(x, num_dims=2)\n\n\ndef tensor2numpy(x):\n return x.detach().cpu().numpy()\n\n\ndef logabsdet(x):\n \"\"\"Returns the log absolute determinant of square matrix x.\"\"\"\n # Note: torch.logdet() only works for positive determinant.\n _, res = torch.slogdet(x)\n return res\n\n\ndef random_orthogonal(size):\n \"\"\"\n Returns a random orthogonal matrix as a 2-dim tensor of shape [size, size].\n \"\"\"\n\n # Use the QR decomposition of a random Gaussian matrix.\n x = torch.randn(size, size)\n q, _ = torch.qr(x)\n return q\n\n\ndef get_num_parameters(model):\n \"\"\"\n Returns the number of trainable parameters in a model of type nets.Module\n :param model: nets.Module containing trainable parameters\n :return: number of trainable parameters in model\n \"\"\"\n num_parameters = 0\n for parameter in model.parameters():\n num_parameters += torch.numel(parameter)\n return num_parameters\n\n\ndef create_alternating_binary_mask(features, even=True):\n \"\"\"\n Creates a binary mask of a given dimension which alternates its masking.\n\n :param features: Dimension of mask.\n :param even: If True, even values are assigned 1s, odd 0s. 
If False, vice versa.\n :return: Alternating binary mask of type torch.Tensor.\n \"\"\"\n mask = torch.zeros(features).byte()\n start = 0 if even else 1\n mask[start::2] += 1\n return mask\n\n\ndef create_mid_split_binary_mask(features):\n \"\"\"\n Creates a binary mask of a given dimension which splits its masking at the midpoint.\n\n :param features: Dimension of mask.\n :return: Binary mask split at midpoint of type torch.Tensor\n \"\"\"\n mask = torch.zeros(features).byte()\n midpoint = features // 2 if features % 2 == 0 else features // 2 + 1\n mask[:midpoint] += 1\n return mask\n\n\ndef create_random_binary_mask(features):\n \"\"\"\n Creates a random binary mask of a given dimension with half of its entries\n randomly set to 1s.\n\n :param features: Dimension of mask.\n :return: Binary mask with half of its entries set to 1s, of type torch.Tensor.\n \"\"\"\n mask = torch.zeros(features).byte()\n weights = torch.ones(features).float()\n num_samples = features // 2 if features % 2 == 0 else features // 2 + 1\n indices = torch.multinomial(\n input=weights, num_samples=num_samples, replacement=False\n )\n mask[indices] += 1\n return mask\n\n\ndef searchsorted(bin_locations, inputs, eps=1e-6):\n bin_locations[..., -1] += eps\n return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1\n\n\ndef cbrt(x):\n \"\"\"Cube root. Equivalent to torch.pow(x, 1/3), but numerically stable.\"\"\"\n return torch.sign(x) * torch.exp(torch.log(torch.abs(x)) / 3.0)\n\n\ndef get_temperature(max_value, bound=1 - 1e-3):\n \"\"\"\n For a dataset with max value 'max_value', returns the temperature such that\n\n sigmoid(temperature * max_value) = bound.\n\n If temperature is greater than 1, returns 1.\n\n :param max_value:\n :param bound:\n :return:\n \"\"\"\n max_value = torch.Tensor([max_value])\n bound = torch.Tensor([bound])\n temperature = min(-(1 / max_value) * (torch.log1p(-bound) - torch.log(bound)), 1)\n return temperature\n\n\ndef gaussian_kde_log_eval(samples, query):\n N, D = samples.shape[0], samples.shape[-1]\n std = N ** (-1 / (D + 4))\n precision = (1 / (std ** 2)) * torch.eye(D)\n a = query - samples\n b = a @ precision\n c = -0.5 * torch.sum(a * b, dim=-1)\n d = -np.log(N) - (D / 2) * np.log(2 * np.pi) - D * np.log(std)\n c += d\n return torch.logsumexp(c, dim=-1)\n\n\nclass BoxUniform(Independent):\n def __init__(\n self, low: ScalarFloat, high: ScalarFloat, reinterpreted_batch_ndims: int = 1,\n ):\n \"\"\"Multidimensional uniform distribution defined on a box.\n\n A `Uniform` distribution initialized with e.g. a parameter vector low or high of\n length 3 will result in a /batch/ dimension of length 3. A log_prob evaluation\n will then output three numbers, one for each of the independent Uniforms in\n the batch. 
Instead, a `BoxUniform` initialized in the same way has three\n /event/ dimensions, and returns a scalar log_prob corresponding to whether\n the evaluated point is in the box defined by low and high or outside.\n\n Refer to torch.distributions.Uniform and torch.distributions.Independent for\n further documentation.\n\n Args:\n low: lower range (inclusive).\n high: upper range (exclusive).\n reinterpreted_batch_ndims (int): the number of batch dims to\n reinterpret as event dims.\n \"\"\"\n\n super().__init__(\n Uniform(\n low=torch.as_tensor(low, dtype=torch.float32),\n high=torch.as_tensor(high, dtype=torch.float32),\n ),\n reinterpreted_batch_ndims,\n )\n\n\ndef ensure_theta_batched(theta: Tensor) -> Tensor:\n r\"\"\"\n Return parameter set theta that has a batch dimension, i.e. has shape\n (1, shape_of_single_theta)\n\n Args:\n theta: parameters $\\theta$, of shape (n) or (1,n)\n Returns:\n Batched parameter set $\\theta$\n \"\"\"\n\n # => ensure theta has shape (1, dim_parameter)\n if theta.ndim == 1:\n theta = theta.unsqueeze(0)\n\n return theta\n\n\ndef ensure_x_batched(x: Tensor) -> Tensor:\n \"\"\"\n Return simulation output x that has a batch dimension, i.e. has shape\n (1, shape_of_single_x).\n\n Args:\n x: simulation output of shape (n) or (1,n).\n Returns:\n Batched simulation output x.\n \"\"\"\n\n # ensure x has shape (1, shape_of_single_x). If shape[0] > 1, we assume that\n # the batch-dimension is missing, even though ndim might be >1 (e.g. for images)\n if x.shape[0] > 1 or x.ndim == 1:\n x = x.unsqueeze(0)\n\n return x\n\n\ndef atleast_2d_many(*arys: Array) -> OneOrMore[Tensor]:\n \"\"\"Return tensors with at least dimension 2.\n\n Tensors or arrays of dimension 0 or 1 will get additional dimension(s) prepended.\n\n Returns:\n Tensor or list of tensors all with dimension >= 2.\n \"\"\"\n if len(arys) == 1:\n arr = arys[0]\n if isinstance(arr, np.ndarray):\n arr = torch.from_numpy(arr)\n return atleast_2d(arr)\n else:\n return [atleast_2d_many(arr) for arr in arys]\n\n\ndef atleast_2d(t: Tensor) -> Tensor:\n return t if t.ndim >= 2 else t.reshape(1, -1)\n\n\ndef maybe_add_batch_dim_to_size(s: torch.Size) -> torch.Size:\n \"\"\"\n Take a torch.Size and add a batch dimension to it if dimensionality of size is 1.\n\n (N) -> (1,N)\n (1,N) -> (1,N)\n (N,M) -> (N,M)\n (1,N,M) -> (1,N,M)\n\n Args:\n s: Input size, possibly without batch dimension.\n\n Returns: Batch size.\n\n \"\"\"\n return s if len(s) >= 2 else torch.Size([1]) + s\n\n\ndef atleast_2d_float32_tensor(arr: Union[Tensor, np.ndarray]) -> Tensor:\n return atleast_2d(torch.as_tensor(arr, dtype=float32))\n\n\ndef batched_first_of_batch(t: Tensor) -> Tensor:\n \"\"\"\n Takes in a tensor of shape (N, M) and outputs tensor of shape (1,M).\n \"\"\"\n return t[:1]\n", "path": "sbi/utils/torchutils.py"}]} | 3,694 | 150 |
gh_patches_debug_39356 | rasdani/github-patches | git_diff | ddionrails__ddionrails-605 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ddionrails.base.mixins uses eval for questionable purposes
### Subject of the issue
eval executes arbitrary Python code and is used by the ModelMixin to get attributes of objects:
https://github.com/ddionrails/ddionrails/blob/0a9b1d2d490ad7fc9ce90d5e6f68bfaf06c06bf3/ddionrails/base/mixins.py#L94-L106
https://github.com/ddionrails/ddionrails/blob/0a9b1d2d490ad7fc9ce90d5e6f68bfaf06c06bf3/ddionrails/base/mixins.py#L134-L143
https://github.com/ddionrails/ddionrails/blob/0a9b1d2d490ad7fc9ce90d5e6f68bfaf06c06bf3/ddionrails/base/mixins.py#L134-L155
Dedicated functionality should be used to do this instead, i.e. getattr().
This might have to be done recursively, since the functions appear to be used to resolve attributes across an arbitrary number of relations, e.g. object.objectAttribute.attribute or object.attribute; a short illustrative sketch follows.
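A minimal sketch of such a getattr-based lookup (illustrative only, not the project's actual code; the helper name is made up):
```python
from functools import reduce


def get_nested_attribute(obj, dotted_path: str, default=None):
    """Resolve "a.b.c"-style paths with getattr instead of eval."""
    try:
        return reduce(getattr, dotted_path.split("."), obj)
    except AttributeError:
        return default


# e.g. get_nested_attribute(variable, "dataset.study.id") instead of
# variable.get_attribute("self.dataset.study.id")
```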
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddionrails/base/mixins.py`
Content:
```
1 """
2 Mixins for DDI on Rails.
3 """
4 from django import forms
5
6 from config.helpers import render_markdown
7
8
9 class ModelMixin:
10 """
11 Default mixins for all classes in DDI on Rails.
12
13 Requires two definition in the ``DOR`` class:
14
15 * io_fields: Fields that are used for the default form and in the default dict.
16 * id_fields: Fields that are used for the get_or_create default method.
17
18 Example:
19
20 ::
21
22 from django.db import models
23 from ddionrails.mixins import ModelMixin
24
25 class Test(models.Model, ModelMixin):
26
27 name = models.CharField(max_length=255, unique=True)
28
29 class DOR:
30 id_fields = ["name"]
31 io_fields = ["name"]
32
33 The default value for DOR is:
34
35 ::
36
37 class DOR:
38 id_fields = ["name"]
39 io_fields = ["name", "label", "description"]
40
41 The ``id_fields`` are also use to construct a default string identifier.
42 It is therefore recommended, to order them from the most general to the
43 most specific one.
44
45 """
46
47 class DOR:
48 id_fields = ["name"]
49 io_fields = ["name", "label", "description"]
50
51 @classmethod
52 def get_or_create(cls, x, lower_strings=True):
53 """
54 Default for the get_or_create based on a dict.
55
56 The method uses only relevant identifiers based on ``DOR.id_fields``.
57
58 By default, all strings are set to lower case (option ``lower_strings``).
59 """
60 definition = {key: x[key] for key in cls.DOR.id_fields}
61 for key, value in definition.items():
62 if value.__class__ == str and lower_strings:
63 definition[key] = value.lower()
64 return cls.objects.get_or_create(**definition)[0]
65
66 @classmethod
67 def get(cls, x):
68 """
69 Default for the get_or_create based on a dict.
70
71 The method uses only relevant identifiers based on ``DOR.id_fields``.
72 """
73 try:
74 definition = {key: x[key] for key in cls.DOR.id_fields}
75 result = cls.objects.get(**definition)
76 except cls.DoesNotExist:
77 result = None
78 return result
79
80 @classmethod
81 def default_form(cls):
82 """
83 Creates a default form for all attributes defined in ``DOR.io_fields``.
84 """
85
86 class DefaultForm(forms.ModelForm):
87 class Meta:
88 model = cls
89 fields = cls.DOR.io_fields
90
91 return DefaultForm
92
93 def to_dict(self):
94 """
95 Uses the ``DOR.io_fields`` attribute to generate a default
96 dict object for the current instance.
97 """
98 default_dict = {}
99 for field_name in self.DOR.io_fields:
100 field = eval("self.%s" % field_name)
101 try:
102 default_dict[field_name] = field.pk
103 except AttributeError:
104 default_dict[field_name] = field
105 return default_dict
106
107 def title(self):
108 """
109 Default for the title. It first looks for a valid label, next for a
110 valid name, and otherwise returns an empty string.
111 """
112 try:
113 name = self.name
114 except AttributeError:
115 name = ""
116 try:
117 label = self.label
118 except AttributeError:
119 label = ""
120 return name if label == "" else label
121
122 def html_description(self):
123 """
124 Uses the ddionrails Markdown parser (ddionrails.helpers) to render
125 the description into HTML.
126 """
127 try:
128 html = render_markdown(self.description)
129 except AttributeError:
130 html = ""
131 return html
132
133 def get_attribute(self, attribute, default=None):
134 """
135 Example::
136
137 study_id = variable.get_attribute("self.dataset.study.id")
138 """
139 try:
140 return eval(attribute)
141 except AttributeError:
142 return default
143
144 def string_id(self):
145 a = []
146 for field_name in self.DOR.id_fields:
147 field = eval("self.%s" % field_name)
148 try:
149 s = field.string_id()
150 except AttributeError:
151 s = str(field)
152 a.append(s)
153 a = "/".join(a)
154 return a
155
156 def __str__(self):
157 return self.string_id()
158
159
160 class AdminMixin:
161 """ A mixin for ModelAdmins to query related models via methods """
162
163 @staticmethod
164 def study_name(obj):
165 """ Return the name of the related study """
166 try:
167 return obj.study.name
168 except AttributeError:
169 return None
170
171 @staticmethod
172 def period_name(obj):
173 """ Return the name of the related period """
174 try:
175 return obj.period.name
176 except AttributeError:
177 return None
178
179 @staticmethod
180 def analysis_unit_name(obj):
181 """ Return the name of the related analysis_unit """
182 try:
183 return obj.analysis_unit.name
184 except AttributeError:
185 return None
186
187 @staticmethod
188 def dataset_name(obj):
189 """ Return the name of the related dataset """
190 try:
191 return obj.dataset.name
192 except AttributeError:
193 return None
194
195 @staticmethod
196 def dataset_study_name(obj):
197 """ Return the name of the related dataset.study """
198 try:
199 return obj.dataset.study.name
200 except AttributeError:
201 return None
202
203 @staticmethod
204 def instrument_name(obj):
205 """ Return the name of the related instrument """
206 try:
207 return obj.instrument.name
208 except AttributeError:
209 return None
210
211 @staticmethod
212 def instrument_study_name(obj):
213 """ Return the name of the related instrument.study """
214 try:
215 return obj.instrument.study.name
216 except AttributeError:
217 return None
218
219 @staticmethod
220 def basket_name(obj):
221 """ Return the name of the related basket """
222 try:
223 return obj.basket.name
224 except AttributeError:
225 return None
226
227 @staticmethod
228 def basket_study_name(obj):
229 """ Return the name of the related basket.study """
230 try:
231 return obj.basket.study.name
232 except AttributeError:
233 return None
234
235 @staticmethod
236 def user_name(obj):
237 """ Return the name of the related basket.user """
238 try:
239 return obj.basket.user.username
240 except AttributeError:
241 return None
242
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ddionrails/base/mixins.py b/ddionrails/base/mixins.py
--- a/ddionrails/base/mixins.py
+++ b/ddionrails/base/mixins.py
@@ -1,6 +1,9 @@
-"""
-Mixins for DDI on Rails.
-"""
+# -*- coding: utf-8 -*-
+
+""" Mixins for ddionrails.base app """
+
+from typing import Dict
+
from django import forms
from config.helpers import render_markdown
@@ -64,14 +67,14 @@
return cls.objects.get_or_create(**definition)[0]
@classmethod
- def get(cls, x):
+ def get(cls, parameters: Dict):
"""
Default for the get_or_create based on a dict.
The method uses only relevant identifiers based on ``DOR.id_fields``.
"""
try:
- definition = {key: x[key] for key in cls.DOR.id_fields}
+ definition = {key: parameters[key] for key in cls.DOR.id_fields}
result = cls.objects.get(**definition)
except cls.DoesNotExist:
result = None
@@ -90,19 +93,19 @@
return DefaultForm
- def to_dict(self):
+ def to_dict(self) -> Dict:
"""
Uses the ``DOR.io_fields`` attribute to generate a default
dict object for the current instance.
"""
- default_dict = {}
- for field_name in self.DOR.io_fields:
- field = eval("self.%s" % field_name)
+ dictionary = dict()
+ for field in self.DOR.io_fields:
+ value = getattr(self, field)
try:
- default_dict[field_name] = field.pk
+ dictionary[field] = value.pk
except AttributeError:
- default_dict[field_name] = field
- return default_dict
+ dictionary[field] = value
+ return dictionary
def title(self):
"""
@@ -130,31 +133,16 @@
html = ""
return html
- def get_attribute(self, attribute, default=None):
- """
- Example::
-
- study_id = variable.get_attribute("self.dataset.study.id")
- """
- try:
- return eval(attribute)
- except AttributeError:
- return default
-
- def string_id(self):
- a = []
- for field_name in self.DOR.id_fields:
- field = eval("self.%s" % field_name)
+ def __str__(self):
+ """ Returns a string reprensentation of the instance, using DOR.id_fields """
+ result = []
+ for field in self.DOR.id_fields:
+ value = getattr(self, field)
try:
- s = field.string_id()
+ result.append(value.string_id())
except AttributeError:
- s = str(field)
- a.append(s)
- a = "/".join(a)
- return a
-
- def __str__(self):
- return self.string_id()
+ result.append(str(value))
+ return "/".join(result)
class AdminMixin:
| {"golden_diff": "diff --git a/ddionrails/base/mixins.py b/ddionrails/base/mixins.py\n--- a/ddionrails/base/mixins.py\n+++ b/ddionrails/base/mixins.py\n@@ -1,6 +1,9 @@\n-\"\"\"\n-Mixins for DDI on Rails.\n-\"\"\"\n+# -*- coding: utf-8 -*-\n+\n+\"\"\" Mixins for ddionrails.base app \"\"\"\n+\n+from typing import Dict\n+\n from django import forms\n \n from config.helpers import render_markdown\n@@ -64,14 +67,14 @@\n return cls.objects.get_or_create(**definition)[0]\n \n @classmethod\n- def get(cls, x):\n+ def get(cls, parameters: Dict):\n \"\"\"\n Default for the get_or_create based on a dict.\n \n The method uses only relevant identifiers based on ``DOR.id_fields``.\n \"\"\"\n try:\n- definition = {key: x[key] for key in cls.DOR.id_fields}\n+ definition = {key: parameters[key] for key in cls.DOR.id_fields}\n result = cls.objects.get(**definition)\n except cls.DoesNotExist:\n result = None\n@@ -90,19 +93,19 @@\n \n return DefaultForm\n \n- def to_dict(self):\n+ def to_dict(self) -> Dict:\n \"\"\"\n Uses the ``DOR.io_fields`` attribute to generate a default\n dict object for the current instance.\n \"\"\"\n- default_dict = {}\n- for field_name in self.DOR.io_fields:\n- field = eval(\"self.%s\" % field_name)\n+ dictionary = dict()\n+ for field in self.DOR.io_fields:\n+ value = getattr(self, field)\n try:\n- default_dict[field_name] = field.pk\n+ dictionary[field] = value.pk\n except AttributeError:\n- default_dict[field_name] = field\n- return default_dict\n+ dictionary[field] = value\n+ return dictionary\n \n def title(self):\n \"\"\"\n@@ -130,31 +133,16 @@\n html = \"\"\n return html\n \n- def get_attribute(self, attribute, default=None):\n- \"\"\"\n- Example::\n-\n- study_id = variable.get_attribute(\"self.dataset.study.id\")\n- \"\"\"\n- try:\n- return eval(attribute)\n- except AttributeError:\n- return default\n-\n- def string_id(self):\n- a = []\n- for field_name in self.DOR.id_fields:\n- field = eval(\"self.%s\" % field_name)\n+ def __str__(self):\n+ \"\"\" Returns a string reprensentation of the instance, using DOR.id_fields \"\"\"\n+ result = []\n+ for field in self.DOR.id_fields:\n+ value = getattr(self, field)\n try:\n- s = field.string_id()\n+ result.append(value.string_id())\n except AttributeError:\n- s = str(field)\n- a.append(s)\n- a = \"/\".join(a)\n- return a\n-\n- def __str__(self):\n- return self.string_id()\n+ result.append(str(value))\n+ return \"/\".join(result)\n \n \n class AdminMixin:\n", "issue": "ddionrails.base.mixins uses eval for questionable purposes\n### Subject of the issue\r\n\r\neval executes arbitrary python code and is used by the modelMixin to get attributes of objects:\r\n\r\nhttps://github.com/ddionrails/ddionrails/blob/0a9b1d2d490ad7fc9ce90d5e6f68bfaf06c06bf3/ddionrails/base/mixins.py#L94-L106\r\n\r\nhttps://github.com/ddionrails/ddionrails/blob/0a9b1d2d490ad7fc9ce90d5e6f68bfaf06c06bf3/ddionrails/base/mixins.py#L134-L143\r\n\r\nhttps://github.com/ddionrails/ddionrails/blob/0a9b1d2d490ad7fc9ce90d5e6f68bfaf06c06bf3/ddionrails/base/mixins.py#L134-L155\r\n\r\nDedicated functionality should be used to do this instead, i.e. getattr().\r\nThis might have to be done recursively since the functions seem to have been used to get attributes over an arbitrary number of relations, e.g. 
object.objectAttribute.attribute or object.attribute.\n", "before_files": [{"content": "\"\"\"\nMixins for DDI on Rails.\n\"\"\"\nfrom django import forms\n\nfrom config.helpers import render_markdown\n\n\nclass ModelMixin:\n \"\"\"\n Default mixins for all classes in DDI on Rails.\n\n Requires two definition in the ``DOR`` class:\n\n * io_fields: Fields that are used for the default form and in the default dict.\n * id_fields: Fields that are used for the get_or_create default method.\n\n Example:\n\n ::\n\n from django.db import models\n from ddionrails.mixins import ModelMixin\n\n class Test(models.Model, ModelMixin):\n\n name = models.CharField(max_length=255, unique=True)\n\n class DOR:\n id_fields = [\"name\"]\n io_fields = [\"name\"]\n\n The default value for DOR is:\n\n ::\n\n class DOR:\n id_fields = [\"name\"]\n io_fields = [\"name\", \"label\", \"description\"]\n\n The ``id_fields`` are also use to construct a default string identifier.\n It is therefore recommended, to order them from the most general to the\n most specific one.\n\n \"\"\"\n\n class DOR:\n id_fields = [\"name\"]\n io_fields = [\"name\", \"label\", \"description\"]\n\n @classmethod\n def get_or_create(cls, x, lower_strings=True):\n \"\"\"\n Default for the get_or_create based on a dict.\n\n The method uses only relevant identifiers based on ``DOR.id_fields``.\n\n By default, all strings are set to lower case (option ``lower_strings``).\n \"\"\"\n definition = {key: x[key] for key in cls.DOR.id_fields}\n for key, value in definition.items():\n if value.__class__ == str and lower_strings:\n definition[key] = value.lower()\n return cls.objects.get_or_create(**definition)[0]\n\n @classmethod\n def get(cls, x):\n \"\"\"\n Default for the get_or_create based on a dict.\n\n The method uses only relevant identifiers based on ``DOR.id_fields``.\n \"\"\"\n try:\n definition = {key: x[key] for key in cls.DOR.id_fields}\n result = cls.objects.get(**definition)\n except cls.DoesNotExist:\n result = None\n return result\n\n @classmethod\n def default_form(cls):\n \"\"\"\n Creates a default form for all attributes defined in ``DOR.io_fields``.\n \"\"\"\n\n class DefaultForm(forms.ModelForm):\n class Meta:\n model = cls\n fields = cls.DOR.io_fields\n\n return DefaultForm\n\n def to_dict(self):\n \"\"\"\n Uses the ``DOR.io_fields`` attribute to generate a default\n dict object for the current instance.\n \"\"\"\n default_dict = {}\n for field_name in self.DOR.io_fields:\n field = eval(\"self.%s\" % field_name)\n try:\n default_dict[field_name] = field.pk\n except AttributeError:\n default_dict[field_name] = field\n return default_dict\n\n def title(self):\n \"\"\"\n Default for the title. 
It first looks for a valid label, next for a\n valid name, and otherwise returns an empty string.\n \"\"\"\n try:\n name = self.name\n except AttributeError:\n name = \"\"\n try:\n label = self.label\n except AttributeError:\n label = \"\"\n return name if label == \"\" else label\n\n def html_description(self):\n \"\"\"\n Uses the ddionrails Markdown parser (ddionrails.helpers) to render\n the description into HTML.\n \"\"\"\n try:\n html = render_markdown(self.description)\n except AttributeError:\n html = \"\"\n return html\n\n def get_attribute(self, attribute, default=None):\n \"\"\"\n Example::\n\n study_id = variable.get_attribute(\"self.dataset.study.id\")\n \"\"\"\n try:\n return eval(attribute)\n except AttributeError:\n return default\n\n def string_id(self):\n a = []\n for field_name in self.DOR.id_fields:\n field = eval(\"self.%s\" % field_name)\n try:\n s = field.string_id()\n except AttributeError:\n s = str(field)\n a.append(s)\n a = \"/\".join(a)\n return a\n\n def __str__(self):\n return self.string_id()\n\n\nclass AdminMixin:\n \"\"\" A mixin for ModelAdmins to query related models via methods \"\"\"\n\n @staticmethod\n def study_name(obj):\n \"\"\" Return the name of the related study \"\"\"\n try:\n return obj.study.name\n except AttributeError:\n return None\n\n @staticmethod\n def period_name(obj):\n \"\"\" Return the name of the related period \"\"\"\n try:\n return obj.period.name\n except AttributeError:\n return None\n\n @staticmethod\n def analysis_unit_name(obj):\n \"\"\" Return the name of the related analysis_unit \"\"\"\n try:\n return obj.analysis_unit.name\n except AttributeError:\n return None\n\n @staticmethod\n def dataset_name(obj):\n \"\"\" Return the name of the related dataset \"\"\"\n try:\n return obj.dataset.name\n except AttributeError:\n return None\n\n @staticmethod\n def dataset_study_name(obj):\n \"\"\" Return the name of the related dataset.study \"\"\"\n try:\n return obj.dataset.study.name\n except AttributeError:\n return None\n\n @staticmethod\n def instrument_name(obj):\n \"\"\" Return the name of the related instrument \"\"\"\n try:\n return obj.instrument.name\n except AttributeError:\n return None\n\n @staticmethod\n def instrument_study_name(obj):\n \"\"\" Return the name of the related instrument.study \"\"\"\n try:\n return obj.instrument.study.name\n except AttributeError:\n return None\n\n @staticmethod\n def basket_name(obj):\n \"\"\" Return the name of the related basket \"\"\"\n try:\n return obj.basket.name\n except AttributeError:\n return None\n\n @staticmethod\n def basket_study_name(obj):\n \"\"\" Return the name of the related basket.study \"\"\"\n try:\n return obj.basket.study.name\n except AttributeError:\n return None\n\n @staticmethod\n def user_name(obj):\n \"\"\" Return the name of the related basket.user \"\"\"\n try:\n return obj.basket.user.username\n except AttributeError:\n return None\n", "path": "ddionrails/base/mixins.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\" Mixins for ddionrails.base app \"\"\"\n\nfrom typing import Dict\n\nfrom django import forms\n\nfrom config.helpers import render_markdown\n\n\nclass ModelMixin:\n \"\"\"\n Default mixins for all classes in DDI on Rails.\n\n Requires two definition in the ``DOR`` class:\n\n * io_fields: Fields that are used for the default form and in the default dict.\n * id_fields: Fields that are used for the get_or_create default method.\n\n Example:\n\n ::\n\n from django.db import models\n from ddionrails.mixins import 
ModelMixin\n\n class Test(models.Model, ModelMixin):\n\n name = models.CharField(max_length=255, unique=True)\n\n class DOR:\n id_fields = [\"name\"]\n io_fields = [\"name\"]\n\n The default value for DOR is:\n\n ::\n\n class DOR:\n id_fields = [\"name\"]\n io_fields = [\"name\", \"label\", \"description\"]\n\n The ``id_fields`` are also use to construct a default string identifier.\n It is therefore recommended, to order them from the most general to the\n most specific one.\n\n \"\"\"\n\n class DOR:\n id_fields = [\"name\"]\n io_fields = [\"name\", \"label\", \"description\"]\n\n @classmethod\n def get_or_create(cls, x, lower_strings=True):\n \"\"\"\n Default for the get_or_create based on a dict.\n\n The method uses only relevant identifiers based on ``DOR.id_fields``.\n\n By default, all strings are set to lower case (option ``lower_strings``).\n \"\"\"\n definition = {key: x[key] for key in cls.DOR.id_fields}\n for key, value in definition.items():\n if value.__class__ == str and lower_strings:\n definition[key] = value.lower()\n return cls.objects.get_or_create(**definition)[0]\n\n @classmethod\n def get(cls, parameters: Dict):\n \"\"\"\n Default for the get_or_create based on a dict.\n\n The method uses only relevant identifiers based on ``DOR.id_fields``.\n \"\"\"\n try:\n definition = {key: parameters[key] for key in cls.DOR.id_fields}\n result = cls.objects.get(**definition)\n except cls.DoesNotExist:\n result = None\n return result\n\n @classmethod\n def default_form(cls):\n \"\"\"\n Creates a default form for all attributes defined in ``DOR.io_fields``.\n \"\"\"\n\n class DefaultForm(forms.ModelForm):\n class Meta:\n model = cls\n fields = cls.DOR.io_fields\n\n return DefaultForm\n\n def to_dict(self) -> Dict:\n \"\"\"\n Uses the ``DOR.io_fields`` attribute to generate a default\n dict object for the current instance.\n \"\"\"\n dictionary = dict()\n for field in self.DOR.io_fields:\n value = getattr(self, field)\n try:\n dictionary[field] = value.pk\n except AttributeError:\n dictionary[field] = value\n return dictionary\n\n def title(self):\n \"\"\"\n Default for the title. 
It first looks for a valid label, next for a\n valid name, and otherwise returns an empty string.\n \"\"\"\n try:\n name = self.name\n except AttributeError:\n name = \"\"\n try:\n label = self.label\n except AttributeError:\n label = \"\"\n return name if label == \"\" else label\n\n def html_description(self):\n \"\"\"\n Uses the ddionrails Markdown parser (ddionrails.helpers) to render\n the description into HTML.\n \"\"\"\n try:\n html = render_markdown(self.description)\n except AttributeError:\n html = \"\"\n return html\n\n def __str__(self):\n \"\"\" Returns a string reprensentation of the instance, using DOR.id_fields \"\"\"\n result = []\n for field in self.DOR.id_fields:\n value = getattr(self, field)\n try:\n result.append(value.string_id())\n except AttributeError:\n result.append(str(value))\n return \"/\".join(result)\n\n\nclass AdminMixin:\n \"\"\" A mixin for ModelAdmins to query related models via methods \"\"\"\n\n @staticmethod\n def study_name(obj):\n \"\"\" Return the name of the related study \"\"\"\n try:\n return obj.study.name\n except AttributeError:\n return None\n\n @staticmethod\n def period_name(obj):\n \"\"\" Return the name of the related period \"\"\"\n try:\n return obj.period.name\n except AttributeError:\n return None\n\n @staticmethod\n def analysis_unit_name(obj):\n \"\"\" Return the name of the related analysis_unit \"\"\"\n try:\n return obj.analysis_unit.name\n except AttributeError:\n return None\n\n @staticmethod\n def dataset_name(obj):\n \"\"\" Return the name of the related dataset \"\"\"\n try:\n return obj.dataset.name\n except AttributeError:\n return None\n\n @staticmethod\n def dataset_study_name(obj):\n \"\"\" Return the name of the related dataset.study \"\"\"\n try:\n return obj.dataset.study.name\n except AttributeError:\n return None\n\n @staticmethod\n def instrument_name(obj):\n \"\"\" Return the name of the related instrument \"\"\"\n try:\n return obj.instrument.name\n except AttributeError:\n return None\n\n @staticmethod\n def instrument_study_name(obj):\n \"\"\" Return the name of the related instrument.study \"\"\"\n try:\n return obj.instrument.study.name\n except AttributeError:\n return None\n\n @staticmethod\n def basket_name(obj):\n \"\"\" Return the name of the related basket \"\"\"\n try:\n return obj.basket.name\n except AttributeError:\n return None\n\n @staticmethod\n def basket_study_name(obj):\n \"\"\" Return the name of the related basket.study \"\"\"\n try:\n return obj.basket.study.name\n except AttributeError:\n return None\n\n @staticmethod\n def user_name(obj):\n \"\"\" Return the name of the related basket.user \"\"\"\n try:\n return obj.basket.user.username\n except AttributeError:\n return None\n", "path": "ddionrails/base/mixins.py"}]} | 2,503 | 694 |
gh_patches_debug_36345 | rasdani/github-patches | git_diff | goauthentik__authentik-4908 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Tasks API `start_timestamp` is not set, `task_duration` is wrong
**Describe the bug**
I noticed that the System Tasks durations in the web UI are not right. The source of that data is the tasks API, and the duration is calculated incorrectly there. `start_timestamp` is also not part of the task schema, so it's not possible to calculate the duration yourself either.
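For context, the expected calculation is simply: record the wall-clock time when a run starts and subtract it when the run finishes. A rough sketch of that pattern with a Celery task class (illustrative only, not authentik's actual code; `before_start` is the task hook available in Celery >= 5.2):

```python
from timeit import default_timer
from typing import Optional

from celery import Task


class TimedTask(Task):
    """Illustrative only: track per-run wall-clock duration."""

    _start: Optional[float] = None

    def before_start(self, task_id, args, kwargs):
        # called by the worker right before the task body runs
        self._start = default_timer()
        return super().before_start(task_id, args, kwargs)

    def after_return(self, status, retval, task_id, args, kwargs, einfo):
        start = self._start or default_timer()
        duration = max(default_timer() - start, 0)
        # persist `start` and `duration` here so the API can expose them
        return super().after_return(status, retval, task_id, args, kwargs, einfo)
```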
**To Reproduce**
Check System Tasks page in admin interface, or `GET /admin/system_tasks/`
**Expected behavior**
Duration is correct, and `start_timestamp` is set
**Screenshots**


**Version and Deployment (please complete the following information):**
- authentik version: `gh-main`
- Deployment: docker-compose
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `authentik/events/monitored_tasks.py`
Content:
```
1 """Monitored tasks"""
2 from dataclasses import dataclass, field
3 from datetime import datetime
4 from enum import Enum
5 from timeit import default_timer
6 from typing import Any, Optional
7
8 from celery import Task
9 from django.core.cache import cache
10 from django.utils.translation import gettext_lazy as _
11 from structlog.stdlib import get_logger
12
13 from authentik.events.apps import GAUGE_TASKS
14 from authentik.events.models import Event, EventAction
15 from authentik.lib.utils.errors import exception_to_string
16
17 LOGGER = get_logger()
18 CACHE_KEY_PREFIX = "goauthentik.io/events/tasks/"
19
20
21 class TaskResultStatus(Enum):
22 """Possible states of tasks"""
23
24 SUCCESSFUL = 1
25 WARNING = 2
26 ERROR = 4
27 UNKNOWN = 8
28
29
30 @dataclass
31 class TaskResult:
32 """Result of a task run, this class is created by the task itself
33 and used by self.set_status"""
34
35 status: TaskResultStatus
36
37 messages: list[str] = field(default_factory=list)
38
39 # Optional UID used in cache for tasks that run in different instances
40 uid: Optional[str] = field(default=None)
41
42 def with_error(self, exc: Exception) -> "TaskResult":
43 """Since errors might not always be pickle-able, set the traceback"""
44 self.messages.append(str(exc))
45 return self
46
47
48 @dataclass
49 class TaskInfo:
50 """Info about a task run"""
51
52 task_name: str
53 start_timestamp: float
54 finish_timestamp: float
55 finish_time: datetime
56
57 result: TaskResult
58
59 task_call_module: str
60 task_call_func: str
61 task_call_args: list[Any] = field(default_factory=list)
62 task_call_kwargs: dict[str, Any] = field(default_factory=dict)
63
64 task_description: Optional[str] = field(default=None)
65
66 @staticmethod
67 def all() -> dict[str, "TaskInfo"]:
68 """Get all TaskInfo objects"""
69 return cache.get_many(cache.keys(CACHE_KEY_PREFIX + "*"))
70
71 @staticmethod
72 def by_name(name: str) -> Optional["TaskInfo"]:
73 """Get TaskInfo Object by name"""
74 return cache.get(CACHE_KEY_PREFIX + name, None)
75
76 def delete(self):
77 """Delete task info from cache"""
78 return cache.delete(CACHE_KEY_PREFIX + self.task_name)
79
80 def update_metrics(self):
81 """Update prometheus metrics"""
82 start = default_timer()
83 if hasattr(self, "start_timestamp"):
84 start = self.start_timestamp
85 try:
86 duration = max(self.finish_timestamp - start, 0)
87 except TypeError:
88 duration = 0
89 GAUGE_TASKS.labels(
90 task_name=self.task_name,
91 task_uid=self.result.uid or "",
92 status=self.result.status,
93 ).set(duration)
94
95 def save(self, timeout_hours=6):
96 """Save task into cache"""
97 key = CACHE_KEY_PREFIX + self.task_name
98 if self.result.uid:
99 key += f":{self.result.uid}"
100 self.task_name += f":{self.result.uid}"
101 self.update_metrics()
102 cache.set(key, self, timeout=timeout_hours * 60 * 60)
103
104
105 class MonitoredTask(Task):
106 """Task which can save its state to the cache"""
107
108 # For tasks that should only be listed if they failed, set this to False
109 save_on_success: bool
110
111 _result: Optional[TaskResult]
112
113 _uid: Optional[str]
114
115 def __init__(self, *args, **kwargs) -> None:
116 super().__init__(*args, **kwargs)
117 self.save_on_success = True
118 self._uid = None
119 self._result = None
120 self.result_timeout_hours = 6
121 self.start = default_timer()
122
123 def set_uid(self, uid: str):
124 """Set UID, so in the case of an unexpected error its saved correctly"""
125 self._uid = uid
126
127 def set_status(self, result: TaskResult):
128 """Set result for current run, will overwrite previous result."""
129 self._result = result
130
131 # pylint: disable=too-many-arguments
132 def after_return(self, status, retval, task_id, args: list[Any], kwargs: dict[str, Any], einfo):
133 super().after_return(status, retval, task_id, args, kwargs, einfo=einfo)
134 if not self._result:
135 return
136 if not self._result.uid:
137 self._result.uid = self._uid
138 info = TaskInfo(
139 task_name=self.__name__,
140 task_description=self.__doc__,
141 start_timestamp=self.start,
142 finish_timestamp=default_timer(),
143 finish_time=datetime.now(),
144 result=self._result,
145 task_call_module=self.__module__,
146 task_call_func=self.__name__,
147 task_call_args=args,
148 task_call_kwargs=kwargs,
149 )
150 if self._result.status == TaskResultStatus.SUCCESSFUL and not self.save_on_success:
151 info.delete()
152 return
153 info.save(self.result_timeout_hours)
154
155 # pylint: disable=too-many-arguments
156 def on_failure(self, exc, task_id, args, kwargs, einfo):
157 super().on_failure(exc, task_id, args, kwargs, einfo=einfo)
158 if not self._result:
159 self._result = TaskResult(status=TaskResultStatus.ERROR, messages=[str(exc)])
160 if not self._result.uid:
161 self._result.uid = self._uid
162 TaskInfo(
163 task_name=self.__name__,
164 task_description=self.__doc__,
165 start_timestamp=self.start,
166 finish_timestamp=default_timer(),
167 finish_time=datetime.now(),
168 result=self._result,
169 task_call_module=self.__module__,
170 task_call_func=self.__name__,
171 task_call_args=args,
172 task_call_kwargs=kwargs,
173 ).save(self.result_timeout_hours)
174 Event.new(
175 EventAction.SYSTEM_TASK_EXCEPTION,
176 message=f"Task {self.__name__} encountered an error: {exception_to_string(exc)}",
177 ).save()
178
179 def run(self, *args, **kwargs):
180 raise NotImplementedError
181
182
183 def prefill_task(func):
184 """Ensure a task's details are always in cache, so it can always be triggered via API"""
185 status = TaskInfo.by_name(func.__name__)
186 if status:
187 return func
188 TaskInfo(
189 task_name=func.__name__,
190 task_description=func.__doc__,
191 result=TaskResult(TaskResultStatus.UNKNOWN, messages=[_("Task has not been run yet.")]),
192 task_call_module=func.__module__,
193 task_call_func=func.__name__,
194 # We don't have real values for these attributes but they cannot be null
195 start_timestamp=default_timer(),
196 finish_timestamp=default_timer(),
197 finish_time=datetime.now(),
198 ).save(86400)
199 LOGGER.debug("prefilled task", task_name=func.__name__)
200 return func
201
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/authentik/events/monitored_tasks.py b/authentik/events/monitored_tasks.py
--- a/authentik/events/monitored_tasks.py
+++ b/authentik/events/monitored_tasks.py
@@ -111,6 +111,7 @@
_result: Optional[TaskResult]
_uid: Optional[str]
+ start: Optional[float] = None
def __init__(self, *args, **kwargs) -> None:
super().__init__(*args, **kwargs)
@@ -118,7 +119,6 @@
self._uid = None
self._result = None
self.result_timeout_hours = 6
- self.start = default_timer()
def set_uid(self, uid: str):
"""Set UID, so in the case of an unexpected error its saved correctly"""
@@ -128,6 +128,10 @@
"""Set result for current run, will overwrite previous result."""
self._result = result
+ def before_start(self, task_id, args, kwargs):
+ self.start = default_timer()
+ return super().before_start(task_id, args, kwargs)
+
# pylint: disable=too-many-arguments
def after_return(self, status, retval, task_id, args: list[Any], kwargs: dict[str, Any], einfo):
super().after_return(status, retval, task_id, args, kwargs, einfo=einfo)
@@ -138,7 +142,7 @@
info = TaskInfo(
task_name=self.__name__,
task_description=self.__doc__,
- start_timestamp=self.start,
+ start_timestamp=self.start or default_timer(),
finish_timestamp=default_timer(),
finish_time=datetime.now(),
result=self._result,
@@ -162,7 +166,7 @@
TaskInfo(
task_name=self.__name__,
task_description=self.__doc__,
- start_timestamp=self.start,
+ start_timestamp=self.start or default_timer(),
finish_timestamp=default_timer(),
finish_time=datetime.now(),
result=self._result,
| {"golden_diff": "diff --git a/authentik/events/monitored_tasks.py b/authentik/events/monitored_tasks.py\n--- a/authentik/events/monitored_tasks.py\n+++ b/authentik/events/monitored_tasks.py\n@@ -111,6 +111,7 @@\n _result: Optional[TaskResult]\n \n _uid: Optional[str]\n+ start: Optional[float] = None\n \n def __init__(self, *args, **kwargs) -> None:\n super().__init__(*args, **kwargs)\n@@ -118,7 +119,6 @@\n self._uid = None\n self._result = None\n self.result_timeout_hours = 6\n- self.start = default_timer()\n \n def set_uid(self, uid: str):\n \"\"\"Set UID, so in the case of an unexpected error its saved correctly\"\"\"\n@@ -128,6 +128,10 @@\n \"\"\"Set result for current run, will overwrite previous result.\"\"\"\n self._result = result\n \n+ def before_start(self, task_id, args, kwargs):\n+ self.start = default_timer()\n+ return super().before_start(task_id, args, kwargs)\n+\n # pylint: disable=too-many-arguments\n def after_return(self, status, retval, task_id, args: list[Any], kwargs: dict[str, Any], einfo):\n super().after_return(status, retval, task_id, args, kwargs, einfo=einfo)\n@@ -138,7 +142,7 @@\n info = TaskInfo(\n task_name=self.__name__,\n task_description=self.__doc__,\n- start_timestamp=self.start,\n+ start_timestamp=self.start or default_timer(),\n finish_timestamp=default_timer(),\n finish_time=datetime.now(),\n result=self._result,\n@@ -162,7 +166,7 @@\n TaskInfo(\n task_name=self.__name__,\n task_description=self.__doc__,\n- start_timestamp=self.start,\n+ start_timestamp=self.start or default_timer(),\n finish_timestamp=default_timer(),\n finish_time=datetime.now(),\n result=self._result,\n", "issue": "Tasks API `start_timestamp` is not set, `task_duration` is wrong\n**Describe the bug**\r\nI noticed that System Tasks durations in the web UI are not right. The source of that data is the tasks API, and it's being calculated incorrectly there. 
`start_timestamp` is not part of the task schema, so it's not possible to calculate it yourself either.\r\n\r\n**To Reproduce**\r\nCheck System Tasks page in admin interface, or `GET /admin/system_tasks/`\r\n\r\n**Expected behavior**\r\nDuration is correct, and `start_timestamp` is set\r\n\r\n**Screenshots**\r\n\r\n\r\n\r\n**Version and Deployment (please complete the following information):**\r\n - authentik version: `gh-main`\r\n - Deployment: docker-compose\r\n\n", "before_files": [{"content": "\"\"\"Monitored tasks\"\"\"\nfrom dataclasses import dataclass, field\nfrom datetime import datetime\nfrom enum import Enum\nfrom timeit import default_timer\nfrom typing import Any, Optional\n\nfrom celery import Task\nfrom django.core.cache import cache\nfrom django.utils.translation import gettext_lazy as _\nfrom structlog.stdlib import get_logger\n\nfrom authentik.events.apps import GAUGE_TASKS\nfrom authentik.events.models import Event, EventAction\nfrom authentik.lib.utils.errors import exception_to_string\n\nLOGGER = get_logger()\nCACHE_KEY_PREFIX = \"goauthentik.io/events/tasks/\"\n\n\nclass TaskResultStatus(Enum):\n \"\"\"Possible states of tasks\"\"\"\n\n SUCCESSFUL = 1\n WARNING = 2\n ERROR = 4\n UNKNOWN = 8\n\n\n@dataclass\nclass TaskResult:\n \"\"\"Result of a task run, this class is created by the task itself\n and used by self.set_status\"\"\"\n\n status: TaskResultStatus\n\n messages: list[str] = field(default_factory=list)\n\n # Optional UID used in cache for tasks that run in different instances\n uid: Optional[str] = field(default=None)\n\n def with_error(self, exc: Exception) -> \"TaskResult\":\n \"\"\"Since errors might not always be pickle-able, set the traceback\"\"\"\n self.messages.append(str(exc))\n return self\n\n\n@dataclass\nclass TaskInfo:\n \"\"\"Info about a task run\"\"\"\n\n task_name: str\n start_timestamp: float\n finish_timestamp: float\n finish_time: datetime\n\n result: TaskResult\n\n task_call_module: str\n task_call_func: str\n task_call_args: list[Any] = field(default_factory=list)\n task_call_kwargs: dict[str, Any] = field(default_factory=dict)\n\n task_description: Optional[str] = field(default=None)\n\n @staticmethod\n def all() -> dict[str, \"TaskInfo\"]:\n \"\"\"Get all TaskInfo objects\"\"\"\n return cache.get_many(cache.keys(CACHE_KEY_PREFIX + \"*\"))\n\n @staticmethod\n def by_name(name: str) -> Optional[\"TaskInfo\"]:\n \"\"\"Get TaskInfo Object by name\"\"\"\n return cache.get(CACHE_KEY_PREFIX + name, None)\n\n def delete(self):\n \"\"\"Delete task info from cache\"\"\"\n return cache.delete(CACHE_KEY_PREFIX + self.task_name)\n\n def update_metrics(self):\n \"\"\"Update prometheus metrics\"\"\"\n start = default_timer()\n if hasattr(self, \"start_timestamp\"):\n start = self.start_timestamp\n try:\n duration = max(self.finish_timestamp - start, 0)\n except TypeError:\n duration = 0\n GAUGE_TASKS.labels(\n task_name=self.task_name,\n task_uid=self.result.uid or \"\",\n status=self.result.status,\n ).set(duration)\n\n def save(self, timeout_hours=6):\n \"\"\"Save task into cache\"\"\"\n key = CACHE_KEY_PREFIX + self.task_name\n if self.result.uid:\n key += f\":{self.result.uid}\"\n self.task_name += f\":{self.result.uid}\"\n self.update_metrics()\n cache.set(key, self, timeout=timeout_hours * 60 * 60)\n\n\nclass MonitoredTask(Task):\n \"\"\"Task which can save its state to the cache\"\"\"\n\n # For tasks that should only be listed if they failed, set this to False\n save_on_success: bool\n\n _result: Optional[TaskResult]\n\n _uid: 
Optional[str]\n\n def __init__(self, *args, **kwargs) -> None:\n super().__init__(*args, **kwargs)\n self.save_on_success = True\n self._uid = None\n self._result = None\n self.result_timeout_hours = 6\n self.start = default_timer()\n\n def set_uid(self, uid: str):\n \"\"\"Set UID, so in the case of an unexpected error its saved correctly\"\"\"\n self._uid = uid\n\n def set_status(self, result: TaskResult):\n \"\"\"Set result for current run, will overwrite previous result.\"\"\"\n self._result = result\n\n # pylint: disable=too-many-arguments\n def after_return(self, status, retval, task_id, args: list[Any], kwargs: dict[str, Any], einfo):\n super().after_return(status, retval, task_id, args, kwargs, einfo=einfo)\n if not self._result:\n return\n if not self._result.uid:\n self._result.uid = self._uid\n info = TaskInfo(\n task_name=self.__name__,\n task_description=self.__doc__,\n start_timestamp=self.start,\n finish_timestamp=default_timer(),\n finish_time=datetime.now(),\n result=self._result,\n task_call_module=self.__module__,\n task_call_func=self.__name__,\n task_call_args=args,\n task_call_kwargs=kwargs,\n )\n if self._result.status == TaskResultStatus.SUCCESSFUL and not self.save_on_success:\n info.delete()\n return\n info.save(self.result_timeout_hours)\n\n # pylint: disable=too-many-arguments\n def on_failure(self, exc, task_id, args, kwargs, einfo):\n super().on_failure(exc, task_id, args, kwargs, einfo=einfo)\n if not self._result:\n self._result = TaskResult(status=TaskResultStatus.ERROR, messages=[str(exc)])\n if not self._result.uid:\n self._result.uid = self._uid\n TaskInfo(\n task_name=self.__name__,\n task_description=self.__doc__,\n start_timestamp=self.start,\n finish_timestamp=default_timer(),\n finish_time=datetime.now(),\n result=self._result,\n task_call_module=self.__module__,\n task_call_func=self.__name__,\n task_call_args=args,\n task_call_kwargs=kwargs,\n ).save(self.result_timeout_hours)\n Event.new(\n EventAction.SYSTEM_TASK_EXCEPTION,\n message=f\"Task {self.__name__} encountered an error: {exception_to_string(exc)}\",\n ).save()\n\n def run(self, *args, **kwargs):\n raise NotImplementedError\n\n\ndef prefill_task(func):\n \"\"\"Ensure a task's details are always in cache, so it can always be triggered via API\"\"\"\n status = TaskInfo.by_name(func.__name__)\n if status:\n return func\n TaskInfo(\n task_name=func.__name__,\n task_description=func.__doc__,\n result=TaskResult(TaskResultStatus.UNKNOWN, messages=[_(\"Task has not been run yet.\")]),\n task_call_module=func.__module__,\n task_call_func=func.__name__,\n # We don't have real values for these attributes but they cannot be null\n start_timestamp=default_timer(),\n finish_timestamp=default_timer(),\n finish_time=datetime.now(),\n ).save(86400)\n LOGGER.debug(\"prefilled task\", task_name=func.__name__)\n return func\n", "path": "authentik/events/monitored_tasks.py"}], "after_files": [{"content": "\"\"\"Monitored tasks\"\"\"\nfrom dataclasses import dataclass, field\nfrom datetime import datetime\nfrom enum import Enum\nfrom timeit import default_timer\nfrom typing import Any, Optional\n\nfrom celery import Task\nfrom django.core.cache import cache\nfrom django.utils.translation import gettext_lazy as _\nfrom structlog.stdlib import get_logger\n\nfrom authentik.events.apps import GAUGE_TASKS\nfrom authentik.events.models import Event, EventAction\nfrom authentik.lib.utils.errors import exception_to_string\n\nLOGGER = get_logger()\nCACHE_KEY_PREFIX = \"goauthentik.io/events/tasks/\"\n\n\nclass 
TaskResultStatus(Enum):\n \"\"\"Possible states of tasks\"\"\"\n\n SUCCESSFUL = 1\n WARNING = 2\n ERROR = 4\n UNKNOWN = 8\n\n\n@dataclass\nclass TaskResult:\n \"\"\"Result of a task run, this class is created by the task itself\n and used by self.set_status\"\"\"\n\n status: TaskResultStatus\n\n messages: list[str] = field(default_factory=list)\n\n # Optional UID used in cache for tasks that run in different instances\n uid: Optional[str] = field(default=None)\n\n def with_error(self, exc: Exception) -> \"TaskResult\":\n \"\"\"Since errors might not always be pickle-able, set the traceback\"\"\"\n self.messages.append(str(exc))\n return self\n\n\n@dataclass\nclass TaskInfo:\n \"\"\"Info about a task run\"\"\"\n\n task_name: str\n start_timestamp: float\n finish_timestamp: float\n finish_time: datetime\n\n result: TaskResult\n\n task_call_module: str\n task_call_func: str\n task_call_args: list[Any] = field(default_factory=list)\n task_call_kwargs: dict[str, Any] = field(default_factory=dict)\n\n task_description: Optional[str] = field(default=None)\n\n @staticmethod\n def all() -> dict[str, \"TaskInfo\"]:\n \"\"\"Get all TaskInfo objects\"\"\"\n return cache.get_many(cache.keys(CACHE_KEY_PREFIX + \"*\"))\n\n @staticmethod\n def by_name(name: str) -> Optional[\"TaskInfo\"]:\n \"\"\"Get TaskInfo Object by name\"\"\"\n return cache.get(CACHE_KEY_PREFIX + name, None)\n\n def delete(self):\n \"\"\"Delete task info from cache\"\"\"\n return cache.delete(CACHE_KEY_PREFIX + self.task_name)\n\n def update_metrics(self):\n \"\"\"Update prometheus metrics\"\"\"\n start = default_timer()\n if hasattr(self, \"start_timestamp\"):\n start = self.start_timestamp\n try:\n duration = max(self.finish_timestamp - start, 0)\n except TypeError:\n duration = 0\n GAUGE_TASKS.labels(\n task_name=self.task_name,\n task_uid=self.result.uid or \"\",\n status=self.result.status,\n ).set(duration)\n\n def save(self, timeout_hours=6):\n \"\"\"Save task into cache\"\"\"\n key = CACHE_KEY_PREFIX + self.task_name\n if self.result.uid:\n key += f\":{self.result.uid}\"\n self.task_name += f\":{self.result.uid}\"\n self.update_metrics()\n cache.set(key, self, timeout=timeout_hours * 60 * 60)\n\n\nclass MonitoredTask(Task):\n \"\"\"Task which can save its state to the cache\"\"\"\n\n # For tasks that should only be listed if they failed, set this to False\n save_on_success: bool\n\n _result: Optional[TaskResult]\n\n _uid: Optional[str]\n start: Optional[float] = None\n\n def __init__(self, *args, **kwargs) -> None:\n super().__init__(*args, **kwargs)\n self.save_on_success = True\n self._uid = None\n self._result = None\n self.result_timeout_hours = 6\n\n def set_uid(self, uid: str):\n \"\"\"Set UID, so in the case of an unexpected error its saved correctly\"\"\"\n self._uid = uid\n\n def set_status(self, result: TaskResult):\n \"\"\"Set result for current run, will overwrite previous result.\"\"\"\n self._result = result\n\n def before_start(self, task_id, args, kwargs):\n self.start = default_timer()\n return super().before_start(task_id, args, kwargs)\n\n # pylint: disable=too-many-arguments\n def after_return(self, status, retval, task_id, args: list[Any], kwargs: dict[str, Any], einfo):\n super().after_return(status, retval, task_id, args, kwargs, einfo=einfo)\n if not self._result:\n return\n if not self._result.uid:\n self._result.uid = self._uid\n info = TaskInfo(\n task_name=self.__name__,\n task_description=self.__doc__,\n start_timestamp=self.start or default_timer(),\n finish_timestamp=default_timer(),\n 
finish_time=datetime.now(),\n result=self._result,\n task_call_module=self.__module__,\n task_call_func=self.__name__,\n task_call_args=args,\n task_call_kwargs=kwargs,\n )\n if self._result.status == TaskResultStatus.SUCCESSFUL and not self.save_on_success:\n info.delete()\n return\n info.save(self.result_timeout_hours)\n\n # pylint: disable=too-many-arguments\n def on_failure(self, exc, task_id, args, kwargs, einfo):\n super().on_failure(exc, task_id, args, kwargs, einfo=einfo)\n if not self._result:\n self._result = TaskResult(status=TaskResultStatus.ERROR, messages=[str(exc)])\n if not self._result.uid:\n self._result.uid = self._uid\n TaskInfo(\n task_name=self.__name__,\n task_description=self.__doc__,\n start_timestamp=self.start or default_timer(),\n finish_timestamp=default_timer(),\n finish_time=datetime.now(),\n result=self._result,\n task_call_module=self.__module__,\n task_call_func=self.__name__,\n task_call_args=args,\n task_call_kwargs=kwargs,\n ).save(self.result_timeout_hours)\n Event.new(\n EventAction.SYSTEM_TASK_EXCEPTION,\n message=f\"Task {self.__name__} encountered an error: {exception_to_string(exc)}\",\n ).save()\n\n def run(self, *args, **kwargs):\n raise NotImplementedError\n\n\ndef prefill_task(func):\n \"\"\"Ensure a task's details are always in cache, so it can always be triggered via API\"\"\"\n status = TaskInfo.by_name(func.__name__)\n if status:\n return func\n TaskInfo(\n task_name=func.__name__,\n task_description=func.__doc__,\n result=TaskResult(TaskResultStatus.UNKNOWN, messages=[_(\"Task has not been run yet.\")]),\n task_call_module=func.__module__,\n task_call_func=func.__name__,\n # We don't have real values for these attributes but they cannot be null\n start_timestamp=default_timer(),\n finish_timestamp=default_timer(),\n finish_time=datetime.now(),\n ).save(86400)\n LOGGER.debug(\"prefilled task\", task_name=func.__name__)\n return func\n", "path": "authentik/events/monitored_tasks.py"}]} | 2,504 | 456 |
gh_patches_debug_58734 | rasdani/github-patches | git_diff | goauthentik__authentik-6727 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Certificate requires private key to be used for LDAP server verification in 2023.8.1
**Describe the bug**
CA certificates that don't have a private key do not show up in the list of certificates offered when configuring TLS verification for LDAP sources.
**To Reproduce**
Steps to reproduce the behavior:
1. Import the CA certificate of an LDAP server
2. Configure an LDAP source for that LDAP server
3. During configuration, try to select the CA certificate imported earlier as "TLS Verification Certificate"
4. In the list of certificates, only those with a private key are visible. I obviously don't have the private key of the CA, and I wouldn't want to import it here even if I had it, for security reasons.
**Expected behavior**
Certificates without a private key can be selected when they are only used to validate remote servers.
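For illustration, the API module quoted below already defines a `has_key` filter (and an `include_details` parameter) on the certificate-key pair list, so a client that only needs a verification certificate could simply query without the key filter. A hypothetical request (the endpoint path and token handling are assumptions for this example, not taken from the issue):

```python
import requests

BASE = "https://authentik.example.com/api/v3/crypto/certificatekeypairs/"
HEADERS = {"Authorization": "Bearer <api-token>"}

# all certificate-key pairs, including CA certificates without a private key
all_pairs = requests.get(BASE, headers=HEADERS, params={"include_details": "false"})

# only pairs that actually have a private key (e.g. usable for signing)
signing_pairs = requests.get(BASE, headers=HEADERS, params={"has_key": "true"})
```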
**Screenshots**


**Logs**
N/A
**Version and Deployment (please complete the following information):**
- authentik version: 2023.8.1
- Deployment: docker-compose
**Additional context**
N/A
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `authentik/crypto/api.py`
Content:
```
1 """Crypto API Views"""
2 from datetime import datetime
3 from typing import Optional
4
5 from cryptography.hazmat.backends import default_backend
6 from cryptography.hazmat.primitives.serialization import load_pem_private_key
7 from cryptography.x509 import load_pem_x509_certificate
8 from django.http.response import HttpResponse
9 from django.urls import reverse
10 from django.utils.translation import gettext_lazy as _
11 from django_filters import FilterSet
12 from django_filters.filters import BooleanFilter
13 from drf_spectacular.types import OpenApiTypes
14 from drf_spectacular.utils import OpenApiParameter, OpenApiResponse, extend_schema
15 from rest_framework.decorators import action
16 from rest_framework.exceptions import ValidationError
17 from rest_framework.fields import CharField, DateTimeField, IntegerField, SerializerMethodField
18 from rest_framework.filters import OrderingFilter, SearchFilter
19 from rest_framework.request import Request
20 from rest_framework.response import Response
21 from rest_framework.serializers import ModelSerializer
22 from rest_framework.viewsets import ModelViewSet
23 from structlog.stdlib import get_logger
24
25 from authentik.api.authorization import SecretKeyFilter
26 from authentik.api.decorators import permission_required
27 from authentik.core.api.used_by import UsedByMixin
28 from authentik.core.api.utils import PassiveSerializer
29 from authentik.crypto.apps import MANAGED_KEY
30 from authentik.crypto.builder import CertificateBuilder
31 from authentik.crypto.models import CertificateKeyPair
32 from authentik.events.models import Event, EventAction
33
34 LOGGER = get_logger()
35
36
37 class CertificateKeyPairSerializer(ModelSerializer):
38 """CertificateKeyPair Serializer"""
39
40 fingerprint_sha256 = SerializerMethodField()
41 fingerprint_sha1 = SerializerMethodField()
42
43 cert_expiry = SerializerMethodField()
44 cert_subject = SerializerMethodField()
45 private_key_available = SerializerMethodField()
46 private_key_type = SerializerMethodField()
47
48 certificate_download_url = SerializerMethodField()
49 private_key_download_url = SerializerMethodField()
50
51 @property
52 def _should_include_details(self) -> bool:
53 request: Request = self.context.get("request", None)
54 if not request:
55 return True
56 return str(request.query_params.get("include_details", "true")).lower() == "true"
57
58 def get_fingerprint_sha256(self, instance: CertificateKeyPair) -> Optional[str]:
59 "Get certificate Hash (SHA256)"
60 if not self._should_include_details:
61 return None
62 return instance.fingerprint_sha256
63
64 def get_fingerprint_sha1(self, instance: CertificateKeyPair) -> Optional[str]:
65 "Get certificate Hash (SHA1)"
66 if not self._should_include_details:
67 return None
68 return instance.fingerprint_sha1
69
70 def get_cert_expiry(self, instance: CertificateKeyPair) -> Optional[datetime]:
71 "Get certificate expiry"
72 if not self._should_include_details:
73 return None
74 return DateTimeField().to_representation(instance.certificate.not_valid_after)
75
76 def get_cert_subject(self, instance: CertificateKeyPair) -> Optional[str]:
77 """Get certificate subject as full rfc4514"""
78 if not self._should_include_details:
79 return None
80 return instance.certificate.subject.rfc4514_string()
81
82 def get_private_key_available(self, instance: CertificateKeyPair) -> bool:
83 """Show if this keypair has a private key configured or not"""
84 return instance.key_data != "" and instance.key_data is not None
85
86 def get_private_key_type(self, instance: CertificateKeyPair) -> Optional[str]:
87 """Get the private key's type, if set"""
88 if not self._should_include_details:
89 return None
90 key = instance.private_key
91 if key:
92 return key.__class__.__name__.replace("_", "").lower().replace("privatekey", "")
93 return None
94
95 def get_certificate_download_url(self, instance: CertificateKeyPair) -> str:
96 """Get URL to download certificate"""
97 return (
98 reverse(
99 "authentik_api:certificatekeypair-view-certificate",
100 kwargs={"pk": instance.pk},
101 )
102 + "?download"
103 )
104
105 def get_private_key_download_url(self, instance: CertificateKeyPair) -> str:
106 """Get URL to download private key"""
107 return (
108 reverse(
109 "authentik_api:certificatekeypair-view-private-key",
110 kwargs={"pk": instance.pk},
111 )
112 + "?download"
113 )
114
115 def validate_certificate_data(self, value: str) -> str:
116 """Verify that input is a valid PEM x509 Certificate"""
117 try:
118 # Cast to string to fully load and parse certificate
119 # Prevents issues like https://github.com/goauthentik/authentik/issues/2082
120 str(load_pem_x509_certificate(value.encode("utf-8"), default_backend()))
121 except ValueError as exc:
122 LOGGER.warning("Failed to load certificate", exc=exc)
123 raise ValidationError("Unable to load certificate.")
124 return value
125
126 def validate_key_data(self, value: str) -> str:
127 """Verify that input is a valid PEM Key"""
128 # Since this field is optional, data can be empty.
129 if value != "":
130 try:
131 # Cast to string to fully load and parse certificate
132 # Prevents issues like https://github.com/goauthentik/authentik/issues/2082
133 str(
134 load_pem_private_key(
135 str.encode("\n".join([x.strip() for x in value.split("\n")])),
136 password=None,
137 backend=default_backend(),
138 )
139 )
140 except (ValueError, TypeError) as exc:
141 LOGGER.warning("Failed to load private key", exc=exc)
142 raise ValidationError("Unable to load private key (possibly encrypted?).")
143 return value
144
145 class Meta:
146 model = CertificateKeyPair
147 fields = [
148 "pk",
149 "name",
150 "fingerprint_sha256",
151 "fingerprint_sha1",
152 "certificate_data",
153 "key_data",
154 "cert_expiry",
155 "cert_subject",
156 "private_key_available",
157 "private_key_type",
158 "certificate_download_url",
159 "private_key_download_url",
160 "managed",
161 ]
162 extra_kwargs = {
163 "managed": {"read_only": True},
164 "key_data": {"write_only": True},
165 "certificate_data": {"write_only": True},
166 }
167
168
169 class CertificateDataSerializer(PassiveSerializer):
170 """Get CertificateKeyPair's data"""
171
172 data = CharField(read_only=True)
173
174
175 class CertificateGenerationSerializer(PassiveSerializer):
176 """Certificate generation parameters"""
177
178 common_name = CharField()
179 subject_alt_name = CharField(required=False, allow_blank=True, label=_("Subject-alt name"))
180 validity_days = IntegerField(initial=365)
181
182
183 class CertificateKeyPairFilter(FilterSet):
184 """Filter for certificates"""
185
186 has_key = BooleanFilter(
187 label="Only return certificate-key pairs with keys", method="filter_has_key"
188 )
189
190 def filter_has_key(self, queryset, name, value): # pragma: no cover
191 """Only return certificate-key pairs with keys"""
192 return queryset.exclude(key_data__exact="")
193
194 class Meta:
195 model = CertificateKeyPair
196 fields = ["name", "managed"]
197
198
199 class CertificateKeyPairViewSet(UsedByMixin, ModelViewSet):
200 """CertificateKeyPair Viewset"""
201
202 queryset = CertificateKeyPair.objects.exclude(managed=MANAGED_KEY)
203 serializer_class = CertificateKeyPairSerializer
204 filterset_class = CertificateKeyPairFilter
205 ordering = ["name"]
206 search_fields = ["name"]
207 filter_backends = [SecretKeyFilter, OrderingFilter, SearchFilter]
208
209 @extend_schema(
210 parameters=[
211 # Override the type for `has_key` above
212 OpenApiParameter(
213 "has_key",
214 bool,
215 required=False,
216 description="Only return certificate-key pairs with keys",
217 ),
218 OpenApiParameter("include_details", bool, default=True),
219 ]
220 )
221 def list(self, request, *args, **kwargs):
222 return super().list(request, *args, **kwargs)
223
224 @permission_required(None, ["authentik_crypto.add_certificatekeypair"])
225 @extend_schema(
226 request=CertificateGenerationSerializer(),
227 responses={
228 200: CertificateKeyPairSerializer,
229 400: OpenApiResponse(description="Bad request"),
230 },
231 )
232 @action(detail=False, methods=["POST"])
233 def generate(self, request: Request) -> Response:
234 """Generate a new, self-signed certificate-key pair"""
235 data = CertificateGenerationSerializer(data=request.data)
236 if not data.is_valid():
237 return Response(data.errors, status=400)
238 raw_san = data.validated_data.get("subject_alt_name", "")
239 sans = raw_san.split(",") if raw_san != "" else []
240 builder = CertificateBuilder(data.validated_data["common_name"])
241 builder.build(
242 subject_alt_names=sans,
243 validity_days=int(data.validated_data["validity_days"]),
244 )
245 instance = builder.save()
246 serializer = self.get_serializer(instance)
247 return Response(serializer.data)
248
249 @extend_schema(
250 parameters=[
251 OpenApiParameter(
252 name="download",
253 location=OpenApiParameter.QUERY,
254 type=OpenApiTypes.BOOL,
255 )
256 ],
257 responses={200: CertificateDataSerializer(many=False)},
258 )
259 @action(detail=True, pagination_class=None, filter_backends=[])
260 def view_certificate(self, request: Request, pk: str) -> Response:
261 """Return certificate-key pairs certificate and log access"""
262 certificate: CertificateKeyPair = self.get_object()
263 Event.new( # noqa # nosec
264 EventAction.SECRET_VIEW,
265 secret=certificate,
266 type="certificate",
267 ).from_http(request)
268 if "download" in request.query_params:
269 # Mime type from https://pki-tutorial.readthedocs.io/en/latest/mime.html
270 response = HttpResponse(
271 certificate.certificate_data, content_type="application/x-pem-file"
272 )
273 response[
274 "Content-Disposition"
275 ] = f'attachment; filename="{certificate.name}_certificate.pem"'
276 return response
277 return Response(CertificateDataSerializer({"data": certificate.certificate_data}).data)
278
279 @extend_schema(
280 parameters=[
281 OpenApiParameter(
282 name="download",
283 location=OpenApiParameter.QUERY,
284 type=OpenApiTypes.BOOL,
285 )
286 ],
287 responses={200: CertificateDataSerializer(many=False)},
288 )
289 @action(detail=True, pagination_class=None, filter_backends=[])
290 def view_private_key(self, request: Request, pk: str) -> Response:
291 """Return certificate-key pairs private key and log access"""
292 certificate: CertificateKeyPair = self.get_object()
293 Event.new( # noqa # nosec
294 EventAction.SECRET_VIEW,
295 secret=certificate,
296 type="private_key",
297 ).from_http(request)
298 if "download" in request.query_params:
299 # Mime type from https://pki-tutorial.readthedocs.io/en/latest/mime.html
300 response = HttpResponse(certificate.key_data, content_type="application/x-pem-file")
301 response[
302 "Content-Disposition"
303 ] = f'attachment; filename="{certificate.name}_private_key.pem"'
304 return response
305 return Response(CertificateDataSerializer({"data": certificate.key_data}).data)
306
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/authentik/crypto/api.py b/authentik/crypto/api.py
--- a/authentik/crypto/api.py
+++ b/authentik/crypto/api.py
@@ -189,6 +189,8 @@
def filter_has_key(self, queryset, name, value): # pragma: no cover
"""Only return certificate-key pairs with keys"""
+ if not value:
+ return queryset
return queryset.exclude(key_data__exact="")
class Meta:
| {"golden_diff": "diff --git a/authentik/crypto/api.py b/authentik/crypto/api.py\n--- a/authentik/crypto/api.py\n+++ b/authentik/crypto/api.py\n@@ -189,6 +189,8 @@\n \n def filter_has_key(self, queryset, name, value): # pragma: no cover\n \"\"\"Only return certificate-key pairs with keys\"\"\"\n+ if not value:\n+ return queryset\n return queryset.exclude(key_data__exact=\"\")\n \n class Meta:\n", "issue": "Certificate requires private key to be used for LDAP server verification in 2023.8.1\n**Describe the bug**\r\nCA Certificates that don't have private keys seem to be not showing up in the list of certificates for the configuration of TLS verification for LDAP sources\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n\r\n1. Import the CA certificate of an LDAP server\r\n2. Configure an LDAP source for that LDAP server\r\n3. During configuration, try to select the CA certificate imported earlier as \"TLS Verification Certificate\"\r\n4. In the list of certificates, only those with a private key are visible. I obviously don't have the private key of the CA, and I wouldn't want to import it here even if I had it, for security reasons.\r\n\r\n**Expected behavior**\r\nCertificates without private key can be selected if the purpose is only to validate remote servers.\r\n\r\n**Screenshots**\r\n\r\n\r\n\r\n\r\n**Logs**\r\nN/A\r\n\r\n**Version and Deployment (please complete the following information):**\r\n\r\n- authentik version: 2023.8.1\r\n- Deployment: docker-compose\r\n\r\n**Additional context**\r\nN/A\r\n\n", "before_files": [{"content": "\"\"\"Crypto API Views\"\"\"\nfrom datetime import datetime\nfrom typing import Optional\n\nfrom cryptography.hazmat.backends import default_backend\nfrom cryptography.hazmat.primitives.serialization import load_pem_private_key\nfrom cryptography.x509 import load_pem_x509_certificate\nfrom django.http.response import HttpResponse\nfrom django.urls import reverse\nfrom django.utils.translation import gettext_lazy as _\nfrom django_filters import FilterSet\nfrom django_filters.filters import BooleanFilter\nfrom drf_spectacular.types import OpenApiTypes\nfrom drf_spectacular.utils import OpenApiParameter, OpenApiResponse, extend_schema\nfrom rest_framework.decorators import action\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.fields import CharField, DateTimeField, IntegerField, SerializerMethodField\nfrom rest_framework.filters import OrderingFilter, SearchFilter\nfrom rest_framework.request import Request\nfrom rest_framework.response import Response\nfrom rest_framework.serializers import ModelSerializer\nfrom rest_framework.viewsets import ModelViewSet\nfrom structlog.stdlib import get_logger\n\nfrom authentik.api.authorization import SecretKeyFilter\nfrom authentik.api.decorators import permission_required\nfrom authentik.core.api.used_by import UsedByMixin\nfrom authentik.core.api.utils import PassiveSerializer\nfrom authentik.crypto.apps import MANAGED_KEY\nfrom authentik.crypto.builder import CertificateBuilder\nfrom authentik.crypto.models import CertificateKeyPair\nfrom authentik.events.models import Event, EventAction\n\nLOGGER = get_logger()\n\n\nclass CertificateKeyPairSerializer(ModelSerializer):\n \"\"\"CertificateKeyPair Serializer\"\"\"\n\n fingerprint_sha256 = SerializerMethodField()\n fingerprint_sha1 = SerializerMethodField()\n\n cert_expiry = SerializerMethodField()\n cert_subject = SerializerMethodField()\n private_key_available = SerializerMethodField()\n private_key_type = 
SerializerMethodField()\n\n certificate_download_url = SerializerMethodField()\n private_key_download_url = SerializerMethodField()\n\n @property\n def _should_include_details(self) -> bool:\n request: Request = self.context.get(\"request\", None)\n if not request:\n return True\n return str(request.query_params.get(\"include_details\", \"true\")).lower() == \"true\"\n\n def get_fingerprint_sha256(self, instance: CertificateKeyPair) -> Optional[str]:\n \"Get certificate Hash (SHA256)\"\n if not self._should_include_details:\n return None\n return instance.fingerprint_sha256\n\n def get_fingerprint_sha1(self, instance: CertificateKeyPair) -> Optional[str]:\n \"Get certificate Hash (SHA1)\"\n if not self._should_include_details:\n return None\n return instance.fingerprint_sha1\n\n def get_cert_expiry(self, instance: CertificateKeyPair) -> Optional[datetime]:\n \"Get certificate expiry\"\n if not self._should_include_details:\n return None\n return DateTimeField().to_representation(instance.certificate.not_valid_after)\n\n def get_cert_subject(self, instance: CertificateKeyPair) -> Optional[str]:\n \"\"\"Get certificate subject as full rfc4514\"\"\"\n if not self._should_include_details:\n return None\n return instance.certificate.subject.rfc4514_string()\n\n def get_private_key_available(self, instance: CertificateKeyPair) -> bool:\n \"\"\"Show if this keypair has a private key configured or not\"\"\"\n return instance.key_data != \"\" and instance.key_data is not None\n\n def get_private_key_type(self, instance: CertificateKeyPair) -> Optional[str]:\n \"\"\"Get the private key's type, if set\"\"\"\n if not self._should_include_details:\n return None\n key = instance.private_key\n if key:\n return key.__class__.__name__.replace(\"_\", \"\").lower().replace(\"privatekey\", \"\")\n return None\n\n def get_certificate_download_url(self, instance: CertificateKeyPair) -> str:\n \"\"\"Get URL to download certificate\"\"\"\n return (\n reverse(\n \"authentik_api:certificatekeypair-view-certificate\",\n kwargs={\"pk\": instance.pk},\n )\n + \"?download\"\n )\n\n def get_private_key_download_url(self, instance: CertificateKeyPair) -> str:\n \"\"\"Get URL to download private key\"\"\"\n return (\n reverse(\n \"authentik_api:certificatekeypair-view-private-key\",\n kwargs={\"pk\": instance.pk},\n )\n + \"?download\"\n )\n\n def validate_certificate_data(self, value: str) -> str:\n \"\"\"Verify that input is a valid PEM x509 Certificate\"\"\"\n try:\n # Cast to string to fully load and parse certificate\n # Prevents issues like https://github.com/goauthentik/authentik/issues/2082\n str(load_pem_x509_certificate(value.encode(\"utf-8\"), default_backend()))\n except ValueError as exc:\n LOGGER.warning(\"Failed to load certificate\", exc=exc)\n raise ValidationError(\"Unable to load certificate.\")\n return value\n\n def validate_key_data(self, value: str) -> str:\n \"\"\"Verify that input is a valid PEM Key\"\"\"\n # Since this field is optional, data can be empty.\n if value != \"\":\n try:\n # Cast to string to fully load and parse certificate\n # Prevents issues like https://github.com/goauthentik/authentik/issues/2082\n str(\n load_pem_private_key(\n str.encode(\"\\n\".join([x.strip() for x in value.split(\"\\n\")])),\n password=None,\n backend=default_backend(),\n )\n )\n except (ValueError, TypeError) as exc:\n LOGGER.warning(\"Failed to load private key\", exc=exc)\n raise ValidationError(\"Unable to load private key (possibly encrypted?).\")\n return value\n\n class Meta:\n model = 
CertificateKeyPair\n fields = [\n \"pk\",\n \"name\",\n \"fingerprint_sha256\",\n \"fingerprint_sha1\",\n \"certificate_data\",\n \"key_data\",\n \"cert_expiry\",\n \"cert_subject\",\n \"private_key_available\",\n \"private_key_type\",\n \"certificate_download_url\",\n \"private_key_download_url\",\n \"managed\",\n ]\n extra_kwargs = {\n \"managed\": {\"read_only\": True},\n \"key_data\": {\"write_only\": True},\n \"certificate_data\": {\"write_only\": True},\n }\n\n\nclass CertificateDataSerializer(PassiveSerializer):\n \"\"\"Get CertificateKeyPair's data\"\"\"\n\n data = CharField(read_only=True)\n\n\nclass CertificateGenerationSerializer(PassiveSerializer):\n \"\"\"Certificate generation parameters\"\"\"\n\n common_name = CharField()\n subject_alt_name = CharField(required=False, allow_blank=True, label=_(\"Subject-alt name\"))\n validity_days = IntegerField(initial=365)\n\n\nclass CertificateKeyPairFilter(FilterSet):\n \"\"\"Filter for certificates\"\"\"\n\n has_key = BooleanFilter(\n label=\"Only return certificate-key pairs with keys\", method=\"filter_has_key\"\n )\n\n def filter_has_key(self, queryset, name, value): # pragma: no cover\n \"\"\"Only return certificate-key pairs with keys\"\"\"\n return queryset.exclude(key_data__exact=\"\")\n\n class Meta:\n model = CertificateKeyPair\n fields = [\"name\", \"managed\"]\n\n\nclass CertificateKeyPairViewSet(UsedByMixin, ModelViewSet):\n \"\"\"CertificateKeyPair Viewset\"\"\"\n\n queryset = CertificateKeyPair.objects.exclude(managed=MANAGED_KEY)\n serializer_class = CertificateKeyPairSerializer\n filterset_class = CertificateKeyPairFilter\n ordering = [\"name\"]\n search_fields = [\"name\"]\n filter_backends = [SecretKeyFilter, OrderingFilter, SearchFilter]\n\n @extend_schema(\n parameters=[\n # Override the type for `has_key` above\n OpenApiParameter(\n \"has_key\",\n bool,\n required=False,\n description=\"Only return certificate-key pairs with keys\",\n ),\n OpenApiParameter(\"include_details\", bool, default=True),\n ]\n )\n def list(self, request, *args, **kwargs):\n return super().list(request, *args, **kwargs)\n\n @permission_required(None, [\"authentik_crypto.add_certificatekeypair\"])\n @extend_schema(\n request=CertificateGenerationSerializer(),\n responses={\n 200: CertificateKeyPairSerializer,\n 400: OpenApiResponse(description=\"Bad request\"),\n },\n )\n @action(detail=False, methods=[\"POST\"])\n def generate(self, request: Request) -> Response:\n \"\"\"Generate a new, self-signed certificate-key pair\"\"\"\n data = CertificateGenerationSerializer(data=request.data)\n if not data.is_valid():\n return Response(data.errors, status=400)\n raw_san = data.validated_data.get(\"subject_alt_name\", \"\")\n sans = raw_san.split(\",\") if raw_san != \"\" else []\n builder = CertificateBuilder(data.validated_data[\"common_name\"])\n builder.build(\n subject_alt_names=sans,\n validity_days=int(data.validated_data[\"validity_days\"]),\n )\n instance = builder.save()\n serializer = self.get_serializer(instance)\n return Response(serializer.data)\n\n @extend_schema(\n parameters=[\n OpenApiParameter(\n name=\"download\",\n location=OpenApiParameter.QUERY,\n type=OpenApiTypes.BOOL,\n )\n ],\n responses={200: CertificateDataSerializer(many=False)},\n )\n @action(detail=True, pagination_class=None, filter_backends=[])\n def view_certificate(self, request: Request, pk: str) -> Response:\n \"\"\"Return certificate-key pairs certificate and log access\"\"\"\n certificate: CertificateKeyPair = self.get_object()\n Event.new( # noqa # nosec\n 
EventAction.SECRET_VIEW,\n secret=certificate,\n type=\"certificate\",\n ).from_http(request)\n if \"download\" in request.query_params:\n # Mime type from https://pki-tutorial.readthedocs.io/en/latest/mime.html\n response = HttpResponse(\n certificate.certificate_data, content_type=\"application/x-pem-file\"\n )\n response[\n \"Content-Disposition\"\n ] = f'attachment; filename=\"{certificate.name}_certificate.pem\"'\n return response\n return Response(CertificateDataSerializer({\"data\": certificate.certificate_data}).data)\n\n @extend_schema(\n parameters=[\n OpenApiParameter(\n name=\"download\",\n location=OpenApiParameter.QUERY,\n type=OpenApiTypes.BOOL,\n )\n ],\n responses={200: CertificateDataSerializer(many=False)},\n )\n @action(detail=True, pagination_class=None, filter_backends=[])\n def view_private_key(self, request: Request, pk: str) -> Response:\n \"\"\"Return certificate-key pairs private key and log access\"\"\"\n certificate: CertificateKeyPair = self.get_object()\n Event.new( # noqa # nosec\n EventAction.SECRET_VIEW,\n secret=certificate,\n type=\"private_key\",\n ).from_http(request)\n if \"download\" in request.query_params:\n # Mime type from https://pki-tutorial.readthedocs.io/en/latest/mime.html\n response = HttpResponse(certificate.key_data, content_type=\"application/x-pem-file\")\n response[\n \"Content-Disposition\"\n ] = f'attachment; filename=\"{certificate.name}_private_key.pem\"'\n return response\n return Response(CertificateDataSerializer({\"data\": certificate.key_data}).data)\n", "path": "authentik/crypto/api.py"}], "after_files": [{"content": "\"\"\"Crypto API Views\"\"\"\nfrom datetime import datetime\nfrom typing import Optional\n\nfrom cryptography.hazmat.backends import default_backend\nfrom cryptography.hazmat.primitives.serialization import load_pem_private_key\nfrom cryptography.x509 import load_pem_x509_certificate\nfrom django.http.response import HttpResponse\nfrom django.urls import reverse\nfrom django.utils.translation import gettext_lazy as _\nfrom django_filters import FilterSet\nfrom django_filters.filters import BooleanFilter\nfrom drf_spectacular.types import OpenApiTypes\nfrom drf_spectacular.utils import OpenApiParameter, OpenApiResponse, extend_schema\nfrom rest_framework.decorators import action\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.fields import CharField, DateTimeField, IntegerField, SerializerMethodField\nfrom rest_framework.filters import OrderingFilter, SearchFilter\nfrom rest_framework.request import Request\nfrom rest_framework.response import Response\nfrom rest_framework.serializers import ModelSerializer\nfrom rest_framework.viewsets import ModelViewSet\nfrom structlog.stdlib import get_logger\n\nfrom authentik.api.authorization import SecretKeyFilter\nfrom authentik.api.decorators import permission_required\nfrom authentik.core.api.used_by import UsedByMixin\nfrom authentik.core.api.utils import PassiveSerializer\nfrom authentik.crypto.apps import MANAGED_KEY\nfrom authentik.crypto.builder import CertificateBuilder\nfrom authentik.crypto.models import CertificateKeyPair\nfrom authentik.events.models import Event, EventAction\n\nLOGGER = get_logger()\n\n\nclass CertificateKeyPairSerializer(ModelSerializer):\n \"\"\"CertificateKeyPair Serializer\"\"\"\n\n fingerprint_sha256 = SerializerMethodField()\n fingerprint_sha1 = SerializerMethodField()\n\n cert_expiry = SerializerMethodField()\n cert_subject = SerializerMethodField()\n private_key_available = SerializerMethodField()\n 
private_key_type = SerializerMethodField()\n\n certificate_download_url = SerializerMethodField()\n private_key_download_url = SerializerMethodField()\n\n @property\n def _should_include_details(self) -> bool:\n request: Request = self.context.get(\"request\", None)\n if not request:\n return True\n return str(request.query_params.get(\"include_details\", \"true\")).lower() == \"true\"\n\n def get_fingerprint_sha256(self, instance: CertificateKeyPair) -> Optional[str]:\n \"Get certificate Hash (SHA256)\"\n if not self._should_include_details:\n return None\n return instance.fingerprint_sha256\n\n def get_fingerprint_sha1(self, instance: CertificateKeyPair) -> Optional[str]:\n \"Get certificate Hash (SHA1)\"\n if not self._should_include_details:\n return None\n return instance.fingerprint_sha1\n\n def get_cert_expiry(self, instance: CertificateKeyPair) -> Optional[datetime]:\n \"Get certificate expiry\"\n if not self._should_include_details:\n return None\n return DateTimeField().to_representation(instance.certificate.not_valid_after)\n\n def get_cert_subject(self, instance: CertificateKeyPair) -> Optional[str]:\n \"\"\"Get certificate subject as full rfc4514\"\"\"\n if not self._should_include_details:\n return None\n return instance.certificate.subject.rfc4514_string()\n\n def get_private_key_available(self, instance: CertificateKeyPair) -> bool:\n \"\"\"Show if this keypair has a private key configured or not\"\"\"\n return instance.key_data != \"\" and instance.key_data is not None\n\n def get_private_key_type(self, instance: CertificateKeyPair) -> Optional[str]:\n \"\"\"Get the private key's type, if set\"\"\"\n if not self._should_include_details:\n return None\n key = instance.private_key\n if key:\n return key.__class__.__name__.replace(\"_\", \"\").lower().replace(\"privatekey\", \"\")\n return None\n\n def get_certificate_download_url(self, instance: CertificateKeyPair) -> str:\n \"\"\"Get URL to download certificate\"\"\"\n return (\n reverse(\n \"authentik_api:certificatekeypair-view-certificate\",\n kwargs={\"pk\": instance.pk},\n )\n + \"?download\"\n )\n\n def get_private_key_download_url(self, instance: CertificateKeyPair) -> str:\n \"\"\"Get URL to download private key\"\"\"\n return (\n reverse(\n \"authentik_api:certificatekeypair-view-private-key\",\n kwargs={\"pk\": instance.pk},\n )\n + \"?download\"\n )\n\n def validate_certificate_data(self, value: str) -> str:\n \"\"\"Verify that input is a valid PEM x509 Certificate\"\"\"\n try:\n # Cast to string to fully load and parse certificate\n # Prevents issues like https://github.com/goauthentik/authentik/issues/2082\n str(load_pem_x509_certificate(value.encode(\"utf-8\"), default_backend()))\n except ValueError as exc:\n LOGGER.warning(\"Failed to load certificate\", exc=exc)\n raise ValidationError(\"Unable to load certificate.\")\n return value\n\n def validate_key_data(self, value: str) -> str:\n \"\"\"Verify that input is a valid PEM Key\"\"\"\n # Since this field is optional, data can be empty.\n if value != \"\":\n try:\n # Cast to string to fully load and parse certificate\n # Prevents issues like https://github.com/goauthentik/authentik/issues/2082\n str(\n load_pem_private_key(\n str.encode(\"\\n\".join([x.strip() for x in value.split(\"\\n\")])),\n password=None,\n backend=default_backend(),\n )\n )\n except (ValueError, TypeError) as exc:\n LOGGER.warning(\"Failed to load private key\", exc=exc)\n raise ValidationError(\"Unable to load private key (possibly encrypted?).\")\n return value\n\n class Meta:\n 
model = CertificateKeyPair\n fields = [\n \"pk\",\n \"name\",\n \"fingerprint_sha256\",\n \"fingerprint_sha1\",\n \"certificate_data\",\n \"key_data\",\n \"cert_expiry\",\n \"cert_subject\",\n \"private_key_available\",\n \"private_key_type\",\n \"certificate_download_url\",\n \"private_key_download_url\",\n \"managed\",\n ]\n extra_kwargs = {\n \"managed\": {\"read_only\": True},\n \"key_data\": {\"write_only\": True},\n \"certificate_data\": {\"write_only\": True},\n }\n\n\nclass CertificateDataSerializer(PassiveSerializer):\n \"\"\"Get CertificateKeyPair's data\"\"\"\n\n data = CharField(read_only=True)\n\n\nclass CertificateGenerationSerializer(PassiveSerializer):\n \"\"\"Certificate generation parameters\"\"\"\n\n common_name = CharField()\n subject_alt_name = CharField(required=False, allow_blank=True, label=_(\"Subject-alt name\"))\n validity_days = IntegerField(initial=365)\n\n\nclass CertificateKeyPairFilter(FilterSet):\n \"\"\"Filter for certificates\"\"\"\n\n has_key = BooleanFilter(\n label=\"Only return certificate-key pairs with keys\", method=\"filter_has_key\"\n )\n\n def filter_has_key(self, queryset, name, value): # pragma: no cover\n \"\"\"Only return certificate-key pairs with keys\"\"\"\n if not value:\n return queryset\n return queryset.exclude(key_data__exact=\"\")\n\n class Meta:\n model = CertificateKeyPair\n fields = [\"name\", \"managed\"]\n\n\nclass CertificateKeyPairViewSet(UsedByMixin, ModelViewSet):\n \"\"\"CertificateKeyPair Viewset\"\"\"\n\n queryset = CertificateKeyPair.objects.exclude(managed=MANAGED_KEY)\n serializer_class = CertificateKeyPairSerializer\n filterset_class = CertificateKeyPairFilter\n ordering = [\"name\"]\n search_fields = [\"name\"]\n filter_backends = [SecretKeyFilter, OrderingFilter, SearchFilter]\n\n @extend_schema(\n parameters=[\n # Override the type for `has_key` above\n OpenApiParameter(\n \"has_key\",\n bool,\n required=False,\n description=\"Only return certificate-key pairs with keys\",\n ),\n OpenApiParameter(\"include_details\", bool, default=True),\n ]\n )\n def list(self, request, *args, **kwargs):\n return super().list(request, *args, **kwargs)\n\n @permission_required(None, [\"authentik_crypto.add_certificatekeypair\"])\n @extend_schema(\n request=CertificateGenerationSerializer(),\n responses={\n 200: CertificateKeyPairSerializer,\n 400: OpenApiResponse(description=\"Bad request\"),\n },\n )\n @action(detail=False, methods=[\"POST\"])\n def generate(self, request: Request) -> Response:\n \"\"\"Generate a new, self-signed certificate-key pair\"\"\"\n data = CertificateGenerationSerializer(data=request.data)\n if not data.is_valid():\n return Response(data.errors, status=400)\n raw_san = data.validated_data.get(\"subject_alt_name\", \"\")\n sans = raw_san.split(\",\") if raw_san != \"\" else []\n builder = CertificateBuilder(data.validated_data[\"common_name\"])\n builder.build(\n subject_alt_names=sans,\n validity_days=int(data.validated_data[\"validity_days\"]),\n )\n instance = builder.save()\n serializer = self.get_serializer(instance)\n return Response(serializer.data)\n\n @extend_schema(\n parameters=[\n OpenApiParameter(\n name=\"download\",\n location=OpenApiParameter.QUERY,\n type=OpenApiTypes.BOOL,\n )\n ],\n responses={200: CertificateDataSerializer(many=False)},\n )\n @action(detail=True, pagination_class=None, filter_backends=[])\n def view_certificate(self, request: Request, pk: str) -> Response:\n \"\"\"Return certificate-key pairs certificate and log access\"\"\"\n certificate: CertificateKeyPair = 
self.get_object()\n Event.new( # noqa # nosec\n EventAction.SECRET_VIEW,\n secret=certificate,\n type=\"certificate\",\n ).from_http(request)\n if \"download\" in request.query_params:\n # Mime type from https://pki-tutorial.readthedocs.io/en/latest/mime.html\n response = HttpResponse(\n certificate.certificate_data, content_type=\"application/x-pem-file\"\n )\n response[\n \"Content-Disposition\"\n ] = f'attachment; filename=\"{certificate.name}_certificate.pem\"'\n return response\n return Response(CertificateDataSerializer({\"data\": certificate.certificate_data}).data)\n\n @extend_schema(\n parameters=[\n OpenApiParameter(\n name=\"download\",\n location=OpenApiParameter.QUERY,\n type=OpenApiTypes.BOOL,\n )\n ],\n responses={200: CertificateDataSerializer(many=False)},\n )\n @action(detail=True, pagination_class=None, filter_backends=[])\n def view_private_key(self, request: Request, pk: str) -> Response:\n \"\"\"Return certificate-key pairs private key and log access\"\"\"\n certificate: CertificateKeyPair = self.get_object()\n Event.new( # noqa # nosec\n EventAction.SECRET_VIEW,\n secret=certificate,\n type=\"private_key\",\n ).from_http(request)\n if \"download\" in request.query_params:\n # Mime type from https://pki-tutorial.readthedocs.io/en/latest/mime.html\n response = HttpResponse(certificate.key_data, content_type=\"application/x-pem-file\")\n response[\n \"Content-Disposition\"\n ] = f'attachment; filename=\"{certificate.name}_private_key.pem\"'\n return response\n return Response(CertificateDataSerializer({\"data\": certificate.key_data}).data)\n", "path": "authentik/crypto/api.py"}]} | 3,832 | 106 |
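For readability, the fix captured in the row above can be restated as a short sketch; it is taken directly from that row's golden diff (authentik/crypto/api.py), not an independent implementation. The only change is the early return when the `has_key` flag is false, so certificate-only keypairs are no longer hidden.

```python
# Patched django-filter method from the golden diff above.
# The added guard makes `has_key=false` (or unset) a no-op instead of
# still excluding keypairs that have no private key.
def filter_has_key(self, queryset, name, value):  # pragma: no cover
    """Only return certificate-key pairs with keys"""
    if not value:
        return queryset
    return queryset.exclude(key_data__exact="")
```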
gh_patches_debug_8216 | rasdani/github-patches | git_diff | huggingface__accelerate-50 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Mismatch between `accelerate config` cli and `default_config.yaml`
The generated `default_config.yaml` does not match what was entered in `accelerate config`.
Here are my cli outputs and `default_config.yaml`
cli outputs
```
In which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 0
Which type of machine are you using? ([0] No distributed training, [1] multi-GPU, [2] TPU): 1
How many different machines will you use (use more than 1 for multi-node training)? [1]: 2
What is the rank of this machine (from 0 to the number of machines - 1 )? [0]: 1
What is the IP address of the machine that will host the main process? 10.29.150.50
What is the port you will use to communicate with the main process? 2333
How many processes in total will you use? [1]: 6
Do you wish to use FP16 (mixed precision)? [yes/NO]: yes
```
`default_config.yaml`
```
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
fp16: true
machine_rank: 1
main_process_ip: 2333
main_process_port: null
main_training_function: main
num_machines: 2
num_processes: 6
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/accelerate/commands/config/cluster.py`
Content:
```
1 #!/usr/bin/env python
2
3 # Copyright 2021 The HuggingFace Team. All rights reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 from accelerate.state import ComputeEnvironment, DistributedType
18
19 from .config_args import ClusterConfig
20 from .config_utils import _ask_field, _convert_distributed_mode, _convert_yes_no_to_bool
21
22
23 def get_cluster_input():
24 distributed_type = _ask_field(
25 "Which type of machine are you using? ([0] No distributed training, [1] multi-GPU, [2] TPU): ",
26 _convert_distributed_mode,
27 error_message="Please enter 0, 1 or 2.",
28 )
29
30 machine_rank = 0
31 num_machines = 1
32 main_process_ip = None
33 main_process_port = None
34 if distributed_type == DistributedType.MULTI_GPU:
35 num_machines = _ask_field(
36 "How many different machines will you use (use more than 1 for multi-node training)? [1]: ",
37 lambda x: int(x),
38 default=1,
39 )
40 if num_machines > 1:
41 machine_rank = _ask_field(
42 "What is the rank of this machine (from 0 to the number of machines - 1 )? [0]: ",
43 lambda x: int(x),
44 default=0,
45 )
46 main_process_ip = _ask_field(
47 "What is the IP address of the machine that will host the main process? ",
48 )
49 main_process_ip = _ask_field(
50 "What is the port you will use to communicate with the main process? ",
51 lambda x: int(x),
52 )
53 if distributed_type == DistributedType.TPU:
54 main_training_function = _ask_field(
55 "What is the name of the function in your script that should be launched in all parallel scripts? [main]: ",
56 default="main",
57 )
58 else:
59 main_training_function = "main"
60
61 num_processes = _ask_field(
62 "How many processes in total will you use? [1]: ",
63 lambda x: int(x),
64 default=1,
65 error_message="Please enter an integer.",
66 )
67
68 if distributed_type != DistributedType.TPU:
69 fp16 = _ask_field(
70 "Do you wish to use FP16 (mixed precision)? [yes/NO]: ",
71 _convert_yes_no_to_bool,
72 default=False,
73 error_message="Please enter yes or no.",
74 )
75 else:
76 fp16 = False
77
78 return ClusterConfig(
79 compute_environment=ComputeEnvironment.LOCAL_MACHINE,
80 distributed_type=distributed_type,
81 num_processes=num_processes,
82 fp16=fp16,
83 machine_rank=machine_rank,
84 num_machines=num_machines,
85 main_process_ip=main_process_ip,
86 main_process_port=main_process_port,
87 main_training_function=main_training_function,
88 )
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/accelerate/commands/config/cluster.py b/src/accelerate/commands/config/cluster.py
--- a/src/accelerate/commands/config/cluster.py
+++ b/src/accelerate/commands/config/cluster.py
@@ -46,7 +46,7 @@
main_process_ip = _ask_field(
"What is the IP address of the machine that will host the main process? ",
)
- main_process_ip = _ask_field(
+ main_process_port = _ask_field(
"What is the port you will use to communicate with the main process? ",
lambda x: int(x),
)
| {"golden_diff": "diff --git a/src/accelerate/commands/config/cluster.py b/src/accelerate/commands/config/cluster.py\n--- a/src/accelerate/commands/config/cluster.py\n+++ b/src/accelerate/commands/config/cluster.py\n@@ -46,7 +46,7 @@\n main_process_ip = _ask_field(\n \"What is the IP address of the machine that will host the main process? \",\n )\n- main_process_ip = _ask_field(\n+ main_process_port = _ask_field(\n \"What is the port you will use to communicate with the main process? \",\n lambda x: int(x),\n )\n", "issue": "Mismatch between `accelerate config` cli and `default_config.yaml`\nThe generated `default_config.yaml` is mismatch with `accelerate config`.\r\n\r\nHere are my cli outputs and `default_config.yaml`\r\n\r\ncli outputs\r\n\r\n```\r\nIn which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 0\r\nWhich type of machine are you using? ([0] No distributed training, [1] multi-GPU, [2] TPU): 1\r\nHow many different machines will you use (use more than 1 for multi-node training)? [1]: 2\r\nWhat is the rank of this machine (from 0 to the number of machines - 1 )? [0]: 1\r\nWhat is the IP address of the machine that will host the main process? 10.29.150.50\r\nWhat is the port you will use to communicate with the main process? 2333\r\nHow many processes in total will you use? [1]: 6\r\nDo you wish to use FP16 (mixed precision)? [yes/NO]: yes\r\n\r\n```\r\n\r\n`default_config.yaml`\r\n```\r\ncompute_environment: LOCAL_MACHINE\r\ndistributed_type: MULTI_GPU\r\nfp16: true\r\nmachine_rank: 1\r\nmain_process_ip: 2333\r\nmain_process_port: null\r\nmain_training_function: main\r\nnum_machines: 2\r\nnum_processes: 6\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# Copyright 2021 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom accelerate.state import ComputeEnvironment, DistributedType\n\nfrom .config_args import ClusterConfig\nfrom .config_utils import _ask_field, _convert_distributed_mode, _convert_yes_no_to_bool\n\n\ndef get_cluster_input():\n distributed_type = _ask_field(\n \"Which type of machine are you using? ([0] No distributed training, [1] multi-GPU, [2] TPU): \",\n _convert_distributed_mode,\n error_message=\"Please enter 0, 1 or 2.\",\n )\n\n machine_rank = 0\n num_machines = 1\n main_process_ip = None\n main_process_port = None\n if distributed_type == DistributedType.MULTI_GPU:\n num_machines = _ask_field(\n \"How many different machines will you use (use more than 1 for multi-node training)? [1]: \",\n lambda x: int(x),\n default=1,\n )\n if num_machines > 1:\n machine_rank = _ask_field(\n \"What is the rank of this machine (from 0 to the number of machines - 1 )? [0]: \",\n lambda x: int(x),\n default=0,\n )\n main_process_ip = _ask_field(\n \"What is the IP address of the machine that will host the main process? \",\n )\n main_process_ip = _ask_field(\n \"What is the port you will use to communicate with the main process? 
\",\n lambda x: int(x),\n )\n if distributed_type == DistributedType.TPU:\n main_training_function = _ask_field(\n \"What is the name of the function in your script that should be launched in all parallel scripts? [main]: \",\n default=\"main\",\n )\n else:\n main_training_function = \"main\"\n\n num_processes = _ask_field(\n \"How many processes in total will you use? [1]: \",\n lambda x: int(x),\n default=1,\n error_message=\"Please enter an integer.\",\n )\n\n if distributed_type != DistributedType.TPU:\n fp16 = _ask_field(\n \"Do you wish to use FP16 (mixed precision)? [yes/NO]: \",\n _convert_yes_no_to_bool,\n default=False,\n error_message=\"Please enter yes or no.\",\n )\n else:\n fp16 = False\n\n return ClusterConfig(\n compute_environment=ComputeEnvironment.LOCAL_MACHINE,\n distributed_type=distributed_type,\n num_processes=num_processes,\n fp16=fp16,\n machine_rank=machine_rank,\n num_machines=num_machines,\n main_process_ip=main_process_ip,\n main_process_port=main_process_port,\n main_training_function=main_training_function,\n )\n", "path": "src/accelerate/commands/config/cluster.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n# Copyright 2021 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom accelerate.state import ComputeEnvironment, DistributedType\n\nfrom .config_args import ClusterConfig\nfrom .config_utils import _ask_field, _convert_distributed_mode, _convert_yes_no_to_bool\n\n\ndef get_cluster_input():\n distributed_type = _ask_field(\n \"Which type of machine are you using? ([0] No distributed training, [1] multi-GPU, [2] TPU): \",\n _convert_distributed_mode,\n error_message=\"Please enter 0, 1 or 2.\",\n )\n\n machine_rank = 0\n num_machines = 1\n main_process_ip = None\n main_process_port = None\n if distributed_type == DistributedType.MULTI_GPU:\n num_machines = _ask_field(\n \"How many different machines will you use (use more than 1 for multi-node training)? [1]: \",\n lambda x: int(x),\n default=1,\n )\n if num_machines > 1:\n machine_rank = _ask_field(\n \"What is the rank of this machine (from 0 to the number of machines - 1 )? [0]: \",\n lambda x: int(x),\n default=0,\n )\n main_process_ip = _ask_field(\n \"What is the IP address of the machine that will host the main process? \",\n )\n main_process_port = _ask_field(\n \"What is the port you will use to communicate with the main process? \",\n lambda x: int(x),\n )\n if distributed_type == DistributedType.TPU:\n main_training_function = _ask_field(\n \"What is the name of the function in your script that should be launched in all parallel scripts? [main]: \",\n default=\"main\",\n )\n else:\n main_training_function = \"main\"\n\n num_processes = _ask_field(\n \"How many processes in total will you use? [1]: \",\n lambda x: int(x),\n default=1,\n error_message=\"Please enter an integer.\",\n )\n\n if distributed_type != DistributedType.TPU:\n fp16 = _ask_field(\n \"Do you wish to use FP16 (mixed precision)? 
[yes/NO]: \",\n _convert_yes_no_to_bool,\n default=False,\n error_message=\"Please enter yes or no.\",\n )\n else:\n fp16 = False\n\n return ClusterConfig(\n compute_environment=ComputeEnvironment.LOCAL_MACHINE,\n distributed_type=distributed_type,\n num_processes=num_processes,\n fp16=fp16,\n machine_rank=machine_rank,\n num_machines=num_machines,\n main_process_ip=main_process_ip,\n main_process_port=main_process_port,\n main_training_function=main_training_function,\n )\n", "path": "src/accelerate/commands/config/cluster.py"}]} | 1,475 | 139 |
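Restated from the row's golden diff (src/accelerate/commands/config/cluster.py), the fix is a one-word assignment change: the port prompt's result must land in `main_process_port` rather than overwrite `main_process_ip`, which is what produced `main_process_ip: 2333` and `main_process_port: null` in the reported YAML.

```python
# Corrected prompt handling from the golden diff above; only the second
# assignment changed, from `main_process_ip` to `main_process_port`.
main_process_ip = _ask_field(
    "What is the IP address of the machine that will host the main process? ",
)
main_process_port = _ask_field(
    "What is the port you will use to communicate with the main process? ",
    lambda x: int(x),
)
```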
gh_patches_debug_11831 | rasdani/github-patches | git_diff | beeware__toga-1485 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
RuntimeError in toga-demo
**Describe the bug**
When you run the toga-demo app and click on "Action 2" and then either Yes or No at the dialog, you get ```RuntimeError: Can't check dialog result directly; use await or an on_result handler``` printed on the console and no follow-up dialog.
**To Reproduce**
Steps to reproduce the behavior:
1. Install toga-core, toga-gtk, toga and demo using `pip install -e`
2. Run toga-demo
3. Click on Action 2
4. Click on either Yes or No in popup dialog
5. Note it logs a RuntimeError in the terminal you ran it from.
**Expected behavior**
Should not throw an error and instead it should pop up an enthusiastic response dialog.
**Screenshots**
```
Traceback (most recent call last):
File "/home/nick/Work/beeware/toga/src/core/toga/handlers.py", line 66, in _handler
result = handler(interface, *args, **kwargs)
File "/home/nick/Work/beeware/toga/src/core/toga/handlers.py", line 66, in _handler
result = handler(interface, *args, **kwargs)
File "/home/nick/Work/beeware/toga/demo/toga_demo/app.py", line 91, in action2
if self.main_window.question_dialog('Toga', 'Is this cool or what?'):
File "/home/nick/Work/beeware/toga/src/gtk/toga_gtk/dialogs.py", line 16, in __bool__
raise RuntimeError("Can't check dialog result directly; use await or an on_result handler")
RuntimeError: Can't check dialog result directly; use await or an on_result handler
```
If applicable, add screenshots to help explain your problem.
**Environment:**
- Operating System: Ubuntu 20.04.4
- Python version: 3.8.10
- Software versions:
- Toga: 0.3.0.dev34 400b6935c4689bedb134324b38eb1286af5b5ec6
**Suggested Fix**
It works if you make the `action2` function async, and make it await the question_dialog.
I'll submit an extremely trivial PR for this soon :-)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `demo/toga_demo/app.py`
Content:
```
1 import toga
2 from toga.constants import COLUMN
3 from toga.style import Pack
4
5
6 class TogaDemo(toga.App):
7
8 def startup(self):
9 # Create the main window
10 self.main_window = toga.MainWindow(self.name)
11
12 left_container = toga.OptionContainer()
13
14 left_table = toga.Table(
15 headings=['Hello', 'World'],
16 data=[
17 ('root1', 'value1'),
18 ('root2', 'value2'),
19 ('root3', 'value3'),
20 ('root4', 'value4'),
21 ]
22 )
23
24 left_tree = toga.Tree(
25 headings=['Navigate'],
26 data={
27 ('root1',): {
28 },
29 ('root2',): {
30 ('root2.1',): None,
31 ('root2.2',): [
32 ('root2.2.1',),
33 ('root2.2.2',),
34 ('root2.2.3',),
35 ]
36 }
37 }
38 )
39
40 left_container.add('Table', left_table)
41 left_container.add('Tree', left_tree)
42
43 right_content = toga.Box(style=Pack(direction=COLUMN))
44 for b in range(0, 10):
45 right_content.add(
46 toga.Button(
47 'Hello world %s' % b,
48 on_press=self.button_handler,
49 style=Pack(padding=20)
50 )
51 )
52
53 right_container = toga.ScrollContainer()
54
55 right_container.content = right_content
56
57 split = toga.SplitContainer()
58
59 split.content = [left_container, right_container]
60
61 cmd1 = toga.Command(
62 self.action1,
63 'Action 1',
64 tooltip='Perform action 1',
65 icon='resources/brutus',
66 )
67 cmd2 = toga.Command(
68 self.action2,
69 'Action 2',
70 tooltip='Perform action 2',
71 icon=toga.Icon.TOGA_ICON
72 )
73
74 self.main_window.toolbar.add(cmd1, cmd2)
75
76 self.main_window.content = split
77
78 # Show the main window
79 self.main_window.show()
80
81 def button_handler(self, widget):
82 print("button press")
83 for i in range(0, 10):
84 yield 1
85 print('still running... (iteration %s)' % i)
86
87 def action1(self, widget):
88 self.main_window.info_dialog('Toga', 'THIS! IS! TOGA!!')
89
90 def action2(self, widget):
91 if self.main_window.question_dialog('Toga', 'Is this cool or what?'):
92 self.main_window.info_dialog('Happiness', 'I know, right! :-)')
93 else:
94 self.main_window.info_dialog('Shucks...', "Well aren't you a spoilsport... :-(")
95
96
97 def main():
98 return TogaDemo('Toga Demo', 'org.beeware.toga-demo')
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/demo/toga_demo/app.py b/demo/toga_demo/app.py
--- a/demo/toga_demo/app.py
+++ b/demo/toga_demo/app.py
@@ -87,8 +87,8 @@
def action1(self, widget):
self.main_window.info_dialog('Toga', 'THIS! IS! TOGA!!')
- def action2(self, widget):
- if self.main_window.question_dialog('Toga', 'Is this cool or what?'):
+ async def action2(self, widget):
+ if await self.main_window.question_dialog('Toga', 'Is this cool or what?'):
self.main_window.info_dialog('Happiness', 'I know, right! :-)')
else:
self.main_window.info_dialog('Shucks...', "Well aren't you a spoilsport... :-(")
| {"golden_diff": "diff --git a/demo/toga_demo/app.py b/demo/toga_demo/app.py\n--- a/demo/toga_demo/app.py\n+++ b/demo/toga_demo/app.py\n@@ -87,8 +87,8 @@\n def action1(self, widget):\n self.main_window.info_dialog('Toga', 'THIS! IS! TOGA!!')\n \n- def action2(self, widget):\n- if self.main_window.question_dialog('Toga', 'Is this cool or what?'):\n+ async def action2(self, widget):\n+ if await self.main_window.question_dialog('Toga', 'Is this cool or what?'):\n self.main_window.info_dialog('Happiness', 'I know, right! :-)')\n else:\n self.main_window.info_dialog('Shucks...', \"Well aren't you a spoilsport... :-(\")\n", "issue": "RuntimeError in toga-demo\n**Describe the bug**\r\nwhen you run the toga-demo app and click on \"Action 2\" and then either Yes or No at the dialog, you get ```RuntimeError: Can't check dialog result directly; use await or an on_result handler``` printed on the console and no followup dialog.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Install toga-core, toga-gtk, toga and demo using `pip install -e`\r\n2. Run toga-demo\r\n3. Click on Action 2\r\n4. Click on either Yes or No in popup dialog\r\n5. Note it logs a RuntimeError in the terminal you ran it from.\r\n\r\n**Expected behavior**\r\nShould not throw an error and instead it should pop up an enthusiastic response dialog.\r\n\r\n**Screenshots**\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/nick/Work/beeware/toga/src/core/toga/handlers.py\", line 66, in _handler\r\n result = handler(interface, *args, **kwargs)\r\n File \"/home/nick/Work/beeware/toga/src/core/toga/handlers.py\", line 66, in _handler\r\n result = handler(interface, *args, **kwargs)\r\n File \"/home/nick/Work/beeware/toga/demo/toga_demo/app.py\", line 91, in action2\r\n if self.main_window.question_dialog('Toga', 'Is this cool or what?'):\r\n File \"/home/nick/Work/beeware/toga/src/gtk/toga_gtk/dialogs.py\", line 16, in __bool__\r\n raise RuntimeError(\"Can't check dialog result directly; use await or an on_result handler\")\r\nRuntimeError: Can't check dialog result directly; use await or an on_result handler\r\n```\r\n\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n**Environment:**\r\n - Operating System: Ubuntu 20.04.4\r\n - Python version: 3.8.10\r\n - Software versions:\r\n - Toga: 0.3.0.dev34 400b6935c4689bedb134324b38eb1286af5b5ec6\r\n \r\n**Suggested Fix**\r\nIt works if you make the `action2` function async, and make it await the question_dialog.\r\nI'll submit an extremely trivial PR for this soon :-)\r\n\n", "before_files": [{"content": "import toga\nfrom toga.constants import COLUMN\nfrom toga.style import Pack\n\n\nclass TogaDemo(toga.App):\n\n def startup(self):\n # Create the main window\n self.main_window = toga.MainWindow(self.name)\n\n left_container = toga.OptionContainer()\n\n left_table = toga.Table(\n headings=['Hello', 'World'],\n data=[\n ('root1', 'value1'),\n ('root2', 'value2'),\n ('root3', 'value3'),\n ('root4', 'value4'),\n ]\n )\n\n left_tree = toga.Tree(\n headings=['Navigate'],\n data={\n ('root1',): {\n },\n ('root2',): {\n ('root2.1',): None,\n ('root2.2',): [\n ('root2.2.1',),\n ('root2.2.2',),\n ('root2.2.3',),\n ]\n }\n }\n )\n\n left_container.add('Table', left_table)\n left_container.add('Tree', left_tree)\n\n right_content = toga.Box(style=Pack(direction=COLUMN))\n for b in range(0, 10):\n right_content.add(\n toga.Button(\n 'Hello world %s' % b,\n on_press=self.button_handler,\n style=Pack(padding=20)\n )\n )\n\n right_container = 
toga.ScrollContainer()\n\n right_container.content = right_content\n\n split = toga.SplitContainer()\n\n split.content = [left_container, right_container]\n\n cmd1 = toga.Command(\n self.action1,\n 'Action 1',\n tooltip='Perform action 1',\n icon='resources/brutus',\n )\n cmd2 = toga.Command(\n self.action2,\n 'Action 2',\n tooltip='Perform action 2',\n icon=toga.Icon.TOGA_ICON\n )\n\n self.main_window.toolbar.add(cmd1, cmd2)\n\n self.main_window.content = split\n\n # Show the main window\n self.main_window.show()\n\n def button_handler(self, widget):\n print(\"button press\")\n for i in range(0, 10):\n yield 1\n print('still running... (iteration %s)' % i)\n\n def action1(self, widget):\n self.main_window.info_dialog('Toga', 'THIS! IS! TOGA!!')\n\n def action2(self, widget):\n if self.main_window.question_dialog('Toga', 'Is this cool or what?'):\n self.main_window.info_dialog('Happiness', 'I know, right! :-)')\n else:\n self.main_window.info_dialog('Shucks...', \"Well aren't you a spoilsport... :-(\")\n\n\ndef main():\n return TogaDemo('Toga Demo', 'org.beeware.toga-demo')\n", "path": "demo/toga_demo/app.py"}], "after_files": [{"content": "import toga\nfrom toga.constants import COLUMN\nfrom toga.style import Pack\n\n\nclass TogaDemo(toga.App):\n\n def startup(self):\n # Create the main window\n self.main_window = toga.MainWindow(self.name)\n\n left_container = toga.OptionContainer()\n\n left_table = toga.Table(\n headings=['Hello', 'World'],\n data=[\n ('root1', 'value1'),\n ('root2', 'value2'),\n ('root3', 'value3'),\n ('root4', 'value4'),\n ]\n )\n\n left_tree = toga.Tree(\n headings=['Navigate'],\n data={\n ('root1',): {\n },\n ('root2',): {\n ('root2.1',): None,\n ('root2.2',): [\n ('root2.2.1',),\n ('root2.2.2',),\n ('root2.2.3',),\n ]\n }\n }\n )\n\n left_container.add('Table', left_table)\n left_container.add('Tree', left_tree)\n\n right_content = toga.Box(style=Pack(direction=COLUMN))\n for b in range(0, 10):\n right_content.add(\n toga.Button(\n 'Hello world %s' % b,\n on_press=self.button_handler,\n style=Pack(padding=20)\n )\n )\n\n right_container = toga.ScrollContainer()\n\n right_container.content = right_content\n\n split = toga.SplitContainer()\n\n split.content = [left_container, right_container]\n\n cmd1 = toga.Command(\n self.action1,\n 'Action 1',\n tooltip='Perform action 1',\n icon='resources/brutus',\n )\n cmd2 = toga.Command(\n self.action2,\n 'Action 2',\n tooltip='Perform action 2',\n icon=toga.Icon.TOGA_ICON\n )\n\n self.main_window.toolbar.add(cmd1, cmd2)\n\n self.main_window.content = split\n\n # Show the main window\n self.main_window.show()\n\n def button_handler(self, widget):\n print(\"button press\")\n for i in range(0, 10):\n yield 1\n print('still running... (iteration %s)' % i)\n\n def action1(self, widget):\n self.main_window.info_dialog('Toga', 'THIS! IS! TOGA!!')\n\n async def action2(self, widget):\n if await self.main_window.question_dialog('Toga', 'Is this cool or what?'):\n self.main_window.info_dialog('Happiness', 'I know, right! :-)')\n else:\n self.main_window.info_dialog('Shucks...', \"Well aren't you a spoilsport... :-(\")\n\n\ndef main():\n return TogaDemo('Toga Demo', 'org.beeware.toga-demo')\n", "path": "demo/toga_demo/app.py"}]} | 1,589 | 180 |
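The toga fix recorded above boils down to making the command handler a coroutine so the dialog result can be awaited, as the runtime error message itself suggests. The sketch below is lifted from that row's golden diff (toga_demo/app.py), not written independently.

```python
# Patched handler from the golden diff above: `async` + `await` let the
# question dialog resolve before its result is inspected.
async def action2(self, widget):
    if await self.main_window.question_dialog('Toga', 'Is this cool or what?'):
        self.main_window.info_dialog('Happiness', 'I know, right! :-)')
    else:
        self.main_window.info_dialog('Shucks...', "Well aren't you a spoilsport... :-(")
```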
gh_patches_debug_12750 | rasdani/github-patches | git_diff | googleapis__google-auth-library-python-914 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Normalize nox / docs builds
## `docgen`
`noxfile.py` contains a `docgen` session which, when run, deletes / overwrites some files in the `docs/reference` tree:
```bash
$ nox -re docgen
nox > Running session docgen
nox > Re-using existing virtual environment at .nox/docgen.
nox > pip install flask freezegun mock oauth2client pyopenssl pytest pytest-cov pytest-localserver requests urllib3 cryptography responses grpcio
nox > pip install sphinx
nox > pip install .
nox > rm -r docs/reference
nox > Warning: rm is not installed into the virtualenv, it is located at /bin/rm. This might cause issues! Pass external=True into run() to silence this message.
nox > sphinx-apidoc --output-dir docs/reference --separate --module-first google
Creating file docs/reference/google.rst.
Creating file docs/reference/google.auth.rst.
Creating file docs/reference/google.auth.app_engine.rst.
Creating file docs/reference/google.auth.credentials.rst.
Creating file docs/reference/google.auth.environment_vars.rst.
Creating file docs/reference/google.auth.exceptions.rst.
Creating file docs/reference/google.auth.iam.rst.
Creating file docs/reference/google.auth.impersonated_credentials.rst.
Creating file docs/reference/google.auth.jwt.rst.
Creating file docs/reference/google.auth.compute_engine.rst.
Creating file docs/reference/google.auth.compute_engine.credentials.rst.
Creating file docs/reference/google.auth.crypt.rst.
Creating file docs/reference/google.auth.crypt.base.rst.
Creating file docs/reference/google.auth.crypt.es256.rst.
Creating file docs/reference/google.auth.crypt.rsa.rst.
Creating file docs/reference/google.auth.transport.rst.
Creating file docs/reference/google.auth.transport.grpc.rst.
Creating file docs/reference/google.auth.transport.mtls.rst.
Creating file docs/reference/google.auth.transport.requests.rst.
Creating file docs/reference/google.auth.transport.urllib3.rst.
Creating file docs/reference/google.oauth2.rst.
Creating file docs/reference/google.oauth2.credentials.rst.
Creating file docs/reference/google.oauth2.id_token.rst.
Creating file docs/reference/google.oauth2.service_account.rst.
Creating file docs/reference/modules.rst.
nox > Session docgen was successful.
[/home/tseaver/projects/agendaless/Google/src/google-auth]
$ git stat
On branch master
Your branch is up to date with 'origin/master'.
Changes not staged for commit:
(use "git add/rm <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
deleted: docs/reference/google.auth.credentials_async.rst
deleted: docs/reference/google.auth.jwt_async.rst
modified: docs/reference/google.auth.rst
deleted: docs/reference/google.auth.transport.aiohttp_requests.rst
modified: docs/reference/google.auth.transport.rst
deleted: docs/reference/google.oauth2.credentials_async.rst
modified: docs/reference/google.oauth2.rst
deleted: docs/reference/google.oauth2.service_account_async.rst
```
I recommend that we delete this section, as a fossil of an approach we no longer mean to support.
## `docs` session
The `docs` session triggers the build via `make` (and the `docs/Makefile`), rather than running `sphinx-build` directly as in other projects. This choice is unique across all our projects, and means that we don't enforce
the same specs here (e.g., Sphinx warnings are not treated as errors).
I recommend we delete the `docs/Makefile` and normalize the `docs` session, fixing any Sphinx warnings found.
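
(Illustration only, not from the original issue or its eventual patch: one possible shape for such a normalized session, assuming the usual `sphinx-build` flags, that `os` and `shutil` are already imported in `noxfile.py`, and a typical doc dependency list, is sketched below.)

```python
@nox.session(python="3.8")
def docs(session):
    """Build the docs, treating Sphinx warnings as errors."""
    session.install("-e", ".")
    session.install("sphinx", "alabaster", "recommonmark")  # assumed doc deps
    shutil.rmtree(os.path.join("docs", "_build"), ignore_errors=True)
    session.run(
        "sphinx-build",
        "-W",  # warnings as errors
        "-T",  # full tracebacks on error
        "-b", "html",
        "-d", os.path.join("docs", "_build", "doctrees", ""),
        os.path.join("docs", ""),
        os.path.join("docs", "_build", "html", ""),
    )
```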
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `noxfile.py`
Content:
```
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import pathlib
17 import shutil
18
19 import nox
20
21 CURRENT_DIRECTORY = pathlib.Path(__file__).parent.absolute()
22
23 BLACK_VERSION = "black==19.3b0"
24 BLACK_PATHS = [
25 "google",
26 "tests",
27 "tests_async",
28 "noxfile.py",
29 "setup.py",
30 "docs/conf.py",
31 ]
32
33
34 @nox.session(python="3.7")
35 def lint(session):
36 session.install("flake8", "flake8-import-order", "docutils", BLACK_VERSION)
37 session.install("-e", ".")
38 session.run("black", "--check", *BLACK_PATHS)
39 session.run(
40 "flake8",
41 "--import-order-style=google",
42 "--application-import-names=google,tests,system_tests",
43 "google",
44 "tests",
45 "tests_async",
46 )
47 session.run(
48 "python", "setup.py", "check", "--metadata", "--restructuredtext", "--strict"
49 )
50
51
52 @nox.session(python="3.8")
53 def blacken(session):
54 """Run black.
55 Format code to uniform standard.
56 The Python version should be consistent with what is
57 supplied in the Python Owlbot postprocessor.
58
59 https://github.com/googleapis/synthtool/blob/master/docker/owlbot/python/Dockerfile
60 """
61 session.install(BLACK_VERSION)
62 session.run("black", *BLACK_PATHS)
63
64
65 @nox.session(python=["3.6", "3.7", "3.8", "3.9", "3.10"])
66 def unit(session):
67 constraints_path = str(
68 CURRENT_DIRECTORY / "testing" / f"constraints-{session.python}.txt"
69 )
70 session.install("-r", "testing/requirements.txt", "-c", constraints_path)
71 session.install("-e", ".", "-c", constraints_path)
72 session.run(
73 "pytest",
74 f"--junitxml=unit_{session.python}_sponge_log.xml",
75 "--cov=google.auth",
76 "--cov=google.oauth2",
77 "--cov=tests",
78 "--cov-report=term-missing",
79 "tests",
80 "tests_async",
81 )
82
83
84 @nox.session(python=["2.7"])
85 def unit_prev_versions(session):
86 constraints_path = str(
87 CURRENT_DIRECTORY / "testing" / f"constraints-{session.python}.txt"
88 )
89 session.install("-r", "testing/requirements.txt", "-c", constraints_path)
90 session.install("-e", ".", "-c", constraints_path)
91 session.run(
92 "pytest",
93 f"--junitxml=unit_{session.python}_sponge_log.xml",
94 "--cov=google.auth",
95 "--cov=google.oauth2",
96 "--cov=tests",
97 "tests",
98 )
99
100
101 @nox.session(python="3.7")
102 def cover(session):
103 session.install("-r", "testing/requirements.txt")
104 session.install("-e", ".")
105 session.run(
106 "pytest",
107 "--cov=google.auth",
108 "--cov=google.oauth2",
109 "--cov=tests",
110 "--cov=tests_async",
111 "--cov-report=term-missing",
112 "tests",
113 "tests_async",
114 )
115 session.run("coverage", "report", "--show-missing", "--fail-under=100")
116
117
118 @nox.session(python="3.7")
119 def docgen(session):
120 session.env["SPHINX_APIDOC_OPTIONS"] = "members,inherited-members,show-inheritance"
121 session.install("-r", "testing/requirements.txt")
122 session.install("sphinx")
123 session.install("-e", ".")
124 session.run("rm", "-r", "docs/reference")
125 session.run(
126 "sphinx-apidoc",
127 "--output-dir",
128 "docs/reference",
129 "--separate",
130 "--module-first",
131 "google",
132 )
133
134
135 @nox.session(python="3.8")
136 def docs(session):
137 """Build the docs for this library."""
138
139 session.install("-e", ".[aiohttp]")
140 session.install("sphinx", "alabaster", "recommonmark", "sphinx-docstring-typing")
141
142 shutil.rmtree(os.path.join("docs", "_build"), ignore_errors=True)
143 session.run(
144 "sphinx-build",
145 "-T", # show full traceback on exception
146 "-W", # warnings as errors
147 "-N", # no colors
148 "-b",
149 "html",
150 "-d",
151 os.path.join("docs", "_build", "doctrees", ""),
152 os.path.join("docs", ""),
153 os.path.join("docs", "_build", "html", ""),
154 )
155
156
157 @nox.session(python="pypy")
158 def pypy(session):
159 session.install("-r", "test/requirements.txt")
160 session.install("-e", ".")
161 session.run(
162 "pytest",
163 f"--junitxml=unit_{session.python}_sponge_log.xml",
164 "--cov=google.auth",
165 "--cov=google.oauth2",
166 "--cov=tests",
167 "tests",
168 "tests_async",
169 )
170
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -115,23 +115,6 @@
session.run("coverage", "report", "--show-missing", "--fail-under=100")
-@nox.session(python="3.7")
-def docgen(session):
- session.env["SPHINX_APIDOC_OPTIONS"] = "members,inherited-members,show-inheritance"
- session.install("-r", "testing/requirements.txt")
- session.install("sphinx")
- session.install("-e", ".")
- session.run("rm", "-r", "docs/reference")
- session.run(
- "sphinx-apidoc",
- "--output-dir",
- "docs/reference",
- "--separate",
- "--module-first",
- "google",
- )
-
-
@nox.session(python="3.8")
def docs(session):
"""Build the docs for this library."""
| {"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -115,23 +115,6 @@\n session.run(\"coverage\", \"report\", \"--show-missing\", \"--fail-under=100\")\n \n \[email protected](python=\"3.7\")\n-def docgen(session):\n- session.env[\"SPHINX_APIDOC_OPTIONS\"] = \"members,inherited-members,show-inheritance\"\n- session.install(\"-r\", \"testing/requirements.txt\")\n- session.install(\"sphinx\")\n- session.install(\"-e\", \".\")\n- session.run(\"rm\", \"-r\", \"docs/reference\")\n- session.run(\n- \"sphinx-apidoc\",\n- \"--output-dir\",\n- \"docs/reference\",\n- \"--separate\",\n- \"--module-first\",\n- \"google\",\n- )\n-\n-\n @nox.session(python=\"3.8\")\n def docs(session):\n \"\"\"Build the docs for this library.\"\"\"\n", "issue": "Normalize nox / docs builds\n## `docgen`\r\n\r\n` `noxfile.py` contains a `docgen` session which, when run, delete's / overwrites some files in the `docs/reference` tree:\r\n\r\n```bash\r\n$ nox -re docgen\r\nnox > Running session docgen\r\nnox > Re-using existing virtual environment at .nox/docgen.\r\nnox > pip install flask freezegun mock oauth2client pyopenssl pytest pytest-cov pytest-localserver requests urllib3 cryptography responses grpcio\r\nnox > pip install sphinx\r\nnox > pip install .\r\nnox > rm -r docs/reference\r\nnox > Warning: rm is not installed into the virtualenv, it is located at /bin/rm. This might cause issues! Pass external=True into run() to silence this message.\r\nnox > sphinx-apidoc --output-dir docs/reference --separate --module-first google\r\nCreating file docs/reference/google.rst.\r\nCreating file docs/reference/google.auth.rst.\r\nCreating file docs/reference/google.auth.app_engine.rst.\r\nCreating file docs/reference/google.auth.credentials.rst.\r\nCreating file docs/reference/google.auth.environment_vars.rst.\r\nCreating file docs/reference/google.auth.exceptions.rst.\r\nCreating file docs/reference/google.auth.iam.rst.\r\nCreating file docs/reference/google.auth.impersonated_credentials.rst.\r\nCreating file docs/reference/google.auth.jwt.rst.\r\nCreating file docs/reference/google.auth.compute_engine.rst.\r\nCreating file docs/reference/google.auth.compute_engine.credentials.rst.\r\nCreating file docs/reference/google.auth.crypt.rst.\r\nCreating file docs/reference/google.auth.crypt.base.rst.\r\nCreating file docs/reference/google.auth.crypt.es256.rst.\r\nCreating file docs/reference/google.auth.crypt.rsa.rst.\r\nCreating file docs/reference/google.auth.transport.rst.\r\nCreating file docs/reference/google.auth.transport.grpc.rst.\r\nCreating file docs/reference/google.auth.transport.mtls.rst.\r\nCreating file docs/reference/google.auth.transport.requests.rst.\r\nCreating file docs/reference/google.auth.transport.urllib3.rst.\r\nCreating file docs/reference/google.oauth2.rst.\r\nCreating file docs/reference/google.oauth2.credentials.rst.\r\nCreating file docs/reference/google.oauth2.id_token.rst.\r\nCreating file docs/reference/google.oauth2.service_account.rst.\r\nCreating file docs/reference/modules.rst.\r\nnox > Session docgen was successful.\r\n[/home/tseaver/projects/agendaless/Google/src/google-auth] \r\n$ git stat\r\nOn branch master\r\nYour branch is up to date with 'origin/master'.\r\n\r\nChanges not staged for commit:\r\n (use \"git add/rm <file>...\" to update what will be committed)\r\n (use \"git checkout -- <file>...\" to discard changes in working directory)\r\n\r\n\tdeleted: docs/reference/google.auth.credentials_async.rst\r\n\tdeleted: 
docs/reference/google.auth.jwt_async.rst\r\n\tmodified: docs/reference/google.auth.rst\r\n\tdeleted: docs/reference/google.auth.transport.aiohttp_requests.rst\r\n\tmodified: docs/reference/google.auth.transport.rst\r\n\tdeleted: docs/reference/google.oauth2.credentials_async.rst\r\n\tmodified: docs/reference/google.oauth2.rst\r\n\tdeleted: docs/reference/google.oauth2.service_account_async.rst\r\n```\r\n\r\nI recommend that we delete this section, as a fossil of an approach we no longer mean to support.\r\n\r\n## `docs` session\r\n\r\nThe `docs` session triggers the build via `make` (and the `docs/Makefile`), rather than running `sphinx-build` directly as in other projects. This choice is unique across all our projects, and means that we don't enforce\r\nthe same specs here (e.g., Sphinx warnings are not treated as errors).\r\n\r\nI recommend we delete the `docs/Makefile` and normalize the `docs` session, fixing any Sphinx warnings found.\n", "before_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport pathlib\nimport shutil\n\nimport nox\n\nCURRENT_DIRECTORY = pathlib.Path(__file__).parent.absolute()\n\nBLACK_VERSION = \"black==19.3b0\"\nBLACK_PATHS = [\n \"google\",\n \"tests\",\n \"tests_async\",\n \"noxfile.py\",\n \"setup.py\",\n \"docs/conf.py\",\n]\n\n\[email protected](python=\"3.7\")\ndef lint(session):\n session.install(\"flake8\", \"flake8-import-order\", \"docutils\", BLACK_VERSION)\n session.install(\"-e\", \".\")\n session.run(\"black\", \"--check\", *BLACK_PATHS)\n session.run(\n \"flake8\",\n \"--import-order-style=google\",\n \"--application-import-names=google,tests,system_tests\",\n \"google\",\n \"tests\",\n \"tests_async\",\n )\n session.run(\n \"python\", \"setup.py\", \"check\", \"--metadata\", \"--restructuredtext\", \"--strict\"\n )\n\n\[email protected](python=\"3.8\")\ndef blacken(session):\n \"\"\"Run black.\n Format code to uniform standard.\n The Python version should be consistent with what is\n supplied in the Python Owlbot postprocessor.\n\n https://github.com/googleapis/synthtool/blob/master/docker/owlbot/python/Dockerfile\n \"\"\"\n session.install(BLACK_VERSION)\n session.run(\"black\", *BLACK_PATHS)\n\n\[email protected](python=[\"3.6\", \"3.7\", \"3.8\", \"3.9\", \"3.10\"])\ndef unit(session):\n constraints_path = str(\n CURRENT_DIRECTORY / \"testing\" / f\"constraints-{session.python}.txt\"\n )\n session.install(\"-r\", \"testing/requirements.txt\", \"-c\", constraints_path)\n session.install(\"-e\", \".\", \"-c\", constraints_path)\n session.run(\n \"pytest\",\n f\"--junitxml=unit_{session.python}_sponge_log.xml\",\n \"--cov=google.auth\",\n \"--cov=google.oauth2\",\n \"--cov=tests\",\n \"--cov-report=term-missing\",\n \"tests\",\n \"tests_async\",\n )\n\n\[email protected](python=[\"2.7\"])\ndef unit_prev_versions(session):\n constraints_path = str(\n CURRENT_DIRECTORY / \"testing\" / f\"constraints-{session.python}.txt\"\n )\n session.install(\"-r\", \"testing/requirements.txt\", 
\"-c\", constraints_path)\n session.install(\"-e\", \".\", \"-c\", constraints_path)\n session.run(\n \"pytest\",\n f\"--junitxml=unit_{session.python}_sponge_log.xml\",\n \"--cov=google.auth\",\n \"--cov=google.oauth2\",\n \"--cov=tests\",\n \"tests\",\n )\n\n\[email protected](python=\"3.7\")\ndef cover(session):\n session.install(\"-r\", \"testing/requirements.txt\")\n session.install(\"-e\", \".\")\n session.run(\n \"pytest\",\n \"--cov=google.auth\",\n \"--cov=google.oauth2\",\n \"--cov=tests\",\n \"--cov=tests_async\",\n \"--cov-report=term-missing\",\n \"tests\",\n \"tests_async\",\n )\n session.run(\"coverage\", \"report\", \"--show-missing\", \"--fail-under=100\")\n\n\[email protected](python=\"3.7\")\ndef docgen(session):\n session.env[\"SPHINX_APIDOC_OPTIONS\"] = \"members,inherited-members,show-inheritance\"\n session.install(\"-r\", \"testing/requirements.txt\")\n session.install(\"sphinx\")\n session.install(\"-e\", \".\")\n session.run(\"rm\", \"-r\", \"docs/reference\")\n session.run(\n \"sphinx-apidoc\",\n \"--output-dir\",\n \"docs/reference\",\n \"--separate\",\n \"--module-first\",\n \"google\",\n )\n\n\[email protected](python=\"3.8\")\ndef docs(session):\n \"\"\"Build the docs for this library.\"\"\"\n\n session.install(\"-e\", \".[aiohttp]\")\n session.install(\"sphinx\", \"alabaster\", \"recommonmark\", \"sphinx-docstring-typing\")\n\n shutil.rmtree(os.path.join(\"docs\", \"_build\"), ignore_errors=True)\n session.run(\n \"sphinx-build\",\n \"-T\", # show full traceback on exception\n \"-W\", # warnings as errors\n \"-N\", # no colors\n \"-b\",\n \"html\",\n \"-d\",\n os.path.join(\"docs\", \"_build\", \"doctrees\", \"\"),\n os.path.join(\"docs\", \"\"),\n os.path.join(\"docs\", \"_build\", \"html\", \"\"),\n )\n\n\[email protected](python=\"pypy\")\ndef pypy(session):\n session.install(\"-r\", \"test/requirements.txt\")\n session.install(\"-e\", \".\")\n session.run(\n \"pytest\",\n f\"--junitxml=unit_{session.python}_sponge_log.xml\",\n \"--cov=google.auth\",\n \"--cov=google.oauth2\",\n \"--cov=tests\",\n \"tests\",\n \"tests_async\",\n )\n", "path": "noxfile.py"}], "after_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport pathlib\nimport shutil\n\nimport nox\n\nCURRENT_DIRECTORY = pathlib.Path(__file__).parent.absolute()\n\nBLACK_VERSION = \"black==19.3b0\"\nBLACK_PATHS = [\n \"google\",\n \"tests\",\n \"tests_async\",\n \"noxfile.py\",\n \"setup.py\",\n \"docs/conf.py\",\n]\n\n\[email protected](python=\"3.7\")\ndef lint(session):\n session.install(\"flake8\", \"flake8-import-order\", \"docutils\", BLACK_VERSION)\n session.install(\"-e\", \".\")\n session.run(\"black\", \"--check\", *BLACK_PATHS)\n session.run(\n \"flake8\",\n \"--import-order-style=google\",\n \"--application-import-names=google,tests,system_tests\",\n \"google\",\n \"tests\",\n \"tests_async\",\n )\n session.run(\n \"python\", \"setup.py\", \"check\", \"--metadata\", \"--restructuredtext\", \"--strict\"\n )\n\n\[email 
protected](python=\"3.8\")\ndef blacken(session):\n \"\"\"Run black.\n Format code to uniform standard.\n The Python version should be consistent with what is\n supplied in the Python Owlbot postprocessor.\n\n https://github.com/googleapis/synthtool/blob/master/docker/owlbot/python/Dockerfile\n \"\"\"\n session.install(BLACK_VERSION)\n session.run(\"black\", *BLACK_PATHS)\n\n\[email protected](python=[\"3.6\", \"3.7\", \"3.8\", \"3.9\", \"3.10\"])\ndef unit(session):\n constraints_path = str(\n CURRENT_DIRECTORY / \"testing\" / f\"constraints-{session.python}.txt\"\n )\n session.install(\"-r\", \"testing/requirements.txt\", \"-c\", constraints_path)\n session.install(\"-e\", \".\", \"-c\", constraints_path)\n session.run(\n \"pytest\",\n f\"--junitxml=unit_{session.python}_sponge_log.xml\",\n \"--cov=google.auth\",\n \"--cov=google.oauth2\",\n \"--cov=tests\",\n \"--cov-report=term-missing\",\n \"tests\",\n \"tests_async\",\n )\n\n\[email protected](python=[\"2.7\"])\ndef unit_prev_versions(session):\n constraints_path = str(\n CURRENT_DIRECTORY / \"testing\" / f\"constraints-{session.python}.txt\"\n )\n session.install(\"-r\", \"testing/requirements.txt\", \"-c\", constraints_path)\n session.install(\"-e\", \".\", \"-c\", constraints_path)\n session.run(\n \"pytest\",\n f\"--junitxml=unit_{session.python}_sponge_log.xml\",\n \"--cov=google.auth\",\n \"--cov=google.oauth2\",\n \"--cov=tests\",\n \"tests\",\n )\n\n\[email protected](python=\"3.7\")\ndef cover(session):\n session.install(\"-r\", \"testing/requirements.txt\")\n session.install(\"-e\", \".\")\n session.run(\n \"pytest\",\n \"--cov=google.auth\",\n \"--cov=google.oauth2\",\n \"--cov=tests\",\n \"--cov=tests_async\",\n \"--cov-report=term-missing\",\n \"tests\",\n \"tests_async\",\n )\n session.run(\"coverage\", \"report\", \"--show-missing\", \"--fail-under=100\")\n\n\[email protected](python=\"3.8\")\ndef docs(session):\n \"\"\"Build the docs for this library.\"\"\"\n\n session.install(\"-e\", \".[aiohttp]\")\n session.install(\"sphinx\", \"alabaster\", \"recommonmark\", \"sphinx-docstring-typing\")\n\n shutil.rmtree(os.path.join(\"docs\", \"_build\"), ignore_errors=True)\n session.run(\n \"sphinx-build\",\n \"-T\", # show full traceback on exception\n \"-W\", # warnings as errors\n \"-N\", # no colors\n \"-b\",\n \"html\",\n \"-d\",\n os.path.join(\"docs\", \"_build\", \"doctrees\", \"\"),\n os.path.join(\"docs\", \"\"),\n os.path.join(\"docs\", \"_build\", \"html\", \"\"),\n )\n\n\[email protected](python=\"pypy\")\ndef pypy(session):\n session.install(\"-r\", \"test/requirements.txt\")\n session.install(\"-e\", \".\")\n session.run(\n \"pytest\",\n f\"--junitxml=unit_{session.python}_sponge_log.xml\",\n \"--cov=google.auth\",\n \"--cov=google.oauth2\",\n \"--cov=tests\",\n \"tests\",\n \"tests_async\",\n )\n", "path": "noxfile.py"}]} | 2,654 | 222 |
gh_patches_debug_27623 | rasdani/github-patches | git_diff | svthalia__concrexit-1788 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Still provide payment url for orders if order is free but age restricted
### Is your feature request related to a problem? Please describe.
Free orders don't need payment, so the API doesn't provide a payment url for free orders. However, age restricted orders still need member identification. For that, the API should still provide the URL that sets the payer.
### Describe the solution you'd like
Provide the payment url if the order is free but age restricted. In such cases, the sales order payment view should display a success message such as 'you correctly identified yourself'.
### Motivation
Scanning a QR code for identification is nice
### Describe alternatives you've considered
We could also provide an identification url, which could make more sense as the url isn't exactly used for payments, but that would also make our API unnecessarily more complex as we can just re-use the existing field and view for it.
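For illustration only, the adjusted `OrderPaymentView.get()` might look roughly like the sketch below; it builds on the existing `website/sales/views.py` shown later (which provides the imports and helpers used here), and the exact message wording is an assumption:

```python
# Sketch of the adjusted OrderPaymentView.get(); imports and the surrounding
# class are those of the existing website/sales/views.py shown below.
def get(self, request, *args, **kwargs):
    order = get_object_or_404(Order, pk=kwargs["pk"])
    if order.payment:
        messages.warning(request, _("This order was already paid for."))
        return redirect("index")
    if order.payer is not None and order.payer != request.member:
        messages.warning(request, _("This order is not yours."))
        return redirect("index")

    order.payer = request.member
    order.save()

    if order.age_restricted and not services.is_adult(request.member):
        messages.error(
            request,
            _(
                "The age restrictions on this order do not allow you to pay for this order."
            ),
        )
        return redirect("index")

    if order.age_restricted and order.total_amount == 0:
        # Free but age-restricted: the scan only served as identification.
        messages.success(
            request, _("You have successfully identified yourself for this order.")
        )
        return redirect("index")

    if order.total_amount == 0:
        messages.info(request, _("This order doesn't require payment."))
        return redirect("index")

    return render(request, "sales/order_payment.html", {"order": order})
```

For the URL itself, the `Order.payment_url` property (in `website/sales/models/order.py` below) would also need to stop requiring a non-zero total so that free, age-restricted orders still expose a URL.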
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/sales/views.py`
Content:
```
1 from django.contrib import messages
2 from django.contrib.auth.decorators import login_required
3 from django.shortcuts import get_object_or_404, redirect, render
4 from django.utils.decorators import method_decorator
5 from django.utils.translation import gettext_lazy as _
6 from django.views import View
7
8 from sales import services
9 from sales.models.order import Order
10
11
12 @method_decorator(login_required, name="dispatch")
13 class OrderPaymentView(View):
14 def get(self, request, *args, **kwargs):
15 order = get_object_or_404(Order, pk=kwargs["pk"])
16 if order.payment:
17 messages.warning(request, _("This order was already paid for."))
18 return redirect("index")
19 if order.payer is not None and order.payer != request.member:
20 messages.warning(request, _("This order is not yours."))
21 return redirect("index")
22
23 order.payer = request.member
24 order.save()
25
26 if order.total_amount == 0:
27 messages.warning(request, _("This order doesn't require payment."))
28 return redirect("index")
29
30 if order.age_restricted and not services.is_adult(request.member):
31 messages.error(
32 request,
33 _(
34 "The age restrictions on this order do not allow you to pay for this order."
35 ),
36 )
37 return redirect("index")
38 return render(request, "sales/order_payment.html", {"order": order})
39
```
Path: `website/sales/models/order.py`
Content:
```
1 from decimal import Decimal
2
3 from django.conf import settings
4 from django.core.exceptions import ValidationError
5 from django.core.validators import MinValueValidator
6 from django.db import models
7 from django.db.models import (
8 Sum,
9 Value,
10 F,
11 DecimalField,
12 Q,
13 IntegerField,
14 BooleanField,
15 Count,
16 )
17 from django.db.models.functions import Coalesce
18 from django.urls import reverse
19 from django.utils import timezone
20 from django.utils.translation import gettext_lazy as _
21 from queryable_properties.managers import QueryablePropertiesManager
22 from queryable_properties.properties import AnnotationProperty
23
24 from members.models import uuid, Member
25 from payments.models import Payment
26 from sales.models.product import ProductListItem
27 from sales.models.shift import Shift
28
29
30 def default_order_shift():
31 return Shift.objects.filter(active=True).first()
32
33
34 class Order(models.Model):
35
36 objects = QueryablePropertiesManager()
37
38 class Meta:
39 verbose_name = _("order")
40 verbose_name_plural = _("orders")
41 permissions = [
42 ("custom_prices", _("Can use custom prices and discounts in orders")),
43 ]
44 ordering = ["created_at"]
45
46 id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
47
48 created_at = models.DateTimeField(
49 verbose_name=_("created at"), default=timezone.now
50 )
51
52 shift = models.ForeignKey(
53 Shift,
54 verbose_name=_("shift"),
55 related_name="orders",
56 default=default_order_shift,
57 null=False,
58 blank=False,
59 on_delete=models.PROTECT,
60 )
61
62 items = models.ManyToManyField(
63 ProductListItem, through="OrderItem", verbose_name=_("items"),
64 )
65
66 payment = models.OneToOneField(
67 Payment,
68 verbose_name=_("payment"),
69 related_name="sales_order",
70 on_delete=models.SET_NULL,
71 blank=True,
72 null=True,
73 )
74
75 discount = models.DecimalField(
76 verbose_name=_("discount"),
77 max_digits=6,
78 decimal_places=2,
79 null=True,
80 blank=True,
81 validators=[MinValueValidator(Decimal("0.00"))],
82 )
83
84 payer = models.ForeignKey(
85 Member,
86 models.SET_NULL,
87 verbose_name=_("payer"),
88 related_name="sales_order",
89 blank=True,
90 null=True,
91 )
92
93 age_restricted = AnnotationProperty(
94 Count(
95 "order_items__pk",
96 filter=Q(order_items__product__product__age_restricted=True),
97 output_field=BooleanField(),
98 )
99 )
100
101 subtotal = AnnotationProperty(
102 Coalesce(Sum("order_items__total"), Value(0.00), output_field=DecimalField())
103 )
104
105 total_amount = AnnotationProperty(
106 Coalesce(Sum("order_items__total"), Value(0.00), output_field=DecimalField())
107 - Coalesce(F("discount"), Value(0.00), output_field=DecimalField())
108 )
109
110 num_items = AnnotationProperty(
111 Coalesce(Sum("order_items__amount"), Value(0), output_field=IntegerField())
112 )
113
114 def save(
115 self, force_insert=False, force_update=False, using=None, update_fields=None
116 ):
117 if self.shift.locked:
118 raise ValueError("The shift this order belongs to is locked.")
119 if self.shift.start > timezone.now():
120 raise ValueError("The shift hasn't started yet.")
121 if (
122 self.payment
123 and self.subtotal - Decimal(self.discount or 0) != self.payment.amount
124 ):
125 # We cannot use self.total_amount as it is a requires a database query and hence will not use any updated values
126 raise ValueError(
127 "The payment amount does not match the order total amount."
128 )
129 if self.payment and not self.payer:
130 self.payer = self.payment.paid_by
131
132 return super(Order, self).save(force_insert, force_update, using, update_fields)
133
134 def clean(self):
135 super().clean()
136 errors = {}
137
138 if self.shift.start > timezone.now():
139 errors.update({"shift": _("The shift hasn't started yet.")})
140
141 if self.shift.locked:
142 errors.update({"shift": _("The shift this order belongs to is locked.")})
143
144 if self.discount and self.discount > self.total_amount:
145 errors.update(
146 {"discount": _("Discount cannot be higher than total amount.")}
147 )
148
149 if errors:
150 raise ValidationError(errors)
151
152 @property
153 def order_description(self):
154 return ", ".join(str(x) for x in self.order_items.all())
155
156 @property
157 def accept_payment_from_any_user(self):
158 return True
159
160 @property
161 def payment_url(self):
162 return (
163 settings.BASE_URL + reverse("sales:order-pay", kwargs={"pk": self.pk})
164 if not self.payment
165 and (self.total_amount is not None and self.total_amount != 0)
166 else None
167 )
168
169 def __str__(self):
170 return f"Order {self.id} ({self.shift})"
171
172
173 class OrderItem(models.Model):
174 class Meta:
175 verbose_name = "item"
176 verbose_name_plural = "items"
177 ordering = ["pk"]
178 indexes = [
179 models.Index(fields=["order"]),
180 ]
181
182 product = models.ForeignKey(
183 ProductListItem,
184 verbose_name=_("product"),
185 null=False,
186 blank=False,
187 on_delete=models.PROTECT,
188 )
189 order = models.ForeignKey(
190 Order,
191 verbose_name=_("order"),
192 related_name="order_items",
193 null=False,
194 blank=False,
195 on_delete=models.CASCADE,
196 )
197 total = models.DecimalField(
198 verbose_name=_("total"),
199 max_digits=6,
200 decimal_places=2,
201 null=False,
202 blank=True,
203 validators=[MinValueValidator(Decimal("0.00"))],
204 help_text="Only when overriding the default",
205 )
206 amount = models.PositiveSmallIntegerField(
207 verbose_name=_("amount"), null=False, blank=False
208 )
209
210 def save(
211 self, force_insert=False, force_update=False, using=None, update_fields=None
212 ):
213 if self.order.shift.locked:
214 raise ValueError("The shift this order belongs to is locked.")
215 if self.order.payment:
216 raise ValueError("This order has already been paid for.")
217
218 if not self.total:
219 self.total = self.product.price * self.amount
220
221 return super(OrderItem, self).save(
222 force_insert, force_update, using, update_fields
223 )
224
225 def clean(self):
226 super().clean()
227 errors = {}
228
229 if self.order.shift.locked:
230 errors.update({"order": _("The shift is locked.")})
231
232 if self.product not in self.order.shift.product_list.product_items.all():
233 errors.update({"product": _("This product is not available.")})
234
235 if errors:
236 raise ValidationError(errors)
237
238 def __str__(self):
239 return f"{self.amount}x {self.product.product.name}"
240
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/sales/models/order.py b/website/sales/models/order.py
--- a/website/sales/models/order.py
+++ b/website/sales/models/order.py
@@ -162,7 +162,6 @@
return (
settings.BASE_URL + reverse("sales:order-pay", kwargs={"pk": self.pk})
if not self.payment
- and (self.total_amount is not None and self.total_amount != 0)
else None
)
diff --git a/website/sales/views.py b/website/sales/views.py
--- a/website/sales/views.py
+++ b/website/sales/views.py
@@ -23,10 +23,6 @@
order.payer = request.member
order.save()
- if order.total_amount == 0:
- messages.warning(request, _("This order doesn't require payment."))
- return redirect("index")
-
if order.age_restricted and not services.is_adult(request.member):
messages.error(
request,
@@ -35,4 +31,19 @@
),
)
return redirect("index")
+
+ if (
+ order.age_restricted
+ and services.is_adult(request.member)
+ and order.total_amount == 0
+ ):
+ messages.success(
+ request, _("You have successfully identified yourself for this order.")
+ )
+ return redirect("index")
+
+ if order.total_amount == 0:
+ messages.info(request, _("This order doesn't require payment."))
+ return redirect("index")
+
return render(request, "sales/order_payment.html", {"order": order})
| {"golden_diff": "diff --git a/website/sales/models/order.py b/website/sales/models/order.py\n--- a/website/sales/models/order.py\n+++ b/website/sales/models/order.py\n@@ -162,7 +162,6 @@\n return (\n settings.BASE_URL + reverse(\"sales:order-pay\", kwargs={\"pk\": self.pk})\n if not self.payment\n- and (self.total_amount is not None and self.total_amount != 0)\n else None\n )\n \ndiff --git a/website/sales/views.py b/website/sales/views.py\n--- a/website/sales/views.py\n+++ b/website/sales/views.py\n@@ -23,10 +23,6 @@\n order.payer = request.member\n order.save()\n \n- if order.total_amount == 0:\n- messages.warning(request, _(\"This order doesn't require payment.\"))\n- return redirect(\"index\")\n-\n if order.age_restricted and not services.is_adult(request.member):\n messages.error(\n request,\n@@ -35,4 +31,19 @@\n ),\n )\n return redirect(\"index\")\n+\n+ if (\n+ order.age_restricted\n+ and services.is_adult(request.member)\n+ and order.total_amount == 0\n+ ):\n+ messages.success(\n+ request, _(\"You have successfully identified yourself for this order.\")\n+ )\n+ return redirect(\"index\")\n+\n+ if order.total_amount == 0:\n+ messages.info(request, _(\"This order doesn't require payment.\"))\n+ return redirect(\"index\")\n+\n return render(request, \"sales/order_payment.html\", {\"order\": order})\n", "issue": "Still provide payment url for orders if order is free but age restricted\n### Is your feature request related to a problem? Please describe.\r\nFree orders don't need payment, so the API doesn't provide a payment url for free orders. However, age restricted orders still need member identification. For that, the API should still provide the URL that sets the payer.\r\n\r\n### Describe the solution you'd like\r\nProvide the payment url if the order is free but age restricted. 
In such cases, The sales order payment view should display a succes message 'you correctly identified yourself' or something like that.\r\n\r\n### Motivation\r\nScanning a QR code for identification is nice\r\n\r\n### Describe alternatives you've considered\r\nWe could also provide an identification url, which could make more sense as the url isn't exactly used for payments, but that would also make our API unnecessarily more complex as we can just re-use the existing field and view for it.\r\n\n", "before_files": [{"content": "from django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views import View\n\nfrom sales import services\nfrom sales.models.order import Order\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass OrderPaymentView(View):\n def get(self, request, *args, **kwargs):\n order = get_object_or_404(Order, pk=kwargs[\"pk\"])\n if order.payment:\n messages.warning(request, _(\"This order was already paid for.\"))\n return redirect(\"index\")\n if order.payer is not None and order.payer != request.member:\n messages.warning(request, _(\"This order is not yours.\"))\n return redirect(\"index\")\n\n order.payer = request.member\n order.save()\n\n if order.total_amount == 0:\n messages.warning(request, _(\"This order doesn't require payment.\"))\n return redirect(\"index\")\n\n if order.age_restricted and not services.is_adult(request.member):\n messages.error(\n request,\n _(\n \"The age restrictions on this order do not allow you to pay for this order.\"\n ),\n )\n return redirect(\"index\")\n return render(request, \"sales/order_payment.html\", {\"order\": order})\n", "path": "website/sales/views.py"}, {"content": "from decimal import Decimal\n\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\nfrom django.core.validators import MinValueValidator\nfrom django.db import models\nfrom django.db.models import (\n Sum,\n Value,\n F,\n DecimalField,\n Q,\n IntegerField,\n BooleanField,\n Count,\n)\nfrom django.db.models.functions import Coalesce\nfrom django.urls import reverse\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\nfrom queryable_properties.managers import QueryablePropertiesManager\nfrom queryable_properties.properties import AnnotationProperty\n\nfrom members.models import uuid, Member\nfrom payments.models import Payment\nfrom sales.models.product import ProductListItem\nfrom sales.models.shift import Shift\n\n\ndef default_order_shift():\n return Shift.objects.filter(active=True).first()\n\n\nclass Order(models.Model):\n\n objects = QueryablePropertiesManager()\n\n class Meta:\n verbose_name = _(\"order\")\n verbose_name_plural = _(\"orders\")\n permissions = [\n (\"custom_prices\", _(\"Can use custom prices and discounts in orders\")),\n ]\n ordering = [\"created_at\"]\n\n id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)\n\n created_at = models.DateTimeField(\n verbose_name=_(\"created at\"), default=timezone.now\n )\n\n shift = models.ForeignKey(\n Shift,\n verbose_name=_(\"shift\"),\n related_name=\"orders\",\n default=default_order_shift,\n null=False,\n blank=False,\n on_delete=models.PROTECT,\n )\n\n items = models.ManyToManyField(\n ProductListItem, through=\"OrderItem\", verbose_name=_(\"items\"),\n )\n\n payment = 
models.OneToOneField(\n Payment,\n verbose_name=_(\"payment\"),\n related_name=\"sales_order\",\n on_delete=models.SET_NULL,\n blank=True,\n null=True,\n )\n\n discount = models.DecimalField(\n verbose_name=_(\"discount\"),\n max_digits=6,\n decimal_places=2,\n null=True,\n blank=True,\n validators=[MinValueValidator(Decimal(\"0.00\"))],\n )\n\n payer = models.ForeignKey(\n Member,\n models.SET_NULL,\n verbose_name=_(\"payer\"),\n related_name=\"sales_order\",\n blank=True,\n null=True,\n )\n\n age_restricted = AnnotationProperty(\n Count(\n \"order_items__pk\",\n filter=Q(order_items__product__product__age_restricted=True),\n output_field=BooleanField(),\n )\n )\n\n subtotal = AnnotationProperty(\n Coalesce(Sum(\"order_items__total\"), Value(0.00), output_field=DecimalField())\n )\n\n total_amount = AnnotationProperty(\n Coalesce(Sum(\"order_items__total\"), Value(0.00), output_field=DecimalField())\n - Coalesce(F(\"discount\"), Value(0.00), output_field=DecimalField())\n )\n\n num_items = AnnotationProperty(\n Coalesce(Sum(\"order_items__amount\"), Value(0), output_field=IntegerField())\n )\n\n def save(\n self, force_insert=False, force_update=False, using=None, update_fields=None\n ):\n if self.shift.locked:\n raise ValueError(\"The shift this order belongs to is locked.\")\n if self.shift.start > timezone.now():\n raise ValueError(\"The shift hasn't started yet.\")\n if (\n self.payment\n and self.subtotal - Decimal(self.discount or 0) != self.payment.amount\n ):\n # We cannot use self.total_amount as it is a requires a database query and hence will not use any updated values\n raise ValueError(\n \"The payment amount does not match the order total amount.\"\n )\n if self.payment and not self.payer:\n self.payer = self.payment.paid_by\n\n return super(Order, self).save(force_insert, force_update, using, update_fields)\n\n def clean(self):\n super().clean()\n errors = {}\n\n if self.shift.start > timezone.now():\n errors.update({\"shift\": _(\"The shift hasn't started yet.\")})\n\n if self.shift.locked:\n errors.update({\"shift\": _(\"The shift this order belongs to is locked.\")})\n\n if self.discount and self.discount > self.total_amount:\n errors.update(\n {\"discount\": _(\"Discount cannot be higher than total amount.\")}\n )\n\n if errors:\n raise ValidationError(errors)\n\n @property\n def order_description(self):\n return \", \".join(str(x) for x in self.order_items.all())\n\n @property\n def accept_payment_from_any_user(self):\n return True\n\n @property\n def payment_url(self):\n return (\n settings.BASE_URL + reverse(\"sales:order-pay\", kwargs={\"pk\": self.pk})\n if not self.payment\n and (self.total_amount is not None and self.total_amount != 0)\n else None\n )\n\n def __str__(self):\n return f\"Order {self.id} ({self.shift})\"\n\n\nclass OrderItem(models.Model):\n class Meta:\n verbose_name = \"item\"\n verbose_name_plural = \"items\"\n ordering = [\"pk\"]\n indexes = [\n models.Index(fields=[\"order\"]),\n ]\n\n product = models.ForeignKey(\n ProductListItem,\n verbose_name=_(\"product\"),\n null=False,\n blank=False,\n on_delete=models.PROTECT,\n )\n order = models.ForeignKey(\n Order,\n verbose_name=_(\"order\"),\n related_name=\"order_items\",\n null=False,\n blank=False,\n on_delete=models.CASCADE,\n )\n total = models.DecimalField(\n verbose_name=_(\"total\"),\n max_digits=6,\n decimal_places=2,\n null=False,\n blank=True,\n validators=[MinValueValidator(Decimal(\"0.00\"))],\n help_text=\"Only when overriding the default\",\n )\n amount = 
models.PositiveSmallIntegerField(\n verbose_name=_(\"amount\"), null=False, blank=False\n )\n\n def save(\n self, force_insert=False, force_update=False, using=None, update_fields=None\n ):\n if self.order.shift.locked:\n raise ValueError(\"The shift this order belongs to is locked.\")\n if self.order.payment:\n raise ValueError(\"This order has already been paid for.\")\n\n if not self.total:\n self.total = self.product.price * self.amount\n\n return super(OrderItem, self).save(\n force_insert, force_update, using, update_fields\n )\n\n def clean(self):\n super().clean()\n errors = {}\n\n if self.order.shift.locked:\n errors.update({\"order\": _(\"The shift is locked.\")})\n\n if self.product not in self.order.shift.product_list.product_items.all():\n errors.update({\"product\": _(\"This product is not available.\")})\n\n if errors:\n raise ValidationError(errors)\n\n def __str__(self):\n return f\"{self.amount}x {self.product.product.name}\"\n", "path": "website/sales/models/order.py"}], "after_files": [{"content": "from django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views import View\n\nfrom sales import services\nfrom sales.models.order import Order\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass OrderPaymentView(View):\n def get(self, request, *args, **kwargs):\n order = get_object_or_404(Order, pk=kwargs[\"pk\"])\n if order.payment:\n messages.warning(request, _(\"This order was already paid for.\"))\n return redirect(\"index\")\n if order.payer is not None and order.payer != request.member:\n messages.warning(request, _(\"This order is not yours.\"))\n return redirect(\"index\")\n\n order.payer = request.member\n order.save()\n\n if order.age_restricted and not services.is_adult(request.member):\n messages.error(\n request,\n _(\n \"The age restrictions on this order do not allow you to pay for this order.\"\n ),\n )\n return redirect(\"index\")\n\n if (\n order.age_restricted\n and services.is_adult(request.member)\n and order.total_amount == 0\n ):\n messages.success(\n request, _(\"You have successfully identified yourself for this order.\")\n )\n return redirect(\"index\")\n\n if order.total_amount == 0:\n messages.info(request, _(\"This order doesn't require payment.\"))\n return redirect(\"index\")\n\n return render(request, \"sales/order_payment.html\", {\"order\": order})\n", "path": "website/sales/views.py"}, {"content": "from decimal import Decimal\n\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\nfrom django.core.validators import MinValueValidator\nfrom django.db import models\nfrom django.db.models import (\n Sum,\n Value,\n F,\n DecimalField,\n Q,\n IntegerField,\n BooleanField,\n Count,\n)\nfrom django.db.models.functions import Coalesce\nfrom django.urls import reverse\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\nfrom queryable_properties.managers import QueryablePropertiesManager\nfrom queryable_properties.properties import AnnotationProperty\n\nfrom members.models import uuid, Member\nfrom payments.models import Payment\nfrom sales.models.product import ProductListItem\nfrom sales.models.shift import Shift\n\n\ndef default_order_shift():\n return Shift.objects.filter(active=True).first()\n\n\nclass Order(models.Model):\n\n objects = 
QueryablePropertiesManager()\n\n class Meta:\n verbose_name = _(\"order\")\n verbose_name_plural = _(\"orders\")\n permissions = [\n (\"custom_prices\", _(\"Can use custom prices and discounts in orders\")),\n ]\n ordering = [\"created_at\"]\n\n id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)\n\n created_at = models.DateTimeField(\n verbose_name=_(\"created at\"), default=timezone.now\n )\n\n shift = models.ForeignKey(\n Shift,\n verbose_name=_(\"shift\"),\n related_name=\"orders\",\n default=default_order_shift,\n null=False,\n blank=False,\n on_delete=models.PROTECT,\n )\n\n items = models.ManyToManyField(\n ProductListItem, through=\"OrderItem\", verbose_name=_(\"items\"),\n )\n\n payment = models.OneToOneField(\n Payment,\n verbose_name=_(\"payment\"),\n related_name=\"sales_order\",\n on_delete=models.SET_NULL,\n blank=True,\n null=True,\n )\n\n discount = models.DecimalField(\n verbose_name=_(\"discount\"),\n max_digits=6,\n decimal_places=2,\n null=True,\n blank=True,\n validators=[MinValueValidator(Decimal(\"0.00\"))],\n )\n\n payer = models.ForeignKey(\n Member,\n models.SET_NULL,\n verbose_name=_(\"payer\"),\n related_name=\"sales_order\",\n blank=True,\n null=True,\n )\n\n age_restricted = AnnotationProperty(\n Count(\n \"order_items__pk\",\n filter=Q(order_items__product__product__age_restricted=True),\n output_field=BooleanField(),\n )\n )\n\n subtotal = AnnotationProperty(\n Coalesce(Sum(\"order_items__total\"), Value(0.00), output_field=DecimalField())\n )\n\n total_amount = AnnotationProperty(\n Coalesce(Sum(\"order_items__total\"), Value(0.00), output_field=DecimalField())\n - Coalesce(F(\"discount\"), Value(0.00), output_field=DecimalField())\n )\n\n num_items = AnnotationProperty(\n Coalesce(Sum(\"order_items__amount\"), Value(0), output_field=IntegerField())\n )\n\n def save(\n self, force_insert=False, force_update=False, using=None, update_fields=None\n ):\n if self.shift.locked:\n raise ValueError(\"The shift this order belongs to is locked.\")\n if self.shift.start > timezone.now():\n raise ValueError(\"The shift hasn't started yet.\")\n if (\n self.payment\n and self.subtotal - Decimal(self.discount or 0) != self.payment.amount\n ):\n # We cannot use self.total_amount as it is a requires a database query and hence will not use any updated values\n raise ValueError(\n \"The payment amount does not match the order total amount.\"\n )\n if self.payment and not self.payer:\n self.payer = self.payment.paid_by\n\n return super(Order, self).save(force_insert, force_update, using, update_fields)\n\n def clean(self):\n super().clean()\n errors = {}\n\n if self.shift.start > timezone.now():\n errors.update({\"shift\": _(\"The shift hasn't started yet.\")})\n\n if self.shift.locked:\n errors.update({\"shift\": _(\"The shift this order belongs to is locked.\")})\n\n if self.discount and self.discount > self.total_amount:\n errors.update(\n {\"discount\": _(\"Discount cannot be higher than total amount.\")}\n )\n\n if errors:\n raise ValidationError(errors)\n\n @property\n def order_description(self):\n return \", \".join(str(x) for x in self.order_items.all())\n\n @property\n def accept_payment_from_any_user(self):\n return True\n\n @property\n def payment_url(self):\n return (\n settings.BASE_URL + reverse(\"sales:order-pay\", kwargs={\"pk\": self.pk})\n if not self.payment\n else None\n )\n\n def __str__(self):\n return f\"Order {self.id} ({self.shift})\"\n\n\nclass OrderItem(models.Model):\n class Meta:\n verbose_name = \"item\"\n 
verbose_name_plural = \"items\"\n ordering = [\"pk\"]\n indexes = [\n models.Index(fields=[\"order\"]),\n ]\n\n product = models.ForeignKey(\n ProductListItem,\n verbose_name=_(\"product\"),\n null=False,\n blank=False,\n on_delete=models.PROTECT,\n )\n order = models.ForeignKey(\n Order,\n verbose_name=_(\"order\"),\n related_name=\"order_items\",\n null=False,\n blank=False,\n on_delete=models.CASCADE,\n )\n total = models.DecimalField(\n verbose_name=_(\"total\"),\n max_digits=6,\n decimal_places=2,\n null=False,\n blank=True,\n validators=[MinValueValidator(Decimal(\"0.00\"))],\n help_text=\"Only when overriding the default\",\n )\n amount = models.PositiveSmallIntegerField(\n verbose_name=_(\"amount\"), null=False, blank=False\n )\n\n def save(\n self, force_insert=False, force_update=False, using=None, update_fields=None\n ):\n if self.order.shift.locked:\n raise ValueError(\"The shift this order belongs to is locked.\")\n if self.order.payment:\n raise ValueError(\"This order has already been paid for.\")\n\n if not self.total:\n self.total = self.product.price * self.amount\n\n return super(OrderItem, self).save(\n force_insert, force_update, using, update_fields\n )\n\n def clean(self):\n super().clean()\n errors = {}\n\n if self.order.shift.locked:\n errors.update({\"order\": _(\"The shift is locked.\")})\n\n if self.product not in self.order.shift.product_list.product_items.all():\n errors.update({\"product\": _(\"This product is not available.\")})\n\n if errors:\n raise ValidationError(errors)\n\n def __str__(self):\n return f\"{self.amount}x {self.product.product.name}\"\n", "path": "website/sales/models/order.py"}]} | 2,885 | 361 |
gh_patches_debug_34661 | rasdani/github-patches | git_diff | ResonantGeoData__ResonantGeoData-681 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add new overlap parameter in search filter
A user wants to be able to search for SpatialEntries that have x% overlap with the search geometry. We should add this as an additional field on the `SpatialEntryFilter`.
Perhaps call it `percent_overlap` and accept a number between 0 and 1.
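For illustration, one way to express this (a sketch, assuming a PostGIS backend and building on the `SpatialEntryFilter` shown below, which already provides the `_geometry`/`_has_geom` helpers) is to compare the area of the intersection against the area of the queried geometry:

```python
# Sketch only: field name and semantics (fraction of the *query* geometry
# covered by the footprint) are assumptions, not a settled API.
from django.contrib.gis.db.models.functions import Area, Intersection
from django.db.models import F, FloatField
from django.db.models.functions import Cast
from django_filters import rest_framework as filters


class SpatialEntryFilter(filters.FilterSet):
    # ... existing filters (q, predicate, relates, distance, ...) ...

    percent_overlap = filters.NumberFilter(
        help_text='The minimum fractional overlap with the search geometry (between 0 and 1).',
        label='Percent overlap',
        method='filter_percent_overlap',
    )

    def filter_percent_overlap(self, queryset, name, value):
        """Keep entries covering at least `value` of the queried geometry."""
        if value is not None and 0 < value <= 1 and self._has_geom:
            geom = self._geometry
            queryset = queryset.annotate(
                overlap_fraction=Cast(
                    Area(Intersection(F('footprint'), geom)) / Area(geom),
                    FloatField(),
                )
            ).filter(overlap_fraction__gte=value)
        return queryset
```

A value of 0.5 would then keep only entries whose footprint covers at least half of the queried geometry.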
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django-rgd/rgd/filters.py`
Content:
```
1 from django.contrib.gis import forms
2 from django.contrib.gis.db.models.functions import GeometryDistance
3 from django.contrib.gis.measure import D
4 from django.core.validators import RegexValidator
5 from django.db.models import F, Q
6 from django_filters import rest_framework as filters
7 from rgd.models import ChecksumFile, Collection, SpatialEntry
8 from rgd.permissions import get_paths
9
10
11 class GeometryFilter(filters.Filter):
12 field_class = forms.GeometryField
13 # Ensures GeoJSON objects are converted to correct SRID
14 field_class.widget.map_srid = 4326
15
16
17 class SpatialEntryFilter(filters.FilterSet):
18
19 q = GeometryFilter(
20 help_text='A Well-known text (WKT) representation of a geometry or a GeoJSON.',
21 label='WKT/GeoJSON',
22 method='filter_q',
23 )
24 predicate = filters.ChoiceFilter(
25 choices=(
26 ('contains', 'contains'),
27 ('crosses', 'crosses'),
28 ('disjoint', 'disjoint'),
29 ('equals', 'equals'),
30 ('intersects', 'intersects'),
31 ('overlaps', 'overlaps'),
32 ('touches', 'touches'),
33 ('within', 'within'),
34 ),
35 help_text=(
36 'A named spatial predicate based on the DE-9IM. This spatial predicate will be used '
37 'to filter data such that `predicate(a, b)` where `b` is the queried geometry.'
38 ),
39 label='Spatial predicate',
40 method='filter_predicate',
41 )
42 relates = filters.CharFilter(
43 help_text=(
44 'Specify exactly how the queried geometry should relate to the data using a DE-9IM '
45 'string code.'
46 ),
47 label='DE-9IM string code',
48 max_length=9,
49 method='filter_relates',
50 min_length=9,
51 validators=(
52 RegexValidator(regex=r'^[\*012TF]{9}$', message='Enter a valid DE-9IM string code.'),
53 ),
54 )
55 distance = filters.RangeFilter(
56 help_text='The minimum/maximum distance around the queried geometry in meters.',
57 label='Distance',
58 method='filter_distance',
59 )
60 acquired = filters.IsoDateTimeFromToRangeFilter(
61 field_name='acquisition_date',
62 help_text='The ISO 8601 formatted date and time when data was acquired.',
63 label='Acquired',
64 )
65 instrumentation = filters.CharFilter(
66 field_name='instrumentation',
67 help_text='The instrumentation used to acquire at least one of these data.',
68 label='Instrumentation',
69 lookup_expr='icontains',
70 )
71 time_of_day = filters.TimeRangeFilter(
72 help_text='The minimum/maximum times during the day the records were acquired.',
73 label='Time of Day',
74 method='filter_time_of_day',
75 )
76 collections = filters.ModelMultipleChoiceFilter(
77 help_text='One or more collections that the data might belong to.',
78 label='Collections',
79 method='filter_collection',
80 queryset=Collection.objects.all(),
81 )
82
83 @property
84 def _geometry(self):
85 return self.form.cleaned_data['q']
86
87 @property
88 def _has_geom(self):
89 return self._geometry is not None
90
91 def filter_q(self, queryset, name, value):
92 """Sort the queryset by distance to queried geometry.
93
94 Annotates the queryset with `distance`.
95
96 This uses the efficient KNN operation:
97 https://postgis.net/docs/geometry_distance_knn.html
98 """
99 if value:
100 queryset = queryset.annotate(distance=GeometryDistance('footprint', value)).order_by(
101 'distance'
102 )
103 return queryset
104
105 def filter_predicate(self, queryset, name, value):
106 """Filter the spatial entries by the chosen predicate."""
107 if value and self._has_geom:
108 queryset = queryset.filter(**{f'footprint__{value}': self._geometry})
109 return queryset
110
111 def filter_relates(self, queryset, name, value):
112 """Filter the spatial entries by the chosen DE-9IM."""
113 if value and self._has_geom:
114 queryset = queryset.filter(footprint__relates=(self._geometry, value))
115 return queryset
116
117 def filter_distance(self, queryset, name, value):
118 """Filter the queryset by distance to the queried geometry.
119
120 We may wish to use the distance in degrees later on. This is
121 very taxing on the DBMS right now. The distance in degrees
122 can be provided by the initial geometry query.
123 """
124 if value and self._has_geom:
125 geom = self._geometry
126 if value.start is not None:
127 queryset = queryset.filter(footprint__distance_gte=(geom, D(m=value.start)))
128 if value.stop is not None:
129 queryset = queryset.filter(footprint__distance_lte=(geom, D(m=value.stop)))
130 return queryset
131
132 def filter_time_of_day(self, queryset, name, value):
133 """Filter the queryset by time of day acquired.
134
135 Use case: find all rasters acquired between 8am and 4pm
136 for all days in the acquired date range (i.e. only daytime imagery)
137 """
138 if value is not None:
139 queryset = queryset.annotate(time_of_day=F('acquisition_date__time'))
140 if value.start is not None:
141 queryset = queryset.filter(time_of_day__gte=value.start)
142 if value.stop is not None:
143 queryset = queryset.filter(time_of_day__lte=value.stop)
144 return queryset
145
146 def filter_collection(self, queryset, name, value):
147 """Filter the queryset by the collection it belongs to."""
148 if value:
149 conditions = Q()
150 for path in get_paths(queryset.model, ChecksumFile):
151 conditions |= path.q(collection__in=value)
152 return queryset.filter(conditions).distinct()
153 return queryset
154
155 class Meta:
156 model = SpatialEntry
157 fields = [
158 'q',
159 'predicate',
160 'relates',
161 'distance',
162 'acquired',
163 'instrumentation',
164 'time_of_day',
165 'collections',
166 ]
167
168
169 class CollectionFilter(filters.FilterSet):
170 name = filters.CharFilter(
171 field_name='name',
172 help_text='The name of the Collection.',
173 label='Name',
174 lookup_expr='exact',
175 )
176
177 class Meta:
178 model = Collection
179 fields = [
180 'name',
181 ]
182
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django-rgd/rgd/filters.py b/django-rgd/rgd/filters.py
--- a/django-rgd/rgd/filters.py
+++ b/django-rgd/rgd/filters.py
@@ -1,8 +1,9 @@
from django.contrib.gis import forms
-from django.contrib.gis.db.models.functions import GeometryDistance
+from django.contrib.gis.db.models.functions import Area, GeometryDistance, Intersection
from django.contrib.gis.measure import D
from django.core.validators import RegexValidator
-from django.db.models import F, Q
+from django.db.models import F, FloatField, Q
+from django.db.models.functions import Cast
from django_filters import rest_framework as filters
from rgd.models import ChecksumFile, Collection, SpatialEntry
from rgd.permissions import get_paths
@@ -79,6 +80,11 @@
method='filter_collection',
queryset=Collection.objects.all(),
)
+ percent_overlap = filters.NumberFilter(
+ help_text='The minute percent overlap with search geometry (between 0 and 1).',
+ label='Percent overlap',
+ method='filter_percent_overlap',
+ )
@property
def _geometry(self):
@@ -152,6 +158,18 @@
return queryset.filter(conditions).distinct()
return queryset
+ def filter_percent_overlap(self, queryset, name, value: float):
+ """Filter the queryset by percent overlap with the queried geometry."""
+ if value is not None and value > 0 and value <= 1 and self._has_geom:
+ geom = self._geometry
+ queryset = queryset.filter(footprint__overlaps=geom).annotate(
+ overlap_percentage=(
+ Cast(Area(Intersection(F('footprint'), geom)) / Area(geom), FloatField())
+ )
+ )
+ queryset = queryset.filter(overlap_percentage__gte=value)
+ return queryset
+
class Meta:
model = SpatialEntry
fields = [
@@ -160,6 +178,7 @@
'relates',
'distance',
'acquired',
+ 'percent_overlap',
'instrumentation',
'time_of_day',
'collections',
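As a hedged illustration only, the new filter could be exercised over HTTP roughly as follows; the endpoint URL is hypothetical, and only the parameter names (`q`, `percent_overlap`) come from the patch above.

```python
# Illustrative request against a hypothetical deployment of this API.
import requests

search_geom = (
    '{"type": "Polygon", "coordinates": [[[-105.1, 39.9], [-104.9, 39.9],'
    ' [-104.9, 40.1], [-105.1, 40.1], [-105.1, 39.9]]]}'
)
params = {
    "q": search_geom,        # search geometry (GeoJSON or WKT)
    "percent_overlap": 0.5,  # keep entries covering >= 50% of the search geometry
}
resp = requests.get("https://example.com/api/rgd/search", params=params)
resp.raise_for_status()
print(resp.json())
```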
| {"golden_diff": "diff --git a/django-rgd/rgd/filters.py b/django-rgd/rgd/filters.py\n--- a/django-rgd/rgd/filters.py\n+++ b/django-rgd/rgd/filters.py\n@@ -1,8 +1,9 @@\n from django.contrib.gis import forms\n-from django.contrib.gis.db.models.functions import GeometryDistance\n+from django.contrib.gis.db.models.functions import Area, GeometryDistance, Intersection\n from django.contrib.gis.measure import D\n from django.core.validators import RegexValidator\n-from django.db.models import F, Q\n+from django.db.models import F, FloatField, Q\n+from django.db.models.functions import Cast\n from django_filters import rest_framework as filters\n from rgd.models import ChecksumFile, Collection, SpatialEntry\n from rgd.permissions import get_paths\n@@ -79,6 +80,11 @@\n method='filter_collection',\n queryset=Collection.objects.all(),\n )\n+ percent_overlap = filters.NumberFilter(\n+ help_text='The minute percent overlap with search geometry (between 0 and 1).',\n+ label='Percent overlap',\n+ method='filter_percent_overlap',\n+ )\n \n @property\n def _geometry(self):\n@@ -152,6 +158,18 @@\n return queryset.filter(conditions).distinct()\n return queryset\n \n+ def filter_percent_overlap(self, queryset, name, value: float):\n+ \"\"\"Filter the queryset by percent overlap with the queried geometry.\"\"\"\n+ if value is not None and value > 0 and value <= 1 and self._has_geom:\n+ geom = self._geometry\n+ queryset = queryset.filter(footprint__overlaps=geom).annotate(\n+ overlap_percentage=(\n+ Cast(Area(Intersection(F('footprint'), geom)) / Area(geom), FloatField())\n+ )\n+ )\n+ queryset = queryset.filter(overlap_percentage__gte=value)\n+ return queryset\n+\n class Meta:\n model = SpatialEntry\n fields = [\n@@ -160,6 +178,7 @@\n 'relates',\n 'distance',\n 'acquired',\n+ 'percent_overlap',\n 'instrumentation',\n 'time_of_day',\n 'collections',\n", "issue": "Add new overlap parameter in search filter\nA user wants to be able to search for SpatialEntries that have x% overlap with the search geometry. We should add this as an additional field on the `SpatialEntryFilter`.\r\n\r\nPerhaps call it `percent_overlap` and accept a number between 0 an 1.\n", "before_files": [{"content": "from django.contrib.gis import forms\nfrom django.contrib.gis.db.models.functions import GeometryDistance\nfrom django.contrib.gis.measure import D\nfrom django.core.validators import RegexValidator\nfrom django.db.models import F, Q\nfrom django_filters import rest_framework as filters\nfrom rgd.models import ChecksumFile, Collection, SpatialEntry\nfrom rgd.permissions import get_paths\n\n\nclass GeometryFilter(filters.Filter):\n field_class = forms.GeometryField\n # Ensures GeoJSON objects are converted to correct SRID\n field_class.widget.map_srid = 4326\n\n\nclass SpatialEntryFilter(filters.FilterSet):\n\n q = GeometryFilter(\n help_text='A Well-known text (WKT) representation of a geometry or a GeoJSON.',\n label='WKT/GeoJSON',\n method='filter_q',\n )\n predicate = filters.ChoiceFilter(\n choices=(\n ('contains', 'contains'),\n ('crosses', 'crosses'),\n ('disjoint', 'disjoint'),\n ('equals', 'equals'),\n ('intersects', 'intersects'),\n ('overlaps', 'overlaps'),\n ('touches', 'touches'),\n ('within', 'within'),\n ),\n help_text=(\n 'A named spatial predicate based on the DE-9IM. 
This spatial predicate will be used '\n 'to filter data such that `predicate(a, b)` where `b` is the queried geometry.'\n ),\n label='Spatial predicate',\n method='filter_predicate',\n )\n relates = filters.CharFilter(\n help_text=(\n 'Specify exactly how the queried geometry should relate to the data using a DE-9IM '\n 'string code.'\n ),\n label='DE-9IM string code',\n max_length=9,\n method='filter_relates',\n min_length=9,\n validators=(\n RegexValidator(regex=r'^[\\*012TF]{9}$', message='Enter a valid DE-9IM string code.'),\n ),\n )\n distance = filters.RangeFilter(\n help_text='The minimum/maximum distance around the queried geometry in meters.',\n label='Distance',\n method='filter_distance',\n )\n acquired = filters.IsoDateTimeFromToRangeFilter(\n field_name='acquisition_date',\n help_text='The ISO 8601 formatted date and time when data was acquired.',\n label='Acquired',\n )\n instrumentation = filters.CharFilter(\n field_name='instrumentation',\n help_text='The instrumentation used to acquire at least one of these data.',\n label='Instrumentation',\n lookup_expr='icontains',\n )\n time_of_day = filters.TimeRangeFilter(\n help_text='The minimum/maximum times during the day the records were acquired.',\n label='Time of Day',\n method='filter_time_of_day',\n )\n collections = filters.ModelMultipleChoiceFilter(\n help_text='One or more collections that the data might belong to.',\n label='Collections',\n method='filter_collection',\n queryset=Collection.objects.all(),\n )\n\n @property\n def _geometry(self):\n return self.form.cleaned_data['q']\n\n @property\n def _has_geom(self):\n return self._geometry is not None\n\n def filter_q(self, queryset, name, value):\n \"\"\"Sort the queryset by distance to queried geometry.\n\n Annotates the queryset with `distance`.\n\n This uses the efficient KNN operation:\n https://postgis.net/docs/geometry_distance_knn.html\n \"\"\"\n if value:\n queryset = queryset.annotate(distance=GeometryDistance('footprint', value)).order_by(\n 'distance'\n )\n return queryset\n\n def filter_predicate(self, queryset, name, value):\n \"\"\"Filter the spatial entries by the chosen predicate.\"\"\"\n if value and self._has_geom:\n queryset = queryset.filter(**{f'footprint__{value}': self._geometry})\n return queryset\n\n def filter_relates(self, queryset, name, value):\n \"\"\"Filter the spatial entries by the chosen DE-9IM.\"\"\"\n if value and self._has_geom:\n queryset = queryset.filter(footprint__relates=(self._geometry, value))\n return queryset\n\n def filter_distance(self, queryset, name, value):\n \"\"\"Filter the queryset by distance to the queried geometry.\n\n We may wish to use the distance in degrees later on. This is\n very taxing on the DBMS right now. The distance in degrees\n can be provided by the initial geometry query.\n \"\"\"\n if value and self._has_geom:\n geom = self._geometry\n if value.start is not None:\n queryset = queryset.filter(footprint__distance_gte=(geom, D(m=value.start)))\n if value.stop is not None:\n queryset = queryset.filter(footprint__distance_lte=(geom, D(m=value.stop)))\n return queryset\n\n def filter_time_of_day(self, queryset, name, value):\n \"\"\"Filter the queryset by time of day acquired.\n\n Use case: find all rasters acquired between 8am and 4pm\n for all days in the acquired date range (i.e. 
only daytime imagery)\n \"\"\"\n if value is not None:\n queryset = queryset.annotate(time_of_day=F('acquisition_date__time'))\n if value.start is not None:\n queryset = queryset.filter(time_of_day__gte=value.start)\n if value.stop is not None:\n queryset = queryset.filter(time_of_day__lte=value.stop)\n return queryset\n\n def filter_collection(self, queryset, name, value):\n \"\"\"Filter the queryset by the collection it belongs to.\"\"\"\n if value:\n conditions = Q()\n for path in get_paths(queryset.model, ChecksumFile):\n conditions |= path.q(collection__in=value)\n return queryset.filter(conditions).distinct()\n return queryset\n\n class Meta:\n model = SpatialEntry\n fields = [\n 'q',\n 'predicate',\n 'relates',\n 'distance',\n 'acquired',\n 'instrumentation',\n 'time_of_day',\n 'collections',\n ]\n\n\nclass CollectionFilter(filters.FilterSet):\n name = filters.CharFilter(\n field_name='name',\n help_text='The name of the Collection.',\n label='Name',\n lookup_expr='exact',\n )\n\n class Meta:\n model = Collection\n fields = [\n 'name',\n ]\n", "path": "django-rgd/rgd/filters.py"}], "after_files": [{"content": "from django.contrib.gis import forms\nfrom django.contrib.gis.db.models.functions import Area, GeometryDistance, Intersection\nfrom django.contrib.gis.measure import D\nfrom django.core.validators import RegexValidator\nfrom django.db.models import F, FloatField, Q\nfrom django.db.models.functions import Cast\nfrom django_filters import rest_framework as filters\nfrom rgd.models import ChecksumFile, Collection, SpatialEntry\nfrom rgd.permissions import get_paths\n\n\nclass GeometryFilter(filters.Filter):\n field_class = forms.GeometryField\n # Ensures GeoJSON objects are converted to correct SRID\n field_class.widget.map_srid = 4326\n\n\nclass SpatialEntryFilter(filters.FilterSet):\n\n q = GeometryFilter(\n help_text='A Well-known text (WKT) representation of a geometry or a GeoJSON.',\n label='WKT/GeoJSON',\n method='filter_q',\n )\n predicate = filters.ChoiceFilter(\n choices=(\n ('contains', 'contains'),\n ('crosses', 'crosses'),\n ('disjoint', 'disjoint'),\n ('equals', 'equals'),\n ('intersects', 'intersects'),\n ('overlaps', 'overlaps'),\n ('touches', 'touches'),\n ('within', 'within'),\n ),\n help_text=(\n 'A named spatial predicate based on the DE-9IM. 
This spatial predicate will be used '\n 'to filter data such that `predicate(a, b)` where `b` is the queried geometry.'\n ),\n label='Spatial predicate',\n method='filter_predicate',\n )\n relates = filters.CharFilter(\n help_text=(\n 'Specify exactly how the queried geometry should relate to the data using a DE-9IM '\n 'string code.'\n ),\n label='DE-9IM string code',\n max_length=9,\n method='filter_relates',\n min_length=9,\n validators=(\n RegexValidator(regex=r'^[\\*012TF]{9}$', message='Enter a valid DE-9IM string code.'),\n ),\n )\n distance = filters.RangeFilter(\n help_text='The minimum/maximum distance around the queried geometry in meters.',\n label='Distance',\n method='filter_distance',\n )\n acquired = filters.IsoDateTimeFromToRangeFilter(\n field_name='acquisition_date',\n help_text='The ISO 8601 formatted date and time when data was acquired.',\n label='Acquired',\n )\n instrumentation = filters.CharFilter(\n field_name='instrumentation',\n help_text='The instrumentation used to acquire at least one of these data.',\n label='Instrumentation',\n lookup_expr='icontains',\n )\n time_of_day = filters.TimeRangeFilter(\n help_text='The minimum/maximum times during the day the records were acquired.',\n label='Time of Day',\n method='filter_time_of_day',\n )\n collections = filters.ModelMultipleChoiceFilter(\n help_text='One or more collections that the data might belong to.',\n label='Collections',\n method='filter_collection',\n queryset=Collection.objects.all(),\n )\n percent_overlap = filters.NumberFilter(\n help_text='The minute percent overlap with search geometry (between 0 and 1).',\n label='Percent overlap',\n method='filter_percent_overlap',\n )\n\n @property\n def _geometry(self):\n return self.form.cleaned_data['q']\n\n @property\n def _has_geom(self):\n return self._geometry is not None\n\n def filter_q(self, queryset, name, value):\n \"\"\"Sort the queryset by distance to queried geometry.\n\n Annotates the queryset with `distance`.\n\n This uses the efficient KNN operation:\n https://postgis.net/docs/geometry_distance_knn.html\n \"\"\"\n if value:\n queryset = queryset.annotate(distance=GeometryDistance('footprint', value)).order_by(\n 'distance'\n )\n return queryset\n\n def filter_predicate(self, queryset, name, value):\n \"\"\"Filter the spatial entries by the chosen predicate.\"\"\"\n if value and self._has_geom:\n queryset = queryset.filter(**{f'footprint__{value}': self._geometry})\n return queryset\n\n def filter_relates(self, queryset, name, value):\n \"\"\"Filter the spatial entries by the chosen DE-9IM.\"\"\"\n if value and self._has_geom:\n queryset = queryset.filter(footprint__relates=(self._geometry, value))\n return queryset\n\n def filter_distance(self, queryset, name, value):\n \"\"\"Filter the queryset by distance to the queried geometry.\n\n We may wish to use the distance in degrees later on. This is\n very taxing on the DBMS right now. The distance in degrees\n can be provided by the initial geometry query.\n \"\"\"\n if value and self._has_geom:\n geom = self._geometry\n if value.start is not None:\n queryset = queryset.filter(footprint__distance_gte=(geom, D(m=value.start)))\n if value.stop is not None:\n queryset = queryset.filter(footprint__distance_lte=(geom, D(m=value.stop)))\n return queryset\n\n def filter_time_of_day(self, queryset, name, value):\n \"\"\"Filter the queryset by time of day acquired.\n\n Use case: find all rasters acquired between 8am and 4pm\n for all days in the acquired date range (i.e. 
only daytime imagery)\n \"\"\"\n if value is not None:\n queryset = queryset.annotate(time_of_day=F('acquisition_date__time'))\n if value.start is not None:\n queryset = queryset.filter(time_of_day__gte=value.start)\n if value.stop is not None:\n queryset = queryset.filter(time_of_day__lte=value.stop)\n return queryset\n\n def filter_collection(self, queryset, name, value):\n \"\"\"Filter the queryset by the collection it belongs to.\"\"\"\n if value:\n conditions = Q()\n for path in get_paths(queryset.model, ChecksumFile):\n conditions |= path.q(collection__in=value)\n return queryset.filter(conditions).distinct()\n return queryset\n\n def filter_percent_overlap(self, queryset, name, value: float):\n \"\"\"Filter the queryset by percent overlap with the queried geometry.\"\"\"\n if value is not None and value > 0 and value <= 1 and self._has_geom:\n geom = self._geometry\n queryset = queryset.filter(footprint__overlaps=geom).annotate(\n overlap_percentage=(\n Cast(Area(Intersection(F('footprint'), geom)) / Area(geom), FloatField())\n )\n )\n queryset = queryset.filter(overlap_percentage__gte=value)\n return queryset\n\n class Meta:\n model = SpatialEntry\n fields = [\n 'q',\n 'predicate',\n 'relates',\n 'distance',\n 'acquired',\n 'percent_overlap',\n 'instrumentation',\n 'time_of_day',\n 'collections',\n ]\n\n\nclass CollectionFilter(filters.FilterSet):\n name = filters.CharFilter(\n field_name='name',\n help_text='The name of the Collection.',\n label='Name',\n lookup_expr='exact',\n )\n\n class Meta:\n model = Collection\n fields = [\n 'name',\n ]\n", "path": "django-rgd/rgd/filters.py"}]} | 2,117 | 490 |
gh_patches_debug_2175 | rasdani/github-patches | git_diff | networkx__networkx-2647 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Readthedocs pain
Readthedocs (RTD) is a pain to work with and keeps having timeout errors. I started to look into whether we can build the docs on our own and push them to RTD instead of having them built on the site. It would also make more sense to have the doc build as part of our CI process, rather than only checking it after the fact.
Has there been any discussion about moving away from RTD before (at least the build process)? If so, was there a reason not to move? I assume it is too late to move back to hosting the docs on github, but I thought I'd check since it might be easier to do.
--- END ISSUE ---
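To make the "build the docs ourselves" idea concrete, a minimal sketch of a local/CI build step might look like this, assuming Sphinx is installed and the sources live in `doc/`; the paths and flags are illustrative, not taken from the project's tooling.

```python
# Minimal doc-build sketch for local use or a CI job; not the project's script.
import subprocess

def build_docs(source_dir="doc", build_dir="doc/_build/html"):
    # -W turns warnings into errors so a CI job fails fast on doc problems.
    subprocess.check_call(["sphinx-build", "-b", "html", "-W", source_dir, build_dir])

if __name__ == "__main__":
    build_docs()
```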
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `doc/conf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Sphinx documentation build configuration file, created by
4 # sphinx-quickstart.py on Sat Mar 8 21:47:50 2008.
5 #
6 # This file is execfile()d with the current directory set to its containing dir.
7 #
8 # The contents of this file are pickled, so don't put values in the namespace
9 # that aren't pickleable (module imports are okay, they're removed automatically).
10 #
11 # All configuration values have a default value; values that are commented out
12 # serve to show the default value.
13 from __future__ import print_function
14
15 import sys
16 import os
17 from datetime import date
18
19 from sphinx_gallery.sorting import ExplicitOrder
20
21 # Check Sphinx version
22 import sphinx
23 if sphinx.__version__ < "1.3":
24 raise RuntimeError("Sphinx 1.3 or newer required")
25
26 # Environment variable to know if the docs are being built on rtd.
27 on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
28 #print
29 #print("Building on ReadTheDocs: {}".format(on_rtd))
30 #print
31 #print("Current working directory: {}".format(os.path.abspath(os.curdir)))
32 #print("Python: {}".format(sys.executable))
33
34 # If your extensions are in another directory, add it here.
35 # These locations are relative to conf.py
36
37 # General configuration
38 # ---------------------
39
40 # Add any Sphinx extension module names here, as strings. They can be extensions
41 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
42 extensions = [
43 'sphinx.ext.autosummary',
44 'sphinx.ext.autodoc',
45 'sphinx.ext.coverage',
46 'sphinx.ext.doctest',
47 'sphinx.ext.intersphinx',
48 'sphinx.ext.mathjax',
49 'sphinx.ext.napoleon',
50 'sphinx.ext.todo',
51 'sphinx.ext.viewcode',
52 'sphinx_gallery.gen_gallery',
53 'nb2plots',
54 'texext',
55 ]
56
57 # https://github.com/sphinx-gallery/sphinx-gallery
58 sphinx_gallery_conf = {
59 # path to your examples scripts
60 'examples_dirs': '../examples',
61 'subsection_order': ExplicitOrder(['../examples/basic',
62 '../examples/drawing',
63 '../examples/graph',
64 '../examples/algorithms',
65 '../examples/advanced',
66 '../examples/3d_drawing',
67 '../examples/pygraphviz',
68 '../examples/javascript',
69 '../examples/jit',
70 '../examples/subclass']),
71 # path where to save gallery generated examples
72 'gallery_dirs': 'auto_examples',
73 'backreferences_dir': 'modules/generated',
74 'expected_failing_examples': ['../examples/advanced/plot_parallel_betweenness.py']
75 }
76
77 # generate autosummary pages
78 autosummary_generate = True
79
80 # Add any paths that contain templates here, relative to this directory.
81 #templates_path = ['']
82
83 # The suffix of source filenames.
84 source_suffix = '.rst'
85
86 # The encoding of source files.
87 source_encoding = 'utf-8'
88
89 # The master toctree document.
90 master_doc = 'index'
91
92 # General substitutions.
93 project = 'NetworkX'
94 copyright = '2004-{}, NetworkX Developers'.format(date.today().year)
95
96 # The default replacements for |version| and |release|, also used in various
97 # other places throughout the built documents.
98 #
99 # The short X.Y version.
100 import networkx
101 version = networkx.__version__
102 # The full version, including dev info
103 release = networkx.__version__.replace('_', '')
104
105 # There are two options for replacing |today|: either, you set today to some
106 # non-false value, then it is used:
107 #today = ''
108 # Else, today_fmt is used as the format for a strftime call.
109 #today_fmt = '%B %d, %Y'
110
111 # List of documents that shouldn't be included in the build.
112 # unused_docs = ['']
113
114 # If true, '()' will be appended to :func: etc. cross-reference text.
115 #add_function_parentheses = True
116
117 # If true, the current module name will be prepended to all description
118 # unit titles (such as .. function::).
119 add_module_names = False
120
121 # show_authors = True
122
123 # The name of the Pygments (syntax highlighting) style to use.
124 #pygments_style = 'friendly'
125 pygments_style = 'sphinx'
126
127 # A list of prefixs that are ignored when creating the module index. (new in Sphinx 0.6)
128 modindex_common_prefix = ['networkx.']
129
130 doctest_global_setup = "import networkx as nx"
131
132 # treat ``x, y : type`` as vars x and y instead of default ``y(x,) : type``
133 napoleon_use_param = False
134
135 # Options for HTML output
136 # -----------------------
137
138 if not on_rtd:
139 import sphinx_rtd_theme
140 html_theme = 'sphinx_rtd_theme'
141 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
142
143 # html_theme_options = {
144 # "rightsidebar": "true",
145 # "relbarbgcolor: "black"
146 #}
147
148 # The style sheet to use for HTML and HTML Help pages. A file of that name
149 # must exist either in Sphinx' static/ path, or in one of the custom paths
150 # given in html_static_path.
151 #html_style = ''
152
153 # Add any paths that contain custom static files (such as style sheets) here,
154 # relative to this directory. They are copied after the builtin static files,
155 # so a file named "default.css" will overwrite the builtin "default.css".
156 html_static_path = ['_static']
157
158 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
159 # using the given strftime format.
160 html_last_updated_fmt = '%b %d, %Y'
161
162 # If true, SmartyPants will be used to convert quotes and dashes to
163 # typographically correct entities.
164 #html_use_smartypants = True
165
166 # Content template for the index page.
167 #html_index = 'index.html'
168
169 # Custom sidebar templates, maps page names to templates.
170 #html_sidebars = {}
171
172 # Additional templates that should be rendered to pages, maps page names to
173 # templates.
174 #html_additional_pages = {'': ''}
175
176 # If true, the reST sources are included in the HTML build as _sources/<name>.
177 html_copy_source = False
178
179 html_use_opensearch = 'http://networkx.github.io'
180
181 # Output file base name for HTML help builder.
182 htmlhelp_basename = 'NetworkX'
183
184 # Options for LaTeX output
185 # ------------------------
186
187 # The paper size ('letter' or 'a4').
188 latex_paper_size = 'letter'
189
190 # The font size ('10pt', '11pt' or '12pt').
191 #latex_font_size = '10pt'
192
193 # Grouping the document tree into LaTeX files. List of tuples
194 # (source start file, target name, title, author, document class [howto/manual]).
195 latex_documents = [('reference/index', 'networkx_reference.tex',
196 'NetworkX Reference',
197 'Aric Hagberg, Dan Schult, Pieter Swart', 'manual', 1)]
198
199 latex_appendices = ['tutorial']
200
201 # Intersphinx mapping
202 intersphinx_mapping = {'https://docs.python.org/': None,
203 'https://docs.scipy.org/doc/numpy/': None,
204 }
205
206 # The reST default role (used for this markup: `text`) to use for all
207 # documents.
208 default_role = 'obj'
209
210 numpydoc_show_class_members = False
211
212 # Add the 'copybutton' javascript, to hide/show the prompt in code
213 # examples
214 def setup(app):
215 app.add_javascript('copybutton.js')
216
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/doc/conf.py b/doc/conf.py
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -199,7 +199,7 @@
latex_appendices = ['tutorial']
# Intersphinx mapping
-intersphinx_mapping = {'https://docs.python.org/': None,
+intersphinx_mapping = {'https://docs.python.org/2/': None,
'https://docs.scipy.org/doc/numpy/': None,
}
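As a quick sanity check on the remapped inventory (not part of the patch), Sphinx's intersphinx module can be run as a script to fetch and print the inventory the new URL points at:

```python
# Requires Sphinx; a successful run prints the Python 2 object inventory,
# confirming that intersphinx can fetch and parse the remapped URL.
import subprocess

subprocess.check_call(
    ["python", "-m", "sphinx.ext.intersphinx",
     "https://docs.python.org/2/objects.inv"]
)
```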
| {"golden_diff": "diff --git a/doc/conf.py b/doc/conf.py\n--- a/doc/conf.py\n+++ b/doc/conf.py\n@@ -199,7 +199,7 @@\n latex_appendices = ['tutorial']\n \n # Intersphinx mapping\n-intersphinx_mapping = {'https://docs.python.org/': None,\n+intersphinx_mapping = {'https://docs.python.org/2/': None,\n 'https://docs.scipy.org/doc/numpy/': None,\n }\n", "issue": "Readthedocs pain\nReadthedocs (RTD) is a pain to work with and keeps having timeout errors. I started to look into whether we can build the docs on our own and push them to RTD instead of having it built on the site. It would also make more sense to have the doc build process as part of our CI process, rather than only checked after the fact.\r\n\r\nHas there been any discussion about moving away from RTD before (at least the build process)? If so, was there a reason not to move? I assume it is too late to move back to hosting the docs on github, but I thought I'd check since it might be easier to do.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Sphinx documentation build configuration file, created by\n# sphinx-quickstart.py on Sat Mar 8 21:47:50 2008.\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n# The contents of this file are pickled, so don't put values in the namespace\n# that aren't pickleable (module imports are okay, they're removed automatically).\n#\n# All configuration values have a default value; values that are commented out\n# serve to show the default value.\nfrom __future__ import print_function\n\nimport sys\nimport os\nfrom datetime import date\n\nfrom sphinx_gallery.sorting import ExplicitOrder\n\n# Check Sphinx version\nimport sphinx\nif sphinx.__version__ < \"1.3\":\n raise RuntimeError(\"Sphinx 1.3 or newer required\")\n\n# Environment variable to know if the docs are being built on rtd.\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n#print\n#print(\"Building on ReadTheDocs: {}\".format(on_rtd))\n#print\n#print(\"Current working directory: {}\".format(os.path.abspath(os.curdir)))\n#print(\"Python: {}\".format(sys.executable))\n\n# If your extensions are in another directory, add it here.\n# These locations are relative to conf.py\n\n# General configuration\n# ---------------------\n\n# Add any Sphinx extension module names here, as strings. 
They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = [\n 'sphinx.ext.autosummary',\n 'sphinx.ext.autodoc',\n 'sphinx.ext.coverage',\n 'sphinx.ext.doctest',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.napoleon',\n 'sphinx.ext.todo',\n 'sphinx.ext.viewcode',\n 'sphinx_gallery.gen_gallery',\n 'nb2plots',\n 'texext',\n]\n\n# https://github.com/sphinx-gallery/sphinx-gallery\nsphinx_gallery_conf = {\n # path to your examples scripts\n 'examples_dirs': '../examples',\n 'subsection_order': ExplicitOrder(['../examples/basic',\n '../examples/drawing',\n '../examples/graph',\n '../examples/algorithms',\n '../examples/advanced',\n '../examples/3d_drawing',\n '../examples/pygraphviz',\n '../examples/javascript',\n '../examples/jit',\n '../examples/subclass']),\n # path where to save gallery generated examples\n 'gallery_dirs': 'auto_examples',\n 'backreferences_dir': 'modules/generated',\n 'expected_failing_examples': ['../examples/advanced/plot_parallel_betweenness.py']\n}\n\n# generate autosummary pages\nautosummary_generate = True\n\n# Add any paths that contain templates here, relative to this directory.\n#templates_path = ['']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\nsource_encoding = 'utf-8'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General substitutions.\nproject = 'NetworkX'\ncopyright = '2004-{}, NetworkX Developers'.format(date.today().year)\n\n# The default replacements for |version| and |release|, also used in various\n# other places throughout the built documents.\n#\n# The short X.Y version.\nimport networkx\nversion = networkx.__version__\n# The full version, including dev info\nrelease = networkx.__version__.replace('_', '')\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of documents that shouldn't be included in the build.\n# unused_docs = ['']\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\nadd_module_names = False\n\n# show_authors = True\n\n# The name of the Pygments (syntax highlighting) style to use.\n#pygments_style = 'friendly'\npygments_style = 'sphinx'\n\n# A list of prefixs that are ignored when creating the module index. (new in Sphinx 0.6)\nmodindex_common_prefix = ['networkx.']\n\ndoctest_global_setup = \"import networkx as nx\"\n\n# treat ``x, y : type`` as vars x and y instead of default ``y(x,) : type``\nnapoleon_use_param = False\n\n# Options for HTML output\n# -----------------------\n\nif not on_rtd:\n import sphinx_rtd_theme\n html_theme = 'sphinx_rtd_theme'\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# html_theme_options = {\n# \"rightsidebar\": \"true\",\n# \"relbarbgcolor: \"black\"\n#}\n\n# The style sheet to use for HTML and HTML Help pages. A file of that name\n# must exist either in Sphinx' static/ path, or in one of the custom paths\n# given in html_static_path.\n#html_style = ''\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\nhtml_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Content template for the index page.\n#html_index = 'index.html'\n\n# Custom sidebar templates, maps page names to templates.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# templates.\n#html_additional_pages = {'': ''}\n\n# If true, the reST sources are included in the HTML build as _sources/<name>.\nhtml_copy_source = False\n\nhtml_use_opensearch = 'http://networkx.github.io'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'NetworkX'\n\n# Options for LaTeX output\n# ------------------------\n\n# The paper size ('letter' or 'a4').\nlatex_paper_size = 'letter'\n\n# The font size ('10pt', '11pt' or '12pt').\n#latex_font_size = '10pt'\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, document class [howto/manual]).\nlatex_documents = [('reference/index', 'networkx_reference.tex',\n 'NetworkX Reference',\n 'Aric Hagberg, Dan Schult, Pieter Swart', 'manual', 1)]\n\nlatex_appendices = ['tutorial']\n\n# Intersphinx mapping\nintersphinx_mapping = {'https://docs.python.org/': None,\n 'https://docs.scipy.org/doc/numpy/': None,\n }\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\ndefault_role = 'obj'\n\nnumpydoc_show_class_members = False\n\n# Add the 'copybutton' javascript, to hide/show the prompt in code\n# examples\ndef setup(app):\n app.add_javascript('copybutton.js')\n", "path": "doc/conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Sphinx documentation build configuration file, created by\n# sphinx-quickstart.py on Sat Mar 8 21:47:50 2008.\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n# The contents of this file are pickled, so don't put values in the namespace\n# that aren't pickleable (module imports are okay, they're removed automatically).\n#\n# All configuration values have a default value; values that are commented out\n# serve to show the default value.\nfrom __future__ import print_function\n\nimport sys\nimport os\nfrom datetime import date\n\nfrom sphinx_gallery.sorting import ExplicitOrder\n\n# Check Sphinx version\nimport sphinx\nif sphinx.__version__ < \"1.3\":\n raise RuntimeError(\"Sphinx 1.3 or newer required\")\n\n# Environment variable to know if the docs are being built on rtd.\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n#print\n#print(\"Building on ReadTheDocs: {}\".format(on_rtd))\n#print\n#print(\"Current working directory: {}\".format(os.path.abspath(os.curdir)))\n#print(\"Python: {}\".format(sys.executable))\n\n# If your extensions are in another directory, add it here.\n# These locations are relative to conf.py\n\n# General configuration\n# ---------------------\n\n# Add any Sphinx extension module names here, as strings. 
They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = [\n 'sphinx.ext.autosummary',\n 'sphinx.ext.autodoc',\n 'sphinx.ext.coverage',\n 'sphinx.ext.doctest',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.napoleon',\n 'sphinx.ext.todo',\n 'sphinx.ext.viewcode',\n 'sphinx_gallery.gen_gallery',\n 'nb2plots',\n 'texext',\n]\n\n# https://github.com/sphinx-gallery/sphinx-gallery\nsphinx_gallery_conf = {\n # path to your examples scripts\n 'examples_dirs': '../examples',\n 'subsection_order': ExplicitOrder(['../examples/basic',\n '../examples/drawing',\n '../examples/graph',\n '../examples/algorithms',\n '../examples/advanced',\n '../examples/3d_drawing',\n '../examples/pygraphviz',\n '../examples/javascript',\n '../examples/jit',\n '../examples/subclass']),\n # path where to save gallery generated examples\n 'gallery_dirs': 'auto_examples',\n 'backreferences_dir': 'modules/generated',\n 'expected_failing_examples': ['../examples/advanced/plot_parallel_betweenness.py']\n}\n\n# generate autosummary pages\nautosummary_generate = True\n\n# Add any paths that contain templates here, relative to this directory.\n#templates_path = ['']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\nsource_encoding = 'utf-8'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General substitutions.\nproject = 'NetworkX'\ncopyright = '2004-{}, NetworkX Developers'.format(date.today().year)\n\n# The default replacements for |version| and |release|, also used in various\n# other places throughout the built documents.\n#\n# The short X.Y version.\nimport networkx\nversion = networkx.__version__\n# The full version, including dev info\nrelease = networkx.__version__.replace('_', '')\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of documents that shouldn't be included in the build.\n# unused_docs = ['']\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\nadd_module_names = False\n\n# show_authors = True\n\n# The name of the Pygments (syntax highlighting) style to use.\n#pygments_style = 'friendly'\npygments_style = 'sphinx'\n\n# A list of prefixs that are ignored when creating the module index. (new in Sphinx 0.6)\nmodindex_common_prefix = ['networkx.']\n\ndoctest_global_setup = \"import networkx as nx\"\n\n# treat ``x, y : type`` as vars x and y instead of default ``y(x,) : type``\nnapoleon_use_param = False\n\n# Options for HTML output\n# -----------------------\n\nif not on_rtd:\n import sphinx_rtd_theme\n html_theme = 'sphinx_rtd_theme'\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# html_theme_options = {\n# \"rightsidebar\": \"true\",\n# \"relbarbgcolor: \"black\"\n#}\n\n# The style sheet to use for HTML and HTML Help pages. A file of that name\n# must exist either in Sphinx' static/ path, or in one of the custom paths\n# given in html_static_path.\n#html_style = ''\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\nhtml_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Content template for the index page.\n#html_index = 'index.html'\n\n# Custom sidebar templates, maps page names to templates.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# templates.\n#html_additional_pages = {'': ''}\n\n# If true, the reST sources are included in the HTML build as _sources/<name>.\nhtml_copy_source = False\n\nhtml_use_opensearch = 'http://networkx.github.io'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'NetworkX'\n\n# Options for LaTeX output\n# ------------------------\n\n# The paper size ('letter' or 'a4').\nlatex_paper_size = 'letter'\n\n# The font size ('10pt', '11pt' or '12pt').\n#latex_font_size = '10pt'\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, document class [howto/manual]).\nlatex_documents = [('reference/index', 'networkx_reference.tex',\n 'NetworkX Reference',\n 'Aric Hagberg, Dan Schult, Pieter Swart', 'manual', 1)]\n\nlatex_appendices = ['tutorial']\n\n# Intersphinx mapping\nintersphinx_mapping = {'https://docs.python.org/2/': None,\n 'https://docs.scipy.org/doc/numpy/': None,\n }\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\ndefault_role = 'obj'\n\nnumpydoc_show_class_members = False\n\n# Add the 'copybutton' javascript, to hide/show the prompt in code\n# examples\ndef setup(app):\n app.add_javascript('copybutton.js')\n", "path": "doc/conf.py"}]} | 2,617 | 101 |
gh_patches_debug_37486 | rasdani/github-patches | git_diff | PokemonGoF__PokemonGo-Bot-4443 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Create a small random change of position when resuming the Bot after some time off
When the Bot has been stopped for a while (more than X hours, for example), it should not resume at exactly the same position. It is strange to leave the bot off for a few hours and then have it resume at the same previous position.
--- END ISSUE ---
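To make the request concrete, a minimal sketch of the kind of jitter being asked for is shown below; the function and parameter names are hypothetical, not part of the bot's API.

```python
# Hypothetical helper: nudge the saved position by a small random offset
# before resuming, so the bot does not wake up at exactly the same spot.
from random import uniform

def jitter_position(lat, lng, max_offset_deg=0.0005):
    # ~0.0005 degrees of latitude is roughly 50 metres.
    return (lat + uniform(-max_offset_deg, max_offset_deg),
            lng + uniform(-max_offset_deg, max_offset_deg))
```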
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pokemongo_bot/cell_workers/sleep_schedule.py`
Content:
```
1 from datetime import datetime, timedelta
2 from time import sleep
3 from random import uniform
4 from pokemongo_bot.base_task import BaseTask
5
6
7 class SleepSchedule(BaseTask):
8 """Pauses the execution of the bot every day for some time
9
10 Simulates the user going to sleep every day for some time, the sleep time
11 and the duration is changed every day by a random offset defined in the
12 config file
13 Example Config:
14 {
15 "type": "SleepSchedule",
16 "config": {
17 "time": "12:00",
18 "duration":"5:30",
19 "time_random_offset": "00:30",
20 "duration_random_offset": "00:30"
21 }
22 }
23 time: (HH:MM) local time that the bot should sleep
24 duration: (HH:MM) the duration of sleep
25 time_random_offset: (HH:MM) random offset of time that the sleep will start
26 for this example the possible start time is 11:30-12:30
27 duration_random_offset: (HH:MM) random offset of duration of sleep
28 for this example the possible duration is 5:00-6:00
29 """
30 SUPPORTED_TASK_API_VERSION = 1
31
32 LOG_INTERVAL_SECONDS = 600
33 SCHEDULING_MARGIN = timedelta(minutes=10) # Skip if next sleep is RESCHEDULING_MARGIN from now
34
35 def initialize(self):
36 # self.bot.event_manager.register_event('sleeper_scheduled', parameters=('datetime',))
37 self._process_config()
38 self._schedule_next_sleep()
39 self._calculate_current_sleep()
40
41 def work(self):
42 if self._should_sleep_now():
43 self._sleep()
44 self._schedule_next_sleep()
45 self.bot.login()
46
47 def _process_config(self):
48 self.time = datetime.strptime(self.config.get('time', '01:00'), '%H:%M')
49
50 # Using datetime for easier stripping of timedeltas
51 duration = datetime.strptime(self.config.get('duration', '07:00'), '%H:%M')
52 self.duration = int(timedelta(hours=duration.hour, minutes=duration.minute).total_seconds())
53
54 time_random_offset = datetime.strptime(self.config.get('time_random_offset', '01:00'), '%H:%M')
55 self.time_random_offset = int(
56 timedelta(
57 hours=time_random_offset.hour, minutes=time_random_offset.minute).total_seconds())
58
59 duration_random_offset = datetime.strptime(self.config.get('duration_random_offset', '00:30'), '%H:%M')
60 self.duration_random_offset = int(
61 timedelta(
62 hours=duration_random_offset.hour, minutes=duration_random_offset.minute).total_seconds())
63
64 def _schedule_next_sleep(self):
65 self._next_sleep = self._get_next_sleep_schedule()
66 self._next_duration = self._get_next_duration()
67 self.emit_event(
68 'next_sleep',
69 formatted="Next sleep at {time}",
70 data={
71 'time': str(self._next_sleep)
72 }
73 )
74
75 def _calculate_current_sleep(self):
76 self._current_sleep = self._next_sleep - timedelta(days=1)
77 current_duration = self._get_next_duration()
78 self._current_end = self._current_sleep + timedelta(seconds = current_duration)
79
80 def _should_sleep_now(self):
81 if datetime.now() >= self._next_sleep:
82 return True
83 if datetime.now() >= self._current_sleep and datetime.now() < self._current_end:
84 self._next_duration = (self._current_end - datetime.now()).total_seconds()
85 return True
86
87 return False
88
89 def _get_next_sleep_schedule(self):
90 now = datetime.now() + self.SCHEDULING_MARGIN
91 next_time = now.replace(hour=self.time.hour, minute=self.time.minute)
92
93 next_time += timedelta(seconds=self._get_random_offset(self.time_random_offset))
94
95 # If sleep time is passed add one day
96 if next_time <= now:
97 next_time += timedelta(days=1)
98
99 return next_time
100
101 def _get_next_duration(self):
102 duration = self.duration + self._get_random_offset(self.duration_random_offset)
103 return duration
104
105 def _get_random_offset(self, max_offset):
106 offset = uniform(-max_offset, max_offset)
107 return int(offset)
108
109 def _sleep(self):
110 sleep_to_go = self._next_duration
111
112 sleep_m, sleep_s = divmod(sleep_to_go, 60)
113 sleep_h, sleep_m = divmod(sleep_m, 60)
114 sleep_hms = '%02d:%02d:%02d' % (sleep_h, sleep_m, sleep_s)
115
116 now = datetime.now()
117 wake = str(now + timedelta(seconds=sleep_to_go))
118
119 self.emit_event(
120 'bot_sleep',
121 formatted="Sleeping for {time_hms}, wake at {wake}",
122 data={
123 'time_hms': sleep_hms,
124 'wake': wake
125 }
126 )
127 while sleep_to_go > 0:
128 if sleep_to_go < self.LOG_INTERVAL_SECONDS:
129 sleep(sleep_to_go)
130 sleep_to_go = 0
131 else:
132 sleep(self.LOG_INTERVAL_SECONDS)
133 sleep_to_go -= self.LOG_INTERVAL_SECONDS
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pokemongo_bot/cell_workers/sleep_schedule.py b/pokemongo_bot/cell_workers/sleep_schedule.py
--- a/pokemongo_bot/cell_workers/sleep_schedule.py
+++ b/pokemongo_bot/cell_workers/sleep_schedule.py
@@ -18,6 +18,7 @@
"duration":"5:30",
"time_random_offset": "00:30",
"duration_random_offset": "00:30"
+ "wake_up_at_location": ""
}
}
time: (HH:MM) local time that the bot should sleep
@@ -26,7 +27,8 @@
for this example the possible start time is 11:30-12:30
duration_random_offset: (HH:MM) random offset of duration of sleep
for this example the possible duration is 5:00-6:00
- """
+ wake_up_at_location: (lat, long | lat, long, alt | "") the location at which the bot wake up
+ *Note that an empty string ("") will not change the location*. """
SUPPORTED_TASK_API_VERSION = 1
LOG_INTERVAL_SECONDS = 600
@@ -42,6 +44,9 @@
if self._should_sleep_now():
self._sleep()
self._schedule_next_sleep()
+ wake_up_at_location = self.config.get("wake_up_at_location", "")
+ if wake_up_at_location:
+ self.bot.api.set_position(self.wake_up_at_location[0],self.wake_up_at_location[1],self.wake_up_at_location[2])
self.bot.login()
def _process_config(self):
@@ -60,6 +65,21 @@
self.duration_random_offset = int(
timedelta(
hours=duration_random_offset.hour, minutes=duration_random_offset.minute).total_seconds())
+
+ wake_up_at_location = self.config.get("wake_up_at_location", "")
+ if wake_up_at_location:
+ try:
+ wake_up_at_location = wake_up_at_location.split(',',2)
+ lat=float(wake_up_at_location[0])
+ lng=float(wake_up_at_location[1])
+ if len(wake_up_at_location) == 3:
+ alt=float(wake_up_at_location[2])
+ else:
+ alt = uniform(self.bot.config.alt_min, self.bot.config.alt_max)
+ except ValueError:
+ raise ValueError('SleepSchedule wake_up_at_location, parsing error in location') #TODO there must be a more elegant way to do it...
+
+ self.wake_up_at_location = [lat, lng, alt]
def _schedule_next_sleep(self):
self._next_sleep = self._get_next_sleep_schedule()
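For reference, a task config exercising the new key might look like the following; the key names come from the patch, while the coordinate values are placeholders.

```python
# Illustrative SleepSchedule config using the wake_up_at_location key added
# by the patch. Latitude/longitude below are placeholders.
sleep_schedule_config = {
    "type": "SleepSchedule",
    "config": {
        "time": "23:00",
        "duration": "7:00",
        "time_random_offset": "00:30",
        "duration_random_offset": "00:30",
        # "lat, lng" or "lat, lng, alt"; an empty string keeps the old position
        "wake_up_at_location": "40.7128, -74.0060",
    },
}
```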
| {"golden_diff": "diff --git a/pokemongo_bot/cell_workers/sleep_schedule.py b/pokemongo_bot/cell_workers/sleep_schedule.py\n--- a/pokemongo_bot/cell_workers/sleep_schedule.py\n+++ b/pokemongo_bot/cell_workers/sleep_schedule.py\n@@ -18,6 +18,7 @@\n \"duration\":\"5:30\",\n \"time_random_offset\": \"00:30\",\n \"duration_random_offset\": \"00:30\"\n+ \"wake_up_at_location\": \"\"\n }\n }\n time: (HH:MM) local time that the bot should sleep\n@@ -26,7 +27,8 @@\n for this example the possible start time is 11:30-12:30\n duration_random_offset: (HH:MM) random offset of duration of sleep\n for this example the possible duration is 5:00-6:00\n- \"\"\"\n+ wake_up_at_location: (lat, long | lat, long, alt | \"\") the location at which the bot wake up \n+ *Note that an empty string (\"\") will not change the location*. \"\"\"\n SUPPORTED_TASK_API_VERSION = 1\n \n LOG_INTERVAL_SECONDS = 600\n@@ -42,6 +44,9 @@\n if self._should_sleep_now():\n self._sleep()\n self._schedule_next_sleep()\n+ wake_up_at_location = self.config.get(\"wake_up_at_location\", \"\")\n+ if wake_up_at_location:\n+ self.bot.api.set_position(self.wake_up_at_location[0],self.wake_up_at_location[1],self.wake_up_at_location[2])\n self.bot.login()\n \n def _process_config(self):\n@@ -60,6 +65,21 @@\n self.duration_random_offset = int(\n timedelta(\n hours=duration_random_offset.hour, minutes=duration_random_offset.minute).total_seconds())\n+ \n+ wake_up_at_location = self.config.get(\"wake_up_at_location\", \"\")\n+ if wake_up_at_location:\n+ try:\n+ wake_up_at_location = wake_up_at_location.split(',',2) \n+ lat=float(wake_up_at_location[0])\n+ lng=float(wake_up_at_location[1])\n+ if len(wake_up_at_location) == 3:\n+ alt=float(wake_up_at_location[2])\n+ else:\n+ alt = uniform(self.bot.config.alt_min, self.bot.config.alt_max)\n+ except ValueError:\n+ raise ValueError('SleepSchedule wake_up_at_location, parsing error in location') #TODO there must be a more elegant way to do it...\n+\n+ self.wake_up_at_location = [lat, lng, alt]\n \n def _schedule_next_sleep(self):\n self._next_sleep = self._get_next_sleep_schedule()\n", "issue": "Create small random change of position to resume the Bot after some time off\nWhen you stop the Bot (by more than X hours for example) does not return the exact same position. 
It is strange to stay a few hours without running the bot and it resumes in the same previous position.\n\n", "before_files": [{"content": "from datetime import datetime, timedelta\nfrom time import sleep\nfrom random import uniform\nfrom pokemongo_bot.base_task import BaseTask\n\n\nclass SleepSchedule(BaseTask):\n \"\"\"Pauses the execution of the bot every day for some time\n\n Simulates the user going to sleep every day for some time, the sleep time\n and the duration is changed every day by a random offset defined in the\n config file\n Example Config:\n {\n \"type\": \"SleepSchedule\",\n \"config\": {\n \"time\": \"12:00\",\n \"duration\":\"5:30\",\n \"time_random_offset\": \"00:30\",\n \"duration_random_offset\": \"00:30\"\n }\n }\n time: (HH:MM) local time that the bot should sleep\n duration: (HH:MM) the duration of sleep\n time_random_offset: (HH:MM) random offset of time that the sleep will start\n for this example the possible start time is 11:30-12:30\n duration_random_offset: (HH:MM) random offset of duration of sleep\n for this example the possible duration is 5:00-6:00\n \"\"\"\n SUPPORTED_TASK_API_VERSION = 1\n\n LOG_INTERVAL_SECONDS = 600\n SCHEDULING_MARGIN = timedelta(minutes=10) # Skip if next sleep is RESCHEDULING_MARGIN from now\n\n def initialize(self):\n # self.bot.event_manager.register_event('sleeper_scheduled', parameters=('datetime',))\n self._process_config()\n self._schedule_next_sleep()\n self._calculate_current_sleep()\n\n def work(self):\n if self._should_sleep_now():\n self._sleep()\n self._schedule_next_sleep()\n self.bot.login()\n\n def _process_config(self):\n self.time = datetime.strptime(self.config.get('time', '01:00'), '%H:%M')\n\n # Using datetime for easier stripping of timedeltas\n duration = datetime.strptime(self.config.get('duration', '07:00'), '%H:%M')\n self.duration = int(timedelta(hours=duration.hour, minutes=duration.minute).total_seconds())\n\n time_random_offset = datetime.strptime(self.config.get('time_random_offset', '01:00'), '%H:%M')\n self.time_random_offset = int(\n timedelta(\n hours=time_random_offset.hour, minutes=time_random_offset.minute).total_seconds())\n\n duration_random_offset = datetime.strptime(self.config.get('duration_random_offset', '00:30'), '%H:%M')\n self.duration_random_offset = int(\n timedelta(\n hours=duration_random_offset.hour, minutes=duration_random_offset.minute).total_seconds())\n\n def _schedule_next_sleep(self):\n self._next_sleep = self._get_next_sleep_schedule()\n self._next_duration = self._get_next_duration()\n self.emit_event(\n 'next_sleep',\n formatted=\"Next sleep at {time}\",\n data={\n 'time': str(self._next_sleep)\n }\n )\n\n def _calculate_current_sleep(self):\n self._current_sleep = self._next_sleep - timedelta(days=1)\n current_duration = self._get_next_duration()\n self._current_end = self._current_sleep + timedelta(seconds = current_duration)\n\n def _should_sleep_now(self):\n if datetime.now() >= self._next_sleep:\n return True\n if datetime.now() >= self._current_sleep and datetime.now() < self._current_end:\n self._next_duration = (self._current_end - datetime.now()).total_seconds()\n return True\n\n return False\n\n def _get_next_sleep_schedule(self):\n now = datetime.now() + self.SCHEDULING_MARGIN\n next_time = now.replace(hour=self.time.hour, minute=self.time.minute)\n\n next_time += timedelta(seconds=self._get_random_offset(self.time_random_offset))\n\n # If sleep time is passed add one day\n if next_time <= now:\n next_time += timedelta(days=1)\n\n return next_time\n\n def 
_get_next_duration(self):\n duration = self.duration + self._get_random_offset(self.duration_random_offset)\n return duration\n\n def _get_random_offset(self, max_offset):\n offset = uniform(-max_offset, max_offset)\n return int(offset)\n\n def _sleep(self):\n sleep_to_go = self._next_duration\n\n sleep_m, sleep_s = divmod(sleep_to_go, 60)\n sleep_h, sleep_m = divmod(sleep_m, 60)\n sleep_hms = '%02d:%02d:%02d' % (sleep_h, sleep_m, sleep_s)\n\n now = datetime.now()\n wake = str(now + timedelta(seconds=sleep_to_go))\n\n self.emit_event(\n 'bot_sleep',\n formatted=\"Sleeping for {time_hms}, wake at {wake}\",\n data={\n 'time_hms': sleep_hms,\n 'wake': wake\n }\n )\n while sleep_to_go > 0:\n if sleep_to_go < self.LOG_INTERVAL_SECONDS:\n sleep(sleep_to_go)\n sleep_to_go = 0\n else:\n sleep(self.LOG_INTERVAL_SECONDS)\n sleep_to_go -= self.LOG_INTERVAL_SECONDS\n", "path": "pokemongo_bot/cell_workers/sleep_schedule.py"}], "after_files": [{"content": "from datetime import datetime, timedelta\nfrom time import sleep\nfrom random import uniform\nfrom pokemongo_bot.base_task import BaseTask\n\n\nclass SleepSchedule(BaseTask):\n \"\"\"Pauses the execution of the bot every day for some time\n\n Simulates the user going to sleep every day for some time, the sleep time\n and the duration is changed every day by a random offset defined in the\n config file\n Example Config:\n {\n \"type\": \"SleepSchedule\",\n \"config\": {\n \"time\": \"12:00\",\n \"duration\":\"5:30\",\n \"time_random_offset\": \"00:30\",\n \"duration_random_offset\": \"00:30\"\n \"wake_up_at_location\": \"\"\n }\n }\n time: (HH:MM) local time that the bot should sleep\n duration: (HH:MM) the duration of sleep\n time_random_offset: (HH:MM) random offset of time that the sleep will start\n for this example the possible start time is 11:30-12:30\n duration_random_offset: (HH:MM) random offset of duration of sleep\n for this example the possible duration is 5:00-6:00\n wake_up_at_location: (lat, long | lat, long, alt | \"\") the location at which the bot wake up \n *Note that an empty string (\"\") will not change the location*. 
\"\"\"\n SUPPORTED_TASK_API_VERSION = 1\n\n LOG_INTERVAL_SECONDS = 600\n SCHEDULING_MARGIN = timedelta(minutes=10) # Skip if next sleep is RESCHEDULING_MARGIN from now\n\n def initialize(self):\n # self.bot.event_manager.register_event('sleeper_scheduled', parameters=('datetime',))\n self._process_config()\n self._schedule_next_sleep()\n self._calculate_current_sleep()\n\n def work(self):\n if self._should_sleep_now():\n self._sleep()\n self._schedule_next_sleep()\n wake_up_at_location = self.config.get(\"wake_up_at_location\", \"\")\n if wake_up_at_location:\n self.bot.api.set_position(self.wake_up_at_location[0],self.wake_up_at_location[1],self.wake_up_at_location[2])\n self.bot.login()\n\n def _process_config(self):\n self.time = datetime.strptime(self.config.get('time', '01:00'), '%H:%M')\n\n # Using datetime for easier stripping of timedeltas\n duration = datetime.strptime(self.config.get('duration', '07:00'), '%H:%M')\n self.duration = int(timedelta(hours=duration.hour, minutes=duration.minute).total_seconds())\n\n time_random_offset = datetime.strptime(self.config.get('time_random_offset', '01:00'), '%H:%M')\n self.time_random_offset = int(\n timedelta(\n hours=time_random_offset.hour, minutes=time_random_offset.minute).total_seconds())\n\n duration_random_offset = datetime.strptime(self.config.get('duration_random_offset', '00:30'), '%H:%M')\n self.duration_random_offset = int(\n timedelta(\n hours=duration_random_offset.hour, minutes=duration_random_offset.minute).total_seconds())\n \n wake_up_at_location = self.config.get(\"wake_up_at_location\", \"\")\n if wake_up_at_location:\n try:\n wake_up_at_location = wake_up_at_location.split(',',2) \n lat=float(wake_up_at_location[0])\n lng=float(wake_up_at_location[1])\n if len(wake_up_at_location) == 3:\n alt=float(wake_up_at_location[2])\n else:\n alt = uniform(self.bot.config.alt_min, self.bot.config.alt_max)\n except ValueError:\n raise ValueError('SleepSchedule wake_up_at_location, parsing error in location') #TODO there must be a more elegant way to do it...\n\n self.wake_up_at_location = [lat, lng, alt]\n\n def _schedule_next_sleep(self):\n self._next_sleep = self._get_next_sleep_schedule()\n self._next_duration = self._get_next_duration()\n self.emit_event(\n 'next_sleep',\n formatted=\"Next sleep at {time}\",\n data={\n 'time': str(self._next_sleep)\n }\n )\n\n def _calculate_current_sleep(self):\n self._current_sleep = self._next_sleep - timedelta(days=1)\n current_duration = self._get_next_duration()\n self._current_end = self._current_sleep + timedelta(seconds = current_duration)\n\n def _should_sleep_now(self):\n if datetime.now() >= self._next_sleep:\n return True\n if datetime.now() >= self._current_sleep and datetime.now() < self._current_end:\n self._next_duration = (self._current_end - datetime.now()).total_seconds()\n return True\n\n return False\n\n def _get_next_sleep_schedule(self):\n now = datetime.now() + self.SCHEDULING_MARGIN\n next_time = now.replace(hour=self.time.hour, minute=self.time.minute)\n\n next_time += timedelta(seconds=self._get_random_offset(self.time_random_offset))\n\n # If sleep time is passed add one day\n if next_time <= now:\n next_time += timedelta(days=1)\n\n return next_time\n\n def _get_next_duration(self):\n duration = self.duration + self._get_random_offset(self.duration_random_offset)\n return duration\n\n def _get_random_offset(self, max_offset):\n offset = uniform(-max_offset, max_offset)\n return int(offset)\n\n def _sleep(self):\n sleep_to_go = self._next_duration\n\n sleep_m, 
sleep_s = divmod(sleep_to_go, 60)\n sleep_h, sleep_m = divmod(sleep_m, 60)\n sleep_hms = '%02d:%02d:%02d' % (sleep_h, sleep_m, sleep_s)\n\n now = datetime.now()\n wake = str(now + timedelta(seconds=sleep_to_go))\n\n self.emit_event(\n 'bot_sleep',\n formatted=\"Sleeping for {time_hms}, wake at {wake}\",\n data={\n 'time_hms': sleep_hms,\n 'wake': wake\n }\n )\n while sleep_to_go > 0:\n if sleep_to_go < self.LOG_INTERVAL_SECONDS:\n sleep(sleep_to_go)\n sleep_to_go = 0\n else:\n sleep(self.LOG_INTERVAL_SECONDS)\n sleep_to_go -= self.LOG_INTERVAL_SECONDS\n", "path": "pokemongo_bot/cell_workers/sleep_schedule.py"}]} | 1,760 | 613 |
gh_patches_debug_16549 | rasdani/github-patches | git_diff | PaddlePaddle__models-344 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix bugs for model_utils/model.py
Always prints 0 for ValidationCost instead of the actual validation cost.
--- END ISSUE ---
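For orientation before the code listing below: the message comes from the pass-end branch of the training event handler, which interpolates a literal 0 where the validation cost should go. The stand-alone sketch below only illustrates that formatting slip; `FakeResult` is a hypothetical stand-in for whatever `trainer.test()` returns (the accepted patch reads its `cost` attribute).
```python
# Minimal, self-contained illustration of the logging bug (no PaddlePaddle needed).
class FakeResult:
    """Stand-in for the object returned by trainer.test()."""
    cost = 0.42

result = FakeResult()
elapsed, pass_id = 120, 3

# Buggy pattern: a hard-coded 0 is passed to the format string,
# so the log always reads "ValidationCost: 0".
print("------- Time: %d sec, Pass: %d, ValidationCost: %s" % (elapsed, pass_id, 0))

# Intended pattern: report the cost the test run actually produced.
print("------- Time: %d sec, Pass: %d, ValidationCost: %s" % (elapsed, pass_id, result.cost))
```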
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `deep_speech_2/model_utils/model.py`
Content:
```
1 """Contains DeepSpeech2 model."""
2 from __future__ import absolute_import
3 from __future__ import division
4 from __future__ import print_function
5
6 import sys
7 import os
8 import time
9 import logging
10 import gzip
11 from distutils.dir_util import mkpath
12 import paddle.v2 as paddle
13 from decoders.swig_wrapper import Scorer
14 from decoders.swig_wrapper import ctc_greedy_decoder
15 from decoders.swig_wrapper import ctc_beam_search_decoder_batch
16 from model_utils.network import deep_speech_v2_network
17
18 logging.basicConfig(
19 format='[%(levelname)s %(asctime)s %(filename)s:%(lineno)d] %(message)s')
20
21
22 class DeepSpeech2Model(object):
23 """DeepSpeech2Model class.
24
25 :param vocab_size: Decoding vocabulary size.
26 :type vocab_size: int
27 :param num_conv_layers: Number of stacking convolution layers.
28 :type num_conv_layers: int
29 :param num_rnn_layers: Number of stacking RNN layers.
30 :type num_rnn_layers: int
31 :param rnn_layer_size: RNN layer size (number of RNN cells).
32 :type rnn_layer_size: int
33 :param pretrained_model_path: Pretrained model path. If None, will train
34 from stratch.
35 :type pretrained_model_path: basestring|None
36 :param share_rnn_weights: Whether to share input-hidden weights between
37 forward and backward directional RNNs.Notice that
38 for GRU, weight sharing is not supported.
39 :type share_rnn_weights: bool
40 """
41
42 def __init__(self, vocab_size, num_conv_layers, num_rnn_layers,
43 rnn_layer_size, use_gru, pretrained_model_path,
44 share_rnn_weights):
45 self._create_network(vocab_size, num_conv_layers, num_rnn_layers,
46 rnn_layer_size, use_gru, share_rnn_weights)
47 self._create_parameters(pretrained_model_path)
48 self._inferer = None
49 self._loss_inferer = None
50 self._ext_scorer = None
51 self.logger = logging.getLogger("")
52 self.logger.setLevel(level=logging.INFO)
53
54 def train(self,
55 train_batch_reader,
56 dev_batch_reader,
57 feeding_dict,
58 learning_rate,
59 gradient_clipping,
60 num_passes,
61 output_model_dir,
62 is_local=True,
63 num_iterations_print=100,
64 test_off=False):
65 """Train the model.
66
67 :param train_batch_reader: Train data reader.
68 :type train_batch_reader: callable
69 :param dev_batch_reader: Validation data reader.
70 :type dev_batch_reader: callable
71 :param feeding_dict: Feeding is a map of field name and tuple index
72 of the data that reader returns.
73 :type feeding_dict: dict|list
74 :param learning_rate: Learning rate for ADAM optimizer.
75 :type learning_rate: float
76 :param gradient_clipping: Gradient clipping threshold.
77 :type gradient_clipping: float
78 :param num_passes: Number of training epochs.
79 :type num_passes: int
80 :param num_iterations_print: Number of training iterations for printing
81 a training loss.
82 :type rnn_iteratons_print: int
83 :param is_local: Set to False if running with pserver with multi-nodes.
84 :type is_local: bool
85 :param output_model_dir: Directory for saving the model (every pass).
86 :type output_model_dir: basestring
87 :param test_off: Turn off testing.
88 :type test_off: bool
89 """
90 # prepare model output directory
91 if not os.path.exists(output_model_dir):
92 mkpath(output_model_dir)
93
94 # prepare optimizer and trainer
95 optimizer = paddle.optimizer.Adam(
96 learning_rate=learning_rate,
97 gradient_clipping_threshold=gradient_clipping)
98 trainer = paddle.trainer.SGD(
99 cost=self._loss,
100 parameters=self._parameters,
101 update_equation=optimizer,
102 is_local=is_local)
103
104 # create event handler
105 def event_handler(event):
106 global start_time, cost_sum, cost_counter
107 if isinstance(event, paddle.event.EndIteration):
108 cost_sum += event.cost
109 cost_counter += 1
110 if (event.batch_id + 1) % num_iterations_print == 0:
111 output_model_path = os.path.join(output_model_dir,
112 "params.latest.tar.gz")
113 with gzip.open(output_model_path, 'w') as f:
114 self._parameters.to_tar(f)
115 print("\nPass: %d, Batch: %d, TrainCost: %f" %
116 (event.pass_id, event.batch_id + 1,
117 cost_sum / cost_counter))
118 cost_sum, cost_counter = 0.0, 0
119 else:
120 sys.stdout.write('.')
121 sys.stdout.flush()
122 if isinstance(event, paddle.event.BeginPass):
123 start_time = time.time()
124 cost_sum, cost_counter = 0.0, 0
125 if isinstance(event, paddle.event.EndPass):
126 if test_off:
127 print("\n------- Time: %d sec, Pass: %d" %
128 (time.time() - start_time, event.pass_id))
129 else:
130 result = trainer.test(
131 reader=dev_batch_reader, feeding=feeding_dict)
132 print("\n------- Time: %d sec, Pass: %d, "
133 "ValidationCost: %s" %
134 (time.time() - start_time, event.pass_id, 0))
135 output_model_path = os.path.join(
136 output_model_dir, "params.pass-%d.tar.gz" % event.pass_id)
137 with gzip.open(output_model_path, 'w') as f:
138 self._parameters.to_tar(f)
139
140 # run train
141 trainer.train(
142 reader=train_batch_reader,
143 event_handler=event_handler,
144 num_passes=num_passes,
145 feeding=feeding_dict)
146
147 def infer_loss_batch(self, infer_data):
148 """Model inference. Infer the ctc loss for a batch of speech
149 utterances.
150
151 :param infer_data: List of utterances to infer, with each utterance a
152 tuple of audio features and transcription text (empty
153 string).
154 :type infer_data: list
155 :return: List of ctc loss.
156 :rtype: List of float
157 """
158 # define inferer
159 if self._loss_inferer == None:
160 self._loss_inferer = paddle.inference.Inference(
161 output_layer=self._loss, parameters=self._parameters)
162 # run inference
163 return self._loss_inferer.infer(input=infer_data)
164
165 def infer_batch(self, infer_data, decoding_method, beam_alpha, beam_beta,
166 beam_size, cutoff_prob, cutoff_top_n, vocab_list,
167 language_model_path, num_processes):
168 """Model inference. Infer the transcription for a batch of speech
169 utterances.
170
171 :param infer_data: List of utterances to infer, with each utterance
172 consisting of a tuple of audio features and
173 transcription text (empty string).
174 :type infer_data: list
175 :param decoding_method: Decoding method name, 'ctc_greedy' or
176 'ctc_beam_search'.
177 :param decoding_method: string
178 :param beam_alpha: Parameter associated with language model.
179 :type beam_alpha: float
180 :param beam_beta: Parameter associated with word count.
181 :type beam_beta: float
182 :param beam_size: Width for Beam search.
183 :type beam_size: int
184 :param cutoff_prob: Cutoff probability in pruning,
185 default 1.0, no pruning.
186 :type cutoff_prob: float
187 :param cutoff_top_n: Cutoff number in pruning, only top cutoff_top_n
188 characters with highest probs in vocabulary will be
189 used in beam search, default 40.
190 :type cutoff_top_n: int
191 :param vocab_list: List of tokens in the vocabulary, for decoding.
192 :type vocab_list: list
193 :param language_model_path: Filepath for language model.
194 :type language_model_path: basestring|None
195 :param num_processes: Number of processes (CPU) for decoder.
196 :type num_processes: int
197 :return: List of transcription texts.
198 :rtype: List of basestring
199 """
200 # define inferer
201 if self._inferer == None:
202 self._inferer = paddle.inference.Inference(
203 output_layer=self._log_probs, parameters=self._parameters)
204 # run inference
205 infer_results = self._inferer.infer(input=infer_data)
206 num_steps = len(infer_results) // len(infer_data)
207 probs_split = [
208 infer_results[i * num_steps:(i + 1) * num_steps]
209 for i in xrange(0, len(infer_data))
210 ]
211 # run decoder
212 results = []
213 if decoding_method == "ctc_greedy":
214 # best path decode
215 for i, probs in enumerate(probs_split):
216 output_transcription = ctc_greedy_decoder(
217 probs_seq=probs, vocabulary=vocab_list)
218 results.append(output_transcription)
219 elif decoding_method == "ctc_beam_search":
220 # initialize external scorer
221 if self._ext_scorer == None:
222 self._loaded_lm_path = language_model_path
223 self.logger.info("begin to initialize the external scorer "
224 "for decoding")
225 self._ext_scorer = Scorer(beam_alpha, beam_beta,
226 language_model_path, vocab_list)
227
228 lm_char_based = self._ext_scorer.is_character_based()
229 lm_max_order = self._ext_scorer.get_max_order()
230 lm_dict_size = self._ext_scorer.get_dict_size()
231 self.logger.info("language model: "
232 "is_character_based = %d," % lm_char_based +
233 " max_order = %d," % lm_max_order +
234 " dict_size = %d" % lm_dict_size)
235 self.logger.info("end initializing scorer. Start decoding ...")
236 else:
237 self._ext_scorer.reset_params(beam_alpha, beam_beta)
238 assert self._loaded_lm_path == language_model_path
239 # beam search decode
240 num_processes = min(num_processes, len(probs_split))
241 beam_search_results = ctc_beam_search_decoder_batch(
242 probs_split=probs_split,
243 vocabulary=vocab_list,
244 beam_size=beam_size,
245 num_processes=num_processes,
246 ext_scoring_func=self._ext_scorer,
247 cutoff_prob=cutoff_prob,
248 cutoff_top_n=cutoff_top_n)
249
250 results = [result[0][1] for result in beam_search_results]
251 else:
252 raise ValueError("Decoding method [%s] is not supported." %
253 decoding_method)
254 return results
255
256 def _create_parameters(self, model_path=None):
257 """Load or create model parameters."""
258 if model_path is None:
259 self._parameters = paddle.parameters.create(self._loss)
260 else:
261 self._parameters = paddle.parameters.Parameters.from_tar(
262 gzip.open(model_path))
263
264 def _create_network(self, vocab_size, num_conv_layers, num_rnn_layers,
265 rnn_layer_size, use_gru, share_rnn_weights):
266 """Create data layers and model network."""
267 # paddle.data_type.dense_array is used for variable batch input.
268 # The size 161 * 161 is only an placeholder value and the real shape
269 # of input batch data will be induced during training.
270 audio_data = paddle.layer.data(
271 name="audio_spectrogram",
272 type=paddle.data_type.dense_array(161 * 161))
273 text_data = paddle.layer.data(
274 name="transcript_text",
275 type=paddle.data_type.integer_value_sequence(vocab_size))
276 self._log_probs, self._loss = deep_speech_v2_network(
277 audio_data=audio_data,
278 text_data=text_data,
279 dict_size=vocab_size,
280 num_conv_layers=num_conv_layers,
281 num_rnn_layers=num_rnn_layers,
282 rnn_size=rnn_layer_size,
283 use_gru=use_gru,
284 share_rnn_weights=share_rnn_weights)
285
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/deep_speech_2/model_utils/model.py b/deep_speech_2/model_utils/model.py
--- a/deep_speech_2/model_utils/model.py
+++ b/deep_speech_2/model_utils/model.py
@@ -129,9 +129,10 @@
else:
result = trainer.test(
reader=dev_batch_reader, feeding=feeding_dict)
- print("\n------- Time: %d sec, Pass: %d, "
- "ValidationCost: %s" %
- (time.time() - start_time, event.pass_id, 0))
+ print(
+ "\n------- Time: %d sec, Pass: %d, "
+ "ValidationCost: %s" %
+ (time.time() - start_time, event.pass_id, result.cost))
output_model_path = os.path.join(
output_model_dir, "params.pass-%d.tar.gz" % event.pass_id)
with gzip.open(output_model_path, 'w') as f:
| {"golden_diff": "diff --git a/deep_speech_2/model_utils/model.py b/deep_speech_2/model_utils/model.py\n--- a/deep_speech_2/model_utils/model.py\n+++ b/deep_speech_2/model_utils/model.py\n@@ -129,9 +129,10 @@\n else:\n result = trainer.test(\n reader=dev_batch_reader, feeding=feeding_dict)\n- print(\"\\n------- Time: %d sec, Pass: %d, \"\n- \"ValidationCost: %s\" %\n- (time.time() - start_time, event.pass_id, 0))\n+ print(\n+ \"\\n------- Time: %d sec, Pass: %d, \"\n+ \"ValidationCost: %s\" %\n+ (time.time() - start_time, event.pass_id, result.cost))\n output_model_path = os.path.join(\n output_model_dir, \"params.pass-%d.tar.gz\" % event.pass_id)\n with gzip.open(output_model_path, 'w') as f:\n", "issue": "Fix bugs for model_utils/model.py\nAlways print 0 for ValidataionCost\n", "before_files": [{"content": "\"\"\"Contains DeepSpeech2 model.\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport sys\nimport os\nimport time\nimport logging\nimport gzip\nfrom distutils.dir_util import mkpath\nimport paddle.v2 as paddle\nfrom decoders.swig_wrapper import Scorer\nfrom decoders.swig_wrapper import ctc_greedy_decoder\nfrom decoders.swig_wrapper import ctc_beam_search_decoder_batch\nfrom model_utils.network import deep_speech_v2_network\n\nlogging.basicConfig(\n format='[%(levelname)s %(asctime)s %(filename)s:%(lineno)d] %(message)s')\n\n\nclass DeepSpeech2Model(object):\n \"\"\"DeepSpeech2Model class.\n\n :param vocab_size: Decoding vocabulary size.\n :type vocab_size: int\n :param num_conv_layers: Number of stacking convolution layers.\n :type num_conv_layers: int\n :param num_rnn_layers: Number of stacking RNN layers.\n :type num_rnn_layers: int\n :param rnn_layer_size: RNN layer size (number of RNN cells).\n :type rnn_layer_size: int\n :param pretrained_model_path: Pretrained model path. 
If None, will train\n from stratch.\n :type pretrained_model_path: basestring|None\n :param share_rnn_weights: Whether to share input-hidden weights between\n forward and backward directional RNNs.Notice that\n for GRU, weight sharing is not supported.\n :type share_rnn_weights: bool\n \"\"\"\n\n def __init__(self, vocab_size, num_conv_layers, num_rnn_layers,\n rnn_layer_size, use_gru, pretrained_model_path,\n share_rnn_weights):\n self._create_network(vocab_size, num_conv_layers, num_rnn_layers,\n rnn_layer_size, use_gru, share_rnn_weights)\n self._create_parameters(pretrained_model_path)\n self._inferer = None\n self._loss_inferer = None\n self._ext_scorer = None\n self.logger = logging.getLogger(\"\")\n self.logger.setLevel(level=logging.INFO)\n\n def train(self,\n train_batch_reader,\n dev_batch_reader,\n feeding_dict,\n learning_rate,\n gradient_clipping,\n num_passes,\n output_model_dir,\n is_local=True,\n num_iterations_print=100,\n test_off=False):\n \"\"\"Train the model.\n\n :param train_batch_reader: Train data reader.\n :type train_batch_reader: callable\n :param dev_batch_reader: Validation data reader.\n :type dev_batch_reader: callable\n :param feeding_dict: Feeding is a map of field name and tuple index\n of the data that reader returns.\n :type feeding_dict: dict|list\n :param learning_rate: Learning rate for ADAM optimizer.\n :type learning_rate: float\n :param gradient_clipping: Gradient clipping threshold.\n :type gradient_clipping: float\n :param num_passes: Number of training epochs.\n :type num_passes: int\n :param num_iterations_print: Number of training iterations for printing\n a training loss.\n :type rnn_iteratons_print: int\n :param is_local: Set to False if running with pserver with multi-nodes.\n :type is_local: bool\n :param output_model_dir: Directory for saving the model (every pass).\n :type output_model_dir: basestring\n :param test_off: Turn off testing.\n :type test_off: bool\n \"\"\"\n # prepare model output directory\n if not os.path.exists(output_model_dir):\n mkpath(output_model_dir)\n\n # prepare optimizer and trainer\n optimizer = paddle.optimizer.Adam(\n learning_rate=learning_rate,\n gradient_clipping_threshold=gradient_clipping)\n trainer = paddle.trainer.SGD(\n cost=self._loss,\n parameters=self._parameters,\n update_equation=optimizer,\n is_local=is_local)\n\n # create event handler\n def event_handler(event):\n global start_time, cost_sum, cost_counter\n if isinstance(event, paddle.event.EndIteration):\n cost_sum += event.cost\n cost_counter += 1\n if (event.batch_id + 1) % num_iterations_print == 0:\n output_model_path = os.path.join(output_model_dir,\n \"params.latest.tar.gz\")\n with gzip.open(output_model_path, 'w') as f:\n self._parameters.to_tar(f)\n print(\"\\nPass: %d, Batch: %d, TrainCost: %f\" %\n (event.pass_id, event.batch_id + 1,\n cost_sum / cost_counter))\n cost_sum, cost_counter = 0.0, 0\n else:\n sys.stdout.write('.')\n sys.stdout.flush()\n if isinstance(event, paddle.event.BeginPass):\n start_time = time.time()\n cost_sum, cost_counter = 0.0, 0\n if isinstance(event, paddle.event.EndPass):\n if test_off:\n print(\"\\n------- Time: %d sec, Pass: %d\" %\n (time.time() - start_time, event.pass_id))\n else:\n result = trainer.test(\n reader=dev_batch_reader, feeding=feeding_dict)\n print(\"\\n------- Time: %d sec, Pass: %d, \"\n \"ValidationCost: %s\" %\n (time.time() - start_time, event.pass_id, 0))\n output_model_path = os.path.join(\n output_model_dir, \"params.pass-%d.tar.gz\" % event.pass_id)\n with 
gzip.open(output_model_path, 'w') as f:\n self._parameters.to_tar(f)\n\n # run train\n trainer.train(\n reader=train_batch_reader,\n event_handler=event_handler,\n num_passes=num_passes,\n feeding=feeding_dict)\n\n def infer_loss_batch(self, infer_data):\n \"\"\"Model inference. Infer the ctc loss for a batch of speech\n utterances.\n\n :param infer_data: List of utterances to infer, with each utterance a\n tuple of audio features and transcription text (empty\n string).\n :type infer_data: list\n :return: List of ctc loss.\n :rtype: List of float\n \"\"\"\n # define inferer\n if self._loss_inferer == None:\n self._loss_inferer = paddle.inference.Inference(\n output_layer=self._loss, parameters=self._parameters)\n # run inference\n return self._loss_inferer.infer(input=infer_data)\n\n def infer_batch(self, infer_data, decoding_method, beam_alpha, beam_beta,\n beam_size, cutoff_prob, cutoff_top_n, vocab_list,\n language_model_path, num_processes):\n \"\"\"Model inference. Infer the transcription for a batch of speech\n utterances.\n\n :param infer_data: List of utterances to infer, with each utterance\n consisting of a tuple of audio features and\n transcription text (empty string).\n :type infer_data: list\n :param decoding_method: Decoding method name, 'ctc_greedy' or\n 'ctc_beam_search'.\n :param decoding_method: string\n :param beam_alpha: Parameter associated with language model.\n :type beam_alpha: float\n :param beam_beta: Parameter associated with word count.\n :type beam_beta: float\n :param beam_size: Width for Beam search.\n :type beam_size: int\n :param cutoff_prob: Cutoff probability in pruning,\n default 1.0, no pruning.\n :type cutoff_prob: float\n :param cutoff_top_n: Cutoff number in pruning, only top cutoff_top_n\n characters with highest probs in vocabulary will be\n used in beam search, default 40.\n :type cutoff_top_n: int\n :param vocab_list: List of tokens in the vocabulary, for decoding.\n :type vocab_list: list\n :param language_model_path: Filepath for language model.\n :type language_model_path: basestring|None\n :param num_processes: Number of processes (CPU) for decoder.\n :type num_processes: int\n :return: List of transcription texts.\n :rtype: List of basestring\n \"\"\"\n # define inferer\n if self._inferer == None:\n self._inferer = paddle.inference.Inference(\n output_layer=self._log_probs, parameters=self._parameters)\n # run inference\n infer_results = self._inferer.infer(input=infer_data)\n num_steps = len(infer_results) // len(infer_data)\n probs_split = [\n infer_results[i * num_steps:(i + 1) * num_steps]\n for i in xrange(0, len(infer_data))\n ]\n # run decoder\n results = []\n if decoding_method == \"ctc_greedy\":\n # best path decode\n for i, probs in enumerate(probs_split):\n output_transcription = ctc_greedy_decoder(\n probs_seq=probs, vocabulary=vocab_list)\n results.append(output_transcription)\n elif decoding_method == \"ctc_beam_search\":\n # initialize external scorer\n if self._ext_scorer == None:\n self._loaded_lm_path = language_model_path\n self.logger.info(\"begin to initialize the external scorer \"\n \"for decoding\")\n self._ext_scorer = Scorer(beam_alpha, beam_beta,\n language_model_path, vocab_list)\n\n lm_char_based = self._ext_scorer.is_character_based()\n lm_max_order = self._ext_scorer.get_max_order()\n lm_dict_size = self._ext_scorer.get_dict_size()\n self.logger.info(\"language model: \"\n \"is_character_based = %d,\" % lm_char_based +\n \" max_order = %d,\" % lm_max_order +\n \" dict_size = %d\" % lm_dict_size)\n 
self.logger.info(\"end initializing scorer. Start decoding ...\")\n else:\n self._ext_scorer.reset_params(beam_alpha, beam_beta)\n assert self._loaded_lm_path == language_model_path\n # beam search decode\n num_processes = min(num_processes, len(probs_split))\n beam_search_results = ctc_beam_search_decoder_batch(\n probs_split=probs_split,\n vocabulary=vocab_list,\n beam_size=beam_size,\n num_processes=num_processes,\n ext_scoring_func=self._ext_scorer,\n cutoff_prob=cutoff_prob,\n cutoff_top_n=cutoff_top_n)\n\n results = [result[0][1] for result in beam_search_results]\n else:\n raise ValueError(\"Decoding method [%s] is not supported.\" %\n decoding_method)\n return results\n\n def _create_parameters(self, model_path=None):\n \"\"\"Load or create model parameters.\"\"\"\n if model_path is None:\n self._parameters = paddle.parameters.create(self._loss)\n else:\n self._parameters = paddle.parameters.Parameters.from_tar(\n gzip.open(model_path))\n\n def _create_network(self, vocab_size, num_conv_layers, num_rnn_layers,\n rnn_layer_size, use_gru, share_rnn_weights):\n \"\"\"Create data layers and model network.\"\"\"\n # paddle.data_type.dense_array is used for variable batch input.\n # The size 161 * 161 is only an placeholder value and the real shape\n # of input batch data will be induced during training.\n audio_data = paddle.layer.data(\n name=\"audio_spectrogram\",\n type=paddle.data_type.dense_array(161 * 161))\n text_data = paddle.layer.data(\n name=\"transcript_text\",\n type=paddle.data_type.integer_value_sequence(vocab_size))\n self._log_probs, self._loss = deep_speech_v2_network(\n audio_data=audio_data,\n text_data=text_data,\n dict_size=vocab_size,\n num_conv_layers=num_conv_layers,\n num_rnn_layers=num_rnn_layers,\n rnn_size=rnn_layer_size,\n use_gru=use_gru,\n share_rnn_weights=share_rnn_weights)\n", "path": "deep_speech_2/model_utils/model.py"}], "after_files": [{"content": "\"\"\"Contains DeepSpeech2 model.\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport sys\nimport os\nimport time\nimport logging\nimport gzip\nfrom distutils.dir_util import mkpath\nimport paddle.v2 as paddle\nfrom decoders.swig_wrapper import Scorer\nfrom decoders.swig_wrapper import ctc_greedy_decoder\nfrom decoders.swig_wrapper import ctc_beam_search_decoder_batch\nfrom model_utils.network import deep_speech_v2_network\n\nlogging.basicConfig(\n format='[%(levelname)s %(asctime)s %(filename)s:%(lineno)d] %(message)s')\n\n\nclass DeepSpeech2Model(object):\n \"\"\"DeepSpeech2Model class.\n\n :param vocab_size: Decoding vocabulary size.\n :type vocab_size: int\n :param num_conv_layers: Number of stacking convolution layers.\n :type num_conv_layers: int\n :param num_rnn_layers: Number of stacking RNN layers.\n :type num_rnn_layers: int\n :param rnn_layer_size: RNN layer size (number of RNN cells).\n :type rnn_layer_size: int\n :param pretrained_model_path: Pretrained model path. 
If None, will train\n from stratch.\n :type pretrained_model_path: basestring|None\n :param share_rnn_weights: Whether to share input-hidden weights between\n forward and backward directional RNNs.Notice that\n for GRU, weight sharing is not supported.\n :type share_rnn_weights: bool\n \"\"\"\n\n def __init__(self, vocab_size, num_conv_layers, num_rnn_layers,\n rnn_layer_size, use_gru, pretrained_model_path,\n share_rnn_weights):\n self._create_network(vocab_size, num_conv_layers, num_rnn_layers,\n rnn_layer_size, use_gru, share_rnn_weights)\n self._create_parameters(pretrained_model_path)\n self._inferer = None\n self._loss_inferer = None\n self._ext_scorer = None\n self.logger = logging.getLogger(\"\")\n self.logger.setLevel(level=logging.INFO)\n\n def train(self,\n train_batch_reader,\n dev_batch_reader,\n feeding_dict,\n learning_rate,\n gradient_clipping,\n num_passes,\n output_model_dir,\n is_local=True,\n num_iterations_print=100,\n test_off=False):\n \"\"\"Train the model.\n\n :param train_batch_reader: Train data reader.\n :type train_batch_reader: callable\n :param dev_batch_reader: Validation data reader.\n :type dev_batch_reader: callable\n :param feeding_dict: Feeding is a map of field name and tuple index\n of the data that reader returns.\n :type feeding_dict: dict|list\n :param learning_rate: Learning rate for ADAM optimizer.\n :type learning_rate: float\n :param gradient_clipping: Gradient clipping threshold.\n :type gradient_clipping: float\n :param num_passes: Number of training epochs.\n :type num_passes: int\n :param num_iterations_print: Number of training iterations for printing\n a training loss.\n :type rnn_iteratons_print: int\n :param is_local: Set to False if running with pserver with multi-nodes.\n :type is_local: bool\n :param output_model_dir: Directory for saving the model (every pass).\n :type output_model_dir: basestring\n :param test_off: Turn off testing.\n :type test_off: bool\n \"\"\"\n # prepare model output directory\n if not os.path.exists(output_model_dir):\n mkpath(output_model_dir)\n\n # prepare optimizer and trainer\n optimizer = paddle.optimizer.Adam(\n learning_rate=learning_rate,\n gradient_clipping_threshold=gradient_clipping)\n trainer = paddle.trainer.SGD(\n cost=self._loss,\n parameters=self._parameters,\n update_equation=optimizer,\n is_local=is_local)\n\n # create event handler\n def event_handler(event):\n global start_time, cost_sum, cost_counter\n if isinstance(event, paddle.event.EndIteration):\n cost_sum += event.cost\n cost_counter += 1\n if (event.batch_id + 1) % num_iterations_print == 0:\n output_model_path = os.path.join(output_model_dir,\n \"params.latest.tar.gz\")\n with gzip.open(output_model_path, 'w') as f:\n self._parameters.to_tar(f)\n print(\"\\nPass: %d, Batch: %d, TrainCost: %f\" %\n (event.pass_id, event.batch_id + 1,\n cost_sum / cost_counter))\n cost_sum, cost_counter = 0.0, 0\n else:\n sys.stdout.write('.')\n sys.stdout.flush()\n if isinstance(event, paddle.event.BeginPass):\n start_time = time.time()\n cost_sum, cost_counter = 0.0, 0\n if isinstance(event, paddle.event.EndPass):\n if test_off:\n print(\"\\n------- Time: %d sec, Pass: %d\" %\n (time.time() - start_time, event.pass_id))\n else:\n result = trainer.test(\n reader=dev_batch_reader, feeding=feeding_dict)\n print(\n \"\\n------- Time: %d sec, Pass: %d, \"\n \"ValidationCost: %s\" %\n (time.time() - start_time, event.pass_id, result.cost))\n output_model_path = os.path.join(\n output_model_dir, \"params.pass-%d.tar.gz\" % event.pass_id)\n with 
gzip.open(output_model_path, 'w') as f:\n self._parameters.to_tar(f)\n\n # run train\n trainer.train(\n reader=train_batch_reader,\n event_handler=event_handler,\n num_passes=num_passes,\n feeding=feeding_dict)\n\n def infer_loss_batch(self, infer_data):\n \"\"\"Model inference. Infer the ctc loss for a batch of speech\n utterances.\n\n :param infer_data: List of utterances to infer, with each utterance a\n tuple of audio features and transcription text (empty\n string).\n :type infer_data: list\n :return: List of ctc loss.\n :rtype: List of float\n \"\"\"\n # define inferer\n if self._loss_inferer == None:\n self._loss_inferer = paddle.inference.Inference(\n output_layer=self._loss, parameters=self._parameters)\n # run inference\n return self._loss_inferer.infer(input=infer_data)\n\n def infer_batch(self, infer_data, decoding_method, beam_alpha, beam_beta,\n beam_size, cutoff_prob, cutoff_top_n, vocab_list,\n language_model_path, num_processes):\n \"\"\"Model inference. Infer the transcription for a batch of speech\n utterances.\n\n :param infer_data: List of utterances to infer, with each utterance\n consisting of a tuple of audio features and\n transcription text (empty string).\n :type infer_data: list\n :param decoding_method: Decoding method name, 'ctc_greedy' or\n 'ctc_beam_search'.\n :param decoding_method: string\n :param beam_alpha: Parameter associated with language model.\n :type beam_alpha: float\n :param beam_beta: Parameter associated with word count.\n :type beam_beta: float\n :param beam_size: Width for Beam search.\n :type beam_size: int\n :param cutoff_prob: Cutoff probability in pruning,\n default 1.0, no pruning.\n :type cutoff_prob: float\n :param cutoff_top_n: Cutoff number in pruning, only top cutoff_top_n\n characters with highest probs in vocabulary will be\n used in beam search, default 40.\n :type cutoff_top_n: int\n :param vocab_list: List of tokens in the vocabulary, for decoding.\n :type vocab_list: list\n :param language_model_path: Filepath for language model.\n :type language_model_path: basestring|None\n :param num_processes: Number of processes (CPU) for decoder.\n :type num_processes: int\n :return: List of transcription texts.\n :rtype: List of basestring\n \"\"\"\n # define inferer\n if self._inferer == None:\n self._inferer = paddle.inference.Inference(\n output_layer=self._log_probs, parameters=self._parameters)\n # run inference\n infer_results = self._inferer.infer(input=infer_data)\n num_steps = len(infer_results) // len(infer_data)\n probs_split = [\n infer_results[i * num_steps:(i + 1) * num_steps]\n for i in xrange(0, len(infer_data))\n ]\n # run decoder\n results = []\n if decoding_method == \"ctc_greedy\":\n # best path decode\n for i, probs in enumerate(probs_split):\n output_transcription = ctc_greedy_decoder(\n probs_seq=probs, vocabulary=vocab_list)\n results.append(output_transcription)\n elif decoding_method == \"ctc_beam_search\":\n # initialize external scorer\n if self._ext_scorer == None:\n self._loaded_lm_path = language_model_path\n self.logger.info(\"begin to initialize the external scorer \"\n \"for decoding\")\n self._ext_scorer = Scorer(beam_alpha, beam_beta,\n language_model_path, vocab_list)\n\n lm_char_based = self._ext_scorer.is_character_based()\n lm_max_order = self._ext_scorer.get_max_order()\n lm_dict_size = self._ext_scorer.get_dict_size()\n self.logger.info(\"language model: \"\n \"is_character_based = %d,\" % lm_char_based +\n \" max_order = %d,\" % lm_max_order +\n \" dict_size = %d\" % lm_dict_size)\n 
self.logger.info(\"end initializing scorer. Start decoding ...\")\n else:\n self._ext_scorer.reset_params(beam_alpha, beam_beta)\n assert self._loaded_lm_path == language_model_path\n # beam search decode\n num_processes = min(num_processes, len(probs_split))\n beam_search_results = ctc_beam_search_decoder_batch(\n probs_split=probs_split,\n vocabulary=vocab_list,\n beam_size=beam_size,\n num_processes=num_processes,\n ext_scoring_func=self._ext_scorer,\n cutoff_prob=cutoff_prob,\n cutoff_top_n=cutoff_top_n)\n\n results = [result[0][1] for result in beam_search_results]\n else:\n raise ValueError(\"Decoding method [%s] is not supported.\" %\n decoding_method)\n return results\n\n def _create_parameters(self, model_path=None):\n \"\"\"Load or create model parameters.\"\"\"\n if model_path is None:\n self._parameters = paddle.parameters.create(self._loss)\n else:\n self._parameters = paddle.parameters.Parameters.from_tar(\n gzip.open(model_path))\n\n def _create_network(self, vocab_size, num_conv_layers, num_rnn_layers,\n rnn_layer_size, use_gru, share_rnn_weights):\n \"\"\"Create data layers and model network.\"\"\"\n # paddle.data_type.dense_array is used for variable batch input.\n # The size 161 * 161 is only an placeholder value and the real shape\n # of input batch data will be induced during training.\n audio_data = paddle.layer.data(\n name=\"audio_spectrogram\",\n type=paddle.data_type.dense_array(161 * 161))\n text_data = paddle.layer.data(\n name=\"transcript_text\",\n type=paddle.data_type.integer_value_sequence(vocab_size))\n self._log_probs, self._loss = deep_speech_v2_network(\n audio_data=audio_data,\n text_data=text_data,\n dict_size=vocab_size,\n num_conv_layers=num_conv_layers,\n num_rnn_layers=num_rnn_layers,\n rnn_size=rnn_layer_size,\n use_gru=use_gru,\n share_rnn_weights=share_rnn_weights)\n", "path": "deep_speech_2/model_utils/model.py"}]} | 3,623 | 224 |
gh_patches_debug_19850 | rasdani/github-patches | git_diff | PyGithub__PyGithub-1894 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug in repr(PublicKey)
Hi there.
There is a bug in "repo.get_public_key".
I found this bug by trying to use "repo.create_secret", which failed because of this issue:
```
repo.get_public_key()
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Users/sschultchen/PycharmProjects/IAC_Main/venv/lib/python3.9/site-packages/github/PublicKey.py", line 55, in __repr__
return self.get__repr__({"key_id": self._key_id.value, "key": self._key.value})
File "/Users/sschultchen/PycharmProjects/IAC_Main/venv/lib/python3.9/site-packages/github/GithubObject.py", line 62, in value
raise GithubException.BadAttributeException(
github.GithubException.BadAttributeException: (1, <class 'str'>, None)
```
I guess the reason for this might be that the PyGithub implementation assumes that "key_id" is a string, but it is actually an integer.
At least, the GitHub API gives me an integer, not a string, for this attribute.
--- END ISSUE ---
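To make the traceback above easier to read: judging by the line it points at, PyGithub stores each field in a typed attribute wrapper, and a wrapper built for `str` refuses to hand back an integer `key_id`, raising `BadAttributeException` as soon as `repr()` touches `.value`. The toy class below only mimics that failure mode; it is not PyGithub's actual implementation.
```python
# Toy reproduction of the failure mode described above (not PyGithub code).
class StringOnlyAttribute:
    def __init__(self, raw):
        self._raw = raw

    @property
    def value(self):
        if not isinstance(self._raw, str):
            # PyGithub raises BadAttributeException here; ValueError stands in for it.
            raise ValueError((self._raw, str, None))
        return self._raw

key_id = StringOnlyAttribute(1)   # GitHub returned the integer 1
print(key_id.value)               # raises, just like repr(PublicKey) in the report
```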
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `github/PublicKey.py`
Content:
```
1 ############################ Copyrights and license ############################
2 # #
3 # Copyright 2012 Vincent Jacques <[email protected]> #
4 # Copyright 2012 Zearin <[email protected]> #
5 # Copyright 2013 AKFish <[email protected]> #
6 # Copyright 2013 Vincent Jacques <[email protected]> #
7 # Copyright 2014 Vincent Jacques <[email protected]> #
8 # Copyright 2016 Jannis Gebauer <[email protected]> #
9 # Copyright 2016 Peter Buckley <[email protected]> #
10 # Copyright 2018 Wan Liuyang <[email protected]> #
11 # Copyright 2018 sfdye <[email protected]> #
12 # #
13 # This file is part of PyGithub. #
14 # http://pygithub.readthedocs.io/ #
15 # #
16 # PyGithub is free software: you can redistribute it and/or modify it under #
17 # the terms of the GNU Lesser General Public License as published by the Free #
18 # Software Foundation, either version 3 of the License, or (at your option) #
19 # any later version. #
20 # #
21 # PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #
22 # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #
23 # FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #
24 # details. #
25 # #
26 # You should have received a copy of the GNU Lesser General Public License #
27 # along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #
28 # #
29 ################################################################################
30
31 # https://docs.github.com/en/rest/reference/actions#example-encrypting-a-secret-using-python
32 from base64 import b64encode
33
34 from nacl import encoding, public
35
36 import github.GithubObject
37
38
39 def encrypt(public_key: str, secret_value: str) -> str:
40 """Encrypt a Unicode string using the public key."""
41 public_key = public.PublicKey(public_key.encode("utf-8"), encoding.Base64Encoder())
42 sealed_box = public.SealedBox(public_key)
43 encrypted = sealed_box.encrypt(secret_value.encode("utf-8"))
44 return b64encode(encrypted).decode("utf-8")
45
46
47 class PublicKey(github.GithubObject.CompletableGithubObject):
48 """
49 This class represents either an organization public key or a repository public key.
50 The reference can be found here https://docs.github.com/en/rest/reference/actions#get-an-organization-public-key
51 or here https://docs.github.com/en/rest/reference/actions#get-a-repository-public-key
52 """
53
54 def __repr__(self):
55 return self.get__repr__({"key_id": self._key_id.value, "key": self._key.value})
56
57 @property
58 def key(self):
59 """
60 :type: string
61 """
62 self._completeIfNotSet(self._key)
63 return self._key.value
64
65 @property
66 def key_id(self):
67 """
68 :type: string
69 """
70 self._completeIfNotSet(self._key_id)
71 return self._key_id.value
72
73 def _initAttributes(self):
74 self._key = github.GithubObject.NotSet
75 self._key_id = github.GithubObject.NotSet
76
77 def _useAttributes(self, attributes):
78 if "key" in attributes: # pragma no branch
79 self._key = self._makeStringAttribute(attributes["key"])
80 if "key_id" in attributes: # pragma no branch
81 self._key_id = self._makeStringAttribute(attributes["key_id"])
82
83 def encrypt(self, unencrypted_value):
84 return encrypt(self._key.value, unencrypted_value)
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/github/PublicKey.py b/github/PublicKey.py
--- a/github/PublicKey.py
+++ b/github/PublicKey.py
@@ -65,7 +65,7 @@
@property
def key_id(self):
"""
- :type: string
+ :type: string or int
"""
self._completeIfNotSet(self._key_id)
return self._key_id.value
@@ -78,7 +78,10 @@
if "key" in attributes: # pragma no branch
self._key = self._makeStringAttribute(attributes["key"])
if "key_id" in attributes: # pragma no branch
- self._key_id = self._makeStringAttribute(attributes["key_id"])
+ if type(attributes["key_id"]) == str:
+ self._key_id = self._makeStringAttribute(attributes["key_id"])
+ else:
+ self._key_id = self._makeIntAttribute(attributes["key_id"])
def encrypt(self, unencrypted_value):
return encrypt(self._key.value, unencrypted_value)
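With the type check in place, the flow the reporter originally tried should work end to end. A rough usage sketch follows; the token and repository name are placeholders, and it assumes a PyGithub release that already ships `Repository.create_secret`, as the report implies.
```python
from github import Github

gh = Github("<personal-access-token>")           # placeholder token
repo = gh.get_repo("someuser/somerepo")          # placeholder repository

key = repo.get_public_key()                      # no BadAttributeException on an integer key_id
print(key.key_id, key.key)

repo.create_secret("MY_SECRET", "super-secret")  # relies on the public key above, which is why the bug broke it
```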
| {"golden_diff": "diff --git a/github/PublicKey.py b/github/PublicKey.py\n--- a/github/PublicKey.py\n+++ b/github/PublicKey.py\n@@ -65,7 +65,7 @@\n @property\n def key_id(self):\n \"\"\"\n- :type: string\n+ :type: string or int\n \"\"\"\n self._completeIfNotSet(self._key_id)\n return self._key_id.value\n@@ -78,7 +78,10 @@\n if \"key\" in attributes: # pragma no branch\n self._key = self._makeStringAttribute(attributes[\"key\"])\n if \"key_id\" in attributes: # pragma no branch\n- self._key_id = self._makeStringAttribute(attributes[\"key_id\"])\n+ if type(attributes[\"key_id\"]) == str:\n+ self._key_id = self._makeStringAttribute(attributes[\"key_id\"])\n+ else:\n+ self._key_id = self._makeIntAttribute(attributes[\"key_id\"])\n \n def encrypt(self, unencrypted_value):\n return encrypt(self._key.value, unencrypted_value)\n", "issue": "Bug in repr(PublicKey)\nhi there.\r\n\r\nthere is a bug in \"repo.get_public_key\".\r\n\r\ni found this bug by trying to use \"repo.create_secret\", which failed because of this issue\r\n\r\n```\r\nrepo.get_public_key()\r\nTraceback (most recent call last):\r\n File \"<input>\", line 1, in <module>\r\n File \"/Users/sschultchen/PycharmProjects/IAC_Main/venv/lib/python3.9/site-packages/github/PublicKey.py\", line 55, in __repr__\r\n return self.get__repr__({\"key_id\": self._key_id.value, \"key\": self._key.value})\r\n File \"/Users/sschultchen/PycharmProjects/IAC_Main/venv/lib/python3.9/site-packages/github/GithubObject.py\", line 62, in value\r\n raise GithubException.BadAttributeException(\r\ngithub.GithubException.BadAttributeException: (1, <class 'str'>, None)\r\n```\r\n\r\ni guess the reason for this might be that the pygithub implementation assumes that \"key_id\" is a string, but it actually is a integer.\r\n\r\nat least, the github api gives me an integer, and not a string for this attribute.\n", "before_files": [{"content": "############################ Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 AKFish <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2014 Vincent Jacques <[email protected]> #\n# Copyright 2016 Jannis Gebauer <[email protected]> #\n# Copyright 2016 Peter Buckley <[email protected]> #\n# Copyright 2018 Wan Liuyang <[email protected]> #\n# Copyright 2018 sfdye <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.readthedocs.io/ #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. 
#\n# #\n################################################################################\n\n# https://docs.github.com/en/rest/reference/actions#example-encrypting-a-secret-using-python\nfrom base64 import b64encode\n\nfrom nacl import encoding, public\n\nimport github.GithubObject\n\n\ndef encrypt(public_key: str, secret_value: str) -> str:\n \"\"\"Encrypt a Unicode string using the public key.\"\"\"\n public_key = public.PublicKey(public_key.encode(\"utf-8\"), encoding.Base64Encoder())\n sealed_box = public.SealedBox(public_key)\n encrypted = sealed_box.encrypt(secret_value.encode(\"utf-8\"))\n return b64encode(encrypted).decode(\"utf-8\")\n\n\nclass PublicKey(github.GithubObject.CompletableGithubObject):\n \"\"\"\n This class represents either an organization public key or a repository public key.\n The reference can be found here https://docs.github.com/en/rest/reference/actions#get-an-organization-public-key\n or here https://docs.github.com/en/rest/reference/actions#get-a-repository-public-key\n \"\"\"\n\n def __repr__(self):\n return self.get__repr__({\"key_id\": self._key_id.value, \"key\": self._key.value})\n\n @property\n def key(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._key)\n return self._key.value\n\n @property\n def key_id(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._key_id)\n return self._key_id.value\n\n def _initAttributes(self):\n self._key = github.GithubObject.NotSet\n self._key_id = github.GithubObject.NotSet\n\n def _useAttributes(self, attributes):\n if \"key\" in attributes: # pragma no branch\n self._key = self._makeStringAttribute(attributes[\"key\"])\n if \"key_id\" in attributes: # pragma no branch\n self._key_id = self._makeStringAttribute(attributes[\"key_id\"])\n\n def encrypt(self, unencrypted_value):\n return encrypt(self._key.value, unencrypted_value)\n", "path": "github/PublicKey.py"}], "after_files": [{"content": "############################ Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 AKFish <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2014 Vincent Jacques <[email protected]> #\n# Copyright 2016 Jannis Gebauer <[email protected]> #\n# Copyright 2016 Peter Buckley <[email protected]> #\n# Copyright 2018 Wan Liuyang <[email protected]> #\n# Copyright 2018 sfdye <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.readthedocs.io/ #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. 
#\n# #\n################################################################################\n\n# https://docs.github.com/en/rest/reference/actions#example-encrypting-a-secret-using-python\nfrom base64 import b64encode\n\nfrom nacl import encoding, public\n\nimport github.GithubObject\n\n\ndef encrypt(public_key: str, secret_value: str) -> str:\n \"\"\"Encrypt a Unicode string using the public key.\"\"\"\n public_key = public.PublicKey(public_key.encode(\"utf-8\"), encoding.Base64Encoder())\n sealed_box = public.SealedBox(public_key)\n encrypted = sealed_box.encrypt(secret_value.encode(\"utf-8\"))\n return b64encode(encrypted).decode(\"utf-8\")\n\n\nclass PublicKey(github.GithubObject.CompletableGithubObject):\n \"\"\"\n This class represents either an organization public key or a repository public key.\n The reference can be found here https://docs.github.com/en/rest/reference/actions#get-an-organization-public-key\n or here https://docs.github.com/en/rest/reference/actions#get-a-repository-public-key\n \"\"\"\n\n def __repr__(self):\n return self.get__repr__({\"key_id\": self._key_id.value, \"key\": self._key.value})\n\n @property\n def key(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._key)\n return self._key.value\n\n @property\n def key_id(self):\n \"\"\"\n :type: string or int\n \"\"\"\n self._completeIfNotSet(self._key_id)\n return self._key_id.value\n\n def _initAttributes(self):\n self._key = github.GithubObject.NotSet\n self._key_id = github.GithubObject.NotSet\n\n def _useAttributes(self, attributes):\n if \"key\" in attributes: # pragma no branch\n self._key = self._makeStringAttribute(attributes[\"key\"])\n if \"key_id\" in attributes: # pragma no branch\n if type(attributes[\"key_id\"]) == str:\n self._key_id = self._makeStringAttribute(attributes[\"key_id\"])\n else:\n self._key_id = self._makeIntAttribute(attributes[\"key_id\"])\n\n def encrypt(self, unencrypted_value):\n return encrypt(self._key.value, unencrypted_value)\n", "path": "github/PublicKey.py"}]} | 1,511 | 235 |
gh_patches_debug_6748 | rasdani/github-patches | git_diff | internetarchive__openlibrary-7922 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update solr Docker tag to v8.11.2
[](https://renovatebot.com)
This PR contains the following updates:
| Package | Update | Change |
|---|---|---|
| [solr](https://togithub.com/apache/solr) | minor | `8.10.1` -> `8.11.2` |
---
### Configuration
📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box
---
This PR has been generated by [Mend Renovate](https://www.mend.io/free-developer-tools/renovate/). View repository job log [here](https://developer.mend.io/github/internetarchive/openlibrary).
<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzMi4xNTQuMiIsInVwZGF0ZWRJblZlciI6IjM2LjQwLjMiLCJ0YXJnZXRCcmFuY2giOiJtYXN0ZXIifQ==-->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openlibrary/solr/types_generator.py`
Content:
```
1 #!/usr/bin/env python
2 import os
3
4 root = os.path.dirname(__file__)
5 OVERRIDES = {'type': "Literal['work', 'author', 'subject']"}
6
7
8 def generate():
9 """This function generates the types.py file."""
10 import xml.etree.ElementTree as ET
11
12 # read the managed-schema xml file
13 solr_schema = ET.parse(os.path.join(root, '../../conf/solr/conf/managed-schema'))
14 python_fields: list[str] = []
15 seen_names: set[str] = set()
16 for field in solr_schema.getroot().findall('field'):
17 name = field.get('name')
18 if name.startswith('_'):
19 continue
20
21 required = field.get('required') == 'true'
22 typ = field.get('type')
23 multivalued = field.get('multiValued') == 'true'
24 type_map = {
25 'pint': 'int',
26 'string': 'str',
27 'text_en_splitting': 'str',
28 'text_general': 'str',
29 'text_international': 'str',
30 'text_title_sort': 'str',
31 'boolean': 'bool',
32 'pfloat': 'float',
33 }
34
35 if name in OVERRIDES:
36 python_type = OVERRIDES[name]
37 elif typ in type_map:
38 python_type = type_map[typ]
39 elif (
40 field_type := solr_schema.find(f".//fieldType[@name='{typ}']")
41 ) is not None:
42 field_class = field_type.get('class')
43 if field_class == 'solr.EnumFieldType':
44 enumsConfigFile = field_type.get('enumsConfig')
45 enumsConfig = ET.parse(
46 os.path.join(root, '../../conf/solr/conf/', enumsConfigFile)
47 )
48 enum_values = [
49 el.text
50 for el in enumsConfig.findall(
51 f".//enum[@name='{field_type.get('enumName')}']/value"
52 )
53 ]
54 python_type = f"Literal[{', '.join(map(repr, enum_values))}]"
55 else:
56 raise Exception(f"Unknown field type class {field_class}")
57 else:
58 raise Exception(f"Unknown field type {typ}")
59
60 if name not in OVERRIDES:
61 if multivalued:
62 python_type = f"list[{python_type}]"
63 if not required:
64 python_type = f"Optional[{python_type}]"
65
66 seen_names.add(name)
67 python_fields.append(f" {name}: {python_type}")
68
69 for key in set(OVERRIDES) - seen_names:
70 python_fields.append(f" {key}: {OVERRIDES[key]}")
71
72 body = '\n'.join(python_fields)
73 python = f"""# This file is auto-generated by types_generator.py
74 # fmt: off
75 from typing import Literal, TypedDict, Optional
76
77
78 class SolrDocument(TypedDict):
79 {body}
80
81 # fmt: on"""
82
83 return python
84
85
86 if __name__ == '__main__':
87 print(generate())
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/openlibrary/solr/types_generator.py b/openlibrary/solr/types_generator.py
--- a/openlibrary/solr/types_generator.py
+++ b/openlibrary/solr/types_generator.py
@@ -10,7 +10,9 @@
import xml.etree.ElementTree as ET
# read the managed-schema xml file
- solr_schema = ET.parse(os.path.join(root, '../../conf/solr/conf/managed-schema'))
+ solr_schema = ET.parse(
+ os.path.join(root, '../../conf/solr/conf/managed-schema.xml')
+ )
python_fields: list[str] = []
seen_names: set[str] = set()
for field in solr_schema.getroot().findall('field'):
| {"golden_diff": "diff --git a/openlibrary/solr/types_generator.py b/openlibrary/solr/types_generator.py\n--- a/openlibrary/solr/types_generator.py\n+++ b/openlibrary/solr/types_generator.py\n@@ -10,7 +10,9 @@\n import xml.etree.ElementTree as ET\n \n # read the managed-schema xml file\n- solr_schema = ET.parse(os.path.join(root, '../../conf/solr/conf/managed-schema'))\n+ solr_schema = ET.parse(\n+ os.path.join(root, '../../conf/solr/conf/managed-schema.xml')\n+ )\n python_fields: list[str] = []\n seen_names: set[str] = set()\n for field in solr_schema.getroot().findall('field'):\n", "issue": "Update solr Docker tag to v8.11.2\n[](https://renovatebot.com)\n\nThis PR contains the following updates:\n\n| Package | Update | Change |\n|---|---|---|\n| [solr](https://togithub.com/apache/solr) | minor | `8.10.1` -> `8.11.2` |\n\n---\n\n### Configuration\n\n\ud83d\udcc5 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).\n\n\ud83d\udea6 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.\n\n\u267b **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.\n\n\ud83d\udd15 **Ignore**: Close this PR and you won't be reminded about this update again.\n\n---\n\n - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box\n\n---\n\nThis PR has been generated by [Mend Renovate](https://www.mend.io/free-developer-tools/renovate/). View repository job log [here](https://developer.mend.io/github/internetarchive/openlibrary).\n<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzMi4xNTQuMiIsInVwZGF0ZWRJblZlciI6IjM2LjQwLjMiLCJ0YXJnZXRCcmFuY2giOiJtYXN0ZXIifQ==-->\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\n\nroot = os.path.dirname(__file__)\nOVERRIDES = {'type': \"Literal['work', 'author', 'subject']\"}\n\n\ndef generate():\n \"\"\"This function generates the types.py file.\"\"\"\n import xml.etree.ElementTree as ET\n\n # read the managed-schema xml file\n solr_schema = ET.parse(os.path.join(root, '../../conf/solr/conf/managed-schema'))\n python_fields: list[str] = []\n seen_names: set[str] = set()\n for field in solr_schema.getroot().findall('field'):\n name = field.get('name')\n if name.startswith('_'):\n continue\n\n required = field.get('required') == 'true'\n typ = field.get('type')\n multivalued = field.get('multiValued') == 'true'\n type_map = {\n 'pint': 'int',\n 'string': 'str',\n 'text_en_splitting': 'str',\n 'text_general': 'str',\n 'text_international': 'str',\n 'text_title_sort': 'str',\n 'boolean': 'bool',\n 'pfloat': 'float',\n }\n\n if name in OVERRIDES:\n python_type = OVERRIDES[name]\n elif typ in type_map:\n python_type = type_map[typ]\n elif (\n field_type := solr_schema.find(f\".//fieldType[@name='{typ}']\")\n ) is not None:\n field_class = field_type.get('class')\n if field_class == 'solr.EnumFieldType':\n enumsConfigFile = field_type.get('enumsConfig')\n enumsConfig = ET.parse(\n os.path.join(root, '../../conf/solr/conf/', enumsConfigFile)\n )\n enum_values = [\n el.text\n for el in enumsConfig.findall(\n f\".//enum[@name='{field_type.get('enumName')}']/value\"\n )\n ]\n python_type = f\"Literal[{', '.join(map(repr, enum_values))}]\"\n else:\n raise Exception(f\"Unknown field type class {field_class}\")\n else:\n raise Exception(f\"Unknown field type {typ}\")\n\n if name not in OVERRIDES:\n if multivalued:\n python_type = f\"list[{python_type}]\"\n if not required:\n python_type = f\"Optional[{python_type}]\"\n\n 
seen_names.add(name)\n python_fields.append(f\" {name}: {python_type}\")\n\n for key in set(OVERRIDES) - seen_names:\n python_fields.append(f\" {key}: {OVERRIDES[key]}\")\n\n body = '\\n'.join(python_fields)\n python = f\"\"\"# This file is auto-generated by types_generator.py\n# fmt: off\nfrom typing import Literal, TypedDict, Optional\n\n\nclass SolrDocument(TypedDict):\n{body}\n\n# fmt: on\"\"\"\n\n return python\n\n\nif __name__ == '__main__':\n print(generate())\n", "path": "openlibrary/solr/types_generator.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport os\n\nroot = os.path.dirname(__file__)\nOVERRIDES = {'type': \"Literal['work', 'author', 'subject']\"}\n\n\ndef generate():\n \"\"\"This function generates the types.py file.\"\"\"\n import xml.etree.ElementTree as ET\n\n # read the managed-schema xml file\n solr_schema = ET.parse(\n os.path.join(root, '../../conf/solr/conf/managed-schema.xml')\n )\n python_fields: list[str] = []\n seen_names: set[str] = set()\n for field in solr_schema.getroot().findall('field'):\n name = field.get('name')\n if name.startswith('_'):\n continue\n\n required = field.get('required') == 'true'\n typ = field.get('type')\n multivalued = field.get('multiValued') == 'true'\n type_map = {\n 'pint': 'int',\n 'string': 'str',\n 'text_en_splitting': 'str',\n 'text_general': 'str',\n 'text_international': 'str',\n 'text_title_sort': 'str',\n 'boolean': 'bool',\n 'pfloat': 'float',\n }\n\n if name in OVERRIDES:\n python_type = OVERRIDES[name]\n elif typ in type_map:\n python_type = type_map[typ]\n elif (\n field_type := solr_schema.find(f\".//fieldType[@name='{typ}']\")\n ) is not None:\n field_class = field_type.get('class')\n if field_class == 'solr.EnumFieldType':\n enumsConfigFile = field_type.get('enumsConfig')\n enumsConfig = ET.parse(\n os.path.join(root, '../../conf/solr/conf/', enumsConfigFile)\n )\n enum_values = [\n el.text\n for el in enumsConfig.findall(\n f\".//enum[@name='{field_type.get('enumName')}']/value\"\n )\n ]\n python_type = f\"Literal[{', '.join(map(repr, enum_values))}]\"\n else:\n raise Exception(f\"Unknown field type class {field_class}\")\n else:\n raise Exception(f\"Unknown field type {typ}\")\n\n if name not in OVERRIDES:\n if multivalued:\n python_type = f\"list[{python_type}]\"\n if not required:\n python_type = f\"Optional[{python_type}]\"\n\n seen_names.add(name)\n python_fields.append(f\" {name}: {python_type}\")\n\n for key in set(OVERRIDES) - seen_names:\n python_fields.append(f\" {key}: {OVERRIDES[key]}\")\n\n body = '\\n'.join(python_fields)\n python = f\"\"\"# This file is auto-generated by types_generator.py\n# fmt: off\nfrom typing import Literal, TypedDict, Optional\n\n\nclass SolrDocument(TypedDict):\n{body}\n\n# fmt: on\"\"\"\n\n return python\n\n\nif __name__ == '__main__':\n print(generate())\n", "path": "openlibrary/solr/types_generator.py"}]} | 1,419 | 162 |
gh_patches_debug_30235 | rasdani/github-patches | git_diff | Flexget__Flexget-2765 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Flexget add to qbittorrent plugin will force recheck even if file is available
qBittorrent allows you to bypass rechecking if the file already exists, but the plugin will sit there and verify every single file. 
--- END ISSUE ---
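The behaviour being requested maps onto qBittorrent's "skip hash check" flag, which the patch later in this entry exposes as a boolean `skip_check` option. A minimal sketch of the resulting task configuration, assuming that option name; the host, port and path values are placeholders:
```yaml
qbittorrent:
  host: localhost
  port: 8080
  path: /downloads
  skip_check: yes    # forwarded to qBittorrent as skip_checking, so existing files are not re-verified
```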
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flexget/plugins/clients/qbittorrent.py`
Content:
```
1 import os
2
3 from loguru import logger
4 from requests import Session
5 from requests.exceptions import RequestException
6
7 from flexget import plugin
8 from flexget.event import event
9 from flexget.utils.template import RenderError
10
11 logger = logger.bind(name='qbittorrent')
12
13
14 class OutputQBitTorrent:
15 """
16 Example:
17
18 qbittorrent:
19 username: <USERNAME> (default: (none))
20 password: <PASSWORD> (default: (none))
21 host: <HOSTNAME> (default: localhost)
22 port: <PORT> (default: 8080)
23 use_ssl: <SSL> (default: False)
24 verify_cert: <VERIFY> (default: True)
25 path: <OUTPUT_DIR> (default: (none))
26 label: <LABEL> (default: (none))
27 maxupspeed: <torrent upload speed limit> (default: 0)
28 maxdownspeed: <torrent download speed limit> (default: 0)
29 add_paused: <ADD_PAUSED> (default: False)
30 """
31
32 schema = {
33 'anyOf': [
34 {'type': 'boolean'},
35 {
36 'type': 'object',
37 'properties': {
38 'username': {'type': 'string'},
39 'password': {'type': 'string'},
40 'host': {'type': 'string'},
41 'port': {'type': 'integer'},
42 'use_ssl': {'type': 'boolean'},
43 'verify_cert': {'type': 'boolean'},
44 'path': {'type': 'string'},
45 'label': {'type': 'string'},
46 'maxupspeed': {'type': 'integer'},
47 'maxdownspeed': {'type': 'integer'},
48 'fail_html': {'type': 'boolean'},
49 'add_paused': {'type': 'boolean'},
50 },
51 'additionalProperties': False,
52 },
53 ]
54 }
55
56 def __init__(self):
57 super().__init__()
58 self.session = Session()
59 self.api_url_login = None
60 self.api_url_upload = None
61 self.api_url_download = None
62 self.url = None
63 self.connected = False
64
65 def _request(self, method, url, msg_on_fail=None, **kwargs):
66 try:
67 response = self.session.request(method, url, **kwargs)
68 if response.text == "Ok.":
69 return response
70 msg = msg_on_fail if msg_on_fail else f'Failure. URL: {url}, data: {kwargs}'
71 except RequestException as e:
72 msg = str(e)
73 raise plugin.PluginError(f'Error when trying to send request to qBittorrent: {msg}')
74
75 def check_api_version(self, msg_on_fail, verify=True):
76 try:
77 url = self.url + "/api/v2/app/webapiVersion"
78 response = self.session.request('get', url, verify=verify)
79 if response.status_code != 404:
80 self.api_url_login = '/api/v2/auth/login'
81 self.api_url_upload = '/api/v2/torrents/add'
82 self.api_url_download = '/api/v2/torrents/add'
83 return response
84
85 url = self.url + "/version/api"
86 response = self.session.request('get', url, verify=verify)
87 if response.status_code != 404:
88 self.api_url_login = '/login'
89 self.api_url_upload = '/command/upload'
90 self.api_url_download = '/command/download'
91 return response
92
93 msg = 'Failure. URL: {}'.format(url) if not msg_on_fail else msg_on_fail
94 except RequestException as e:
95 msg = str(e)
96 raise plugin.PluginError(
97 'Error when trying to send request to qBittorrent: {}'.format(msg)
98 )
99
100 def connect(self, config):
101 """
102 Connect to qBittorrent Web UI. Username and password not necessary
103 if 'Bypass authentication for localhost' is checked and host is
104 'localhost'.
105 """
106 self.url = '{}://{}:{}'.format(
107 'https' if config['use_ssl'] else 'http', config['host'], config['port']
108 )
109 self.check_api_version('Check API version failed.', verify=config['verify_cert'])
110 if config.get('username') and config.get('password'):
111 data = {'username': config['username'], 'password': config['password']}
112 self._request(
113 'post',
114 self.url + self.api_url_login,
115 data=data,
116 msg_on_fail='Authentication failed.',
117 verify=config['verify_cert'],
118 )
119 logger.debug('Successfully connected to qBittorrent')
120 self.connected = True
121
122 def add_torrent_file(self, file_path, data, verify_cert):
123 if not self.connected:
124 raise plugin.PluginError('Not connected.')
125 multipart_data = {k: (None, v) for k, v in data.items()}
126 with open(file_path, 'rb') as f:
127 multipart_data['torrents'] = f
128 self._request(
129 'post',
130 self.url + self.api_url_upload,
131 msg_on_fail='Failed to add file to qBittorrent',
132 files=multipart_data,
133 verify=verify_cert,
134 )
135 logger.debug('Added torrent file {} to qBittorrent', file_path)
136
137 def add_torrent_url(self, url, data, verify_cert):
138 if not self.connected:
139 raise plugin.PluginError('Not connected.')
140 data['urls'] = url
141 multipart_data = {k: (None, v) for k, v in data.items()}
142 self._request(
143 'post',
144 self.url + self.api_url_download,
145 msg_on_fail='Failed to add file to qBittorrent',
146 files=multipart_data,
147 verify=verify_cert,
148 )
149 logger.debug('Added url {} to qBittorrent', url)
150
151 @staticmethod
152 def prepare_config(config):
153 if isinstance(config, bool):
154 config = {'enabled': config}
155 config.setdefault('enabled', True)
156 config.setdefault('host', 'localhost')
157 config.setdefault('port', 8080)
158 config.setdefault('use_ssl', False)
159 config.setdefault('verify_cert', True)
160 config.setdefault('label', '')
161 config.setdefault('maxupspeed', 0)
162 config.setdefault('maxdownspeed', 0)
163 config.setdefault('fail_html', True)
164 return config
165
166 def add_entries(self, task, config):
167 for entry in task.accepted:
168 form_data = {}
169 try:
170 save_path = entry.render(entry.get('path', config.get('path', '')))
171 if save_path:
172 form_data['savepath'] = save_path
173 except RenderError as e:
174 logger.error('Error setting path for {}: {}', entry['title'], e)
175
176 label = entry.render(entry.get('label', config.get('label', '')))
177 if label:
178 form_data['label'] = label # qBittorrent v3.3.3-
179 form_data['category'] = label # qBittorrent v3.3.4+
180
181 add_paused = entry.get('add_paused', config.get('add_paused'))
182 if add_paused:
183 form_data['paused'] = 'true'
184
185 maxupspeed = entry.get('maxupspeed', config.get('maxupspeed'))
186 if maxupspeed:
187 form_data['upLimit'] = maxupspeed * 1024
188
189 maxdownspeed = entry.get('maxdownspeed', config.get('maxdownspeed'))
190 if maxdownspeed:
191 form_data['dlLimit'] = maxdownspeed * 1024
192
193 is_magnet = entry['url'].startswith('magnet:')
194
195 if task.manager.options.test:
196 logger.info('Test mode.')
197 logger.info('Would add torrent to qBittorrent with:')
198 if not is_magnet:
199 logger.info('File: {}', entry.get('file'))
200 else:
201 logger.info('Url: {}', entry.get('url'))
202 logger.info('Save path: {}', form_data.get('savepath'))
203 logger.info('Label: {}', form_data.get('label'))
204 logger.info('Paused: {}', form_data.get('paused', 'false'))
205 if maxupspeed:
206 logger.info('Upload Speed Limit: {}', form_data.get('upLimit'))
207 if maxdownspeed:
208 logger.info('Download Speed Limit: {}', form_data.get('dlLimit'))
209 continue
210
211 if not is_magnet:
212 if 'file' not in entry:
213 entry.fail('File missing?')
214 continue
215 if not os.path.exists(entry['file']):
216 tmp_path = os.path.join(task.manager.config_base, 'temp')
217 logger.debug('entry: {}', entry)
218 logger.debug('temp: {}', ', '.join(os.listdir(tmp_path)))
219 entry.fail("Downloaded temp file '%s' doesn't exist!?" % entry['file'])
220 continue
221 self.add_torrent_file(entry['file'], form_data, config['verify_cert'])
222 else:
223 self.add_torrent_url(entry['url'], form_data, config['verify_cert'])
224
225 @plugin.priority(120)
226 def on_task_download(self, task, config):
227 """
228 Call download plugin to generate torrent files to load into
229 qBittorrent.
230 """
231 config = self.prepare_config(config)
232 if not config['enabled']:
233 return
234 if 'download' not in task.config:
235 download = plugin.get('download', self)
236 download.get_temp_files(task, handle_magnets=True, fail_html=config['fail_html'])
237
238 @plugin.priority(135)
239 def on_task_output(self, task, config):
240 """Add torrents to qBittorrent at exit."""
241 if task.accepted:
242 config = self.prepare_config(config)
243 self.connect(config)
244 self.add_entries(task, config)
245
246
247 @event('plugin.register')
248 def register_plugin():
249 plugin.register(OutputQBitTorrent, 'qbittorrent', api_ver=2)
250
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/flexget/plugins/clients/qbittorrent.py b/flexget/plugins/clients/qbittorrent.py
--- a/flexget/plugins/clients/qbittorrent.py
+++ b/flexget/plugins/clients/qbittorrent.py
@@ -47,6 +47,7 @@
'maxdownspeed': {'type': 'integer'},
'fail_html': {'type': 'boolean'},
'add_paused': {'type': 'boolean'},
+ 'skip_check': {'type': 'boolean'},
},
'additionalProperties': False,
},
@@ -182,6 +183,10 @@
if add_paused:
form_data['paused'] = 'true'
+ skip_check = entry.get('skip_check', config.get('skip_check'))
+ if skip_check:
+ form_data['skip_checking'] = 'true'
+
maxupspeed = entry.get('maxupspeed', config.get('maxupspeed'))
if maxupspeed:
form_data['upLimit'] = maxupspeed * 1024
@@ -202,6 +207,7 @@
logger.info('Save path: {}', form_data.get('savepath'))
logger.info('Label: {}', form_data.get('label'))
logger.info('Paused: {}', form_data.get('paused', 'false'))
+ logger.info('Skip Hash Check: {}', form_data.get('skip_checking', 'false'))
if maxupspeed:
logger.info('Upload Speed Limit: {}', form_data.get('upLimit'))
if maxdownspeed:
| {"golden_diff": "diff --git a/flexget/plugins/clients/qbittorrent.py b/flexget/plugins/clients/qbittorrent.py\n--- a/flexget/plugins/clients/qbittorrent.py\n+++ b/flexget/plugins/clients/qbittorrent.py\n@@ -47,6 +47,7 @@\n 'maxdownspeed': {'type': 'integer'},\n 'fail_html': {'type': 'boolean'},\n 'add_paused': {'type': 'boolean'},\n+ 'skip_check': {'type': 'boolean'},\n },\n 'additionalProperties': False,\n },\n@@ -182,6 +183,10 @@\n if add_paused:\n form_data['paused'] = 'true'\n \n+ skip_check = entry.get('skip_check', config.get('skip_check'))\n+ if skip_check:\n+ form_data['skip_checking'] = 'true'\n+\n maxupspeed = entry.get('maxupspeed', config.get('maxupspeed'))\n if maxupspeed:\n form_data['upLimit'] = maxupspeed * 1024\n@@ -202,6 +207,7 @@\n logger.info('Save path: {}', form_data.get('savepath'))\n logger.info('Label: {}', form_data.get('label'))\n logger.info('Paused: {}', form_data.get('paused', 'false'))\n+ logger.info('Skip Hash Check: {}', form_data.get('skip_checking', 'false'))\n if maxupspeed:\n logger.info('Upload Speed Limit: {}', form_data.get('upLimit'))\n if maxdownspeed:\n", "issue": "[BUG] Flexget add to qbittorrent plugin will force recheck even if file is available\nQbittorrent allows you to buypass rechecking if the file exists. But it will sit there and verify every single file. \n", "before_files": [{"content": "import os\n\nfrom loguru import logger\nfrom requests import Session\nfrom requests.exceptions import RequestException\n\nfrom flexget import plugin\nfrom flexget.event import event\nfrom flexget.utils.template import RenderError\n\nlogger = logger.bind(name='qbittorrent')\n\n\nclass OutputQBitTorrent:\n \"\"\"\n Example:\n\n qbittorrent:\n username: <USERNAME> (default: (none))\n password: <PASSWORD> (default: (none))\n host: <HOSTNAME> (default: localhost)\n port: <PORT> (default: 8080)\n use_ssl: <SSL> (default: False)\n verify_cert: <VERIFY> (default: True)\n path: <OUTPUT_DIR> (default: (none))\n label: <LABEL> (default: (none))\n maxupspeed: <torrent upload speed limit> (default: 0)\n maxdownspeed: <torrent download speed limit> (default: 0)\n add_paused: <ADD_PAUSED> (default: False)\n \"\"\"\n\n schema = {\n 'anyOf': [\n {'type': 'boolean'},\n {\n 'type': 'object',\n 'properties': {\n 'username': {'type': 'string'},\n 'password': {'type': 'string'},\n 'host': {'type': 'string'},\n 'port': {'type': 'integer'},\n 'use_ssl': {'type': 'boolean'},\n 'verify_cert': {'type': 'boolean'},\n 'path': {'type': 'string'},\n 'label': {'type': 'string'},\n 'maxupspeed': {'type': 'integer'},\n 'maxdownspeed': {'type': 'integer'},\n 'fail_html': {'type': 'boolean'},\n 'add_paused': {'type': 'boolean'},\n },\n 'additionalProperties': False,\n },\n ]\n }\n\n def __init__(self):\n super().__init__()\n self.session = Session()\n self.api_url_login = None\n self.api_url_upload = None\n self.api_url_download = None\n self.url = None\n self.connected = False\n\n def _request(self, method, url, msg_on_fail=None, **kwargs):\n try:\n response = self.session.request(method, url, **kwargs)\n if response.text == \"Ok.\":\n return response\n msg = msg_on_fail if msg_on_fail else f'Failure. 
URL: {url}, data: {kwargs}'\n except RequestException as e:\n msg = str(e)\n raise plugin.PluginError(f'Error when trying to send request to qBittorrent: {msg}')\n\n def check_api_version(self, msg_on_fail, verify=True):\n try:\n url = self.url + \"/api/v2/app/webapiVersion\"\n response = self.session.request('get', url, verify=verify)\n if response.status_code != 404:\n self.api_url_login = '/api/v2/auth/login'\n self.api_url_upload = '/api/v2/torrents/add'\n self.api_url_download = '/api/v2/torrents/add'\n return response\n\n url = self.url + \"/version/api\"\n response = self.session.request('get', url, verify=verify)\n if response.status_code != 404:\n self.api_url_login = '/login'\n self.api_url_upload = '/command/upload'\n self.api_url_download = '/command/download'\n return response\n\n msg = 'Failure. URL: {}'.format(url) if not msg_on_fail else msg_on_fail\n except RequestException as e:\n msg = str(e)\n raise plugin.PluginError(\n 'Error when trying to send request to qBittorrent: {}'.format(msg)\n )\n\n def connect(self, config):\n \"\"\"\n Connect to qBittorrent Web UI. Username and password not necessary\n if 'Bypass authentication for localhost' is checked and host is\n 'localhost'.\n \"\"\"\n self.url = '{}://{}:{}'.format(\n 'https' if config['use_ssl'] else 'http', config['host'], config['port']\n )\n self.check_api_version('Check API version failed.', verify=config['verify_cert'])\n if config.get('username') and config.get('password'):\n data = {'username': config['username'], 'password': config['password']}\n self._request(\n 'post',\n self.url + self.api_url_login,\n data=data,\n msg_on_fail='Authentication failed.',\n verify=config['verify_cert'],\n )\n logger.debug('Successfully connected to qBittorrent')\n self.connected = True\n\n def add_torrent_file(self, file_path, data, verify_cert):\n if not self.connected:\n raise plugin.PluginError('Not connected.')\n multipart_data = {k: (None, v) for k, v in data.items()}\n with open(file_path, 'rb') as f:\n multipart_data['torrents'] = f\n self._request(\n 'post',\n self.url + self.api_url_upload,\n msg_on_fail='Failed to add file to qBittorrent',\n files=multipart_data,\n verify=verify_cert,\n )\n logger.debug('Added torrent file {} to qBittorrent', file_path)\n\n def add_torrent_url(self, url, data, verify_cert):\n if not self.connected:\n raise plugin.PluginError('Not connected.')\n data['urls'] = url\n multipart_data = {k: (None, v) for k, v in data.items()}\n self._request(\n 'post',\n self.url + self.api_url_download,\n msg_on_fail='Failed to add file to qBittorrent',\n files=multipart_data,\n verify=verify_cert,\n )\n logger.debug('Added url {} to qBittorrent', url)\n\n @staticmethod\n def prepare_config(config):\n if isinstance(config, bool):\n config = {'enabled': config}\n config.setdefault('enabled', True)\n config.setdefault('host', 'localhost')\n config.setdefault('port', 8080)\n config.setdefault('use_ssl', False)\n config.setdefault('verify_cert', True)\n config.setdefault('label', '')\n config.setdefault('maxupspeed', 0)\n config.setdefault('maxdownspeed', 0)\n config.setdefault('fail_html', True)\n return config\n\n def add_entries(self, task, config):\n for entry in task.accepted:\n form_data = {}\n try:\n save_path = entry.render(entry.get('path', config.get('path', '')))\n if save_path:\n form_data['savepath'] = save_path\n except RenderError as e:\n logger.error('Error setting path for {}: {}', entry['title'], e)\n\n label = entry.render(entry.get('label', config.get('label', '')))\n if label:\n 
form_data['label'] = label # qBittorrent v3.3.3-\n form_data['category'] = label # qBittorrent v3.3.4+\n\n add_paused = entry.get('add_paused', config.get('add_paused'))\n if add_paused:\n form_data['paused'] = 'true'\n\n maxupspeed = entry.get('maxupspeed', config.get('maxupspeed'))\n if maxupspeed:\n form_data['upLimit'] = maxupspeed * 1024\n\n maxdownspeed = entry.get('maxdownspeed', config.get('maxdownspeed'))\n if maxdownspeed:\n form_data['dlLimit'] = maxdownspeed * 1024\n\n is_magnet = entry['url'].startswith('magnet:')\n\n if task.manager.options.test:\n logger.info('Test mode.')\n logger.info('Would add torrent to qBittorrent with:')\n if not is_magnet:\n logger.info('File: {}', entry.get('file'))\n else:\n logger.info('Url: {}', entry.get('url'))\n logger.info('Save path: {}', form_data.get('savepath'))\n logger.info('Label: {}', form_data.get('label'))\n logger.info('Paused: {}', form_data.get('paused', 'false'))\n if maxupspeed:\n logger.info('Upload Speed Limit: {}', form_data.get('upLimit'))\n if maxdownspeed:\n logger.info('Download Speed Limit: {}', form_data.get('dlLimit'))\n continue\n\n if not is_magnet:\n if 'file' not in entry:\n entry.fail('File missing?')\n continue\n if not os.path.exists(entry['file']):\n tmp_path = os.path.join(task.manager.config_base, 'temp')\n logger.debug('entry: {}', entry)\n logger.debug('temp: {}', ', '.join(os.listdir(tmp_path)))\n entry.fail(\"Downloaded temp file '%s' doesn't exist!?\" % entry['file'])\n continue\n self.add_torrent_file(entry['file'], form_data, config['verify_cert'])\n else:\n self.add_torrent_url(entry['url'], form_data, config['verify_cert'])\n\n @plugin.priority(120)\n def on_task_download(self, task, config):\n \"\"\"\n Call download plugin to generate torrent files to load into\n qBittorrent.\n \"\"\"\n config = self.prepare_config(config)\n if not config['enabled']:\n return\n if 'download' not in task.config:\n download = plugin.get('download', self)\n download.get_temp_files(task, handle_magnets=True, fail_html=config['fail_html'])\n\n @plugin.priority(135)\n def on_task_output(self, task, config):\n \"\"\"Add torrents to qBittorrent at exit.\"\"\"\n if task.accepted:\n config = self.prepare_config(config)\n self.connect(config)\n self.add_entries(task, config)\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(OutputQBitTorrent, 'qbittorrent', api_ver=2)\n", "path": "flexget/plugins/clients/qbittorrent.py"}], "after_files": [{"content": "import os\n\nfrom loguru import logger\nfrom requests import Session\nfrom requests.exceptions import RequestException\n\nfrom flexget import plugin\nfrom flexget.event import event\nfrom flexget.utils.template import RenderError\n\nlogger = logger.bind(name='qbittorrent')\n\n\nclass OutputQBitTorrent:\n \"\"\"\n Example:\n\n qbittorrent:\n username: <USERNAME> (default: (none))\n password: <PASSWORD> (default: (none))\n host: <HOSTNAME> (default: localhost)\n port: <PORT> (default: 8080)\n use_ssl: <SSL> (default: False)\n verify_cert: <VERIFY> (default: True)\n path: <OUTPUT_DIR> (default: (none))\n label: <LABEL> (default: (none))\n maxupspeed: <torrent upload speed limit> (default: 0)\n maxdownspeed: <torrent download speed limit> (default: 0)\n add_paused: <ADD_PAUSED> (default: False)\n \"\"\"\n\n schema = {\n 'anyOf': [\n {'type': 'boolean'},\n {\n 'type': 'object',\n 'properties': {\n 'username': {'type': 'string'},\n 'password': {'type': 'string'},\n 'host': {'type': 'string'},\n 'port': {'type': 'integer'},\n 'use_ssl': {'type': 'boolean'},\n 
'verify_cert': {'type': 'boolean'},\n 'path': {'type': 'string'},\n 'label': {'type': 'string'},\n 'maxupspeed': {'type': 'integer'},\n 'maxdownspeed': {'type': 'integer'},\n 'fail_html': {'type': 'boolean'},\n 'add_paused': {'type': 'boolean'},\n 'skip_check': {'type': 'boolean'},\n },\n 'additionalProperties': False,\n },\n ]\n }\n\n def __init__(self):\n super().__init__()\n self.session = Session()\n self.api_url_login = None\n self.api_url_upload = None\n self.api_url_download = None\n self.url = None\n self.connected = False\n\n def _request(self, method, url, msg_on_fail=None, **kwargs):\n try:\n response = self.session.request(method, url, **kwargs)\n if response.text == \"Ok.\":\n return response\n msg = msg_on_fail if msg_on_fail else f'Failure. URL: {url}, data: {kwargs}'\n except RequestException as e:\n msg = str(e)\n raise plugin.PluginError(f'Error when trying to send request to qBittorrent: {msg}')\n\n def check_api_version(self, msg_on_fail, verify=True):\n try:\n url = self.url + \"/api/v2/app/webapiVersion\"\n response = self.session.request('get', url, verify=verify)\n if response.status_code != 404:\n self.api_url_login = '/api/v2/auth/login'\n self.api_url_upload = '/api/v2/torrents/add'\n self.api_url_download = '/api/v2/torrents/add'\n return response\n\n url = self.url + \"/version/api\"\n response = self.session.request('get', url, verify=verify)\n if response.status_code != 404:\n self.api_url_login = '/login'\n self.api_url_upload = '/command/upload'\n self.api_url_download = '/command/download'\n return response\n\n msg = 'Failure. URL: {}'.format(url) if not msg_on_fail else msg_on_fail\n except RequestException as e:\n msg = str(e)\n raise plugin.PluginError(\n 'Error when trying to send request to qBittorrent: {}'.format(msg)\n )\n\n def connect(self, config):\n \"\"\"\n Connect to qBittorrent Web UI. 
Username and password not necessary\n if 'Bypass authentication for localhost' is checked and host is\n 'localhost'.\n \"\"\"\n self.url = '{}://{}:{}'.format(\n 'https' if config['use_ssl'] else 'http', config['host'], config['port']\n )\n self.check_api_version('Check API version failed.', verify=config['verify_cert'])\n if config.get('username') and config.get('password'):\n data = {'username': config['username'], 'password': config['password']}\n self._request(\n 'post',\n self.url + self.api_url_login,\n data=data,\n msg_on_fail='Authentication failed.',\n verify=config['verify_cert'],\n )\n logger.debug('Successfully connected to qBittorrent')\n self.connected = True\n\n def add_torrent_file(self, file_path, data, verify_cert):\n if not self.connected:\n raise plugin.PluginError('Not connected.')\n multipart_data = {k: (None, v) for k, v in data.items()}\n with open(file_path, 'rb') as f:\n multipart_data['torrents'] = f\n self._request(\n 'post',\n self.url + self.api_url_upload,\n msg_on_fail='Failed to add file to qBittorrent',\n files=multipart_data,\n verify=verify_cert,\n )\n logger.debug('Added torrent file {} to qBittorrent', file_path)\n\n def add_torrent_url(self, url, data, verify_cert):\n if not self.connected:\n raise plugin.PluginError('Not connected.')\n data['urls'] = url\n multipart_data = {k: (None, v) for k, v in data.items()}\n self._request(\n 'post',\n self.url + self.api_url_download,\n msg_on_fail='Failed to add file to qBittorrent',\n files=multipart_data,\n verify=verify_cert,\n )\n logger.debug('Added url {} to qBittorrent', url)\n\n @staticmethod\n def prepare_config(config):\n if isinstance(config, bool):\n config = {'enabled': config}\n config.setdefault('enabled', True)\n config.setdefault('host', 'localhost')\n config.setdefault('port', 8080)\n config.setdefault('use_ssl', False)\n config.setdefault('verify_cert', True)\n config.setdefault('label', '')\n config.setdefault('maxupspeed', 0)\n config.setdefault('maxdownspeed', 0)\n config.setdefault('fail_html', True)\n return config\n\n def add_entries(self, task, config):\n for entry in task.accepted:\n form_data = {}\n try:\n save_path = entry.render(entry.get('path', config.get('path', '')))\n if save_path:\n form_data['savepath'] = save_path\n except RenderError as e:\n logger.error('Error setting path for {}: {}', entry['title'], e)\n\n label = entry.render(entry.get('label', config.get('label', '')))\n if label:\n form_data['label'] = label # qBittorrent v3.3.3-\n form_data['category'] = label # qBittorrent v3.3.4+\n\n add_paused = entry.get('add_paused', config.get('add_paused'))\n if add_paused:\n form_data['paused'] = 'true'\n\n skip_check = entry.get('skip_check', config.get('skip_check'))\n if skip_check:\n form_data['skip_checking'] = 'true'\n\n maxupspeed = entry.get('maxupspeed', config.get('maxupspeed'))\n if maxupspeed:\n form_data['upLimit'] = maxupspeed * 1024\n\n maxdownspeed = entry.get('maxdownspeed', config.get('maxdownspeed'))\n if maxdownspeed:\n form_data['dlLimit'] = maxdownspeed * 1024\n\n is_magnet = entry['url'].startswith('magnet:')\n\n if task.manager.options.test:\n logger.info('Test mode.')\n logger.info('Would add torrent to qBittorrent with:')\n if not is_magnet:\n logger.info('File: {}', entry.get('file'))\n else:\n logger.info('Url: {}', entry.get('url'))\n logger.info('Save path: {}', form_data.get('savepath'))\n logger.info('Label: {}', form_data.get('label'))\n logger.info('Paused: {}', form_data.get('paused', 'false'))\n logger.info('Skip Hash Check: {}', 
form_data.get('skip_checking', 'false'))\n if maxupspeed:\n logger.info('Upload Speed Limit: {}', form_data.get('upLimit'))\n if maxdownspeed:\n logger.info('Download Speed Limit: {}', form_data.get('dlLimit'))\n continue\n\n if not is_magnet:\n if 'file' not in entry:\n entry.fail('File missing?')\n continue\n if not os.path.exists(entry['file']):\n tmp_path = os.path.join(task.manager.config_base, 'temp')\n logger.debug('entry: {}', entry)\n logger.debug('temp: {}', ', '.join(os.listdir(tmp_path)))\n entry.fail(\"Downloaded temp file '%s' doesn't exist!?\" % entry['file'])\n continue\n self.add_torrent_file(entry['file'], form_data, config['verify_cert'])\n else:\n self.add_torrent_url(entry['url'], form_data, config['verify_cert'])\n\n @plugin.priority(120)\n def on_task_download(self, task, config):\n \"\"\"\n Call download plugin to generate torrent files to load into\n qBittorrent.\n \"\"\"\n config = self.prepare_config(config)\n if not config['enabled']:\n return\n if 'download' not in task.config:\n download = plugin.get('download', self)\n download.get_temp_files(task, handle_magnets=True, fail_html=config['fail_html'])\n\n @plugin.priority(135)\n def on_task_output(self, task, config):\n \"\"\"Add torrents to qBittorrent at exit.\"\"\"\n if task.accepted:\n config = self.prepare_config(config)\n self.connect(config)\n self.add_entries(task, config)\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(OutputQBitTorrent, 'qbittorrent', api_ver=2)\n", "path": "flexget/plugins/clients/qbittorrent.py"}]} | 3,100 | 350 |
gh_patches_debug_4480 | rasdani/github-patches | git_diff | buildbot__buildbot-5744 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
GitHubPullrequestPoller token secret gives builtins.TypeError:
When attempting the following in GitHubPullrequestPoller:
```
token=util.Secret("githubToken"),
```
I receive: builtins.TypeError: must be str, not Secret
and similarly via Interpolate:
```
token=util.Interpolate("%(secret:githubToken)s"),
```
builtins.TypeError: must be str, not Interpolate
Apologies if I'm doing this wrong but given:
https://docs.buildbot.net/latest/manual/secretsmanagement.html#as-argument-to-services
I had expected this to work. I got the impression that the secret manager was not yet supported across the board, so if this is an example of an unsupported use then it seems like a nice one to have.
Thanks all for a great project.
Incidentally in both cases I can cast via str() and avoid this issue but then run into another issue which I will report separately.
--- END ISSUE ---
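The error itself comes from plain string concatenation: `util.Secret` and `util.Interpolate` are renderable placeholder objects rather than strings, and the poller builds its Authorization header with `'token ' + token`. A stripped-down illustration of the failing expression (simplified from the plugin code shown below):
```python
from buildbot.plugins import util

token = util.Secret("githubToken")
header = 'token ' + token   # builtins.TypeError: must be str, not Secret
```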
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `master/buildbot/changes/github.py`
Content:
```
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16 from datetime import datetime
17 from fnmatch import fnmatch
18
19 from twisted.internet import defer
20
21 from buildbot import config
22 from buildbot.changes import base
23 from buildbot.util import bytes2unicode
24 from buildbot.util import datetime2epoch
25 from buildbot.util import httpclientservice
26 from buildbot.util.logger import Logger
27 from buildbot.util.state import StateMixin
28
29 log = Logger()
30
31 HOSTED_BASE_URL = "https://api.github.com"
32 link_urls = {
33 "https": "clone_url",
34 "svn": "svn_url",
35 "git": "git_url",
36 "ssh": "ssh_url"
37 }
38
39
40 class PullRequestMixin:
41 def extractProperties(self, payload):
42 def flatten(properties, base, info_dict):
43 for k, v in info_dict.items():
44 name = ".".join([base, k])
45 if isinstance(v, dict):
46 flatten(properties, name, v)
47 elif any([fnmatch(name, expr)
48 for expr in self.github_property_whitelist]):
49 properties[name] = v
50
51 properties = {}
52 flatten(properties, "github", payload)
53 return properties
54
55
56 class GitHubPullrequestPoller(base.ReconfigurablePollingChangeSource,
57 StateMixin, PullRequestMixin):
58 compare_attrs = ("owner", "repo", "token", "branches", "pollInterval",
59 "category", "pollAtLaunch", "name")
60 db_class_name = 'GitHubPullrequestPoller'
61
62 def __init__(self, owner, repo, **kwargs):
63 name = kwargs.get("name")
64 if not name:
65 kwargs["name"] = "GitHubPullrequestPoller:" + owner + "/" + repo
66 super().__init__(owner, repo, **kwargs)
67
68 def checkConfig(self,
69 owner,
70 repo,
71 branches=None,
72 category='pull',
73 baseURL=None,
74 pullrequest_filter=True,
75 token=None,
76 magic_link=False,
77 repository_type="https",
78 github_property_whitelist=None,
79 **kwargs):
80 if repository_type not in ["https", "svn", "git", "ssh"]:
81 config.error(
82 "repository_type must be one of {https, svn, git, ssh}")
83 super().checkConfig(name=self.name, **kwargs)
84
85 @defer.inlineCallbacks
86 def reconfigService(self,
87 owner,
88 repo,
89 branches=None,
90 pollInterval=10 * 60,
91 category=None,
92 baseURL=None,
93 pullrequest_filter=True,
94 token=None,
95 pollAtLaunch=False,
96 magic_link=False,
97 repository_type="https",
98 github_property_whitelist=None,
99 **kwargs):
100 yield super().reconfigService(name=self.name, **kwargs)
101
102 if baseURL is None:
103 baseURL = HOSTED_BASE_URL
104 if baseURL.endswith('/'):
105 baseURL = baseURL[:-1]
106
107 http_headers = {'User-Agent': 'Buildbot'}
108 if token is not None:
109 http_headers.update({'Authorization': 'token ' + token})
110
111 self._http = yield httpclientservice.HTTPClientService.getService(
112 self.master, baseURL, headers=http_headers)
113
114 self.token = token
115 self.owner = owner
116 self.repo = repo
117 self.branches = branches
118 self.github_property_whitelist = github_property_whitelist
119 self.pollInterval = pollInterval
120 self.pollAtLaunch = pollAtLaunch
121 self.repository_type = link_urls[repository_type]
122 self.magic_link = magic_link
123
124 if github_property_whitelist is None:
125 self.github_property_whitelist = []
126
127 if callable(pullrequest_filter):
128 self.pullrequest_filter = pullrequest_filter
129 else:
130 self.pullrequest_filter = (lambda _: pullrequest_filter)
131
132 self.category = category if callable(category) else bytes2unicode(
133 category)
134
135 def describe(self):
136 return ("GitHubPullrequestPoller watching the "
137 "GitHub repository {}/{}").format(self.owner, self.repo)
138
139 @defer.inlineCallbacks
140 def _getPullInformation(self, pull_number):
141 result = yield self._http.get('/'.join(
142 ['/repos', self.owner, self.repo, 'pulls', str(pull_number)]))
143 my_json = yield result.json()
144 return my_json
145
146 @defer.inlineCallbacks
147 def _getPulls(self):
148 log.debug(("GitHubPullrequestPoller: polling "
149 "GitHub repository {}/{}, branches: {}").format(self.owner, self.repo,
150 self.branches))
151 result = yield self._http.get('/'.join(
152 ['/repos', self.owner, self.repo, 'pulls']))
153 my_json = yield result.json()
154 if result.code != 200:
155 message = my_json.get('message', 'unknown')
156 log.error("GitHubPullrequestPoller error {0.code} '{1}' "
157 "while loading {0.url}".format(result, message))
158 return []
159 return my_json
160
161 @defer.inlineCallbacks
162 def _getFiles(self, prnumber):
163 result = yield self._http.get("/".join([
164 '/repos', self.owner, self.repo, 'pulls', str(prnumber), 'files'
165 ]))
166 my_json = yield result.json()
167
168 return [f["filename"] for f in my_json]
169
170 @defer.inlineCallbacks
171 def _getCommitters(self, prnumber):
172 result = yield self._http.get("/".join([
173 '/repos', self.owner, self.repo, 'pulls', str(prnumber), 'commits'
174 ]))
175 my_json = yield result.json()
176
177 return [[c["commit"]["committer"]["name"],
178 c["commit"]["committer"]["email"]] for c in my_json]
179
180 @defer.inlineCallbacks
181 def _getAuthors(self, prnumber):
182 result = yield self._http.get("/".join([
183 '/repos', self.owner, self.repo, 'pulls', str(prnumber), 'commits'
184 ]))
185 my_json = yield result.json()
186
187 return [[a["commit"]["author"]["name"], a["commit"]["author"]["email"]] for a in my_json]
188
189 @defer.inlineCallbacks
190 def _getCurrentRev(self, prnumber):
191 # Get currently assigned revision of PR number
192
193 result = yield self._getStateObjectId()
194 rev = yield self.master.db.state.getState(result, 'pull_request%d' %
195 prnumber, None)
196 return rev
197
198 @defer.inlineCallbacks
199 def _setCurrentRev(self, prnumber, rev):
200 # Set the updated revision for PR number.
201
202 result = yield self._getStateObjectId()
203 yield self.master.db.state.setState(result,
204 'pull_request%d' % prnumber, rev)
205
206 @defer.inlineCallbacks
207 def _getStateObjectId(self):
208 # Return a deferred for object id in state db.
209 result = yield self.master.db.state.getObjectId('{}/{}'.format(self.owner, self.repo),
210 self.db_class_name)
211 return result
212
213 @defer.inlineCallbacks
214 def _processChanges(self, github_result):
215 for pr in github_result:
216 # Track PRs for specified branches
217 base_branch = pr['base']['ref']
218 prnumber = pr['number']
219 revision = pr['head']['sha']
220
221 # Check to see if the branch is set or matches
222 if self.branches is not None and base_branch not in self.branches:
223 continue
224 if (self.pullrequest_filter is not None and
225 not self.pullrequest_filter(pr)):
226 continue
227 current = yield self._getCurrentRev(prnumber)
228 if not current or current[0:12] != revision[0:12]:
229 # Access title, repo, html link, and comments
230 pr = yield self._getPullInformation(prnumber)
231 title = pr['title']
232 if self.magic_link:
233 branch = 'refs/pull/{:d}/merge'.format(prnumber)
234 repo = pr['base']['repo'][self.repository_type]
235 else:
236 branch = pr['head']['ref']
237 repo = pr['head']['repo'][self.repository_type]
238 revlink = pr['html_url']
239 comments = pr['body']
240 updated = datetime.strptime(pr['updated_at'],
241 '%Y-%m-%dT%H:%M:%SZ')
242 # update database
243 yield self._setCurrentRev(prnumber, revision)
244
245 project = pr['base']['repo']['full_name']
246 commits = pr['commits']
247
248 dl = defer.DeferredList(
249 [self._getAuthors(prnumber), self._getCommitters(prnumber),
250 self._getFiles(prnumber)],
251 consumeErrors=True)
252
253 results = yield dl
254 failures = [r[1] for r in results if not r[0]]
255 if failures:
256 for failure in failures:
257 log.error("while processing changes for "
258 "Pullrequest {} revision {}".format(
259 prnumber, revision))
260 # Fail on the first error!
261 failures[0].raiseException()
262 [authors, committers, files] = [r[1] for r in results]
263
264 author = authors[0][0] + " <" + authors[0][1] + ">"
265
266 committer = committers[0][0] + " <" + committers[0][1] + ">"
267
268 properties = self.extractProperties(pr)
269
270 # emit the change
271 yield self.master.data.updates.addChange(
272 author=author,
273 committer=committer,
274 revision=bytes2unicode(revision),
275 revlink=bytes2unicode(revlink),
276 comments='GitHub Pull Request #{0} ({1} commit{2})\n{3}\n{4}'.
277 format(prnumber, commits, 's'
278 if commits > 0 else '', title, comments),
279 when_timestamp=datetime2epoch(updated),
280 branch=bytes2unicode(branch),
281 category=self.category,
282 project=project,
283 repository=bytes2unicode(repo),
284 files=files,
285 properties=properties,
286 src='git')
287
288 @defer.inlineCallbacks
289 def poll(self):
290 result = yield self._getPulls()
291 yield self._processChanges(result)
292
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/master/buildbot/changes/github.py b/master/buildbot/changes/github.py
--- a/master/buildbot/changes/github.py
+++ b/master/buildbot/changes/github.py
@@ -106,6 +106,7 @@
http_headers = {'User-Agent': 'Buildbot'}
if token is not None:
+ token = yield self.renderSecrets(token)
http_headers.update({'Authorization': 'token ' + token})
self._http = yield httpclientservice.HTTPClientService.getService(
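With the secret rendered inside `reconfigService`, the configuration from the report should work as written. A hypothetical `master.cfg` sketch of that usage; the owner, repo and poll interval are placeholders, and `c` is the usual BuildmasterConfig dict:
```python
from buildbot.plugins import changes, util

c['change_source'].append(
    changes.GitHubPullrequestPoller(
        owner="example-org",
        repo="example-repo",
        token=util.Secret("githubToken"),   # now rendered to a plain string before use
        pollInterval=5 * 60,
    )
)
```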
| {"golden_diff": "diff --git a/master/buildbot/changes/github.py b/master/buildbot/changes/github.py\n--- a/master/buildbot/changes/github.py\n+++ b/master/buildbot/changes/github.py\n@@ -106,6 +106,7 @@\n \n http_headers = {'User-Agent': 'Buildbot'}\n if token is not None:\n+ token = yield self.renderSecrets(token)\n http_headers.update({'Authorization': 'token ' + token})\n \n self._http = yield httpclientservice.HTTPClientService.getService(\n", "issue": "GitHubPullrequestPoller token secret gives builtins.TypeError:\nWhen attempting the following in GitHubPullrequestPoller:\r\n```\r\ntoken=util.Secret(\"githubToken\"),\r\n```\r\nI receive: builtins.TypeError: must be str, not Secret\r\n\r\nand similarly via Interpolate:\r\n```\r\ntoken=util.Interpolate(\"%(secret:githubToken)s\"),\r\n```\r\nbuiltins.TypeError: must be str, not Interpolate\r\n\r\nApologies if I'm doing this wrong but given:\r\nhttps://docs.buildbot.net/latest/manual/secretsmanagement.html#as-argument-to-services\r\nI had expected this to work. I got the impression that the secret manager was as yet not supported across the board and so if this is an example of an unsupported use then it seems like a nice one to have.\r\n\r\nThanks all for a great project.\r\n\r\nIncidentally in both cases I can cast via str() and avoid this issue but then run into another issue which I will report separately.\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nfrom datetime import datetime\nfrom fnmatch import fnmatch\n\nfrom twisted.internet import defer\n\nfrom buildbot import config\nfrom buildbot.changes import base\nfrom buildbot.util import bytes2unicode\nfrom buildbot.util import datetime2epoch\nfrom buildbot.util import httpclientservice\nfrom buildbot.util.logger import Logger\nfrom buildbot.util.state import StateMixin\n\nlog = Logger()\n\nHOSTED_BASE_URL = \"https://api.github.com\"\nlink_urls = {\n \"https\": \"clone_url\",\n \"svn\": \"svn_url\",\n \"git\": \"git_url\",\n \"ssh\": \"ssh_url\"\n}\n\n\nclass PullRequestMixin:\n def extractProperties(self, payload):\n def flatten(properties, base, info_dict):\n for k, v in info_dict.items():\n name = \".\".join([base, k])\n if isinstance(v, dict):\n flatten(properties, name, v)\n elif any([fnmatch(name, expr)\n for expr in self.github_property_whitelist]):\n properties[name] = v\n\n properties = {}\n flatten(properties, \"github\", payload)\n return properties\n\n\nclass GitHubPullrequestPoller(base.ReconfigurablePollingChangeSource,\n StateMixin, PullRequestMixin):\n compare_attrs = (\"owner\", \"repo\", \"token\", \"branches\", \"pollInterval\",\n \"category\", \"pollAtLaunch\", \"name\")\n db_class_name = 'GitHubPullrequestPoller'\n\n def __init__(self, owner, repo, **kwargs):\n name = kwargs.get(\"name\")\n if not name:\n kwargs[\"name\"] = \"GitHubPullrequestPoller:\" + owner + \"/\" + repo\n super().__init__(owner, repo, **kwargs)\n\n def checkConfig(self,\n owner,\n repo,\n branches=None,\n category='pull',\n baseURL=None,\n pullrequest_filter=True,\n token=None,\n magic_link=False,\n repository_type=\"https\",\n github_property_whitelist=None,\n **kwargs):\n if repository_type not in [\"https\", \"svn\", \"git\", \"ssh\"]:\n config.error(\n \"repository_type must be one of {https, svn, git, ssh}\")\n super().checkConfig(name=self.name, **kwargs)\n\n @defer.inlineCallbacks\n def reconfigService(self,\n owner,\n repo,\n branches=None,\n pollInterval=10 * 60,\n category=None,\n baseURL=None,\n pullrequest_filter=True,\n token=None,\n pollAtLaunch=False,\n magic_link=False,\n repository_type=\"https\",\n github_property_whitelist=None,\n **kwargs):\n yield super().reconfigService(name=self.name, **kwargs)\n\n if baseURL is None:\n baseURL = HOSTED_BASE_URL\n if baseURL.endswith('/'):\n baseURL = baseURL[:-1]\n\n http_headers = {'User-Agent': 'Buildbot'}\n if token is not None:\n http_headers.update({'Authorization': 'token ' + token})\n\n self._http = yield httpclientservice.HTTPClientService.getService(\n self.master, baseURL, headers=http_headers)\n\n self.token = token\n self.owner = owner\n self.repo = repo\n self.branches = branches\n self.github_property_whitelist = github_property_whitelist\n self.pollInterval = pollInterval\n self.pollAtLaunch = pollAtLaunch\n self.repository_type = link_urls[repository_type]\n self.magic_link = magic_link\n\n if github_property_whitelist is None:\n self.github_property_whitelist = []\n\n if callable(pullrequest_filter):\n self.pullrequest_filter = pullrequest_filter\n else:\n self.pullrequest_filter = (lambda _: pullrequest_filter)\n\n self.category = category if callable(category) else bytes2unicode(\n category)\n\n def 
describe(self):\n return (\"GitHubPullrequestPoller watching the \"\n \"GitHub repository {}/{}\").format(self.owner, self.repo)\n\n @defer.inlineCallbacks\n def _getPullInformation(self, pull_number):\n result = yield self._http.get('/'.join(\n ['/repos', self.owner, self.repo, 'pulls', str(pull_number)]))\n my_json = yield result.json()\n return my_json\n\n @defer.inlineCallbacks\n def _getPulls(self):\n log.debug((\"GitHubPullrequestPoller: polling \"\n \"GitHub repository {}/{}, branches: {}\").format(self.owner, self.repo,\n self.branches))\n result = yield self._http.get('/'.join(\n ['/repos', self.owner, self.repo, 'pulls']))\n my_json = yield result.json()\n if result.code != 200:\n message = my_json.get('message', 'unknown')\n log.error(\"GitHubPullrequestPoller error {0.code} '{1}' \"\n \"while loading {0.url}\".format(result, message))\n return []\n return my_json\n\n @defer.inlineCallbacks\n def _getFiles(self, prnumber):\n result = yield self._http.get(\"/\".join([\n '/repos', self.owner, self.repo, 'pulls', str(prnumber), 'files'\n ]))\n my_json = yield result.json()\n\n return [f[\"filename\"] for f in my_json]\n\n @defer.inlineCallbacks\n def _getCommitters(self, prnumber):\n result = yield self._http.get(\"/\".join([\n '/repos', self.owner, self.repo, 'pulls', str(prnumber), 'commits'\n ]))\n my_json = yield result.json()\n\n return [[c[\"commit\"][\"committer\"][\"name\"],\n c[\"commit\"][\"committer\"][\"email\"]] for c in my_json]\n\n @defer.inlineCallbacks\n def _getAuthors(self, prnumber):\n result = yield self._http.get(\"/\".join([\n '/repos', self.owner, self.repo, 'pulls', str(prnumber), 'commits'\n ]))\n my_json = yield result.json()\n\n return [[a[\"commit\"][\"author\"][\"name\"], a[\"commit\"][\"author\"][\"email\"]] for a in my_json]\n\n @defer.inlineCallbacks\n def _getCurrentRev(self, prnumber):\n # Get currently assigned revision of PR number\n\n result = yield self._getStateObjectId()\n rev = yield self.master.db.state.getState(result, 'pull_request%d' %\n prnumber, None)\n return rev\n\n @defer.inlineCallbacks\n def _setCurrentRev(self, prnumber, rev):\n # Set the updated revision for PR number.\n\n result = yield self._getStateObjectId()\n yield self.master.db.state.setState(result,\n 'pull_request%d' % prnumber, rev)\n\n @defer.inlineCallbacks\n def _getStateObjectId(self):\n # Return a deferred for object id in state db.\n result = yield self.master.db.state.getObjectId('{}/{}'.format(self.owner, self.repo),\n self.db_class_name)\n return result\n\n @defer.inlineCallbacks\n def _processChanges(self, github_result):\n for pr in github_result:\n # Track PRs for specified branches\n base_branch = pr['base']['ref']\n prnumber = pr['number']\n revision = pr['head']['sha']\n\n # Check to see if the branch is set or matches\n if self.branches is not None and base_branch not in self.branches:\n continue\n if (self.pullrequest_filter is not None and\n not self.pullrequest_filter(pr)):\n continue\n current = yield self._getCurrentRev(prnumber)\n if not current or current[0:12] != revision[0:12]:\n # Access title, repo, html link, and comments\n pr = yield self._getPullInformation(prnumber)\n title = pr['title']\n if self.magic_link:\n branch = 'refs/pull/{:d}/merge'.format(prnumber)\n repo = pr['base']['repo'][self.repository_type]\n else:\n branch = pr['head']['ref']\n repo = pr['head']['repo'][self.repository_type]\n revlink = pr['html_url']\n comments = pr['body']\n updated = datetime.strptime(pr['updated_at'],\n '%Y-%m-%dT%H:%M:%SZ')\n # update database\n 
yield self._setCurrentRev(prnumber, revision)\n\n project = pr['base']['repo']['full_name']\n commits = pr['commits']\n\n dl = defer.DeferredList(\n [self._getAuthors(prnumber), self._getCommitters(prnumber),\n self._getFiles(prnumber)],\n consumeErrors=True)\n\n results = yield dl\n failures = [r[1] for r in results if not r[0]]\n if failures:\n for failure in failures:\n log.error(\"while processing changes for \"\n \"Pullrequest {} revision {}\".format(\n prnumber, revision))\n # Fail on the first error!\n failures[0].raiseException()\n [authors, committers, files] = [r[1] for r in results]\n\n author = authors[0][0] + \" <\" + authors[0][1] + \">\"\n\n committer = committers[0][0] + \" <\" + committers[0][1] + \">\"\n\n properties = self.extractProperties(pr)\n\n # emit the change\n yield self.master.data.updates.addChange(\n author=author,\n committer=committer,\n revision=bytes2unicode(revision),\n revlink=bytes2unicode(revlink),\n comments='GitHub Pull Request #{0} ({1} commit{2})\\n{3}\\n{4}'.\n format(prnumber, commits, 's'\n if commits > 0 else '', title, comments),\n when_timestamp=datetime2epoch(updated),\n branch=bytes2unicode(branch),\n category=self.category,\n project=project,\n repository=bytes2unicode(repo),\n files=files,\n properties=properties,\n src='git')\n\n @defer.inlineCallbacks\n def poll(self):\n result = yield self._getPulls()\n yield self._processChanges(result)\n", "path": "master/buildbot/changes/github.py"}], "after_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nfrom datetime import datetime\nfrom fnmatch import fnmatch\n\nfrom twisted.internet import defer\n\nfrom buildbot import config\nfrom buildbot.changes import base\nfrom buildbot.util import bytes2unicode\nfrom buildbot.util import datetime2epoch\nfrom buildbot.util import httpclientservice\nfrom buildbot.util.logger import Logger\nfrom buildbot.util.state import StateMixin\n\nlog = Logger()\n\nHOSTED_BASE_URL = \"https://api.github.com\"\nlink_urls = {\n \"https\": \"clone_url\",\n \"svn\": \"svn_url\",\n \"git\": \"git_url\",\n \"ssh\": \"ssh_url\"\n}\n\n\nclass PullRequestMixin:\n def extractProperties(self, payload):\n def flatten(properties, base, info_dict):\n for k, v in info_dict.items():\n name = \".\".join([base, k])\n if isinstance(v, dict):\n flatten(properties, name, v)\n elif any([fnmatch(name, expr)\n for expr in self.github_property_whitelist]):\n properties[name] = v\n\n properties = {}\n flatten(properties, \"github\", payload)\n return properties\n\n\nclass GitHubPullrequestPoller(base.ReconfigurablePollingChangeSource,\n StateMixin, PullRequestMixin):\n compare_attrs = (\"owner\", \"repo\", \"token\", \"branches\", \"pollInterval\",\n \"category\", \"pollAtLaunch\", \"name\")\n db_class_name = 'GitHubPullrequestPoller'\n\n def __init__(self, owner, repo, **kwargs):\n name = kwargs.get(\"name\")\n if not name:\n kwargs[\"name\"] = \"GitHubPullrequestPoller:\" + owner + \"/\" + repo\n super().__init__(owner, repo, **kwargs)\n\n def checkConfig(self,\n owner,\n repo,\n branches=None,\n category='pull',\n baseURL=None,\n pullrequest_filter=True,\n token=None,\n magic_link=False,\n repository_type=\"https\",\n github_property_whitelist=None,\n **kwargs):\n if repository_type not in [\"https\", \"svn\", \"git\", \"ssh\"]:\n config.error(\n \"repository_type must be one of {https, svn, git, ssh}\")\n super().checkConfig(name=self.name, **kwargs)\n\n @defer.inlineCallbacks\n def reconfigService(self,\n owner,\n repo,\n branches=None,\n pollInterval=10 * 60,\n category=None,\n baseURL=None,\n pullrequest_filter=True,\n token=None,\n pollAtLaunch=False,\n magic_link=False,\n repository_type=\"https\",\n github_property_whitelist=None,\n **kwargs):\n yield super().reconfigService(name=self.name, **kwargs)\n\n if baseURL is None:\n baseURL = HOSTED_BASE_URL\n if baseURL.endswith('/'):\n baseURL = baseURL[:-1]\n\n http_headers = {'User-Agent': 'Buildbot'}\n if token is not None:\n token = yield self.renderSecrets(token)\n http_headers.update({'Authorization': 'token ' + token})\n\n self._http = yield httpclientservice.HTTPClientService.getService(\n self.master, baseURL, headers=http_headers)\n\n self.token = token\n self.owner = owner\n self.repo = repo\n self.branches = branches\n self.github_property_whitelist = github_property_whitelist\n self.pollInterval = pollInterval\n self.pollAtLaunch = pollAtLaunch\n self.repository_type = link_urls[repository_type]\n self.magic_link = magic_link\n\n if github_property_whitelist is None:\n self.github_property_whitelist = []\n\n if callable(pullrequest_filter):\n self.pullrequest_filter = pullrequest_filter\n else:\n self.pullrequest_filter = (lambda _: pullrequest_filter)\n\n self.category = category if callable(category) else 
bytes2unicode(\n category)\n\n def describe(self):\n return (\"GitHubPullrequestPoller watching the \"\n \"GitHub repository {}/{}\").format(self.owner, self.repo)\n\n @defer.inlineCallbacks\n def _getPullInformation(self, pull_number):\n result = yield self._http.get('/'.join(\n ['/repos', self.owner, self.repo, 'pulls', str(pull_number)]))\n my_json = yield result.json()\n return my_json\n\n @defer.inlineCallbacks\n def _getPulls(self):\n log.debug((\"GitHubPullrequestPoller: polling \"\n \"GitHub repository {}/{}, branches: {}\").format(self.owner, self.repo,\n self.branches))\n result = yield self._http.get('/'.join(\n ['/repos', self.owner, self.repo, 'pulls']))\n my_json = yield result.json()\n if result.code != 200:\n message = my_json.get('message', 'unknown')\n log.error(\"GitHubPullrequestPoller error {0.code} '{1}' \"\n \"while loading {0.url}\".format(result, message))\n return []\n return my_json\n\n @defer.inlineCallbacks\n def _getFiles(self, prnumber):\n result = yield self._http.get(\"/\".join([\n '/repos', self.owner, self.repo, 'pulls', str(prnumber), 'files'\n ]))\n my_json = yield result.json()\n\n return [f[\"filename\"] for f in my_json]\n\n @defer.inlineCallbacks\n def _getCommitters(self, prnumber):\n result = yield self._http.get(\"/\".join([\n '/repos', self.owner, self.repo, 'pulls', str(prnumber), 'commits'\n ]))\n my_json = yield result.json()\n\n return [[c[\"commit\"][\"committer\"][\"name\"],\n c[\"commit\"][\"committer\"][\"email\"]] for c in my_json]\n\n @defer.inlineCallbacks\n def _getAuthors(self, prnumber):\n result = yield self._http.get(\"/\".join([\n '/repos', self.owner, self.repo, 'pulls', str(prnumber), 'commits'\n ]))\n my_json = yield result.json()\n\n return [[a[\"commit\"][\"author\"][\"name\"], a[\"commit\"][\"author\"][\"email\"]] for a in my_json]\n\n @defer.inlineCallbacks\n def _getCurrentRev(self, prnumber):\n # Get currently assigned revision of PR number\n\n result = yield self._getStateObjectId()\n rev = yield self.master.db.state.getState(result, 'pull_request%d' %\n prnumber, None)\n return rev\n\n @defer.inlineCallbacks\n def _setCurrentRev(self, prnumber, rev):\n # Set the updated revision for PR number.\n\n result = yield self._getStateObjectId()\n yield self.master.db.state.setState(result,\n 'pull_request%d' % prnumber, rev)\n\n @defer.inlineCallbacks\n def _getStateObjectId(self):\n # Return a deferred for object id in state db.\n result = yield self.master.db.state.getObjectId('{}/{}'.format(self.owner, self.repo),\n self.db_class_name)\n return result\n\n @defer.inlineCallbacks\n def _processChanges(self, github_result):\n for pr in github_result:\n # Track PRs for specified branches\n base_branch = pr['base']['ref']\n prnumber = pr['number']\n revision = pr['head']['sha']\n\n # Check to see if the branch is set or matches\n if self.branches is not None and base_branch not in self.branches:\n continue\n if (self.pullrequest_filter is not None and\n not self.pullrequest_filter(pr)):\n continue\n current = yield self._getCurrentRev(prnumber)\n if not current or current[0:12] != revision[0:12]:\n # Access title, repo, html link, and comments\n pr = yield self._getPullInformation(prnumber)\n title = pr['title']\n if self.magic_link:\n branch = 'refs/pull/{:d}/merge'.format(prnumber)\n repo = pr['base']['repo'][self.repository_type]\n else:\n branch = pr['head']['ref']\n repo = pr['head']['repo'][self.repository_type]\n revlink = pr['html_url']\n comments = pr['body']\n updated = datetime.strptime(pr['updated_at'],\n 
'%Y-%m-%dT%H:%M:%SZ')\n # update database\n yield self._setCurrentRev(prnumber, revision)\n\n project = pr['base']['repo']['full_name']\n commits = pr['commits']\n\n dl = defer.DeferredList(\n [self._getAuthors(prnumber), self._getCommitters(prnumber),\n self._getFiles(prnumber)],\n consumeErrors=True)\n\n results = yield dl\n failures = [r[1] for r in results if not r[0]]\n if failures:\n for failure in failures:\n log.error(\"while processing changes for \"\n \"Pullrequest {} revision {}\".format(\n prnumber, revision))\n # Fail on the first error!\n failures[0].raiseException()\n [authors, committers, files] = [r[1] for r in results]\n\n author = authors[0][0] + \" <\" + authors[0][1] + \">\"\n\n committer = committers[0][0] + \" <\" + committers[0][1] + \">\"\n\n properties = self.extractProperties(pr)\n\n # emit the change\n yield self.master.data.updates.addChange(\n author=author,\n committer=committer,\n revision=bytes2unicode(revision),\n revlink=bytes2unicode(revlink),\n comments='GitHub Pull Request #{0} ({1} commit{2})\\n{3}\\n{4}'.\n format(prnumber, commits, 's'\n if commits > 0 else '', title, comments),\n when_timestamp=datetime2epoch(updated),\n branch=bytes2unicode(branch),\n category=self.category,\n project=project,\n repository=bytes2unicode(repo),\n files=files,\n properties=properties,\n src='git')\n\n @defer.inlineCallbacks\n def poll(self):\n result = yield self._getPulls()\n yield self._processChanges(result)\n", "path": "master/buildbot/changes/github.py"}]} | 3,558 | 113 |
gh_patches_debug_2663 | rasdani/github-patches | git_diff | litestar-org__litestar-2681 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Docs: Build errors
### Summary
```
/home/peter/PycharmProjects/litestar/litestar/plugins/base.py:docstring of litestar.plugins.base.InitPluginProtocol.on_app_init:5: ERROR: Error in "code-block" directive:
maximum 1 argument(s) allowed, 14 supplied.
.. code-block:: python
from litestar import Litestar, get
from litestar.di import Provide
from litestar.plugins import InitPluginProtocol
def get_name() -> str:
return "world"
@get("/my-path")
def my_route_handler(name: str) -> dict[str, str]:
return {"hello": name}
class MyPlugin(InitPluginProtocol):
def on_app_init(self, app_config: AppConfig) -> AppConfig:
app_config.dependencies["name"] = Provide(get_name)
app_config.route_handlers.append(my_route_handler)
return app_config
app = Litestar(plugins=[MyPlugin()])
```
And
```
/home/peter/PycharmProjects/litestar/docs/topics/deployment/manually-with-asgi-server.rst:2: WARNING: Title underline too short.
Manually with ASGI server
==========
```
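The first error is the standard reST requirement that a blank line separate a `code-block` directive from its indented content; without it, Sphinx parses the snippet's lines as directive arguments, hence "maximum 1 argument(s) allowed, 14 supplied". The second error only needs the title underline lengthened to match the heading. A minimal sketch of a correctly formatted docstring (hypothetical plugin class, not the actual Litestar source):

```python
class ExamplePlugin:
    def on_app_init(self, app_config):
        """Receive the app config instance.

        Examples:
            .. code-block:: python

                app = Litestar(plugins=[ExamplePlugin()])

        The blank line after the directive is what keeps Sphinx from
        reading the indented code as directive arguments.
        """
        return app_config
```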
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `litestar/plugins/base.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING, Any, Iterator, Protocol, TypeVar, Union, cast, runtime_checkable
4
5 if TYPE_CHECKING:
6 from click import Group
7
8 from litestar._openapi.schema_generation import SchemaCreator
9 from litestar.config.app import AppConfig
10 from litestar.dto import AbstractDTO
11 from litestar.openapi.spec import Schema
12 from litestar.typing import FieldDefinition
13
14 __all__ = (
15 "SerializationPluginProtocol",
16 "InitPluginProtocol",
17 "OpenAPISchemaPluginProtocol",
18 "OpenAPISchemaPlugin",
19 "PluginProtocol",
20 "CLIPluginProtocol",
21 "PluginRegistry",
22 )
23
24
25 @runtime_checkable
26 class InitPluginProtocol(Protocol):
27 """Protocol used to define plugins that affect the application's init process."""
28
29 __slots__ = ()
30
31 def on_app_init(self, app_config: AppConfig) -> AppConfig:
32 """Receive the :class:`AppConfig<.config.app.AppConfig>` instance after `on_app_init` hooks have been called.
33
34 Examples:
35 .. code-block:: python
36 from litestar import Litestar, get
37 from litestar.di import Provide
38 from litestar.plugins import InitPluginProtocol
39
40
41 def get_name() -> str:
42 return "world"
43
44
45 @get("/my-path")
46 def my_route_handler(name: str) -> dict[str, str]:
47 return {"hello": name}
48
49
50 class MyPlugin(InitPluginProtocol):
51 def on_app_init(self, app_config: AppConfig) -> AppConfig:
52 app_config.dependencies["name"] = Provide(get_name)
53 app_config.route_handlers.append(my_route_handler)
54 return app_config
55
56
57 app = Litestar(plugins=[MyPlugin()])
58
59 Args:
60 app_config: The :class:`AppConfig <litestar.config.app.AppConfig>` instance.
61
62 Returns:
63 The app config object.
64 """
65 return app_config # pragma: no cover
66
67
68 @runtime_checkable
69 class CLIPluginProtocol(Protocol):
70 """Plugin protocol to extend the CLI."""
71
72 def on_cli_init(self, cli: Group) -> None:
73 """Called when the CLI is initialized.
74
75 This can be used to extend or override existing commands.
76
77 Args:
78 cli: The root :class:`click.Group` of the Litestar CLI
79
80 Examples:
81 .. code-block:: python
82
83 from litestar import Litestar
84 from litestar.plugins import CLIPluginProtocol
85 from click import Group
86
87
88 class CLIPlugin(CLIPluginProtocol):
89 def on_cli_init(self, cli: Group) -> None:
90 @cli.command()
91 def is_debug_mode(app: Litestar):
92 print(app.debug)
93
94
95 app = Litestar(plugins=[CLIPlugin()])
96 """
97
98
99 @runtime_checkable
100 class SerializationPluginProtocol(Protocol):
101 """Protocol used to define a serialization plugin for DTOs."""
102
103 __slots__ = ()
104
105 def supports_type(self, field_definition: FieldDefinition) -> bool:
106 """Given a value of indeterminate type, determine if this value is supported by the plugin.
107
108 Args:
109 field_definition: A parsed type.
110
111 Returns:
112 Whether the type is supported by the plugin.
113 """
114 raise NotImplementedError()
115
116 def create_dto_for_type(self, field_definition: FieldDefinition) -> type[AbstractDTO]:
117 """Given a parsed type, create a DTO class.
118
119 Args:
120 field_definition: A parsed type.
121
122 Returns:
123 A DTO class.
124 """
125 raise NotImplementedError()
126
127
128 @runtime_checkable
129 class OpenAPISchemaPluginProtocol(Protocol):
130 """Plugin protocol to extend the support of OpenAPI schema generation for non-library types."""
131
132 __slots__ = ()
133
134 @staticmethod
135 def is_plugin_supported_type(value: Any) -> bool:
136 """Given a value of indeterminate type, determine if this value is supported by the plugin.
137
138 Args:
139 value: An arbitrary value.
140
141 Returns:
142 A typeguard dictating whether the value is supported by the plugin.
143 """
144 raise NotImplementedError()
145
146 def to_openapi_schema(self, field_definition: FieldDefinition, schema_creator: SchemaCreator) -> Schema:
147 """Given a type annotation, transform it into an OpenAPI schema class.
148
149 Args:
150 field_definition: An :class:`OpenAPI <litestar.openapi.spec.schema.Schema>` instance.
151 schema_creator: An instance of the openapi SchemaCreator.
152
153 Returns:
154 An :class:`OpenAPI <litestar.openapi.spec.schema.Schema>` instance.
155 """
156 raise NotImplementedError()
157
158
159 class OpenAPISchemaPlugin(OpenAPISchemaPluginProtocol):
160 """Plugin to extend the support of OpenAPI schema generation for non-library types."""
161
162 @staticmethod
163 def is_undefined_sentinel(value: Any) -> bool:
164 """Return ``True`` if ``value`` should be treated as an undefined field"""
165 return False
166
167 @staticmethod
168 def is_constrained_field(field_definition: FieldDefinition) -> bool:
169 """Return ``True`` if the field should be treated as constrained. If returning
170 ``True``, constraints should be defined in the field's extras
171 """
172 return False
173
174
175 PluginProtocol = Union[
176 SerializationPluginProtocol,
177 InitPluginProtocol,
178 OpenAPISchemaPluginProtocol,
179 OpenAPISchemaPlugin,
180 CLIPluginProtocol,
181 ]
182
183 PluginT = TypeVar("PluginT", bound=PluginProtocol)
184
185
186 class PluginRegistry:
187 __slots__ = {
188 "init": "Plugins that implement the InitPluginProtocol",
189 "openapi": "Plugins that implement the OpenAPISchemaPluginProtocol",
190 "serialization": "Plugins that implement the SerializationPluginProtocol",
191 "cli": "Plugins that implement the CLIPluginProtocol",
192 "_plugins_by_type": None,
193 "_plugins": None,
194 "_get_plugins_of_type": None,
195 }
196
197 def __init__(self, plugins: list[PluginProtocol]) -> None:
198 self._plugins_by_type = {type(p): p for p in plugins}
199 self._plugins = frozenset(plugins)
200 self.init = tuple(p for p in plugins if isinstance(p, InitPluginProtocol))
201 self.openapi = tuple(p for p in plugins if isinstance(p, OpenAPISchemaPluginProtocol))
202 self.serialization = tuple(p for p in plugins if isinstance(p, SerializationPluginProtocol))
203 self.cli = tuple(p for p in plugins if isinstance(p, CLIPluginProtocol))
204
205 def get(self, type_: type[PluginT]) -> PluginT:
206 """Return the registered plugin of ``type_``.
207
208 This should be used with subclasses of the plugin protocols.
209 """
210 try:
211 return cast(PluginT, self._plugins_by_type[type_]) # type: ignore[index]
212 except KeyError as e:
213 raise KeyError(f"No plugin of type {type_.__name__!r} registered") from e
214
215 def __iter__(self) -> Iterator[PluginProtocol]:
216 return iter(self._plugins)
217
218 def __contains__(self, item: PluginProtocol) -> bool:
219 return item in self._plugins
220
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/litestar/plugins/base.py b/litestar/plugins/base.py
--- a/litestar/plugins/base.py
+++ b/litestar/plugins/base.py
@@ -33,6 +33,7 @@
Examples:
.. code-block:: python
+
from litestar import Litestar, get
from litestar.di import Provide
from litestar.plugins import InitPluginProtocol
| {"golden_diff": "diff --git a/litestar/plugins/base.py b/litestar/plugins/base.py\n--- a/litestar/plugins/base.py\n+++ b/litestar/plugins/base.py\n@@ -33,6 +33,7 @@\n \n Examples:\n .. code-block:: python\n+\n from litestar import Litestar, get\n from litestar.di import Provide\n from litestar.plugins import InitPluginProtocol\n", "issue": "Docs: Build errors\n### Summary\n\n```\r\n/home/peter/PycharmProjects/litestar/litestar/plugins/base.py:docstring of litestar.plugins.base.InitPluginProtocol.on_app_init:5: ERROR: Error in \"code-block\" directive:\r\nmaximum 1 argument(s) allowed, 14 supplied.\r\n\r\n.. code-block:: python\r\n from litestar import Litestar, get\r\n from litestar.di import Provide\r\n from litestar.plugins import InitPluginProtocol\r\n\r\n\r\n def get_name() -> str:\r\n return \"world\"\r\n\r\n\r\n @get(\"/my-path\")\r\n def my_route_handler(name: str) -> dict[str, str]:\r\n return {\"hello\": name}\r\n\r\n\r\n class MyPlugin(InitPluginProtocol):\r\n def on_app_init(self, app_config: AppConfig) -> AppConfig:\r\n app_config.dependencies[\"name\"] = Provide(get_name)\r\n app_config.route_handlers.append(my_route_handler)\r\n return app_config\r\n\r\n\r\n app = Litestar(plugins=[MyPlugin()])\r\n```\r\n\r\nAnd\r\n\r\n```\r\n/home/peter/PycharmProjects/litestar/docs/topics/deployment/manually-with-asgi-server.rst:2: WARNING: Title underline too short.\r\n\r\nManually with ASGI server\r\n==========\r\n```\n\n<!-- POLAR PLEDGE BADGE START -->\n---\n> [!NOTE] \n> While we are open for sponsoring on [GitHub Sponsors](https://github.com/sponsors/litestar-org/) and \n> [OpenCollective](https://opencollective.com/litestar), we also utilize [Polar.sh](https://polar.sh/) to engage in pledge-based sponsorship.\n>\n> Check out all issues funded or available for funding [on our Polar.sh Litestar dashboard](https://polar.sh/litestar-org)\n> * If you would like to see an issue prioritized, make a pledge towards it!\n> * We receive the pledge once the issue is completed & verified\n> * This, along with engagement in the community, helps us know which features are a priority to our users.\n\n<a href=\"https://polar.sh/litestar-org/litestar/issues/2680\">\n<picture>\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/litestar-org/litestar/issues/2680/pledge.svg?darkmode=1\">\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/litestar-org/litestar/issues/2680/pledge.svg\">\n</picture>\n</a>\n<!-- POLAR PLEDGE BADGE END -->\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, Iterator, Protocol, TypeVar, Union, cast, runtime_checkable\n\nif TYPE_CHECKING:\n from click import Group\n\n from litestar._openapi.schema_generation import SchemaCreator\n from litestar.config.app import AppConfig\n from litestar.dto import AbstractDTO\n from litestar.openapi.spec import Schema\n from litestar.typing import FieldDefinition\n\n__all__ = (\n \"SerializationPluginProtocol\",\n \"InitPluginProtocol\",\n \"OpenAPISchemaPluginProtocol\",\n \"OpenAPISchemaPlugin\",\n \"PluginProtocol\",\n \"CLIPluginProtocol\",\n \"PluginRegistry\",\n)\n\n\n@runtime_checkable\nclass InitPluginProtocol(Protocol):\n \"\"\"Protocol used to define plugins that affect the application's init process.\"\"\"\n\n __slots__ = ()\n\n def on_app_init(self, app_config: AppConfig) -> AppConfig:\n \"\"\"Receive the :class:`AppConfig<.config.app.AppConfig>` instance after `on_app_init` hooks have been called.\n\n Examples:\n .. 
code-block:: python\n from litestar import Litestar, get\n from litestar.di import Provide\n from litestar.plugins import InitPluginProtocol\n\n\n def get_name() -> str:\n return \"world\"\n\n\n @get(\"/my-path\")\n def my_route_handler(name: str) -> dict[str, str]:\n return {\"hello\": name}\n\n\n class MyPlugin(InitPluginProtocol):\n def on_app_init(self, app_config: AppConfig) -> AppConfig:\n app_config.dependencies[\"name\"] = Provide(get_name)\n app_config.route_handlers.append(my_route_handler)\n return app_config\n\n\n app = Litestar(plugins=[MyPlugin()])\n\n Args:\n app_config: The :class:`AppConfig <litestar.config.app.AppConfig>` instance.\n\n Returns:\n The app config object.\n \"\"\"\n return app_config # pragma: no cover\n\n\n@runtime_checkable\nclass CLIPluginProtocol(Protocol):\n \"\"\"Plugin protocol to extend the CLI.\"\"\"\n\n def on_cli_init(self, cli: Group) -> None:\n \"\"\"Called when the CLI is initialized.\n\n This can be used to extend or override existing commands.\n\n Args:\n cli: The root :class:`click.Group` of the Litestar CLI\n\n Examples:\n .. code-block:: python\n\n from litestar import Litestar\n from litestar.plugins import CLIPluginProtocol\n from click import Group\n\n\n class CLIPlugin(CLIPluginProtocol):\n def on_cli_init(self, cli: Group) -> None:\n @cli.command()\n def is_debug_mode(app: Litestar):\n print(app.debug)\n\n\n app = Litestar(plugins=[CLIPlugin()])\n \"\"\"\n\n\n@runtime_checkable\nclass SerializationPluginProtocol(Protocol):\n \"\"\"Protocol used to define a serialization plugin for DTOs.\"\"\"\n\n __slots__ = ()\n\n def supports_type(self, field_definition: FieldDefinition) -> bool:\n \"\"\"Given a value of indeterminate type, determine if this value is supported by the plugin.\n\n Args:\n field_definition: A parsed type.\n\n Returns:\n Whether the type is supported by the plugin.\n \"\"\"\n raise NotImplementedError()\n\n def create_dto_for_type(self, field_definition: FieldDefinition) -> type[AbstractDTO]:\n \"\"\"Given a parsed type, create a DTO class.\n\n Args:\n field_definition: A parsed type.\n\n Returns:\n A DTO class.\n \"\"\"\n raise NotImplementedError()\n\n\n@runtime_checkable\nclass OpenAPISchemaPluginProtocol(Protocol):\n \"\"\"Plugin protocol to extend the support of OpenAPI schema generation for non-library types.\"\"\"\n\n __slots__ = ()\n\n @staticmethod\n def is_plugin_supported_type(value: Any) -> bool:\n \"\"\"Given a value of indeterminate type, determine if this value is supported by the plugin.\n\n Args:\n value: An arbitrary value.\n\n Returns:\n A typeguard dictating whether the value is supported by the plugin.\n \"\"\"\n raise NotImplementedError()\n\n def to_openapi_schema(self, field_definition: FieldDefinition, schema_creator: SchemaCreator) -> Schema:\n \"\"\"Given a type annotation, transform it into an OpenAPI schema class.\n\n Args:\n field_definition: An :class:`OpenAPI <litestar.openapi.spec.schema.Schema>` instance.\n schema_creator: An instance of the openapi SchemaCreator.\n\n Returns:\n An :class:`OpenAPI <litestar.openapi.spec.schema.Schema>` instance.\n \"\"\"\n raise NotImplementedError()\n\n\nclass OpenAPISchemaPlugin(OpenAPISchemaPluginProtocol):\n \"\"\"Plugin to extend the support of OpenAPI schema generation for non-library types.\"\"\"\n\n @staticmethod\n def is_undefined_sentinel(value: Any) -> bool:\n \"\"\"Return ``True`` if ``value`` should be treated as an undefined field\"\"\"\n return False\n\n @staticmethod\n def is_constrained_field(field_definition: FieldDefinition) -> 
bool:\n \"\"\"Return ``True`` if the field should be treated as constrained. If returning\n ``True``, constraints should be defined in the field's extras\n \"\"\"\n return False\n\n\nPluginProtocol = Union[\n SerializationPluginProtocol,\n InitPluginProtocol,\n OpenAPISchemaPluginProtocol,\n OpenAPISchemaPlugin,\n CLIPluginProtocol,\n]\n\nPluginT = TypeVar(\"PluginT\", bound=PluginProtocol)\n\n\nclass PluginRegistry:\n __slots__ = {\n \"init\": \"Plugins that implement the InitPluginProtocol\",\n \"openapi\": \"Plugins that implement the OpenAPISchemaPluginProtocol\",\n \"serialization\": \"Plugins that implement the SerializationPluginProtocol\",\n \"cli\": \"Plugins that implement the CLIPluginProtocol\",\n \"_plugins_by_type\": None,\n \"_plugins\": None,\n \"_get_plugins_of_type\": None,\n }\n\n def __init__(self, plugins: list[PluginProtocol]) -> None:\n self._plugins_by_type = {type(p): p for p in plugins}\n self._plugins = frozenset(plugins)\n self.init = tuple(p for p in plugins if isinstance(p, InitPluginProtocol))\n self.openapi = tuple(p for p in plugins if isinstance(p, OpenAPISchemaPluginProtocol))\n self.serialization = tuple(p for p in plugins if isinstance(p, SerializationPluginProtocol))\n self.cli = tuple(p for p in plugins if isinstance(p, CLIPluginProtocol))\n\n def get(self, type_: type[PluginT]) -> PluginT:\n \"\"\"Return the registered plugin of ``type_``.\n\n This should be used with subclasses of the plugin protocols.\n \"\"\"\n try:\n return cast(PluginT, self._plugins_by_type[type_]) # type: ignore[index]\n except KeyError as e:\n raise KeyError(f\"No plugin of type {type_.__name__!r} registered\") from e\n\n def __iter__(self) -> Iterator[PluginProtocol]:\n return iter(self._plugins)\n\n def __contains__(self, item: PluginProtocol) -> bool:\n return item in self._plugins\n", "path": "litestar/plugins/base.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any, Iterator, Protocol, TypeVar, Union, cast, runtime_checkable\n\nif TYPE_CHECKING:\n from click import Group\n\n from litestar._openapi.schema_generation import SchemaCreator\n from litestar.config.app import AppConfig\n from litestar.dto import AbstractDTO\n from litestar.openapi.spec import Schema\n from litestar.typing import FieldDefinition\n\n__all__ = (\n \"SerializationPluginProtocol\",\n \"InitPluginProtocol\",\n \"OpenAPISchemaPluginProtocol\",\n \"OpenAPISchemaPlugin\",\n \"PluginProtocol\",\n \"CLIPluginProtocol\",\n \"PluginRegistry\",\n)\n\n\n@runtime_checkable\nclass InitPluginProtocol(Protocol):\n \"\"\"Protocol used to define plugins that affect the application's init process.\"\"\"\n\n __slots__ = ()\n\n def on_app_init(self, app_config: AppConfig) -> AppConfig:\n \"\"\"Receive the :class:`AppConfig<.config.app.AppConfig>` instance after `on_app_init` hooks have been called.\n\n Examples:\n .. 
code-block:: python\n\n from litestar import Litestar, get\n from litestar.di import Provide\n from litestar.plugins import InitPluginProtocol\n\n\n def get_name() -> str:\n return \"world\"\n\n\n @get(\"/my-path\")\n def my_route_handler(name: str) -> dict[str, str]:\n return {\"hello\": name}\n\n\n class MyPlugin(InitPluginProtocol):\n def on_app_init(self, app_config: AppConfig) -> AppConfig:\n app_config.dependencies[\"name\"] = Provide(get_name)\n app_config.route_handlers.append(my_route_handler)\n return app_config\n\n\n app = Litestar(plugins=[MyPlugin()])\n\n Args:\n app_config: The :class:`AppConfig <litestar.config.app.AppConfig>` instance.\n\n Returns:\n The app config object.\n \"\"\"\n return app_config # pragma: no cover\n\n\n@runtime_checkable\nclass CLIPluginProtocol(Protocol):\n \"\"\"Plugin protocol to extend the CLI.\"\"\"\n\n def on_cli_init(self, cli: Group) -> None:\n \"\"\"Called when the CLI is initialized.\n\n This can be used to extend or override existing commands.\n\n Args:\n cli: The root :class:`click.Group` of the Litestar CLI\n\n Examples:\n .. code-block:: python\n\n from litestar import Litestar\n from litestar.plugins import CLIPluginProtocol\n from click import Group\n\n\n class CLIPlugin(CLIPluginProtocol):\n def on_cli_init(self, cli: Group) -> None:\n @cli.command()\n def is_debug_mode(app: Litestar):\n print(app.debug)\n\n\n app = Litestar(plugins=[CLIPlugin()])\n \"\"\"\n\n\n@runtime_checkable\nclass SerializationPluginProtocol(Protocol):\n \"\"\"Protocol used to define a serialization plugin for DTOs.\"\"\"\n\n __slots__ = ()\n\n def supports_type(self, field_definition: FieldDefinition) -> bool:\n \"\"\"Given a value of indeterminate type, determine if this value is supported by the plugin.\n\n Args:\n field_definition: A parsed type.\n\n Returns:\n Whether the type is supported by the plugin.\n \"\"\"\n raise NotImplementedError()\n\n def create_dto_for_type(self, field_definition: FieldDefinition) -> type[AbstractDTO]:\n \"\"\"Given a parsed type, create a DTO class.\n\n Args:\n field_definition: A parsed type.\n\n Returns:\n A DTO class.\n \"\"\"\n raise NotImplementedError()\n\n\n@runtime_checkable\nclass OpenAPISchemaPluginProtocol(Protocol):\n \"\"\"Plugin protocol to extend the support of OpenAPI schema generation for non-library types.\"\"\"\n\n __slots__ = ()\n\n @staticmethod\n def is_plugin_supported_type(value: Any) -> bool:\n \"\"\"Given a value of indeterminate type, determine if this value is supported by the plugin.\n\n Args:\n value: An arbitrary value.\n\n Returns:\n A typeguard dictating whether the value is supported by the plugin.\n \"\"\"\n raise NotImplementedError()\n\n def to_openapi_schema(self, field_definition: FieldDefinition, schema_creator: SchemaCreator) -> Schema:\n \"\"\"Given a type annotation, transform it into an OpenAPI schema class.\n\n Args:\n field_definition: An :class:`OpenAPI <litestar.openapi.spec.schema.Schema>` instance.\n schema_creator: An instance of the openapi SchemaCreator.\n\n Returns:\n An :class:`OpenAPI <litestar.openapi.spec.schema.Schema>` instance.\n \"\"\"\n raise NotImplementedError()\n\n\nclass OpenAPISchemaPlugin(OpenAPISchemaPluginProtocol):\n \"\"\"Plugin to extend the support of OpenAPI schema generation for non-library types.\"\"\"\n\n @staticmethod\n def is_undefined_sentinel(value: Any) -> bool:\n \"\"\"Return ``True`` if ``value`` should be treated as an undefined field\"\"\"\n return False\n\n @staticmethod\n def is_constrained_field(field_definition: FieldDefinition) -> 
bool:\n \"\"\"Return ``True`` if the field should be treated as constrained. If returning\n ``True``, constraints should be defined in the field's extras\n \"\"\"\n return False\n\n\nPluginProtocol = Union[\n SerializationPluginProtocol,\n InitPluginProtocol,\n OpenAPISchemaPluginProtocol,\n OpenAPISchemaPlugin,\n CLIPluginProtocol,\n]\n\nPluginT = TypeVar(\"PluginT\", bound=PluginProtocol)\n\n\nclass PluginRegistry:\n __slots__ = {\n \"init\": \"Plugins that implement the InitPluginProtocol\",\n \"openapi\": \"Plugins that implement the OpenAPISchemaPluginProtocol\",\n \"serialization\": \"Plugins that implement the SerializationPluginProtocol\",\n \"cli\": \"Plugins that implement the CLIPluginProtocol\",\n \"_plugins_by_type\": None,\n \"_plugins\": None,\n \"_get_plugins_of_type\": None,\n }\n\n def __init__(self, plugins: list[PluginProtocol]) -> None:\n self._plugins_by_type = {type(p): p for p in plugins}\n self._plugins = frozenset(plugins)\n self.init = tuple(p for p in plugins if isinstance(p, InitPluginProtocol))\n self.openapi = tuple(p for p in plugins if isinstance(p, OpenAPISchemaPluginProtocol))\n self.serialization = tuple(p for p in plugins if isinstance(p, SerializationPluginProtocol))\n self.cli = tuple(p for p in plugins if isinstance(p, CLIPluginProtocol))\n\n def get(self, type_: type[PluginT]) -> PluginT:\n \"\"\"Return the registered plugin of ``type_``.\n\n This should be used with subclasses of the plugin protocols.\n \"\"\"\n try:\n return cast(PluginT, self._plugins_by_type[type_]) # type: ignore[index]\n except KeyError as e:\n raise KeyError(f\"No plugin of type {type_.__name__!r} registered\") from e\n\n def __iter__(self) -> Iterator[PluginProtocol]:\n return iter(self._plugins)\n\n def __contains__(self, item: PluginProtocol) -> bool:\n return item in self._plugins\n", "path": "litestar/plugins/base.py"}]} | 2,882 | 88 |
gh_patches_debug_15540 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-350 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Ensure that only Postgres databases are connected
**Problem**
<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->
Django and SQLAlchemy both support multiple database systems, but we're presuming that the user has only connected Postgres databases. This could lead to bugs if the user connects a MySQL or other database.
**Proposed solution**
<!-- A clear and concise description of your proposed solution or feature. -->
We should check to make sure that we're only managing Postgres databases and throw an error if another database system is connected.
I'm not sure if it matters what database is used for the Django database, since we're only using standard Django functionality there, but we should definitely check the Mathesar-managed databases.
--- END ISSUE ---
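A minimal sketch of the engine check outlined in the proposed solution above (assuming Django's standard `DATABASES` setting; the function name is illustrative):

```python
# Hypothetical startup check: reject any configured database whose Django
# engine is not one of the PostgreSQL backends.
POSTGRES_ENGINE_PREFIX = "django.db.backends.postgresql"


def assert_postgres_only(databases):
    for db_key, db_config in databases.items():
        engine = db_config.get("ENGINE", "")
        # Covers both '.postgresql' and the legacy '.postgresql_psycopg2'.
        if not engine.startswith(POSTGRES_ENGINE_PREFIX):
            raise ValueError(
                f"{db_key} is not a PostgreSQL database (ENGINE={engine!r})."
            )
```

The patch below takes the same approach, inlining the check directly in `config/settings.py`.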
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `config/settings.py`
Content:
```
1 """
2 Django settings for config project.
3
4 Generated by 'django-admin startproject' using Django 3.1.7.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/3.1/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/3.1/ref/settings/
11 """
12
13 import os
14 from pathlib import Path
15
16 from decouple import Csv, config as decouple_config
17 from dj_database_url import parse as db_url
18
19
20 # We use a 'tuple' with pipes as delimiters as decople naively splits the global
21 # variables on commas when casting to Csv()
22 def pipe_delim(pipe_string):
23 # Remove opening and closing brackets
24 pipe_string = pipe_string[1:-1]
25 # Split on pipe delim
26 return pipe_string.split("|")
27
28
29 # Build paths inside the project like this: BASE_DIR / 'subdir'.
30 BASE_DIR = Path(__file__).resolve().parent.parent
31
32 # Application definition
33
34 INSTALLED_APPS = [
35 "django.contrib.admin",
36 "django.contrib.auth",
37 "django.contrib.contenttypes",
38 "django.contrib.sessions",
39 "django.contrib.messages",
40 "django.contrib.staticfiles",
41 "rest_framework",
42 "django_filters",
43 "django_property_filter",
44 "mathesar",
45 ]
46
47 MIDDLEWARE = [
48 "django.middleware.security.SecurityMiddleware",
49 "django.contrib.sessions.middleware.SessionMiddleware",
50 "django.middleware.common.CommonMiddleware",
51 "django.middleware.csrf.CsrfViewMiddleware",
52 "django.contrib.auth.middleware.AuthenticationMiddleware",
53 "django.contrib.messages.middleware.MessageMiddleware",
54 "django.middleware.clickjacking.XFrameOptionsMiddleware",
55 ]
56
57 ROOT_URLCONF = "config.urls"
58
59 TEMPLATES = [
60 {
61 "BACKEND": "django.template.backends.django.DjangoTemplates",
62 "DIRS": [],
63 "APP_DIRS": True,
64 "OPTIONS": {
65 "context_processors": [
66 "config.context_processors.get_settings",
67 "django.template.context_processors.debug",
68 "django.template.context_processors.request",
69 "django.contrib.auth.context_processors.auth",
70 "django.contrib.messages.context_processors.messages",
71 ],
72 },
73 },
74 ]
75
76 WSGI_APPLICATION = "config.wsgi.application"
77
78 # Database
79 # https://docs.djangoproject.com/en/3.1/ref/settings/#databases
80
81 # TODO: Add to documentation that database keys should not be than 128 characters.
82
83 # MATHESAR_DATABASES should be of the form '({db_name}|{db_url}), ({db_name}|{db_url})'
84 # See pipe_delim above for why we use pipes as delimiters
85 DATABASES = {
86 db_key: db_url(url_string)
87 for db_key, url_string in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim))
88 }
89 DATABASES[decouple_config('DJANGO_DATABASE_KEY')] = decouple_config('DJANGO_DATABASE_URL', cast=db_url)
90
91 # pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'
92 # and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']
93 if decouple_config('TEST', default=False, cast=bool):
94 for db_key, _ in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim)):
95 DATABASES[db_key]['TEST'] = {'NAME': DATABASES[db_key]['NAME']}
96
97
98 # Quick-start development settings - unsuitable for production
99 # See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/
100
101 # SECURITY WARNING: keep the secret key used in production secret!
102 SECRET_KEY = decouple_config('SECRET_KEY')
103
104 # SECURITY WARNING: don't run with debug turned on in production!
105 DEBUG = decouple_config('DEBUG', default=False, cast=bool)
106
107 ALLOWED_HOSTS = decouple_config('ALLOWED_HOSTS', cast=Csv())
108
109 # Password validation
110 # https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators
111
112 AUTH_PASSWORD_VALIDATORS = [
113 {
114 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
115 },
116 {
117 "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
118 },
119 {
120 "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
121 },
122 {
123 "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
124 },
125 ]
126
127
128 # Internationalization
129 # https://docs.djangoproject.com/en/3.1/topics/i18n/
130
131 LANGUAGE_CODE = "en-us"
132
133 TIME_ZONE = "UTC"
134
135 USE_I18N = True
136
137 USE_L10N = True
138
139 USE_TZ = True
140
141
142 # Static files (CSS, JavaScript, Images)
143 # https://docs.djangoproject.com/en/3.1/howto/static-files/
144
145 STATIC_URL = "/static/"
146
147 CLIENT_DEV_URL = "http://localhost:3000"
148
149
150 # Media files (uploaded by the user)
151
152 MEDIA_ROOT = os.path.join(BASE_DIR, '.media/')
153
154 MEDIA_URL = "/media/"
155
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/config/settings.py b/config/settings.py
--- a/config/settings.py
+++ b/config/settings.py
@@ -88,6 +88,15 @@
}
DATABASES[decouple_config('DJANGO_DATABASE_KEY')] = decouple_config('DJANGO_DATABASE_URL', cast=db_url)
+for db_key, db_dict in DATABASES.items():
+ # Engine can be '.postgresql' or '.postgresql_psycopg2'
+ if not db_dict['ENGINE'].startswith('django.db.backends.postgresql'):
+ raise ValueError(
+ f"{db_key} is not a PostgreSQL database. "
+ f"{db_dict['ENGINE']} found for {db_key}'s engine."
+ )
+
+
# pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'
# and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']
if decouple_config('TEST', default=False, cast=bool):
| {"golden_diff": "diff --git a/config/settings.py b/config/settings.py\n--- a/config/settings.py\n+++ b/config/settings.py\n@@ -88,6 +88,15 @@\n }\n DATABASES[decouple_config('DJANGO_DATABASE_KEY')] = decouple_config('DJANGO_DATABASE_URL', cast=db_url)\n \n+for db_key, db_dict in DATABASES.items():\n+ # Engine can be '.postgresql' or '.postgresql_psycopg2'\n+ if not db_dict['ENGINE'].startswith('django.db.backends.postgresql'):\n+ raise ValueError(\n+ f\"{db_key} is not a PostgreSQL database. \"\n+ f\"{db_dict['ENGINE']} found for {db_key}'s engine.\"\n+ )\n+\n+\n # pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'\n # and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']\n if decouple_config('TEST', default=False, cast=bool):\n", "issue": "Ensure that only Postgres databases are connected\n**Problem**\r\n<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->\r\nDjango and SQLAlchemy both support multiple database systems, but we're presuming that the user is only connected Postgres databases. This could lead to bugs if the user connects a MySQL or other database.\r\n\r\n**Proposed solution**\r\n<!-- A clear and concise description of your proposed solution or feature. -->\r\nWe should check to make sure that we're only managing Postgres databases and throw an error if another database system is connected.\r\n\r\nI'm not sure if matters what database is used for the Django database, since we're only using standard Django functionality there, but we should definitely check the Mathesar managed databases.\n", "before_files": [{"content": "\"\"\"\nDjango settings for config project.\n\nGenerated by 'django-admin startproject' using Django 3.1.7.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.1/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/3.1/ref/settings/\n\"\"\"\n\nimport os\nfrom pathlib import Path\n\nfrom decouple import Csv, config as decouple_config\nfrom dj_database_url import parse as db_url\n\n\n# We use a 'tuple' with pipes as delimiters as decople naively splits the global\n# variables on commas when casting to Csv()\ndef pipe_delim(pipe_string):\n # Remove opening and closing brackets\n pipe_string = pipe_string[1:-1]\n # Split on pipe delim\n return pipe_string.split(\"|\")\n\n\n# Build paths inside the project like this: BASE_DIR / 'subdir'.\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"rest_framework\",\n \"django_filters\",\n \"django_property_filter\",\n \"mathesar\",\n]\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nROOT_URLCONF = \"config.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n 
\"config.context_processors.get_settings\",\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/3.1/ref/settings/#databases\n\n# TODO: Add to documentation that database keys should not be than 128 characters.\n\n# MATHESAR_DATABASES should be of the form '({db_name}|{db_url}), ({db_name}|{db_url})'\n# See pipe_delim above for why we use pipes as delimiters\nDATABASES = {\n db_key: db_url(url_string)\n for db_key, url_string in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim))\n}\nDATABASES[decouple_config('DJANGO_DATABASE_KEY')] = decouple_config('DJANGO_DATABASE_URL', cast=db_url)\n\n# pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'\n# and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']\nif decouple_config('TEST', default=False, cast=bool):\n for db_key, _ in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim)):\n DATABASES[db_key]['TEST'] = {'NAME': DATABASES[db_key]['NAME']}\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = decouple_config('SECRET_KEY')\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = decouple_config('DEBUG', default=False, cast=bool)\n\nALLOWED_HOSTS = decouple_config('ALLOWED_HOSTS', cast=Csv())\n\n# Password validation\n# https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.1/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.1/howto/static-files/\n\nSTATIC_URL = \"/static/\"\n\nCLIENT_DEV_URL = \"http://localhost:3000\"\n\n\n# Media files (uploaded by the user)\n\nMEDIA_ROOT = os.path.join(BASE_DIR, '.media/')\n\nMEDIA_URL = \"/media/\"\n", "path": "config/settings.py"}], "after_files": [{"content": "\"\"\"\nDjango settings for config project.\n\nGenerated by 'django-admin startproject' using Django 3.1.7.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.1/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/3.1/ref/settings/\n\"\"\"\n\nimport os\nfrom pathlib import Path\n\nfrom decouple import Csv, config as decouple_config\nfrom dj_database_url import parse as db_url\n\n\n# We use a 'tuple' with pipes as delimiters as decople naively splits the global\n# variables on commas when casting to Csv()\ndef pipe_delim(pipe_string):\n # Remove opening and closing brackets\n pipe_string = pipe_string[1:-1]\n # Split on pipe delim\n return pipe_string.split(\"|\")\n\n\n# Build paths inside the 
project like this: BASE_DIR / 'subdir'.\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"rest_framework\",\n \"django_filters\",\n \"django_property_filter\",\n \"mathesar\",\n]\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nROOT_URLCONF = \"config.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"config.context_processors.get_settings\",\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/3.1/ref/settings/#databases\n\n# TODO: Add to documentation that database keys should not be than 128 characters.\n\n# MATHESAR_DATABASES should be of the form '({db_name}|{db_url}), ({db_name}|{db_url})'\n# See pipe_delim above for why we use pipes as delimiters\nDATABASES = {\n db_key: db_url(url_string)\n for db_key, url_string in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim))\n}\nDATABASES[decouple_config('DJANGO_DATABASE_KEY')] = decouple_config('DJANGO_DATABASE_URL', cast=db_url)\n\nfor db_key, db_dict in DATABASES.items():\n # Engine can be '.postgresql' or '.postgresql_psycopg2'\n if not db_dict['ENGINE'].startswith('django.db.backends.postgresql'):\n raise ValueError(\n f\"{db_key} is not a PostgreSQL database. 
\"\n f\"{db_dict['ENGINE']} found for {db_key}'s engine.\"\n )\n\n\n# pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'\n# and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']\nif decouple_config('TEST', default=False, cast=bool):\n for db_key, _ in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim)):\n DATABASES[db_key]['TEST'] = {'NAME': DATABASES[db_key]['NAME']}\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = decouple_config('SECRET_KEY')\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = decouple_config('DEBUG', default=False, cast=bool)\n\nALLOWED_HOSTS = decouple_config('ALLOWED_HOSTS', cast=Csv())\n\n# Password validation\n# https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.1/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.1/howto/static-files/\n\nSTATIC_URL = \"/static/\"\n\nCLIENT_DEV_URL = \"http://localhost:3000\"\n\n\n# Media files (uploaded by the user)\n\nMEDIA_ROOT = os.path.join(BASE_DIR, '.media/')\n\nMEDIA_URL = \"/media/\"\n", "path": "config/settings.py"}]} | 1,848 | 208 |
gh_patches_debug_13917 | rasdani/github-patches | git_diff | falconry__falcon-1961 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Doc not up to date or unclear on trailing slash with routing
The doc says:
> Also, note that if the template contains a trailing slash character, it will be stripped in order to normalize the routing logic. https://falcon.readthedocs.io/en/stable/api/routing.html#routing-utilities
https://github.com/falconry/falcon/issues/1544 seems to show the behavior changed in 2019, and https://github.com/falconry/falcon/pull/1751 seems to have fixed a bug or added the feature.
Currently I use Falcon 3.0.1, and `/version` and `/version/` are not considered the same route, contrary to what the doc says; I was really wondering what I did wrong.
```python
import falcon
app = falcon.App()
class Version:
def on_get(self, req, resp):
resp.content_type = falcon.MEDIA_JSON
resp.media = {
"major": 0,
"minor": 1,
"patch": 0
}
resp.status = falcon.HTTP_200
app.add_route('/version', Version())
from wsgiref.simple_server import make_server
with make_server('', 8000, app) as httpd:
print('Serving on port 8000...')
httpd.serve_forever()
```
```none
Serving on port 8000...
127.0.0.1 - - [05/Jul/2021 16:22:44] "GET /version HTTP/1.1" 200 36
127.0.0.1 - - [05/Jul/2021 16:22:49] "GET /version/ HTTP/1.1" 404 26
```
Can you clarify this? This default is odd, especially since it's the opposite of the previous version's behavior and there are very few use cases for it.
Bonus solution for others like me:
```python
app = falcon.App()
app.req_options.strip_url_path_trailing_slash = True
```
--- END ISSUE ---
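A small sketch of the workaround's effect using `falcon.testing` (illustrative only; mirrors the snippet from the issue):

```python
import falcon
import falcon.testing


class Version:
    def on_get(self, req, resp):
        resp.media = {"major": 0, "minor": 1, "patch": 0}


app = falcon.App()
# Treat '/version' and '/version/' as the same route (the older default behavior).
app.req_options.strip_url_path_trailing_slash = True
app.add_route("/version", Version())

client = falcon.testing.TestClient(app)
assert client.simulate_get("/version").status_code == 200
assert client.simulate_get("/version/").status_code == 200
```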
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `falcon/routing/util.py`
Content:
```
1 # Copyright 2013 by Rackspace Hosting, Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Routing utilities."""
16
17 import re
18
19 from falcon import constants
20 from falcon import responders
21
22
23 class SuffixedMethodNotFoundError(Exception):
24 def __init__(self, message):
25 super(SuffixedMethodNotFoundError, self).__init__(message)
26 self.message = message
27
28
29 # NOTE(kgriffs): Published method; take care to avoid breaking changes.
30 def compile_uri_template(template):
31 """Compile the given URI template string into a pattern matcher.
32
33 This function can be used to construct custom routing engines that
34 iterate through a list of possible routes, attempting to match
35 an incoming request against each route's compiled regular expression.
36
37 Each field is converted to a named group, so that when a match
38 is found, the fields can be easily extracted using
39 :py:meth:`re.MatchObject.groupdict`.
40
41 This function does not support the more flexible templating
42 syntax used in the default router. Only simple paths with bracketed
43 field expressions are recognized. For example::
44
45 /
46 /books
47 /books/{isbn}
48 /books/{isbn}/characters
49 /books/{isbn}/characters/{name}
50
51 Also, note that if the template contains a trailing slash character,
52 it will be stripped in order to normalize the routing logic.
53
54 Args:
55 template(str): The template to compile. Note that field names are
56 restricted to ASCII a-z, A-Z, and the underscore character.
57
58 Returns:
59 tuple: (template_field_names, template_regex)
60 """
61
62 if not isinstance(template, str):
63 raise TypeError('uri_template is not a string')
64
65 if not template.startswith('/'):
66 raise ValueError("uri_template must start with '/'")
67
68 if '//' in template:
69 raise ValueError("uri_template may not contain '//'")
70
71 if template != '/' and template.endswith('/'):
72 template = template[:-1]
73
74 # template names should be able to start with A-Za-z
75 # but also contain 0-9_ in the remaining portion
76 expression_pattern = r'{([a-zA-Z]\w*)}'
77
78 # Get a list of field names
79 fields = set(re.findall(expression_pattern, template))
80
81 # Convert Level 1 var patterns to equivalent named regex groups
82 escaped = re.sub(r'[\.\(\)\[\]\?\*\+\^\|]', r'\\\g<0>', template)
83 pattern = re.sub(expression_pattern, r'(?P<\1>[^/]+)', escaped)
84 pattern = r'\A' + pattern + r'\Z'
85
86 return fields, re.compile(pattern, re.IGNORECASE)
87
88
89 def map_http_methods(resource, suffix=None):
90 """Map HTTP methods (e.g., GET, POST) to methods of a resource object.
91
92 Args:
93 resource: An object with *responder* methods, following the naming
94 convention *on_\\**, that correspond to each method the resource
95 supports. For example, if a resource supports GET and POST, it
96 should define ``on_get(self, req, resp)`` and
97 ``on_post(self, req, resp)``.
98
99 Keyword Args:
100 suffix (str): Optional responder name suffix for this route. If
101 a suffix is provided, Falcon will map GET requests to
102 ``on_get_{suffix}()``, POST requests to ``on_post_{suffix}()``,
103 etc.
104
105 Returns:
106 dict: A mapping of HTTP methods to explicitly defined resource responders.
107
108 """
109
110 method_map = {}
111
112 for method in constants.COMBINED_METHODS:
113 try:
114 responder_name = 'on_' + method.lower()
115 if suffix:
116 responder_name += '_' + suffix
117
118 responder = getattr(resource, responder_name)
119 except AttributeError:
120 # resource does not implement this method
121 pass
122 else:
123 # Usually expect a method, but any callable will do
124 if callable(responder):
125 method_map[method] = responder
126
127 # If suffix is specified and doesn't map to any methods, raise an error
128 if suffix and not method_map:
129 raise SuffixedMethodNotFoundError(
130 'No responders found for the specified suffix'
131 )
132
133 return method_map
134
135
136 def set_default_responders(method_map, asgi=False):
137 """Map HTTP methods not explicitly defined on a resource to default responders.
138
139 Args:
140 method_map: A dict with HTTP methods mapped to responders explicitly
141 defined in a resource.
142 asgi (bool): ``True`` if using an ASGI app, ``False`` otherwise
143 (default ``False``).
144 """
145
146 # Attach a resource for unsupported HTTP methods
147 allowed_methods = [
148 m for m in sorted(list(method_map.keys())) if m not in constants._META_METHODS
149 ]
150
151 if 'OPTIONS' not in method_map:
152 # OPTIONS itself is intentionally excluded from the Allow header
153 opt_responder = responders.create_default_options(allowed_methods, asgi=asgi)
154 method_map['OPTIONS'] = opt_responder
155 allowed_methods.append('OPTIONS')
156
157 na_responder = responders.create_method_not_allowed(allowed_methods, asgi=asgi)
158
159 for method in constants.COMBINED_METHODS:
160 if method not in method_map:
161 method_map[method] = na_responder
162
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/falcon/routing/util.py b/falcon/routing/util.py
--- a/falcon/routing/util.py
+++ b/falcon/routing/util.py
@@ -48,8 +48,17 @@
/books/{isbn}/characters
/books/{isbn}/characters/{name}
- Also, note that if the template contains a trailing slash character,
- it will be stripped in order to normalize the routing logic.
+ Warning:
+ If the template contains a trailing slash character, it will be
+ stripped.
+
+ Note that this is **different** from :ref:`the default behavior
+ <trailing_slash_in_path>` of :func:`~falcon.App.add_route` used
+ with the default :class:`~falcon.routing.CompiledRouter`.
+
+ The :attr:`~falcon.RequestOptions.strip_url_path_trailing_slash`
+ request option is not considered by ``compile_uri_template()``.
+
Args:
template(str): The template to compile. Note that field names are
| {"golden_diff": "diff --git a/falcon/routing/util.py b/falcon/routing/util.py\n--- a/falcon/routing/util.py\n+++ b/falcon/routing/util.py\n@@ -48,8 +48,17 @@\n /books/{isbn}/characters\n /books/{isbn}/characters/{name}\n \n- Also, note that if the template contains a trailing slash character,\n- it will be stripped in order to normalize the routing logic.\n+ Warning:\n+ If the template contains a trailing slash character, it will be\n+ stripped.\n+\n+ Note that this is **different** from :ref:`the default behavior\n+ <trailing_slash_in_path>` of :func:`~falcon.App.add_route` used\n+ with the default :class:`~falcon.routing.CompiledRouter`.\n+\n+ The :attr:`~falcon.RequestOptions.strip_url_path_trailing_slash`\n+ request option is not considered by ``compile_uri_template()``.\n+\n \n Args:\n template(str): The template to compile. Note that field names are\n", "issue": "Doc not up to date or unclear on trailing slash with routing\nThe doc say:\r\n\r\n> Also, note that if the template contains a trailing slash character, it will be stripped in order to normalize the routing logic. https://falcon.readthedocs.io/en/stable/api/routing.html#routing-utilities\r\n\r\nhttps://github.com/falconry/falcon/issues/1544 seem to show the behavior change in 2019, https://github.com/falconry/falcon/pull/1751 seem to have fix a bug or to add the feature.\r\n\r\nCurrently I use falcon 3.0.1 and `/version` and `/version/` are not considerate the same route contrary to what the doc say, I was really wondering why I did wrong.\r\n\r\n```python\r\nimport falcon\r\n\r\napp = falcon.App()\r\n\r\nclass Version:\r\n def on_get(self, req, resp):\r\n resp.content_type = falcon.MEDIA_JSON\r\n resp.media = {\r\n \"major\": 0,\r\n \"minor\": 1,\r\n \"patch\": 0\r\n }\r\n resp.status = falcon.HTTP_200\r\n\r\napp.add_route('/version', Version())\r\n\r\nfrom wsgiref.simple_server import make_server\r\nwith make_server('', 8000, app) as httpd:\r\n print('Serving on port 8000...')\r\n httpd.serve_forever()\r\n```\r\n\r\n```none\r\nServing on port 8000...\r\n127.0.0.1 - - [05/Jul/2021 16:22:44] \"GET /version HTTP/1.1\" 200 36\r\n127.0.0.1 - - [05/Jul/2021 16:22:49] \"GET /version/ HTTP/1.1\" 404 26\r\n```\r\n\r\nCan you clarify this ? 
This default is odd specially when it's the contrary to previous version and very few usage case.\r\n\r\nBonus Solution for other like me:\r\n\r\n```python\r\napp = falcon.App()\r\napp.req_options.strip_url_path_trailing_slash = True\r\n```\n", "before_files": [{"content": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Routing utilities.\"\"\"\n\nimport re\n\nfrom falcon import constants\nfrom falcon import responders\n\n\nclass SuffixedMethodNotFoundError(Exception):\n def __init__(self, message):\n super(SuffixedMethodNotFoundError, self).__init__(message)\n self.message = message\n\n\n# NOTE(kgriffs): Published method; take care to avoid breaking changes.\ndef compile_uri_template(template):\n \"\"\"Compile the given URI template string into a pattern matcher.\n\n This function can be used to construct custom routing engines that\n iterate through a list of possible routes, attempting to match\n an incoming request against each route's compiled regular expression.\n\n Each field is converted to a named group, so that when a match\n is found, the fields can be easily extracted using\n :py:meth:`re.MatchObject.groupdict`.\n\n This function does not support the more flexible templating\n syntax used in the default router. Only simple paths with bracketed\n field expressions are recognized. For example::\n\n /\n /books\n /books/{isbn}\n /books/{isbn}/characters\n /books/{isbn}/characters/{name}\n\n Also, note that if the template contains a trailing slash character,\n it will be stripped in order to normalize the routing logic.\n\n Args:\n template(str): The template to compile. Note that field names are\n restricted to ASCII a-z, A-Z, and the underscore character.\n\n Returns:\n tuple: (template_field_names, template_regex)\n \"\"\"\n\n if not isinstance(template, str):\n raise TypeError('uri_template is not a string')\n\n if not template.startswith('/'):\n raise ValueError(\"uri_template must start with '/'\")\n\n if '//' in template:\n raise ValueError(\"uri_template may not contain '//'\")\n\n if template != '/' and template.endswith('/'):\n template = template[:-1]\n\n # template names should be able to start with A-Za-z\n # but also contain 0-9_ in the remaining portion\n expression_pattern = r'{([a-zA-Z]\\w*)}'\n\n # Get a list of field names\n fields = set(re.findall(expression_pattern, template))\n\n # Convert Level 1 var patterns to equivalent named regex groups\n escaped = re.sub(r'[\\.\\(\\)\\[\\]\\?\\*\\+\\^\\|]', r'\\\\\\g<0>', template)\n pattern = re.sub(expression_pattern, r'(?P<\\1>[^/]+)', escaped)\n pattern = r'\\A' + pattern + r'\\Z'\n\n return fields, re.compile(pattern, re.IGNORECASE)\n\n\ndef map_http_methods(resource, suffix=None):\n \"\"\"Map HTTP methods (e.g., GET, POST) to methods of a resource object.\n\n Args:\n resource: An object with *responder* methods, following the naming\n convention *on_\\\\**, that correspond to each method the resource\n supports. 
For example, if a resource supports GET and POST, it\n should define ``on_get(self, req, resp)`` and\n ``on_post(self, req, resp)``.\n\n Keyword Args:\n suffix (str): Optional responder name suffix for this route. If\n a suffix is provided, Falcon will map GET requests to\n ``on_get_{suffix}()``, POST requests to ``on_post_{suffix}()``,\n etc.\n\n Returns:\n dict: A mapping of HTTP methods to explicitly defined resource responders.\n\n \"\"\"\n\n method_map = {}\n\n for method in constants.COMBINED_METHODS:\n try:\n responder_name = 'on_' + method.lower()\n if suffix:\n responder_name += '_' + suffix\n\n responder = getattr(resource, responder_name)\n except AttributeError:\n # resource does not implement this method\n pass\n else:\n # Usually expect a method, but any callable will do\n if callable(responder):\n method_map[method] = responder\n\n # If suffix is specified and doesn't map to any methods, raise an error\n if suffix and not method_map:\n raise SuffixedMethodNotFoundError(\n 'No responders found for the specified suffix'\n )\n\n return method_map\n\n\ndef set_default_responders(method_map, asgi=False):\n \"\"\"Map HTTP methods not explicitly defined on a resource to default responders.\n\n Args:\n method_map: A dict with HTTP methods mapped to responders explicitly\n defined in a resource.\n asgi (bool): ``True`` if using an ASGI app, ``False`` otherwise\n (default ``False``).\n \"\"\"\n\n # Attach a resource for unsupported HTTP methods\n allowed_methods = [\n m for m in sorted(list(method_map.keys())) if m not in constants._META_METHODS\n ]\n\n if 'OPTIONS' not in method_map:\n # OPTIONS itself is intentionally excluded from the Allow header\n opt_responder = responders.create_default_options(allowed_methods, asgi=asgi)\n method_map['OPTIONS'] = opt_responder\n allowed_methods.append('OPTIONS')\n\n na_responder = responders.create_method_not_allowed(allowed_methods, asgi=asgi)\n\n for method in constants.COMBINED_METHODS:\n if method not in method_map:\n method_map[method] = na_responder\n", "path": "falcon/routing/util.py"}], "after_files": [{"content": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Routing utilities.\"\"\"\n\nimport re\n\nfrom falcon import constants\nfrom falcon import responders\n\n\nclass SuffixedMethodNotFoundError(Exception):\n def __init__(self, message):\n super(SuffixedMethodNotFoundError, self).__init__(message)\n self.message = message\n\n\n# NOTE(kgriffs): Published method; take care to avoid breaking changes.\ndef compile_uri_template(template):\n \"\"\"Compile the given URI template string into a pattern matcher.\n\n This function can be used to construct custom routing engines that\n iterate through a list of possible routes, attempting to match\n an incoming request against each route's compiled regular expression.\n\n Each field is converted to a named group, so that when a match\n is found, the fields can be easily extracted using\n :py:meth:`re.MatchObject.groupdict`.\n\n This function 
does not support the more flexible templating\n syntax used in the default router. Only simple paths with bracketed\n field expressions are recognized. For example::\n\n /\n /books\n /books/{isbn}\n /books/{isbn}/characters\n /books/{isbn}/characters/{name}\n\n Warning:\n If the template contains a trailing slash character, it will be\n stripped.\n\n Note that this is **different** from :ref:`the default behavior\n <trailing_slash_in_path>` of :func:`~falcon.App.add_route` used\n with the default :class:`~falcon.routing.CompiledRouter`.\n\n The :attr:`~falcon.RequestOptions.strip_url_path_trailing_slash`\n request option is not considered by ``compile_uri_template()``.\n\n\n Args:\n template(str): The template to compile. Note that field names are\n restricted to ASCII a-z, A-Z, and the underscore character.\n\n Returns:\n tuple: (template_field_names, template_regex)\n \"\"\"\n\n if not isinstance(template, str):\n raise TypeError('uri_template is not a string')\n\n if not template.startswith('/'):\n raise ValueError(\"uri_template must start with '/'\")\n\n if '//' in template:\n raise ValueError(\"uri_template may not contain '//'\")\n\n if template != '/' and template.endswith('/'):\n template = template[:-1]\n\n # template names should be able to start with A-Za-z\n # but also contain 0-9_ in the remaining portion\n expression_pattern = r'{([a-zA-Z]\\w*)}'\n\n # Get a list of field names\n fields = set(re.findall(expression_pattern, template))\n\n # Convert Level 1 var patterns to equivalent named regex groups\n escaped = re.sub(r'[\\.\\(\\)\\[\\]\\?\\*\\+\\^\\|]', r'\\\\\\g<0>', template)\n pattern = re.sub(expression_pattern, r'(?P<\\1>[^/]+)', escaped)\n pattern = r'\\A' + pattern + r'\\Z'\n\n return fields, re.compile(pattern, re.IGNORECASE)\n\n\ndef map_http_methods(resource, suffix=None):\n \"\"\"Map HTTP methods (e.g., GET, POST) to methods of a resource object.\n\n Args:\n resource: An object with *responder* methods, following the naming\n convention *on_\\\\**, that correspond to each method the resource\n supports. For example, if a resource supports GET and POST, it\n should define ``on_get(self, req, resp)`` and\n ``on_post(self, req, resp)``.\n\n Keyword Args:\n suffix (str): Optional responder name suffix for this route. 
If\n a suffix is provided, Falcon will map GET requests to\n ``on_get_{suffix}()``, POST requests to ``on_post_{suffix}()``,\n etc.\n\n Returns:\n dict: A mapping of HTTP methods to explicitly defined resource responders.\n\n \"\"\"\n\n method_map = {}\n\n for method in constants.COMBINED_METHODS:\n try:\n responder_name = 'on_' + method.lower()\n if suffix:\n responder_name += '_' + suffix\n\n responder = getattr(resource, responder_name)\n except AttributeError:\n # resource does not implement this method\n pass\n else:\n # Usually expect a method, but any callable will do\n if callable(responder):\n method_map[method] = responder\n\n # If suffix is specified and doesn't map to any methods, raise an error\n if suffix and not method_map:\n raise SuffixedMethodNotFoundError(\n 'No responders found for the specified suffix'\n )\n\n return method_map\n\n\ndef set_default_responders(method_map, asgi=False):\n \"\"\"Map HTTP methods not explicitly defined on a resource to default responders.\n\n Args:\n method_map: A dict with HTTP methods mapped to responders explicitly\n defined in a resource.\n asgi (bool): ``True`` if using an ASGI app, ``False`` otherwise\n (default ``False``).\n \"\"\"\n\n # Attach a resource for unsupported HTTP methods\n allowed_methods = [\n m for m in sorted(list(method_map.keys())) if m not in constants._META_METHODS\n ]\n\n if 'OPTIONS' not in method_map:\n # OPTIONS itself is intentionally excluded from the Allow header\n opt_responder = responders.create_default_options(allowed_methods, asgi=asgi)\n method_map['OPTIONS'] = opt_responder\n allowed_methods.append('OPTIONS')\n\n na_responder = responders.create_method_not_allowed(allowed_methods, asgi=asgi)\n\n for method in constants.COMBINED_METHODS:\n if method not in method_map:\n method_map[method] = na_responder\n", "path": "falcon/routing/util.py"}]} | 2,369 | 229 |
gh_patches_debug_25426 | rasdani/github-patches | git_diff | electricitymaps__electricitymaps-contrib-3679 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CA-PE production parser down
## Description
This is an automatic error report generated for Canada Prince Edward Island (CA-PE).
Issues:
- No recent data found for `production` parser
## Suggestions
- Try running the parser locally using the command `poetry run test_parser CA-PE production`
- <a href="https://storage.googleapis.com/electricitymap-parser-logs/CA-PE.html">Explore the runtime logs</a>
You can see an overview of all parser issues [here](https://github.com/tmrowco/electricitymap-contrib/wiki/Parser-issues).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parsers/CA_PE.py`
Content:
```
1 #!/usr/bin/env python3
2
3 import json
4
5 # The arrow library is used to handle datetimes consistently with other parsers
6 import arrow
7
8 # The request library is used to fetch content through HTTP
9 import requests
10
11
12 timezone = 'Canada/Atlantic'
13
14
15 def _find_pei_key(pei_list, sought_key):
16 matching_item = [item for item in pei_list
17 if 'header' in item['data']
18 and item['data']['header'].startswith(sought_key)]
19
20 if not matching_item:
21 return None
22
23 return matching_item[0]['data']['actualValue']
24
25
26 def _get_pei_info(requests_obj):
27 url = 'https://wdf.princeedwardisland.ca/workflow'
28 request = {'featureName': 'WindEnergy'}
29 headers = {'Content-Type': 'application/json'}
30 response = requests_obj.post(url, data=json.dumps(request), headers=headers)
31
32 raw_data = response.json().get('data', [])
33
34 datetime_item = [item['data']['text'] for item in raw_data
35 if 'text' in item['data']]
36 if not datetime_item:
37 # unable to get a timestamp, return empty
38 return None
39 datetime_text = datetime_item[0][len('Last updated '):]
40 data_timestamp = arrow.get(datetime_text, 'MMMM D, YYYY HH:mm A').replace(tzinfo='Canada/Atlantic')
41
42 # see https://ruk.ca/content/new-api-endpoint-pei-wind for more info
43 data = {
44 'pei_load': _find_pei_key(raw_data, 'Total On-Island Load'),
45 'pei_wind_gen': _find_pei_key(raw_data, 'Total On-Island Wind Generation'),
46 'pei_fossil_gen': _find_pei_key(raw_data, 'Total On-Island Fossil Fuel Generation'),
47 'pei_wind_used': _find_pei_key(raw_data, 'Wind Power Used On Island'),
48 'pei_wind_exported': _find_pei_key(raw_data, 'Wind Power Exported Off Island'),
49 'datetime': data_timestamp.datetime
50 }
51
52 # the following keys are always required downstream, if we don't have them, no sense returning
53 if data['pei_wind_gen'] is None or data['pei_fossil_gen'] is None:
54 return None
55
56 return data
57
58
59 def fetch_production(zone_key='CA-PE', session=None, target_datetime=None, logger=None) -> dict:
60 """Requests the last known production mix (in MW) of a given country."""
61 if target_datetime:
62 raise NotImplementedError('This parser is not yet able to parse past dates')
63
64 requests_obj = session or requests.session()
65 pei_info = _get_pei_info(requests_obj)
66
67 if pei_info is None:
68 return None
69
70 data = {
71 'datetime': pei_info['datetime'],
72 'zoneKey': zone_key,
73 'production': {
74 'wind': pei_info['pei_wind_gen'],
75
76 # These are oil-fueled ("heavy fuel oil" and "diesel") generators
77 # used as peakers and back-up
78 'oil': pei_info['pei_fossil_gen'],
79
80 # specify some sources that definitely aren't present on PEI as zero,
81 # this allows the analyzer to better estimate CO2eq
82 'coal': 0,
83 'hydro': 0,
84 'nuclear': 0,
85 'geothermal': 0
86 },
87 'storage': {},
88 'source': 'princeedwardisland.ca'
89 }
90
91 return data
92
93
94 def fetch_exchange(zone_key1, zone_key2, session=None, target_datetime=None, logger=None) -> dict:
95 """Requests the last known power exchange (in MW) between two regions."""
96 if target_datetime:
97 raise NotImplementedError('This parser is not yet able to parse past dates')
98
99 sorted_zone_keys = '->'.join(sorted([zone_key1, zone_key2]))
100
101 if sorted_zone_keys != 'CA-NB->CA-PE':
102 raise NotImplementedError('This exchange pair is not implemented')
103
104 requests_obj = session or requests.session()
105 pei_info = _get_pei_info(requests_obj)
106
107 if pei_info is None or pei_info['pei_load'] is None:
108 return None
109
110 # PEI imports most of its electricity. Everything not generated on island
111 # is imported from New Brunswick.
112 # In case of wind, some is paper-"exported" even if there is a net import,
113 # and 'pei_wind_used'/'data5' indicates their accounting of part of the load
114 # served by non-exported wind.
115 # # http://www.gov.pe.ca/windenergy/chart.php says:
116 # "Wind Power Exported Off-Island is that portion of wind generation that is supplying
117 # contracts elsewhere. The actual electricity from this portion of wind generation
118 # may stay within PEI but is satisfying a contractual arrangement in another jurisdiction."
119 # We are ignoring these paper exports, as they are an accounting/legal detail
120 # that doesn't actually reflect what happens on the wires.
121 # (New Brunswick being the only interconnection with PEI, "exporting" wind power to NB
122 # then "importing" a balance of NB electricity likely doesn't actually happen.)
123 imported_from_nb = (pei_info['pei_load'] - pei_info['pei_fossil_gen'] - pei_info['pei_wind_gen'])
124
125 # In expected result, "net" represents an export.
126 # We have sorted_zone_keys 'CA-NB->CA-PE', so it's export *from* NB,
127 # and import *to* PEI.
128 data = {
129 'datetime': pei_info['datetime'],
130 'sortedZoneKeys': sorted_zone_keys,
131 'netFlow': imported_from_nb,
132 'source': 'princeedwardisland.ca'
133 }
134
135 return data
136
137
138 if __name__ == '__main__':
139 """Main method, never used by the Electricity Map backend, but handy for testing."""
140
141 print('fetch_production() ->')
142 print(fetch_production())
143
144 print('fetch_exchange("CA-PE", "CA-NB") ->')
145 print(fetch_exchange("CA-PE", "CA-NB"))
146
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/parsers/CA_PE.py b/parsers/CA_PE.py
--- a/parsers/CA_PE.py
+++ b/parsers/CA_PE.py
@@ -25,7 +25,7 @@
def _get_pei_info(requests_obj):
url = 'https://wdf.princeedwardisland.ca/workflow'
- request = {'featureName': 'WindEnergy'}
+ request = {'featureName': 'WindEnergy', 'queryName': 'WindEnergy'}
headers = {'Content-Type': 'application/json'}
response = requests_obj.post(url, data=json.dumps(request), headers=headers)
@@ -112,7 +112,7 @@
# In case of wind, some is paper-"exported" even if there is a net import,
# and 'pei_wind_used'/'data5' indicates their accounting of part of the load
# served by non-exported wind.
- # # http://www.gov.pe.ca/windenergy/chart.php says:
+ # https://www.princeedwardisland.ca/en/feature/pei-wind-energy says:
# "Wind Power Exported Off-Island is that portion of wind generation that is supplying
# contracts elsewhere. The actual electricity from this portion of wind generation
# may stay within PEI but is satisfying a contractual arrangement in another jurisdiction."
| {"golden_diff": "diff --git a/parsers/CA_PE.py b/parsers/CA_PE.py\n--- a/parsers/CA_PE.py\n+++ b/parsers/CA_PE.py\n@@ -25,7 +25,7 @@\n \n def _get_pei_info(requests_obj):\n url = 'https://wdf.princeedwardisland.ca/workflow'\n- request = {'featureName': 'WindEnergy'}\n+ request = {'featureName': 'WindEnergy', 'queryName': 'WindEnergy'}\n headers = {'Content-Type': 'application/json'}\n response = requests_obj.post(url, data=json.dumps(request), headers=headers)\n \n@@ -112,7 +112,7 @@\n # In case of wind, some is paper-\"exported\" even if there is a net import,\n # and 'pei_wind_used'/'data5' indicates their accounting of part of the load\n # served by non-exported wind.\n- # # http://www.gov.pe.ca/windenergy/chart.php says:\n+ # https://www.princeedwardisland.ca/en/feature/pei-wind-energy says:\n # \"Wind Power Exported Off-Island is that portion of wind generation that is supplying\n # contracts elsewhere. The actual electricity from this portion of wind generation\n # may stay within PEI but is satisfying a contractual arrangement in another jurisdiction.\"\n", "issue": "CA-PE production parser down\n## Description\n\nThis is an automatic error report generated for Canada Prince Edward Island (CA-PE).\n\nIssues:\n- No recent data found for `production` parser\n\n## Suggestions\n- Try running the parser locally using the command `poetry run test_parser CA-PE production`\n- <a href=\"https://storage.googleapis.com/electricitymap-parser-logs/CA-PE.html\">Explore the runtime logs</a>\n\nYou can see an overview of all parser issues [here](https://github.com/tmrowco/electricitymap-contrib/wiki/Parser-issues).\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport json\n\n# The arrow library is used to handle datetimes consistently with other parsers\nimport arrow\n\n# The request library is used to fetch content through HTTP\nimport requests\n\n\ntimezone = 'Canada/Atlantic'\n\n\ndef _find_pei_key(pei_list, sought_key):\n matching_item = [item for item in pei_list\n if 'header' in item['data']\n and item['data']['header'].startswith(sought_key)]\n\n if not matching_item:\n return None\n\n return matching_item[0]['data']['actualValue']\n\n\ndef _get_pei_info(requests_obj):\n url = 'https://wdf.princeedwardisland.ca/workflow'\n request = {'featureName': 'WindEnergy'}\n headers = {'Content-Type': 'application/json'}\n response = requests_obj.post(url, data=json.dumps(request), headers=headers)\n\n raw_data = response.json().get('data', [])\n\n datetime_item = [item['data']['text'] for item in raw_data\n if 'text' in item['data']]\n if not datetime_item:\n # unable to get a timestamp, return empty\n return None\n datetime_text = datetime_item[0][len('Last updated '):]\n data_timestamp = arrow.get(datetime_text, 'MMMM D, YYYY HH:mm A').replace(tzinfo='Canada/Atlantic')\n\n # see https://ruk.ca/content/new-api-endpoint-pei-wind for more info\n data = {\n 'pei_load': _find_pei_key(raw_data, 'Total On-Island Load'),\n 'pei_wind_gen': _find_pei_key(raw_data, 'Total On-Island Wind Generation'),\n 'pei_fossil_gen': _find_pei_key(raw_data, 'Total On-Island Fossil Fuel Generation'),\n 'pei_wind_used': _find_pei_key(raw_data, 'Wind Power Used On Island'),\n 'pei_wind_exported': _find_pei_key(raw_data, 'Wind Power Exported Off Island'),\n 'datetime': data_timestamp.datetime\n }\n\n # the following keys are always required downstream, if we don't have them, no sense returning\n if data['pei_wind_gen'] is None or data['pei_fossil_gen'] is None:\n return None\n\n return data\n\n\ndef 
fetch_production(zone_key='CA-PE', session=None, target_datetime=None, logger=None) -> dict:\n \"\"\"Requests the last known production mix (in MW) of a given country.\"\"\"\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n requests_obj = session or requests.session()\n pei_info = _get_pei_info(requests_obj)\n\n if pei_info is None:\n return None\n\n data = {\n 'datetime': pei_info['datetime'],\n 'zoneKey': zone_key,\n 'production': {\n 'wind': pei_info['pei_wind_gen'],\n\n # These are oil-fueled (\"heavy fuel oil\" and \"diesel\") generators\n # used as peakers and back-up\n 'oil': pei_info['pei_fossil_gen'],\n\n # specify some sources that definitely aren't present on PEI as zero,\n # this allows the analyzer to better estimate CO2eq\n 'coal': 0,\n 'hydro': 0,\n 'nuclear': 0,\n 'geothermal': 0\n },\n 'storage': {},\n 'source': 'princeedwardisland.ca'\n }\n\n return data\n\n\ndef fetch_exchange(zone_key1, zone_key2, session=None, target_datetime=None, logger=None) -> dict:\n \"\"\"Requests the last known power exchange (in MW) between two regions.\"\"\"\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n sorted_zone_keys = '->'.join(sorted([zone_key1, zone_key2]))\n\n if sorted_zone_keys != 'CA-NB->CA-PE':\n raise NotImplementedError('This exchange pair is not implemented')\n\n requests_obj = session or requests.session()\n pei_info = _get_pei_info(requests_obj)\n\n if pei_info is None or pei_info['pei_load'] is None:\n return None\n\n # PEI imports most of its electricity. Everything not generated on island\n # is imported from New Brunswick.\n # In case of wind, some is paper-\"exported\" even if there is a net import,\n # and 'pei_wind_used'/'data5' indicates their accounting of part of the load\n # served by non-exported wind.\n # # http://www.gov.pe.ca/windenergy/chart.php says:\n # \"Wind Power Exported Off-Island is that portion of wind generation that is supplying\n # contracts elsewhere. 
The actual electricity from this portion of wind generation\n # may stay within PEI but is satisfying a contractual arrangement in another jurisdiction.\"\n # We are ignoring these paper exports, as they are an accounting/legal detail\n # that doesn't actually reflect what happens on the wires.\n # (New Brunswick being the only interconnection with PEI, \"exporting\" wind power to NB\n # then \"importing\" a balance of NB electricity likely doesn't actually happen.)\n imported_from_nb = (pei_info['pei_load'] - pei_info['pei_fossil_gen'] - pei_info['pei_wind_gen'])\n\n # In expected result, \"net\" represents an export.\n # We have sorted_zone_keys 'CA-NB->CA-PE', so it's export *from* NB,\n # and import *to* PEI.\n data = {\n 'datetime': pei_info['datetime'],\n 'sortedZoneKeys': sorted_zone_keys,\n 'netFlow': imported_from_nb,\n 'source': 'princeedwardisland.ca'\n }\n\n return data\n\n\nif __name__ == '__main__':\n \"\"\"Main method, never used by the Electricity Map backend, but handy for testing.\"\"\"\n\n print('fetch_production() ->')\n print(fetch_production())\n\n print('fetch_exchange(\"CA-PE\", \"CA-NB\") ->')\n print(fetch_exchange(\"CA-PE\", \"CA-NB\"))\n", "path": "parsers/CA_PE.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nimport json\n\n# The arrow library is used to handle datetimes consistently with other parsers\nimport arrow\n\n# The request library is used to fetch content through HTTP\nimport requests\n\n\ntimezone = 'Canada/Atlantic'\n\n\ndef _find_pei_key(pei_list, sought_key):\n matching_item = [item for item in pei_list\n if 'header' in item['data']\n and item['data']['header'].startswith(sought_key)]\n\n if not matching_item:\n return None\n\n return matching_item[0]['data']['actualValue']\n\n\ndef _get_pei_info(requests_obj):\n url = 'https://wdf.princeedwardisland.ca/workflow'\n request = {'featureName': 'WindEnergy', 'queryName': 'WindEnergy'}\n headers = {'Content-Type': 'application/json'}\n response = requests_obj.post(url, data=json.dumps(request), headers=headers)\n\n raw_data = response.json().get('data', [])\n\n datetime_item = [item['data']['text'] for item in raw_data\n if 'text' in item['data']]\n if not datetime_item:\n # unable to get a timestamp, return empty\n return None\n datetime_text = datetime_item[0][len('Last updated '):]\n data_timestamp = arrow.get(datetime_text, 'MMMM D, YYYY HH:mm A').replace(tzinfo='Canada/Atlantic')\n\n # see https://ruk.ca/content/new-api-endpoint-pei-wind for more info\n data = {\n 'pei_load': _find_pei_key(raw_data, 'Total On-Island Load'),\n 'pei_wind_gen': _find_pei_key(raw_data, 'Total On-Island Wind Generation'),\n 'pei_fossil_gen': _find_pei_key(raw_data, 'Total On-Island Fossil Fuel Generation'),\n 'pei_wind_used': _find_pei_key(raw_data, 'Wind Power Used On Island'),\n 'pei_wind_exported': _find_pei_key(raw_data, 'Wind Power Exported Off Island'),\n 'datetime': data_timestamp.datetime\n }\n\n # the following keys are always required downstream, if we don't have them, no sense returning\n if data['pei_wind_gen'] is None or data['pei_fossil_gen'] is None:\n return None\n\n return data\n\n\ndef fetch_production(zone_key='CA-PE', session=None, target_datetime=None, logger=None) -> dict:\n \"\"\"Requests the last known production mix (in MW) of a given country.\"\"\"\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n requests_obj = session or requests.session()\n pei_info = _get_pei_info(requests_obj)\n\n if pei_info is None:\n return None\n\n 
data = {\n 'datetime': pei_info['datetime'],\n 'zoneKey': zone_key,\n 'production': {\n 'wind': pei_info['pei_wind_gen'],\n\n # These are oil-fueled (\"heavy fuel oil\" and \"diesel\") generators\n # used as peakers and back-up\n 'oil': pei_info['pei_fossil_gen'],\n\n # specify some sources that definitely aren't present on PEI as zero,\n # this allows the analyzer to better estimate CO2eq\n 'coal': 0,\n 'hydro': 0,\n 'nuclear': 0,\n 'geothermal': 0\n },\n 'storage': {},\n 'source': 'princeedwardisland.ca'\n }\n\n return data\n\n\ndef fetch_exchange(zone_key1, zone_key2, session=None, target_datetime=None, logger=None) -> dict:\n \"\"\"Requests the last known power exchange (in MW) between two regions.\"\"\"\n if target_datetime:\n raise NotImplementedError('This parser is not yet able to parse past dates')\n\n sorted_zone_keys = '->'.join(sorted([zone_key1, zone_key2]))\n\n if sorted_zone_keys != 'CA-NB->CA-PE':\n raise NotImplementedError('This exchange pair is not implemented')\n\n requests_obj = session or requests.session()\n pei_info = _get_pei_info(requests_obj)\n\n if pei_info is None or pei_info['pei_load'] is None:\n return None\n\n # PEI imports most of its electricity. Everything not generated on island\n # is imported from New Brunswick.\n # In case of wind, some is paper-\"exported\" even if there is a net import,\n # and 'pei_wind_used'/'data5' indicates their accounting of part of the load\n # served by non-exported wind.\n # https://www.princeedwardisland.ca/en/feature/pei-wind-energy says:\n # \"Wind Power Exported Off-Island is that portion of wind generation that is supplying\n # contracts elsewhere. The actual electricity from this portion of wind generation\n # may stay within PEI but is satisfying a contractual arrangement in another jurisdiction.\"\n # We are ignoring these paper exports, as they are an accounting/legal detail\n # that doesn't actually reflect what happens on the wires.\n # (New Brunswick being the only interconnection with PEI, \"exporting\" wind power to NB\n # then \"importing\" a balance of NB electricity likely doesn't actually happen.)\n imported_from_nb = (pei_info['pei_load'] - pei_info['pei_fossil_gen'] - pei_info['pei_wind_gen'])\n\n # In expected result, \"net\" represents an export.\n # We have sorted_zone_keys 'CA-NB->CA-PE', so it's export *from* NB,\n # and import *to* PEI.\n data = {\n 'datetime': pei_info['datetime'],\n 'sortedZoneKeys': sorted_zone_keys,\n 'netFlow': imported_from_nb,\n 'source': 'princeedwardisland.ca'\n }\n\n return data\n\n\nif __name__ == '__main__':\n \"\"\"Main method, never used by the Electricity Map backend, but handy for testing.\"\"\"\n\n print('fetch_production() ->')\n print(fetch_production())\n\n print('fetch_exchange(\"CA-PE\", \"CA-NB\") ->')\n print(fetch_exchange(\"CA-PE\", \"CA-NB\"))\n", "path": "parsers/CA_PE.py"}]} | 2,068 | 296 |
gh_patches_debug_9247 | rasdani/github-patches | git_diff | lnbits__lnbits-750 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Switching to `FakeWallet` deletes pending outgoing payments from another backend.
Needs investigation.
What I observed: create an outgoing pending payment from another wallet, stop LNbits, change to `FakeWallet`, start LNbits. 
-> Payments get deleted because of a `failed` state that is returned in `lnbits.core.models:check_pending:148`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lnbits/wallets/fake.py`
Content:
```
1 import asyncio
2 import hashlib
3 import random
4 from datetime import datetime
5 from os import getenv
6 from typing import AsyncGenerator, Dict, Optional
7
8 from environs import Env # type: ignore
9 from loguru import logger
10
11 from lnbits.helpers import urlsafe_short_hash
12
13 from ..bolt11 import decode, encode
14 from .base import (
15 InvoiceResponse,
16 PaymentResponse,
17 PaymentStatus,
18 StatusResponse,
19 Wallet,
20 )
21
22 env = Env()
23 env.read_env()
24
25
26 class FakeWallet(Wallet):
27 async def status(self) -> StatusResponse:
28 logger.info(
29 "FakeWallet funding source is for using LNbits as a centralised, stand-alone payment system with brrrrrr."
30 )
31 return StatusResponse(None, float("inf"))
32
33 async def create_invoice(
34 self,
35 amount: int,
36 memo: Optional[str] = None,
37 description_hash: Optional[bytes] = None,
38 ) -> InvoiceResponse:
39 # we set a default secret since FakeWallet is used for internal=True invoices
40 # and the user might not have configured a secret yet
41 secret = env.str("FAKE_WALLET_SECTRET", default="ToTheMoon1")
42 data: Dict = {
43 "out": False,
44 "amount": amount,
45 "currency": "bc",
46 "privkey": hashlib.pbkdf2_hmac(
47 "sha256",
48 secret.encode("utf-8"),
49 ("FakeWallet").encode("utf-8"),
50 2048,
51 32,
52 ).hex(),
53 "memo": None,
54 "description_hash": None,
55 "description": "",
56 "fallback": None,
57 "expires": None,
58 "route": None,
59 }
60 data["amount"] = amount * 1000
61 data["timestamp"] = datetime.now().timestamp()
62 if description_hash:
63 data["tags_set"] = ["h"]
64 data["description_hash"] = description_hash.hex()
65 else:
66 data["tags_set"] = ["d"]
67 data["memo"] = memo
68 data["description"] = memo
69 randomHash = (
70 data["privkey"][:6]
71 + hashlib.sha256(str(random.getrandbits(256)).encode("utf-8")).hexdigest()[
72 6:
73 ]
74 )
75 data["paymenthash"] = randomHash
76 payment_request = encode(data)
77 checking_id = randomHash
78
79 return InvoiceResponse(True, checking_id, payment_request)
80
81 async def pay_invoice(self, bolt11: str, fee_limit_msat: int) -> PaymentResponse:
82 invoice = decode(bolt11)
83 if (
84 hasattr(invoice, "checking_id")
85 and invoice.checking_id[6:] == data["privkey"][:6]
86 ):
87 return PaymentResponse(True, invoice.payment_hash, 0)
88 else:
89 return PaymentResponse(
90 ok=False, error_message="Only internal invoices can be used!"
91 )
92
93 async def get_invoice_status(self, checking_id: str) -> PaymentStatus:
94 return PaymentStatus(False)
95
96 async def get_payment_status(self, checking_id: str) -> PaymentStatus:
97 return PaymentStatus(False)
98
99 async def paid_invoices_stream(self) -> AsyncGenerator[str, None]:
100 self.queue = asyncio.Queue(0)
101 while True:
102 value = await self.queue.get()
103 yield value
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lnbits/wallets/fake.py b/lnbits/wallets/fake.py
--- a/lnbits/wallets/fake.py
+++ b/lnbits/wallets/fake.py
@@ -91,10 +91,10 @@
)
async def get_invoice_status(self, checking_id: str) -> PaymentStatus:
- return PaymentStatus(False)
+ return PaymentStatus(None)
async def get_payment_status(self, checking_id: str) -> PaymentStatus:
- return PaymentStatus(False)
+ return PaymentStatus(None)
async def paid_invoices_stream(self) -> AsyncGenerator[str, None]:
self.queue = asyncio.Queue(0)
| {"golden_diff": "diff --git a/lnbits/wallets/fake.py b/lnbits/wallets/fake.py\n--- a/lnbits/wallets/fake.py\n+++ b/lnbits/wallets/fake.py\n@@ -91,10 +91,10 @@\n )\n \n async def get_invoice_status(self, checking_id: str) -> PaymentStatus:\n- return PaymentStatus(False)\n+ return PaymentStatus(None)\n \n async def get_payment_status(self, checking_id: str) -> PaymentStatus:\n- return PaymentStatus(False)\n+ return PaymentStatus(None)\n \n async def paid_invoices_stream(self) -> AsyncGenerator[str, None]:\n self.queue = asyncio.Queue(0)\n", "issue": "Switching to `FakeWallet` deletes pending outgoing payments from another backend.\nNeeds investigation.\r\n\r\nWhat I observed: create outgoing pending payment from other wallet, stop LNbits, change to `FakeWallet`, start LNbits. \r\n\r\n-> Payments get deleted because of a `failed` state that is returned in `lnbits.core.models:check_pending:148`.\n", "before_files": [{"content": "import asyncio\nimport hashlib\nimport random\nfrom datetime import datetime\nfrom os import getenv\nfrom typing import AsyncGenerator, Dict, Optional\n\nfrom environs import Env # type: ignore\nfrom loguru import logger\n\nfrom lnbits.helpers import urlsafe_short_hash\n\nfrom ..bolt11 import decode, encode\nfrom .base import (\n InvoiceResponse,\n PaymentResponse,\n PaymentStatus,\n StatusResponse,\n Wallet,\n)\n\nenv = Env()\nenv.read_env()\n\n\nclass FakeWallet(Wallet):\n async def status(self) -> StatusResponse:\n logger.info(\n \"FakeWallet funding source is for using LNbits as a centralised, stand-alone payment system with brrrrrr.\"\n )\n return StatusResponse(None, float(\"inf\"))\n\n async def create_invoice(\n self,\n amount: int,\n memo: Optional[str] = None,\n description_hash: Optional[bytes] = None,\n ) -> InvoiceResponse:\n # we set a default secret since FakeWallet is used for internal=True invoices\n # and the user might not have configured a secret yet\n secret = env.str(\"FAKE_WALLET_SECTRET\", default=\"ToTheMoon1\")\n data: Dict = {\n \"out\": False,\n \"amount\": amount,\n \"currency\": \"bc\",\n \"privkey\": hashlib.pbkdf2_hmac(\n \"sha256\",\n secret.encode(\"utf-8\"),\n (\"FakeWallet\").encode(\"utf-8\"),\n 2048,\n 32,\n ).hex(),\n \"memo\": None,\n \"description_hash\": None,\n \"description\": \"\",\n \"fallback\": None,\n \"expires\": None,\n \"route\": None,\n }\n data[\"amount\"] = amount * 1000\n data[\"timestamp\"] = datetime.now().timestamp()\n if description_hash:\n data[\"tags_set\"] = [\"h\"]\n data[\"description_hash\"] = description_hash.hex()\n else:\n data[\"tags_set\"] = [\"d\"]\n data[\"memo\"] = memo\n data[\"description\"] = memo\n randomHash = (\n data[\"privkey\"][:6]\n + hashlib.sha256(str(random.getrandbits(256)).encode(\"utf-8\")).hexdigest()[\n 6:\n ]\n )\n data[\"paymenthash\"] = randomHash\n payment_request = encode(data)\n checking_id = randomHash\n\n return InvoiceResponse(True, checking_id, payment_request)\n\n async def pay_invoice(self, bolt11: str, fee_limit_msat: int) -> PaymentResponse:\n invoice = decode(bolt11)\n if (\n hasattr(invoice, \"checking_id\")\n and invoice.checking_id[6:] == data[\"privkey\"][:6]\n ):\n return PaymentResponse(True, invoice.payment_hash, 0)\n else:\n return PaymentResponse(\n ok=False, error_message=\"Only internal invoices can be used!\"\n )\n\n async def get_invoice_status(self, checking_id: str) -> PaymentStatus:\n return PaymentStatus(False)\n\n async def get_payment_status(self, checking_id: str) -> PaymentStatus:\n return PaymentStatus(False)\n\n async def 
paid_invoices_stream(self) -> AsyncGenerator[str, None]:\n self.queue = asyncio.Queue(0)\n while True:\n value = await self.queue.get()\n yield value\n", "path": "lnbits/wallets/fake.py"}], "after_files": [{"content": "import asyncio\nimport hashlib\nimport random\nfrom datetime import datetime\nfrom os import getenv\nfrom typing import AsyncGenerator, Dict, Optional\n\nfrom environs import Env # type: ignore\nfrom loguru import logger\n\nfrom lnbits.helpers import urlsafe_short_hash\n\nfrom ..bolt11 import decode, encode\nfrom .base import (\n InvoiceResponse,\n PaymentResponse,\n PaymentStatus,\n StatusResponse,\n Wallet,\n)\n\nenv = Env()\nenv.read_env()\n\n\nclass FakeWallet(Wallet):\n async def status(self) -> StatusResponse:\n logger.info(\n \"FakeWallet funding source is for using LNbits as a centralised, stand-alone payment system with brrrrrr.\"\n )\n return StatusResponse(None, float(\"inf\"))\n\n async def create_invoice(\n self,\n amount: int,\n memo: Optional[str] = None,\n description_hash: Optional[bytes] = None,\n ) -> InvoiceResponse:\n # we set a default secret since FakeWallet is used for internal=True invoices\n # and the user might not have configured a secret yet\n secret = env.str(\"FAKE_WALLET_SECTRET\", default=\"ToTheMoon1\")\n data: Dict = {\n \"out\": False,\n \"amount\": amount,\n \"currency\": \"bc\",\n \"privkey\": hashlib.pbkdf2_hmac(\n \"sha256\",\n secret.encode(\"utf-8\"),\n (\"FakeWallet\").encode(\"utf-8\"),\n 2048,\n 32,\n ).hex(),\n \"memo\": None,\n \"description_hash\": None,\n \"description\": \"\",\n \"fallback\": None,\n \"expires\": None,\n \"route\": None,\n }\n data[\"amount\"] = amount * 1000\n data[\"timestamp\"] = datetime.now().timestamp()\n if description_hash:\n data[\"tags_set\"] = [\"h\"]\n data[\"description_hash\"] = description_hash.hex()\n else:\n data[\"tags_set\"] = [\"d\"]\n data[\"memo\"] = memo\n data[\"description\"] = memo\n randomHash = (\n data[\"privkey\"][:6]\n + hashlib.sha256(str(random.getrandbits(256)).encode(\"utf-8\")).hexdigest()[\n 6:\n ]\n )\n data[\"paymenthash\"] = randomHash\n payment_request = encode(data)\n checking_id = randomHash\n\n return InvoiceResponse(True, checking_id, payment_request)\n\n async def pay_invoice(self, bolt11: str, fee_limit_msat: int) -> PaymentResponse:\n invoice = decode(bolt11)\n if (\n hasattr(invoice, \"checking_id\")\n and invoice.checking_id[6:] == data[\"privkey\"][:6]\n ):\n return PaymentResponse(True, invoice.payment_hash, 0)\n else:\n return PaymentResponse(\n ok=False, error_message=\"Only internal invoices can be used!\"\n )\n\n async def get_invoice_status(self, checking_id: str) -> PaymentStatus:\n return PaymentStatus(None)\n\n async def get_payment_status(self, checking_id: str) -> PaymentStatus:\n return PaymentStatus(None)\n\n async def paid_invoices_stream(self) -> AsyncGenerator[str, None]:\n self.queue = asyncio.Queue(0)\n while True:\n value = await self.queue.get()\n yield value\n", "path": "lnbits/wallets/fake.py"}]} | 1,272 | 154 |
gh_patches_debug_16050 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-369 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
docker stdout log formatting
Currently the docker output log is too messy. For example:
```
{"stream":"\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\r21970944/26421880 [=======================\u003e......] - ETA: 40s"}
{"stream":"\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\r21987328/26421880 [=======================\u003e......] - ETA: 40s"}
{"stream":"\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\r22003712/26421880 [=======================\u003e......] - ETA: 40s"}
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticdl/client/client.py`
Content:
```
1 import argparse
2 import os
3 import inspect
4 import tempfile
5 import time
6 import getpass
7 import sys
8 from string import Template
9 import docker
10 import yaml
11 from kubernetes.client.apis import core_v1_api
12 from kubernetes import config
13
14
15 def _m_file_in_docker(model_file):
16 return "/model/" + os.path.basename(model_file)
17
18 def _build_docker_image(
19 m_file, image_name, image_base="elasticdl:dev",
20 repository=None
21 ):
22 DOCKER_TEMPLATE = """
23 FROM {}
24 COPY {} {}
25 """
26
27 with tempfile.NamedTemporaryFile(mode="w+", delete=False) as df:
28 df.write(DOCKER_TEMPLATE.format(image_base, m_file, _m_file_in_docker(m_file)))
29
30 client = docker.APIClient(base_url="unix://var/run/docker.sock")
31 for line in client.build(
32 dockerfile=df.name, path=".", rm=True, tag=image_name
33 ):
34 print(str(line, encoding="utf-8"))
35
36 if repository != None:
37 for line in client.push(image_name, stream=True, decode=True):
38 print(line)
39
40 def _gen_master_def(image_name, model_file, job_name, argv):
41 master_yaml = """
42 apiVersion: v1
43 kind: Pod
44 metadata:
45 name: "elasticdl-master-{job_name}"
46 labels:
47 purpose: test-command
48 spec:
49 containers:
50 - name: "elasticdl-master-{job_name}"
51 image: "{image_name}"
52 command: ["python"]
53 args: [
54 "-m", "elasticdl.master.main",
55 "--job_name", "{job_name}",
56 "--worker_image", "{image_name}",
57 "--model_file", "{m_file}"
58 ]
59 imagePullPolicy: IfNotPresent
60 env:
61 - name: MY_POD_IP
62 valueFrom:
63 fieldRef:
64 fieldPath: status.podIP
65 restartPolicy: Never
66 """ .format(m_file=_m_file_in_docker(model_file), image_name=image_name, job_name=job_name)
67
68 master_def = yaml.safe_load(master_yaml)
69
70 # Build master arguments
71 master_def['spec']['containers'][0]['args'].extend(argv)
72 return master_def
73
74 def _submit(image_name, model_file, job_name, argv):
75 master_def = _gen_master_def(image_name, model_file, job_name, argv)
76 config.load_kube_config()
77 api = core_v1_api.CoreV1Api()
78 resp = api.create_namespaced_pod(body=master_def, namespace="default")
79 print("Master launched. status='%s'" % str(resp.status))
80
81 def main():
82 parser = argparse.ArgumentParser(description="ElasticDL Client")
83 # Rewrite model_file argument and pass all other arguments to master.
84 parser.add_argument("--model_file", help="Path to Model file", required=True)
85 parser.add_argument("--image_base", help="Base image containing elasticdl runtime environment.", required=True)
86 parser.add_argument("--repository", help="The repository to push docker image to.")
87 parser.add_argument("--job_name", help="ElasticDL job name", required=True)
88 args, argv = parser.parse_known_args()
89
90 job_name = args.job_name + "-" + str(int(round(time.time() * 1000)))
91 image_name = args.image_base + '_' + job_name
92 _build_docker_image(args.model_file, image_name, image_base=args.image_base,
93 repository=args.repository)
94 _submit(image_name, args.model_file, job_name, argv)
95
96
97 if __name__ == "__main__":
98 main()
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticdl/client/client.py b/elasticdl/client/client.py
--- a/elasticdl/client/client.py
+++ b/elasticdl/client/client.py
@@ -28,11 +28,15 @@
df.write(DOCKER_TEMPLATE.format(image_base, m_file, _m_file_in_docker(m_file)))
client = docker.APIClient(base_url="unix://var/run/docker.sock")
+ print("===== Building Docker Image =====")
for line in client.build(
- dockerfile=df.name, path=".", rm=True, tag=image_name
+ dockerfile=df.name, path=".", rm=True, tag=image_name, decode=True
):
- print(str(line, encoding="utf-8"))
-
+ text = line.get("stream", None)
+ if text:
+ sys.stdout.write(text)
+ sys.stdout.flush()
+ print("===== Docker Image Built =====")
if repository != None:
for line in client.push(image_name, stream=True, decode=True):
print(line)
| {"golden_diff": "diff --git a/elasticdl/client/client.py b/elasticdl/client/client.py\n--- a/elasticdl/client/client.py\n+++ b/elasticdl/client/client.py\n@@ -28,11 +28,15 @@\n df.write(DOCKER_TEMPLATE.format(image_base, m_file, _m_file_in_docker(m_file)))\n \n client = docker.APIClient(base_url=\"unix://var/run/docker.sock\")\n+ print(\"===== Building Docker Image =====\")\n for line in client.build(\n- dockerfile=df.name, path=\".\", rm=True, tag=image_name\n+ dockerfile=df.name, path=\".\", rm=True, tag=image_name, decode=True\n ):\n- print(str(line, encoding=\"utf-8\"))\n-\n+ text = line.get(\"stream\", None)\n+ if text:\n+ sys.stdout.write(text)\n+ sys.stdout.flush()\n+ print(\"===== Docker Image Built =====\")\n if repository != None:\n for line in client.push(image_name, stream=True, decode=True):\n print(line)\n", "issue": "docker stdout log formatting\nCurrently the docker output log is too messy. for example:\r\n```\r\n{\"stream\":\"\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\r21970944/26421880 [=======================\\u003e......] - ETA: 40s\"}\r\n\r\n{\"stream\":\"\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\r21987328/26421880 [=======================\\u003e......] - ETA: 40s\"}\r\n\r\n{\"stream\":\"\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\u0008\\r22003712/26421880 [=======================\\u003e......] 
- ETA: 40s\"}\r\n```\n", "before_files": [{"content": "import argparse\nimport os\nimport inspect\nimport tempfile\nimport time\nimport getpass\nimport sys\nfrom string import Template\nimport docker\nimport yaml\nfrom kubernetes.client.apis import core_v1_api\nfrom kubernetes import config\n\n\ndef _m_file_in_docker(model_file):\n return \"/model/\" + os.path.basename(model_file)\n\ndef _build_docker_image(\n m_file, image_name, image_base=\"elasticdl:dev\",\n repository=None\n):\n DOCKER_TEMPLATE = \"\"\"\nFROM {}\nCOPY {} {}\n\"\"\"\n\n with tempfile.NamedTemporaryFile(mode=\"w+\", delete=False) as df:\n df.write(DOCKER_TEMPLATE.format(image_base, m_file, _m_file_in_docker(m_file)))\n\n client = docker.APIClient(base_url=\"unix://var/run/docker.sock\")\n for line in client.build(\n dockerfile=df.name, path=\".\", rm=True, tag=image_name\n ):\n print(str(line, encoding=\"utf-8\"))\n\n if repository != None:\n for line in client.push(image_name, stream=True, decode=True):\n print(line)\n\ndef _gen_master_def(image_name, model_file, job_name, argv):\n master_yaml = \"\"\"\napiVersion: v1\nkind: Pod\nmetadata:\n name: \"elasticdl-master-{job_name}\"\n labels:\n purpose: test-command\nspec:\n containers:\n - name: \"elasticdl-master-{job_name}\"\n image: \"{image_name}\"\n command: [\"python\"]\n args: [\n \"-m\", \"elasticdl.master.main\",\n \"--job_name\", \"{job_name}\",\n \"--worker_image\", \"{image_name}\",\n \"--model_file\", \"{m_file}\"\n ]\n imagePullPolicy: IfNotPresent \n env:\n - name: MY_POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n restartPolicy: Never\n\"\"\" .format(m_file=_m_file_in_docker(model_file), image_name=image_name, job_name=job_name)\n\n master_def = yaml.safe_load(master_yaml)\n\n # Build master arguments\n master_def['spec']['containers'][0]['args'].extend(argv)\n return master_def\n\ndef _submit(image_name, model_file, job_name, argv):\n master_def = _gen_master_def(image_name, model_file, job_name, argv)\n config.load_kube_config()\n api = core_v1_api.CoreV1Api()\n resp = api.create_namespaced_pod(body=master_def, namespace=\"default\")\n print(\"Master launched. 
status='%s'\" % str(resp.status))\n\ndef main():\n parser = argparse.ArgumentParser(description=\"ElasticDL Client\")\n # Rewrite model_file argument and pass all other arguments to master.\n parser.add_argument(\"--model_file\", help=\"Path to Model file\", required=True)\n parser.add_argument(\"--image_base\", help=\"Base image containing elasticdl runtime environment.\", required=True)\n parser.add_argument(\"--repository\", help=\"The repository to push docker image to.\")\n parser.add_argument(\"--job_name\", help=\"ElasticDL job name\", required=True)\n args, argv = parser.parse_known_args()\n\n job_name = args.job_name + \"-\" + str(int(round(time.time() * 1000)))\n image_name = args.image_base + '_' + job_name \n _build_docker_image(args.model_file, image_name, image_base=args.image_base,\n repository=args.repository)\n _submit(image_name, args.model_file, job_name, argv)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "elasticdl/client/client.py"}], "after_files": [{"content": "import argparse\nimport os\nimport inspect\nimport tempfile\nimport time\nimport getpass\nimport sys\nfrom string import Template\nimport docker\nimport yaml\nfrom kubernetes.client.apis import core_v1_api\nfrom kubernetes import config\n\n\ndef _m_file_in_docker(model_file):\n return \"/model/\" + os.path.basename(model_file)\n\ndef _build_docker_image(\n m_file, image_name, image_base=\"elasticdl:dev\",\n repository=None\n):\n DOCKER_TEMPLATE = \"\"\"\nFROM {}\nCOPY {} {}\n\"\"\"\n\n with tempfile.NamedTemporaryFile(mode=\"w+\", delete=False) as df:\n df.write(DOCKER_TEMPLATE.format(image_base, m_file, _m_file_in_docker(m_file)))\n\n client = docker.APIClient(base_url=\"unix://var/run/docker.sock\")\n print(\"===== Building Docker Image =====\")\n for line in client.build(\n dockerfile=df.name, path=\".\", rm=True, tag=image_name, decode=True\n ):\n text = line.get(\"stream\", None)\n if text:\n sys.stdout.write(text)\n sys.stdout.flush()\n print(\"===== Docker Image Built =====\")\n if repository != None:\n for line in client.push(image_name, stream=True, decode=True):\n print(line)\n\ndef _gen_master_def(image_name, model_file, job_name, argv):\n master_yaml = \"\"\"\napiVersion: v1\nkind: Pod\nmetadata:\n name: \"elasticdl-master-{job_name}\"\n labels:\n purpose: test-command\nspec:\n containers:\n - name: \"elasticdl-master-{job_name}\"\n image: \"{image_name}\"\n command: [\"python\"]\n args: [\n \"-m\", \"elasticdl.master.main\",\n \"--job_name\", \"{job_name}\",\n \"--worker_image\", \"{image_name}\",\n \"--model_file\", \"{m_file}\"\n ]\n imagePullPolicy: IfNotPresent \n env:\n - name: MY_POD_IP\n valueFrom:\n fieldRef:\n fieldPath: status.podIP\n restartPolicy: Never\n\"\"\" .format(m_file=_m_file_in_docker(model_file), image_name=image_name, job_name=job_name)\n\n master_def = yaml.safe_load(master_yaml)\n\n # Build master arguments\n master_def['spec']['containers'][0]['args'].extend(argv)\n return master_def\n\ndef _submit(image_name, model_file, job_name, argv):\n master_def = _gen_master_def(image_name, model_file, job_name, argv)\n config.load_kube_config()\n api = core_v1_api.CoreV1Api()\n resp = api.create_namespaced_pod(body=master_def, namespace=\"default\")\n print(\"Master launched. 
status='%s'\" % str(resp.status))\n\ndef main():\n parser = argparse.ArgumentParser(description=\"ElasticDL Client\")\n # Rewrite model_file argument and pass all other arguments to master.\n parser.add_argument(\"--model_file\", help=\"Path to Model file\", required=True)\n parser.add_argument(\"--image_base\", help=\"Base image containing elasticdl runtime environment.\", required=True)\n parser.add_argument(\"--repository\", help=\"The repository to push docker image to.\")\n parser.add_argument(\"--job_name\", help=\"ElasticDL job name\", required=True)\n args, argv = parser.parse_known_args()\n\n job_name = args.job_name + \"-\" + str(int(round(time.time() * 1000)))\n image_name = args.image_base + '_' + job_name \n _build_docker_image(args.model_file, image_name, image_base=args.image_base,\n repository=args.repository)\n _submit(image_name, args.model_file, job_name, argv)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "elasticdl/client/client.py"}]} | 2,267 | 223 |
gh_patches_debug_16684 | rasdani/github-patches | git_diff | pwndbg__pwndbg-1569 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Generating configfile with modifed context-sections fails
### Description
Trying to generate a configfile when changing context-sections produces an error.
### Steps to reproduce
This happens with no binary loaded.
Gdb session history:
```
set context-sections ''
configfile
```
With exception-verbose on:
```
pwndbg> configfile
Showing only changed values:
# context-sections: which context sections are displayed (controls order)
Traceback (most recent call last):
File "/home/rene/tempenis/pwndbg/pwndbg/commands/__init__.py", line 184, in __call__
return self.function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rene/tempenis/pwndbg/pwndbg/commands/config.py", line 127, in configfile
configfile_print_scope("config", show_all)
File "/home/rene/tempenis/pwndbg/pwndbg/commands/config.py", line 154, in configfile_print_scope
print("# default: %s" % p.native_default)
^^^^^^^^^^^^^^^^
File "/home/rene/tempenis/pwndbg/pwndbg/lib/config.py", line 64, in __getattr__
return getattr(self.value, name)
^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'native_default'
If that is an issue, you can report it on https://github.com/pwndbg/pwndbg/issues
(Please don't forget to search if it hasn't been reported before)
To generate the report and open a browser, you may run `bugreport --run-browser`
PS: Pull requests are welcome
pwndbg>
```
### My setup
```
Platform: Linux-6.1.0-3-amd64-x86_64-with-glibc2.36
Gdb: 12.1
Python: 3.11.1 (main, Dec 31 2022, 10:23:59) [GCC 12.2.0]
Pwndbg: 1.1.1 build: 47e4e09
Capstone: 4.0.1024
Unicorn: 2.0.1
This GDB was configured as follows:
configure --host=x86_64-linux-gnu --target=x86_64-linux-gnu
--with-auto-load-dir=$debugdir:$datadir/auto-load
--with-auto-load-safe-path=$debugdir:$datadir/auto-load
--with-expat
--with-gdb-datadir=/usr/share/gdb (relocatable)
--with-jit-reader-dir=/usr/lib/gdb (relocatable)
--without-libunwind-ia64
--with-lzma
--with-babeltrace
--with-intel-pt
--with-mpfr
--with-xxhash
--with-python=/usr (relocatable)
--with-python-libdir=/usr/lib (relocatable)
--with-debuginfod
--without-guile
--enable-source-highlight
--with-separate-debug-dir=/usr/lib/debug (relocatable)
--with-system-gdbinit=/etc/gdb/gdbinit
--with-system-gdbinit-dir=/etc/gdb/gdbinit.d
("Relocatable" means the directory can be moved with the GDB installation
tree, and GDB will still find it.)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwndbg/commands/config.py`
Content:
```
1 """
2 Dumps all pwndbg-specific configuration points.
3 """
4
5 import argparse
6
7 import pwndbg.commands
8 import pwndbg.gdblib.config
9 from pwndbg.color import generateColorFunction
10 from pwndbg.color import ljust_colored
11 from pwndbg.color import strip
12 from pwndbg.color.message import hint
13 from pwndbg.commands import CommandCategory
14
15
16 def print_row(name, value, default, set_show_doc, ljust_optname, ljust_value, empty_space=6):
17 name = ljust_colored(name, ljust_optname + empty_space)
18 defval = extend_value_with_default(value, default)
19 defval = ljust_colored(defval, ljust_value + empty_space)
20 result = " ".join((name, defval, set_show_doc))
21 print(result)
22 return result
23
24
25 def extend_value_with_default(value, default):
26 if strip(value) != strip(default):
27 return "%s (%s)" % (value, default)
28 return value
29
30
31 def get_config_parameters(scope, filter_pattern):
32 values = [
33 v
34 for k, v in pwndbg.gdblib.config.params.items()
35 if isinstance(v, pwndbg.lib.config.Parameter) and v.scope == scope
36 ]
37
38 if filter_pattern:
39 filter_pattern = filter_pattern.lower()
40 values = [
41 v
42 for v in values
43 if filter_pattern in v.name.lower() or filter_pattern in v.set_show_doc.lower()
44 ]
45
46 return values
47
48
49 parser = argparse.ArgumentParser(description="Shows pwndbg-specific configuration.")
50 parser.add_argument(
51 "filter_pattern",
52 type=str,
53 nargs="?",
54 default=None,
55 help="Filter to apply to config parameters names/descriptions",
56 )
57
58
59 def display_config(filter_pattern: str, scope: str, has_file_command: bool = True) -> None:
60 values = get_config_parameters(scope, filter_pattern)
61
62 if not values:
63 print(hint(f'No {scope} parameter found with filter "{filter_pattern}"'))
64 return
65
66 longest_optname = max(map(len, [v.name for v in values]))
67 longest_value = max(
68 # We use `repr` here so the string values will be in quotes
69 map(len, [extend_value_with_default(repr(v.value), repr(v.default)) for v in values])
70 )
71
72 header = print_row("Name", "Value", "Def", "Documentation", longest_optname, longest_value)
73 print("-" * (len(header)))
74
75 for v in sorted(values):
76 if isinstance(v, pwndbg.color.theme.ColorParameter):
77 # Only the theme scope should use ColorParameter
78 assert scope == "theme"
79
80 value = generateColorFunction(v.value)(v.value)
81 default = generateColorFunction(v.default)(v.default)
82 else:
83 value = repr(v.value)
84 default = repr(v.default)
85
86 print_row(v.name, value, default, v.set_show_doc, longest_optname, longest_value)
87
88 print(hint(f"You can set config variable with `set <{scope}-var> <value>`"))
89 if has_file_command:
90 print(
91 hint(
92 f"You can generate configuration file using `{scope}file` "
93 "- then put it in your .gdbinit after initializing pwndbg"
94 )
95 )
96
97
98 @pwndbg.commands.ArgparsedCommand(parser, category=CommandCategory.PWNDBG)
99 def config(filter_pattern) -> None:
100 display_config(filter_pattern, "config")
101
102
103 configfile_parser = argparse.ArgumentParser(
104 description="Generates a configuration file for the current pwndbg options."
105 )
106 configfile_parser.add_argument(
107 "--show-all", action="store_true", help="Display all configuration options."
108 )
109
110 parser = argparse.ArgumentParser(description="Shows pwndbg-specific theme configuration.")
111 parser.add_argument(
112 "filter_pattern",
113 type=str,
114 nargs="?",
115 default=None,
116 help="Filter to apply to theme parameters names/descriptions",
117 )
118
119
120 @pwndbg.commands.ArgparsedCommand(parser, category=CommandCategory.PWNDBG)
121 def theme(filter_pattern) -> None:
122 display_config(filter_pattern, "theme")
123
124
125 @pwndbg.commands.ArgparsedCommand(configfile_parser, category=CommandCategory.PWNDBG)
126 def configfile(show_all=False) -> None:
127 configfile_print_scope("config", show_all)
128
129
130 themefile_parser = argparse.ArgumentParser(
131 description="Generates a configuration file for the current pwndbg theme options."
132 )
133 themefile_parser.add_argument(
134 "--show-all", action="store_true", help="Force displaying of all theme options."
135 )
136
137
138 @pwndbg.commands.ArgparsedCommand(themefile_parser, category=CommandCategory.PWNDBG)
139 def themefile(show_all=False) -> None:
140 configfile_print_scope("theme", show_all)
141
142
143 def configfile_print_scope(scope, show_all=False) -> None:
144 params = pwndbg.gdblib.config.get_params(scope)
145
146 if not show_all:
147 params = list(filter(lambda p: p.is_changed, params))
148
149 if params:
150 if not show_all:
151 print(hint("Showing only changed values:"))
152 for p in params:
153 print("# %s: %s" % (p.name, p.set_show_doc))
154 print("# default: %s" % p.native_default)
155 print("set %s %s" % (p.name, p.native_value))
156 print()
157 else:
158 print(hint("No changed values. To see current values use `%s`." % scope))
159
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pwndbg/commands/config.py b/pwndbg/commands/config.py
--- a/pwndbg/commands/config.py
+++ b/pwndbg/commands/config.py
@@ -150,9 +150,15 @@
if not show_all:
print(hint("Showing only changed values:"))
for p in params:
+ native_default = pwndbg.gdblib.config_mod.Parameter._value_to_gdb_native(
+ p.default, param_class=p.param_class
+ )
+ native_value = pwndbg.gdblib.config_mod.Parameter._value_to_gdb_native(
+ p.value, param_class=p.param_class
+ )
print("# %s: %s" % (p.name, p.set_show_doc))
- print("# default: %s" % p.native_default)
- print("set %s %s" % (p.name, p.native_value))
+ print("# default: %s" % native_default)
+ print("set %s %s" % (p.name, native_value))
print()
else:
print(hint("No changed values. To see current values use `%s`." % scope))
| {"golden_diff": "diff --git a/pwndbg/commands/config.py b/pwndbg/commands/config.py\n--- a/pwndbg/commands/config.py\n+++ b/pwndbg/commands/config.py\n@@ -150,9 +150,15 @@\n if not show_all:\n print(hint(\"Showing only changed values:\"))\n for p in params:\n+ native_default = pwndbg.gdblib.config_mod.Parameter._value_to_gdb_native(\n+ p.default, param_class=p.param_class\n+ )\n+ native_value = pwndbg.gdblib.config_mod.Parameter._value_to_gdb_native(\n+ p.value, param_class=p.param_class\n+ )\n print(\"# %s: %s\" % (p.name, p.set_show_doc))\n- print(\"# default: %s\" % p.native_default)\n- print(\"set %s %s\" % (p.name, p.native_value))\n+ print(\"# default: %s\" % native_default)\n+ print(\"set %s %s\" % (p.name, native_value))\n print()\n else:\n print(hint(\"No changed values. To see current values use `%s`.\" % scope))\n", "issue": "Generating configfile with modifed context-sections fails\n### Description\r\n\r\nTrying to generate a configfile when changing context-sections produces error.\r\n\r\n### Steps to reproduce\r\n\r\nHappens with no binary loaded\r\n\r\nGdb session history:\r\n```\r\nset context-sections ''\r\nconfigfile\r\n```\r\n\r\n\r\nWith exception-verbose on:\r\n```\r\npwndbg> configfile\r\nShowing only changed values:\r\n# context-sections: which context sections are displayed (controls order)\r\nTraceback (most recent call last):\r\n File \"/home/rene/tempenis/pwndbg/pwndbg/commands/__init__.py\", line 184, in __call__\r\n return self.function(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/rene/tempenis/pwndbg/pwndbg/commands/config.py\", line 127, in configfile\r\n configfile_print_scope(\"config\", show_all)\r\n File \"/home/rene/tempenis/pwndbg/pwndbg/commands/config.py\", line 154, in configfile_print_scope\r\n print(\"# default: %s\" % p.native_default)\r\n ^^^^^^^^^^^^^^^^\r\n File \"/home/rene/tempenis/pwndbg/pwndbg/lib/config.py\", line 64, in __getattr__\r\n return getattr(self.value, name)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nAttributeError: 'str' object has no attribute 'native_default'\r\n\r\nIf that is an issue, you can report it on https://github.com/pwndbg/pwndbg/issues\r\n(Please don't forget to search if it hasn't been reported before)\r\nTo generate the report and open a browser, you may run `bugreport --run-browser`\r\nPS: Pull requests are welcome\r\npwndbg>\r\n```\r\n### My setup\r\n\r\n```\r\nPlatform: Linux-6.1.0-3-amd64-x86_64-with-glibc2.36\r\nGdb: 12.1\r\nPython: 3.11.1 (main, Dec 31 2022, 10:23:59) [GCC 12.2.0]\r\nPwndbg: 1.1.1 build: 47e4e09\r\nCapstone: 4.0.1024\r\nUnicorn: 2.0.1\r\nThis GDB was configured as follows:\r\n configure --host=x86_64-linux-gnu --target=x86_64-linux-gnu\r\n\t --with-auto-load-dir=$debugdir:$datadir/auto-load\r\n\t --with-auto-load-safe-path=$debugdir:$datadir/auto-load\r\n\t --with-expat\r\n\t --with-gdb-datadir=/usr/share/gdb (relocatable)\r\n\t --with-jit-reader-dir=/usr/lib/gdb (relocatable)\r\n\t --without-libunwind-ia64\r\n\t --with-lzma\r\n\t --with-babeltrace\r\n\t --with-intel-pt\r\n\t --with-mpfr\r\n\t --with-xxhash\r\n\t --with-python=/usr (relocatable)\r\n\t --with-python-libdir=/usr/lib (relocatable)\r\n\t --with-debuginfod\r\n\t --without-guile\r\n\t --enable-source-highlight\r\n\t --with-separate-debug-dir=/usr/lib/debug (relocatable)\r\n\t --with-system-gdbinit=/etc/gdb/gdbinit\r\n\t --with-system-gdbinit-dir=/etc/gdb/gdbinit.d\r\n\r\n(\"Relocatable\" means the directory can be moved with the GDB installation\r\ntree, and GDB will still find it.)\r\n\r\n```\n", "before_files": 
[{"content": "\"\"\"\nDumps all pwndbg-specific configuration points.\n\"\"\"\n\nimport argparse\n\nimport pwndbg.commands\nimport pwndbg.gdblib.config\nfrom pwndbg.color import generateColorFunction\nfrom pwndbg.color import ljust_colored\nfrom pwndbg.color import strip\nfrom pwndbg.color.message import hint\nfrom pwndbg.commands import CommandCategory\n\n\ndef print_row(name, value, default, set_show_doc, ljust_optname, ljust_value, empty_space=6):\n name = ljust_colored(name, ljust_optname + empty_space)\n defval = extend_value_with_default(value, default)\n defval = ljust_colored(defval, ljust_value + empty_space)\n result = \" \".join((name, defval, set_show_doc))\n print(result)\n return result\n\n\ndef extend_value_with_default(value, default):\n if strip(value) != strip(default):\n return \"%s (%s)\" % (value, default)\n return value\n\n\ndef get_config_parameters(scope, filter_pattern):\n values = [\n v\n for k, v in pwndbg.gdblib.config.params.items()\n if isinstance(v, pwndbg.lib.config.Parameter) and v.scope == scope\n ]\n\n if filter_pattern:\n filter_pattern = filter_pattern.lower()\n values = [\n v\n for v in values\n if filter_pattern in v.name.lower() or filter_pattern in v.set_show_doc.lower()\n ]\n\n return values\n\n\nparser = argparse.ArgumentParser(description=\"Shows pwndbg-specific configuration.\")\nparser.add_argument(\n \"filter_pattern\",\n type=str,\n nargs=\"?\",\n default=None,\n help=\"Filter to apply to config parameters names/descriptions\",\n)\n\n\ndef display_config(filter_pattern: str, scope: str, has_file_command: bool = True) -> None:\n values = get_config_parameters(scope, filter_pattern)\n\n if not values:\n print(hint(f'No {scope} parameter found with filter \"{filter_pattern}\"'))\n return\n\n longest_optname = max(map(len, [v.name for v in values]))\n longest_value = max(\n # We use `repr` here so the string values will be in quotes\n map(len, [extend_value_with_default(repr(v.value), repr(v.default)) for v in values])\n )\n\n header = print_row(\"Name\", \"Value\", \"Def\", \"Documentation\", longest_optname, longest_value)\n print(\"-\" * (len(header)))\n\n for v in sorted(values):\n if isinstance(v, pwndbg.color.theme.ColorParameter):\n # Only the theme scope should use ColorParameter\n assert scope == \"theme\"\n\n value = generateColorFunction(v.value)(v.value)\n default = generateColorFunction(v.default)(v.default)\n else:\n value = repr(v.value)\n default = repr(v.default)\n\n print_row(v.name, value, default, v.set_show_doc, longest_optname, longest_value)\n\n print(hint(f\"You can set config variable with `set <{scope}-var> <value>`\"))\n if has_file_command:\n print(\n hint(\n f\"You can generate configuration file using `{scope}file` \"\n \"- then put it in your .gdbinit after initializing pwndbg\"\n )\n )\n\n\[email protected](parser, category=CommandCategory.PWNDBG)\ndef config(filter_pattern) -> None:\n display_config(filter_pattern, \"config\")\n\n\nconfigfile_parser = argparse.ArgumentParser(\n description=\"Generates a configuration file for the current pwndbg options.\"\n)\nconfigfile_parser.add_argument(\n \"--show-all\", action=\"store_true\", help=\"Display all configuration options.\"\n)\n\nparser = argparse.ArgumentParser(description=\"Shows pwndbg-specific theme configuration.\")\nparser.add_argument(\n \"filter_pattern\",\n type=str,\n nargs=\"?\",\n default=None,\n help=\"Filter to apply to theme parameters names/descriptions\",\n)\n\n\[email protected](parser, category=CommandCategory.PWNDBG)\ndef theme(filter_pattern) 
-> None:\n display_config(filter_pattern, \"theme\")\n\n\[email protected](configfile_parser, category=CommandCategory.PWNDBG)\ndef configfile(show_all=False) -> None:\n configfile_print_scope(\"config\", show_all)\n\n\nthemefile_parser = argparse.ArgumentParser(\n description=\"Generates a configuration file for the current pwndbg theme options.\"\n)\nthemefile_parser.add_argument(\n \"--show-all\", action=\"store_true\", help=\"Force displaying of all theme options.\"\n)\n\n\[email protected](themefile_parser, category=CommandCategory.PWNDBG)\ndef themefile(show_all=False) -> None:\n configfile_print_scope(\"theme\", show_all)\n\n\ndef configfile_print_scope(scope, show_all=False) -> None:\n params = pwndbg.gdblib.config.get_params(scope)\n\n if not show_all:\n params = list(filter(lambda p: p.is_changed, params))\n\n if params:\n if not show_all:\n print(hint(\"Showing only changed values:\"))\n for p in params:\n print(\"# %s: %s\" % (p.name, p.set_show_doc))\n print(\"# default: %s\" % p.native_default)\n print(\"set %s %s\" % (p.name, p.native_value))\n print()\n else:\n print(hint(\"No changed values. To see current values use `%s`.\" % scope))\n", "path": "pwndbg/commands/config.py"}], "after_files": [{"content": "\"\"\"\nDumps all pwndbg-specific configuration points.\n\"\"\"\n\nimport argparse\n\nimport pwndbg.commands\nimport pwndbg.gdblib.config\nfrom pwndbg.color import generateColorFunction\nfrom pwndbg.color import ljust_colored\nfrom pwndbg.color import strip\nfrom pwndbg.color.message import hint\nfrom pwndbg.commands import CommandCategory\n\n\ndef print_row(name, value, default, set_show_doc, ljust_optname, ljust_value, empty_space=6):\n name = ljust_colored(name, ljust_optname + empty_space)\n defval = extend_value_with_default(value, default)\n defval = ljust_colored(defval, ljust_value + empty_space)\n result = \" \".join((name, defval, set_show_doc))\n print(result)\n return result\n\n\ndef extend_value_with_default(value, default):\n if strip(value) != strip(default):\n return \"%s (%s)\" % (value, default)\n return value\n\n\ndef get_config_parameters(scope, filter_pattern):\n values = [\n v\n for k, v in pwndbg.gdblib.config.params.items()\n if isinstance(v, pwndbg.lib.config.Parameter) and v.scope == scope\n ]\n\n if filter_pattern:\n filter_pattern = filter_pattern.lower()\n values = [\n v\n for v in values\n if filter_pattern in v.name.lower() or filter_pattern in v.set_show_doc.lower()\n ]\n\n return values\n\n\nparser = argparse.ArgumentParser(description=\"Shows pwndbg-specific configuration.\")\nparser.add_argument(\n \"filter_pattern\",\n type=str,\n nargs=\"?\",\n default=None,\n help=\"Filter to apply to config parameters names/descriptions\",\n)\n\n\ndef display_config(filter_pattern: str, scope: str, has_file_command: bool = True) -> None:\n values = get_config_parameters(scope, filter_pattern)\n\n if not values:\n print(hint(f'No {scope} parameter found with filter \"{filter_pattern}\"'))\n return\n\n longest_optname = max(map(len, [v.name for v in values]))\n longest_value = max(\n # We use `repr` here so the string values will be in quotes\n map(len, [extend_value_with_default(repr(v.value), repr(v.default)) for v in values])\n )\n\n header = print_row(\"Name\", \"Value\", \"Def\", \"Documentation\", longest_optname, longest_value)\n print(\"-\" * (len(header)))\n\n for v in sorted(values):\n if isinstance(v, pwndbg.color.theme.ColorParameter):\n # Only the theme scope should use ColorParameter\n assert scope == \"theme\"\n\n value = 
generateColorFunction(v.value)(v.value)\n default = generateColorFunction(v.default)(v.default)\n else:\n value = repr(v.value)\n default = repr(v.default)\n\n print_row(v.name, value, default, v.set_show_doc, longest_optname, longest_value)\n\n print(hint(f\"You can set config variable with `set <{scope}-var> <value>`\"))\n if has_file_command:\n print(\n hint(\n f\"You can generate configuration file using `{scope}file` \"\n \"- then put it in your .gdbinit after initializing pwndbg\"\n )\n )\n\n\[email protected](parser, category=CommandCategory.PWNDBG)\ndef config(filter_pattern) -> None:\n display_config(filter_pattern, \"config\")\n\n\nconfigfile_parser = argparse.ArgumentParser(\n description=\"Generates a configuration file for the current pwndbg options.\"\n)\nconfigfile_parser.add_argument(\n \"--show-all\", action=\"store_true\", help=\"Display all configuration options.\"\n)\n\nparser = argparse.ArgumentParser(description=\"Shows pwndbg-specific theme configuration.\")\nparser.add_argument(\n \"filter_pattern\",\n type=str,\n nargs=\"?\",\n default=None,\n help=\"Filter to apply to theme parameters names/descriptions\",\n)\n\n\[email protected](parser, category=CommandCategory.PWNDBG)\ndef theme(filter_pattern) -> None:\n display_config(filter_pattern, \"theme\")\n\n\[email protected](configfile_parser, category=CommandCategory.PWNDBG)\ndef configfile(show_all=False) -> None:\n configfile_print_scope(\"config\", show_all)\n\n\nthemefile_parser = argparse.ArgumentParser(\n description=\"Generates a configuration file for the current pwndbg theme options.\"\n)\nthemefile_parser.add_argument(\n \"--show-all\", action=\"store_true\", help=\"Force displaying of all theme options.\"\n)\n\n\[email protected](themefile_parser, category=CommandCategory.PWNDBG)\ndef themefile(show_all=False) -> None:\n configfile_print_scope(\"theme\", show_all)\n\n\ndef configfile_print_scope(scope, show_all=False) -> None:\n params = pwndbg.gdblib.config.get_params(scope)\n\n if not show_all:\n params = list(filter(lambda p: p.is_changed, params))\n\n if params:\n if not show_all:\n print(hint(\"Showing only changed values:\"))\n for p in params:\n native_default = pwndbg.gdblib.config_mod.Parameter._value_to_gdb_native(\n p.default, param_class=p.param_class\n )\n native_value = pwndbg.gdblib.config_mod.Parameter._value_to_gdb_native(\n p.value, param_class=p.param_class\n )\n print(\"# %s: %s\" % (p.name, p.set_show_doc))\n print(\"# default: %s\" % native_default)\n print(\"set %s %s\" % (p.name, native_value))\n print()\n else:\n print(hint(\"No changed values. To see current values use `%s`.\" % scope))\n", "path": "pwndbg/commands/config.py"}]} | 2,604 | 258 |
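A short illustration of the failure in the pwndbg record above: the traceback shows that `pwndbg/lib/config.py` forwards unknown attribute lookups to the wrapped value via `__getattr__`, so once `context-sections` is set to a plain string there is no `native_default` to find on it. The sketch below reproduces that delegation pitfall with hypothetical class and attribute names; it is not pwndbg's actual code.

```python
class Param:
    """Minimal stand-in for a config parameter that wraps a raw value."""

    def __init__(self, value, default):
        self.value = value
        self.default = default

    def __getattr__(self, name):
        # Anything not defined on Param itself is looked up on the wrapped
        # value, mirroring the delegation shown in the traceback.
        return getattr(self.value, name)


p = Param("", "regs disasm code stack backtrace")
print(p.upper())         # fine: str defines upper()
print(p.native_default)  # AttributeError: 'str' object has no attribute 'native_default'
```

The accepted patch avoids the problem by converting `p.default` and `p.value` with an explicit `_value_to_gdb_native(...)` call instead of expecting the parameter to expose `native_default`/`native_value` attributes itself.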
gh_patches_debug_3865 | rasdani/github-patches | git_diff | cloudtools__troposphere-1036 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CloudFormation bugs in EMR_Cluster example
Great project!
I found 2 bugs with the EMR CloudFormation template. 
More than happy to submit a PR with the corrections if considered.
1. The reference to `EMR_AutoScaling_DefaultRole` fails template validation checks with the current configuration.
```
https://github.com/charlesgreen/troposphere/blob/master/examples/EMR_Cluster.py#L205
Bug:
AutoScalingRole=Ref(emr_autoscaling_role),
Correction:
AutoScalingRole=emr_autoscaling_role,
```
2. MasterInstanceGroup should not have an associated autoscaling group.
I receive an error and found the following in the AWS documentation.
"Each instance group in a cluster, except the master instance group, can have its own scaling policy, which consists of scale-out and scale-in rules".
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-automatic-scaling.html
```
https://github.com/charlesgreen/troposphere/blob/master/examples/EMR_Cluster.py#L214
# lines 214 to 220 should be deleted
AutoScalingPolicy=emr.AutoScalingPolicy(
Constraints=emr.ScalingConstraints(
MinCapacity="1",
MaxCapacity="3"
),
Rules=generate_rules("MasterAutoScalingPolicy")
)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/EMR_Cluster.py`
Content:
```
1 from troposphere import Parameter, Ref, Template, Tags, If, Equals, Not, Join
2 from troposphere.constants import KEY_PAIR_NAME, SUBNET_ID, M4_LARGE, NUMBER
3 import troposphere.emr as emr
4 import troposphere.iam as iam
5
6
7 scaling_policy = emr.SimpleScalingPolicyConfiguration(
8 AdjustmentType="EXACT_CAPACITY",
9 ScalingAdjustment="1",
10 CoolDown="300"
11 )
12
13
14 kms_key = 'arn:aws:kms:us-east-1:123456789012:key/1234-1234-1234-1234-1234'
15
16 security_configuration = {
17 'EncryptionConfiguration': {
18 'EnableInTransitEncryption': 'true',
19 'InTransitEncryptionConfiguration': {
20 'TLSCertificateConfiguration': {
21 'CertificateProviderType': 'PEM',
22 'S3Object': 's3://MyConfigStore/artifacts/MyCerts.zip'
23 }
24 },
25 'EnableAtRestEncryption': 'true',
26 'AtRestEncryptionConfiguration': {
27 'S3EncryptionConfiguration': {
28 'EncryptionMode': 'SSE-KMS',
29 'AwsKmsKey': kms_key
30 },
31 'LocalDiskEncryptionConfiguration': {
32 'EncryptionKeyProviderType': 'AwsKms',
33 'AwsKmsKey': kms_key
34 }
35 }
36 }
37 }
38
39
40 def generate_rules(rules_name):
41 global emr, scaling_policy
42
43 rules = [
44 emr.ScalingRule(
45 Name=rules_name,
46 Description="%s rules" % rules_name,
47 Action=emr.ScalingAction(
48 Market="ON_DEMAND",
49 SimpleScalingPolicyConfiguration=scaling_policy
50 ),
51 Trigger=emr.ScalingTrigger(
52 CloudWatchAlarmDefinition=emr.CloudWatchAlarmDefinition(
53 ComparisonOperator="GREATER_THAN",
54 EvaluationPeriods="120",
55 MetricName="TestMetric",
56 Namespace="AWS/ElasticMapReduce",
57 Period="300",
58 Statistic="AVERAGE",
59 Threshold="50",
60 Unit="PERCENT",
61 Dimensions=[
62 emr.MetricDimension(
63 'my.custom.master.property',
64 'my.custom.master.value'
65 )
66 ]
67 )
68 )
69 )
70 ]
71 return rules
72
73
74 template = Template()
75 template.add_description(
76 "Sample CloudFormation template for creating an EMR cluster"
77 )
78
79 keyname = template.add_parameter(Parameter(
80 "KeyName",
81 Description="Name of an existing EC2 KeyPair to enable SSH "
82 "to the instances",
83 Type=KEY_PAIR_NAME
84 ))
85
86 subnet = template.add_parameter(Parameter(
87 "Subnet",
88 Description="Subnet ID for creating the EMR cluster",
89 Type=SUBNET_ID
90 ))
91
92 spot = template.add_parameter(Parameter(
93 "SpotPrice",
94 Description="Spot price (or use 0 for 'on demand' instance)",
95 Type=NUMBER,
96 Default="0.1"
97 ))
98
99 withSpotPrice = "WithSpotPrice"
100 template.add_condition(withSpotPrice, Not(Equals(Ref(spot), "0")))
101
102 gcTimeRatio = template.add_parameter(Parameter(
103 "GcTimeRatioValue",
104 Description="Hadoop name node garbage collector time ratio",
105 Type=NUMBER,
106 Default="19"
107 ))
108
109 # IAM roles required by EMR
110
111 emr_service_role = template.add_resource(iam.Role(
112 'EMRServiceRole',
113 AssumeRolePolicyDocument={
114 "Statement": [{
115 "Effect": "Allow",
116 "Principal": {
117 "Service": [
118 "elasticmapreduce.amazonaws.com"
119 ]
120 },
121 "Action": ["sts:AssumeRole"]
122 }]
123 },
124 ManagedPolicyArns=[
125 'arn:aws:iam::aws:policy/service-role/AmazonElasticMapReduceRole'
126 ]
127 ))
128
129 emr_autoscaling_role = "EMR_AutoScaling_DefaultRole"
130
131 emr_job_flow_role = template.add_resource(iam.Role(
132 "EMRJobFlowRole",
133 AssumeRolePolicyDocument={
134 "Statement": [{
135 "Effect": "Allow",
136 "Principal": {
137 "Service": [
138 "ec2.amazonaws.com"
139 ]
140 },
141 "Action": ["sts:AssumeRole"]
142 }]
143 },
144 ManagedPolicyArns=[
145 'arn:aws:iam::aws:policy/service-role/AmazonElasticMapReduceforEC2Role'
146 ]
147 ))
148
149 emr_instance_profile = template.add_resource(iam.InstanceProfile(
150 "EMRInstanceProfile",
151 Roles=[Ref(emr_job_flow_role)]
152 ))
153
154 # EMR Cluster Resource
155
156 security_config = template.add_resource(emr.SecurityConfiguration(
157 'EMRSecurityConfiguration',
158 Name="EMRSampleClusterSecurityConfiguration",
159 SecurityConfiguration=security_configuration,
160 ))
161
162 cluster = template.add_resource(emr.Cluster(
163 "EMRSampleCluster",
164 Name="EMR Sample Cluster",
165 ReleaseLabel='emr-4.4.0',
166 SecurityConfiguration=Ref(security_config),
167 BootstrapActions=[emr.BootstrapActionConfig(
168 Name='Dummy bootstrap action',
169 ScriptBootstrapAction=emr.ScriptBootstrapActionConfig(
170 Path='file:/usr/share/aws/emr/scripts/install-hue',
171 Args=["dummy", "parameter"]
172 )
173 )],
174 Configurations=[
175 emr.Configuration(
176 Classification="core-site",
177 ConfigurationProperties={
178 'hadoop.security.groups.cache.secs': '250'
179 }
180 ),
181 emr.Configuration(
182 Classification="mapred-site",
183 ConfigurationProperties={
184 'mapred.tasktracker.map.tasks.maximum': '2',
185 'mapreduce.map.sort.spill.percent': '90',
186 'mapreduce.tasktracker.reduce.tasks.maximum': '5'
187 }
188 ),
189 emr.Configuration(
190 Classification="hadoop-env",
191 Configurations=[
192 emr.Configuration(
193 Classification="export",
194 ConfigurationProperties={
195 "HADOOP_DATANODE_HEAPSIZE": "2048",
196 "HADOOP_NAMENODE_OPTS": Join("", ["-XX:GCTimeRatio=",
197 Ref(gcTimeRatio)])
198 }
199 )
200 ]
201 )
202 ],
203 JobFlowRole=Ref(emr_instance_profile),
204 ServiceRole=Ref(emr_service_role),
205 AutoScalingRole=Ref(emr_autoscaling_role),
206 Instances=emr.JobFlowInstancesConfig(
207 Ec2KeyName=Ref(keyname),
208 Ec2SubnetId=Ref(subnet),
209 MasterInstanceGroup=emr.InstanceGroupConfigProperty(
210 Name="Master Instance",
211 InstanceCount="1",
212 InstanceType=M4_LARGE,
213 Market="ON_DEMAND",
214 AutoScalingPolicy=emr.AutoScalingPolicy(
215 Constraints=emr.ScalingConstraints(
216 MinCapacity="1",
217 MaxCapacity="3"
218 ),
219 Rules=generate_rules("MasterAutoScalingPolicy")
220 )
221 ),
222 CoreInstanceGroup=emr.InstanceGroupConfigProperty(
223 Name="Core Instance",
224 BidPrice=If(withSpotPrice, Ref(spot), Ref("AWS::NoValue")),
225 Market=If(withSpotPrice, "SPOT", "ON_DEMAND"),
226 AutoScalingPolicy=emr.AutoScalingPolicy(
227 Constraints=emr.ScalingConstraints(
228 MinCapacity="1",
229 MaxCapacity="3"
230 ),
231 Rules=generate_rules("CoreAutoScalingPolicy"),
232 ),
233 EbsConfiguration=emr.EbsConfiguration(
234 EbsBlockDeviceConfigs=[
235 emr.EbsBlockDeviceConfigs(
236 VolumeSpecification=emr.VolumeSpecification(
237 SizeInGB="10",
238 VolumeType="gp2"
239 ),
240 VolumesPerInstance="1"
241 )
242 ],
243 EbsOptimized="true"
244 ),
245 InstanceCount="1",
246 InstanceType=M4_LARGE,
247 )
248 ),
249 Applications=[
250 emr.Application(Name="Hadoop"),
251 emr.Application(Name="Hive"),
252 emr.Application(Name="Mahout"),
253 emr.Application(Name="Pig"),
254 emr.Application(Name="Spark")
255 ],
256 VisibleToAllUsers="true",
257 Tags=Tags(
258 Name="EMR Sample Cluster"
259 )
260 ))
261
262 step = template.add_resource(emr.Step(
263 'TestStep',
264 Name="TestStep",
265 ActionOnFailure='CONTINUE',
266 HadoopJarStep=emr.HadoopJarStepConfig(
267 Args=["5", "10"],
268 Jar="s3://emr-cfn-test/hadoop-mapreduce-examples-2.6.0.jar",
269 MainClass="pi",
270 StepProperties=[
271 emr.KeyValue('my.custom.property', 'my.custom.value')
272 ]
273 ),
274 JobFlowId=Ref(cluster)
275 ))
276
277 print(template.to_json())
278
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/EMR_Cluster.py b/examples/EMR_Cluster.py
--- a/examples/EMR_Cluster.py
+++ b/examples/EMR_Cluster.py
@@ -202,7 +202,7 @@
],
JobFlowRole=Ref(emr_instance_profile),
ServiceRole=Ref(emr_service_role),
- AutoScalingRole=Ref(emr_autoscaling_role),
+ AutoScalingRole=emr_autoscaling_role,
Instances=emr.JobFlowInstancesConfig(
Ec2KeyName=Ref(keyname),
Ec2SubnetId=Ref(subnet),
| {"golden_diff": "diff --git a/examples/EMR_Cluster.py b/examples/EMR_Cluster.py\n--- a/examples/EMR_Cluster.py\n+++ b/examples/EMR_Cluster.py\n@@ -202,7 +202,7 @@\n ],\n JobFlowRole=Ref(emr_instance_profile),\n ServiceRole=Ref(emr_service_role),\n- AutoScalingRole=Ref(emr_autoscaling_role),\n+ AutoScalingRole=emr_autoscaling_role,\n Instances=emr.JobFlowInstancesConfig(\n Ec2KeyName=Ref(keyname),\n Ec2SubnetId=Ref(subnet),\n", "issue": "Cloudformation bugs in EMR_Cluster example\nGreat project!\r\n\r\nI found 2 bugs with the EMR cloudformation template. \r\n\r\nMore than happy to submit a PR with the corrections if considered. \r\n\r\n1. The reference to `EMR_AutoScaling_DefaultRole` fails template validation checks with the current configuration.\r\n\r\n```\r\nhttps://github.com/charlesgreen/troposphere/blob/master/examples/EMR_Cluster.py#L205\r\n\r\nBug:\r\nAutoScalingRole=Ref(emr_autoscaling_role),\r\n\r\nCorrection:\r\nAutoScalingRole=emr_autoscaling_role,\r\n```\r\n\r\n2. MasterInstanceGroup should not have an associated autoscaling group.\r\n\r\nI receive an error and found the following in the AWS documentation. \r\n\"Each instance group in a cluster, except the master instance group, can have its own scaling policy, which consists of scale-out and scale-in rules\".\r\n\r\nhttps://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-automatic-scaling.html\r\n\r\n```\r\nhttps://github.com/charlesgreen/troposphere/blob/master/examples/EMR_Cluster.py#L214\r\n\r\n# lines 214 to 220 should be deleted\r\n\r\nAutoScalingPolicy=emr.AutoScalingPolicy(\r\n Constraints=emr.ScalingConstraints(\r\n MinCapacity=\"1\",\r\n MaxCapacity=\"3\"\r\n ),\r\n Rules=generate_rules(\"MasterAutoScalingPolicy\")\r\n)\r\n```\r\n\n", "before_files": [{"content": "from troposphere import Parameter, Ref, Template, Tags, If, Equals, Not, Join\nfrom troposphere.constants import KEY_PAIR_NAME, SUBNET_ID, M4_LARGE, NUMBER\nimport troposphere.emr as emr\nimport troposphere.iam as iam\n\n\nscaling_policy = emr.SimpleScalingPolicyConfiguration(\n AdjustmentType=\"EXACT_CAPACITY\",\n ScalingAdjustment=\"1\",\n CoolDown=\"300\"\n )\n\n\nkms_key = 'arn:aws:kms:us-east-1:123456789012:key/1234-1234-1234-1234-1234'\n\nsecurity_configuration = {\n 'EncryptionConfiguration': {\n 'EnableInTransitEncryption': 'true',\n 'InTransitEncryptionConfiguration': {\n 'TLSCertificateConfiguration': {\n 'CertificateProviderType': 'PEM',\n 'S3Object': 's3://MyConfigStore/artifacts/MyCerts.zip'\n }\n },\n 'EnableAtRestEncryption': 'true',\n 'AtRestEncryptionConfiguration': {\n 'S3EncryptionConfiguration': {\n 'EncryptionMode': 'SSE-KMS',\n 'AwsKmsKey': kms_key\n },\n 'LocalDiskEncryptionConfiguration': {\n 'EncryptionKeyProviderType': 'AwsKms',\n 'AwsKmsKey': kms_key\n }\n }\n }\n}\n\n\ndef generate_rules(rules_name):\n global emr, scaling_policy\n\n rules = [\n emr.ScalingRule(\n Name=rules_name,\n Description=\"%s rules\" % rules_name,\n Action=emr.ScalingAction(\n Market=\"ON_DEMAND\",\n SimpleScalingPolicyConfiguration=scaling_policy\n ),\n Trigger=emr.ScalingTrigger(\n CloudWatchAlarmDefinition=emr.CloudWatchAlarmDefinition(\n ComparisonOperator=\"GREATER_THAN\",\n EvaluationPeriods=\"120\",\n MetricName=\"TestMetric\",\n Namespace=\"AWS/ElasticMapReduce\",\n Period=\"300\",\n Statistic=\"AVERAGE\",\n Threshold=\"50\",\n Unit=\"PERCENT\",\n Dimensions=[\n emr.MetricDimension(\n 'my.custom.master.property',\n 'my.custom.master.value'\n )\n ]\n )\n )\n )\n ]\n return rules\n\n\ntemplate = Template()\ntemplate.add_description(\n \"Sample 
CloudFormation template for creating an EMR cluster\"\n)\n\nkeyname = template.add_parameter(Parameter(\n \"KeyName\",\n Description=\"Name of an existing EC2 KeyPair to enable SSH \"\n \"to the instances\",\n Type=KEY_PAIR_NAME\n))\n\nsubnet = template.add_parameter(Parameter(\n \"Subnet\",\n Description=\"Subnet ID for creating the EMR cluster\",\n Type=SUBNET_ID\n))\n\nspot = template.add_parameter(Parameter(\n \"SpotPrice\",\n Description=\"Spot price (or use 0 for 'on demand' instance)\",\n Type=NUMBER,\n Default=\"0.1\"\n))\n\nwithSpotPrice = \"WithSpotPrice\"\ntemplate.add_condition(withSpotPrice, Not(Equals(Ref(spot), \"0\")))\n\ngcTimeRatio = template.add_parameter(Parameter(\n \"GcTimeRatioValue\",\n Description=\"Hadoop name node garbage collector time ratio\",\n Type=NUMBER,\n Default=\"19\"\n))\n\n# IAM roles required by EMR\n\nemr_service_role = template.add_resource(iam.Role(\n 'EMRServiceRole',\n AssumeRolePolicyDocument={\n \"Statement\": [{\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"Service\": [\n \"elasticmapreduce.amazonaws.com\"\n ]\n },\n \"Action\": [\"sts:AssumeRole\"]\n }]\n },\n ManagedPolicyArns=[\n 'arn:aws:iam::aws:policy/service-role/AmazonElasticMapReduceRole'\n ]\n))\n\nemr_autoscaling_role = \"EMR_AutoScaling_DefaultRole\"\n\nemr_job_flow_role = template.add_resource(iam.Role(\n \"EMRJobFlowRole\",\n AssumeRolePolicyDocument={\n \"Statement\": [{\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"Service\": [\n \"ec2.amazonaws.com\"\n ]\n },\n \"Action\": [\"sts:AssumeRole\"]\n }]\n },\n ManagedPolicyArns=[\n 'arn:aws:iam::aws:policy/service-role/AmazonElasticMapReduceforEC2Role'\n ]\n))\n\nemr_instance_profile = template.add_resource(iam.InstanceProfile(\n \"EMRInstanceProfile\",\n Roles=[Ref(emr_job_flow_role)]\n))\n\n# EMR Cluster Resource\n\nsecurity_config = template.add_resource(emr.SecurityConfiguration(\n 'EMRSecurityConfiguration',\n Name=\"EMRSampleClusterSecurityConfiguration\",\n SecurityConfiguration=security_configuration,\n))\n\ncluster = template.add_resource(emr.Cluster(\n \"EMRSampleCluster\",\n Name=\"EMR Sample Cluster\",\n ReleaseLabel='emr-4.4.0',\n SecurityConfiguration=Ref(security_config),\n BootstrapActions=[emr.BootstrapActionConfig(\n Name='Dummy bootstrap action',\n ScriptBootstrapAction=emr.ScriptBootstrapActionConfig(\n Path='file:/usr/share/aws/emr/scripts/install-hue',\n Args=[\"dummy\", \"parameter\"]\n )\n )],\n Configurations=[\n emr.Configuration(\n Classification=\"core-site\",\n ConfigurationProperties={\n 'hadoop.security.groups.cache.secs': '250'\n }\n ),\n emr.Configuration(\n Classification=\"mapred-site\",\n ConfigurationProperties={\n 'mapred.tasktracker.map.tasks.maximum': '2',\n 'mapreduce.map.sort.spill.percent': '90',\n 'mapreduce.tasktracker.reduce.tasks.maximum': '5'\n }\n ),\n emr.Configuration(\n Classification=\"hadoop-env\",\n Configurations=[\n emr.Configuration(\n Classification=\"export\",\n ConfigurationProperties={\n \"HADOOP_DATANODE_HEAPSIZE\": \"2048\",\n \"HADOOP_NAMENODE_OPTS\": Join(\"\", [\"-XX:GCTimeRatio=\",\n Ref(gcTimeRatio)])\n }\n )\n ]\n )\n ],\n JobFlowRole=Ref(emr_instance_profile),\n ServiceRole=Ref(emr_service_role),\n AutoScalingRole=Ref(emr_autoscaling_role),\n Instances=emr.JobFlowInstancesConfig(\n Ec2KeyName=Ref(keyname),\n Ec2SubnetId=Ref(subnet),\n MasterInstanceGroup=emr.InstanceGroupConfigProperty(\n Name=\"Master Instance\",\n InstanceCount=\"1\",\n InstanceType=M4_LARGE,\n Market=\"ON_DEMAND\",\n AutoScalingPolicy=emr.AutoScalingPolicy(\n 
Constraints=emr.ScalingConstraints(\n MinCapacity=\"1\",\n MaxCapacity=\"3\"\n ),\n Rules=generate_rules(\"MasterAutoScalingPolicy\")\n )\n ),\n CoreInstanceGroup=emr.InstanceGroupConfigProperty(\n Name=\"Core Instance\",\n BidPrice=If(withSpotPrice, Ref(spot), Ref(\"AWS::NoValue\")),\n Market=If(withSpotPrice, \"SPOT\", \"ON_DEMAND\"),\n AutoScalingPolicy=emr.AutoScalingPolicy(\n Constraints=emr.ScalingConstraints(\n MinCapacity=\"1\",\n MaxCapacity=\"3\"\n ),\n Rules=generate_rules(\"CoreAutoScalingPolicy\"),\n ),\n EbsConfiguration=emr.EbsConfiguration(\n EbsBlockDeviceConfigs=[\n emr.EbsBlockDeviceConfigs(\n VolumeSpecification=emr.VolumeSpecification(\n SizeInGB=\"10\",\n VolumeType=\"gp2\"\n ),\n VolumesPerInstance=\"1\"\n )\n ],\n EbsOptimized=\"true\"\n ),\n InstanceCount=\"1\",\n InstanceType=M4_LARGE,\n )\n ),\n Applications=[\n emr.Application(Name=\"Hadoop\"),\n emr.Application(Name=\"Hive\"),\n emr.Application(Name=\"Mahout\"),\n emr.Application(Name=\"Pig\"),\n emr.Application(Name=\"Spark\")\n ],\n VisibleToAllUsers=\"true\",\n Tags=Tags(\n Name=\"EMR Sample Cluster\"\n )\n))\n\nstep = template.add_resource(emr.Step(\n 'TestStep',\n Name=\"TestStep\",\n ActionOnFailure='CONTINUE',\n HadoopJarStep=emr.HadoopJarStepConfig(\n Args=[\"5\", \"10\"],\n Jar=\"s3://emr-cfn-test/hadoop-mapreduce-examples-2.6.0.jar\",\n MainClass=\"pi\",\n StepProperties=[\n emr.KeyValue('my.custom.property', 'my.custom.value')\n ]\n ),\n JobFlowId=Ref(cluster)\n))\n\nprint(template.to_json())\n", "path": "examples/EMR_Cluster.py"}], "after_files": [{"content": "from troposphere import Parameter, Ref, Template, Tags, If, Equals, Not, Join\nfrom troposphere.constants import KEY_PAIR_NAME, SUBNET_ID, M4_LARGE, NUMBER\nimport troposphere.emr as emr\nimport troposphere.iam as iam\n\n\nscaling_policy = emr.SimpleScalingPolicyConfiguration(\n AdjustmentType=\"EXACT_CAPACITY\",\n ScalingAdjustment=\"1\",\n CoolDown=\"300\"\n )\n\n\nkms_key = 'arn:aws:kms:us-east-1:123456789012:key/1234-1234-1234-1234-1234'\n\nsecurity_configuration = {\n 'EncryptionConfiguration': {\n 'EnableInTransitEncryption': 'true',\n 'InTransitEncryptionConfiguration': {\n 'TLSCertificateConfiguration': {\n 'CertificateProviderType': 'PEM',\n 'S3Object': 's3://MyConfigStore/artifacts/MyCerts.zip'\n }\n },\n 'EnableAtRestEncryption': 'true',\n 'AtRestEncryptionConfiguration': {\n 'S3EncryptionConfiguration': {\n 'EncryptionMode': 'SSE-KMS',\n 'AwsKmsKey': kms_key\n },\n 'LocalDiskEncryptionConfiguration': {\n 'EncryptionKeyProviderType': 'AwsKms',\n 'AwsKmsKey': kms_key\n }\n }\n }\n}\n\n\ndef generate_rules(rules_name):\n global emr, scaling_policy\n\n rules = [\n emr.ScalingRule(\n Name=rules_name,\n Description=\"%s rules\" % rules_name,\n Action=emr.ScalingAction(\n Market=\"ON_DEMAND\",\n SimpleScalingPolicyConfiguration=scaling_policy\n ),\n Trigger=emr.ScalingTrigger(\n CloudWatchAlarmDefinition=emr.CloudWatchAlarmDefinition(\n ComparisonOperator=\"GREATER_THAN\",\n EvaluationPeriods=\"120\",\n MetricName=\"TestMetric\",\n Namespace=\"AWS/ElasticMapReduce\",\n Period=\"300\",\n Statistic=\"AVERAGE\",\n Threshold=\"50\",\n Unit=\"PERCENT\",\n Dimensions=[\n emr.MetricDimension(\n 'my.custom.master.property',\n 'my.custom.master.value'\n )\n ]\n )\n )\n )\n ]\n return rules\n\n\ntemplate = Template()\ntemplate.add_description(\n \"Sample CloudFormation template for creating an EMR cluster\"\n)\n\nkeyname = template.add_parameter(Parameter(\n \"KeyName\",\n Description=\"Name of an existing EC2 KeyPair to enable SSH \"\n \"to the 
instances\",\n Type=KEY_PAIR_NAME\n))\n\nsubnet = template.add_parameter(Parameter(\n \"Subnet\",\n Description=\"Subnet ID for creating the EMR cluster\",\n Type=SUBNET_ID\n))\n\nspot = template.add_parameter(Parameter(\n \"SpotPrice\",\n Description=\"Spot price (or use 0 for 'on demand' instance)\",\n Type=NUMBER,\n Default=\"0.1\"\n))\n\nwithSpotPrice = \"WithSpotPrice\"\ntemplate.add_condition(withSpotPrice, Not(Equals(Ref(spot), \"0\")))\n\ngcTimeRatio = template.add_parameter(Parameter(\n \"GcTimeRatioValue\",\n Description=\"Hadoop name node garbage collector time ratio\",\n Type=NUMBER,\n Default=\"19\"\n))\n\n# IAM roles required by EMR\n\nemr_service_role = template.add_resource(iam.Role(\n 'EMRServiceRole',\n AssumeRolePolicyDocument={\n \"Statement\": [{\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"Service\": [\n \"elasticmapreduce.amazonaws.com\"\n ]\n },\n \"Action\": [\"sts:AssumeRole\"]\n }]\n },\n ManagedPolicyArns=[\n 'arn:aws:iam::aws:policy/service-role/AmazonElasticMapReduceRole'\n ]\n))\n\nemr_autoscaling_role = \"EMR_AutoScaling_DefaultRole\"\n\nemr_job_flow_role = template.add_resource(iam.Role(\n \"EMRJobFlowRole\",\n AssumeRolePolicyDocument={\n \"Statement\": [{\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"Service\": [\n \"ec2.amazonaws.com\"\n ]\n },\n \"Action\": [\"sts:AssumeRole\"]\n }]\n },\n ManagedPolicyArns=[\n 'arn:aws:iam::aws:policy/service-role/AmazonElasticMapReduceforEC2Role'\n ]\n))\n\nemr_instance_profile = template.add_resource(iam.InstanceProfile(\n \"EMRInstanceProfile\",\n Roles=[Ref(emr_job_flow_role)]\n))\n\n# EMR Cluster Resource\n\nsecurity_config = template.add_resource(emr.SecurityConfiguration(\n 'EMRSecurityConfiguration',\n Name=\"EMRSampleClusterSecurityConfiguration\",\n SecurityConfiguration=security_configuration,\n))\n\ncluster = template.add_resource(emr.Cluster(\n \"EMRSampleCluster\",\n Name=\"EMR Sample Cluster\",\n ReleaseLabel='emr-4.4.0',\n SecurityConfiguration=Ref(security_config),\n BootstrapActions=[emr.BootstrapActionConfig(\n Name='Dummy bootstrap action',\n ScriptBootstrapAction=emr.ScriptBootstrapActionConfig(\n Path='file:/usr/share/aws/emr/scripts/install-hue',\n Args=[\"dummy\", \"parameter\"]\n )\n )],\n Configurations=[\n emr.Configuration(\n Classification=\"core-site\",\n ConfigurationProperties={\n 'hadoop.security.groups.cache.secs': '250'\n }\n ),\n emr.Configuration(\n Classification=\"mapred-site\",\n ConfigurationProperties={\n 'mapred.tasktracker.map.tasks.maximum': '2',\n 'mapreduce.map.sort.spill.percent': '90',\n 'mapreduce.tasktracker.reduce.tasks.maximum': '5'\n }\n ),\n emr.Configuration(\n Classification=\"hadoop-env\",\n Configurations=[\n emr.Configuration(\n Classification=\"export\",\n ConfigurationProperties={\n \"HADOOP_DATANODE_HEAPSIZE\": \"2048\",\n \"HADOOP_NAMENODE_OPTS\": Join(\"\", [\"-XX:GCTimeRatio=\",\n Ref(gcTimeRatio)])\n }\n )\n ]\n )\n ],\n JobFlowRole=Ref(emr_instance_profile),\n ServiceRole=Ref(emr_service_role),\n AutoScalingRole=emr_autoscaling_role,\n Instances=emr.JobFlowInstancesConfig(\n Ec2KeyName=Ref(keyname),\n Ec2SubnetId=Ref(subnet),\n MasterInstanceGroup=emr.InstanceGroupConfigProperty(\n Name=\"Master Instance\",\n InstanceCount=\"1\",\n InstanceType=M4_LARGE,\n Market=\"ON_DEMAND\",\n AutoScalingPolicy=emr.AutoScalingPolicy(\n Constraints=emr.ScalingConstraints(\n MinCapacity=\"1\",\n MaxCapacity=\"3\"\n ),\n Rules=generate_rules(\"MasterAutoScalingPolicy\")\n )\n ),\n CoreInstanceGroup=emr.InstanceGroupConfigProperty(\n Name=\"Core Instance\",\n 
BidPrice=If(withSpotPrice, Ref(spot), Ref(\"AWS::NoValue\")),\n Market=If(withSpotPrice, \"SPOT\", \"ON_DEMAND\"),\n AutoScalingPolicy=emr.AutoScalingPolicy(\n Constraints=emr.ScalingConstraints(\n MinCapacity=\"1\",\n MaxCapacity=\"3\"\n ),\n Rules=generate_rules(\"CoreAutoScalingPolicy\"),\n ),\n EbsConfiguration=emr.EbsConfiguration(\n EbsBlockDeviceConfigs=[\n emr.EbsBlockDeviceConfigs(\n VolumeSpecification=emr.VolumeSpecification(\n SizeInGB=\"10\",\n VolumeType=\"gp2\"\n ),\n VolumesPerInstance=\"1\"\n )\n ],\n EbsOptimized=\"true\"\n ),\n InstanceCount=\"1\",\n InstanceType=M4_LARGE,\n )\n ),\n Applications=[\n emr.Application(Name=\"Hadoop\"),\n emr.Application(Name=\"Hive\"),\n emr.Application(Name=\"Mahout\"),\n emr.Application(Name=\"Pig\"),\n emr.Application(Name=\"Spark\")\n ],\n VisibleToAllUsers=\"true\",\n Tags=Tags(\n Name=\"EMR Sample Cluster\"\n )\n))\n\nstep = template.add_resource(emr.Step(\n 'TestStep',\n Name=\"TestStep\",\n ActionOnFailure='CONTINUE',\n HadoopJarStep=emr.HadoopJarStepConfig(\n Args=[\"5\", \"10\"],\n Jar=\"s3://emr-cfn-test/hadoop-mapreduce-examples-2.6.0.jar\",\n MainClass=\"pi\",\n StepProperties=[\n emr.KeyValue('my.custom.property', 'my.custom.value')\n ]\n ),\n JobFlowId=Ref(cluster)\n))\n\nprint(template.to_json())\n", "path": "examples/EMR_Cluster.py"}]} | 3,138 | 134 |
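The first troposphere fix above hinges on what `Ref` means in the generated template: it renders as the CloudFormation intrinsic `{"Ref": "<logical id>"}`, which only resolves against parameters or resources declared in that same template. `EMR_AutoScaling_DefaultRole` is an account-level IAM role name (a bare string in the script, never passed to `add_resource`), so the literal string is what the `AutoScalingRole` property needs. A rough sketch of the contrast, assuming a standard troposphere install; the rendered forms in the comments are what I would expect, not captured output:

```python
from troposphere import Ref

# Renders as {"Ref": "EMR_AutoScaling_DefaultRole"}; there is no resource or
# parameter with that logical ID in the template, so validation fails.
as_ref = Ref("EMR_AutoScaling_DefaultRole")

# Renders as the raw role name, which is what the EMR Cluster property expects
# and what the accepted patch switches to.
as_literal = "EMR_AutoScaling_DefaultRole"
```

The issue's second point (dropping the `AutoScalingPolicy` from `MasterInstanceGroup`) is not covered by the accepted diff above, which only changes the `AutoScalingRole` line.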
gh_patches_debug_15197 | rasdani/github-patches | git_diff | conan-io__conan-3185 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
USERPROFILE in conanbuildinfo.props Visual Studio files
PR https://github.com/conan-io/conan/pull/2936 was reverted due to failing things.
cc/ @pawelkami
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conans/client/generators/visualstudio.py`
Content:
```
1 from conans.model import Generator
2 from conans.paths import BUILD_INFO_VISUAL_STUDIO
3
4
5 class VisualStudioGenerator(Generator):
6
7 template = '''<?xml version="1.0" encoding="utf-8"?>
8 <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
9 <ImportGroup Label="PropertySheets" />
10 <PropertyGroup Label="UserMacros" />
11 <PropertyGroup Label="Conan-RootDirs">{item_properties}
12 </PropertyGroup>
13 <PropertyGroup Label="ConanVariables">
14 <ConanBinaryDirectories>{bin_dirs}</ConanBinaryDirectories>
15 <ConanResourceDirectories>{res_dirs}</ConanResourceDirectories>
16 </PropertyGroup>
17 <PropertyGroup>
18 <LocalDebuggerEnvironment>PATH=%PATH%;{bin_dirs}</LocalDebuggerEnvironment>
19 <DebuggerFlavor>WindowsLocalDebugger</DebuggerFlavor>
20 </PropertyGroup>
21 <ItemDefinitionGroup>
22 <ClCompile>
23 <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
24 <PreprocessorDefinitions>{definitions}%(PreprocessorDefinitions)</PreprocessorDefinitions>
25 <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>
26 </ClCompile>
27 <Link>
28 <AdditionalLibraryDirectories>{lib_dirs}%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
29 <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>
30 <AdditionalOptions>{linker_flags} %(AdditionalOptions)</AdditionalOptions>
31 </Link>
32 <Midl>
33 <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
34 </Midl>
35 <ResourceCompile>
36 <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
37 <PreprocessorDefinitions>{definitions}%(PreprocessorDefinitions)</PreprocessorDefinitions>
38 <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>
39 </ResourceCompile>
40 </ItemDefinitionGroup>
41 <ItemGroup />
42 </Project>'''
43
44 item_template = '''
45 <Conan-{name}-Root>{root_dir}</Conan-{name}-Root>'''
46
47 def _format_items(self):
48 sections = []
49 for dep_name, cpp_info in self.deps_build_info.dependencies:
50 fields = {
51 'root_dir': cpp_info.rootpath.replace("\\", "/"),
52 'name': dep_name.replace(".", "-")
53 }
54 section = self.item_template.format(**fields)
55 sections.append(section)
56 return "".join(sections)
57
58 @property
59 def filename(self):
60 return BUILD_INFO_VISUAL_STUDIO
61
62 @property
63 def content(self):
64 per_item_props = self._format_items()
65 fields = {
66 'item_properties': per_item_props,
67 'bin_dirs': "".join("%s;" % p for p in self._deps_build_info.bin_paths).replace("\\", "/"),
68 'res_dirs': "".join("%s;" % p for p in self._deps_build_info.res_paths).replace("\\", "/"),
69 'include_dirs': "".join("%s;" % p for p in self._deps_build_info.include_paths).replace("\\", "/"),
70 'lib_dirs': "".join("%s;" % p for p in self._deps_build_info.lib_paths).replace("\\", "/"),
71 'libs': "".join(['%s.lib;' % lib if not lib.endswith(".lib")
72 else '%s;' % lib for lib in self._deps_build_info.libs]),
73 'definitions': "".join("%s;" % d for d in self._deps_build_info.defines),
74 'compiler_flags': " ".join(self._deps_build_info.cppflags + self._deps_build_info.cflags),
75 'linker_flags': " ".join(self._deps_build_info.sharedlinkflags),
76 'exe_flags': " ".join(self._deps_build_info.exelinkflags)
77 }
78 formatted_template = self.template.format(**fields)
79 return formatted_template
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conans/client/generators/visualstudio.py b/conans/client/generators/visualstudio.py
--- a/conans/client/generators/visualstudio.py
+++ b/conans/client/generators/visualstudio.py
@@ -1,5 +1,8 @@
+import os
+
from conans.model import Generator
from conans.paths import BUILD_INFO_VISUAL_STUDIO
+import re
class VisualStudioGenerator(Generator):
@@ -76,4 +79,8 @@
'exe_flags': " ".join(self._deps_build_info.exelinkflags)
}
formatted_template = self.template.format(**fields)
+ userprofile = os.getenv("USERPROFILE")
+ if userprofile:
+ userprofile = userprofile.replace("\\", "/")
+ formatted_template = re.sub(userprofile, "$(USERPROFILE)", formatted_template, flags=re.I)
return formatted_template
| {"golden_diff": "diff --git a/conans/client/generators/visualstudio.py b/conans/client/generators/visualstudio.py\n--- a/conans/client/generators/visualstudio.py\n+++ b/conans/client/generators/visualstudio.py\n@@ -1,5 +1,8 @@\n+import os\n+\n from conans.model import Generator\n from conans.paths import BUILD_INFO_VISUAL_STUDIO\n+import re\n \n \n class VisualStudioGenerator(Generator):\n@@ -76,4 +79,8 @@\n 'exe_flags': \" \".join(self._deps_build_info.exelinkflags)\n }\n formatted_template = self.template.format(**fields)\n+ userprofile = os.getenv(\"USERPROFILE\")\n+ if userprofile:\n+ userprofile = userprofile.replace(\"\\\\\", \"/\")\n+ formatted_template = re.sub(userprofile, \"$(USERPROFILE)\", formatted_template, flags=re.I)\n return formatted_template\n", "issue": "USERPROFILE in conanbuildinfo.props Visual Studio files\nPR https://github.com/conan-io/conan/pull/2936 was reverted due to failing things.\r\n\r\ncc/ @pawelkami \n", "before_files": [{"content": "from conans.model import Generator\nfrom conans.paths import BUILD_INFO_VISUAL_STUDIO\n\n\nclass VisualStudioGenerator(Generator):\n\n template = '''<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n <ImportGroup Label=\"PropertySheets\" />\n <PropertyGroup Label=\"UserMacros\" />\n <PropertyGroup Label=\"Conan-RootDirs\">{item_properties}\n </PropertyGroup>\n <PropertyGroup Label=\"ConanVariables\">\n <ConanBinaryDirectories>{bin_dirs}</ConanBinaryDirectories>\n <ConanResourceDirectories>{res_dirs}</ConanResourceDirectories>\n </PropertyGroup>\n <PropertyGroup>\n <LocalDebuggerEnvironment>PATH=%PATH%;{bin_dirs}</LocalDebuggerEnvironment>\n <DebuggerFlavor>WindowsLocalDebugger</DebuggerFlavor>\n </PropertyGroup>\n <ItemDefinitionGroup>\n <ClCompile>\n <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n <PreprocessorDefinitions>{definitions}%(PreprocessorDefinitions)</PreprocessorDefinitions>\n <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>\n </ClCompile>\n <Link>\n <AdditionalLibraryDirectories>{lib_dirs}%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>\n <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>\n <AdditionalOptions>{linker_flags} %(AdditionalOptions)</AdditionalOptions>\n </Link>\n <Midl>\n <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n </Midl>\n <ResourceCompile>\n <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n <PreprocessorDefinitions>{definitions}%(PreprocessorDefinitions)</PreprocessorDefinitions>\n <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>\n </ResourceCompile>\n </ItemDefinitionGroup>\n <ItemGroup />\n</Project>'''\n\n item_template = '''\n <Conan-{name}-Root>{root_dir}</Conan-{name}-Root>'''\n\n def _format_items(self):\n sections = []\n for dep_name, cpp_info in self.deps_build_info.dependencies:\n fields = {\n 'root_dir': cpp_info.rootpath.replace(\"\\\\\", \"/\"),\n 'name': dep_name.replace(\".\", \"-\")\n }\n section = self.item_template.format(**fields)\n sections.append(section)\n return \"\".join(sections)\n\n @property\n def filename(self):\n return BUILD_INFO_VISUAL_STUDIO\n\n @property\n def content(self):\n per_item_props = self._format_items()\n fields = {\n 'item_properties': per_item_props,\n 'bin_dirs': \"\".join(\"%s;\" % 
p for p in self._deps_build_info.bin_paths).replace(\"\\\\\", \"/\"),\n 'res_dirs': \"\".join(\"%s;\" % p for p in self._deps_build_info.res_paths).replace(\"\\\\\", \"/\"),\n 'include_dirs': \"\".join(\"%s;\" % p for p in self._deps_build_info.include_paths).replace(\"\\\\\", \"/\"),\n 'lib_dirs': \"\".join(\"%s;\" % p for p in self._deps_build_info.lib_paths).replace(\"\\\\\", \"/\"),\n 'libs': \"\".join(['%s.lib;' % lib if not lib.endswith(\".lib\")\n else '%s;' % lib for lib in self._deps_build_info.libs]),\n 'definitions': \"\".join(\"%s;\" % d for d in self._deps_build_info.defines),\n 'compiler_flags': \" \".join(self._deps_build_info.cppflags + self._deps_build_info.cflags),\n 'linker_flags': \" \".join(self._deps_build_info.sharedlinkflags),\n 'exe_flags': \" \".join(self._deps_build_info.exelinkflags)\n }\n formatted_template = self.template.format(**fields)\n return formatted_template\n", "path": "conans/client/generators/visualstudio.py"}], "after_files": [{"content": "import os\n\nfrom conans.model import Generator\nfrom conans.paths import BUILD_INFO_VISUAL_STUDIO\nimport re\n\n\nclass VisualStudioGenerator(Generator):\n\n template = '''<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n <ImportGroup Label=\"PropertySheets\" />\n <PropertyGroup Label=\"UserMacros\" />\n <PropertyGroup Label=\"Conan-RootDirs\">{item_properties}\n </PropertyGroup>\n <PropertyGroup Label=\"ConanVariables\">\n <ConanBinaryDirectories>{bin_dirs}</ConanBinaryDirectories>\n <ConanResourceDirectories>{res_dirs}</ConanResourceDirectories>\n </PropertyGroup>\n <PropertyGroup>\n <LocalDebuggerEnvironment>PATH=%PATH%;{bin_dirs}</LocalDebuggerEnvironment>\n <DebuggerFlavor>WindowsLocalDebugger</DebuggerFlavor>\n </PropertyGroup>\n <ItemDefinitionGroup>\n <ClCompile>\n <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n <PreprocessorDefinitions>{definitions}%(PreprocessorDefinitions)</PreprocessorDefinitions>\n <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>\n </ClCompile>\n <Link>\n <AdditionalLibraryDirectories>{lib_dirs}%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>\n <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>\n <AdditionalOptions>{linker_flags} %(AdditionalOptions)</AdditionalOptions>\n </Link>\n <Midl>\n <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n </Midl>\n <ResourceCompile>\n <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n <PreprocessorDefinitions>{definitions}%(PreprocessorDefinitions)</PreprocessorDefinitions>\n <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>\n </ResourceCompile>\n </ItemDefinitionGroup>\n <ItemGroup />\n</Project>'''\n\n item_template = '''\n <Conan-{name}-Root>{root_dir}</Conan-{name}-Root>'''\n\n def _format_items(self):\n sections = []\n for dep_name, cpp_info in self.deps_build_info.dependencies:\n fields = {\n 'root_dir': cpp_info.rootpath.replace(\"\\\\\", \"/\"),\n 'name': dep_name.replace(\".\", \"-\")\n }\n section = self.item_template.format(**fields)\n sections.append(section)\n return \"\".join(sections)\n\n @property\n def filename(self):\n return BUILD_INFO_VISUAL_STUDIO\n\n @property\n def content(self):\n per_item_props = self._format_items()\n fields = {\n 'item_properties': 
per_item_props,\n 'bin_dirs': \"\".join(\"%s;\" % p for p in self._deps_build_info.bin_paths).replace(\"\\\\\", \"/\"),\n 'res_dirs': \"\".join(\"%s;\" % p for p in self._deps_build_info.res_paths).replace(\"\\\\\", \"/\"),\n 'include_dirs': \"\".join(\"%s;\" % p for p in self._deps_build_info.include_paths).replace(\"\\\\\", \"/\"),\n 'lib_dirs': \"\".join(\"%s;\" % p for p in self._deps_build_info.lib_paths).replace(\"\\\\\", \"/\"),\n 'libs': \"\".join(['%s.lib;' % lib if not lib.endswith(\".lib\")\n else '%s;' % lib for lib in self._deps_build_info.libs]),\n 'definitions': \"\".join(\"%s;\" % d for d in self._deps_build_info.defines),\n 'compiler_flags': \" \".join(self._deps_build_info.cppflags + self._deps_build_info.cflags),\n 'linker_flags': \" \".join(self._deps_build_info.sharedlinkflags),\n 'exe_flags': \" \".join(self._deps_build_info.exelinkflags)\n }\n formatted_template = self.template.format(**fields)\n userprofile = os.getenv(\"USERPROFILE\")\n if userprofile:\n userprofile = userprofile.replace(\"\\\\\", \"/\")\n formatted_template = re.sub(userprofile, \"$(USERPROFILE)\", formatted_template, flags=re.I)\n return formatted_template\n", "path": "conans/client/generators/visualstudio.py"}]} | 1,292 | 195 |
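The record above ends with a golden diff that hides the developer's absolute profile directory behind the `$(USERPROFILE)` MSBuild macro in the generated property sheet. A minimal, standalone sketch of that substitution pattern (independent of Conan's generator classes; the sample path in the comment is invented, and `re.escape` is added here even though the original patch matches the raw path) could look like this:

```python
import os
import re


def mask_userprofile(text):
    """Replace the absolute user-profile path with the $(USERPROFILE) macro.

    The path is normalized to forward slashes first, and the match is
    case-insensitive because Windows paths compare case-insensitively.
    """
    userprofile = os.getenv("USERPROFILE")
    if not userprofile:
        return text  # nothing to mask, e.g. on non-Windows hosts
    userprofile = userprofile.replace("\\", "/")
    return re.sub(re.escape(userprofile), "$(USERPROFILE)", text, flags=re.I)


# Hypothetical usage, assuming USERPROFILE=C:\Users\alice:
#   mask_userprofile("C:/Users/alice/.conan/data/pkg/include;")
#   returns "$(USERPROFILE)/.conan/data/pkg/include;"
```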
gh_patches_debug_657 | rasdani/github-patches | git_diff | pex-tool__pex-1987 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.114
On the docket:
+ [ ] Only insert "" to head of sys.path if a venv PEX runs in interpreter mode #1984
+ [x] venv_dir calculation doesn't correctly handle PEX_PYTHON_PATH with symlinks. #1885
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.113"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.113"
+__version__ = "2.1.114"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.113\"\n+__version__ = \"2.1.114\"\n", "issue": "Release 2.1.114\nOn the docket:\r\n+ [ ] Only insert \"\" to head of sys.path if a venv PEX runs in interpreter mode #1984\r\n+ [x] venv_dir calculation doesn't correctly handle PEX_PYTHON_PATH with symlinks. #1885\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.113\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.114\"\n", "path": "pex/version.py"}]} | 378 | 98 |
gh_patches_debug_247 | rasdani/github-patches | git_diff | fail2ban__fail2ban-249 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
weak regex'es for apache
See email on fail2ban-users
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `common/version.py`
Content:
```
1 # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: t -*-
2 # vi: set ft=python sts=4 ts=4 sw=4 noet :
3
4 # This file is part of Fail2Ban.
5 #
6 # Fail2Ban is free software; you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation; either version 2 of the License, or
9 # (at your option) any later version.
10 #
11 # Fail2Ban is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with Fail2Ban; if not, write to the Free Software
18 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
19
20 # Author: Cyril Jaquier
21 #
22
23 __author__ = "Cyril Jaquier, Yaroslav Halchenko"
24 __copyright__ = "Copyright (c) 2004 Cyril Jaquier, 2011-2013 Yaroslav Halchenko"
25 __license__ = "GPL"
26
27 version = "0.8.9.dev"
28
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/common/version.py b/common/version.py
--- a/common/version.py
+++ b/common/version.py
@@ -24,4 +24,4 @@
__copyright__ = "Copyright (c) 2004 Cyril Jaquier, 2011-2013 Yaroslav Halchenko"
__license__ = "GPL"
-version = "0.8.9.dev"
+version = "0.8.10"
| {"golden_diff": "diff --git a/common/version.py b/common/version.py\n--- a/common/version.py\n+++ b/common/version.py\n@@ -24,4 +24,4 @@\n __copyright__ = \"Copyright (c) 2004 Cyril Jaquier, 2011-2013 Yaroslav Halchenko\"\n __license__ = \"GPL\"\n \n-version = \"0.8.9.dev\"\n+version = \"0.8.10\"\n", "issue": "weak regex'es for apache\nSee email on fail2ban-users\n\n", "before_files": [{"content": "# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: t -*-\n# vi: set ft=python sts=4 ts=4 sw=4 noet :\n\n# This file is part of Fail2Ban.\n#\n# Fail2Ban is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# Fail2Ban is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Fail2Ban; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\n# Author: Cyril Jaquier\n#\n\n__author__ = \"Cyril Jaquier, Yaroslav Halchenko\"\n__copyright__ = \"Copyright (c) 2004 Cyril Jaquier, 2011-2013 Yaroslav Halchenko\"\n__license__ = \"GPL\"\n\nversion = \"0.8.9.dev\"\n", "path": "common/version.py"}], "after_files": [{"content": "# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: t -*-\n# vi: set ft=python sts=4 ts=4 sw=4 noet :\n\n# This file is part of Fail2Ban.\n#\n# Fail2Ban is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# Fail2Ban is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Fail2Ban; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\n# Author: Cyril Jaquier\n#\n\n__author__ = \"Cyril Jaquier, Yaroslav Halchenko\"\n__copyright__ = \"Copyright (c) 2004 Cyril Jaquier, 2011-2013 Yaroslav Halchenko\"\n__license__ = \"GPL\"\n\nversion = \"0.8.10\"\n", "path": "common/version.py"}]} | 624 | 101 |
gh_patches_debug_37565 | rasdani/github-patches | git_diff | opensearch-project__opensearch-build-456 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Integration tests to pull artifacts from S3Bucket
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bundle-workflow/src/manifests/bundle_manifest.py`
Content:
```
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6
7 from manifests.manifest import Manifest
8
9
10 class BundleManifest(Manifest):
11 """
12 A BundleManifest is an immutable view of the outputs from a assemble step
13 The manifest contains information about the bundle that was built (in the `assemble` section),
14 and the components that made up the bundle in the `components` section.
15
16 The format for schema version 1.0 is:
17 schema-version: "1.0"
18 build:
19 name: string
20 version: string
21 architecture: x64 or arm64
22 location: /relative/path/to/tarball
23 components:
24 - name: string
25 repository: URL of git repository
26 ref: git ref that was built (sha, branch, or tag)
27 commit_id: The actual git commit ID that was built (i.e. the resolved "ref")
28 location: /relative/path/to/artifact
29 """
30
31 def __init__(self, data):
32 super().__init__(data)
33
34 self.build = self.Build(data["build"])
35 self.components = list(
36 map(lambda entry: self.Component(entry), data["components"])
37 )
38
39 def __to_dict__(self):
40 return {
41 "schema-version": "1.0",
42 "build": self.build.__to_dict__(),
43 "components": list(
44 map(lambda component: component.__to_dict__(), self.components)
45 ),
46 }
47
48 class Build:
49 def __init__(self, data):
50 self.name = data["name"]
51 self.version = data["version"]
52 self.architecture = data["architecture"]
53 self.location = data["location"]
54 self.id = data["id"]
55
56 def __to_dict__(self):
57 return {
58 "name": self.name,
59 "version": self.version,
60 "architecture": self.architecture,
61 "location": self.location,
62 "id": self.id,
63 }
64
65 class Component:
66 def __init__(self, data):
67 self.name = data["name"]
68 self.repository = data["repository"]
69 self.ref = data["ref"]
70 self.commit_id = data["commit_id"]
71 self.location = data["location"]
72
73 def __to_dict__(self):
74 return {
75 "name": self.name,
76 "repository": self.repository,
77 "ref": self.ref,
78 "commit_id": self.commit_id,
79 "location": self.location,
80 }
81
```
Path: `bundle-workflow/src/manifests/build_manifest.py`
Content:
```
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6
7 from manifests.manifest import Manifest
8
9 """
10 A BuildManifest is an immutable view of the outputs from a build step
11 The manifest contains information about the product that was built (in the `build` section),
12 and the components that made up the build in the `components` section.
13
14 The format for schema version 1.0 is:
15 schema-version: "1.0"
16 build:
17 name: string
18 version: string
19 architecture: x64 or arm64
20 components:
21 - name: string
22 repository: URL of git repository
23 ref: git ref that was built (sha, branch, or tag)
24 commit_id: The actual git commit ID that was built (i.e. the resolved "ref")
25 artifacts:
26 maven:
27 - maven/relative/path/to/artifact
28 - ...
29 plugins:
30 - plugins/relative/path/to/artifact
31 - ...
32 libs:
33 - libs/relative/path/to/artifact
34 - ...
35 - ...
36 """
37
38
39 class BuildManifest(Manifest):
40 def __init__(self, data):
41 super().__init__(data)
42
43 self.build = self.Build(data["build"])
44 self.components = list(
45 map(lambda entry: self.Component(entry), data["components"])
46 )
47
48 def __to_dict__(self):
49 return {
50 "schema-version": "1.0",
51 "build": self.build.__to_dict__(),
52 "components": list(
53 map(lambda component: component.__to_dict__(), self.components)
54 ),
55 }
56
57 class Build:
58 def __init__(self, data):
59 self.name = data["name"]
60 self.version = data["version"]
61 self.architecture = data["architecture"]
62 self.id = data["id"]
63
64 def __to_dict__(self):
65 return {
66 "name": self.name,
67 "version": self.version,
68 "architecture": self.architecture,
69 "id": self.id,
70 }
71
72 class Component:
73 def __init__(self, data):
74 self.name = data["name"]
75 self.repository = data["repository"]
76 self.ref = data["ref"]
77 self.commit_id = data["commit_id"]
78 self.artifacts = data["artifacts"]
79 self.version = data["version"]
80
81 def __to_dict__(self):
82 return {
83 "name": self.name,
84 "repository": self.repository,
85 "ref": self.ref,
86 "commit_id": self.commit_id,
87 "artifacts": self.artifacts,
88 "version": self.version,
89 }
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bundle-workflow/src/manifests/build_manifest.py b/bundle-workflow/src/manifests/build_manifest.py
--- a/bundle-workflow/src/manifests/build_manifest.py
+++ b/bundle-workflow/src/manifests/build_manifest.py
@@ -4,6 +4,9 @@
# this file be licensed under the Apache-2.0 license or a
# compatible open source license.
+import os
+
+from aws.s3_bucket import S3Bucket
from manifests.manifest import Manifest
"""
@@ -54,6 +57,20 @@
),
}
+ @staticmethod
+ def get_build_manifest_relative_location(build_id, opensearch_version, architecture):
+ return f"builds/{opensearch_version}/{build_id}/{architecture}/manifest.yml"
+
+ @staticmethod
+ def from_s3(bucket_name, build_id, opensearch_version, architecture, work_dir=None):
+ work_dir = work_dir if not None else str(os.getcwd())
+ manifest_s3_path = BuildManifest.get_build_manifest_relative_location(build_id, opensearch_version, architecture)
+ S3Bucket(bucket_name).download_file(manifest_s3_path, work_dir)
+ with open('manifest.yml', 'r') as file:
+ build_manifest = BuildManifest.from_file(file)
+ os.remove(os.path.realpath(os.path.join(work_dir, 'manifest.yml')))
+ return build_manifest
+
class Build:
def __init__(self, data):
self.name = data["name"]
diff --git a/bundle-workflow/src/manifests/bundle_manifest.py b/bundle-workflow/src/manifests/bundle_manifest.py
--- a/bundle-workflow/src/manifests/bundle_manifest.py
+++ b/bundle-workflow/src/manifests/bundle_manifest.py
@@ -4,6 +4,9 @@
# this file be licensed under the Apache-2.0 license or a
# compatible open source license.
+import os
+
+from aws.s3_bucket import S3Bucket
from manifests.manifest import Manifest
@@ -45,6 +48,28 @@
),
}
+ @staticmethod
+ def from_s3(bucket_name, build_id, opensearch_version, architecture, work_dir=None):
+ work_dir = work_dir if not None else str(os.getcwd())
+ manifest_s3_path = BundleManifest.get_bundle_manifest_relative_location(build_id, opensearch_version, architecture)
+ S3Bucket(bucket_name).download_file(manifest_s3_path, work_dir)
+ with open('manifest.yml', 'r') as file:
+ bundle_manifest = BundleManifest.from_file(file)
+ os.remove(os.path.realpath(os.path.join(work_dir, 'manifest.yml')))
+ return bundle_manifest
+
+ @staticmethod
+ def get_tarball_relative_location(build_id, opensearch_version, architecture):
+ return f"bundles/{opensearch_version}/{build_id}/{architecture}/opensearch-{opensearch_version}-linux-{architecture}.tar.gz"
+
+ @staticmethod
+ def get_tarball_name(opensearch_version, architecture):
+ return f"opensearch-{opensearch_version}-linux-{architecture}.tar.gz"
+
+ @staticmethod
+ def get_bundle_manifest_relative_location(build_id, opensearch_version, architecture):
+ return f"bundles/{opensearch_version}/{build_id}/{architecture}/manifest.yml"
+
class Build:
def __init__(self, data):
self.name = data["name"]
| {"golden_diff": "diff --git a/bundle-workflow/src/manifests/build_manifest.py b/bundle-workflow/src/manifests/build_manifest.py\n--- a/bundle-workflow/src/manifests/build_manifest.py\n+++ b/bundle-workflow/src/manifests/build_manifest.py\n@@ -4,6 +4,9 @@\n # this file be licensed under the Apache-2.0 license or a\n # compatible open source license.\n \n+import os\n+\n+from aws.s3_bucket import S3Bucket\n from manifests.manifest import Manifest\n \n \"\"\"\n@@ -54,6 +57,20 @@\n ),\n }\n \n+ @staticmethod\n+ def get_build_manifest_relative_location(build_id, opensearch_version, architecture):\n+ return f\"builds/{opensearch_version}/{build_id}/{architecture}/manifest.yml\"\n+\n+ @staticmethod\n+ def from_s3(bucket_name, build_id, opensearch_version, architecture, work_dir=None):\n+ work_dir = work_dir if not None else str(os.getcwd())\n+ manifest_s3_path = BuildManifest.get_build_manifest_relative_location(build_id, opensearch_version, architecture)\n+ S3Bucket(bucket_name).download_file(manifest_s3_path, work_dir)\n+ with open('manifest.yml', 'r') as file:\n+ build_manifest = BuildManifest.from_file(file)\n+ os.remove(os.path.realpath(os.path.join(work_dir, 'manifest.yml')))\n+ return build_manifest\n+\n class Build:\n def __init__(self, data):\n self.name = data[\"name\"]\ndiff --git a/bundle-workflow/src/manifests/bundle_manifest.py b/bundle-workflow/src/manifests/bundle_manifest.py\n--- a/bundle-workflow/src/manifests/bundle_manifest.py\n+++ b/bundle-workflow/src/manifests/bundle_manifest.py\n@@ -4,6 +4,9 @@\n # this file be licensed under the Apache-2.0 license or a\n # compatible open source license.\n \n+import os\n+\n+from aws.s3_bucket import S3Bucket\n from manifests.manifest import Manifest\n \n \n@@ -45,6 +48,28 @@\n ),\n }\n \n+ @staticmethod\n+ def from_s3(bucket_name, build_id, opensearch_version, architecture, work_dir=None):\n+ work_dir = work_dir if not None else str(os.getcwd())\n+ manifest_s3_path = BundleManifest.get_bundle_manifest_relative_location(build_id, opensearch_version, architecture)\n+ S3Bucket(bucket_name).download_file(manifest_s3_path, work_dir)\n+ with open('manifest.yml', 'r') as file:\n+ bundle_manifest = BundleManifest.from_file(file)\n+ os.remove(os.path.realpath(os.path.join(work_dir, 'manifest.yml')))\n+ return bundle_manifest\n+\n+ @staticmethod\n+ def get_tarball_relative_location(build_id, opensearch_version, architecture):\n+ return f\"bundles/{opensearch_version}/{build_id}/{architecture}/opensearch-{opensearch_version}-linux-{architecture}.tar.gz\"\n+\n+ @staticmethod\n+ def get_tarball_name(opensearch_version, architecture):\n+ return f\"opensearch-{opensearch_version}-linux-{architecture}.tar.gz\"\n+\n+ @staticmethod\n+ def get_bundle_manifest_relative_location(build_id, opensearch_version, architecture):\n+ return f\"bundles/{opensearch_version}/{build_id}/{architecture}/manifest.yml\"\n+\n class Build:\n def __init__(self, data):\n self.name = data[\"name\"]\n", "issue": "Integration tests to pull artifacts from S3Bucket\n\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nfrom manifests.manifest import Manifest\n\n\nclass BundleManifest(Manifest):\n \"\"\"\n A BundleManifest is an immutable view of the outputs from a assemble step\n The manifest contains information about the bundle that was built (in the `assemble` section),\n and the components that made up 
the bundle in the `components` section.\n\n The format for schema version 1.0 is:\n schema-version: \"1.0\"\n build:\n name: string\n version: string\n architecture: x64 or arm64\n location: /relative/path/to/tarball\n components:\n - name: string\n repository: URL of git repository\n ref: git ref that was built (sha, branch, or tag)\n commit_id: The actual git commit ID that was built (i.e. the resolved \"ref\")\n location: /relative/path/to/artifact\n \"\"\"\n\n def __init__(self, data):\n super().__init__(data)\n\n self.build = self.Build(data[\"build\"])\n self.components = list(\n map(lambda entry: self.Component(entry), data[\"components\"])\n )\n\n def __to_dict__(self):\n return {\n \"schema-version\": \"1.0\",\n \"build\": self.build.__to_dict__(),\n \"components\": list(\n map(lambda component: component.__to_dict__(), self.components)\n ),\n }\n\n class Build:\n def __init__(self, data):\n self.name = data[\"name\"]\n self.version = data[\"version\"]\n self.architecture = data[\"architecture\"]\n self.location = data[\"location\"]\n self.id = data[\"id\"]\n\n def __to_dict__(self):\n return {\n \"name\": self.name,\n \"version\": self.version,\n \"architecture\": self.architecture,\n \"location\": self.location,\n \"id\": self.id,\n }\n\n class Component:\n def __init__(self, data):\n self.name = data[\"name\"]\n self.repository = data[\"repository\"]\n self.ref = data[\"ref\"]\n self.commit_id = data[\"commit_id\"]\n self.location = data[\"location\"]\n\n def __to_dict__(self):\n return {\n \"name\": self.name,\n \"repository\": self.repository,\n \"ref\": self.ref,\n \"commit_id\": self.commit_id,\n \"location\": self.location,\n }\n", "path": "bundle-workflow/src/manifests/bundle_manifest.py"}, {"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nfrom manifests.manifest import Manifest\n\n\"\"\"\nA BuildManifest is an immutable view of the outputs from a build step\nThe manifest contains information about the product that was built (in the `build` section),\nand the components that made up the build in the `components` section.\n\nThe format for schema version 1.0 is:\nschema-version: \"1.0\"\nbuild:\n name: string\n version: string\n architecture: x64 or arm64\ncomponents:\n - name: string\n repository: URL of git repository\n ref: git ref that was built (sha, branch, or tag)\n commit_id: The actual git commit ID that was built (i.e. 
the resolved \"ref\")\n artifacts:\n maven:\n - maven/relative/path/to/artifact\n - ...\n plugins:\n - plugins/relative/path/to/artifact\n - ...\n libs:\n - libs/relative/path/to/artifact\n - ...\n - ...\n\"\"\"\n\n\nclass BuildManifest(Manifest):\n def __init__(self, data):\n super().__init__(data)\n\n self.build = self.Build(data[\"build\"])\n self.components = list(\n map(lambda entry: self.Component(entry), data[\"components\"])\n )\n\n def __to_dict__(self):\n return {\n \"schema-version\": \"1.0\",\n \"build\": self.build.__to_dict__(),\n \"components\": list(\n map(lambda component: component.__to_dict__(), self.components)\n ),\n }\n\n class Build:\n def __init__(self, data):\n self.name = data[\"name\"]\n self.version = data[\"version\"]\n self.architecture = data[\"architecture\"]\n self.id = data[\"id\"]\n\n def __to_dict__(self):\n return {\n \"name\": self.name,\n \"version\": self.version,\n \"architecture\": self.architecture,\n \"id\": self.id,\n }\n\n class Component:\n def __init__(self, data):\n self.name = data[\"name\"]\n self.repository = data[\"repository\"]\n self.ref = data[\"ref\"]\n self.commit_id = data[\"commit_id\"]\n self.artifacts = data[\"artifacts\"]\n self.version = data[\"version\"]\n\n def __to_dict__(self):\n return {\n \"name\": self.name,\n \"repository\": self.repository,\n \"ref\": self.ref,\n \"commit_id\": self.commit_id,\n \"artifacts\": self.artifacts,\n \"version\": self.version,\n }\n", "path": "bundle-workflow/src/manifests/build_manifest.py"}], "after_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\n\nfrom aws.s3_bucket import S3Bucket\nfrom manifests.manifest import Manifest\n\n\nclass BundleManifest(Manifest):\n \"\"\"\n A BundleManifest is an immutable view of the outputs from a assemble step\n The manifest contains information about the bundle that was built (in the `assemble` section),\n and the components that made up the bundle in the `components` section.\n\n The format for schema version 1.0 is:\n schema-version: \"1.0\"\n build:\n name: string\n version: string\n architecture: x64 or arm64\n location: /relative/path/to/tarball\n components:\n - name: string\n repository: URL of git repository\n ref: git ref that was built (sha, branch, or tag)\n commit_id: The actual git commit ID that was built (i.e. 
the resolved \"ref\")\n location: /relative/path/to/artifact\n \"\"\"\n\n def __init__(self, data):\n super().__init__(data)\n\n self.build = self.Build(data[\"build\"])\n self.components = list(\n map(lambda entry: self.Component(entry), data[\"components\"])\n )\n\n def __to_dict__(self):\n return {\n \"schema-version\": \"1.0\",\n \"build\": self.build.__to_dict__(),\n \"components\": list(\n map(lambda component: component.__to_dict__(), self.components)\n ),\n }\n\n @staticmethod\n def from_s3(bucket_name, build_id, opensearch_version, architecture, work_dir=None):\n work_dir = work_dir if not None else str(os.getcwd())\n manifest_s3_path = BundleManifest.get_bundle_manifest_relative_location(build_id, opensearch_version, architecture)\n S3Bucket(bucket_name).download_file(manifest_s3_path, work_dir)\n with open('manifest.yml', 'r') as file:\n bundle_manifest = BundleManifest.from_file(file)\n os.remove(os.path.realpath(os.path.join(work_dir, 'manifest.yml')))\n return bundle_manifest\n\n @staticmethod\n def get_tarball_relative_location(build_id, opensearch_version, architecture):\n return f\"bundles/{opensearch_version}/{build_id}/{architecture}/opensearch-{opensearch_version}-linux-{architecture}.tar.gz\"\n\n @staticmethod\n def get_tarball_name(opensearch_version, architecture):\n return f\"opensearch-{opensearch_version}-linux-{architecture}.tar.gz\"\n\n @staticmethod\n def get_bundle_manifest_relative_location(build_id, opensearch_version, architecture):\n return f\"bundles/{opensearch_version}/{build_id}/{architecture}/manifest.yml\"\n\n class Build:\n def __init__(self, data):\n self.name = data[\"name\"]\n self.version = data[\"version\"]\n self.architecture = data[\"architecture\"]\n self.location = data[\"location\"]\n self.id = data[\"id\"]\n\n def __to_dict__(self):\n return {\n \"name\": self.name,\n \"version\": self.version,\n \"architecture\": self.architecture,\n \"location\": self.location,\n \"id\": self.id,\n }\n\n class Component:\n def __init__(self, data):\n self.name = data[\"name\"]\n self.repository = data[\"repository\"]\n self.ref = data[\"ref\"]\n self.commit_id = data[\"commit_id\"]\n self.location = data[\"location\"]\n\n def __to_dict__(self):\n return {\n \"name\": self.name,\n \"repository\": self.repository,\n \"ref\": self.ref,\n \"commit_id\": self.commit_id,\n \"location\": self.location,\n }\n", "path": "bundle-workflow/src/manifests/bundle_manifest.py"}, {"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\n\nfrom aws.s3_bucket import S3Bucket\nfrom manifests.manifest import Manifest\n\n\"\"\"\nA BuildManifest is an immutable view of the outputs from a build step\nThe manifest contains information about the product that was built (in the `build` section),\nand the components that made up the build in the `components` section.\n\nThe format for schema version 1.0 is:\nschema-version: \"1.0\"\nbuild:\n name: string\n version: string\n architecture: x64 or arm64\ncomponents:\n - name: string\n repository: URL of git repository\n ref: git ref that was built (sha, branch, or tag)\n commit_id: The actual git commit ID that was built (i.e. 
the resolved \"ref\")\n artifacts:\n maven:\n - maven/relative/path/to/artifact\n - ...\n plugins:\n - plugins/relative/path/to/artifact\n - ...\n libs:\n - libs/relative/path/to/artifact\n - ...\n - ...\n\"\"\"\n\n\nclass BuildManifest(Manifest):\n def __init__(self, data):\n super().__init__(data)\n\n self.build = self.Build(data[\"build\"])\n self.components = list(\n map(lambda entry: self.Component(entry), data[\"components\"])\n )\n\n def __to_dict__(self):\n return {\n \"schema-version\": \"1.0\",\n \"build\": self.build.__to_dict__(),\n \"components\": list(\n map(lambda component: component.__to_dict__(), self.components)\n ),\n }\n\n @staticmethod\n def get_build_manifest_relative_location(build_id, opensearch_version, architecture):\n return f\"builds/{opensearch_version}/{build_id}/{architecture}/manifest.yml\"\n\n @staticmethod\n def from_s3(bucket_name, build_id, opensearch_version, architecture, work_dir=None):\n work_dir = work_dir if not None else str(os.getcwd())\n manifest_s3_path = BuildManifest.get_build_manifest_relative_location(build_id, opensearch_version, architecture)\n S3Bucket(bucket_name).download_file(manifest_s3_path, work_dir)\n with open('manifest.yml', 'r') as file:\n build_manifest = BuildManifest.from_file(file)\n os.remove(os.path.realpath(os.path.join(work_dir, 'manifest.yml')))\n return build_manifest\n\n class Build:\n def __init__(self, data):\n self.name = data[\"name\"]\n self.version = data[\"version\"]\n self.architecture = data[\"architecture\"]\n self.id = data[\"id\"]\n\n def __to_dict__(self):\n return {\n \"name\": self.name,\n \"version\": self.version,\n \"architecture\": self.architecture,\n \"id\": self.id,\n }\n\n class Component:\n def __init__(self, data):\n self.name = data[\"name\"]\n self.repository = data[\"repository\"]\n self.ref = data[\"ref\"]\n self.commit_id = data[\"commit_id\"]\n self.artifacts = data[\"artifacts\"]\n self.version = data[\"version\"]\n\n def __to_dict__(self):\n return {\n \"name\": self.name,\n \"repository\": self.repository,\n \"ref\": self.ref,\n \"commit_id\": self.commit_id,\n \"artifacts\": self.artifacts,\n \"version\": self.version,\n }\n", "path": "bundle-workflow/src/manifests/build_manifest.py"}]} | 1,758 | 765 |
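The patch above gives both manifest classes a `from_s3` helper that downloads `manifest.yml` through the repository's own `S3Bucket` wrapper and then parses it. As a rough illustration of the same flow using plain `boto3` and PyYAML instead of those project classes (the bucket name, version, build id, and architecture below are placeholders), it might look like:

```python
import os
import tempfile

import boto3
import yaml


def fetch_build_manifest(bucket_name, opensearch_version, build_id, architecture):
    """Download builds/<version>/<id>/<arch>/manifest.yml from S3 and parse it."""
    key = f"builds/{opensearch_version}/{build_id}/{architecture}/manifest.yml"
    local_path = os.path.join(tempfile.mkdtemp(), "manifest.yml")
    boto3.client("s3").download_file(bucket_name, key, local_path)
    with open(local_path) as handle:
        return yaml.safe_load(handle)


# manifest = fetch_build_manifest("my-artifact-bucket", "1.1.0", "123", "x64")
# print(manifest["build"]["version"])
```

The actual change keeps these key templates in static methods on the manifest classes, as shown in the diff above, rather than hard-coding them at the call site.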
gh_patches_debug_47932 | rasdani/github-patches | git_diff | liqd__a4-opin-612 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
too much space below video, not deletable in wagtail

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `home/wagtail_hooks.py`
Content:
```
1 from django.conf import settings
2 from django.utils.html import format_html
3 from wagtail.wagtailcore import hooks
4
5
6 @hooks.register('insert_editor_css')
7 def editor_css():
8 return format_html('<link rel="stylesheet" href="'
9 + settings.STATIC_URL
10 + 'scss/wagtail_admin/wagtail_admin.css">')
11
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/home/wagtail_hooks.py b/home/wagtail_hooks.py
--- a/home/wagtail_hooks.py
+++ b/home/wagtail_hooks.py
@@ -7,4 +7,4 @@
def editor_css():
return format_html('<link rel="stylesheet" href="'
+ settings.STATIC_URL
- + 'scss/wagtail_admin/wagtail_admin.css">')
+ + 'wagtail_admin.css">')
| {"golden_diff": "diff --git a/home/wagtail_hooks.py b/home/wagtail_hooks.py\n--- a/home/wagtail_hooks.py\n+++ b/home/wagtail_hooks.py\n@@ -7,4 +7,4 @@\n def editor_css():\n return format_html('<link rel=\"stylesheet\" href=\"'\n + settings.STATIC_URL\n- + 'scss/wagtail_admin/wagtail_admin.css\">')\n+ + 'wagtail_admin.css\">')\n", "issue": "too much space below video, not deletable in wagtail \n\r\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.utils.html import format_html\nfrom wagtail.wagtailcore import hooks\n\n\[email protected]('insert_editor_css')\ndef editor_css():\n return format_html('<link rel=\"stylesheet\" href=\"'\n + settings.STATIC_URL\n + 'scss/wagtail_admin/wagtail_admin.css\">')\n", "path": "home/wagtail_hooks.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django.utils.html import format_html\nfrom wagtail.wagtailcore import hooks\n\n\[email protected]('insert_editor_css')\ndef editor_css():\n return format_html('<link rel=\"stylesheet\" href=\"'\n + settings.STATIC_URL\n + 'wagtail_admin.css\">')\n", "path": "home/wagtail_hooks.py"}]} | 428 | 97 |
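The one-line fix above only changes the stylesheet path injected by the `insert_editor_css` hook. For context, a standalone sketch of such a hook, written here with Django's `static()` helper instead of concatenating `STATIC_URL` by hand and with the filename taken from the patch, might read:

```python
from django.templatetags.static import static
from django.utils.html import format_html
from wagtail.wagtailcore import hooks  # newer Wagtail releases expose this as wagtail.core.hooks


@hooks.register('insert_editor_css')
def editor_css():
    # format_html escapes the interpolated value, keeping the href well-formed.
    return format_html('<link rel="stylesheet" href="{}">', static('wagtail_admin.css'))
```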
gh_patches_debug_42817 | rasdani/github-patches | git_diff | iterative__dvc-147 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Easy] Get rid of pathlib library
Most of the code uses the standard `os` library as a filesystem utility, but some modules (init.py) use `pathlib`. Let's get rid of `pathlib`. It will unify the code and decrease the package size.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import platform
2 from setuptools import setup, find_packages
3 from distutils.errors import DistutilsPlatformError
4 from dvc import VERSION
5
6 install_requires = [
7 'altgraph',
8 'appdirs',
9 'backports.shutil-get-terminal-size',
10 'boto',
11 'cachetools',
12 'configparser',
13 'decorator',
14 'dill',
15 'enum34',
16 'fasteners',
17 'funcsigs',
18 'future',
19 'futures',
20 'gapic-google-cloud-datastore-v1',
21 'gapic-google-cloud-error-reporting-v1beta1',
22 'gapic-google-cloud-logging-v2',
23 'gapic-google-cloud-pubsub-v1',
24 'gapic-google-cloud-spanner-admin-database-v1',
25 'gapic-google-cloud-spanner-admin-instance-v1',
26 'gapic-google-cloud-spanner-v1',
27 'gapic-google-cloud-speech-v1beta1',
28 'gapic-google-cloud-vision-v1',
29 'google-auth',
30 'google-auth-httplib2',
31 'google-cloud',
32 'google-cloud-bigquery',
33 'google-cloud-bigtable',
34 'google-cloud-core',
35 'google-cloud-datastore',
36 'google-cloud-dns',
37 'google-cloud-error-reporting',
38 'google-cloud-language',
39 'google-cloud-logging',
40 'google-cloud-monitoring',
41 'google-cloud-pubsub',
42 'google-cloud-resource-manager',
43 'google-cloud-runtimeconfig',
44 'google-cloud-spanner',
45 'google-cloud-speech',
46 'google-cloud-storage',
47 'google-cloud-translate',
48 'google-cloud-vision',
49 'google-gax',
50 'googleapis-common-protos',
51 'grpc-google-iam-v1',
52 'grpcio',
53 'httplib2',
54 'ipython',
55 'ipython-genutils',
56 'macholib',
57 'mock',
58 'modulegraph',
59 'monotonic',
60 'nose',
61 'oauth2client',
62 'packaging',
63 'pathlib',
64 'pathlib2',
65 'pbr',
66 'pexpect',
67 'pickleshare',
68 'ply',
69 'prompt-toolkit',
70 'proto-google-cloud-datastore-v1',
71 'proto-google-cloud-error-reporting-v1beta1',
72 'proto-google-cloud-logging-v2',
73 'proto-google-cloud-pubsub-v1',
74 'proto-google-cloud-spanner-admin-database-v1',
75 'proto-google-cloud-spanner-admin-instance-v1',
76 'proto-google-cloud-spanner-v1',
77 'proto-google-cloud-speech-v1beta1',
78 'proto-google-cloud-vision-v1',
79 'protobuf',
80 'ptyprocess',
81 'pyasn1',
82 'pyasn1-modules',
83 'Pygments',
84 'PyInstaller',
85 'pyparsing',
86 'requests',
87 'rsa',
88 'scandir',
89 'simplegeneric',
90 'six',
91 'traitlets',
92 'wcwidth',
93 'colorama',
94 'google-compute-engine'
95 ]
96
97 if platform.system() == 'Darwin':
98 install_requires.append('appnope')
99 install_requires.append('py2app')
100
101 setup(
102 name='dvc',
103 version=VERSION,
104 description='Data Version Control makes your data science projects reproducible and shareable.',
105 author='Dmitry Petrov',
106 author_email='[email protected]',
107 url='https://github.com/dataversioncontrol/dvc.git',
108 license='Apache License 2.0',
109 install_requires=install_requires,
110 keywords='data science, data version control, machine learning',
111 classifiers=[
112 'Development Status :: 4 - Beta',
113 'Programming Language :: Python :: 2',
114 'Programming Language :: Python :: 3',
115 ],
116 packages=find_packages(exclude=['bin', 'tests', 'functests']),
117 include_package_data=True,
118 download_url='http://dataversioncontrol.com',
119 entry_points={
120 'console_scripts': ['dvc = dvc.main:main']
121 },
122 zip_safe=False
123 )
124
```
Path: `dvc/command/init.py`
Content:
```
1 import os
2 from pathlib import Path
3
4 from dvc.command.base import CmdBase
5 from dvc.logger import Logger
6 from dvc.config import Config
7 from dvc.exceptions import DvcException
8 from dvc.state_file import StateFile
9 from dvc.system import System
10
11
12 class InitError(DvcException):
13 def __init__(self, msg):
14 DvcException.__init__(self, 'Init error: {}'.format(msg))
15
16
17 class CmdInit(CmdBase):
18 CONFIG_TEMPLATE = '''[Global]
19 DataDir = {}
20 CacheDir = {}
21 StateDir = {}
22 TargetFile = {}
23
24 # Supported clouds: AWS, GCP
25 Cloud = AWS
26 StoragePath =
27
28 # Log levels: Debug, Info, Warning and Error
29 LogLevel = Info
30
31 [AWS]
32 StoragePath = dvc/tutorial
33
34 CredentialPath =
35
36 Region = us-east-1
37 Zone = us-east-1a
38
39 Image = ami-2d39803a
40
41 InstanceType = t2.nano
42
43 SpotPrice =
44 SpotTimeout = 300
45
46 Storage = my-100gb-drive-io
47
48 KeyDir = ~/.ssh
49 KeyName = dvc-key
50
51 SecurityGroup = dvc-group
52
53 [GCP]
54 StoragePath =
55 ProjectName =
56 '''
57
58 EMPTY_FILE_NAME = 'empty'
59 EMPTY_FILE_CHECKSUM = '0000000'
60
61 def __init__(self, settings):
62 super(CmdInit, self).__init__(settings)
63
64 def get_not_existing_path(self, dir):
65 path = Path(os.path.join(self.git.git_dir, dir))
66 if path.exists():
67 raise InitError('Path "{}" already exist'.format(path.name))
68 return path
69
70 def get_not_existing_conf_file_name(self):
71 file_name = os.path.join(self.git.git_dir, Config.CONFIG)
72 if os.path.exists(file_name):
73 raise InitError('Configuration file "{}" already exist'.format(file_name))
74 return file_name
75
76 def run(self):
77 if not self.no_git_actions and not self.git.is_ready_to_go():
78 return 1
79
80 if os.path.realpath(os.path.curdir) != self.settings.git.git_dir_abs:
81 Logger.error('DVC error: initialization could be done only from git root directory {}'.format(
82 self.settings.git.git_dir_abs
83 ))
84 return 1
85
86 data_dir_path = self.get_not_existing_path(self.parsed_args.data_dir)
87 cache_dir_path = self.get_not_existing_path(self.parsed_args.cache_dir)
88 state_dir_path = self.get_not_existing_path(self.parsed_args.state_dir)
89 target_file_path = self.get_not_existing_path(self.parsed_args.target_file)
90
91 self.settings.config.set(self.parsed_args.data_dir,
92 self.parsed_args.cache_dir,
93 self.parsed_args.state_dir,
94 target_file_path)
95
96 conf_file_name = self.get_not_existing_conf_file_name()
97
98 data_dir_path.mkdir()
99 cache_dir_path.mkdir()
100 state_dir_path.mkdir()
101 target_file_path.touch()
102 Logger.info('Directories {}/, {}/, {}/ and target file {} were created'.format(
103 data_dir_path.name,
104 cache_dir_path.name,
105 state_dir_path.name,
106 target_file_path.name))
107
108 self.create_empty_file()
109
110 conf_file = open(conf_file_name, 'wt')
111 conf_file.write(self.CONFIG_TEMPLATE.format(data_dir_path.name,
112 cache_dir_path.name,
113 state_dir_path.name,
114 target_file_path.name))
115 conf_file.close()
116
117 message = 'DVC init. data dir {}, cache dir {}, state dir {}, '.format(
118 data_dir_path.name,
119 cache_dir_path.name,
120 state_dir_path.name
121 )
122 if self.commit_if_needed(message) == 1:
123 return 1
124
125 self.modify_gitignore(cache_dir_path.name)
126 return self.commit_if_needed('DVC init. Commit .gitignore file')
127
128 def create_empty_file(self):
129 empty_data_path = os.path.join(self.parsed_args.data_dir, self.EMPTY_FILE_NAME)
130 cache_file_suffix = self.EMPTY_FILE_NAME + '_' + self.EMPTY_FILE_CHECKSUM
131 empty_cache_path = os.path.join(self.parsed_args.cache_dir, cache_file_suffix)
132 empty_state_path = os.path.join(self.parsed_args.state_dir, self.EMPTY_FILE_NAME + '.state')
133
134 open(empty_cache_path, 'w').close()
135 System.symlink(os.path.join('..', empty_cache_path), empty_data_path)
136
137 StateFile(StateFile.COMMAND_EMPTY_FILE,
138 empty_state_path,
139 self.settings,
140 input_files=[],
141 output_files=[],
142 lock=False).save()
143 pass
144
145 def modify_gitignore(self, cache_dir_name):
146 gitignore_file = os.path.join(self.git.git_dir, '.gitignore')
147 if not os.path.exists(gitignore_file):
148 open(gitignore_file, 'a').close()
149 Logger.info('File .gitignore was created')
150 with open(gitignore_file, 'a') as fd:
151 fd.write('\n{}'.format(cache_dir_name))
152 fd.write('\n{}'.format(os.path.basename(self.git.lock_file)))
153
154 Logger.info('Directory {} was added to .gitignore file'.format(cache_dir_name))
155
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dvc/command/init.py b/dvc/command/init.py
--- a/dvc/command/init.py
+++ b/dvc/command/init.py
@@ -1,5 +1,4 @@
import os
-from pathlib import Path
from dvc.command.base import CmdBase
from dvc.logger import Logger
@@ -62,10 +61,10 @@
super(CmdInit, self).__init__(settings)
def get_not_existing_path(self, dir):
- path = Path(os.path.join(self.git.git_dir, dir))
- if path.exists():
- raise InitError('Path "{}" already exist'.format(path.name))
- return path
+ path = os.path.join(self.git.git_dir, dir)
+ if os.path.exists(path):
+ raise InitError('Path "{}" already exist'.format(path))
+ return os.path.basename(path), path
def get_not_existing_conf_file_name(self):
file_name = os.path.join(self.git.git_dir, Config.CONFIG)
@@ -83,10 +82,10 @@
))
return 1
- data_dir_path = self.get_not_existing_path(self.parsed_args.data_dir)
- cache_dir_path = self.get_not_existing_path(self.parsed_args.cache_dir)
- state_dir_path = self.get_not_existing_path(self.parsed_args.state_dir)
- target_file_path = self.get_not_existing_path(self.parsed_args.target_file)
+ data_dir, data_dir_path = self.get_not_existing_path(self.parsed_args.data_dir)
+ cache_dir, cache_dir_path = self.get_not_existing_path(self.parsed_args.cache_dir)
+ state_dir, state_dir_path = self.get_not_existing_path(self.parsed_args.state_dir)
+ target_file, target_file_path = self.get_not_existing_path(self.parsed_args.target_file)
self.settings.config.set(self.parsed_args.data_dir,
self.parsed_args.cache_dir,
@@ -95,34 +94,34 @@
conf_file_name = self.get_not_existing_conf_file_name()
- data_dir_path.mkdir()
- cache_dir_path.mkdir()
- state_dir_path.mkdir()
- target_file_path.touch()
+ os.mkdir(data_dir_path)
+ os.mkdir(cache_dir_path)
+ os.mkdir(state_dir_path)
+ open(target_file_path, 'w').close()
Logger.info('Directories {}/, {}/, {}/ and target file {} were created'.format(
- data_dir_path.name,
- cache_dir_path.name,
- state_dir_path.name,
- target_file_path.name))
+ data_dir,
+ cache_dir,
+ state_dir,
+ target_file))
self.create_empty_file()
conf_file = open(conf_file_name, 'wt')
- conf_file.write(self.CONFIG_TEMPLATE.format(data_dir_path.name,
- cache_dir_path.name,
- state_dir_path.name,
- target_file_path.name))
+ conf_file.write(self.CONFIG_TEMPLATE.format(data_dir,
+ cache_dir,
+ state_dir,
+ target_file))
conf_file.close()
message = 'DVC init. data dir {}, cache dir {}, state dir {}, '.format(
- data_dir_path.name,
- cache_dir_path.name,
- state_dir_path.name
+ data_dir,
+ cache_dir,
+ state_dir
)
if self.commit_if_needed(message) == 1:
return 1
- self.modify_gitignore(cache_dir_path.name)
+ self.modify_gitignore(cache_dir)
return self.commit_if_needed('DVC init. Commit .gitignore file')
def create_empty_file(self):
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -60,8 +60,6 @@
'nose',
'oauth2client',
'packaging',
- 'pathlib',
- 'pathlib2',
'pbr',
'pexpect',
'pickleshare',
| {"golden_diff": "diff --git a/dvc/command/init.py b/dvc/command/init.py\n--- a/dvc/command/init.py\n+++ b/dvc/command/init.py\n@@ -1,5 +1,4 @@\n import os\n-from pathlib import Path\n \n from dvc.command.base import CmdBase\n from dvc.logger import Logger\n@@ -62,10 +61,10 @@\n super(CmdInit, self).__init__(settings)\n \n def get_not_existing_path(self, dir):\n- path = Path(os.path.join(self.git.git_dir, dir))\n- if path.exists():\n- raise InitError('Path \"{}\" already exist'.format(path.name))\n- return path\n+ path = os.path.join(self.git.git_dir, dir)\n+ if os.path.exists(path):\n+ raise InitError('Path \"{}\" already exist'.format(path))\n+ return os.path.basename(path), path\n \n def get_not_existing_conf_file_name(self):\n file_name = os.path.join(self.git.git_dir, Config.CONFIG)\n@@ -83,10 +82,10 @@\n ))\n return 1\n \n- data_dir_path = self.get_not_existing_path(self.parsed_args.data_dir)\n- cache_dir_path = self.get_not_existing_path(self.parsed_args.cache_dir)\n- state_dir_path = self.get_not_existing_path(self.parsed_args.state_dir)\n- target_file_path = self.get_not_existing_path(self.parsed_args.target_file)\n+ data_dir, data_dir_path = self.get_not_existing_path(self.parsed_args.data_dir)\n+ cache_dir, cache_dir_path = self.get_not_existing_path(self.parsed_args.cache_dir)\n+ state_dir, state_dir_path = self.get_not_existing_path(self.parsed_args.state_dir)\n+ target_file, target_file_path = self.get_not_existing_path(self.parsed_args.target_file)\n \n self.settings.config.set(self.parsed_args.data_dir,\n self.parsed_args.cache_dir,\n@@ -95,34 +94,34 @@\n \n conf_file_name = self.get_not_existing_conf_file_name()\n \n- data_dir_path.mkdir()\n- cache_dir_path.mkdir()\n- state_dir_path.mkdir()\n- target_file_path.touch()\n+ os.mkdir(data_dir_path)\n+ os.mkdir(cache_dir_path)\n+ os.mkdir(state_dir_path)\n+ open(target_file_path, 'w').close()\n Logger.info('Directories {}/, {}/, {}/ and target file {} were created'.format(\n- data_dir_path.name,\n- cache_dir_path.name,\n- state_dir_path.name,\n- target_file_path.name))\n+ data_dir,\n+ cache_dir,\n+ state_dir,\n+ target_file))\n \n self.create_empty_file()\n \n conf_file = open(conf_file_name, 'wt')\n- conf_file.write(self.CONFIG_TEMPLATE.format(data_dir_path.name,\n- cache_dir_path.name,\n- state_dir_path.name,\n- target_file_path.name))\n+ conf_file.write(self.CONFIG_TEMPLATE.format(data_dir,\n+ cache_dir,\n+ state_dir,\n+ target_file))\n conf_file.close()\n \n message = 'DVC init. data dir {}, cache dir {}, state dir {}, '.format(\n- data_dir_path.name,\n- cache_dir_path.name,\n- state_dir_path.name\n+ data_dir,\n+ cache_dir,\n+ state_dir\n )\n if self.commit_if_needed(message) == 1:\n return 1\n \n- self.modify_gitignore(cache_dir_path.name)\n+ self.modify_gitignore(cache_dir)\n return self.commit_if_needed('DVC init. Commit .gitignore file')\n \n def create_empty_file(self):\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -60,8 +60,6 @@\n 'nose',\n 'oauth2client',\n 'packaging',\n- 'pathlib',\n- 'pathlib2',\n 'pbr',\n 'pexpect',\n 'pickleshare',\n", "issue": "[Easy] Get rid of pathlib library\nMost of the code uses standard `os` library as a filesystem utility but some modules (init.py) use `pathlib`. Let's get rid of `pathlib`. 
It will unify the code and decrease the package size.\n", "before_files": [{"content": "import platform\nfrom setuptools import setup, find_packages\nfrom distutils.errors import DistutilsPlatformError\nfrom dvc import VERSION\n\ninstall_requires = [\n 'altgraph',\n 'appdirs',\n 'backports.shutil-get-terminal-size',\n 'boto',\n 'cachetools',\n 'configparser',\n 'decorator',\n 'dill',\n 'enum34',\n 'fasteners',\n 'funcsigs',\n 'future',\n 'futures',\n 'gapic-google-cloud-datastore-v1',\n 'gapic-google-cloud-error-reporting-v1beta1',\n 'gapic-google-cloud-logging-v2',\n 'gapic-google-cloud-pubsub-v1',\n 'gapic-google-cloud-spanner-admin-database-v1',\n 'gapic-google-cloud-spanner-admin-instance-v1',\n 'gapic-google-cloud-spanner-v1',\n 'gapic-google-cloud-speech-v1beta1',\n 'gapic-google-cloud-vision-v1',\n 'google-auth',\n 'google-auth-httplib2',\n 'google-cloud',\n 'google-cloud-bigquery',\n 'google-cloud-bigtable',\n 'google-cloud-core',\n 'google-cloud-datastore',\n 'google-cloud-dns',\n 'google-cloud-error-reporting',\n 'google-cloud-language',\n 'google-cloud-logging',\n 'google-cloud-monitoring',\n 'google-cloud-pubsub',\n 'google-cloud-resource-manager',\n 'google-cloud-runtimeconfig',\n 'google-cloud-spanner',\n 'google-cloud-speech',\n 'google-cloud-storage',\n 'google-cloud-translate',\n 'google-cloud-vision',\n 'google-gax',\n 'googleapis-common-protos',\n 'grpc-google-iam-v1',\n 'grpcio',\n 'httplib2',\n 'ipython',\n 'ipython-genutils',\n 'macholib',\n 'mock',\n 'modulegraph',\n 'monotonic',\n 'nose',\n 'oauth2client',\n 'packaging',\n 'pathlib',\n 'pathlib2',\n 'pbr',\n 'pexpect',\n 'pickleshare',\n 'ply',\n 'prompt-toolkit',\n 'proto-google-cloud-datastore-v1',\n 'proto-google-cloud-error-reporting-v1beta1',\n 'proto-google-cloud-logging-v2',\n 'proto-google-cloud-pubsub-v1',\n 'proto-google-cloud-spanner-admin-database-v1',\n 'proto-google-cloud-spanner-admin-instance-v1',\n 'proto-google-cloud-spanner-v1',\n 'proto-google-cloud-speech-v1beta1',\n 'proto-google-cloud-vision-v1',\n 'protobuf',\n 'ptyprocess',\n 'pyasn1',\n 'pyasn1-modules',\n 'Pygments',\n 'PyInstaller',\n 'pyparsing',\n 'requests',\n 'rsa',\n 'scandir',\n 'simplegeneric',\n 'six',\n 'traitlets',\n 'wcwidth',\n 'colorama',\n 'google-compute-engine'\n]\n\nif platform.system() == 'Darwin':\n install_requires.append('appnope')\n install_requires.append('py2app')\n\nsetup(\n name='dvc',\n version=VERSION,\n description='Data Version Control makes your data science projects reproducible and shareable.',\n author='Dmitry Petrov',\n author_email='[email protected]',\n url='https://github.com/dataversioncontrol/dvc.git',\n license='Apache License 2.0',\n install_requires=install_requires,\n keywords='data science, data version control, machine learning',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 3',\n ],\n packages=find_packages(exclude=['bin', 'tests', 'functests']),\n include_package_data=True,\n download_url='http://dataversioncontrol.com',\n entry_points={\n 'console_scripts': ['dvc = dvc.main:main']\n },\n zip_safe=False\n)\n", "path": "setup.py"}, {"content": "import os\nfrom pathlib import Path\n\nfrom dvc.command.base import CmdBase\nfrom dvc.logger import Logger\nfrom dvc.config import Config\nfrom dvc.exceptions import DvcException\nfrom dvc.state_file import StateFile\nfrom dvc.system import System\n\n\nclass InitError(DvcException):\n def __init__(self, msg):\n DvcException.__init__(self, 'Init error: 
{}'.format(msg))\n\n\nclass CmdInit(CmdBase):\n CONFIG_TEMPLATE = '''[Global]\nDataDir = {}\nCacheDir = {}\nStateDir = {}\nTargetFile = {}\n\n# Supported clouds: AWS, GCP\nCloud = AWS\nStoragePath =\n\n# Log levels: Debug, Info, Warning and Error\nLogLevel = Info\n\n[AWS]\nStoragePath = dvc/tutorial\n\nCredentialPath =\n\nRegion = us-east-1\nZone = us-east-1a\n\nImage = ami-2d39803a\n\nInstanceType = t2.nano\n\nSpotPrice =\nSpotTimeout = 300\n\nStorage = my-100gb-drive-io\n\nKeyDir = ~/.ssh\nKeyName = dvc-key\n\nSecurityGroup = dvc-group\n\n[GCP]\nStoragePath =\nProjectName =\n'''\n\n EMPTY_FILE_NAME = 'empty'\n EMPTY_FILE_CHECKSUM = '0000000'\n\n def __init__(self, settings):\n super(CmdInit, self).__init__(settings)\n\n def get_not_existing_path(self, dir):\n path = Path(os.path.join(self.git.git_dir, dir))\n if path.exists():\n raise InitError('Path \"{}\" already exist'.format(path.name))\n return path\n\n def get_not_existing_conf_file_name(self):\n file_name = os.path.join(self.git.git_dir, Config.CONFIG)\n if os.path.exists(file_name):\n raise InitError('Configuration file \"{}\" already exist'.format(file_name))\n return file_name\n\n def run(self):\n if not self.no_git_actions and not self.git.is_ready_to_go():\n return 1\n\n if os.path.realpath(os.path.curdir) != self.settings.git.git_dir_abs:\n Logger.error('DVC error: initialization could be done only from git root directory {}'.format(\n self.settings.git.git_dir_abs\n ))\n return 1\n\n data_dir_path = self.get_not_existing_path(self.parsed_args.data_dir)\n cache_dir_path = self.get_not_existing_path(self.parsed_args.cache_dir)\n state_dir_path = self.get_not_existing_path(self.parsed_args.state_dir)\n target_file_path = self.get_not_existing_path(self.parsed_args.target_file)\n\n self.settings.config.set(self.parsed_args.data_dir,\n self.parsed_args.cache_dir,\n self.parsed_args.state_dir,\n target_file_path)\n\n conf_file_name = self.get_not_existing_conf_file_name()\n\n data_dir_path.mkdir()\n cache_dir_path.mkdir()\n state_dir_path.mkdir()\n target_file_path.touch()\n Logger.info('Directories {}/, {}/, {}/ and target file {} were created'.format(\n data_dir_path.name,\n cache_dir_path.name,\n state_dir_path.name,\n target_file_path.name))\n\n self.create_empty_file()\n\n conf_file = open(conf_file_name, 'wt')\n conf_file.write(self.CONFIG_TEMPLATE.format(data_dir_path.name,\n cache_dir_path.name,\n state_dir_path.name,\n target_file_path.name))\n conf_file.close()\n\n message = 'DVC init. data dir {}, cache dir {}, state dir {}, '.format(\n data_dir_path.name,\n cache_dir_path.name,\n state_dir_path.name\n )\n if self.commit_if_needed(message) == 1:\n return 1\n\n self.modify_gitignore(cache_dir_path.name)\n return self.commit_if_needed('DVC init. 
Commit .gitignore file')\n\n def create_empty_file(self):\n empty_data_path = os.path.join(self.parsed_args.data_dir, self.EMPTY_FILE_NAME)\n cache_file_suffix = self.EMPTY_FILE_NAME + '_' + self.EMPTY_FILE_CHECKSUM\n empty_cache_path = os.path.join(self.parsed_args.cache_dir, cache_file_suffix)\n empty_state_path = os.path.join(self.parsed_args.state_dir, self.EMPTY_FILE_NAME + '.state')\n\n open(empty_cache_path, 'w').close()\n System.symlink(os.path.join('..', empty_cache_path), empty_data_path)\n\n StateFile(StateFile.COMMAND_EMPTY_FILE,\n empty_state_path,\n self.settings,\n input_files=[],\n output_files=[],\n lock=False).save()\n pass\n\n def modify_gitignore(self, cache_dir_name):\n gitignore_file = os.path.join(self.git.git_dir, '.gitignore')\n if not os.path.exists(gitignore_file):\n open(gitignore_file, 'a').close()\n Logger.info('File .gitignore was created')\n with open(gitignore_file, 'a') as fd:\n fd.write('\\n{}'.format(cache_dir_name))\n fd.write('\\n{}'.format(os.path.basename(self.git.lock_file)))\n\n Logger.info('Directory {} was added to .gitignore file'.format(cache_dir_name))\n", "path": "dvc/command/init.py"}], "after_files": [{"content": "import platform\nfrom setuptools import setup, find_packages\nfrom distutils.errors import DistutilsPlatformError\nfrom dvc import VERSION\n\ninstall_requires = [\n 'altgraph',\n 'appdirs',\n 'backports.shutil-get-terminal-size',\n 'boto',\n 'cachetools',\n 'configparser',\n 'decorator',\n 'dill',\n 'enum34',\n 'fasteners',\n 'funcsigs',\n 'future',\n 'futures',\n 'gapic-google-cloud-datastore-v1',\n 'gapic-google-cloud-error-reporting-v1beta1',\n 'gapic-google-cloud-logging-v2',\n 'gapic-google-cloud-pubsub-v1',\n 'gapic-google-cloud-spanner-admin-database-v1',\n 'gapic-google-cloud-spanner-admin-instance-v1',\n 'gapic-google-cloud-spanner-v1',\n 'gapic-google-cloud-speech-v1beta1',\n 'gapic-google-cloud-vision-v1',\n 'google-auth',\n 'google-auth-httplib2',\n 'google-cloud',\n 'google-cloud-bigquery',\n 'google-cloud-bigtable',\n 'google-cloud-core',\n 'google-cloud-datastore',\n 'google-cloud-dns',\n 'google-cloud-error-reporting',\n 'google-cloud-language',\n 'google-cloud-logging',\n 'google-cloud-monitoring',\n 'google-cloud-pubsub',\n 'google-cloud-resource-manager',\n 'google-cloud-runtimeconfig',\n 'google-cloud-spanner',\n 'google-cloud-speech',\n 'google-cloud-storage',\n 'google-cloud-translate',\n 'google-cloud-vision',\n 'google-gax',\n 'googleapis-common-protos',\n 'grpc-google-iam-v1',\n 'grpcio',\n 'httplib2',\n 'ipython',\n 'ipython-genutils',\n 'macholib',\n 'mock',\n 'modulegraph',\n 'monotonic',\n 'nose',\n 'oauth2client',\n 'packaging',\n 'pbr',\n 'pexpect',\n 'pickleshare',\n 'ply',\n 'prompt-toolkit',\n 'proto-google-cloud-datastore-v1',\n 'proto-google-cloud-error-reporting-v1beta1',\n 'proto-google-cloud-logging-v2',\n 'proto-google-cloud-pubsub-v1',\n 'proto-google-cloud-spanner-admin-database-v1',\n 'proto-google-cloud-spanner-admin-instance-v1',\n 'proto-google-cloud-spanner-v1',\n 'proto-google-cloud-speech-v1beta1',\n 'proto-google-cloud-vision-v1',\n 'protobuf',\n 'ptyprocess',\n 'pyasn1',\n 'pyasn1-modules',\n 'Pygments',\n 'PyInstaller',\n 'pyparsing',\n 'requests',\n 'rsa',\n 'scandir',\n 'simplegeneric',\n 'six',\n 'traitlets',\n 'wcwidth',\n 'colorama',\n 'google-compute-engine'\n]\n\nif platform.system() == 'Darwin':\n install_requires.append('appnope')\n install_requires.append('py2app')\n\nsetup(\n name='dvc',\n version=VERSION,\n description='Data Version Control makes your data science 
projects reproducible and shareable.',\n author='Dmitry Petrov',\n author_email='[email protected]',\n url='https://github.com/dataversioncontrol/dvc.git',\n license='Apache License 2.0',\n install_requires=install_requires,\n keywords='data science, data version control, machine learning',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 3',\n ],\n packages=find_packages(exclude=['bin', 'tests', 'functests']),\n include_package_data=True,\n download_url='http://dataversioncontrol.com',\n entry_points={\n 'console_scripts': ['dvc = dvc.main:main']\n },\n zip_safe=False\n)\n", "path": "setup.py"}, {"content": "import os\n\nfrom dvc.command.base import CmdBase\nfrom dvc.logger import Logger\nfrom dvc.config import Config\nfrom dvc.exceptions import DvcException\nfrom dvc.state_file import StateFile\nfrom dvc.system import System\n\n\nclass InitError(DvcException):\n def __init__(self, msg):\n DvcException.__init__(self, 'Init error: {}'.format(msg))\n\n\nclass CmdInit(CmdBase):\n CONFIG_TEMPLATE = '''[Global]\nDataDir = {}\nCacheDir = {}\nStateDir = {}\nTargetFile = {}\n\n# Supported clouds: AWS, GCP\nCloud = AWS\nStoragePath =\n\n# Log levels: Debug, Info, Warning and Error\nLogLevel = Info\n\n[AWS]\nStoragePath = dvc/tutorial\n\nCredentialPath =\n\nRegion = us-east-1\nZone = us-east-1a\n\nImage = ami-2d39803a\n\nInstanceType = t2.nano\n\nSpotPrice =\nSpotTimeout = 300\n\nStorage = my-100gb-drive-io\n\nKeyDir = ~/.ssh\nKeyName = dvc-key\n\nSecurityGroup = dvc-group\n\n[GCP]\nStoragePath =\nProjectName =\n'''\n\n EMPTY_FILE_NAME = 'empty'\n EMPTY_FILE_CHECKSUM = '0000000'\n\n def __init__(self, settings):\n super(CmdInit, self).__init__(settings)\n\n def get_not_existing_path(self, dir):\n path = os.path.join(self.git.git_dir, dir)\n if os.path.exists(path):\n raise InitError('Path \"{}\" already exist'.format(path))\n return os.path.basename(path), path\n\n def get_not_existing_conf_file_name(self):\n file_name = os.path.join(self.git.git_dir, Config.CONFIG)\n if os.path.exists(file_name):\n raise InitError('Configuration file \"{}\" already exist'.format(file_name))\n return file_name\n\n def run(self):\n if not self.no_git_actions and not self.git.is_ready_to_go():\n return 1\n\n if os.path.realpath(os.path.curdir) != self.settings.git.git_dir_abs:\n Logger.error('DVC error: initialization could be done only from git root directory {}'.format(\n self.settings.git.git_dir_abs\n ))\n return 1\n\n data_dir, data_dir_path = self.get_not_existing_path(self.parsed_args.data_dir)\n cache_dir, cache_dir_path = self.get_not_existing_path(self.parsed_args.cache_dir)\n state_dir, state_dir_path = self.get_not_existing_path(self.parsed_args.state_dir)\n target_file, target_file_path = self.get_not_existing_path(self.parsed_args.target_file)\n\n self.settings.config.set(self.parsed_args.data_dir,\n self.parsed_args.cache_dir,\n self.parsed_args.state_dir,\n target_file_path)\n\n conf_file_name = self.get_not_existing_conf_file_name()\n\n os.mkdir(data_dir_path)\n os.mkdir(cache_dir_path)\n os.mkdir(state_dir_path)\n open(target_file_path, 'w').close()\n Logger.info('Directories {}/, {}/, {}/ and target file {} were created'.format(\n data_dir,\n cache_dir,\n state_dir,\n target_file))\n\n self.create_empty_file()\n\n conf_file = open(conf_file_name, 'wt')\n conf_file.write(self.CONFIG_TEMPLATE.format(data_dir,\n cache_dir,\n state_dir,\n target_file))\n conf_file.close()\n\n message = 'DVC init. 
data dir {}, cache dir {}, state dir {}, '.format(\n data_dir,\n cache_dir,\n state_dir\n )\n if self.commit_if_needed(message) == 1:\n return 1\n\n self.modify_gitignore(cache_dir)\n return self.commit_if_needed('DVC init. Commit .gitignore file')\n\n def create_empty_file(self):\n empty_data_path = os.path.join(self.parsed_args.data_dir, self.EMPTY_FILE_NAME)\n cache_file_suffix = self.EMPTY_FILE_NAME + '_' + self.EMPTY_FILE_CHECKSUM\n empty_cache_path = os.path.join(self.parsed_args.cache_dir, cache_file_suffix)\n empty_state_path = os.path.join(self.parsed_args.state_dir, self.EMPTY_FILE_NAME + '.state')\n\n open(empty_cache_path, 'w').close()\n System.symlink(os.path.join('..', empty_cache_path), empty_data_path)\n\n StateFile(StateFile.COMMAND_EMPTY_FILE,\n empty_state_path,\n self.settings,\n input_files=[],\n output_files=[],\n lock=False).save()\n pass\n\n def modify_gitignore(self, cache_dir_name):\n gitignore_file = os.path.join(self.git.git_dir, '.gitignore')\n if not os.path.exists(gitignore_file):\n open(gitignore_file, 'a').close()\n Logger.info('File .gitignore was created')\n with open(gitignore_file, 'a') as fd:\n fd.write('\\n{}'.format(cache_dir_name))\n fd.write('\\n{}'.format(os.path.basename(self.git.lock_file)))\n\n Logger.info('Directory {} was added to .gitignore file'.format(cache_dir_name))\n", "path": "dvc/command/init.py"}]} | 2,939 | 863 |
gh_patches_debug_50259 | rasdani/github-patches | git_diff | DataBiosphere__toil-1431 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add missing abstract method in LSF batchsystem
Add missing abstract method in LSF batchsystem
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/toil/batchSystems/lsf.py`
Content:
```
1 #Copyright (C) 2013 by Thomas Keane ([email protected])
2 #
3 #Permission is hereby granted, free of charge, to any person obtaining a copy
4 #of this software and associated documentation files (the "Software"), to deal
5 #in the Software without restriction, including without limitation the rights
6 #to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
7 #copies of the Software, and to permit persons to whom the Software is
8 #furnished to do so, subject to the following conditions:
9 #
10 #The above copyright notice and this permission notice shall be included in
11 #all copies or substantial portions of the Software.
12 #
13 #THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 #IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 #FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 #AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 #LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
18 #OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
19 #THE SOFTWARE.
20 from __future__ import absolute_import
21 import logging
22 import subprocess
23 import time
24 from threading import Thread
25 from datetime import date
26
27 # Python 3 compatibility imports
28 from six.moves.queue import Empty, Queue
29
30 from toil.batchSystems import MemoryString
31 from toil.batchSystems.abstractBatchSystem import BatchSystemSupport
32
33 logger = logging.getLogger( __name__ )
34
35
36
37 def prepareBsub(cpu, mem):
38 mem = '' if mem is None else '-R "select[type==X86_64 && mem > ' + str(int(mem/ 1000000)) + '] rusage[mem=' + str(int(mem/ 1000000)) + ']" -M' + str(int(mem/ 1000000)) + '000'
39 cpu = '' if cpu is None else '-n ' + str(int(cpu))
40 bsubline = ["bsub", mem, cpu,"-cwd", ".", "-o", "/dev/null", "-e", "/dev/null"]
41 return bsubline
42
43 def bsub(bsubline):
44 process = subprocess.Popen(" ".join(bsubline), shell=True, stdout = subprocess.PIPE, stderr = subprocess.STDOUT)
45 liney = process.stdout.readline()
46 logger.debug("BSUB: " + liney)
47 result = int(liney.strip().split()[1].strip('<>'))
48 logger.debug("Got the job id: %s" % (str(result)))
49 return result
50
51 def getjobexitcode(lsfJobID):
52 job, task = lsfJobID
53
54 #first try bjobs to find out job state
55 args = ["bjobs", "-l", str(job)]
56 logger.debug("Checking job exit code for job via bjobs: " + str(job))
57 process = subprocess.Popen(" ".join(args), shell=True, stdout = subprocess.PIPE, stderr = subprocess.STDOUT)
58 started = 0
59 for line in process.stdout:
60 if line.find("Done successfully") > -1:
61 logger.debug("bjobs detected job completed for job: " + str(job))
62 return 0
63 elif line.find("Completed <exit>") > -1:
64 logger.debug("bjobs detected job failed for job: " + str(job))
65 return 1
66 elif line.find("New job is waiting for scheduling") > -1:
67 logger.debug("bjobs detected job pending scheduling for job: " + str(job))
68 return None
69 elif line.find("PENDING REASONS") > -1:
70 logger.debug("bjobs detected job pending for job: " + str(job))
71 return None
72 elif line.find("Started on ") > -1:
73 started = 1
74
75 if started == 1:
76 logger.debug("bjobs detected job started but not completed: " + str(job))
77 return None
78
79 #if not found in bjobs, then try bacct (slower than bjobs)
80 logger.debug("bjobs failed to detect job - trying bacct: " + str(job))
81
82 args = ["bacct", "-l", str(job)]
83 logger.debug("Checking job exit code for job via bacct:" + str(job))
84 process = subprocess.Popen(" ".join(args), shell=True, stdout = subprocess.PIPE, stderr = subprocess.STDOUT)
85 for line in process.stdout:
86 if line.find("Completed <done>") > -1:
87 logger.debug("Detected job completed for job: " + str(job))
88 return 0
89 elif line.find("Completed <exit>") > -1:
90 logger.debug("Detected job failed for job: " + str(job))
91 return 1
92 logger.debug("Cant determine exit code for job or job still running: " + str(job))
93 return None
94
95 class Worker(Thread):
96 def __init__(self, newJobsQueue, updatedJobsQueue, boss):
97 Thread.__init__(self)
98 self.newJobsQueue = newJobsQueue
99 self.updatedJobsQueue = updatedJobsQueue
100 self.currentjobs = list()
101 self.runningjobs = set()
102 self.boss = boss
103
104 def run(self):
105 while True:
106 # Load new job ids:
107 while not self.newJobsQueue.empty():
108 self.currentjobs.append(self.newJobsQueue.get())
109
110 # Launch jobs as necessary:
111 while len(self.currentjobs) > 0:
112 jobID, bsubline = self.currentjobs.pop()
113 lsfJobID = bsub(bsubline)
114 self.boss.jobIDs[(lsfJobID, None)] = jobID
115 self.boss.lsfJobIDs[jobID] = (lsfJobID, None)
116 self.runningjobs.add((lsfJobID, None))
117
118 # Test known job list
119 for lsfJobID in list(self.runningjobs):
120 exit = getjobexitcode(lsfJobID)
121 if exit is not None:
122 self.updatedJobsQueue.put((lsfJobID, exit))
123 self.runningjobs.remove(lsfJobID)
124
125 time.sleep(10)
126
127 class LSFBatchSystem(BatchSystemSupport):
128 """
129 The interface for running jobs on lsf, runs all the jobs you give it as they come in,
130 but in parallel.
131 """
132 @classmethod
133 def supportsWorkerCleanup(cls):
134 return False
135
136 @classmethod
137 def supportsHotDeployment(cls):
138 return False
139
140 def __init__(self, config, maxCores, maxMemory, maxDisk):
141 super(LSFBatchSystem, self).__init__(config, maxCores, maxMemory, maxDisk)
142 self.lsfResultsFile = self._getResultsFileName(config.jobStore)
143 #Reset the job queue and results (initially, we do this again once we've killed the jobs)
144 self.lsfResultsFileHandle = open(self.lsfResultsFile, 'w')
145 self.lsfResultsFileHandle.close() #We lose any previous state in this file, and ensure the files existence
146 self.currentjobs = set()
147 self.obtainSystemConstants()
148 self.jobIDs = dict()
149 self.lsfJobIDs = dict()
150 self.nextJobID = 0
151
152 self.newJobsQueue = Queue()
153 self.updatedJobsQueue = Queue()
154 self.worker = Worker(self.newJobsQueue, self.updatedJobsQueue, self)
155 self.worker.setDaemon(True)
156 self.worker.start()
157
158 def __des__(self):
159 #Closes the file handle associated with the results file.
160 self.lsfResultsFileHandle.close() #Close the results file, cos were done.
161
162 def issueBatchJob(self, jobNode):
163 jobID = self.nextJobID
164 self.nextJobID += 1
165 self.currentjobs.add(jobID)
166 bsubline = prepareBsub(jobNode.cores, jobNode.memory) + [jobNode.command]
167 self.newJobsQueue.put((jobID, bsubline))
168 logger.debug("Issued the job command: %s with job id: %s " % (jobNode.command, str(jobID)))
169 return jobID
170
171 def getLsfID(self, jobID):
172 if not jobID in self.lsfJobIDs:
173 RuntimeError("Unknown jobID, could not be converted")
174
175 (job,task) = self.lsfJobIDs[jobID]
176 if task is None:
177 return str(job)
178 else:
179 return str(job) + "." + str(task)
180
181 def killBatchJobs(self, jobIDs):
182 """Kills the given job IDs.
183 """
184 for jobID in jobIDs:
185 logger.debug("DEL: " + str(self.getLsfID(jobID)))
186 self.currentjobs.remove(jobID)
187 process = subprocess.Popen(["bkill", self.getLsfID(jobID)])
188 del self.jobIDs[self.lsfJobIDs[jobID]]
189 del self.lsfJobIDs[jobID]
190
191 toKill = set(jobIDs)
192 while len(toKill) > 0:
193 for jobID in list(toKill):
194 if getjobexitcode(self.lsfJobIDs[jobID]) is not None:
195 toKill.remove(jobID)
196
197 if len(toKill) > 0:
198 logger.warn("Tried to kill some jobs, but something happened and they are still going, "
199 "so I'll try again")
200 time.sleep(5)
201
202 def getIssuedBatchJobIDs(self):
203 """A list of jobs (as jobIDs) currently issued (may be running, or maybe
204 just waiting).
205 """
206 return self.currentjobs
207
208 def getRunningBatchJobIDs(self):
209 """Gets a map of jobs (as jobIDs) currently running (not just waiting)
210 and a how long they have been running for (in seconds).
211 """
212 times = {}
213 currentjobs = set(self.lsfJobIDs[x] for x in self.getIssuedBatchJobIDs())
214 process = subprocess.Popen(["bjobs"], stdout = subprocess.PIPE)
215
216 for curline in process.stdout:
217 items = curline.strip().split()
218 if (len(items) > 9 and (items[0]) in currentjobs) and items[2] == 'RUN':
219 jobstart = "/".join(items[7:9]) + '/' + str(date.today().year)
220 jobstart = jobstart + ' ' + items[9]
221 jobstart = time.mktime(time.strptime(jobstart,"%b/%d/%Y %H:%M"))
222 jobstart = time.mktime(time.strptime(jobstart,"%m/%d/%Y %H:%M:%S"))
223 times[self.jobIDs[(items[0])]] = time.time() - jobstart
224 return times
225
226 def getUpdatedBatchJob(self, maxWait):
227 try:
228 sgeJobID, retcode = self.updatedJobsQueue.get(timeout=maxWait)
229 self.updatedJobsQueue.task_done()
230 jobID, retcode = (self.jobIDs[sgeJobID], retcode)
231 self.currentjobs -= {self.jobIDs[sgeJobID]}
232 except Empty:
233 pass
234 else:
235 return jobID, retcode, None
236
237 def getWaitDuration(self):
238 """We give parasol a second to catch its breath (in seconds)
239 """
240 #return 0.0
241 return 15
242
243 @classmethod
244 def getRescueBatchJobFrequency(cls):
245 """Parasol leaks jobs, but rescuing jobs involves calls to parasol list jobs and pstat2,
246 making it expensive. We allow this every 10 minutes..
247 """
248 return 1800
249
250 def obtainSystemConstants(self):
251 p = subprocess.Popen(["lshosts"], stdout = subprocess.PIPE, stderr = subprocess.STDOUT)
252
253 line = p.stdout.readline()
254 items = line.strip().split()
255 num_columns = len(items)
256 cpu_index = None
257 mem_index = None
258 for i in range(num_columns):
259 if items[i] == 'ncpus':
260 cpu_index = i
261 elif items[i] == 'maxmem':
262 mem_index = i
263
264 if cpu_index is None or mem_index is None:
265 RuntimeError("lshosts command does not return ncpus or maxmem columns")
266
267 p.stdout.readline()
268
269 self.maxCPU = 0
270 self.maxMEM = MemoryString("0")
271 for line in p.stdout:
272 items = line.strip().split()
273 if len(items) < num_columns:
274 RuntimeError("lshosts output has a varying number of columns")
275 if items[cpu_index] != '-' and items[cpu_index] > self.maxCPU:
276 self.maxCPU = items[cpu_index]
277 if items[mem_index] != '-' and MemoryString(items[mem_index]) > self.maxMEM:
278 self.maxMEM = MemoryString(items[mem_index])
279
280 if self.maxCPU is 0 or self.maxMEM is 0:
281 RuntimeError("lshosts returns null ncpus or maxmem info")
282 logger.debug("Got the maxCPU: %s" % (self.maxMEM))
283
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/toil/batchSystems/lsf.py b/src/toil/batchSystems/lsf.py
--- a/src/toil/batchSystems/lsf.py
+++ b/src/toil/batchSystems/lsf.py
@@ -137,6 +137,9 @@
def supportsHotDeployment(cls):
return False
+ def shutdown(self):
+ pass
+
def __init__(self, config, maxCores, maxMemory, maxDisk):
super(LSFBatchSystem, self).__init__(config, maxCores, maxMemory, maxDisk)
self.lsfResultsFile = self._getResultsFileName(config.jobStore)
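Read as code rather than as a diff, the change above simply gives `LSFBatchSystem` a no-op implementation of the abstract `shutdown` hook. A minimal sketch, using the class and import names from the file listed above:

```python
from toil.batchSystems.abstractBatchSystem import BatchSystemSupport

class LSFBatchSystem(BatchSystemSupport):
    # ... rest of the class as listed in lsf.py above ...

    def shutdown(self):
        # Nothing to tear down for LSF; the method only needs to exist so the
        # abstract interface of BatchSystemSupport is satisfied.
        pass
```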
| {"golden_diff": "diff --git a/src/toil/batchSystems/lsf.py b/src/toil/batchSystems/lsf.py\n--- a/src/toil/batchSystems/lsf.py\n+++ b/src/toil/batchSystems/lsf.py\n@@ -137,6 +137,9 @@\n def supportsHotDeployment(cls):\n return False\n \n+ def shutdown(self):\n+ pass\n+\n def __init__(self, config, maxCores, maxMemory, maxDisk):\n super(LSFBatchSystem, self).__init__(config, maxCores, maxMemory, maxDisk)\n self.lsfResultsFile = self._getResultsFileName(config.jobStore)\n", "issue": "Add missing abstract method in LSF batchsystem\n\nAdd missing abstract method in LSF batchsystem\n\n", "before_files": [{"content": "#Copyright (C) 2013 by Thomas Keane ([email protected])\n#\n#Permission is hereby granted, free of charge, to any person obtaining a copy\n#of this software and associated documentation files (the \"Software\"), to deal\n#in the Software without restriction, including without limitation the rights\n#to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n#copies of the Software, and to permit persons to whom the Software is\n#furnished to do so, subject to the following conditions:\n#\n#The above copyright notice and this permission notice shall be included in\n#all copies or substantial portions of the Software.\n#\n#THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n#IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n#FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n#AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n#LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n#OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n#THE SOFTWARE.\nfrom __future__ import absolute_import\nimport logging\nimport subprocess\nimport time\nfrom threading import Thread\nfrom datetime import date\n\n# Python 3 compatibility imports\nfrom six.moves.queue import Empty, Queue\n\nfrom toil.batchSystems import MemoryString\nfrom toil.batchSystems.abstractBatchSystem import BatchSystemSupport\n\nlogger = logging.getLogger( __name__ )\n\n\n\ndef prepareBsub(cpu, mem):\n mem = '' if mem is None else '-R \"select[type==X86_64 && mem > ' + str(int(mem/ 1000000)) + '] rusage[mem=' + str(int(mem/ 1000000)) + ']\" -M' + str(int(mem/ 1000000)) + '000'\n cpu = '' if cpu is None else '-n ' + str(int(cpu))\n bsubline = [\"bsub\", mem, cpu,\"-cwd\", \".\", \"-o\", \"/dev/null\", \"-e\", \"/dev/null\"]\n return bsubline\n\ndef bsub(bsubline):\n process = subprocess.Popen(\" \".join(bsubline), shell=True, stdout = subprocess.PIPE, stderr = subprocess.STDOUT)\n liney = process.stdout.readline()\n logger.debug(\"BSUB: \" + liney)\n result = int(liney.strip().split()[1].strip('<>'))\n logger.debug(\"Got the job id: %s\" % (str(result)))\n return result\n\ndef getjobexitcode(lsfJobID):\n job, task = lsfJobID\n\n #first try bjobs to find out job state\n args = [\"bjobs\", \"-l\", str(job)]\n logger.debug(\"Checking job exit code for job via bjobs: \" + str(job))\n process = subprocess.Popen(\" \".join(args), shell=True, stdout = subprocess.PIPE, stderr = subprocess.STDOUT)\n started = 0\n for line in process.stdout:\n if line.find(\"Done successfully\") > -1:\n logger.debug(\"bjobs detected job completed for job: \" + str(job))\n return 0\n elif line.find(\"Completed <exit>\") > -1:\n logger.debug(\"bjobs detected job failed for job: \" + str(job))\n return 1\n elif line.find(\"New job is waiting for scheduling\") > -1:\n logger.debug(\"bjobs detected job 
pending scheduling for job: \" + str(job))\n return None\n elif line.find(\"PENDING REASONS\") > -1:\n logger.debug(\"bjobs detected job pending for job: \" + str(job))\n return None\n elif line.find(\"Started on \") > -1:\n started = 1\n\n if started == 1:\n logger.debug(\"bjobs detected job started but not completed: \" + str(job))\n return None\n\n #if not found in bjobs, then try bacct (slower than bjobs)\n logger.debug(\"bjobs failed to detect job - trying bacct: \" + str(job))\n\n args = [\"bacct\", \"-l\", str(job)]\n logger.debug(\"Checking job exit code for job via bacct:\" + str(job))\n process = subprocess.Popen(\" \".join(args), shell=True, stdout = subprocess.PIPE, stderr = subprocess.STDOUT)\n for line in process.stdout:\n if line.find(\"Completed <done>\") > -1:\n logger.debug(\"Detected job completed for job: \" + str(job))\n return 0\n elif line.find(\"Completed <exit>\") > -1:\n logger.debug(\"Detected job failed for job: \" + str(job))\n return 1\n logger.debug(\"Cant determine exit code for job or job still running: \" + str(job))\n return None\n\nclass Worker(Thread):\n def __init__(self, newJobsQueue, updatedJobsQueue, boss):\n Thread.__init__(self)\n self.newJobsQueue = newJobsQueue\n self.updatedJobsQueue = updatedJobsQueue\n self.currentjobs = list()\n self.runningjobs = set()\n self.boss = boss\n\n def run(self):\n while True:\n # Load new job ids:\n while not self.newJobsQueue.empty():\n self.currentjobs.append(self.newJobsQueue.get())\n\n # Launch jobs as necessary:\n while len(self.currentjobs) > 0:\n jobID, bsubline = self.currentjobs.pop()\n lsfJobID = bsub(bsubline)\n self.boss.jobIDs[(lsfJobID, None)] = jobID\n self.boss.lsfJobIDs[jobID] = (lsfJobID, None)\n self.runningjobs.add((lsfJobID, None))\n\n # Test known job list\n for lsfJobID in list(self.runningjobs):\n exit = getjobexitcode(lsfJobID)\n if exit is not None:\n self.updatedJobsQueue.put((lsfJobID, exit))\n self.runningjobs.remove(lsfJobID)\n\n time.sleep(10)\n\nclass LSFBatchSystem(BatchSystemSupport):\n \"\"\"\n The interface for running jobs on lsf, runs all the jobs you give it as they come in,\n but in parallel.\n \"\"\"\n @classmethod\n def supportsWorkerCleanup(cls):\n return False\n\n @classmethod\n def supportsHotDeployment(cls):\n return False\n\n def __init__(self, config, maxCores, maxMemory, maxDisk):\n super(LSFBatchSystem, self).__init__(config, maxCores, maxMemory, maxDisk)\n self.lsfResultsFile = self._getResultsFileName(config.jobStore)\n #Reset the job queue and results (initially, we do this again once we've killed the jobs)\n self.lsfResultsFileHandle = open(self.lsfResultsFile, 'w')\n self.lsfResultsFileHandle.close() #We lose any previous state in this file, and ensure the files existence\n self.currentjobs = set()\n self.obtainSystemConstants()\n self.jobIDs = dict()\n self.lsfJobIDs = dict()\n self.nextJobID = 0\n\n self.newJobsQueue = Queue()\n self.updatedJobsQueue = Queue()\n self.worker = Worker(self.newJobsQueue, self.updatedJobsQueue, self)\n self.worker.setDaemon(True)\n self.worker.start()\n\n def __des__(self):\n #Closes the file handle associated with the results file.\n self.lsfResultsFileHandle.close() #Close the results file, cos were done.\n\n def issueBatchJob(self, jobNode):\n jobID = self.nextJobID\n self.nextJobID += 1\n self.currentjobs.add(jobID)\n bsubline = prepareBsub(jobNode.cores, jobNode.memory) + [jobNode.command]\n self.newJobsQueue.put((jobID, bsubline))\n logger.debug(\"Issued the job command: %s with job id: %s \" % (jobNode.command, 
str(jobID)))\n return jobID\n\n def getLsfID(self, jobID):\n if not jobID in self.lsfJobIDs:\n RuntimeError(\"Unknown jobID, could not be converted\")\n\n (job,task) = self.lsfJobIDs[jobID]\n if task is None:\n return str(job)\n else:\n return str(job) + \".\" + str(task)\n\n def killBatchJobs(self, jobIDs):\n \"\"\"Kills the given job IDs.\n \"\"\"\n for jobID in jobIDs:\n logger.debug(\"DEL: \" + str(self.getLsfID(jobID)))\n self.currentjobs.remove(jobID)\n process = subprocess.Popen([\"bkill\", self.getLsfID(jobID)])\n del self.jobIDs[self.lsfJobIDs[jobID]]\n del self.lsfJobIDs[jobID]\n\n toKill = set(jobIDs)\n while len(toKill) > 0:\n for jobID in list(toKill):\n if getjobexitcode(self.lsfJobIDs[jobID]) is not None:\n toKill.remove(jobID)\n\n if len(toKill) > 0:\n logger.warn(\"Tried to kill some jobs, but something happened and they are still going, \"\n \"so I'll try again\")\n time.sleep(5)\n\n def getIssuedBatchJobIDs(self):\n \"\"\"A list of jobs (as jobIDs) currently issued (may be running, or maybe \n just waiting).\n \"\"\"\n return self.currentjobs\n\n def getRunningBatchJobIDs(self):\n \"\"\"Gets a map of jobs (as jobIDs) currently running (not just waiting) \n and a how long they have been running for (in seconds).\n \"\"\"\n times = {}\n currentjobs = set(self.lsfJobIDs[x] for x in self.getIssuedBatchJobIDs())\n process = subprocess.Popen([\"bjobs\"], stdout = subprocess.PIPE)\n\n for curline in process.stdout:\n items = curline.strip().split()\n if (len(items) > 9 and (items[0]) in currentjobs) and items[2] == 'RUN':\n jobstart = \"/\".join(items[7:9]) + '/' + str(date.today().year)\n jobstart = jobstart + ' ' + items[9]\n jobstart = time.mktime(time.strptime(jobstart,\"%b/%d/%Y %H:%M\"))\n jobstart = time.mktime(time.strptime(jobstart,\"%m/%d/%Y %H:%M:%S\"))\n times[self.jobIDs[(items[0])]] = time.time() - jobstart\n return times\n\n def getUpdatedBatchJob(self, maxWait):\n try:\n sgeJobID, retcode = self.updatedJobsQueue.get(timeout=maxWait)\n self.updatedJobsQueue.task_done()\n jobID, retcode = (self.jobIDs[sgeJobID], retcode)\n self.currentjobs -= {self.jobIDs[sgeJobID]}\n except Empty:\n pass\n else:\n return jobID, retcode, None\n\n def getWaitDuration(self):\n \"\"\"We give parasol a second to catch its breath (in seconds)\n \"\"\"\n #return 0.0\n return 15\n\n @classmethod\n def getRescueBatchJobFrequency(cls):\n \"\"\"Parasol leaks jobs, but rescuing jobs involves calls to parasol list jobs and pstat2,\n making it expensive. 
We allow this every 10 minutes..\n \"\"\"\n return 1800\n\n def obtainSystemConstants(self):\n p = subprocess.Popen([\"lshosts\"], stdout = subprocess.PIPE, stderr = subprocess.STDOUT)\n\n line = p.stdout.readline()\n items = line.strip().split()\n num_columns = len(items)\n cpu_index = None\n mem_index = None\n for i in range(num_columns):\n if items[i] == 'ncpus':\n cpu_index = i\n elif items[i] == 'maxmem':\n mem_index = i\n\n if cpu_index is None or mem_index is None:\n RuntimeError(\"lshosts command does not return ncpus or maxmem columns\")\n\n p.stdout.readline()\n\n self.maxCPU = 0\n self.maxMEM = MemoryString(\"0\")\n for line in p.stdout:\n items = line.strip().split()\n if len(items) < num_columns:\n RuntimeError(\"lshosts output has a varying number of columns\")\n if items[cpu_index] != '-' and items[cpu_index] > self.maxCPU:\n self.maxCPU = items[cpu_index]\n if items[mem_index] != '-' and MemoryString(items[mem_index]) > self.maxMEM:\n self.maxMEM = MemoryString(items[mem_index])\n\n if self.maxCPU is 0 or self.maxMEM is 0:\n RuntimeError(\"lshosts returns null ncpus or maxmem info\")\n logger.debug(\"Got the maxCPU: %s\" % (self.maxMEM))\n", "path": "src/toil/batchSystems/lsf.py"}], "after_files": [{"content": "#Copyright (C) 2013 by Thomas Keane ([email protected])\n#\n#Permission is hereby granted, free of charge, to any person obtaining a copy\n#of this software and associated documentation files (the \"Software\"), to deal\n#in the Software without restriction, including without limitation the rights\n#to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n#copies of the Software, and to permit persons to whom the Software is\n#furnished to do so, subject to the following conditions:\n#\n#The above copyright notice and this permission notice shall be included in\n#all copies or substantial portions of the Software.\n#\n#THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n#IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n#FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n#AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n#LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n#OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n#THE SOFTWARE.\nfrom __future__ import absolute_import\nimport logging\nimport subprocess\nimport time\nfrom threading import Thread\nfrom datetime import date\n\n# Python 3 compatibility imports\nfrom six.moves.queue import Empty, Queue\n\nfrom toil.batchSystems import MemoryString\nfrom toil.batchSystems.abstractBatchSystem import BatchSystemSupport\n\nlogger = logging.getLogger( __name__ )\n\n\n\ndef prepareBsub(cpu, mem):\n mem = '' if mem is None else '-R \"select[type==X86_64 && mem > ' + str(int(mem/ 1000000)) + '] rusage[mem=' + str(int(mem/ 1000000)) + ']\" -M' + str(int(mem/ 1000000)) + '000'\n cpu = '' if cpu is None else '-n ' + str(int(cpu))\n bsubline = [\"bsub\", mem, cpu,\"-cwd\", \".\", \"-o\", \"/dev/null\", \"-e\", \"/dev/null\"]\n return bsubline\n\ndef bsub(bsubline):\n process = subprocess.Popen(\" \".join(bsubline), shell=True, stdout = subprocess.PIPE, stderr = subprocess.STDOUT)\n liney = process.stdout.readline()\n logger.debug(\"BSUB: \" + liney)\n result = int(liney.strip().split()[1].strip('<>'))\n logger.debug(\"Got the job id: %s\" % (str(result)))\n return result\n\ndef getjobexitcode(lsfJobID):\n job, task = lsfJobID\n\n #first try bjobs to find out job state\n args = [\"bjobs\", \"-l\", str(job)]\n logger.debug(\"Checking job exit code for job via bjobs: \" + str(job))\n process = subprocess.Popen(\" \".join(args), shell=True, stdout = subprocess.PIPE, stderr = subprocess.STDOUT)\n started = 0\n for line in process.stdout:\n if line.find(\"Done successfully\") > -1:\n logger.debug(\"bjobs detected job completed for job: \" + str(job))\n return 0\n elif line.find(\"Completed <exit>\") > -1:\n logger.debug(\"bjobs detected job failed for job: \" + str(job))\n return 1\n elif line.find(\"New job is waiting for scheduling\") > -1:\n logger.debug(\"bjobs detected job pending scheduling for job: \" + str(job))\n return None\n elif line.find(\"PENDING REASONS\") > -1:\n logger.debug(\"bjobs detected job pending for job: \" + str(job))\n return None\n elif line.find(\"Started on \") > -1:\n started = 1\n\n if started == 1:\n logger.debug(\"bjobs detected job started but not completed: \" + str(job))\n return None\n\n #if not found in bjobs, then try bacct (slower than bjobs)\n logger.debug(\"bjobs failed to detect job - trying bacct: \" + str(job))\n\n args = [\"bacct\", \"-l\", str(job)]\n logger.debug(\"Checking job exit code for job via bacct:\" + str(job))\n process = subprocess.Popen(\" \".join(args), shell=True, stdout = subprocess.PIPE, stderr = subprocess.STDOUT)\n for line in process.stdout:\n if line.find(\"Completed <done>\") > -1:\n logger.debug(\"Detected job completed for job: \" + str(job))\n return 0\n elif line.find(\"Completed <exit>\") > -1:\n logger.debug(\"Detected job failed for job: \" + str(job))\n return 1\n logger.debug(\"Cant determine exit code for job or job still running: \" + str(job))\n return None\n\nclass Worker(Thread):\n def __init__(self, newJobsQueue, updatedJobsQueue, boss):\n Thread.__init__(self)\n self.newJobsQueue = newJobsQueue\n self.updatedJobsQueue = updatedJobsQueue\n self.currentjobs = list()\n self.runningjobs = set()\n self.boss = boss\n\n def run(self):\n while True:\n # Load new job ids:\n while not self.newJobsQueue.empty():\n 
self.currentjobs.append(self.newJobsQueue.get())\n\n # Launch jobs as necessary:\n while len(self.currentjobs) > 0:\n jobID, bsubline = self.currentjobs.pop()\n lsfJobID = bsub(bsubline)\n self.boss.jobIDs[(lsfJobID, None)] = jobID\n self.boss.lsfJobIDs[jobID] = (lsfJobID, None)\n self.runningjobs.add((lsfJobID, None))\n\n # Test known job list\n for lsfJobID in list(self.runningjobs):\n exit = getjobexitcode(lsfJobID)\n if exit is not None:\n self.updatedJobsQueue.put((lsfJobID, exit))\n self.runningjobs.remove(lsfJobID)\n\n time.sleep(10)\n\nclass LSFBatchSystem(BatchSystemSupport):\n \"\"\"\n The interface for running jobs on lsf, runs all the jobs you give it as they come in,\n but in parallel.\n \"\"\"\n @classmethod\n def supportsWorkerCleanup(cls):\n return False\n\n @classmethod\n def supportsHotDeployment(cls):\n return False\n\n def shutdown(self):\n pass\n\n def __init__(self, config, maxCores, maxMemory, maxDisk):\n super(LSFBatchSystem, self).__init__(config, maxCores, maxMemory, maxDisk)\n self.lsfResultsFile = self._getResultsFileName(config.jobStore)\n #Reset the job queue and results (initially, we do this again once we've killed the jobs)\n self.lsfResultsFileHandle = open(self.lsfResultsFile, 'w')\n self.lsfResultsFileHandle.close() #We lose any previous state in this file, and ensure the files existence\n self.currentjobs = set()\n self.obtainSystemConstants()\n self.jobIDs = dict()\n self.lsfJobIDs = dict()\n self.nextJobID = 0\n\n self.newJobsQueue = Queue()\n self.updatedJobsQueue = Queue()\n self.worker = Worker(self.newJobsQueue, self.updatedJobsQueue, self)\n self.worker.setDaemon(True)\n self.worker.start()\n\n def __des__(self):\n #Closes the file handle associated with the results file.\n self.lsfResultsFileHandle.close() #Close the results file, cos were done.\n\n def issueBatchJob(self, jobNode):\n jobID = self.nextJobID\n self.nextJobID += 1\n self.currentjobs.add(jobID)\n bsubline = prepareBsub(jobNode.cores, jobNode.memory) + [jobNode.command]\n self.newJobsQueue.put((jobID, bsubline))\n logger.debug(\"Issued the job command: %s with job id: %s \" % (jobNode.command, str(jobID)))\n return jobID\n\n def getLsfID(self, jobID):\n if not jobID in self.lsfJobIDs:\n RuntimeError(\"Unknown jobID, could not be converted\")\n\n (job,task) = self.lsfJobIDs[jobID]\n if task is None:\n return str(job)\n else:\n return str(job) + \".\" + str(task)\n\n def killBatchJobs(self, jobIDs):\n \"\"\"Kills the given job IDs.\n \"\"\"\n for jobID in jobIDs:\n logger.debug(\"DEL: \" + str(self.getLsfID(jobID)))\n self.currentjobs.remove(jobID)\n process = subprocess.Popen([\"bkill\", self.getLsfID(jobID)])\n del self.jobIDs[self.lsfJobIDs[jobID]]\n del self.lsfJobIDs[jobID]\n\n toKill = set(jobIDs)\n while len(toKill) > 0:\n for jobID in list(toKill):\n if getjobexitcode(self.lsfJobIDs[jobID]) is not None:\n toKill.remove(jobID)\n\n if len(toKill) > 0:\n logger.warn(\"Tried to kill some jobs, but something happened and they are still going, \"\n \"so I'll try again\")\n time.sleep(5)\n\n def getIssuedBatchJobIDs(self):\n \"\"\"A list of jobs (as jobIDs) currently issued (may be running, or maybe \n just waiting).\n \"\"\"\n return self.currentjobs\n\n def getRunningBatchJobIDs(self):\n \"\"\"Gets a map of jobs (as jobIDs) currently running (not just waiting) \n and a how long they have been running for (in seconds).\n \"\"\"\n times = {}\n currentjobs = set(self.lsfJobIDs[x] for x in self.getIssuedBatchJobIDs())\n process = subprocess.Popen([\"bjobs\"], stdout = 
subprocess.PIPE)\n\n for curline in process.stdout:\n items = curline.strip().split()\n if (len(items) > 9 and (items[0]) in currentjobs) and items[2] == 'RUN':\n jobstart = \"/\".join(items[7:9]) + '/' + str(date.today().year)\n jobstart = jobstart + ' ' + items[9]\n jobstart = time.mktime(time.strptime(jobstart,\"%b/%d/%Y %H:%M\"))\n jobstart = time.mktime(time.strptime(jobstart,\"%m/%d/%Y %H:%M:%S\"))\n times[self.jobIDs[(items[0])]] = time.time() - jobstart\n return times\n\n def getUpdatedBatchJob(self, maxWait):\n try:\n sgeJobID, retcode = self.updatedJobsQueue.get(timeout=maxWait)\n self.updatedJobsQueue.task_done()\n jobID, retcode = (self.jobIDs[sgeJobID], retcode)\n self.currentjobs -= {self.jobIDs[sgeJobID]}\n except Empty:\n pass\n else:\n return jobID, retcode, None\n\n def getWaitDuration(self):\n \"\"\"We give parasol a second to catch its breath (in seconds)\n \"\"\"\n #return 0.0\n return 15\n\n @classmethod\n def getRescueBatchJobFrequency(cls):\n \"\"\"Parasol leaks jobs, but rescuing jobs involves calls to parasol list jobs and pstat2,\n making it expensive. We allow this every 10 minutes..\n \"\"\"\n return 1800\n\n def obtainSystemConstants(self):\n p = subprocess.Popen([\"lshosts\"], stdout = subprocess.PIPE, stderr = subprocess.STDOUT)\n\n line = p.stdout.readline()\n items = line.strip().split()\n num_columns = len(items)\n cpu_index = None\n mem_index = None\n for i in range(num_columns):\n if items[i] == 'ncpus':\n cpu_index = i\n elif items[i] == 'maxmem':\n mem_index = i\n\n if cpu_index is None or mem_index is None:\n RuntimeError(\"lshosts command does not return ncpus or maxmem columns\")\n\n p.stdout.readline()\n\n self.maxCPU = 0\n self.maxMEM = MemoryString(\"0\")\n for line in p.stdout:\n items = line.strip().split()\n if len(items) < num_columns:\n RuntimeError(\"lshosts output has a varying number of columns\")\n if items[cpu_index] != '-' and items[cpu_index] > self.maxCPU:\n self.maxCPU = items[cpu_index]\n if items[mem_index] != '-' and MemoryString(items[mem_index]) > self.maxMEM:\n self.maxMEM = MemoryString(items[mem_index])\n\n if self.maxCPU is 0 or self.maxMEM is 0:\n RuntimeError(\"lshosts returns null ncpus or maxmem info\")\n logger.debug(\"Got the maxCPU: %s\" % (self.maxMEM))\n", "path": "src/toil/batchSystems/lsf.py"}]} | 3,860 | 147 |
gh_patches_debug_30027 | rasdani/github-patches | git_diff | fidals__shopelectro-992 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Resurrect script to process photos to the DB
We currently have a script that processes photos into the DB:
`shopelectro/management/commands/images.py`
Since we have no tests for it, the code may become stale. Resurrect it and move the products to prod.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `shopelectro/logic/header.py`
Content:
```
1 import typing
2 from functools import lru_cache
3
4 from django.conf import settings
5 from django.db.models import Q
6
7 from pages import models as pages_models
8 from shopelectro import models
9
10
11 class Menu:
12 DICT_TYPE = typing.Dict[models.CategoryPage, typing.List[models.CategoryPage]]
13
14 @staticmethod
15 def roots() -> pages_models.PageQuerySet:
16 """
17 QuerySet with header menu items.
18
19 Contains root categories.
20 Result can be tuned HEADER_LINKS settings option.
21 """
22 return (
23 pages_models.Page.objects.active()
24 .filter(
25 Q(slug__in=settings.HEADER_LINKS['add'])
26 | (
27 # @todo #974:30m Optimize the header menu query.
28 # Fetch catalog page for the header menu at the same query.
29 # root category pages.
30 Q(parent=pages_models.CustomPage.objects.filter(slug='catalog'))
31 & Q(type='model')
32 & Q(related_model_name=models.Category._meta.db_table)
33 & ~Q(slug__in=settings.HEADER_LINKS['exclude'])
34 )
35 )
36 .order_by('position')
37 )
38
39 @lru_cache(maxsize=1)
40 def as_dict(self) -> DICT_TYPE:
41 return {
42 root: list(
43 root.get_children()
44 .filter(type='model')
45 .filter(related_model_name=models.Category._meta.db_table)
46 )
47 for root in self.roots().iterator()
48 }
49
```
Path: `shopelectro/management/commands/images.py`
Content:
```
1 """Create Image objects from folder with image files."""
2 import os
3
4 from django.conf import settings
5 from django.core.files.images import ImageFile
6 from django.core.management.base import BaseCommand
7
8 from images.models import Image
9 from pages.models import Page
10 from shopelectro.models import Product
11
12
13 IMAGES_ROOT_FOLDER_NAME = os.path.join(settings.MEDIA_ROOT, 'products')
14
15
16 def create_image_models():
17
18 def iter_dirs(path: str):
19 return (dir_ for dir_ in os.scandir(path) if dir_.is_dir())
20
21 def iter_files(path: str):
22 return (file_ for file_ in os.scandir(path) if file_.is_file())
23
24 def get_page(product_id: int) -> Page:
25 product_ = Product.objects.filter(id=product_id).first()
26 return product_.page if product_ else None
27
28 def create_image_model(file_, product_id: int, slug):
29 file_short_name, _ = os.path.splitext(file_.name)
30
31 # skip images, resized to small size
32 if file_short_name == 'small':
33 return
34
35 # create Image model object based on current image
36 page = get_page(product_id=product_id)
37 if not page:
38 return
39 # don't use bulk create, because save() isn't hooked with it
40 # http://bit.ly/django_bulk_create
41 Image.objects.create(
42 model=page,
43 # autoincrement file names: '1.jpg', '2.jpg' and so on
44 slug=slug,
45 image=ImageFile(open(file_.path, mode='rb')),
46 is_main=(file_short_name == 'main')
47 )
48
49 if not os.path.isdir(IMAGES_ROOT_FOLDER_NAME) or len(Image.objects.all()):
50 return
51
52 # run over every image in every folder
53 for dir_ in iter_dirs(IMAGES_ROOT_FOLDER_NAME):
54 for slug_index, file in enumerate(iter_files(dir_.path)):
55 create_image_model(
56 file_=file,
57 product_id=int(dir_.name),
58 slug=str(slug_index)
59 )
60 # old folder stays in fs as backup of old photos
61
62
63 class Command(BaseCommand):
64
65 def handle(self, *args, **kwargs):
66 create_image_models()
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/shopelectro/logic/header.py b/shopelectro/logic/header.py
--- a/shopelectro/logic/header.py
+++ b/shopelectro/logic/header.py
@@ -14,7 +14,7 @@
@staticmethod
def roots() -> pages_models.PageQuerySet:
"""
- QuerySet with header menu items.
+ Queryset with header menu items.
Contains root categories.
Result can be tuned HEADER_LINKS settings option.
diff --git a/shopelectro/management/commands/images.py b/shopelectro/management/commands/images.py
--- a/shopelectro/management/commands/images.py
+++ b/shopelectro/management/commands/images.py
@@ -9,7 +9,6 @@
from pages.models import Page
from shopelectro.models import Product
-
IMAGES_ROOT_FOLDER_NAME = os.path.join(settings.MEDIA_ROOT, 'products')
@@ -22,7 +21,7 @@
return (file_ for file_ in os.scandir(path) if file_.is_file())
def get_page(product_id: int) -> Page:
- product_ = Product.objects.filter(id=product_id).first()
+ product_ = Product.objects.filter(vendor_code=product_id).first()
return product_.page if product_ else None
def create_image_model(file_, product_id: int, slug):
@@ -42,6 +41,7 @@
model=page,
# autoincrement file names: '1.jpg', '2.jpg' and so on
slug=slug,
+ # copies file with to the new path on create
image=ImageFile(open(file_.path, mode='rb')),
is_main=(file_short_name == 'main')
)
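Setting aside the docstring tweak in `header.py`, the functional change above is the lookup in `get_page`: the product is now resolved by `vendor_code` rather than by primary key, presumably because the per-product image folders are named by vendor code. In isolation, with the names from `images.py` above:

```python
def get_page(product_id: int) -> Page:
    # Assumption: per-product image folders are named after the vendor code,
    # hence the lookup by vendor_code instead of the primary key.
    product_ = Product.objects.filter(vendor_code=product_id).first()
    return product_.page if product_ else None
```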
| {"golden_diff": "diff --git a/shopelectro/logic/header.py b/shopelectro/logic/header.py\n--- a/shopelectro/logic/header.py\n+++ b/shopelectro/logic/header.py\n@@ -14,7 +14,7 @@\n @staticmethod\n def roots() -> pages_models.PageQuerySet:\n \"\"\"\n- QuerySet with header menu items.\n+ Queryset with header menu items.\n \n Contains root categories.\n Result can be tuned HEADER_LINKS settings option.\ndiff --git a/shopelectro/management/commands/images.py b/shopelectro/management/commands/images.py\n--- a/shopelectro/management/commands/images.py\n+++ b/shopelectro/management/commands/images.py\n@@ -9,7 +9,6 @@\n from pages.models import Page\n from shopelectro.models import Product\n \n-\n IMAGES_ROOT_FOLDER_NAME = os.path.join(settings.MEDIA_ROOT, 'products')\n \n \n@@ -22,7 +21,7 @@\n return (file_ for file_ in os.scandir(path) if file_.is_file())\n \n def get_page(product_id: int) -> Page:\n- product_ = Product.objects.filter(id=product_id).first()\n+ product_ = Product.objects.filter(vendor_code=product_id).first()\n return product_.page if product_ else None\n \n def create_image_model(file_, product_id: int, slug):\n@@ -42,6 +41,7 @@\n model=page,\n # autoincrement file names: '1.jpg', '2.jpg' and so on\n slug=slug,\n+ # copies file with to the new path on create\n image=ImageFile(open(file_.path, mode='rb')),\n is_main=(file_short_name == 'main')\n )\n", "issue": "Resurrect script to process photos to the DB\nNow we are having script processing photos to the DB\r\n`shopelectro/management/commands/images.py`\r\n\r\nSince we have no tests for it, code may become stale. Resurrect it and move products to the prod\n", "before_files": [{"content": "import typing\nfrom functools import lru_cache\n\nfrom django.conf import settings\nfrom django.db.models import Q\n\nfrom pages import models as pages_models\nfrom shopelectro import models\n\n\nclass Menu:\n DICT_TYPE = typing.Dict[models.CategoryPage, typing.List[models.CategoryPage]]\n\n @staticmethod\n def roots() -> pages_models.PageQuerySet:\n \"\"\"\n QuerySet with header menu items.\n\n Contains root categories.\n Result can be tuned HEADER_LINKS settings option.\n \"\"\"\n return (\n pages_models.Page.objects.active()\n .filter(\n Q(slug__in=settings.HEADER_LINKS['add'])\n | (\n # @todo #974:30m Optimize the header menu query.\n # Fetch catalog page for the header menu at the same query.\n # root category pages.\n Q(parent=pages_models.CustomPage.objects.filter(slug='catalog'))\n & Q(type='model')\n & Q(related_model_name=models.Category._meta.db_table)\n & ~Q(slug__in=settings.HEADER_LINKS['exclude'])\n )\n )\n .order_by('position')\n )\n\n @lru_cache(maxsize=1)\n def as_dict(self) -> DICT_TYPE:\n return {\n root: list(\n root.get_children()\n .filter(type='model')\n .filter(related_model_name=models.Category._meta.db_table)\n )\n for root in self.roots().iterator()\n }\n", "path": "shopelectro/logic/header.py"}, {"content": "\"\"\"Create Image objects from folder with image files.\"\"\"\nimport os\n\nfrom django.conf import settings\nfrom django.core.files.images import ImageFile\nfrom django.core.management.base import BaseCommand\n\nfrom images.models import Image\nfrom pages.models import Page\nfrom shopelectro.models import Product\n\n\nIMAGES_ROOT_FOLDER_NAME = os.path.join(settings.MEDIA_ROOT, 'products')\n\n\ndef create_image_models():\n\n def iter_dirs(path: str):\n return (dir_ for dir_ in os.scandir(path) if dir_.is_dir())\n\n def iter_files(path: str):\n return (file_ for file_ in os.scandir(path) if file_.is_file())\n\n 
def get_page(product_id: int) -> Page:\n product_ = Product.objects.filter(id=product_id).first()\n return product_.page if product_ else None\n\n def create_image_model(file_, product_id: int, slug):\n file_short_name, _ = os.path.splitext(file_.name)\n\n # skip images, resized to small size\n if file_short_name == 'small':\n return\n\n # create Image model object based on current image\n page = get_page(product_id=product_id)\n if not page:\n return\n # don't use bulk create, because save() isn't hooked with it\n # http://bit.ly/django_bulk_create\n Image.objects.create(\n model=page,\n # autoincrement file names: '1.jpg', '2.jpg' and so on\n slug=slug,\n image=ImageFile(open(file_.path, mode='rb')),\n is_main=(file_short_name == 'main')\n )\n\n if not os.path.isdir(IMAGES_ROOT_FOLDER_NAME) or len(Image.objects.all()):\n return\n\n # run over every image in every folder\n for dir_ in iter_dirs(IMAGES_ROOT_FOLDER_NAME):\n for slug_index, file in enumerate(iter_files(dir_.path)):\n create_image_model(\n file_=file,\n product_id=int(dir_.name),\n slug=str(slug_index)\n )\n # old folder stays in fs as backup of old photos\n\n\nclass Command(BaseCommand):\n\n def handle(self, *args, **kwargs):\n create_image_models()\n", "path": "shopelectro/management/commands/images.py"}], "after_files": [{"content": "import typing\nfrom functools import lru_cache\n\nfrom django.conf import settings\nfrom django.db.models import Q\n\nfrom pages import models as pages_models\nfrom shopelectro import models\n\n\nclass Menu:\n DICT_TYPE = typing.Dict[models.CategoryPage, typing.List[models.CategoryPage]]\n\n @staticmethod\n def roots() -> pages_models.PageQuerySet:\n \"\"\"\n Queryset with header menu items.\n\n Contains root categories.\n Result can be tuned HEADER_LINKS settings option.\n \"\"\"\n return (\n pages_models.Page.objects.active()\n .filter(\n Q(slug__in=settings.HEADER_LINKS['add'])\n | (\n # @todo #974:30m Optimize the header menu query.\n # Fetch catalog page for the header menu at the same query.\n # root category pages.\n Q(parent=pages_models.CustomPage.objects.filter(slug='catalog'))\n & Q(type='model')\n & Q(related_model_name=models.Category._meta.db_table)\n & ~Q(slug__in=settings.HEADER_LINKS['exclude'])\n )\n )\n .order_by('position')\n )\n\n @lru_cache(maxsize=1)\n def as_dict(self) -> DICT_TYPE:\n return {\n root: list(\n root.get_children()\n .filter(type='model')\n .filter(related_model_name=models.Category._meta.db_table)\n )\n for root in self.roots().iterator()\n }\n", "path": "shopelectro/logic/header.py"}, {"content": "\"\"\"Create Image objects from folder with image files.\"\"\"\nimport os\n\nfrom django.conf import settings\nfrom django.core.files.images import ImageFile\nfrom django.core.management.base import BaseCommand\n\nfrom images.models import Image\nfrom pages.models import Page\nfrom shopelectro.models import Product\n\nIMAGES_ROOT_FOLDER_NAME = os.path.join(settings.MEDIA_ROOT, 'products')\n\n\ndef create_image_models():\n\n def iter_dirs(path: str):\n return (dir_ for dir_ in os.scandir(path) if dir_.is_dir())\n\n def iter_files(path: str):\n return (file_ for file_ in os.scandir(path) if file_.is_file())\n\n def get_page(product_id: int) -> Page:\n product_ = Product.objects.filter(vendor_code=product_id).first()\n return product_.page if product_ else None\n\n def create_image_model(file_, product_id: int, slug):\n file_short_name, _ = os.path.splitext(file_.name)\n\n # skip images, resized to small size\n if file_short_name == 'small':\n return\n\n # create 
Image model object based on current image\n page = get_page(product_id=product_id)\n if not page:\n return\n # don't use bulk create, because save() isn't hooked with it\n # http://bit.ly/django_bulk_create\n Image.objects.create(\n model=page,\n # autoincrement file names: '1.jpg', '2.jpg' and so on\n slug=slug,\n # copies file with to the new path on create\n image=ImageFile(open(file_.path, mode='rb')),\n is_main=(file_short_name == 'main')\n )\n\n if not os.path.isdir(IMAGES_ROOT_FOLDER_NAME) or len(Image.objects.all()):\n return\n\n # run over every image in every folder\n for dir_ in iter_dirs(IMAGES_ROOT_FOLDER_NAME):\n for slug_index, file in enumerate(iter_files(dir_.path)):\n create_image_model(\n file_=file,\n product_id=int(dir_.name),\n slug=str(slug_index)\n )\n # old folder stays in fs as backup of old photos\n\n\nclass Command(BaseCommand):\n\n def handle(self, *args, **kwargs):\n create_image_models()\n", "path": "shopelectro/management/commands/images.py"}]} | 1,346 | 386 |
gh_patches_debug_37760 | rasdani/github-patches | git_diff | scalableminds__webknossos-libs-126 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
float/double instead of float32/float64
webKnossos expects the data types to be called `float` and `double` instead of `float32` or `float64` in the `datasource-properties.json`. The type names need to be converted in both directions in `wkcuber.metadata`.
--- END ISSUE ---
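A minimal sketch of the two-way name mapping the issue describes; the helper names and their placement are hypothetical and not taken from the repository:

```python
# Hypothetical helpers; wkcuber's actual fix may look different.
_DTYPE_TO_ELEMENT_CLASS = {"float32": "float", "float64": "double"}
_ELEMENT_CLASS_TO_DTYPE = {v: k for k, v in _DTYPE_TO_ELEMENT_CLASS.items()}

def to_element_class(dtype_name: str) -> str:
    # numpy-style name -> name written to datasource-properties.json
    return _DTYPE_TO_ELEMENT_CLASS.get(dtype_name, dtype_name)

def to_dtype_name(element_class: str) -> str:
    # name read from datasource-properties.json -> numpy-style name
    return _ELEMENT_CLASS_TO_DTYPE.get(element_class, element_class)
```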
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wkcuber/metadata.py`
Content:
```
1 import json
2 import re
3 import wkw
4 import logging
5 import numpy as np
6
7 from argparse import ArgumentParser
8 from glob import iglob
9 from os import path, listdir
10 from typing import Optional
11 from .mag import Mag
12 from typing import List
13 from .utils import add_verbose_flag, setup_logging, add_scale_flag
14
15
16 def get_datasource_path(dataset_path):
17 return path.join(dataset_path, "datasource-properties.json")
18
19
20 def create_parser():
21 parser = ArgumentParser()
22
23 parser.add_argument("path", help="Directory containing the dataset.")
24
25 parser.add_argument("--name", "-n", help="Name of the dataset", default=None)
26
27 parser.add_argument("--refresh", "-r", default=False, action="store_true")
28
29 group = parser.add_mutually_exclusive_group()
30 group.add_argument(
31 "--compute_max_id",
32 "-c",
33 help="set to compute max id",
34 default=False,
35 action="store_true",
36 )
37 group.add_argument("--max_id", help="set max id of segmentation.", default=0)
38
39 add_scale_flag(parser, required=False)
40 add_verbose_flag(parser)
41
42 return parser
43
44
45 def write_datasource_properties(dataset_path, datasource_properties):
46 datasource_properties_path = get_datasource_path(dataset_path)
47 with open(datasource_properties_path, "wt") as datasource_properties_file:
48 json.dump(datasource_properties, datasource_properties_file, indent=2)
49
50
51 def read_datasource_properties(dataset_path):
52 with open(get_datasource_path(dataset_path), "r") as datasource_properties_file:
53 return json.load(datasource_properties_file)
54
55
56 """
57 Creates a datasource-properties.json file with the specified properties
58 for the given dataset path. Common layers are detected automatically.
59 """
60
61
62 def write_webknossos_metadata(
63 dataset_path,
64 name,
65 scale,
66 max_id=0,
67 compute_max_id=False,
68 exact_bounding_box: Optional[dict] = None,
69 ):
70 if name == None:
71 name = path.basename(dataset_path)
72
73 # Generate a metadata file for webKnossos
74 # Currently includes no source of information for team
75 layers = list(
76 detect_layers(dataset_path, max_id, compute_max_id, exact_bounding_box)
77 )
78 write_datasource_properties(
79 dataset_path,
80 {
81 "id": {"name": name, "team": "<unknown>"},
82 "dataLayers": layers,
83 "scale": scale,
84 },
85 )
86
87
88 """
89 Updates the datasource-properties.json file for a given dataset.
90 Use this method if you added (or removed) layers and/or changed magnifications for
91 existing layers.
92
93 Raises an exception if the datasource-properties.json file does not exist, yet.
94 In this case, use write_webknossos_metadata instead.
95 """
96
97
98 def refresh_metadata(
99 wkw_path, max_id=0, compute_max_id=False, exact_bounding_box: Optional[dict] = None
100 ):
101 dataset_path = get_datasource_path(wkw_path)
102 if not path.exists(dataset_path):
103 raise Exception(
104 "datasource-properties.json file could not be found. Please use write_webknossos_metadata to create it."
105 )
106
107 datasource_properties = read_datasource_properties(wkw_path)
108 existing_layers_dict = {
109 layer["name"]: layer for layer in datasource_properties["dataLayers"]
110 }
111
112 new_layers = list(
113 detect_layers(wkw_path, max_id, compute_max_id, exact_bounding_box)
114 )
115
116 # Merge the freshly read layers with the existing layer information, so that information,
117 # such as bounding boxes, are not lost for existing layers.
118 # For existing layers, only the resolutions will be updated.
119 merged_layers = []
120 for new_layer in new_layers:
121 layer_name = new_layer["name"]
122 if layer_name in existing_layers_dict:
123 existing_layer = existing_layers_dict[layer_name]
124 # Update the resolutions
125 existing_layer["wkwResolutions"] = new_layer["wkwResolutions"]
126 merged_layers.append(existing_layer)
127 else:
128 merged_layers.append(new_layer)
129
130 datasource_properties["dataLayers"] = merged_layers
131 write_datasource_properties(wkw_path, datasource_properties)
132
133
134 def read_metadata_for_layer(wkw_path, layer_name):
135 datasource_properties = read_datasource_properties(wkw_path)
136
137 layers = datasource_properties["dataLayers"]
138 layer_info = next(layer for layer in layers if layer["name"] == layer_name)
139
140 dtype = np.dtype(layer_info["elementClass"])
141 bounding_box = layer_info["boundingBox"]
142 origin = bounding_box["topLeft"]
143 bounding_box = [
144 bounding_box["width"],
145 bounding_box["height"],
146 bounding_box["depth"],
147 ]
148
149 return layer_info, dtype, bounding_box, origin
150
151
152 def detect_dtype(dataset_path, layer, mag: Mag = Mag(1)):
153 layer_path = path.join(dataset_path, layer, str(mag))
154 if path.exists(layer_path):
155 with wkw.Dataset.open(layer_path) as dataset:
156 voxel_type = dataset.header.voxel_type
157 num_channels = dataset.header.num_channels
158 voxel_size = np.dtype(voxel_type)
159 if voxel_size == np.uint8 and num_channels > 1:
160 return "uint" + str(8 * num_channels)
161 else:
162 return str(np.dtype(voxel_type))
163
164
165 def detect_cubeLength(dataset_path, layer, mag: Mag = Mag(1)):
166 layer_path = path.join(dataset_path, layer, str(mag))
167 if path.exists(layer_path):
168 with wkw.Dataset.open(layer_path) as dataset:
169 return dataset.header.block_len * dataset.header.file_len
170
171
172 def detect_bbox(dataset_path, layer, mag: Mag = Mag(1)):
173 # Detect the coarse bounding box of a dataset by iterating
174 # over the WKW cubes
175 layer_path = path.join(dataset_path, layer, str(mag))
176
177 def list_files(layer_path):
178 return iglob(path.join(layer_path, "*", "*", "*.wkw"), recursive=True)
179
180 def parse_cube_file_name(filename):
181 CUBE_REGEX = re.compile(r"z(\d+)/y(\d+)/x(\d+)(\.wkw)$")
182 m = CUBE_REGEX.search(filename)
183 return (int(m.group(3)), int(m.group(2)), int(m.group(1)))
184
185 def list_cubes(layer_path):
186 return (parse_cube_file_name(f) for f in list_files(layer_path))
187
188 xs, ys, zs = list(zip(*list_cubes(layer_path)))
189
190 min_x, min_y, min_z = min(xs), min(ys), min(zs)
191 max_x, max_y, max_z = max(xs), max(ys), max(zs)
192
193 cubeLength = detect_cubeLength(dataset_path, layer, mag)
194
195 return {
196 "topLeft": [min_x * cubeLength, min_y * cubeLength, min_z * cubeLength],
197 "width": (1 + max_x - min_x) * cubeLength,
198 "height": (1 + max_y - min_y) * cubeLength,
199 "depth": (1 + max_z - min_z) * cubeLength,
200 }
201
202
203 def detect_resolutions(dataset_path, layer) -> List[Mag]:
204 for mag in listdir(path.join(dataset_path, layer)):
205 try:
206 yield Mag(mag)
207 except ValueError:
208 logging.info("ignoring {} as resolution".format(mag))
209
210
211 def detect_standard_layer(
212 dataset_path, layer_name, exact_bounding_box=None, category="color"
213 ):
214 # Perform metadata detection for well-known layers
215
216 mags = list(detect_resolutions(dataset_path, layer_name))
217 mags = sorted(mags)
218 assert len(mags) > 0, "No resolutions found"
219
220 if exact_bounding_box is None:
221 bbox = detect_bbox(dataset_path, layer_name, mags[0])
222 else:
223 bbox = exact_bounding_box
224
225 resolutions = [
226 {
227 "resolution": mag.to_array(),
228 "cubeLength": detect_cubeLength(dataset_path, layer_name, mag),
229 }
230 for mag in mags
231 ]
232
233 dtype = detect_dtype(dataset_path, layer_name, mags[0])
234
235 return {
236 "dataFormat": "wkw",
237 "name": layer_name,
238 "category": category,
239 "elementClass": dtype,
240 "boundingBox": bbox,
241 "wkwResolutions": list(resolutions),
242 }
243
244
245 def detect_segmentation_layer(
246 dataset_path, layer_name, max_id, compute_max_id=False, exact_bounding_box=None
247 ):
248 layer_info = detect_standard_layer(
249 dataset_path, layer_name, exact_bounding_box, category="segmentation"
250 )
251 layer_info["mappings"] = []
252 layer_info["largestSegmentId"] = max_id
253
254 if compute_max_id:
255 logging.info("Computing max id of layer={}".format(layer_name))
256 # Computing the current largest segment id
257 # This may take very long due to IO load
258 layer_path = path.join(dataset_path, layer_name, "1")
259 with wkw.Dataset.open(layer_path) as dataset:
260 bbox = layer_info["boundingBox"]
261 layer_info["largestSegmentId"] = int(
262 np.max(
263 dataset.read(
264 bbox["topLeft"], [bbox["width"], bbox["height"], bbox["depth"]]
265 )
266 )
267 )
268 logging.info(
269 "Max id of layer={} is {}".format(
270 layer_name, layer_info["largestSegmentId"]
271 )
272 )
273
274 return layer_info
275
276
277 def detect_layers(dataset_path, max_id, compute_max_id, exact_bounding_box=None):
278 # Detect metadata for well-known layers (i.e., color, prediction and segmentation)
279 for layer_name in ["color", "prediction"]:
280 if path.exists(path.join(dataset_path, layer_name)):
281 yield detect_standard_layer(dataset_path, layer_name, exact_bounding_box)
282 if path.exists(path.join(dataset_path, "segmentation")):
283 yield detect_segmentation_layer(
284 dataset_path, "segmentation", max_id, compute_max_id, exact_bounding_box
285 )
286
287
288 if __name__ == "__main__":
289 args = create_parser().parse_args()
290 setup_logging(args)
291
292 if not args.refresh:
293 assert (
294 args.scale is not None
295 ), "The scale has to be specified when creating metadata for a dataset."
296 assert (
297 args.name is not None
298 ), "Please provide a name via --name to create meta data."
299 scale = tuple(float(x) for x in args.scale.split(","))
300 write_webknossos_metadata(
301 args.path, args.name, scale, args.max_id, args.compute_max_id
302 )
303 else:
304 if args.name is not None:
305 logging.warn(
306 "The --name argument is ignored, since --refresh was provided."
307 )
308 refresh_metadata(args.path, args.max_id, args.compute_max_id)
309
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wkcuber/metadata.py b/wkcuber/metadata.py
--- a/wkcuber/metadata.py
+++ b/wkcuber/metadata.py
@@ -131,13 +131,26 @@
write_datasource_properties(wkw_path, datasource_properties)
+def convert_element_class_to_dtype(elementClass):
+ default_dtype = np.uint8 if "uint" in elementClass else np.dtype(elementClass)
+ conversion_map = {
+ "float": np.float32,
+ "double": np.float64,
+ "uint8": np.uint8,
+ "uint16": np.uint16,
+ "uint32": np.uint32,
+ "uint64": np.uint64,
+ }
+ return conversion_map.get(elementClass, default_dtype)
+
+
def read_metadata_for_layer(wkw_path, layer_name):
datasource_properties = read_datasource_properties(wkw_path)
layers = datasource_properties["dataLayers"]
layer_info = next(layer for layer in layers if layer["name"] == layer_name)
- dtype = np.dtype(layer_info["elementClass"])
+ dtype = convert_element_class_to_dtype(layer_info["elementClass"])
bounding_box = layer_info["boundingBox"]
origin = bounding_box["topLeft"]
bounding_box = [
@@ -149,6 +162,19 @@
return layer_info, dtype, bounding_box, origin
+def convert_dype_to_element_class(dtype):
+ element_class_to_dtype_map = {
+ "float": np.float32,
+ "double": np.float64,
+ "uint8": np.uint8,
+ "uint16": np.uint16,
+ "uint32": np.uint32,
+ "uint64": np.uint64,
+ }
+ conversion_map = {v: k for k, v in element_class_to_dtype_map.items()}
+ return conversion_map.get(dtype.type, str(dtype))
+
+
def detect_dtype(dataset_path, layer, mag: Mag = Mag(1)):
layer_path = path.join(dataset_path, layer, str(mag))
if path.exists(layer_path):
@@ -159,7 +185,7 @@
if voxel_size == np.uint8 and num_channels > 1:
return "uint" + str(8 * num_channels)
else:
- return str(np.dtype(voxel_type))
+ return convert_dype_to_element_class(voxel_size)
def detect_cubeLength(dataset_path, layer, mag: Mag = Mag(1)):
@@ -302,7 +328,7 @@
)
else:
if args.name is not None:
- logging.warn(
+ logging.warning(
"The --name argument is ignored, since --refresh was provided."
)
refresh_metadata(args.path, args.max_id, args.compute_max_id)
| {"golden_diff": "diff --git a/wkcuber/metadata.py b/wkcuber/metadata.py\n--- a/wkcuber/metadata.py\n+++ b/wkcuber/metadata.py\n@@ -131,13 +131,26 @@\n write_datasource_properties(wkw_path, datasource_properties)\n \n \n+def convert_element_class_to_dtype(elementClass):\n+ default_dtype = np.uint8 if \"uint\" in elementClass else np.dtype(elementClass)\n+ conversion_map = {\n+ \"float\": np.float32,\n+ \"double\": np.float64,\n+ \"uint8\": np.uint8,\n+ \"uint16\": np.uint16,\n+ \"uint32\": np.uint32,\n+ \"uint64\": np.uint64,\n+ }\n+ return conversion_map.get(elementClass, default_dtype)\n+\n+\n def read_metadata_for_layer(wkw_path, layer_name):\n datasource_properties = read_datasource_properties(wkw_path)\n \n layers = datasource_properties[\"dataLayers\"]\n layer_info = next(layer for layer in layers if layer[\"name\"] == layer_name)\n \n- dtype = np.dtype(layer_info[\"elementClass\"])\n+ dtype = convert_element_class_to_dtype(layer_info[\"elementClass\"])\n bounding_box = layer_info[\"boundingBox\"]\n origin = bounding_box[\"topLeft\"]\n bounding_box = [\n@@ -149,6 +162,19 @@\n return layer_info, dtype, bounding_box, origin\n \n \n+def convert_dype_to_element_class(dtype):\n+ element_class_to_dtype_map = {\n+ \"float\": np.float32,\n+ \"double\": np.float64,\n+ \"uint8\": np.uint8,\n+ \"uint16\": np.uint16,\n+ \"uint32\": np.uint32,\n+ \"uint64\": np.uint64,\n+ }\n+ conversion_map = {v: k for k, v in element_class_to_dtype_map.items()}\n+ return conversion_map.get(dtype.type, str(dtype))\n+\n+\n def detect_dtype(dataset_path, layer, mag: Mag = Mag(1)):\n layer_path = path.join(dataset_path, layer, str(mag))\n if path.exists(layer_path):\n@@ -159,7 +185,7 @@\n if voxel_size == np.uint8 and num_channels > 1:\n return \"uint\" + str(8 * num_channels)\n else:\n- return str(np.dtype(voxel_type))\n+ return convert_dype_to_element_class(voxel_size)\n \n \n def detect_cubeLength(dataset_path, layer, mag: Mag = Mag(1)):\n@@ -302,7 +328,7 @@\n )\n else:\n if args.name is not None:\n- logging.warn(\n+ logging.warning(\n \"The --name argument is ignored, since --refresh was provided.\"\n )\n refresh_metadata(args.path, args.max_id, args.compute_max_id)\n", "issue": "float/double instead of float32/float64\nwebknossos expects the datatypes to be called `float` and `double` instead of `float32` or `float64` in the `datasource-properties.json`. 
This needs to be converted both ways in `wkcuber.metadata`\n", "before_files": [{"content": "import json\nimport re\nimport wkw\nimport logging\nimport numpy as np\n\nfrom argparse import ArgumentParser\nfrom glob import iglob\nfrom os import path, listdir\nfrom typing import Optional\nfrom .mag import Mag\nfrom typing import List\nfrom .utils import add_verbose_flag, setup_logging, add_scale_flag\n\n\ndef get_datasource_path(dataset_path):\n return path.join(dataset_path, \"datasource-properties.json\")\n\n\ndef create_parser():\n parser = ArgumentParser()\n\n parser.add_argument(\"path\", help=\"Directory containing the dataset.\")\n\n parser.add_argument(\"--name\", \"-n\", help=\"Name of the dataset\", default=None)\n\n parser.add_argument(\"--refresh\", \"-r\", default=False, action=\"store_true\")\n\n group = parser.add_mutually_exclusive_group()\n group.add_argument(\n \"--compute_max_id\",\n \"-c\",\n help=\"set to compute max id\",\n default=False,\n action=\"store_true\",\n )\n group.add_argument(\"--max_id\", help=\"set max id of segmentation.\", default=0)\n\n add_scale_flag(parser, required=False)\n add_verbose_flag(parser)\n\n return parser\n\n\ndef write_datasource_properties(dataset_path, datasource_properties):\n datasource_properties_path = get_datasource_path(dataset_path)\n with open(datasource_properties_path, \"wt\") as datasource_properties_file:\n json.dump(datasource_properties, datasource_properties_file, indent=2)\n\n\ndef read_datasource_properties(dataset_path):\n with open(get_datasource_path(dataset_path), \"r\") as datasource_properties_file:\n return json.load(datasource_properties_file)\n\n\n\"\"\"\nCreates a datasource-properties.json file with the specified properties\nfor the given dataset path. Common layers are detected automatically.\n\"\"\"\n\n\ndef write_webknossos_metadata(\n dataset_path,\n name,\n scale,\n max_id=0,\n compute_max_id=False,\n exact_bounding_box: Optional[dict] = None,\n):\n if name == None:\n name = path.basename(dataset_path)\n\n # Generate a metadata file for webKnossos\n # Currently includes no source of information for team\n layers = list(\n detect_layers(dataset_path, max_id, compute_max_id, exact_bounding_box)\n )\n write_datasource_properties(\n dataset_path,\n {\n \"id\": {\"name\": name, \"team\": \"<unknown>\"},\n \"dataLayers\": layers,\n \"scale\": scale,\n },\n )\n\n\n\"\"\"\nUpdates the datasource-properties.json file for a given dataset.\nUse this method if you added (or removed) layers and/or changed magnifications for\nexisting layers.\n\nRaises an exception if the datasource-properties.json file does not exist, yet.\nIn this case, use write_webknossos_metadata instead.\n\"\"\"\n\n\ndef refresh_metadata(\n wkw_path, max_id=0, compute_max_id=False, exact_bounding_box: Optional[dict] = None\n):\n dataset_path = get_datasource_path(wkw_path)\n if not path.exists(dataset_path):\n raise Exception(\n \"datasource-properties.json file could not be found. 
Please use write_webknossos_metadata to create it.\"\n )\n\n datasource_properties = read_datasource_properties(wkw_path)\n existing_layers_dict = {\n layer[\"name\"]: layer for layer in datasource_properties[\"dataLayers\"]\n }\n\n new_layers = list(\n detect_layers(wkw_path, max_id, compute_max_id, exact_bounding_box)\n )\n\n # Merge the freshly read layers with the existing layer information, so that information,\n # such as bounding boxes, are not lost for existing layers.\n # For existing layers, only the resolutions will be updated.\n merged_layers = []\n for new_layer in new_layers:\n layer_name = new_layer[\"name\"]\n if layer_name in existing_layers_dict:\n existing_layer = existing_layers_dict[layer_name]\n # Update the resolutions\n existing_layer[\"wkwResolutions\"] = new_layer[\"wkwResolutions\"]\n merged_layers.append(existing_layer)\n else:\n merged_layers.append(new_layer)\n\n datasource_properties[\"dataLayers\"] = merged_layers\n write_datasource_properties(wkw_path, datasource_properties)\n\n\ndef read_metadata_for_layer(wkw_path, layer_name):\n datasource_properties = read_datasource_properties(wkw_path)\n\n layers = datasource_properties[\"dataLayers\"]\n layer_info = next(layer for layer in layers if layer[\"name\"] == layer_name)\n\n dtype = np.dtype(layer_info[\"elementClass\"])\n bounding_box = layer_info[\"boundingBox\"]\n origin = bounding_box[\"topLeft\"]\n bounding_box = [\n bounding_box[\"width\"],\n bounding_box[\"height\"],\n bounding_box[\"depth\"],\n ]\n\n return layer_info, dtype, bounding_box, origin\n\n\ndef detect_dtype(dataset_path, layer, mag: Mag = Mag(1)):\n layer_path = path.join(dataset_path, layer, str(mag))\n if path.exists(layer_path):\n with wkw.Dataset.open(layer_path) as dataset:\n voxel_type = dataset.header.voxel_type\n num_channels = dataset.header.num_channels\n voxel_size = np.dtype(voxel_type)\n if voxel_size == np.uint8 and num_channels > 1:\n return \"uint\" + str(8 * num_channels)\n else:\n return str(np.dtype(voxel_type))\n\n\ndef detect_cubeLength(dataset_path, layer, mag: Mag = Mag(1)):\n layer_path = path.join(dataset_path, layer, str(mag))\n if path.exists(layer_path):\n with wkw.Dataset.open(layer_path) as dataset:\n return dataset.header.block_len * dataset.header.file_len\n\n\ndef detect_bbox(dataset_path, layer, mag: Mag = Mag(1)):\n # Detect the coarse bounding box of a dataset by iterating\n # over the WKW cubes\n layer_path = path.join(dataset_path, layer, str(mag))\n\n def list_files(layer_path):\n return iglob(path.join(layer_path, \"*\", \"*\", \"*.wkw\"), recursive=True)\n\n def parse_cube_file_name(filename):\n CUBE_REGEX = re.compile(r\"z(\\d+)/y(\\d+)/x(\\d+)(\\.wkw)$\")\n m = CUBE_REGEX.search(filename)\n return (int(m.group(3)), int(m.group(2)), int(m.group(1)))\n\n def list_cubes(layer_path):\n return (parse_cube_file_name(f) for f in list_files(layer_path))\n\n xs, ys, zs = list(zip(*list_cubes(layer_path)))\n\n min_x, min_y, min_z = min(xs), min(ys), min(zs)\n max_x, max_y, max_z = max(xs), max(ys), max(zs)\n\n cubeLength = detect_cubeLength(dataset_path, layer, mag)\n\n return {\n \"topLeft\": [min_x * cubeLength, min_y * cubeLength, min_z * cubeLength],\n \"width\": (1 + max_x - min_x) * cubeLength,\n \"height\": (1 + max_y - min_y) * cubeLength,\n \"depth\": (1 + max_z - min_z) * cubeLength,\n }\n\n\ndef detect_resolutions(dataset_path, layer) -> List[Mag]:\n for mag in listdir(path.join(dataset_path, layer)):\n try:\n yield Mag(mag)\n except ValueError:\n logging.info(\"ignoring {} as 
resolution\".format(mag))\n\n\ndef detect_standard_layer(\n dataset_path, layer_name, exact_bounding_box=None, category=\"color\"\n):\n # Perform metadata detection for well-known layers\n\n mags = list(detect_resolutions(dataset_path, layer_name))\n mags = sorted(mags)\n assert len(mags) > 0, \"No resolutions found\"\n\n if exact_bounding_box is None:\n bbox = detect_bbox(dataset_path, layer_name, mags[0])\n else:\n bbox = exact_bounding_box\n\n resolutions = [\n {\n \"resolution\": mag.to_array(),\n \"cubeLength\": detect_cubeLength(dataset_path, layer_name, mag),\n }\n for mag in mags\n ]\n\n dtype = detect_dtype(dataset_path, layer_name, mags[0])\n\n return {\n \"dataFormat\": \"wkw\",\n \"name\": layer_name,\n \"category\": category,\n \"elementClass\": dtype,\n \"boundingBox\": bbox,\n \"wkwResolutions\": list(resolutions),\n }\n\n\ndef detect_segmentation_layer(\n dataset_path, layer_name, max_id, compute_max_id=False, exact_bounding_box=None\n):\n layer_info = detect_standard_layer(\n dataset_path, layer_name, exact_bounding_box, category=\"segmentation\"\n )\n layer_info[\"mappings\"] = []\n layer_info[\"largestSegmentId\"] = max_id\n\n if compute_max_id:\n logging.info(\"Computing max id of layer={}\".format(layer_name))\n # Computing the current largest segment id\n # This may take very long due to IO load\n layer_path = path.join(dataset_path, layer_name, \"1\")\n with wkw.Dataset.open(layer_path) as dataset:\n bbox = layer_info[\"boundingBox\"]\n layer_info[\"largestSegmentId\"] = int(\n np.max(\n dataset.read(\n bbox[\"topLeft\"], [bbox[\"width\"], bbox[\"height\"], bbox[\"depth\"]]\n )\n )\n )\n logging.info(\n \"Max id of layer={} is {}\".format(\n layer_name, layer_info[\"largestSegmentId\"]\n )\n )\n\n return layer_info\n\n\ndef detect_layers(dataset_path, max_id, compute_max_id, exact_bounding_box=None):\n # Detect metadata for well-known layers (i.e., color, prediction and segmentation)\n for layer_name in [\"color\", \"prediction\"]:\n if path.exists(path.join(dataset_path, layer_name)):\n yield detect_standard_layer(dataset_path, layer_name, exact_bounding_box)\n if path.exists(path.join(dataset_path, \"segmentation\")):\n yield detect_segmentation_layer(\n dataset_path, \"segmentation\", max_id, compute_max_id, exact_bounding_box\n )\n\n\nif __name__ == \"__main__\":\n args = create_parser().parse_args()\n setup_logging(args)\n\n if not args.refresh:\n assert (\n args.scale is not None\n ), \"The scale has to be specified when creating metadata for a dataset.\"\n assert (\n args.name is not None\n ), \"Please provide a name via --name to create meta data.\"\n scale = tuple(float(x) for x in args.scale.split(\",\"))\n write_webknossos_metadata(\n args.path, args.name, scale, args.max_id, args.compute_max_id\n )\n else:\n if args.name is not None:\n logging.warn(\n \"The --name argument is ignored, since --refresh was provided.\"\n )\n refresh_metadata(args.path, args.max_id, args.compute_max_id)\n", "path": "wkcuber/metadata.py"}], "after_files": [{"content": "import json\nimport re\nimport wkw\nimport logging\nimport numpy as np\n\nfrom argparse import ArgumentParser\nfrom glob import iglob\nfrom os import path, listdir\nfrom typing import Optional\nfrom .mag import Mag\nfrom typing import List\nfrom .utils import add_verbose_flag, setup_logging, add_scale_flag\n\n\ndef get_datasource_path(dataset_path):\n return path.join(dataset_path, \"datasource-properties.json\")\n\n\ndef create_parser():\n parser = ArgumentParser()\n\n parser.add_argument(\"path\", 
help=\"Directory containing the dataset.\")\n\n parser.add_argument(\"--name\", \"-n\", help=\"Name of the dataset\", default=None)\n\n parser.add_argument(\"--refresh\", \"-r\", default=False, action=\"store_true\")\n\n group = parser.add_mutually_exclusive_group()\n group.add_argument(\n \"--compute_max_id\",\n \"-c\",\n help=\"set to compute max id\",\n default=False,\n action=\"store_true\",\n )\n group.add_argument(\"--max_id\", help=\"set max id of segmentation.\", default=0)\n\n add_scale_flag(parser, required=False)\n add_verbose_flag(parser)\n\n return parser\n\n\ndef write_datasource_properties(dataset_path, datasource_properties):\n datasource_properties_path = get_datasource_path(dataset_path)\n with open(datasource_properties_path, \"wt\") as datasource_properties_file:\n json.dump(datasource_properties, datasource_properties_file, indent=2)\n\n\ndef read_datasource_properties(dataset_path):\n with open(get_datasource_path(dataset_path), \"r\") as datasource_properties_file:\n return json.load(datasource_properties_file)\n\n\n\"\"\"\nCreates a datasource-properties.json file with the specified properties\nfor the given dataset path. Common layers are detected automatically.\n\"\"\"\n\n\ndef write_webknossos_metadata(\n dataset_path,\n name,\n scale,\n max_id=0,\n compute_max_id=False,\n exact_bounding_box: Optional[dict] = None,\n):\n if name == None:\n name = path.basename(dataset_path)\n\n # Generate a metadata file for webKnossos\n # Currently includes no source of information for team\n layers = list(\n detect_layers(dataset_path, max_id, compute_max_id, exact_bounding_box)\n )\n write_datasource_properties(\n dataset_path,\n {\n \"id\": {\"name\": name, \"team\": \"<unknown>\"},\n \"dataLayers\": layers,\n \"scale\": scale,\n },\n )\n\n\n\"\"\"\nUpdates the datasource-properties.json file for a given dataset.\nUse this method if you added (or removed) layers and/or changed magnifications for\nexisting layers.\n\nRaises an exception if the datasource-properties.json file does not exist, yet.\nIn this case, use write_webknossos_metadata instead.\n\"\"\"\n\n\ndef refresh_metadata(\n wkw_path, max_id=0, compute_max_id=False, exact_bounding_box: Optional[dict] = None\n):\n dataset_path = get_datasource_path(wkw_path)\n if not path.exists(dataset_path):\n raise Exception(\n \"datasource-properties.json file could not be found. 
Please use write_webknossos_metadata to create it.\"\n )\n\n datasource_properties = read_datasource_properties(wkw_path)\n existing_layers_dict = {\n layer[\"name\"]: layer for layer in datasource_properties[\"dataLayers\"]\n }\n\n new_layers = list(\n detect_layers(wkw_path, max_id, compute_max_id, exact_bounding_box)\n )\n\n # Merge the freshly read layers with the existing layer information, so that information,\n # such as bounding boxes, are not lost for existing layers.\n # For existing layers, only the resolutions will be updated.\n merged_layers = []\n for new_layer in new_layers:\n layer_name = new_layer[\"name\"]\n if layer_name in existing_layers_dict:\n existing_layer = existing_layers_dict[layer_name]\n # Update the resolutions\n existing_layer[\"wkwResolutions\"] = new_layer[\"wkwResolutions\"]\n merged_layers.append(existing_layer)\n else:\n merged_layers.append(new_layer)\n\n datasource_properties[\"dataLayers\"] = merged_layers\n write_datasource_properties(wkw_path, datasource_properties)\n\n\ndef convert_element_class_to_dtype(elementClass):\n default_dtype = np.uint8 if \"uint\" in elementClass else np.dtype(elementClass)\n conversion_map = {\n \"float\": np.float32,\n \"double\": np.float64,\n \"uint8\": np.uint8,\n \"uint16\": np.uint16,\n \"uint32\": np.uint32,\n \"uint64\": np.uint64,\n }\n return conversion_map.get(elementClass, default_dtype)\n\n\ndef read_metadata_for_layer(wkw_path, layer_name):\n datasource_properties = read_datasource_properties(wkw_path)\n\n layers = datasource_properties[\"dataLayers\"]\n layer_info = next(layer for layer in layers if layer[\"name\"] == layer_name)\n\n dtype = convert_element_class_to_dtype(layer_info[\"elementClass\"])\n bounding_box = layer_info[\"boundingBox\"]\n origin = bounding_box[\"topLeft\"]\n bounding_box = [\n bounding_box[\"width\"],\n bounding_box[\"height\"],\n bounding_box[\"depth\"],\n ]\n\n return layer_info, dtype, bounding_box, origin\n\n\ndef convert_dype_to_element_class(dtype):\n element_class_to_dtype_map = {\n \"float\": np.float32,\n \"double\": np.float64,\n \"uint8\": np.uint8,\n \"uint16\": np.uint16,\n \"uint32\": np.uint32,\n \"uint64\": np.uint64,\n }\n conversion_map = {v: k for k, v in element_class_to_dtype_map.items()}\n return conversion_map.get(dtype.type, str(dtype))\n\n\ndef detect_dtype(dataset_path, layer, mag: Mag = Mag(1)):\n layer_path = path.join(dataset_path, layer, str(mag))\n if path.exists(layer_path):\n with wkw.Dataset.open(layer_path) as dataset:\n voxel_type = dataset.header.voxel_type\n num_channels = dataset.header.num_channels\n voxel_size = np.dtype(voxel_type)\n if voxel_size == np.uint8 and num_channels > 1:\n return \"uint\" + str(8 * num_channels)\n else:\n return convert_dype_to_element_class(voxel_size)\n\n\ndef detect_cubeLength(dataset_path, layer, mag: Mag = Mag(1)):\n layer_path = path.join(dataset_path, layer, str(mag))\n if path.exists(layer_path):\n with wkw.Dataset.open(layer_path) as dataset:\n return dataset.header.block_len * dataset.header.file_len\n\n\ndef detect_bbox(dataset_path, layer, mag: Mag = Mag(1)):\n # Detect the coarse bounding box of a dataset by iterating\n # over the WKW cubes\n layer_path = path.join(dataset_path, layer, str(mag))\n\n def list_files(layer_path):\n return iglob(path.join(layer_path, \"*\", \"*\", \"*.wkw\"), recursive=True)\n\n def parse_cube_file_name(filename):\n CUBE_REGEX = re.compile(r\"z(\\d+)/y(\\d+)/x(\\d+)(\\.wkw)$\")\n m = CUBE_REGEX.search(filename)\n return (int(m.group(3)), int(m.group(2)), 
int(m.group(1)))\n\n def list_cubes(layer_path):\n return (parse_cube_file_name(f) for f in list_files(layer_path))\n\n xs, ys, zs = list(zip(*list_cubes(layer_path)))\n\n min_x, min_y, min_z = min(xs), min(ys), min(zs)\n max_x, max_y, max_z = max(xs), max(ys), max(zs)\n\n cubeLength = detect_cubeLength(dataset_path, layer, mag)\n\n return {\n \"topLeft\": [min_x * cubeLength, min_y * cubeLength, min_z * cubeLength],\n \"width\": (1 + max_x - min_x) * cubeLength,\n \"height\": (1 + max_y - min_y) * cubeLength,\n \"depth\": (1 + max_z - min_z) * cubeLength,\n }\n\n\ndef detect_resolutions(dataset_path, layer) -> List[Mag]:\n for mag in listdir(path.join(dataset_path, layer)):\n try:\n yield Mag(mag)\n except ValueError:\n logging.info(\"ignoring {} as resolution\".format(mag))\n\n\ndef detect_standard_layer(\n dataset_path, layer_name, exact_bounding_box=None, category=\"color\"\n):\n # Perform metadata detection for well-known layers\n\n mags = list(detect_resolutions(dataset_path, layer_name))\n mags = sorted(mags)\n assert len(mags) > 0, \"No resolutions found\"\n\n if exact_bounding_box is None:\n bbox = detect_bbox(dataset_path, layer_name, mags[0])\n else:\n bbox = exact_bounding_box\n\n resolutions = [\n {\n \"resolution\": mag.to_array(),\n \"cubeLength\": detect_cubeLength(dataset_path, layer_name, mag),\n }\n for mag in mags\n ]\n\n dtype = detect_dtype(dataset_path, layer_name, mags[0])\n\n return {\n \"dataFormat\": \"wkw\",\n \"name\": layer_name,\n \"category\": category,\n \"elementClass\": dtype,\n \"boundingBox\": bbox,\n \"wkwResolutions\": list(resolutions),\n }\n\n\ndef detect_segmentation_layer(\n dataset_path, layer_name, max_id, compute_max_id=False, exact_bounding_box=None\n):\n layer_info = detect_standard_layer(\n dataset_path, layer_name, exact_bounding_box, category=\"segmentation\"\n )\n layer_info[\"mappings\"] = []\n layer_info[\"largestSegmentId\"] = max_id\n\n if compute_max_id:\n logging.info(\"Computing max id of layer={}\".format(layer_name))\n # Computing the current largest segment id\n # This may take very long due to IO load\n layer_path = path.join(dataset_path, layer_name, \"1\")\n with wkw.Dataset.open(layer_path) as dataset:\n bbox = layer_info[\"boundingBox\"]\n layer_info[\"largestSegmentId\"] = int(\n np.max(\n dataset.read(\n bbox[\"topLeft\"], [bbox[\"width\"], bbox[\"height\"], bbox[\"depth\"]]\n )\n )\n )\n logging.info(\n \"Max id of layer={} is {}\".format(\n layer_name, layer_info[\"largestSegmentId\"]\n )\n )\n\n return layer_info\n\n\ndef detect_layers(dataset_path, max_id, compute_max_id, exact_bounding_box=None):\n # Detect metadata for well-known layers (i.e., color, prediction and segmentation)\n for layer_name in [\"color\", \"prediction\"]:\n if path.exists(path.join(dataset_path, layer_name)):\n yield detect_standard_layer(dataset_path, layer_name, exact_bounding_box)\n if path.exists(path.join(dataset_path, \"segmentation\")):\n yield detect_segmentation_layer(\n dataset_path, \"segmentation\", max_id, compute_max_id, exact_bounding_box\n )\n\n\nif __name__ == \"__main__\":\n args = create_parser().parse_args()\n setup_logging(args)\n\n if not args.refresh:\n assert (\n args.scale is not None\n ), \"The scale has to be specified when creating metadata for a dataset.\"\n assert (\n args.name is not None\n ), \"Please provide a name via --name to create meta data.\"\n scale = tuple(float(x) for x in args.scale.split(\",\"))\n write_webknossos_metadata(\n args.path, args.name, scale, args.max_id, args.compute_max_id\n )\n else:\n 
if args.name is not None:\n logging.warning(\n \"The --name argument is ignored, since --refresh was provided.\"\n )\n refresh_metadata(args.path, args.max_id, args.compute_max_id)\n", "path": "wkcuber/metadata.py"}]} | 3,485 | 636 |
gh_patches_debug_4886 | rasdani/github-patches | git_diff | internetarchive__openlibrary-6785 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`/merges` entries should be displayed in descending order
<!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->
Today, entries in all `/merges` views are displayed in order of `created` timestamp, ascending. They should instead be ordered by `created` timestamp, __descending__.
### Evidence / Screenshot (if possible)
### Relevant url?
<!-- `https://openlibrary.org/...` -->
https://openlibrary.org/merges
### Steps to Reproduce
<!-- What steps caused you to find the bug? -->
1. Go to any `/merges` view.
2. Note the order of the merge requests in the table.
<!-- What actually happened after these steps? What did you expect to happen? -->
* Actual: Entries are listed from oldest to newest.
* Expected: Entries are listed from newest to oldest.
### Details
- **Logged in (Y/N)?** N
- **Browser type/version?** Any
- **Operating system?** Any
- **Environment (prod/dev/local)?** prod
<!-- If not sure, put prod -->
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
When we fetch the requests for the `/merges` table, we should pass `order='created desc'` to the `CommunityEditsQueue.get_requests()` call.
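For illustration, a sketch of the adjusted call in the `GET` handler of `community_edits_queue` (an excerpt, not a standalone script; it assumes the handler's other arguments stay exactly as in the file listed below, and only the `order` value changes):

```python
# Sketch of the proposed change: sort merge requests newest-first.
merge_requests = CommunityEditsQueue.get_requests(
    page=int(i.page),
    limit=int(i.limit),
    mode=i.mode,
    submitter=i.submitter,
    reviewer=i.reviewer,
    order='created desc',  # was 'created' (ascending)
).list()
```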
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
`/merges` controller is in [/openlibrary/plugins/upstream/edits.py](https://github.com/internetarchive/openlibrary/blob/master/openlibrary/plugins/upstream/edits.py)
### Stakeholders
<!-- @ tag stakeholders of this bug -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openlibrary/plugins/upstream/edits.py`
Content:
```
1 """Librarian Edits
2 """
3
4 import json
5 import web
6
7 from openlibrary import accounts
8 from openlibrary.core.edits import CommunityEditsQueue, get_status_for_view
9 from infogami.utils import delegate
10 from infogami.utils.view import render_template
11
12
13 def create_request(olids: str, username: str, comment: str = None):
14 work_ids = olids.split(',')
15 return CommunityEditsQueue.submit_work_merge_request(
16 work_ids,
17 submitter=username,
18 comment=comment,
19 )
20
21
22 class community_edits_queue(delegate.page):
23 path = '/merges'
24
25 def POST(self):
26 def response(status='ok', **kwargs):
27 return {'status': status, **kwargs}
28
29 i = web.input(
30 work_ids="", # Comma-separated OLIDs (OL1W,OL2W,OL3W,...,OL111W)
31 rtype="merge-works",
32 mrid=None,
33 action=None, # create, approve, decline, comment, unassign
34 comment=None,
35 )
36 user = accounts.get_current_user()
37 username = user['key'].split('/')[-1]
38 if i.mrid: # We are updating an existing merge request
39 if i.action == 'comment':
40 if i.comment:
41 CommunityEditsQueue.comment_request(i.mrid, username, i.comment)
42 return delegate.RawText(
43 json.dumps(response()), content_type="application/json"
44 )
45 else:
46 return delegate.RawText(
47 json.dumps(
48 response(
49 status='error', error='No comment sent in request.'
50 )
51 )
52 )
53 elif i.action == 'claim':
54 result = CommunityEditsQueue.assign_request(i.mrid, username)
55 return delegate.RawText(
56 json.dumps(response(**result)), content_type="application/json"
57 )
58 elif i.action == 'unassign':
59 CommunityEditsQueue.unassign_request(i.mrid)
60 status = get_status_for_view(CommunityEditsQueue.STATUS['PENDING'])
61 return delegate.RawText(json.dumps(response(newStatus=status)))
62 else:
63 if i.action == "decline":
64 status = CommunityEditsQueue.STATUS['DECLINED']
65 elif i.action == 'approve':
66 status = CommunityEditsQueue.STATUS['MERGED']
67 CommunityEditsQueue.update_request_status(
68 i.mrid, status, username, comment=i.comment
69 )
70 return delegate.RawText(
71 json.dumps(response()), content_type="application/json"
72 )
73 elif i.rtype == "merge-works":
74 if i.action == 'create':
75 result = create_request(i.work_ids, username, i.comment)
76 resp = (
77 response(id=result)
78 if result
79 else response(
80 status='error',
81 error='A request to merge these works has already been submitted.',
82 )
83 )
84 return delegate.RawText(
85 json.dumps(resp), content_type="application/json"
86 )
87
88 def GET(self):
89 i = web.input(page=1, limit=25, mode="open", submitter=None, reviewer=None)
90 merge_requests = CommunityEditsQueue.get_requests(
91 page=int(i.page),
92 limit=int(i.limit),
93 mode=i.mode,
94 submitter=i.submitter,
95 reviewer=i.reviewer,
96 order='created',
97 ).list()
98 enriched_requests = self.enrich(merge_requests)
99 total_found = CommunityEditsQueue.get_counts_by_mode(
100 mode=i.mode, submitter=i.submitter, reviewer=i.reviewer
101 )
102 return render_template(
103 'merge_queue/merge_queue',
104 total_found,
105 merge_requests=enriched_requests,
106 )
107
108 def enrich(self, merge_requests):
109 results = []
110 for r in merge_requests:
111 comments = r['comments']
112 obj = {
113 'id': r['id'],
114 'submitter': r['submitter'],
115 'reviewer': r['reviewer'],
116 'url': r['url'],
117 'status': r['status'],
118 'comments': (comments and comments.get('comments')) or [],
119 'created': r['created'],
120 'updated': r['updated'],
121 }
122 olids = self.extract_olids(r['url'])
123 obj['title'] = ''
124 for olid in olids:
125 book = web.ctx.site.get(f'/works/{olid}')
126 if book:
127 if not obj['title']:
128 obj['title'] = book.title
129 break
130
131 results.append(obj)
132 return results
133
134 def extract_olids(self, url):
135 query_string = url.split('?')[1]
136 split_params = query_string.split('&')
137 params = {}
138 for p in split_params:
139 kv = p.split('=')
140 params[kv[0]] = kv[1]
141 return params['records'].split(',')
142
143
144 class ui_partials(delegate.page):
145 path = '/merges/partials'
146
147 def GET(self):
148 i = web.input(type=None, comment='')
149 if i.type == 'comment':
150 component = render_template('merge_queue/comment', comment_str=i.comment)
151 return delegate.RawText(component)
152
153
154 def setup():
155 pass
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/openlibrary/plugins/upstream/edits.py b/openlibrary/plugins/upstream/edits.py
--- a/openlibrary/plugins/upstream/edits.py
+++ b/openlibrary/plugins/upstream/edits.py
@@ -93,7 +93,7 @@
mode=i.mode,
submitter=i.submitter,
reviewer=i.reviewer,
- order='created',
+ order='created desc',
).list()
enriched_requests = self.enrich(merge_requests)
total_found = CommunityEditsQueue.get_counts_by_mode(
| {"golden_diff": "diff --git a/openlibrary/plugins/upstream/edits.py b/openlibrary/plugins/upstream/edits.py\n--- a/openlibrary/plugins/upstream/edits.py\n+++ b/openlibrary/plugins/upstream/edits.py\n@@ -93,7 +93,7 @@\n mode=i.mode,\n submitter=i.submitter,\n reviewer=i.reviewer,\n- order='created',\n+ order='created desc',\n ).list()\n enriched_requests = self.enrich(merge_requests)\n total_found = CommunityEditsQueue.get_counts_by_mode(\n", "issue": "`/merges` entries should be displayed in descending order\n<!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->\r\nToday, entries in all `/merges` views are displayed in order of `created` timestamp, ascending. They should instead by ordered by `created` timestamp, __descending__.\r\n\r\n### Evidence / Screenshot (if possible)\r\n\r\n### Relevant url?\r\n<!-- `https://openlibrary.org/...` -->\r\nhttps://openlibrary.org/merges\r\n\r\n### Steps to Reproduce\r\n<!-- What steps caused you to find the bug? -->\r\n1. Go to any `/merges` view.\r\n2. Note the order of the merge requests in the table.\r\n\r\n<!-- What actually happened after these steps? What did you expect to happen? -->\r\n* Actual: Entries are listed from oldest to newest.\r\n* Expected: Entries are listed from newest to oldest.\r\n\r\n### Details\r\n\r\n- **Logged in (Y/N)?**N\r\n- **Browser type/version?**Any\r\n- **Operating system?** Any\r\n- **Environment (prod/dev/local)?** prod\r\n<!-- If not sure, put prod -->\r\n\r\n### Proposal & Constraints\r\n<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->\r\nWhen we fetch the requests for the `/merges` table, we should pass `order='created desc'` to the `CommunityEditsQueue.get_requests()` call.\r\n\r\n### Related files\r\n<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. 
-->\r\n`/merges` controller is in [/openlibrary/plugins/upstream/edits.py](https://github.com/internetarchive/openlibrary/blob/master/openlibrary/plugins/upstream/edits.py)\r\n\r\n### Stakeholders\r\n<!-- @ tag stakeholders of this bug -->\r\n\n", "before_files": [{"content": "\"\"\"Librarian Edits\n\"\"\"\n\nimport json\nimport web\n\nfrom openlibrary import accounts\nfrom openlibrary.core.edits import CommunityEditsQueue, get_status_for_view\nfrom infogami.utils import delegate\nfrom infogami.utils.view import render_template\n\n\ndef create_request(olids: str, username: str, comment: str = None):\n work_ids = olids.split(',')\n return CommunityEditsQueue.submit_work_merge_request(\n work_ids,\n submitter=username,\n comment=comment,\n )\n\n\nclass community_edits_queue(delegate.page):\n path = '/merges'\n\n def POST(self):\n def response(status='ok', **kwargs):\n return {'status': status, **kwargs}\n\n i = web.input(\n work_ids=\"\", # Comma-separated OLIDs (OL1W,OL2W,OL3W,...,OL111W)\n rtype=\"merge-works\",\n mrid=None,\n action=None, # create, approve, decline, comment, unassign\n comment=None,\n )\n user = accounts.get_current_user()\n username = user['key'].split('/')[-1]\n if i.mrid: # We are updating an existing merge request\n if i.action == 'comment':\n if i.comment:\n CommunityEditsQueue.comment_request(i.mrid, username, i.comment)\n return delegate.RawText(\n json.dumps(response()), content_type=\"application/json\"\n )\n else:\n return delegate.RawText(\n json.dumps(\n response(\n status='error', error='No comment sent in request.'\n )\n )\n )\n elif i.action == 'claim':\n result = CommunityEditsQueue.assign_request(i.mrid, username)\n return delegate.RawText(\n json.dumps(response(**result)), content_type=\"application/json\"\n )\n elif i.action == 'unassign':\n CommunityEditsQueue.unassign_request(i.mrid)\n status = get_status_for_view(CommunityEditsQueue.STATUS['PENDING'])\n return delegate.RawText(json.dumps(response(newStatus=status)))\n else:\n if i.action == \"decline\":\n status = CommunityEditsQueue.STATUS['DECLINED']\n elif i.action == 'approve':\n status = CommunityEditsQueue.STATUS['MERGED']\n CommunityEditsQueue.update_request_status(\n i.mrid, status, username, comment=i.comment\n )\n return delegate.RawText(\n json.dumps(response()), content_type=\"application/json\"\n )\n elif i.rtype == \"merge-works\":\n if i.action == 'create':\n result = create_request(i.work_ids, username, i.comment)\n resp = (\n response(id=result)\n if result\n else response(\n status='error',\n error='A request to merge these works has already been submitted.',\n )\n )\n return delegate.RawText(\n json.dumps(resp), content_type=\"application/json\"\n )\n\n def GET(self):\n i = web.input(page=1, limit=25, mode=\"open\", submitter=None, reviewer=None)\n merge_requests = CommunityEditsQueue.get_requests(\n page=int(i.page),\n limit=int(i.limit),\n mode=i.mode,\n submitter=i.submitter,\n reviewer=i.reviewer,\n order='created',\n ).list()\n enriched_requests = self.enrich(merge_requests)\n total_found = CommunityEditsQueue.get_counts_by_mode(\n mode=i.mode, submitter=i.submitter, reviewer=i.reviewer\n )\n return render_template(\n 'merge_queue/merge_queue',\n total_found,\n merge_requests=enriched_requests,\n )\n\n def enrich(self, merge_requests):\n results = []\n for r in merge_requests:\n comments = r['comments']\n obj = {\n 'id': r['id'],\n 'submitter': r['submitter'],\n 'reviewer': r['reviewer'],\n 'url': r['url'],\n 'status': r['status'],\n 'comments': (comments and 
comments.get('comments')) or [],\n 'created': r['created'],\n 'updated': r['updated'],\n }\n olids = self.extract_olids(r['url'])\n obj['title'] = ''\n for olid in olids:\n book = web.ctx.site.get(f'/works/{olid}')\n if book:\n if not obj['title']:\n obj['title'] = book.title\n break\n\n results.append(obj)\n return results\n\n def extract_olids(self, url):\n query_string = url.split('?')[1]\n split_params = query_string.split('&')\n params = {}\n for p in split_params:\n kv = p.split('=')\n params[kv[0]] = kv[1]\n return params['records'].split(',')\n\n\nclass ui_partials(delegate.page):\n path = '/merges/partials'\n\n def GET(self):\n i = web.input(type=None, comment='')\n if i.type == 'comment':\n component = render_template('merge_queue/comment', comment_str=i.comment)\n return delegate.RawText(component)\n\n\ndef setup():\n pass\n", "path": "openlibrary/plugins/upstream/edits.py"}], "after_files": [{"content": "\"\"\"Librarian Edits\n\"\"\"\n\nimport json\nimport web\n\nfrom openlibrary import accounts\nfrom openlibrary.core.edits import CommunityEditsQueue, get_status_for_view\nfrom infogami.utils import delegate\nfrom infogami.utils.view import render_template\n\n\ndef create_request(olids: str, username: str, comment: str = None):\n work_ids = olids.split(',')\n return CommunityEditsQueue.submit_work_merge_request(\n work_ids,\n submitter=username,\n comment=comment,\n )\n\n\nclass community_edits_queue(delegate.page):\n path = '/merges'\n\n def POST(self):\n def response(status='ok', **kwargs):\n return {'status': status, **kwargs}\n\n i = web.input(\n work_ids=\"\", # Comma-separated OLIDs (OL1W,OL2W,OL3W,...,OL111W)\n rtype=\"merge-works\",\n mrid=None,\n action=None, # create, approve, decline, comment, unassign\n comment=None,\n )\n user = accounts.get_current_user()\n username = user['key'].split('/')[-1]\n if i.mrid: # We are updating an existing merge request\n if i.action == 'comment':\n if i.comment:\n CommunityEditsQueue.comment_request(i.mrid, username, i.comment)\n return delegate.RawText(\n json.dumps(response()), content_type=\"application/json\"\n )\n else:\n return delegate.RawText(\n json.dumps(\n response(\n status='error', error='No comment sent in request.'\n )\n )\n )\n elif i.action == 'claim':\n result = CommunityEditsQueue.assign_request(i.mrid, username)\n return delegate.RawText(\n json.dumps(response(**result)), content_type=\"application/json\"\n )\n elif i.action == 'unassign':\n CommunityEditsQueue.unassign_request(i.mrid)\n status = get_status_for_view(CommunityEditsQueue.STATUS['PENDING'])\n return delegate.RawText(json.dumps(response(newStatus=status)))\n else:\n if i.action == \"decline\":\n status = CommunityEditsQueue.STATUS['DECLINED']\n elif i.action == 'approve':\n status = CommunityEditsQueue.STATUS['MERGED']\n CommunityEditsQueue.update_request_status(\n i.mrid, status, username, comment=i.comment\n )\n return delegate.RawText(\n json.dumps(response()), content_type=\"application/json\"\n )\n elif i.rtype == \"merge-works\":\n if i.action == 'create':\n result = create_request(i.work_ids, username, i.comment)\n resp = (\n response(id=result)\n if result\n else response(\n status='error',\n error='A request to merge these works has already been submitted.',\n )\n )\n return delegate.RawText(\n json.dumps(resp), content_type=\"application/json\"\n )\n\n def GET(self):\n i = web.input(page=1, limit=25, mode=\"open\", submitter=None, reviewer=None)\n merge_requests = CommunityEditsQueue.get_requests(\n page=int(i.page),\n limit=int(i.limit),\n 
mode=i.mode,\n submitter=i.submitter,\n reviewer=i.reviewer,\n order='created desc',\n ).list()\n enriched_requests = self.enrich(merge_requests)\n total_found = CommunityEditsQueue.get_counts_by_mode(\n mode=i.mode, submitter=i.submitter, reviewer=i.reviewer\n )\n return render_template(\n 'merge_queue/merge_queue',\n total_found,\n merge_requests=enriched_requests,\n )\n\n def enrich(self, merge_requests):\n results = []\n for r in merge_requests:\n comments = r['comments']\n obj = {\n 'id': r['id'],\n 'submitter': r['submitter'],\n 'reviewer': r['reviewer'],\n 'url': r['url'],\n 'status': r['status'],\n 'comments': (comments and comments.get('comments')) or [],\n 'created': r['created'],\n 'updated': r['updated'],\n }\n olids = self.extract_olids(r['url'])\n obj['title'] = ''\n for olid in olids:\n book = web.ctx.site.get(f'/works/{olid}')\n if book:\n if not obj['title']:\n obj['title'] = book.title\n break\n\n results.append(obj)\n return results\n\n def extract_olids(self, url):\n query_string = url.split('?')[1]\n split_params = query_string.split('&')\n params = {}\n for p in split_params:\n kv = p.split('=')\n params[kv[0]] = kv[1]\n return params['records'].split(',')\n\n\nclass ui_partials(delegate.page):\n path = '/merges/partials'\n\n def GET(self):\n i = web.input(type=None, comment='')\n if i.type == 'comment':\n component = render_template('merge_queue/comment', comment_str=i.comment)\n return delegate.RawText(component)\n\n\ndef setup():\n pass\n", "path": "openlibrary/plugins/upstream/edits.py"}]} | 2,104 | 119 |
gh_patches_debug_10333 | rasdani/github-patches | git_diff | gammapy__gammapy-2259 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
fit.minos_contour parameter handling
When accessing `fit.minos_contour` with my spectral model, whose parameters are amplitude, reference, alpha and beta (corresponding to the strings `par_000_amplitude`, `par_001_alpha` and `par_002_beta`), `minos_contour` instead builds the parameter strings as `par_000_amplitude`, `par_001_reference`, `par_002_alpha` and `par_003_beta` when I pass the parameters, which gives me a key error saying the parameter is not found.
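A minimal, self-contained sketch of the mismatch (plain Python stand-ins rather than the actual Gammapy classes; it assumes `reference` is frozen and therefore absent from the free parameters, and the `parname` helper mirrors the `par_{:03d}_{}` format used in the file listed below):

```python
# Illustrative reproduction of the naming mismatch described above.
all_pars = ["amplitude", "reference", "alpha", "beta"]  # every model parameter, in order
free_pars = ["amplitude", "alpha", "beta"]              # "reference" is frozen

def parname(idx, name):
    return "par_{:03d}_{}".format(idx, name)

# The Minuit instance only knows names built from the *free* parameters:
known = [parname(i, n) for i, n in enumerate(free_pars)]
# -> ['par_000_amplitude', 'par_001_alpha', 'par_002_beta']

# Building the name from the index among *all* parameters gives a different string:
requested = parname(all_pars.index("alpha"), "alpha")
# -> 'par_002_alpha', which is not in `known`, hence the "parameter not found" key error
print(requested in known)  # False
```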
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gammapy/utils/fitting/iminuit.py`
Content:
```
1 # Licensed under a 3-clause BSD style license - see LICENSE.rst
2 """iminuit fitting functions."""
3 import logging
4 import numpy as np
5 from .likelihood import Likelihood
6
7 __all__ = ["optimize_iminuit", "covariance_iminuit", "confidence_iminuit", "mncontour"]
8
9 log = logging.getLogger(__name__)
10
11
12 class MinuitLikelihood(Likelihood):
13 """Likelihood function interface for iminuit."""
14
15 def fcn(self, *factors):
16 self.parameters.set_parameter_factors(factors)
17 return self.function()
18
19
20 def optimize_iminuit(parameters, function, **kwargs):
21 """iminuit optimization
22
23 Parameters
24 ----------
25 parameters : `~gammapy.utils.modeling.Parameters`
26 Parameters with starting values
27 function : callable
28 Likelihood function
29 **kwargs : dict
30 Options passed to `iminuit.Minuit` constructor. If there is an entry 'migrad_opts', those options
31 will be passed to `iminuit.Minuit.migrad()`.
32
33 Returns
34 -------
35 result : (factors, info, optimizer)
36 Tuple containing the best fit factors, some info and the optimizer instance.
37 """
38 from iminuit import Minuit
39
40 # In Gammapy, we have the factor 2 in the likelihood function
41 # This means `errordef=1` in the Minuit interface is correct
42 kwargs.setdefault("errordef", 1)
43 kwargs.setdefault("print_level", 0)
44 kwargs.update(make_minuit_par_kwargs(parameters))
45
46 minuit_func = MinuitLikelihood(function, parameters)
47
48 kwargs = kwargs.copy()
49 migrad_opts = kwargs.pop("migrad_opts", {})
50 minuit = Minuit(minuit_func.fcn, **kwargs)
51 minuit.migrad(**migrad_opts)
52
53 factors = minuit.args
54 info = {
55 "success": minuit.migrad_ok(),
56 "nfev": minuit.get_num_call_fcn(),
57 "message": _get_message(minuit),
58 }
59 optimizer = minuit
60
61 return factors, info, optimizer
62
63
64 def covariance_iminuit(minuit):
65 # TODO: add minuit.hesse() call once we have better tests
66
67 message, success = "Hesse terminated successfully.", True
68 try:
69 covariance_factors = minuit.np_covariance()
70 except (TypeError, RuntimeError):
71 N = len(minuit.args)
72 covariance_factors = np.nan * np.ones((N, N))
73 message, success = "Hesse failed", False
74 return covariance_factors, {"success": success, "message": message}
75
76
77 def confidence_iminuit(minuit, parameters, parameter, sigma, maxcall=0):
78 # TODO: this is ugly - design something better for translating to MINUIT parameter names.
79 # Maybe a wrapper class MinuitParameters?
80 parameter = parameters[parameter]
81 idx = parameters.free_parameters.index(parameter)
82 var = _make_parname(idx, parameter)
83
84 message, success = "Minos terminated successfully.", True
85 try:
86 result = minuit.minos(var=var, sigma=sigma, maxcall=maxcall)
87 info = result[var]
88 except RuntimeError as error:
89 message, success = str(error), False
90 info = {"is_valid": False, "lower": np.nan, "upper": np.nan, "nfcn": 0}
91
92 return {
93 "success": success,
94 "message": message,
95 "errp": info["upper"],
96 "errn": -info["lower"],
97 "nfev": info["nfcn"],
98 }
99
100
101 def mncontour(minuit, parameters, x, y, numpoints, sigma):
102 idx = parameters._get_idx(x)
103 x = _make_parname(idx, parameters[idx])
104
105 idx = parameters._get_idx(y)
106 y = _make_parname(idx, parameters[idx])
107
108 x_info, y_info, contour = minuit.mncontour(x, y, numpoints, sigma)
109 contour = np.array(contour)
110
111 success = x_info["is_valid"] and y_info["is_valid"]
112
113 return {
114 "success": success,
115 "x": contour[:, 0],
116 "y": contour[:, 1],
117 "x_info": x_info,
118 "y_info": y_info,
119 }
120
121
122 # this code is copied from https://github.com/iminuit/iminuit/blob/master/iminuit/_minimize.py#L95
123 def _get_message(m):
124 message = "Optimization terminated successfully."
125 success = m.migrad_ok()
126 if not success:
127 message = "Optimization failed."
128 fmin = m.get_fmin()
129 if fmin.has_reached_call_limit:
130 message += " Call limit was reached."
131 if fmin.is_above_max_edm:
132 message += " Estimated distance to minimum too large."
133 return message
134
135
136 def _make_parnames(parameters):
137 return [_make_parname(idx, par) for idx, par in enumerate(parameters)]
138
139
140 def _make_parname(idx, par):
141 return "par_{:03d}_{}".format(idx, par.name)
142
143
144 def make_minuit_par_kwargs(parameters):
145 """Create *Parameter Keyword Arguments* for the `Minuit` constructor.
146
147 See: http://iminuit.readthedocs.io/en/latest/api.html#iminuit.Minuit
148 """
149 names = _make_parnames(parameters.free_parameters)
150 kwargs = {"forced_parameters": names}
151
152 for name, par in zip(names, parameters.free_parameters):
153 kwargs[name] = par.factor
154
155 min_ = None if np.isnan(par.factor_min) else par.factor_min
156 max_ = None if np.isnan(par.factor_max) else par.factor_max
157 kwargs["limit_{}".format(name)] = (min_, max_)
158
159 if parameters.covariance is not None:
160 error = parameters.error(par) / par.scale
161 elif parameters.apply_autoscale:
162 error = 1
163 else:
164 error = 1
165 log.warning(
166 "Neither covariance matrix set nor auto-scaling of parameters activated."
167 "Assuming stepsize of 1, which could lead to convergence problems of the "
168 "Minuit optimizer."
169 )
170
171 kwargs["error_{}".format(name)] = error
172
173 return kwargs
174
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gammapy/utils/fitting/iminuit.py b/gammapy/utils/fitting/iminuit.py
--- a/gammapy/utils/fitting/iminuit.py
+++ b/gammapy/utils/fitting/iminuit.py
@@ -99,11 +99,13 @@
def mncontour(minuit, parameters, x, y, numpoints, sigma):
- idx = parameters._get_idx(x)
- x = _make_parname(idx, parameters[idx])
+ par_x = parameters[x]
+ idx_x = parameters.free_parameters.index(par_x)
+ x = _make_parname(idx_x, par_x)
- idx = parameters._get_idx(y)
- y = _make_parname(idx, parameters[idx])
+ par_y = parameters[y]
+ idx_y = parameters.free_parameters.index(par_y)
+ y = _make_parname(idx_y, par_y)
x_info, y_info, contour = minuit.mncontour(x, y, numpoints, sigma)
contour = np.array(contour)
| {"golden_diff": "diff --git a/gammapy/utils/fitting/iminuit.py b/gammapy/utils/fitting/iminuit.py\n--- a/gammapy/utils/fitting/iminuit.py\n+++ b/gammapy/utils/fitting/iminuit.py\n@@ -99,11 +99,13 @@\n \n \n def mncontour(minuit, parameters, x, y, numpoints, sigma):\n- idx = parameters._get_idx(x)\n- x = _make_parname(idx, parameters[idx])\n+ par_x = parameters[x]\n+ idx_x = parameters.free_parameters.index(par_x)\n+ x = _make_parname(idx_x, par_x)\n \n- idx = parameters._get_idx(y)\n- y = _make_parname(idx, parameters[idx])\n+ par_y = parameters[y]\n+ idx_y = parameters.free_parameters.index(par_y)\n+ y = _make_parname(idx_y, par_y)\n \n x_info, y_info, contour = minuit.mncontour(x, y, numpoints, sigma)\n contour = np.array(contour)\n", "issue": "fit.minos_contour parameter handling\nWhile accesing the fit.minos_contour with my Spectral model with parameters(amplitude, reference, alpha and beta which corresponds to strings par_000_amplitude, par_001_alpha, par_002_beta) but with the minos_contour when I am passing the parameters it is taking strings(par_000_amplitude, par_001_reference, par_002_alpha, par_003_beta) which is giivng me the key error of parameter not found.\nfit.minos_contour parameter handling\nWhile accesing the fit.minos_contour with my Spectral model with parameters(amplitude, reference, alpha and beta which corresponds to strings par_000_amplitude, par_001_alpha, par_002_beta) but with the minos_contour when I am passing the parameters it is taking strings(par_000_amplitude, par_001_reference, par_002_alpha, par_003_beta) which is giivng me the key error of parameter not found.\n", "before_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\"\"\"iminuit fitting functions.\"\"\"\nimport logging\nimport numpy as np\nfrom .likelihood import Likelihood\n\n__all__ = [\"optimize_iminuit\", \"covariance_iminuit\", \"confidence_iminuit\", \"mncontour\"]\n\nlog = logging.getLogger(__name__)\n\n\nclass MinuitLikelihood(Likelihood):\n \"\"\"Likelihood function interface for iminuit.\"\"\"\n\n def fcn(self, *factors):\n self.parameters.set_parameter_factors(factors)\n return self.function()\n\n\ndef optimize_iminuit(parameters, function, **kwargs):\n \"\"\"iminuit optimization\n\n Parameters\n ----------\n parameters : `~gammapy.utils.modeling.Parameters`\n Parameters with starting values\n function : callable\n Likelihood function\n **kwargs : dict\n Options passed to `iminuit.Minuit` constructor. 
If there is an entry 'migrad_opts', those options\n will be passed to `iminuit.Minuit.migrad()`.\n\n Returns\n -------\n result : (factors, info, optimizer)\n Tuple containing the best fit factors, some info and the optimizer instance.\n \"\"\"\n from iminuit import Minuit\n\n # In Gammapy, we have the factor 2 in the likelihood function\n # This means `errordef=1` in the Minuit interface is correct\n kwargs.setdefault(\"errordef\", 1)\n kwargs.setdefault(\"print_level\", 0)\n kwargs.update(make_minuit_par_kwargs(parameters))\n\n minuit_func = MinuitLikelihood(function, parameters)\n\n kwargs = kwargs.copy()\n migrad_opts = kwargs.pop(\"migrad_opts\", {})\n minuit = Minuit(minuit_func.fcn, **kwargs)\n minuit.migrad(**migrad_opts)\n\n factors = minuit.args\n info = {\n \"success\": minuit.migrad_ok(),\n \"nfev\": minuit.get_num_call_fcn(),\n \"message\": _get_message(minuit),\n }\n optimizer = minuit\n\n return factors, info, optimizer\n\n\ndef covariance_iminuit(minuit):\n # TODO: add minuit.hesse() call once we have better tests\n\n message, success = \"Hesse terminated successfully.\", True\n try:\n covariance_factors = minuit.np_covariance()\n except (TypeError, RuntimeError):\n N = len(minuit.args)\n covariance_factors = np.nan * np.ones((N, N))\n message, success = \"Hesse failed\", False\n return covariance_factors, {\"success\": success, \"message\": message}\n\n\ndef confidence_iminuit(minuit, parameters, parameter, sigma, maxcall=0):\n # TODO: this is ugly - design something better for translating to MINUIT parameter names.\n # Maybe a wrapper class MinuitParameters?\n parameter = parameters[parameter]\n idx = parameters.free_parameters.index(parameter)\n var = _make_parname(idx, parameter)\n\n message, success = \"Minos terminated successfully.\", True\n try:\n result = minuit.minos(var=var, sigma=sigma, maxcall=maxcall)\n info = result[var]\n except RuntimeError as error:\n message, success = str(error), False\n info = {\"is_valid\": False, \"lower\": np.nan, \"upper\": np.nan, \"nfcn\": 0}\n\n return {\n \"success\": success,\n \"message\": message,\n \"errp\": info[\"upper\"],\n \"errn\": -info[\"lower\"],\n \"nfev\": info[\"nfcn\"],\n }\n\n\ndef mncontour(minuit, parameters, x, y, numpoints, sigma):\n idx = parameters._get_idx(x)\n x = _make_parname(idx, parameters[idx])\n\n idx = parameters._get_idx(y)\n y = _make_parname(idx, parameters[idx])\n\n x_info, y_info, contour = minuit.mncontour(x, y, numpoints, sigma)\n contour = np.array(contour)\n\n success = x_info[\"is_valid\"] and y_info[\"is_valid\"]\n\n return {\n \"success\": success,\n \"x\": contour[:, 0],\n \"y\": contour[:, 1],\n \"x_info\": x_info,\n \"y_info\": y_info,\n }\n\n\n# this code is copied from https://github.com/iminuit/iminuit/blob/master/iminuit/_minimize.py#L95\ndef _get_message(m):\n message = \"Optimization terminated successfully.\"\n success = m.migrad_ok()\n if not success:\n message = \"Optimization failed.\"\n fmin = m.get_fmin()\n if fmin.has_reached_call_limit:\n message += \" Call limit was reached.\"\n if fmin.is_above_max_edm:\n message += \" Estimated distance to minimum too large.\"\n return message\n\n\ndef _make_parnames(parameters):\n return [_make_parname(idx, par) for idx, par in enumerate(parameters)]\n\n\ndef _make_parname(idx, par):\n return \"par_{:03d}_{}\".format(idx, par.name)\n\n\ndef make_minuit_par_kwargs(parameters):\n \"\"\"Create *Parameter Keyword Arguments* for the `Minuit` constructor.\n\n See: http://iminuit.readthedocs.io/en/latest/api.html#iminuit.Minuit\n \"\"\"\n 
names = _make_parnames(parameters.free_parameters)\n kwargs = {\"forced_parameters\": names}\n\n for name, par in zip(names, parameters.free_parameters):\n kwargs[name] = par.factor\n\n min_ = None if np.isnan(par.factor_min) else par.factor_min\n max_ = None if np.isnan(par.factor_max) else par.factor_max\n kwargs[\"limit_{}\".format(name)] = (min_, max_)\n\n if parameters.covariance is not None:\n error = parameters.error(par) / par.scale\n elif parameters.apply_autoscale:\n error = 1\n else:\n error = 1\n log.warning(\n \"Neither covariance matrix set nor auto-scaling of parameters activated.\"\n \"Assuming stepsize of 1, which could lead to convergence problems of the \"\n \"Minuit optimizer.\"\n )\n\n kwargs[\"error_{}\".format(name)] = error\n\n return kwargs\n", "path": "gammapy/utils/fitting/iminuit.py"}], "after_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\"\"\"iminuit fitting functions.\"\"\"\nimport logging\nimport numpy as np\nfrom .likelihood import Likelihood\n\n__all__ = [\"optimize_iminuit\", \"covariance_iminuit\", \"confidence_iminuit\", \"mncontour\"]\n\nlog = logging.getLogger(__name__)\n\n\nclass MinuitLikelihood(Likelihood):\n \"\"\"Likelihood function interface for iminuit.\"\"\"\n\n def fcn(self, *factors):\n self.parameters.set_parameter_factors(factors)\n return self.function()\n\n\ndef optimize_iminuit(parameters, function, **kwargs):\n \"\"\"iminuit optimization\n\n Parameters\n ----------\n parameters : `~gammapy.utils.modeling.Parameters`\n Parameters with starting values\n function : callable\n Likelihood function\n **kwargs : dict\n Options passed to `iminuit.Minuit` constructor. If there is an entry 'migrad_opts', those options\n will be passed to `iminuit.Minuit.migrad()`.\n\n Returns\n -------\n result : (factors, info, optimizer)\n Tuple containing the best fit factors, some info and the optimizer instance.\n \"\"\"\n from iminuit import Minuit\n\n # In Gammapy, we have the factor 2 in the likelihood function\n # This means `errordef=1` in the Minuit interface is correct\n kwargs.setdefault(\"errordef\", 1)\n kwargs.setdefault(\"print_level\", 0)\n kwargs.update(make_minuit_par_kwargs(parameters))\n\n minuit_func = MinuitLikelihood(function, parameters)\n\n kwargs = kwargs.copy()\n migrad_opts = kwargs.pop(\"migrad_opts\", {})\n minuit = Minuit(minuit_func.fcn, **kwargs)\n minuit.migrad(**migrad_opts)\n\n factors = minuit.args\n info = {\n \"success\": minuit.migrad_ok(),\n \"nfev\": minuit.get_num_call_fcn(),\n \"message\": _get_message(minuit),\n }\n optimizer = minuit\n\n return factors, info, optimizer\n\n\ndef covariance_iminuit(minuit):\n # TODO: add minuit.hesse() call once we have better tests\n\n message, success = \"Hesse terminated successfully.\", True\n try:\n covariance_factors = minuit.np_covariance()\n except (TypeError, RuntimeError):\n N = len(minuit.args)\n covariance_factors = np.nan * np.ones((N, N))\n message, success = \"Hesse failed\", False\n return covariance_factors, {\"success\": success, \"message\": message}\n\n\ndef confidence_iminuit(minuit, parameters, parameter, sigma, maxcall=0):\n # TODO: this is ugly - design something better for translating to MINUIT parameter names.\n # Maybe a wrapper class MinuitParameters?\n parameter = parameters[parameter]\n idx = parameters.free_parameters.index(parameter)\n var = _make_parname(idx, parameter)\n\n message, success = \"Minos terminated successfully.\", True\n try:\n result = minuit.minos(var=var, sigma=sigma, maxcall=maxcall)\n info = 
result[var]\n except RuntimeError as error:\n message, success = str(error), False\n info = {\"is_valid\": False, \"lower\": np.nan, \"upper\": np.nan, \"nfcn\": 0}\n\n return {\n \"success\": success,\n \"message\": message,\n \"errp\": info[\"upper\"],\n \"errn\": -info[\"lower\"],\n \"nfev\": info[\"nfcn\"],\n }\n\n\ndef mncontour(minuit, parameters, x, y, numpoints, sigma):\n par_x = parameters[x]\n idx_x = parameters.free_parameters.index(par_x)\n x = _make_parname(idx_x, par_x)\n\n par_y = parameters[y]\n idx_y = parameters.free_parameters.index(par_y)\n y = _make_parname(idx_y, par_y)\n\n x_info, y_info, contour = minuit.mncontour(x, y, numpoints, sigma)\n contour = np.array(contour)\n\n success = x_info[\"is_valid\"] and y_info[\"is_valid\"]\n\n return {\n \"success\": success,\n \"x\": contour[:, 0],\n \"y\": contour[:, 1],\n \"x_info\": x_info,\n \"y_info\": y_info,\n }\n\n\n# this code is copied from https://github.com/iminuit/iminuit/blob/master/iminuit/_minimize.py#L95\ndef _get_message(m):\n message = \"Optimization terminated successfully.\"\n success = m.migrad_ok()\n if not success:\n message = \"Optimization failed.\"\n fmin = m.get_fmin()\n if fmin.has_reached_call_limit:\n message += \" Call limit was reached.\"\n if fmin.is_above_max_edm:\n message += \" Estimated distance to minimum too large.\"\n return message\n\n\ndef _make_parnames(parameters):\n return [_make_parname(idx, par) for idx, par in enumerate(parameters)]\n\n\ndef _make_parname(idx, par):\n return \"par_{:03d}_{}\".format(idx, par.name)\n\n\ndef make_minuit_par_kwargs(parameters):\n \"\"\"Create *Parameter Keyword Arguments* for the `Minuit` constructor.\n\n See: http://iminuit.readthedocs.io/en/latest/api.html#iminuit.Minuit\n \"\"\"\n names = _make_parnames(parameters.free_parameters)\n kwargs = {\"forced_parameters\": names}\n\n for name, par in zip(names, parameters.free_parameters):\n kwargs[name] = par.factor\n\n min_ = None if np.isnan(par.factor_min) else par.factor_min\n max_ = None if np.isnan(par.factor_max) else par.factor_max\n kwargs[\"limit_{}\".format(name)] = (min_, max_)\n\n if parameters.covariance is not None:\n error = parameters.error(par) / par.scale\n elif parameters.apply_autoscale:\n error = 1\n else:\n error = 1\n log.warning(\n \"Neither covariance matrix set nor auto-scaling of parameters activated.\"\n \"Assuming stepsize of 1, which could lead to convergence problems of the \"\n \"Minuit optimizer.\"\n )\n\n kwargs[\"error_{}\".format(name)] = error\n\n return kwargs\n", "path": "gammapy/utils/fitting/iminuit.py"}]} | 2,288 | 243 |
gh_patches_debug_5602 | rasdani/github-patches | git_diff | nf-core__tools-1720 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
modules info broken in dev
### Description of the bug
Running `nf-core modules info` on `nf-core/tools version 2.5.dev0` produces a `TypeError`
### Command used and terminal output
```console
nf-core modules info
TypeError: __init__() takes 6 positional arguments but 7 were given
/Users/arthurgymer/.pyenv/versions/3.9.7/envs/nf-core/lib/python3.9/site-packages/nf_core/__main__.py:711 in info │
│ │
│ 710 │ try: │
│ ❱ 711 │ │ module_info = nf_core.modules.ModuleInfo( │
│ 712 │ │ │ dir,
```
### System information
nf-core/tools version 2.5.dev0
python 3.9.7
macOS
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nf_core/modules/info.py`
Content:
```
1 import logging
2 import os
3
4 import questionary
5 import yaml
6 from rich import box
7 from rich.console import Group
8 from rich.markdown import Markdown
9 from rich.panel import Panel
10 from rich.table import Table
11 from rich.text import Text
12
13 import nf_core.utils
14
15 from .module_utils import get_repo_type
16 from .modules_command import ModuleCommand
17 from .modules_repo import NF_CORE_MODULES_REMOTE
18
19 log = logging.getLogger(__name__)
20
21
22 class ModuleInfo(ModuleCommand):
23 def __init__(self, pipeline_dir, tool, remote_url, branch, no_pull):
24 super().__init__(pipeline_dir, remote_url, branch, no_pull)
25 self.meta = None
26 self.local_path = None
27 self.remote_location = None
28
29 # Quietly check if this is a pipeline or not
30 if pipeline_dir:
31 try:
32 pipeline_dir, repo_type = get_repo_type(pipeline_dir, use_prompt=False)
33 log.debug(f"Found {repo_type} repo: {pipeline_dir}")
34 except UserWarning as e:
35 log.debug(f"Only showing remote info: {e}")
36 pipeline_dir = None
37
38 self.get_pipeline_modules()
39 self.module = self.init_mod_name(tool)
40
41 def init_mod_name(self, module):
42 """
43 Makes sure that we have a module name before proceeding.
44
45 Args:
46 module: str: Module name to check
47 """
48 if module is None:
49 local = questionary.confirm(
50 "Is the module locally installed?", style=nf_core.utils.nfcore_question_style
51 ).unsafe_ask()
52 if local:
53 if self.repo_type == "modules":
54 modules = self.module_names["modules"]
55 else:
56 modules = self.module_names.get(self.modules_repo.fullname)
57 if modules is None:
58 raise UserWarning(f"No modules installed from '{self.modules_repo.remote_url}'")
59 else:
60 modules = self.modules_repo.get_avail_modules()
61 module = questionary.autocomplete(
62 "Please select a module", choices=modules, style=nf_core.utils.nfcore_question_style
63 ).unsafe_ask()
64 while module not in modules:
65 log.info(f"'{module}' is not a valid module name")
66 module = questionary.autocomplete(
67 "Please select a new module", choices=modules, style=nf_core.utils.nfcore_question_style
68 ).unsafe_ask()
69
70 return module
71
72 def get_module_info(self):
73 """Given the name of a module, parse meta.yml and print usage help."""
74
75 # Running with a local install, try to find the local meta
76 if self.dir:
77 self.meta = self.get_local_yaml()
78
79 # Either failed locally or in remote mode
80 if not self.meta:
81 self.meta = self.get_remote_yaml()
82
83 # Could not find the meta
84 if self.meta is False:
85 raise UserWarning(f"Could not find module '{self.module}'")
86
87 return self.generate_module_info_help()
88
89 def get_local_yaml(self):
90 """Attempt to get the meta.yml file from a locally installed module.
91
92 Returns:
93 dict or bool: Parsed meta.yml found, False otherwise
94 """
95
96 if self.repo_type == "pipeline":
97 # Try to find and load the meta.yml file
98 repo_name = self.modules_repo.fullname
99 module_base_path = os.path.join(self.dir, "modules", repo_name)
100 # Check that we have any modules installed from this repo
101 modules = self.module_names.get(repo_name)
102 if modules is None:
103 raise LookupError(f"No modules installed from {self.modules_repo.remote_url}")
104
105 if self.module in modules:
106 mod_dir = os.path.join(module_base_path, self.module)
107 meta_fn = os.path.join(mod_dir, "meta.yml")
108 if os.path.exists(meta_fn):
109 log.debug(f"Found local file: {meta_fn}")
110 with open(meta_fn, "r") as fh:
111 self.local_path = mod_dir
112 return yaml.safe_load(fh)
113
114 log.debug(f"Module '{self.module}' meta.yml not found locally")
115 else:
116 module_base_path = os.path.join(self.dir, "modules")
117 if self.module in os.listdir(module_base_path):
118 mod_dir = os.path.join(module_base_path, self.module)
119 meta_fn = os.path.join(mod_dir, "meta.yml")
120 if os.path.exists(meta_fn):
121 log.debug(f"Found local file: {meta_fn}")
122 with open(meta_fn, "r") as fh:
123 self.local_path = mod_dir
124 return yaml.safe_load(fh)
125 log.debug(f"Module '{self.module}' meta.yml not found locally")
126
127 return None
128
129 def get_remote_yaml(self):
130 """Attempt to get the meta.yml file from a remote repo.
131
132 Returns:
133 dict or bool: Parsed meta.yml found, False otherwise
134 """
135 # Check if our requested module is there
136 if self.module not in self.modules_repo.get_avail_modules():
137 return False
138
139 file_contents = self.modules_repo.get_meta_yml(self.module)
140 if file_contents is None:
141 return False
142 self.remote_location = self.modules_repo.remote_url
143 return yaml.safe_load(file_contents)
144
145 def generate_module_info_help(self):
146 """Take the parsed meta.yml and generate rich help.
147
148 Returns:
149 rich renderable
150 """
151
152 renderables = []
153
154 # Intro panel
155 intro_text = Text()
156 if self.local_path:
157 intro_text.append(Text.from_markup(f"Location: [blue]{self.local_path}\n"))
158 elif self.remote_location:
159 intro_text.append(
160 Text.from_markup(
161 f":globe_with_meridians: Repository: [link=https://github.com/{self.remote_location}]{self.remote_location}[/]\n"
162 )
163 )
164
165 if self.meta.get("tools"):
166 tools_strings = []
167 for tool in self.meta["tools"]:
168 for tool_name, tool_meta in tool.items():
169 if "homepage" in tool_meta:
170 tools_strings.append(f"[link={tool_meta['homepage']}]{tool_name}[/link]")
171 else:
172 tools_strings.append(f"{tool_name}")
173 intro_text.append(Text.from_markup(f":wrench: Tools: {', '.join(tools_strings)}\n", style="dim"))
174
175 if self.meta.get("description"):
176 intro_text.append(Text.from_markup(f":book: Description: {self.meta['description']}", style="dim"))
177
178 renderables.append(
179 Panel(
180 intro_text,
181 title=f"[bold]Module: [green]{self.module}\n",
182 title_align="left",
183 )
184 )
185
186 # Inputs
187 if self.meta.get("input"):
188 inputs_table = Table(expand=True, show_lines=True, box=box.MINIMAL_HEAVY_HEAD, padding=0)
189 inputs_table.add_column(":inbox_tray: Inputs")
190 inputs_table.add_column("Description")
191 inputs_table.add_column("Pattern", justify="right", style="green")
192 for input in self.meta["input"]:
193 for key, info in input.items():
194 inputs_table.add_row(
195 f"[orange1 on black] {key} [/][dim i] ({info['type']})",
196 Markdown(info["description"] if info["description"] else ""),
197 info.get("pattern", ""),
198 )
199
200 renderables.append(inputs_table)
201
202 # Outputs
203 if self.meta.get("output"):
204 outputs_table = Table(expand=True, show_lines=True, box=box.MINIMAL_HEAVY_HEAD, padding=0)
205 outputs_table.add_column(":outbox_tray: Outputs")
206 outputs_table.add_column("Description")
207 outputs_table.add_column("Pattern", justify="right", style="green")
208 for output in self.meta["output"]:
209 for key, info in output.items():
210 outputs_table.add_row(
211 f"[orange1 on black] {key} [/][dim i] ({info['type']})",
212 Markdown(info["description"] if info["description"] else ""),
213 info.get("pattern", ""),
214 )
215
216 renderables.append(outputs_table)
217
218 # Installation command
219 if self.remote_location:
220 cmd_base = "nf-core modules"
221 if self.remote_location != NF_CORE_MODULES_REMOTE:
222 cmd_base = f"nf-core modules --git-remote {self.remote_location}"
223 renderables.append(
224 Text.from_markup(f"\n :computer: Installation command: [magenta]{cmd_base} install {self.module}\n")
225 )
226
227 return Group(*renderables)
228
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nf_core/modules/info.py b/nf_core/modules/info.py
--- a/nf_core/modules/info.py
+++ b/nf_core/modules/info.py
@@ -20,8 +20,8 @@
class ModuleInfo(ModuleCommand):
- def __init__(self, pipeline_dir, tool, remote_url, branch, no_pull):
- super().__init__(pipeline_dir, remote_url, branch, no_pull)
+ def __init__(self, pipeline_dir, tool, remote_url, branch, no_pull, base_path):
+ super().__init__(pipeline_dir, remote_url, branch, no_pull, base_path)
self.meta = None
self.local_path = None
self.remote_location = None
| {"golden_diff": "diff --git a/nf_core/modules/info.py b/nf_core/modules/info.py\n--- a/nf_core/modules/info.py\n+++ b/nf_core/modules/info.py\n@@ -20,8 +20,8 @@\n \n \n class ModuleInfo(ModuleCommand):\n- def __init__(self, pipeline_dir, tool, remote_url, branch, no_pull):\n- super().__init__(pipeline_dir, remote_url, branch, no_pull)\n+ def __init__(self, pipeline_dir, tool, remote_url, branch, no_pull, base_path):\n+ super().__init__(pipeline_dir, remote_url, branch, no_pull, base_path)\n self.meta = None\n self.local_path = None\n self.remote_location = None\n", "issue": "modules info broken in dev \n### Description of the bug\n\nRunning `nf-core modules info` on `nf-core/tools version 2.5.dev0` produces a `TypeError`\n\n### Command used and terminal output\n\n```console\nnf-core modules info\r\n\r\nTypeError: __init__() takes 6 positional arguments but 7 were given\r\n\r\n/Users/arthurgymer/.pyenv/versions/3.9.7/envs/nf-core/lib/python3.9/site-packages/nf_core/__main__.py:711 in info \u2502\r\n\u2502 \u2502\r\n\u2502 710 \u2502 try: \u2502\r\n\u2502 \u2771 711 \u2502 \u2502 module_info = nf_core.modules.ModuleInfo( \u2502\r\n\u2502 712 \u2502 \u2502 \u2502 dir,\n```\n\n\n### System information\n\nnf-core/tools version 2.5.dev0\r\npython 3.9.7\r\nmacOS\r\n\n", "before_files": [{"content": "import logging\nimport os\n\nimport questionary\nimport yaml\nfrom rich import box\nfrom rich.console import Group\nfrom rich.markdown import Markdown\nfrom rich.panel import Panel\nfrom rich.table import Table\nfrom rich.text import Text\n\nimport nf_core.utils\n\nfrom .module_utils import get_repo_type\nfrom .modules_command import ModuleCommand\nfrom .modules_repo import NF_CORE_MODULES_REMOTE\n\nlog = logging.getLogger(__name__)\n\n\nclass ModuleInfo(ModuleCommand):\n def __init__(self, pipeline_dir, tool, remote_url, branch, no_pull):\n super().__init__(pipeline_dir, remote_url, branch, no_pull)\n self.meta = None\n self.local_path = None\n self.remote_location = None\n\n # Quietly check if this is a pipeline or not\n if pipeline_dir:\n try:\n pipeline_dir, repo_type = get_repo_type(pipeline_dir, use_prompt=False)\n log.debug(f\"Found {repo_type} repo: {pipeline_dir}\")\n except UserWarning as e:\n log.debug(f\"Only showing remote info: {e}\")\n pipeline_dir = None\n\n self.get_pipeline_modules()\n self.module = self.init_mod_name(tool)\n\n def init_mod_name(self, module):\n \"\"\"\n Makes sure that we have a module name before proceeding.\n\n Args:\n module: str: Module name to check\n \"\"\"\n if module is None:\n local = questionary.confirm(\n \"Is the module locally installed?\", style=nf_core.utils.nfcore_question_style\n ).unsafe_ask()\n if local:\n if self.repo_type == \"modules\":\n modules = self.module_names[\"modules\"]\n else:\n modules = self.module_names.get(self.modules_repo.fullname)\n if modules is None:\n raise UserWarning(f\"No modules installed from '{self.modules_repo.remote_url}'\")\n else:\n modules = self.modules_repo.get_avail_modules()\n module = questionary.autocomplete(\n \"Please select a module\", choices=modules, style=nf_core.utils.nfcore_question_style\n ).unsafe_ask()\n while module not in modules:\n log.info(f\"'{module}' is not a valid module name\")\n module = questionary.autocomplete(\n \"Please select a new module\", choices=modules, style=nf_core.utils.nfcore_question_style\n ).unsafe_ask()\n\n return module\n\n def get_module_info(self):\n \"\"\"Given the name of a module, parse meta.yml and print usage help.\"\"\"\n\n # Running with a local install, try 
to find the local meta\n if self.dir:\n self.meta = self.get_local_yaml()\n\n # Either failed locally or in remote mode\n if not self.meta:\n self.meta = self.get_remote_yaml()\n\n # Could not find the meta\n if self.meta is False:\n raise UserWarning(f\"Could not find module '{self.module}'\")\n\n return self.generate_module_info_help()\n\n def get_local_yaml(self):\n \"\"\"Attempt to get the meta.yml file from a locally installed module.\n\n Returns:\n dict or bool: Parsed meta.yml found, False otherwise\n \"\"\"\n\n if self.repo_type == \"pipeline\":\n # Try to find and load the meta.yml file\n repo_name = self.modules_repo.fullname\n module_base_path = os.path.join(self.dir, \"modules\", repo_name)\n # Check that we have any modules installed from this repo\n modules = self.module_names.get(repo_name)\n if modules is None:\n raise LookupError(f\"No modules installed from {self.modules_repo.remote_url}\")\n\n if self.module in modules:\n mod_dir = os.path.join(module_base_path, self.module)\n meta_fn = os.path.join(mod_dir, \"meta.yml\")\n if os.path.exists(meta_fn):\n log.debug(f\"Found local file: {meta_fn}\")\n with open(meta_fn, \"r\") as fh:\n self.local_path = mod_dir\n return yaml.safe_load(fh)\n\n log.debug(f\"Module '{self.module}' meta.yml not found locally\")\n else:\n module_base_path = os.path.join(self.dir, \"modules\")\n if self.module in os.listdir(module_base_path):\n mod_dir = os.path.join(module_base_path, self.module)\n meta_fn = os.path.join(mod_dir, \"meta.yml\")\n if os.path.exists(meta_fn):\n log.debug(f\"Found local file: {meta_fn}\")\n with open(meta_fn, \"r\") as fh:\n self.local_path = mod_dir\n return yaml.safe_load(fh)\n log.debug(f\"Module '{self.module}' meta.yml not found locally\")\n\n return None\n\n def get_remote_yaml(self):\n \"\"\"Attempt to get the meta.yml file from a remote repo.\n\n Returns:\n dict or bool: Parsed meta.yml found, False otherwise\n \"\"\"\n # Check if our requested module is there\n if self.module not in self.modules_repo.get_avail_modules():\n return False\n\n file_contents = self.modules_repo.get_meta_yml(self.module)\n if file_contents is None:\n return False\n self.remote_location = self.modules_repo.remote_url\n return yaml.safe_load(file_contents)\n\n def generate_module_info_help(self):\n \"\"\"Take the parsed meta.yml and generate rich help.\n\n Returns:\n rich renderable\n \"\"\"\n\n renderables = []\n\n # Intro panel\n intro_text = Text()\n if self.local_path:\n intro_text.append(Text.from_markup(f\"Location: [blue]{self.local_path}\\n\"))\n elif self.remote_location:\n intro_text.append(\n Text.from_markup(\n f\":globe_with_meridians: Repository: [link=https://github.com/{self.remote_location}]{self.remote_location}[/]\\n\"\n )\n )\n\n if self.meta.get(\"tools\"):\n tools_strings = []\n for tool in self.meta[\"tools\"]:\n for tool_name, tool_meta in tool.items():\n if \"homepage\" in tool_meta:\n tools_strings.append(f\"[link={tool_meta['homepage']}]{tool_name}[/link]\")\n else:\n tools_strings.append(f\"{tool_name}\")\n intro_text.append(Text.from_markup(f\":wrench: Tools: {', '.join(tools_strings)}\\n\", style=\"dim\"))\n\n if self.meta.get(\"description\"):\n intro_text.append(Text.from_markup(f\":book: Description: {self.meta['description']}\", style=\"dim\"))\n\n renderables.append(\n Panel(\n intro_text,\n title=f\"[bold]Module: [green]{self.module}\\n\",\n title_align=\"left\",\n )\n )\n\n # Inputs\n if self.meta.get(\"input\"):\n inputs_table = Table(expand=True, show_lines=True, box=box.MINIMAL_HEAVY_HEAD, 
padding=0)\n inputs_table.add_column(\":inbox_tray: Inputs\")\n inputs_table.add_column(\"Description\")\n inputs_table.add_column(\"Pattern\", justify=\"right\", style=\"green\")\n for input in self.meta[\"input\"]:\n for key, info in input.items():\n inputs_table.add_row(\n f\"[orange1 on black] {key} [/][dim i] ({info['type']})\",\n Markdown(info[\"description\"] if info[\"description\"] else \"\"),\n info.get(\"pattern\", \"\"),\n )\n\n renderables.append(inputs_table)\n\n # Outputs\n if self.meta.get(\"output\"):\n outputs_table = Table(expand=True, show_lines=True, box=box.MINIMAL_HEAVY_HEAD, padding=0)\n outputs_table.add_column(\":outbox_tray: Outputs\")\n outputs_table.add_column(\"Description\")\n outputs_table.add_column(\"Pattern\", justify=\"right\", style=\"green\")\n for output in self.meta[\"output\"]:\n for key, info in output.items():\n outputs_table.add_row(\n f\"[orange1 on black] {key} [/][dim i] ({info['type']})\",\n Markdown(info[\"description\"] if info[\"description\"] else \"\"),\n info.get(\"pattern\", \"\"),\n )\n\n renderables.append(outputs_table)\n\n # Installation command\n if self.remote_location:\n cmd_base = \"nf-core modules\"\n if self.remote_location != NF_CORE_MODULES_REMOTE:\n cmd_base = f\"nf-core modules --git-remote {self.remote_location}\"\n renderables.append(\n Text.from_markup(f\"\\n :computer: Installation command: [magenta]{cmd_base} install {self.module}\\n\")\n )\n\n return Group(*renderables)\n", "path": "nf_core/modules/info.py"}], "after_files": [{"content": "import logging\nimport os\n\nimport questionary\nimport yaml\nfrom rich import box\nfrom rich.console import Group\nfrom rich.markdown import Markdown\nfrom rich.panel import Panel\nfrom rich.table import Table\nfrom rich.text import Text\n\nimport nf_core.utils\n\nfrom .module_utils import get_repo_type\nfrom .modules_command import ModuleCommand\nfrom .modules_repo import NF_CORE_MODULES_REMOTE\n\nlog = logging.getLogger(__name__)\n\n\nclass ModuleInfo(ModuleCommand):\n def __init__(self, pipeline_dir, tool, remote_url, branch, no_pull, base_path):\n super().__init__(pipeline_dir, remote_url, branch, no_pull, base_path)\n self.meta = None\n self.local_path = None\n self.remote_location = None\n\n # Quietly check if this is a pipeline or not\n if pipeline_dir:\n try:\n pipeline_dir, repo_type = get_repo_type(pipeline_dir, use_prompt=False)\n log.debug(f\"Found {repo_type} repo: {pipeline_dir}\")\n except UserWarning as e:\n log.debug(f\"Only showing remote info: {e}\")\n pipeline_dir = None\n\n self.get_pipeline_modules()\n self.module = self.init_mod_name(tool)\n\n def init_mod_name(self, module):\n \"\"\"\n Makes sure that we have a module name before proceeding.\n\n Args:\n module: str: Module name to check\n \"\"\"\n if module is None:\n local = questionary.confirm(\n \"Is the module locally installed?\", style=nf_core.utils.nfcore_question_style\n ).unsafe_ask()\n if local:\n if self.repo_type == \"modules\":\n modules = self.module_names[\"modules\"]\n else:\n modules = self.module_names.get(self.modules_repo.fullname)\n if modules is None:\n raise UserWarning(f\"No modules installed from '{self.modules_repo.remote_url}'\")\n else:\n modules = self.modules_repo.get_avail_modules()\n module = questionary.autocomplete(\n \"Please select a module\", choices=modules, style=nf_core.utils.nfcore_question_style\n ).unsafe_ask()\n while module not in modules:\n log.info(f\"'{module}' is not a valid module name\")\n module = questionary.autocomplete(\n \"Please select a new module\", 
choices=modules, style=nf_core.utils.nfcore_question_style\n ).unsafe_ask()\n\n return module\n\n def get_module_info(self):\n \"\"\"Given the name of a module, parse meta.yml and print usage help.\"\"\"\n\n # Running with a local install, try to find the local meta\n if self.dir:\n self.meta = self.get_local_yaml()\n\n # Either failed locally or in remote mode\n if not self.meta:\n self.meta = self.get_remote_yaml()\n\n # Could not find the meta\n if self.meta is False:\n raise UserWarning(f\"Could not find module '{self.module}'\")\n\n return self.generate_module_info_help()\n\n def get_local_yaml(self):\n \"\"\"Attempt to get the meta.yml file from a locally installed module.\n\n Returns:\n dict or bool: Parsed meta.yml found, False otherwise\n \"\"\"\n\n if self.repo_type == \"pipeline\":\n # Try to find and load the meta.yml file\n repo_name = self.modules_repo.fullname\n module_base_path = os.path.join(self.dir, \"modules\", repo_name)\n # Check that we have any modules installed from this repo\n modules = self.module_names.get(repo_name)\n if modules is None:\n raise LookupError(f\"No modules installed from {self.modules_repo.remote_url}\")\n\n if self.module in modules:\n mod_dir = os.path.join(module_base_path, self.module)\n meta_fn = os.path.join(mod_dir, \"meta.yml\")\n if os.path.exists(meta_fn):\n log.debug(f\"Found local file: {meta_fn}\")\n with open(meta_fn, \"r\") as fh:\n self.local_path = mod_dir\n return yaml.safe_load(fh)\n\n log.debug(f\"Module '{self.module}' meta.yml not found locally\")\n else:\n module_base_path = os.path.join(self.dir, \"modules\")\n if self.module in os.listdir(module_base_path):\n mod_dir = os.path.join(module_base_path, self.module)\n meta_fn = os.path.join(mod_dir, \"meta.yml\")\n if os.path.exists(meta_fn):\n log.debug(f\"Found local file: {meta_fn}\")\n with open(meta_fn, \"r\") as fh:\n self.local_path = mod_dir\n return yaml.safe_load(fh)\n log.debug(f\"Module '{self.module}' meta.yml not found locally\")\n\n return None\n\n def get_remote_yaml(self):\n \"\"\"Attempt to get the meta.yml file from a remote repo.\n\n Returns:\n dict or bool: Parsed meta.yml found, False otherwise\n \"\"\"\n # Check if our requested module is there\n if self.module not in self.modules_repo.get_avail_modules():\n return False\n\n file_contents = self.modules_repo.get_meta_yml(self.module)\n if file_contents is None:\n return False\n self.remote_location = self.modules_repo.remote_url\n return yaml.safe_load(file_contents)\n\n def generate_module_info_help(self):\n \"\"\"Take the parsed meta.yml and generate rich help.\n\n Returns:\n rich renderable\n \"\"\"\n\n renderables = []\n\n # Intro panel\n intro_text = Text()\n if self.local_path:\n intro_text.append(Text.from_markup(f\"Location: [blue]{self.local_path}\\n\"))\n elif self.remote_location:\n intro_text.append(\n Text.from_markup(\n f\":globe_with_meridians: Repository: [link=https://github.com/{self.remote_location}]{self.remote_location}[/]\\n\"\n )\n )\n\n if self.meta.get(\"tools\"):\n tools_strings = []\n for tool in self.meta[\"tools\"]:\n for tool_name, tool_meta in tool.items():\n if \"homepage\" in tool_meta:\n tools_strings.append(f\"[link={tool_meta['homepage']}]{tool_name}[/link]\")\n else:\n tools_strings.append(f\"{tool_name}\")\n intro_text.append(Text.from_markup(f\":wrench: Tools: {', '.join(tools_strings)}\\n\", style=\"dim\"))\n\n if self.meta.get(\"description\"):\n intro_text.append(Text.from_markup(f\":book: Description: {self.meta['description']}\", style=\"dim\"))\n\n 
renderables.append(\n Panel(\n intro_text,\n title=f\"[bold]Module: [green]{self.module}\\n\",\n title_align=\"left\",\n )\n )\n\n # Inputs\n if self.meta.get(\"input\"):\n inputs_table = Table(expand=True, show_lines=True, box=box.MINIMAL_HEAVY_HEAD, padding=0)\n inputs_table.add_column(\":inbox_tray: Inputs\")\n inputs_table.add_column(\"Description\")\n inputs_table.add_column(\"Pattern\", justify=\"right\", style=\"green\")\n for input in self.meta[\"input\"]:\n for key, info in input.items():\n inputs_table.add_row(\n f\"[orange1 on black] {key} [/][dim i] ({info['type']})\",\n Markdown(info[\"description\"] if info[\"description\"] else \"\"),\n info.get(\"pattern\", \"\"),\n )\n\n renderables.append(inputs_table)\n\n # Outputs\n if self.meta.get(\"output\"):\n outputs_table = Table(expand=True, show_lines=True, box=box.MINIMAL_HEAVY_HEAD, padding=0)\n outputs_table.add_column(\":outbox_tray: Outputs\")\n outputs_table.add_column(\"Description\")\n outputs_table.add_column(\"Pattern\", justify=\"right\", style=\"green\")\n for output in self.meta[\"output\"]:\n for key, info in output.items():\n outputs_table.add_row(\n f\"[orange1 on black] {key} [/][dim i] ({info['type']})\",\n Markdown(info[\"description\"] if info[\"description\"] else \"\"),\n info.get(\"pattern\", \"\"),\n )\n\n renderables.append(outputs_table)\n\n # Installation command\n if self.remote_location:\n cmd_base = \"nf-core modules\"\n if self.remote_location != NF_CORE_MODULES_REMOTE:\n cmd_base = f\"nf-core modules --git-remote {self.remote_location}\"\n renderables.append(\n Text.from_markup(f\"\\n :computer: Installation command: [magenta]{cmd_base} install {self.module}\\n\")\n )\n\n return Group(*renderables)\n", "path": "nf_core/modules/info.py"}]} | 2,841 | 157 |
gh_patches_debug_30136 | rasdani/github-patches | git_diff | ktbyers__netmiko-1648 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Raise exception if asa_login() fails to login successfully
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netmiko/cisco/cisco_asa_ssh.py`
Content:
```
1 """Subclass specific to Cisco ASA."""
2 import re
3 import time
4 from netmiko.cisco_base_connection import CiscoSSHConnection, CiscoFileTransfer
5
6
7 class CiscoAsaSSH(CiscoSSHConnection):
8 """Subclass specific to Cisco ASA."""
9
10 def session_preparation(self):
11 """Prepare the session after the connection has been established."""
12 self._test_channel_read()
13 self.set_base_prompt()
14 if self.secret:
15 self.enable()
16 else:
17 self.asa_login()
18 self.disable_paging(command="terminal pager 0")
19 if self.allow_auto_change:
20 try:
21 self.send_config_set("terminal width 511")
22 except ValueError:
23 # Don't fail for the terminal width
24 pass
25
26 # Clear the read buffer
27 time.sleep(0.3 * self.global_delay_factor)
28 self.clear_buffer()
29
30 def send_command_timing(self, *args, **kwargs):
31 """
32 If the ASA is in multi-context mode, then the base_prompt needs to be
33 updated after each context change.
34 """
35 output = super().send_command_timing(*args, **kwargs)
36 if len(args) >= 1:
37 command_string = args[0]
38 else:
39 command_string = kwargs["command_string"]
40 if "changeto" in command_string:
41 self.set_base_prompt()
42 return output
43
44 def send_command(self, *args, **kwargs):
45 """
46 If the ASA is in multi-context mode, then the base_prompt needs to be
47 updated after each context change.
48 """
49 if len(args) >= 1:
50 command_string = args[0]
51 else:
52 command_string = kwargs["command_string"]
53
54 # If changeto in command, look for '#' to determine command is done
55 if "changeto" in command_string:
56 if len(args) <= 1:
57 expect_string = kwargs.get("expect_string", "#")
58 kwargs["expect_string"] = expect_string
59 output = super().send_command(*args, **kwargs)
60
61 if "changeto" in command_string:
62 self.set_base_prompt()
63
64 return output
65
66 def send_command_expect(self, *args, **kwargs):
67 """Backwards compaitibility."""
68 return self.send_command(*args, **kwargs)
69
70 def set_base_prompt(self, *args, **kwargs):
71 """
72 Cisco ASA in multi-context mode needs to have the base prompt updated
73 (if you switch contexts i.e. 'changeto')
74
75 This switch of ASA contexts can occur in configuration mode. If this
76 happens the trailing '(config*' needs stripped off.
77 """
78 cur_base_prompt = super().set_base_prompt(*args, **kwargs)
79 match = re.search(r"(.*)\(conf.*", cur_base_prompt)
80 if match:
81 # strip off (conf.* from base_prompt
82 self.base_prompt = match.group(1)
83 return self.base_prompt
84
85 def asa_login(self):
86 """
87 Handle ASA reaching privilege level 15 using login
88
89 twb-dc-fw1> login
90 Username: admin
91 Password: ************
92 """
93 delay_factor = self.select_delay_factor(0)
94
95 i = 1
96 max_attempts = 50
97 self.write_channel("login" + self.RETURN)
98 while i <= max_attempts:
99 time.sleep(0.5 * delay_factor)
100 output = self.read_channel()
101 if "sername" in output:
102 self.write_channel(self.username + self.RETURN)
103 elif "ssword" in output:
104 self.write_channel(self.password + self.RETURN)
105 elif "#" in output:
106 break
107 else:
108 self.write_channel("login" + self.RETURN)
109 i += 1
110
111 def save_config(self, cmd="write mem", confirm=False, confirm_response=""):
112 """Saves Config"""
113 return super().save_config(
114 cmd=cmd, confirm=confirm, confirm_response=confirm_response
115 )
116
117
118 class CiscoAsaFileTransfer(CiscoFileTransfer):
119 """Cisco ASA SCP File Transfer driver."""
120
121 pass
122
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netmiko/cisco/cisco_asa_ssh.py b/netmiko/cisco/cisco_asa_ssh.py
--- a/netmiko/cisco/cisco_asa_ssh.py
+++ b/netmiko/cisco/cisco_asa_ssh.py
@@ -2,6 +2,7 @@
import re
import time
from netmiko.cisco_base_connection import CiscoSSHConnection, CiscoFileTransfer
+from netmiko.ssh_exception import NetmikoAuthenticationException
class CiscoAsaSSH(CiscoSSHConnection):
@@ -88,12 +89,14 @@
twb-dc-fw1> login
Username: admin
- Password: ************
+
+ Raises NetmikoAuthenticationException, if we do not reach privilege
+ level 15 after 3 attempts.
"""
delay_factor = self.select_delay_factor(0)
i = 1
- max_attempts = 50
+ max_attempts = 3
self.write_channel("login" + self.RETURN)
while i <= max_attempts:
time.sleep(0.5 * delay_factor)
@@ -103,11 +106,14 @@
elif "ssword" in output:
self.write_channel(self.password + self.RETURN)
elif "#" in output:
- break
+ return True
else:
self.write_channel("login" + self.RETURN)
i += 1
+ msg = "Unable to get to enable mode!"
+ raise NetmikoAuthenticationException(msg)
+
def save_config(self, cmd="write mem", confirm=False, confirm_response=""):
"""Saves Config"""
return super().save_config(
| {"golden_diff": "diff --git a/netmiko/cisco/cisco_asa_ssh.py b/netmiko/cisco/cisco_asa_ssh.py\n--- a/netmiko/cisco/cisco_asa_ssh.py\n+++ b/netmiko/cisco/cisco_asa_ssh.py\n@@ -2,6 +2,7 @@\n import re\n import time\n from netmiko.cisco_base_connection import CiscoSSHConnection, CiscoFileTransfer\n+from netmiko.ssh_exception import NetmikoAuthenticationException\n \n \n class CiscoAsaSSH(CiscoSSHConnection):\n@@ -88,12 +89,14 @@\n \n twb-dc-fw1> login\n Username: admin\n- Password: ************\n+\n+ Raises NetmikoAuthenticationException, if we do not reach privilege\n+ level 15 after 3 attempts.\n \"\"\"\n delay_factor = self.select_delay_factor(0)\n \n i = 1\n- max_attempts = 50\n+ max_attempts = 3\n self.write_channel(\"login\" + self.RETURN)\n while i <= max_attempts:\n time.sleep(0.5 * delay_factor)\n@@ -103,11 +106,14 @@\n elif \"ssword\" in output:\n self.write_channel(self.password + self.RETURN)\n elif \"#\" in output:\n- break\n+ return True\n else:\n self.write_channel(\"login\" + self.RETURN)\n i += 1\n \n+ msg = \"Unable to get to enable mode!\"\n+ raise NetmikoAuthenticationException(msg)\n+\n def save_config(self, cmd=\"write mem\", confirm=False, confirm_response=\"\"):\n \"\"\"Saves Config\"\"\"\n return super().save_config(\n", "issue": "Raise exception if asa_login() fails to login successfully\n\n", "before_files": [{"content": "\"\"\"Subclass specific to Cisco ASA.\"\"\"\nimport re\nimport time\nfrom netmiko.cisco_base_connection import CiscoSSHConnection, CiscoFileTransfer\n\n\nclass CiscoAsaSSH(CiscoSSHConnection):\n \"\"\"Subclass specific to Cisco ASA.\"\"\"\n\n def session_preparation(self):\n \"\"\"Prepare the session after the connection has been established.\"\"\"\n self._test_channel_read()\n self.set_base_prompt()\n if self.secret:\n self.enable()\n else:\n self.asa_login()\n self.disable_paging(command=\"terminal pager 0\")\n if self.allow_auto_change:\n try:\n self.send_config_set(\"terminal width 511\")\n except ValueError:\n # Don't fail for the terminal width\n pass\n\n # Clear the read buffer\n time.sleep(0.3 * self.global_delay_factor)\n self.clear_buffer()\n\n def send_command_timing(self, *args, **kwargs):\n \"\"\"\n If the ASA is in multi-context mode, then the base_prompt needs to be\n updated after each context change.\n \"\"\"\n output = super().send_command_timing(*args, **kwargs)\n if len(args) >= 1:\n command_string = args[0]\n else:\n command_string = kwargs[\"command_string\"]\n if \"changeto\" in command_string:\n self.set_base_prompt()\n return output\n\n def send_command(self, *args, **kwargs):\n \"\"\"\n If the ASA is in multi-context mode, then the base_prompt needs to be\n updated after each context change.\n \"\"\"\n if len(args) >= 1:\n command_string = args[0]\n else:\n command_string = kwargs[\"command_string\"]\n\n # If changeto in command, look for '#' to determine command is done\n if \"changeto\" in command_string:\n if len(args) <= 1:\n expect_string = kwargs.get(\"expect_string\", \"#\")\n kwargs[\"expect_string\"] = expect_string\n output = super().send_command(*args, **kwargs)\n\n if \"changeto\" in command_string:\n self.set_base_prompt()\n\n return output\n\n def send_command_expect(self, *args, **kwargs):\n \"\"\"Backwards compaitibility.\"\"\"\n return self.send_command(*args, **kwargs)\n\n def set_base_prompt(self, *args, **kwargs):\n \"\"\"\n Cisco ASA in multi-context mode needs to have the base prompt updated\n (if you switch contexts i.e. 'changeto')\n\n This switch of ASA contexts can occur in configuration mode. 
If this\n happens the trailing '(config*' needs stripped off.\n \"\"\"\n cur_base_prompt = super().set_base_prompt(*args, **kwargs)\n match = re.search(r\"(.*)\\(conf.*\", cur_base_prompt)\n if match:\n # strip off (conf.* from base_prompt\n self.base_prompt = match.group(1)\n return self.base_prompt\n\n def asa_login(self):\n \"\"\"\n Handle ASA reaching privilege level 15 using login\n\n twb-dc-fw1> login\n Username: admin\n Password: ************\n \"\"\"\n delay_factor = self.select_delay_factor(0)\n\n i = 1\n max_attempts = 50\n self.write_channel(\"login\" + self.RETURN)\n while i <= max_attempts:\n time.sleep(0.5 * delay_factor)\n output = self.read_channel()\n if \"sername\" in output:\n self.write_channel(self.username + self.RETURN)\n elif \"ssword\" in output:\n self.write_channel(self.password + self.RETURN)\n elif \"#\" in output:\n break\n else:\n self.write_channel(\"login\" + self.RETURN)\n i += 1\n\n def save_config(self, cmd=\"write mem\", confirm=False, confirm_response=\"\"):\n \"\"\"Saves Config\"\"\"\n return super().save_config(\n cmd=cmd, confirm=confirm, confirm_response=confirm_response\n )\n\n\nclass CiscoAsaFileTransfer(CiscoFileTransfer):\n \"\"\"Cisco ASA SCP File Transfer driver.\"\"\"\n\n pass\n", "path": "netmiko/cisco/cisco_asa_ssh.py"}], "after_files": [{"content": "\"\"\"Subclass specific to Cisco ASA.\"\"\"\nimport re\nimport time\nfrom netmiko.cisco_base_connection import CiscoSSHConnection, CiscoFileTransfer\nfrom netmiko.ssh_exception import NetmikoAuthenticationException\n\n\nclass CiscoAsaSSH(CiscoSSHConnection):\n \"\"\"Subclass specific to Cisco ASA.\"\"\"\n\n def session_preparation(self):\n \"\"\"Prepare the session after the connection has been established.\"\"\"\n self._test_channel_read()\n self.set_base_prompt()\n if self.secret:\n self.enable()\n else:\n self.asa_login()\n self.disable_paging(command=\"terminal pager 0\")\n if self.allow_auto_change:\n try:\n self.send_config_set(\"terminal width 511\")\n except ValueError:\n # Don't fail for the terminal width\n pass\n\n # Clear the read buffer\n time.sleep(0.3 * self.global_delay_factor)\n self.clear_buffer()\n\n def send_command_timing(self, *args, **kwargs):\n \"\"\"\n If the ASA is in multi-context mode, then the base_prompt needs to be\n updated after each context change.\n \"\"\"\n output = super().send_command_timing(*args, **kwargs)\n if len(args) >= 1:\n command_string = args[0]\n else:\n command_string = kwargs[\"command_string\"]\n if \"changeto\" in command_string:\n self.set_base_prompt()\n return output\n\n def send_command(self, *args, **kwargs):\n \"\"\"\n If the ASA is in multi-context mode, then the base_prompt needs to be\n updated after each context change.\n \"\"\"\n if len(args) >= 1:\n command_string = args[0]\n else:\n command_string = kwargs[\"command_string\"]\n\n # If changeto in command, look for '#' to determine command is done\n if \"changeto\" in command_string:\n if len(args) <= 1:\n expect_string = kwargs.get(\"expect_string\", \"#\")\n kwargs[\"expect_string\"] = expect_string\n output = super().send_command(*args, **kwargs)\n\n if \"changeto\" in command_string:\n self.set_base_prompt()\n\n return output\n\n def send_command_expect(self, *args, **kwargs):\n \"\"\"Backwards compaitibility.\"\"\"\n return self.send_command(*args, **kwargs)\n\n def set_base_prompt(self, *args, **kwargs):\n \"\"\"\n Cisco ASA in multi-context mode needs to have the base prompt updated\n (if you switch contexts i.e. 
'changeto')\n\n This switch of ASA contexts can occur in configuration mode. If this\n happens the trailing '(config*' needs stripped off.\n \"\"\"\n cur_base_prompt = super().set_base_prompt(*args, **kwargs)\n match = re.search(r\"(.*)\\(conf.*\", cur_base_prompt)\n if match:\n # strip off (conf.* from base_prompt\n self.base_prompt = match.group(1)\n return self.base_prompt\n\n def asa_login(self):\n \"\"\"\n Handle ASA reaching privilege level 15 using login\n\n twb-dc-fw1> login\n Username: admin\n\n Raises NetmikoAuthenticationException, if we do not reach privilege\n level 15 after 3 attempts.\n \"\"\"\n delay_factor = self.select_delay_factor(0)\n\n i = 1\n max_attempts = 3\n self.write_channel(\"login\" + self.RETURN)\n while i <= max_attempts:\n time.sleep(0.5 * delay_factor)\n output = self.read_channel()\n if \"sername\" in output:\n self.write_channel(self.username + self.RETURN)\n elif \"ssword\" in output:\n self.write_channel(self.password + self.RETURN)\n elif \"#\" in output:\n return True\n else:\n self.write_channel(\"login\" + self.RETURN)\n i += 1\n\n msg = \"Unable to get to enable mode!\"\n raise NetmikoAuthenticationException(msg)\n\n def save_config(self, cmd=\"write mem\", confirm=False, confirm_response=\"\"):\n \"\"\"Saves Config\"\"\"\n return super().save_config(\n cmd=cmd, confirm=confirm, confirm_response=confirm_response\n )\n\n\nclass CiscoAsaFileTransfer(CiscoFileTransfer):\n \"\"\"Cisco ASA SCP File Transfer driver.\"\"\"\n\n pass\n", "path": "netmiko/cisco/cisco_asa_ssh.py"}]} | 1,415 | 372 |
gh_patches_debug_9341 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1986 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug]: Haringey Waste Collection sensor returning 'Unknown'
### I Have A Problem With:
A specific source
### What's Your Problem
I am having an issue adding the Haringey council (UK) Waste Collection sensor into HA. The sensor value shows as 'Unknown'.
I have added this code to my configuration.yaml (replacing "My UPRN" with my actual UPRN):
```
sources:
- name: haringey_gov_uk
args:
uprn: "My UPRN"
sensor:
- platform: waste_collection_schedule
name: "Haringey Waste Collection"
details_format: upcoming
value_template: 'in {{value.daysTo}} days'
```

I have tested with other configs for other councils I found online and those work as expected. Is there something wrong with the Haringey data feed? The URL mentioned in the documentation is still correct, and I can see the correct information on the Haringey website.
### Source (if relevant)
_No response_
### Logs
_No response_
### Relevant Configuration
_No response_
### Checklist Source Error
- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [X] Checked that the website of your service provider is still working
- [X] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py`
Content:
```
1 from datetime import datetime
2
3 import requests
4 from bs4 import BeautifulSoup
5 from waste_collection_schedule import Collection # type: ignore[attr-defined]
6
7 TITLE = "Haringey Council"
8 DESCRIPTION = "Source for haringey.gov.uk services for Haringey Council, UK."
9 URL = "https://www.haringey.gov.uk/"
10 TEST_CASES = {
11 "Test_001": {"uprn": "100021209182"},
12 "Test_002": {"uprn": "100021207181"},
13 "Test_003": {"uprn": "100021202738"},
14 "Test_004": {"uprn": 100021202131},
15 }
16 ICON_MAP = {
17 "General Waste": "mdi:trash-can",
18 "Collect Domestic Recycling": "mdi:recycle",
19 "Food Waste": "mdi:food-apple",
20 "Collect Paid Domestic Garden": "mdi:leaf",
21 }
22
23
24 class Source:
25 def __init__(self, uprn):
26 self._uprn = str(uprn).zfill(12)
27
28 def fetch(self):
29 api_url = f"https://wastecollections.haringey.gov.uk/property/{self._uprn}"
30 response = requests.get(api_url)
31
32 soup = BeautifulSoup(response.text, features="html.parser")
33 soup.prettify()
34
35 entries = []
36
37 service_elements = soup.select(".service-wrapper")
38
39 for service_element in service_elements:
40 service_name = service_element.select(".service-name")[0].text.strip()
41 next_service_date = service_element.select("td.next-service")[0]
42
43 next_service_date.span.extract()
44
45 entries.append(
46 Collection(
47 date=datetime.strptime(
48 next_service_date.text.strip(), "%d/%m/%Y"
49 ).date(),
50 t=service_name,
51 icon=ICON_MAP.get(service_name),
52 )
53 )
54
55 return entries
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py
@@ -38,7 +38,11 @@
for service_element in service_elements:
service_name = service_element.select(".service-name")[0].text.strip()
- next_service_date = service_element.select("td.next-service")[0]
+
+ next_service_dates = service_element.select("td.next-service")
+ if len(next_service_dates) == 0:
+ continue
+ next_service_date = next_service_dates[0]
next_service_date.span.extract()
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py\n@@ -38,7 +38,11 @@\n \n for service_element in service_elements:\n service_name = service_element.select(\".service-name\")[0].text.strip()\n- next_service_date = service_element.select(\"td.next-service\")[0]\n+\n+ next_service_dates = service_element.select(\"td.next-service\")\n+ if len(next_service_dates) == 0:\n+ continue\n+ next_service_date = next_service_dates[0]\n \n next_service_date.span.extract()\n", "issue": "[Bug]: Haringey Waste Collection sensor returning 'Unknown'\n### I Have A Problem With:\r\n\r\nA specific source\r\n\r\n### What's Your Problem\r\n\r\nI am having an issue adding the Haringey council (UK) Waste Collection sensor into HA. The sensor value shows as 'Unknown'.\r\n\r\nI have added this code to my configuration.yaml (replacing \"My UPRN\" with my actual UPRN):\r\n\r\n```\r\n sources:\r\n - name: haringey_gov_uk\r\n args:\r\n uprn: \"My UPRN\"\r\n\r\nsensor:\r\n - platform: waste_collection_schedule\r\n name: \"Haringey Waste Collection\"\r\n details_format: upcoming\r\n value_template: 'in {{value.daysTo}} days'\r\n```\r\n\r\n\r\n\r\nI have tested with other configs for other councils I found online and those work as expected. Is there something wrong with the Haringey data feed? The URL mentioned in the documentation is still correct, and I can see the correct information on the Haringey website.\r\n\r\n### Source (if relevant)\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### Relevant Configuration\r\n\r\n_No response_\r\n\r\n### Checklist Source Error\r\n\r\n- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\r\n- [X] Checked that the website of your service provider is still working\r\n- [X] Tested my attributes on the service provider website (if possible)\r\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\r\n\r\n### Checklist Sensor Error\r\n\r\n- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\r\n\r\n### Required\r\n\r\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\r\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "from datetime import datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Haringey Council\"\nDESCRIPTION = \"Source for haringey.gov.uk services for Haringey Council, UK.\"\nURL = \"https://www.haringey.gov.uk/\"\nTEST_CASES = {\n \"Test_001\": {\"uprn\": \"100021209182\"},\n \"Test_002\": {\"uprn\": \"100021207181\"},\n \"Test_003\": {\"uprn\": \"100021202738\"},\n \"Test_004\": {\"uprn\": 100021202131},\n}\nICON_MAP = {\n \"General Waste\": \"mdi:trash-can\",\n \"Collect Domestic Recycling\": 
\"mdi:recycle\",\n \"Food Waste\": \"mdi:food-apple\",\n \"Collect Paid Domestic Garden\": \"mdi:leaf\",\n}\n\n\nclass Source:\n def __init__(self, uprn):\n self._uprn = str(uprn).zfill(12)\n\n def fetch(self):\n api_url = f\"https://wastecollections.haringey.gov.uk/property/{self._uprn}\"\n response = requests.get(api_url)\n\n soup = BeautifulSoup(response.text, features=\"html.parser\")\n soup.prettify()\n\n entries = []\n\n service_elements = soup.select(\".service-wrapper\")\n\n for service_element in service_elements:\n service_name = service_element.select(\".service-name\")[0].text.strip()\n next_service_date = service_element.select(\"td.next-service\")[0]\n\n next_service_date.span.extract()\n\n entries.append(\n Collection(\n date=datetime.strptime(\n next_service_date.text.strip(), \"%d/%m/%Y\"\n ).date(),\n t=service_name,\n icon=ICON_MAP.get(service_name),\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py"}], "after_files": [{"content": "from datetime import datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Haringey Council\"\nDESCRIPTION = \"Source for haringey.gov.uk services for Haringey Council, UK.\"\nURL = \"https://www.haringey.gov.uk/\"\nTEST_CASES = {\n \"Test_001\": {\"uprn\": \"100021209182\"},\n \"Test_002\": {\"uprn\": \"100021207181\"},\n \"Test_003\": {\"uprn\": \"100021202738\"},\n \"Test_004\": {\"uprn\": 100021202131},\n}\nICON_MAP = {\n \"General Waste\": \"mdi:trash-can\",\n \"Collect Domestic Recycling\": \"mdi:recycle\",\n \"Food Waste\": \"mdi:food-apple\",\n \"Collect Paid Domestic Garden\": \"mdi:leaf\",\n}\n\n\nclass Source:\n def __init__(self, uprn):\n self._uprn = str(uprn).zfill(12)\n\n def fetch(self):\n api_url = f\"https://wastecollections.haringey.gov.uk/property/{self._uprn}\"\n response = requests.get(api_url)\n\n soup = BeautifulSoup(response.text, features=\"html.parser\")\n soup.prettify()\n\n entries = []\n\n service_elements = soup.select(\".service-wrapper\")\n\n for service_element in service_elements:\n service_name = service_element.select(\".service-name\")[0].text.strip()\n\n next_service_dates = service_element.select(\"td.next-service\")\n if len(next_service_dates) == 0:\n continue\n next_service_date = next_service_dates[0]\n\n next_service_date.span.extract()\n\n entries.append(\n Collection(\n date=datetime.strptime(\n next_service_date.text.strip(), \"%d/%m/%Y\"\n ).date(),\n t=service_name,\n icon=ICON_MAP.get(service_name),\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py"}]} | 1,321 | 193 |
gh_patches_debug_12841 | rasdani/github-patches | git_diff | opsdroid__opsdroid-887 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Matrix Connector should not log HTML page contents on a 504
If the matrix server responds with a 504 error on `/sync` (which is pretty common) the connector should not log the whole HTML page, as it clutters the log.
This is *especially* true on a matrix.org account as you get a whole cloudflare error page.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opsdroid/connector/matrix/connector.py`
Content:
```
1 """Connector for Matrix (https://matrix.org)."""
2
3 import re
4 import logging
5 from concurrent.futures import CancelledError
6
7 import aiohttp
8
9 from matrix_api_async.api_asyncio import AsyncHTTPAPI
10 from matrix_client.errors import MatrixRequestError
11
12 from opsdroid.connector import Connector, register_event
13 from opsdroid.events import Message
14
15 from .html_cleaner import clean
16
17
18 _LOGGER = logging.getLogger(__name__)
19
20 __all__ = ['ConnectorMatrix']
21
22
23 class ConnectorMatrix(Connector):
24 """Connector for Matrix (https://matrix.org)."""
25
26 def __init__(self, config, opsdroid=None): # noqa: D107
27 """Init the config for the connector."""
28 super().__init__(config, opsdroid=opsdroid)
29
30 self.name = "ConnectorMatrix" # The name of your connector
31 self.rooms = config.get('rooms', None)
32 if not self.rooms:
33 self.rooms = {'main': config['room']}
34 self.room_ids = {}
35 self.default_target = self.rooms['main']
36 self.mxid = config['mxid']
37 self.nick = config.get('nick', None)
38 self.homeserver = config.get('homeserver', "https://matrix.org")
39 self.password = config['password']
40 self.room_specific_nicks = config.get("room_specific_nicks", False)
41 self.session = None
42 self.filter_id = None
43 self.connection = None
44
45 @property
46 def filter_json(self):
47 """Define JSON filter to apply to incoming events."""
48 return {
49 "event_format": "client",
50 "account_data": {
51 "limit": 0,
52 "types": []
53 },
54 "presence": {
55 "limit": 0,
56 "types": []
57 },
58 "room": {
59 "rooms": [],
60 "account_data": {
61 "types": []
62 },
63 "timeline": {
64 "types": ["m.room.message"]
65 },
66 "ephemeral": {
67 "types": []
68 },
69 "state": {
70 "types": []
71 }
72 }
73 }
74
75 async def make_filter(self, api, room_ids):
76 """Make a filter on the server for future syncs."""
77 fjson = self.filter_json
78 for room_id in room_ids:
79 fjson['room']['rooms'].append(room_id)
80
81 resp = await api.create_filter(
82 user_id=self.mxid, filter_params=fjson)
83
84 return resp['filter_id']
85
86 async def connect(self):
87 """Create connection object with chat library."""
88 session = aiohttp.ClientSession()
89 mapi = AsyncHTTPAPI(self.homeserver, session)
90
91 self.session = session
92 login_response = await mapi.login(
93 "m.login.password", user=self.mxid, password=self.password)
94 mapi.token = login_response['access_token']
95 mapi.sync_token = None
96
97 for roomname, room in self.rooms.items():
98 response = await mapi.join_room(room)
99 self.room_ids[roomname] = response['room_id']
100 self.connection = mapi
101
102 # Create a filter now, saves time on each later sync
103 self.filter_id = await self.make_filter(mapi, self.room_ids.values())
104
105 # Do initial sync so we don't get old messages later.
106 response = await self.connection.sync(
107 timeout_ms=3000,
108 filter='{ "room": { "timeline" : { "limit" : 1 } } }',
109 set_presence="online")
110 self.connection.sync_token = response["next_batch"]
111
112 if self.nick:
113 display_name = await self.connection.get_display_name(self.mxid)
114 if display_name != self.nick:
115 await self.connection.set_display_name(self.mxid, self.nick)
116
117 async def _parse_sync_response(self, response):
118 self.connection.sync_token = response["next_batch"]
119 for roomid in self.room_ids.values():
120 room = response['rooms']['join'].get(roomid, None)
121 if room and 'timeline' in room:
122 for event in room['timeline']['events']:
123 if event['content']['msgtype'] == 'm.text':
124 if event['sender'] != self.mxid:
125 return Message(
126 event['content']['body'],
127 await self._get_nick(roomid, event['sender']),
128 roomid,
129 self,
130 event_id=event['event_id'],
131 raw_event=event)
132
133 async def listen(self): # pragma: no cover
134 """Listen for new messages from the chat service."""
135 while True: # pylint: disable=R1702
136 try:
137 response = await self.connection.sync(
138 self.connection.sync_token,
139 timeout_ms=int(6 * 60 * 60 * 1e3), # 6h in ms
140 filter=self.filter_id)
141 _LOGGER.debug("matrix sync request returned")
142 message = await self._parse_sync_response(response)
143 await self.opsdroid.parse(message)
144
145 except CancelledError:
146 raise
147 except Exception: # pylint: disable=W0703
148 _LOGGER.exception('Matrix Sync Error')
149
150 async def _get_nick(self, roomid, mxid):
151 """
152 Get nickname from user ID.
153
154 Get the nickname of a sender depending on the room specific config
155 setting.
156 """
157 if self.room_specific_nicks:
158 try:
159 return await self.connection.get_room_displayname(roomid, mxid)
160 except Exception: # pylint: disable=W0703
161 # Fallback to the non-room specific one
162 logging.exception(
163 "Failed to lookup room specific nick for %s", mxid)
164
165 try:
166 return await self.connection.get_display_name(mxid)
167 except MatrixRequestError as mre:
168 # Log the error if it's not the 404 from the user not having a nick
169 if mre.code != 404:
170 logging.exception("Failed to lookup nick for %s", mxid)
171 return mxid
172
173 @staticmethod
174 def _get_formatted_message_body(message, body=None, msgtype="m.text"):
175 """
176 Get HTML from a message.
177
178 Return the json representation of the message in
179 "org.matrix.custom.html" format.
180 """
181 # Markdown leaves a <p></p> around standard messages that we want to
182 # strip:
183 if message.startswith('<p>'):
184 message = message[3:]
185 if message.endswith('</p>'):
186 message = message[:-4]
187
188 clean_html = clean(message)
189
190 return {
191 # Strip out any tags from the markdown to make the body
192 "body": body if body else re.sub('<[^<]+?>', '', clean_html),
193 "msgtype": msgtype,
194 "format": "org.matrix.custom.html",
195 "formatted_body": clean_html
196 }
197
198 @register_event(Message)
199 async def send_message(self, message):
200 """Send `message.text` back to the chat service."""
201 if not message.target.startswith(("!", "#")):
202 room_id = self.rooms[message.target]
203 else:
204 room_id = message.target
205
206 # Ensure we have a room id not alias
207 if not room_id.startswith('!'):
208 room_id = await self.connection.get_room_id(room_id)
209 else:
210 room_id = room_id
211
212 try:
213 await self.connection.send_message_event(
214 room_id,
215 "m.room.message",
216 self._get_formatted_message_body(message.text))
217 except aiohttp.client_exceptions.ServerDisconnectedError:
218 _LOGGER.debug("Server had disconnected, retrying send.")
219 await self.connection.send_message_event(
220 room_id,
221 "m.room.message",
222 self._get_formatted_message_body(message.text))
223
224 async def disconnect(self):
225 """Close the matrix session."""
226 await self.session.close()
227
228 def get_roomname(self, room):
229 """Get the name of a room from alias or room ID."""
230 if room.startswith(('#', '!')):
231 for connroom in self.rooms:
232 if room in (connroom, self.room_ids[connroom]):
233 return connroom
234
235 return room
236
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opsdroid/connector/matrix/connector.py b/opsdroid/connector/matrix/connector.py
--- a/opsdroid/connector/matrix/connector.py
+++ b/opsdroid/connector/matrix/connector.py
@@ -142,6 +142,14 @@
message = await self._parse_sync_response(response)
await self.opsdroid.parse(message)
+ except MatrixRequestError as mre:
+ # We can safely ignore timeout errors. The non-standard error
+ # codes are returned by Cloudflare.
+ if mre.code in [504, 522, 524]:
+ _LOGGER.info('Matrix Sync Timeout (code: %d)', mre.code)
+ continue
+
+ _LOGGER.exception('Matrix Sync Error')
except CancelledError:
raise
except Exception: # pylint: disable=W0703
| {"golden_diff": "diff --git a/opsdroid/connector/matrix/connector.py b/opsdroid/connector/matrix/connector.py\n--- a/opsdroid/connector/matrix/connector.py\n+++ b/opsdroid/connector/matrix/connector.py\n@@ -142,6 +142,14 @@\n message = await self._parse_sync_response(response)\n await self.opsdroid.parse(message)\n \n+ except MatrixRequestError as mre:\n+ # We can safely ignore timeout errors. The non-standard error\n+ # codes are returned by Cloudflare.\n+ if mre.code in [504, 522, 524]:\n+ _LOGGER.info('Matrix Sync Timeout (code: %d)', mre.code)\n+ continue\n+\n+ _LOGGER.exception('Matrix Sync Error')\n except CancelledError:\n raise\n except Exception: # pylint: disable=W0703\n", "issue": "Matrix Connector should not log HTML page contents on a 504\nIf the matrix server responds with a 504 error on `/sync` (which is pretty common) the connector should not log the whole HTML page, as it clutters the log.\r\n\r\nThis is *especially* true on a matrix.org account as you get a whole cloudflare error page.\nMatrix Connector should not log HTML page contents on a 504\nIf the matrix server responds with a 504 error on `/sync` (which is pretty common) the connector should not log the whole HTML page, as it clutters the log.\r\n\r\nThis is *especially* true on a matrix.org account as you get a whole cloudflare error page.\n", "before_files": [{"content": "\"\"\"Connector for Matrix (https://matrix.org).\"\"\"\n\nimport re\nimport logging\nfrom concurrent.futures import CancelledError\n\nimport aiohttp\n\nfrom matrix_api_async.api_asyncio import AsyncHTTPAPI\nfrom matrix_client.errors import MatrixRequestError\n\nfrom opsdroid.connector import Connector, register_event\nfrom opsdroid.events import Message\n\nfrom .html_cleaner import clean\n\n\n_LOGGER = logging.getLogger(__name__)\n\n__all__ = ['ConnectorMatrix']\n\n\nclass ConnectorMatrix(Connector):\n \"\"\"Connector for Matrix (https://matrix.org).\"\"\"\n\n def __init__(self, config, opsdroid=None): # noqa: D107\n \"\"\"Init the config for the connector.\"\"\"\n super().__init__(config, opsdroid=opsdroid)\n\n self.name = \"ConnectorMatrix\" # The name of your connector\n self.rooms = config.get('rooms', None)\n if not self.rooms:\n self.rooms = {'main': config['room']}\n self.room_ids = {}\n self.default_target = self.rooms['main']\n self.mxid = config['mxid']\n self.nick = config.get('nick', None)\n self.homeserver = config.get('homeserver', \"https://matrix.org\")\n self.password = config['password']\n self.room_specific_nicks = config.get(\"room_specific_nicks\", False)\n self.session = None\n self.filter_id = None\n self.connection = None\n\n @property\n def filter_json(self):\n \"\"\"Define JSON filter to apply to incoming events.\"\"\"\n return {\n \"event_format\": \"client\",\n \"account_data\": {\n \"limit\": 0,\n \"types\": []\n },\n \"presence\": {\n \"limit\": 0,\n \"types\": []\n },\n \"room\": {\n \"rooms\": [],\n \"account_data\": {\n \"types\": []\n },\n \"timeline\": {\n \"types\": [\"m.room.message\"]\n },\n \"ephemeral\": {\n \"types\": []\n },\n \"state\": {\n \"types\": []\n }\n }\n }\n\n async def make_filter(self, api, room_ids):\n \"\"\"Make a filter on the server for future syncs.\"\"\"\n fjson = self.filter_json\n for room_id in room_ids:\n fjson['room']['rooms'].append(room_id)\n\n resp = await api.create_filter(\n user_id=self.mxid, filter_params=fjson)\n\n return resp['filter_id']\n\n async def connect(self):\n \"\"\"Create connection object with chat library.\"\"\"\n session = aiohttp.ClientSession()\n mapi = 
AsyncHTTPAPI(self.homeserver, session)\n\n self.session = session\n login_response = await mapi.login(\n \"m.login.password\", user=self.mxid, password=self.password)\n mapi.token = login_response['access_token']\n mapi.sync_token = None\n\n for roomname, room in self.rooms.items():\n response = await mapi.join_room(room)\n self.room_ids[roomname] = response['room_id']\n self.connection = mapi\n\n # Create a filter now, saves time on each later sync\n self.filter_id = await self.make_filter(mapi, self.room_ids.values())\n\n # Do initial sync so we don't get old messages later.\n response = await self.connection.sync(\n timeout_ms=3000,\n filter='{ \"room\": { \"timeline\" : { \"limit\" : 1 } } }',\n set_presence=\"online\")\n self.connection.sync_token = response[\"next_batch\"]\n\n if self.nick:\n display_name = await self.connection.get_display_name(self.mxid)\n if display_name != self.nick:\n await self.connection.set_display_name(self.mxid, self.nick)\n\n async def _parse_sync_response(self, response):\n self.connection.sync_token = response[\"next_batch\"]\n for roomid in self.room_ids.values():\n room = response['rooms']['join'].get(roomid, None)\n if room and 'timeline' in room:\n for event in room['timeline']['events']:\n if event['content']['msgtype'] == 'm.text':\n if event['sender'] != self.mxid:\n return Message(\n event['content']['body'],\n await self._get_nick(roomid, event['sender']),\n roomid,\n self,\n event_id=event['event_id'],\n raw_event=event)\n\n async def listen(self): # pragma: no cover\n \"\"\"Listen for new messages from the chat service.\"\"\"\n while True: # pylint: disable=R1702\n try:\n response = await self.connection.sync(\n self.connection.sync_token,\n timeout_ms=int(6 * 60 * 60 * 1e3), # 6h in ms\n filter=self.filter_id)\n _LOGGER.debug(\"matrix sync request returned\")\n message = await self._parse_sync_response(response)\n await self.opsdroid.parse(message)\n\n except CancelledError:\n raise\n except Exception: # pylint: disable=W0703\n _LOGGER.exception('Matrix Sync Error')\n\n async def _get_nick(self, roomid, mxid):\n \"\"\"\n Get nickname from user ID.\n\n Get the nickname of a sender depending on the room specific config\n setting.\n \"\"\"\n if self.room_specific_nicks:\n try:\n return await self.connection.get_room_displayname(roomid, mxid)\n except Exception: # pylint: disable=W0703\n # Fallback to the non-room specific one\n logging.exception(\n \"Failed to lookup room specific nick for %s\", mxid)\n\n try:\n return await self.connection.get_display_name(mxid)\n except MatrixRequestError as mre:\n # Log the error if it's not the 404 from the user not having a nick\n if mre.code != 404:\n logging.exception(\"Failed to lookup nick for %s\", mxid)\n return mxid\n\n @staticmethod\n def _get_formatted_message_body(message, body=None, msgtype=\"m.text\"):\n \"\"\"\n Get HTML from a message.\n\n Return the json representation of the message in\n \"org.matrix.custom.html\" format.\n \"\"\"\n # Markdown leaves a <p></p> around standard messages that we want to\n # strip:\n if message.startswith('<p>'):\n message = message[3:]\n if message.endswith('</p>'):\n message = message[:-4]\n\n clean_html = clean(message)\n\n return {\n # Strip out any tags from the markdown to make the body\n \"body\": body if body else re.sub('<[^<]+?>', '', clean_html),\n \"msgtype\": msgtype,\n \"format\": \"org.matrix.custom.html\",\n \"formatted_body\": clean_html\n }\n\n @register_event(Message)\n async def send_message(self, message):\n \"\"\"Send `message.text` back to 
the chat service.\"\"\"\n if not message.target.startswith((\"!\", \"#\")):\n room_id = self.rooms[message.target]\n else:\n room_id = message.target\n\n # Ensure we have a room id not alias\n if not room_id.startswith('!'):\n room_id = await self.connection.get_room_id(room_id)\n else:\n room_id = room_id\n\n try:\n await self.connection.send_message_event(\n room_id,\n \"m.room.message\",\n self._get_formatted_message_body(message.text))\n except aiohttp.client_exceptions.ServerDisconnectedError:\n _LOGGER.debug(\"Server had disconnected, retrying send.\")\n await self.connection.send_message_event(\n room_id,\n \"m.room.message\",\n self._get_formatted_message_body(message.text))\n\n async def disconnect(self):\n \"\"\"Close the matrix session.\"\"\"\n await self.session.close()\n\n def get_roomname(self, room):\n \"\"\"Get the name of a room from alias or room ID.\"\"\"\n if room.startswith(('#', '!')):\n for connroom in self.rooms:\n if room in (connroom, self.room_ids[connroom]):\n return connroom\n\n return room\n", "path": "opsdroid/connector/matrix/connector.py"}], "after_files": [{"content": "\"\"\"Connector for Matrix (https://matrix.org).\"\"\"\n\nimport re\nimport logging\nfrom concurrent.futures import CancelledError\n\nimport aiohttp\n\nfrom matrix_api_async.api_asyncio import AsyncHTTPAPI\nfrom matrix_client.errors import MatrixRequestError\n\nfrom opsdroid.connector import Connector, register_event\nfrom opsdroid.events import Message\n\nfrom .html_cleaner import clean\n\n\n_LOGGER = logging.getLogger(__name__)\n\n__all__ = ['ConnectorMatrix']\n\n\nclass ConnectorMatrix(Connector):\n \"\"\"Connector for Matrix (https://matrix.org).\"\"\"\n\n def __init__(self, config, opsdroid=None): # noqa: D107\n \"\"\"Init the config for the connector.\"\"\"\n super().__init__(config, opsdroid=opsdroid)\n\n self.name = \"ConnectorMatrix\" # The name of your connector\n self.rooms = config.get('rooms', None)\n if not self.rooms:\n self.rooms = {'main': config['room']}\n self.room_ids = {}\n self.default_target = self.rooms['main']\n self.mxid = config['mxid']\n self.nick = config.get('nick', None)\n self.homeserver = config.get('homeserver', \"https://matrix.org\")\n self.password = config['password']\n self.room_specific_nicks = config.get(\"room_specific_nicks\", False)\n self.session = None\n self.filter_id = None\n self.connection = None\n\n @property\n def filter_json(self):\n \"\"\"Define JSON filter to apply to incoming events.\"\"\"\n return {\n \"event_format\": \"client\",\n \"account_data\": {\n \"limit\": 0,\n \"types\": []\n },\n \"presence\": {\n \"limit\": 0,\n \"types\": []\n },\n \"room\": {\n \"rooms\": [],\n \"account_data\": {\n \"types\": []\n },\n \"timeline\": {\n \"types\": [\"m.room.message\"]\n },\n \"ephemeral\": {\n \"types\": []\n },\n \"state\": {\n \"types\": []\n }\n }\n }\n\n async def make_filter(self, api, room_ids):\n \"\"\"Make a filter on the server for future syncs.\"\"\"\n fjson = self.filter_json\n for room_id in room_ids:\n fjson['room']['rooms'].append(room_id)\n\n resp = await api.create_filter(\n user_id=self.mxid, filter_params=fjson)\n\n return resp['filter_id']\n\n async def connect(self):\n \"\"\"Create connection object with chat library.\"\"\"\n session = aiohttp.ClientSession()\n mapi = AsyncHTTPAPI(self.homeserver, session)\n\n self.session = session\n login_response = await mapi.login(\n \"m.login.password\", user=self.mxid, password=self.password)\n mapi.token = login_response['access_token']\n mapi.sync_token = None\n\n for 
roomname, room in self.rooms.items():\n response = await mapi.join_room(room)\n self.room_ids[roomname] = response['room_id']\n self.connection = mapi\n\n # Create a filter now, saves time on each later sync\n self.filter_id = await self.make_filter(mapi, self.room_ids.values())\n\n # Do initial sync so we don't get old messages later.\n response = await self.connection.sync(\n timeout_ms=3000,\n filter='{ \"room\": { \"timeline\" : { \"limit\" : 1 } } }',\n set_presence=\"online\")\n self.connection.sync_token = response[\"next_batch\"]\n\n if self.nick:\n display_name = await self.connection.get_display_name(self.mxid)\n if display_name != self.nick:\n await self.connection.set_display_name(self.mxid, self.nick)\n\n async def _parse_sync_response(self, response):\n self.connection.sync_token = response[\"next_batch\"]\n for roomid in self.room_ids.values():\n room = response['rooms']['join'].get(roomid, None)\n if room and 'timeline' in room:\n for event in room['timeline']['events']:\n if event['content']['msgtype'] == 'm.text':\n if event['sender'] != self.mxid:\n return Message(\n event['content']['body'],\n await self._get_nick(roomid, event['sender']),\n roomid,\n self,\n event_id=event['event_id'],\n raw_event=event)\n\n async def listen(self): # pragma: no cover\n \"\"\"Listen for new messages from the chat service.\"\"\"\n while True: # pylint: disable=R1702\n try:\n response = await self.connection.sync(\n self.connection.sync_token,\n timeout_ms=int(6 * 60 * 60 * 1e3), # 6h in ms\n filter=self.filter_id)\n _LOGGER.debug(\"matrix sync request returned\")\n message = await self._parse_sync_response(response)\n await self.opsdroid.parse(message)\n\n except MatrixRequestError as mre:\n # We can safely ignore timeout errors. The non-standard error\n # codes are returned by Cloudflare.\n if mre.code in [504, 522, 524]:\n _LOGGER.info('Matrix Sync Timeout (code: %d)', mre.code)\n continue\n\n _LOGGER.exception('Matrix Sync Error')\n except CancelledError:\n raise\n except Exception: # pylint: disable=W0703\n _LOGGER.exception('Matrix Sync Error')\n\n async def _get_nick(self, roomid, mxid):\n \"\"\"\n Get nickname from user ID.\n\n Get the nickname of a sender depending on the room specific config\n setting.\n \"\"\"\n if self.room_specific_nicks:\n try:\n return await self.connection.get_room_displayname(roomid, mxid)\n except Exception: # pylint: disable=W0703\n # Fallback to the non-room specific one\n logging.exception(\n \"Failed to lookup room specific nick for %s\", mxid)\n\n try:\n return await self.connection.get_display_name(mxid)\n except MatrixRequestError as mre:\n # Log the error if it's not the 404 from the user not having a nick\n if mre.code != 404:\n logging.exception(\"Failed to lookup nick for %s\", mxid)\n return mxid\n\n @staticmethod\n def _get_formatted_message_body(message, body=None, msgtype=\"m.text\"):\n \"\"\"\n Get HTML from a message.\n\n Return the json representation of the message in\n \"org.matrix.custom.html\" format.\n \"\"\"\n # Markdown leaves a <p></p> around standard messages that we want to\n # strip:\n if message.startswith('<p>'):\n message = message[3:]\n if message.endswith('</p>'):\n message = message[:-4]\n\n clean_html = clean(message)\n\n return {\n # Strip out any tags from the markdown to make the body\n \"body\": body if body else re.sub('<[^<]+?>', '', clean_html),\n \"msgtype\": msgtype,\n \"format\": \"org.matrix.custom.html\",\n \"formatted_body\": clean_html\n }\n\n @register_event(Message)\n async def send_message(self, 
message):\n \"\"\"Send `message.text` back to the chat service.\"\"\"\n if not message.target.startswith((\"!\", \"#\")):\n room_id = self.rooms[message.target]\n else:\n room_id = message.target\n\n # Ensure we have a room id not alias\n if not room_id.startswith('!'):\n room_id = await self.connection.get_room_id(room_id)\n else:\n room_id = room_id\n\n try:\n await self.connection.send_message_event(\n room_id,\n \"m.room.message\",\n self._get_formatted_message_body(message.text))\n except aiohttp.client_exceptions.ServerDisconnectedError:\n _LOGGER.debug(\"Server had disconnected, retrying send.\")\n await self.connection.send_message_event(\n room_id,\n \"m.room.message\",\n self._get_formatted_message_body(message.text))\n\n async def disconnect(self):\n \"\"\"Close the matrix session.\"\"\"\n await self.session.close()\n\n def get_roomname(self, room):\n \"\"\"Get the name of a room from alias or room ID.\"\"\"\n if room.startswith(('#', '!')):\n for connroom in self.rooms:\n if room in (connroom, self.room_ids[connroom]):\n return connroom\n\n return room\n", "path": "opsdroid/connector/matrix/connector.py"}]} | 2,765 | 207 |
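The patch in this row treats HTTP 504 (plus Cloudflare's non-standard 522/524) as expected long-poll timeouts that deserve a one-line log message rather than a full traceback with the response body. A generic sketch of that sync-loop pattern is shown below; it is independent of the matrix client API, and the `SyncError` class and `sync_once` callable are hypothetical names introduced only for illustration.

```python
import asyncio
import logging

_LOGGER = logging.getLogger(__name__)

# Non-standard 522/524 codes are returned by Cloudflare for origin timeouts.
TIMEOUT_CODES = {504, 522, 524}


class SyncError(Exception):
    """Hypothetical error carrying the HTTP status code of a failed sync."""

    def __init__(self, code):
        super().__init__(f"sync failed with HTTP {code}")
        self.code = code


async def sync_forever(sync_once):
    """Call `sync_once()` in a loop, downgrading gateway timeouts to a short log line."""
    while True:
        try:
            await sync_once()
        except asyncio.CancelledError:
            raise
        except SyncError as err:
            if err.code in TIMEOUT_CODES:
                _LOGGER.info("Sync timeout (code: %d), retrying", err.code)
                continue
            _LOGGER.exception("Sync error")
        except Exception:  # keep the loop alive on unexpected errors
            _LOGGER.exception("Sync error")
```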
gh_patches_debug_5925 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-2002 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Clarify UI around content warnings
A few problems with content warnings:
- [ ] The button is icon-only and hard to find for visual users in the compose UI (I think it's fine for screen reader users):
<img width="360" alt="Screen Shot 2021-09-18 at 11 11 48 AM" src="https://user-images.githubusercontent.com/1807695/133898456-ec77f0d2-febd-428f-aa70-4702e8576d31.png">
- [x] ~When content warnings are active in the compose box, they aren't clearly labeled (again, for visual users only):~ (fixed by #1422)
<img width="357" alt="Screen Shot 2021-09-18 at 11 12 44 AM" src="https://user-images.githubusercontent.com/1807695/133898461-71879dc4-7a69-4280-a33f-7607746b8454.png">
- [x] #1430
<img width="779" alt="Screen Shot 2021-09-18 at 11 13 09 AM" src="https://user-images.githubusercontent.com/1807695/133898481-c63000d9-1ffd-4742-89ec-8e0558921c30.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/views/status.py`
Content:
```
1 """ what are we here for if not for posting """
2 import re
3 from urllib.parse import urlparse
4
5 from django.contrib.auth.decorators import login_required
6 from django.core.validators import URLValidator
7 from django.core.exceptions import ValidationError
8 from django.http import HttpResponse, HttpResponseBadRequest, Http404
9 from django.shortcuts import get_object_or_404, redirect
10 from django.template.response import TemplateResponse
11 from django.utils import timezone
12 from django.utils.decorators import method_decorator
13 from django.views import View
14 from django.views.decorators.http import require_POST
15
16 from markdown import markdown
17 from bookwyrm import forms, models
18 from bookwyrm.sanitize_html import InputHtmlParser
19 from bookwyrm.settings import DOMAIN
20 from bookwyrm.utils import regex
21 from .helpers import handle_remote_webfinger, is_api_request
22 from .helpers import load_date_in_user_tz_as_utc
23
24
25 # pylint: disable= no-self-use
26 @method_decorator(login_required, name="dispatch")
27 class EditStatus(View):
28 """the view for *posting*"""
29
30 def get(self, request, status_id): # pylint: disable=unused-argument
31 """load the edit panel"""
32 status = get_object_or_404(
33 models.Status.objects.select_subclasses(), id=status_id
34 )
35 status.raise_not_editable(request.user)
36
37 status_type = "reply" if status.reply_parent else status.status_type.lower()
38 data = {
39 "type": status_type,
40 "book": getattr(status, "book", None),
41 "draft": status,
42 }
43 return TemplateResponse(request, "compose.html", data)
44
45
46 # pylint: disable= no-self-use
47 @method_decorator(login_required, name="dispatch")
48 class CreateStatus(View):
49 """the view for *posting*"""
50
51 def get(self, request, status_type): # pylint: disable=unused-argument
52 """compose view (...not used?)"""
53 book = get_object_or_404(models.Edition, id=request.GET.get("book"))
54 data = {"book": book}
55 return TemplateResponse(request, "compose.html", data)
56
57 # pylint: disable=too-many-branches
58 def post(self, request, status_type, existing_status_id=None):
59 """create status of whatever type"""
60 created = not existing_status_id
61 existing_status = None
62 if existing_status_id:
63 existing_status = get_object_or_404(
64 models.Status.objects.select_subclasses(), id=existing_status_id
65 )
66 existing_status.raise_not_editable(request.user)
67 existing_status.edited_date = timezone.now()
68
69 status_type = status_type[0].upper() + status_type[1:]
70
71 try:
72 form = getattr(forms, f"{status_type}Form")(
73 request.POST, instance=existing_status
74 )
75 except AttributeError:
76 return HttpResponseBadRequest()
77 if not form.is_valid():
78 if is_api_request(request):
79 return HttpResponse(status=500)
80 return redirect(request.headers.get("Referer", "/"))
81
82 status = form.save(commit=False)
83 # save the plain, unformatted version of the status for future editing
84 status.raw_content = status.content
85 if hasattr(status, "quote"):
86 status.raw_quote = status.quote
87
88 if not status.sensitive and status.content_warning:
89 # the cw text field remains populated when you click "remove"
90 status.content_warning = None
91 status.save(broadcast=False)
92
93 # inspect the text for user tags
94 content = status.content
95 for (mention_text, mention_user) in find_mentions(content):
96 # add them to status mentions fk
97 status.mention_users.add(mention_user)
98
99 # turn the mention into a link
100 content = re.sub(
101 rf"{mention_text}([^@]|$)",
102 rf'<a href="{mention_user.remote_id}">{mention_text}</a>\g<1>',
103 content,
104 )
105 # add reply parent to mentions
106 if status.reply_parent:
107 status.mention_users.add(status.reply_parent.user)
108
109 # deduplicate mentions
110 status.mention_users.set(set(status.mention_users.all()))
111
112 # don't apply formatting to generated notes
113 if not isinstance(status, models.GeneratedNote) and content:
114 status.content = to_markdown(content)
115 # do apply formatting to quotes
116 if hasattr(status, "quote"):
117 status.quote = to_markdown(status.quote)
118
119 status.save(created=created)
120
121 # update a readthrough, if needed
122 if bool(request.POST.get("id")):
123 try:
124 edit_readthrough(request)
125 except Http404:
126 pass
127
128 if is_api_request(request):
129 return HttpResponse()
130 return redirect("/")
131
132
133 @method_decorator(login_required, name="dispatch")
134 class DeleteStatus(View):
135 """tombstone that bad boy"""
136
137 def post(self, request, status_id):
138 """delete and tombstone a status"""
139 status = get_object_or_404(models.Status, id=status_id)
140
141 # don't let people delete other people's statuses
142 status.raise_not_deletable(request.user)
143
144 # perform deletion
145 status.delete()
146 return redirect(request.headers.get("Referer", "/"))
147
148
149 @login_required
150 @require_POST
151 def update_progress(request, book_id): # pylint: disable=unused-argument
152 """Either it's just a progress update, or it's a comment with a progress update"""
153 if request.POST.get("post-status"):
154 return CreateStatus.as_view()(request, "comment")
155 return edit_readthrough(request)
156
157
158 @login_required
159 @require_POST
160 def edit_readthrough(request):
161 """can't use the form because the dates are too finnicky"""
162 # TODO: remove this, it duplicates the code in the ReadThrough view
163 readthrough = get_object_or_404(models.ReadThrough, id=request.POST.get("id"))
164 readthrough.raise_not_editable(request.user)
165
166 readthrough.start_date = load_date_in_user_tz_as_utc(
167 request.POST.get("start_date"), request.user
168 )
169 readthrough.finish_date = load_date_in_user_tz_as_utc(
170 request.POST.get("finish_date"), request.user
171 )
172
173 progress = request.POST.get("progress")
174 try:
175 progress = int(progress)
176 readthrough.progress = progress
177 except (ValueError, TypeError):
178 pass
179
180 progress_mode = request.POST.get("progress_mode")
181 try:
182 progress_mode = models.ProgressMode(progress_mode)
183 readthrough.progress_mode = progress_mode
184 except ValueError:
185 pass
186
187 readthrough.save()
188
189 # record the progress update individually
190 # use default now for date field
191 readthrough.create_update()
192
193 if is_api_request(request):
194 return HttpResponse()
195 return redirect(request.headers.get("Referer", "/"))
196
197
198 def find_mentions(content):
199 """detect @mentions in raw status content"""
200 if not content:
201 return
202 for match in re.finditer(regex.STRICT_USERNAME, content):
203 username = match.group().strip().split("@")[1:]
204 if len(username) == 1:
205 # this looks like a local user (@user), fill in the domain
206 username.append(DOMAIN)
207 username = "@".join(username)
208
209 mention_user = handle_remote_webfinger(username)
210 if not mention_user:
211 # we can ignore users we don't know about
212 continue
213 yield (match.group(), mention_user)
214
215
216 def format_links(content):
217 """detect and format links"""
218 validator = URLValidator()
219 formatted_content = ""
220 split_content = re.split(r"(\s+)", content)
221
222 for potential_link in split_content:
223 if not potential_link:
224 continue
225 wrapped = _wrapped(potential_link)
226 if wrapped:
227 wrapper_close = potential_link[-1]
228 formatted_content += potential_link[0]
229 potential_link = potential_link[1:-1]
230
231 try:
232 # raises an error on anything that's not a valid link
233 validator(potential_link)
234
235 # use everything but the scheme in the presentation of the link
236 url = urlparse(potential_link)
237 link = url.netloc + url.path + url.params
238 if url.query != "":
239 link += "?" + url.query
240 if url.fragment != "":
241 link += "#" + url.fragment
242
243 formatted_content += f'<a href="{potential_link}">{link}</a>'
244 except (ValidationError, UnicodeError):
245 formatted_content += potential_link
246
247 if wrapped:
248 formatted_content += wrapper_close
249
250 return formatted_content
251
252
253 def _wrapped(text):
254 """check if a line of text is wrapped"""
255 wrappers = [("(", ")"), ("[", "]"), ("{", "}")]
256 for wrapper in wrappers:
257 if text[0] == wrapper[0] and text[-1] == wrapper[-1]:
258 return True
259 return False
260
261
262 def to_markdown(content):
263 """catch links and convert to markdown"""
264 content = format_links(content)
265 content = markdown(content)
266 # sanitize resulting html
267 sanitizer = InputHtmlParser()
268 sanitizer.feed(content)
269 return sanitizer.get_output()
270
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bookwyrm/views/status.py b/bookwyrm/views/status.py
--- a/bookwyrm/views/status.py
+++ b/bookwyrm/views/status.py
@@ -85,9 +85,7 @@
if hasattr(status, "quote"):
status.raw_quote = status.quote
- if not status.sensitive and status.content_warning:
- # the cw text field remains populated when you click "remove"
- status.content_warning = None
+ status.sensitive = status.content_warning not in [None, ""]
status.save(broadcast=False)
# inspect the text for user tags
| {"golden_diff": "diff --git a/bookwyrm/views/status.py b/bookwyrm/views/status.py\n--- a/bookwyrm/views/status.py\n+++ b/bookwyrm/views/status.py\n@@ -85,9 +85,7 @@\n if hasattr(status, \"quote\"):\n status.raw_quote = status.quote\n \n- if not status.sensitive and status.content_warning:\n- # the cw text field remains populated when you click \"remove\"\n- status.content_warning = None\n+ status.sensitive = status.content_warning not in [None, \"\"]\n status.save(broadcast=False)\n \n # inspect the text for user tags\n", "issue": "Clarify UI around content warnings\nA few problems with content warnings:\r\n - [ ] The button is icon-only and hard to find for visual users in the compose UI (I think it's fine for screen reader users):\r\n<img width=\"360\" alt=\"Screen Shot 2021-09-18 at 11 11 48 AM\" src=\"https://user-images.githubusercontent.com/1807695/133898456-ec77f0d2-febd-428f-aa70-4702e8576d31.png\">\r\n\r\n - [x] ~When content warnings are active in the compose box, they aren't clearly labeled (again, for visual users only):~ (fixed by #1422)\r\n<img width=\"357\" alt=\"Screen Shot 2021-09-18 at 11 12 44 AM\" src=\"https://user-images.githubusercontent.com/1807695/133898461-71879dc4-7a69-4280-a33f-7607746b8454.png\">\r\n\r\n- [x] #1430\r\n<img width=\"779\" alt=\"Screen Shot 2021-09-18 at 11 13 09 AM\" src=\"https://user-images.githubusercontent.com/1807695/133898481-c63000d9-1ffd-4742-89ec-8e0558921c30.png\">\r\n\n", "before_files": [{"content": "\"\"\" what are we here for if not for posting \"\"\"\nimport re\nfrom urllib.parse import urlparse\n\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.validators import URLValidator\nfrom django.core.exceptions import ValidationError\nfrom django.http import HttpResponse, HttpResponseBadRequest, Http404\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils import timezone\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\nfrom django.views.decorators.http import require_POST\n\nfrom markdown import markdown\nfrom bookwyrm import forms, models\nfrom bookwyrm.sanitize_html import InputHtmlParser\nfrom bookwyrm.settings import DOMAIN\nfrom bookwyrm.utils import regex\nfrom .helpers import handle_remote_webfinger, is_api_request\nfrom .helpers import load_date_in_user_tz_as_utc\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\nclass EditStatus(View):\n \"\"\"the view for *posting*\"\"\"\n\n def get(self, request, status_id): # pylint: disable=unused-argument\n \"\"\"load the edit panel\"\"\"\n status = get_object_or_404(\n models.Status.objects.select_subclasses(), id=status_id\n )\n status.raise_not_editable(request.user)\n\n status_type = \"reply\" if status.reply_parent else status.status_type.lower()\n data = {\n \"type\": status_type,\n \"book\": getattr(status, \"book\", None),\n \"draft\": status,\n }\n return TemplateResponse(request, \"compose.html\", data)\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\nclass CreateStatus(View):\n \"\"\"the view for *posting*\"\"\"\n\n def get(self, request, status_type): # pylint: disable=unused-argument\n \"\"\"compose view (...not used?)\"\"\"\n book = get_object_or_404(models.Edition, id=request.GET.get(\"book\"))\n data = {\"book\": book}\n return TemplateResponse(request, \"compose.html\", data)\n\n # pylint: disable=too-many-branches\n def post(self, request, 
status_type, existing_status_id=None):\n \"\"\"create status of whatever type\"\"\"\n created = not existing_status_id\n existing_status = None\n if existing_status_id:\n existing_status = get_object_or_404(\n models.Status.objects.select_subclasses(), id=existing_status_id\n )\n existing_status.raise_not_editable(request.user)\n existing_status.edited_date = timezone.now()\n\n status_type = status_type[0].upper() + status_type[1:]\n\n try:\n form = getattr(forms, f\"{status_type}Form\")(\n request.POST, instance=existing_status\n )\n except AttributeError:\n return HttpResponseBadRequest()\n if not form.is_valid():\n if is_api_request(request):\n return HttpResponse(status=500)\n return redirect(request.headers.get(\"Referer\", \"/\"))\n\n status = form.save(commit=False)\n # save the plain, unformatted version of the status for future editing\n status.raw_content = status.content\n if hasattr(status, \"quote\"):\n status.raw_quote = status.quote\n\n if not status.sensitive and status.content_warning:\n # the cw text field remains populated when you click \"remove\"\n status.content_warning = None\n status.save(broadcast=False)\n\n # inspect the text for user tags\n content = status.content\n for (mention_text, mention_user) in find_mentions(content):\n # add them to status mentions fk\n status.mention_users.add(mention_user)\n\n # turn the mention into a link\n content = re.sub(\n rf\"{mention_text}([^@]|$)\",\n rf'<a href=\"{mention_user.remote_id}\">{mention_text}</a>\\g<1>',\n content,\n )\n # add reply parent to mentions\n if status.reply_parent:\n status.mention_users.add(status.reply_parent.user)\n\n # deduplicate mentions\n status.mention_users.set(set(status.mention_users.all()))\n\n # don't apply formatting to generated notes\n if not isinstance(status, models.GeneratedNote) and content:\n status.content = to_markdown(content)\n # do apply formatting to quotes\n if hasattr(status, \"quote\"):\n status.quote = to_markdown(status.quote)\n\n status.save(created=created)\n\n # update a readthrough, if needed\n if bool(request.POST.get(\"id\")):\n try:\n edit_readthrough(request)\n except Http404:\n pass\n\n if is_api_request(request):\n return HttpResponse()\n return redirect(\"/\")\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass DeleteStatus(View):\n \"\"\"tombstone that bad boy\"\"\"\n\n def post(self, request, status_id):\n \"\"\"delete and tombstone a status\"\"\"\n status = get_object_or_404(models.Status, id=status_id)\n\n # don't let people delete other people's statuses\n status.raise_not_deletable(request.user)\n\n # perform deletion\n status.delete()\n return redirect(request.headers.get(\"Referer\", \"/\"))\n\n\n@login_required\n@require_POST\ndef update_progress(request, book_id): # pylint: disable=unused-argument\n \"\"\"Either it's just a progress update, or it's a comment with a progress update\"\"\"\n if request.POST.get(\"post-status\"):\n return CreateStatus.as_view()(request, \"comment\")\n return edit_readthrough(request)\n\n\n@login_required\n@require_POST\ndef edit_readthrough(request):\n \"\"\"can't use the form because the dates are too finnicky\"\"\"\n # TODO: remove this, it duplicates the code in the ReadThrough view\n readthrough = get_object_or_404(models.ReadThrough, id=request.POST.get(\"id\"))\n readthrough.raise_not_editable(request.user)\n\n readthrough.start_date = load_date_in_user_tz_as_utc(\n request.POST.get(\"start_date\"), request.user\n )\n readthrough.finish_date = load_date_in_user_tz_as_utc(\n 
request.POST.get(\"finish_date\"), request.user\n )\n\n progress = request.POST.get(\"progress\")\n try:\n progress = int(progress)\n readthrough.progress = progress\n except (ValueError, TypeError):\n pass\n\n progress_mode = request.POST.get(\"progress_mode\")\n try:\n progress_mode = models.ProgressMode(progress_mode)\n readthrough.progress_mode = progress_mode\n except ValueError:\n pass\n\n readthrough.save()\n\n # record the progress update individually\n # use default now for date field\n readthrough.create_update()\n\n if is_api_request(request):\n return HttpResponse()\n return redirect(request.headers.get(\"Referer\", \"/\"))\n\n\ndef find_mentions(content):\n \"\"\"detect @mentions in raw status content\"\"\"\n if not content:\n return\n for match in re.finditer(regex.STRICT_USERNAME, content):\n username = match.group().strip().split(\"@\")[1:]\n if len(username) == 1:\n # this looks like a local user (@user), fill in the domain\n username.append(DOMAIN)\n username = \"@\".join(username)\n\n mention_user = handle_remote_webfinger(username)\n if not mention_user:\n # we can ignore users we don't know about\n continue\n yield (match.group(), mention_user)\n\n\ndef format_links(content):\n \"\"\"detect and format links\"\"\"\n validator = URLValidator()\n formatted_content = \"\"\n split_content = re.split(r\"(\\s+)\", content)\n\n for potential_link in split_content:\n if not potential_link:\n continue\n wrapped = _wrapped(potential_link)\n if wrapped:\n wrapper_close = potential_link[-1]\n formatted_content += potential_link[0]\n potential_link = potential_link[1:-1]\n\n try:\n # raises an error on anything that's not a valid link\n validator(potential_link)\n\n # use everything but the scheme in the presentation of the link\n url = urlparse(potential_link)\n link = url.netloc + url.path + url.params\n if url.query != \"\":\n link += \"?\" + url.query\n if url.fragment != \"\":\n link += \"#\" + url.fragment\n\n formatted_content += f'<a href=\"{potential_link}\">{link}</a>'\n except (ValidationError, UnicodeError):\n formatted_content += potential_link\n\n if wrapped:\n formatted_content += wrapper_close\n\n return formatted_content\n\n\ndef _wrapped(text):\n \"\"\"check if a line of text is wrapped\"\"\"\n wrappers = [(\"(\", \")\"), (\"[\", \"]\"), (\"{\", \"}\")]\n for wrapper in wrappers:\n if text[0] == wrapper[0] and text[-1] == wrapper[-1]:\n return True\n return False\n\n\ndef to_markdown(content):\n \"\"\"catch links and convert to markdown\"\"\"\n content = format_links(content)\n content = markdown(content)\n # sanitize resulting html\n sanitizer = InputHtmlParser()\n sanitizer.feed(content)\n return sanitizer.get_output()\n", "path": "bookwyrm/views/status.py"}], "after_files": [{"content": "\"\"\" what are we here for if not for posting \"\"\"\nimport re\nfrom urllib.parse import urlparse\n\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.validators import URLValidator\nfrom django.core.exceptions import ValidationError\nfrom django.http import HttpResponse, HttpResponseBadRequest, Http404\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils import timezone\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\nfrom django.views.decorators.http import require_POST\n\nfrom markdown import markdown\nfrom bookwyrm import forms, models\nfrom bookwyrm.sanitize_html import InputHtmlParser\nfrom bookwyrm.settings import DOMAIN\nfrom 
bookwyrm.utils import regex\nfrom .helpers import handle_remote_webfinger, is_api_request\nfrom .helpers import load_date_in_user_tz_as_utc\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\nclass EditStatus(View):\n \"\"\"the view for *posting*\"\"\"\n\n def get(self, request, status_id): # pylint: disable=unused-argument\n \"\"\"load the edit panel\"\"\"\n status = get_object_or_404(\n models.Status.objects.select_subclasses(), id=status_id\n )\n status.raise_not_editable(request.user)\n\n status_type = \"reply\" if status.reply_parent else status.status_type.lower()\n data = {\n \"type\": status_type,\n \"book\": getattr(status, \"book\", None),\n \"draft\": status,\n }\n return TemplateResponse(request, \"compose.html\", data)\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\nclass CreateStatus(View):\n \"\"\"the view for *posting*\"\"\"\n\n def get(self, request, status_type): # pylint: disable=unused-argument\n \"\"\"compose view (...not used?)\"\"\"\n book = get_object_or_404(models.Edition, id=request.GET.get(\"book\"))\n data = {\"book\": book}\n return TemplateResponse(request, \"compose.html\", data)\n\n # pylint: disable=too-many-branches\n def post(self, request, status_type, existing_status_id=None):\n \"\"\"create status of whatever type\"\"\"\n created = not existing_status_id\n existing_status = None\n if existing_status_id:\n existing_status = get_object_or_404(\n models.Status.objects.select_subclasses(), id=existing_status_id\n )\n existing_status.raise_not_editable(request.user)\n existing_status.edited_date = timezone.now()\n\n status_type = status_type[0].upper() + status_type[1:]\n\n try:\n form = getattr(forms, f\"{status_type}Form\")(\n request.POST, instance=existing_status\n )\n except AttributeError:\n return HttpResponseBadRequest()\n if not form.is_valid():\n if is_api_request(request):\n return HttpResponse(status=500)\n return redirect(request.headers.get(\"Referer\", \"/\"))\n\n status = form.save(commit=False)\n # save the plain, unformatted version of the status for future editing\n status.raw_content = status.content\n if hasattr(status, \"quote\"):\n status.raw_quote = status.quote\n\n status.sensitive = status.content_warning not in [None, \"\"]\n status.save(broadcast=False)\n\n # inspect the text for user tags\n content = status.content\n for (mention_text, mention_user) in find_mentions(content):\n # add them to status mentions fk\n status.mention_users.add(mention_user)\n\n # turn the mention into a link\n content = re.sub(\n rf\"{mention_text}([^@]|$)\",\n rf'<a href=\"{mention_user.remote_id}\">{mention_text}</a>\\g<1>',\n content,\n )\n # add reply parent to mentions\n if status.reply_parent:\n status.mention_users.add(status.reply_parent.user)\n\n # deduplicate mentions\n status.mention_users.set(set(status.mention_users.all()))\n\n # don't apply formatting to generated notes\n if not isinstance(status, models.GeneratedNote) and content:\n status.content = to_markdown(content)\n # do apply formatting to quotes\n if hasattr(status, \"quote\"):\n status.quote = to_markdown(status.quote)\n\n status.save(created=created)\n\n # update a readthrough, if needed\n if bool(request.POST.get(\"id\")):\n try:\n edit_readthrough(request)\n except Http404:\n pass\n\n if is_api_request(request):\n return HttpResponse()\n return redirect(\"/\")\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass DeleteStatus(View):\n \"\"\"tombstone that bad boy\"\"\"\n\n def 
post(self, request, status_id):\n \"\"\"delete and tombstone a status\"\"\"\n status = get_object_or_404(models.Status, id=status_id)\n\n # don't let people delete other people's statuses\n status.raise_not_deletable(request.user)\n\n # perform deletion\n status.delete()\n return redirect(request.headers.get(\"Referer\", \"/\"))\n\n\n@login_required\n@require_POST\ndef update_progress(request, book_id): # pylint: disable=unused-argument\n \"\"\"Either it's just a progress update, or it's a comment with a progress update\"\"\"\n if request.POST.get(\"post-status\"):\n return CreateStatus.as_view()(request, \"comment\")\n return edit_readthrough(request)\n\n\n@login_required\n@require_POST\ndef edit_readthrough(request):\n \"\"\"can't use the form because the dates are too finnicky\"\"\"\n # TODO: remove this, it duplicates the code in the ReadThrough view\n readthrough = get_object_or_404(models.ReadThrough, id=request.POST.get(\"id\"))\n readthrough.raise_not_editable(request.user)\n\n readthrough.start_date = load_date_in_user_tz_as_utc(\n request.POST.get(\"start_date\"), request.user\n )\n readthrough.finish_date = load_date_in_user_tz_as_utc(\n request.POST.get(\"finish_date\"), request.user\n )\n\n progress = request.POST.get(\"progress\")\n try:\n progress = int(progress)\n readthrough.progress = progress\n except (ValueError, TypeError):\n pass\n\n progress_mode = request.POST.get(\"progress_mode\")\n try:\n progress_mode = models.ProgressMode(progress_mode)\n readthrough.progress_mode = progress_mode\n except ValueError:\n pass\n\n readthrough.save()\n\n # record the progress update individually\n # use default now for date field\n readthrough.create_update()\n\n if is_api_request(request):\n return HttpResponse()\n return redirect(request.headers.get(\"Referer\", \"/\"))\n\n\ndef find_mentions(content):\n \"\"\"detect @mentions in raw status content\"\"\"\n if not content:\n return\n for match in re.finditer(regex.STRICT_USERNAME, content):\n username = match.group().strip().split(\"@\")[1:]\n if len(username) == 1:\n # this looks like a local user (@user), fill in the domain\n username.append(DOMAIN)\n username = \"@\".join(username)\n\n mention_user = handle_remote_webfinger(username)\n if not mention_user:\n # we can ignore users we don't know about\n continue\n yield (match.group(), mention_user)\n\n\ndef format_links(content):\n \"\"\"detect and format links\"\"\"\n validator = URLValidator()\n formatted_content = \"\"\n split_content = re.split(r\"(\\s+)\", content)\n\n for potential_link in split_content:\n if not potential_link:\n continue\n wrapped = _wrapped(potential_link)\n if wrapped:\n wrapper_close = potential_link[-1]\n formatted_content += potential_link[0]\n potential_link = potential_link[1:-1]\n\n try:\n # raises an error on anything that's not a valid link\n validator(potential_link)\n\n # use everything but the scheme in the presentation of the link\n url = urlparse(potential_link)\n link = url.netloc + url.path + url.params\n if url.query != \"\":\n link += \"?\" + url.query\n if url.fragment != \"\":\n link += \"#\" + url.fragment\n\n formatted_content += f'<a href=\"{potential_link}\">{link}</a>'\n except (ValidationError, UnicodeError):\n formatted_content += potential_link\n\n if wrapped:\n formatted_content += wrapper_close\n\n return formatted_content\n\n\ndef _wrapped(text):\n \"\"\"check if a line of text is wrapped\"\"\"\n wrappers = [(\"(\", \")\"), (\"[\", \"]\"), (\"{\", \"}\")]\n for wrapper in wrappers:\n if text[0] == wrapper[0] and text[-1] 
== wrapper[-1]:\n return True\n return False\n\n\ndef to_markdown(content):\n \"\"\"catch links and convert to markdown\"\"\"\n content = format_links(content)\n content = markdown(content)\n # sanitize resulting html\n sanitizer = InputHtmlParser()\n sanitizer.feed(content)\n return sanitizer.get_output()\n", "path": "bookwyrm/views/status.py"}]} | 3,290 | 132 |
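The one-line fix in this row replaces "clear the warning text when the sensitive flag is off" with "derive the sensitive flag from whether any warning text was supplied". That normalisation is easy to express and unit-test as a pure helper; the function below is an illustrative sketch, not code from the repository.

```python
def derive_sensitive(content_warning):
    """Return True only when a non-empty content warning string was supplied."""
    return content_warning not in (None, "")


assert derive_sensitive("spoilers for chapter 3") is True
assert derive_sensitive("") is False       # the CW field is left populated but blank
assert derive_sensitive(None) is False     # no CW field submitted at all
```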
gh_patches_debug_39219 | rasdani/github-patches | git_diff | ethereum__consensus-specs-1202 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
is_genesis_trigger fails to verify deposit merkle branch
## Issue
Deposits are processed against empty state with undefined `eth1_data` while `genesis_eth1_data` should be in place.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `deposit_contract/contracts/validator_registration.v.py`
Content:
```
1 MIN_DEPOSIT_AMOUNT: constant(uint256) = 1000000000 # Gwei
2 DEPOSIT_CONTRACT_TREE_DEPTH: constant(uint256) = 32
3 MAX_DEPOSIT_COUNT: constant(uint256) = 4294967295 # 2**DEPOSIT_CONTRACT_TREE_DEPTH - 1
4 PUBKEY_LENGTH: constant(uint256) = 48 # bytes
5 WITHDRAWAL_CREDENTIALS_LENGTH: constant(uint256) = 32 # bytes
6 AMOUNT_LENGTH: constant(uint256) = 8 # bytes
7 SIGNATURE_LENGTH: constant(uint256) = 96 # bytes
8
9 Deposit: event({
10 pubkey: bytes[48],
11 withdrawal_credentials: bytes[32],
12 amount: bytes[8],
13 signature: bytes[96],
14 index: bytes[8],
15 })
16
17 branch: bytes32[DEPOSIT_CONTRACT_TREE_DEPTH]
18 deposit_count: uint256
19
20 # Compute hashes in empty sparse Merkle tree
21 zero_hashes: bytes32[DEPOSIT_CONTRACT_TREE_DEPTH]
22 @public
23 def __init__():
24 for i in range(DEPOSIT_CONTRACT_TREE_DEPTH - 1):
25 self.zero_hashes[i + 1] = sha256(concat(self.zero_hashes[i], self.zero_hashes[i]))
26
27
28 @private
29 @constant
30 def to_little_endian_64(value: uint256) -> bytes[8]:
31 # Reversing bytes using bitwise uint256 manipulations
32 # Note: array accesses of bytes[] are not currently supported in Vyper
33 # Note: this function is only called when `value < 2**64`
34 y: uint256 = 0
35 x: uint256 = value
36 for _ in range(8):
37 y = shift(y, 8)
38 y = y + bitwise_and(x, 255)
39 x = shift(x, -8)
40 return slice(convert(y, bytes32), start=24, len=8)
41
42
43 @public
44 @constant
45 def get_deposit_root() -> bytes32:
46 node: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000
47 size: uint256 = self.deposit_count
48 for height in range(DEPOSIT_CONTRACT_TREE_DEPTH):
49 if bitwise_and(size, 1) == 1: # More gas efficient than `size % 2 == 1`
50 node = sha256(concat(self.branch[height], node))
51 else:
52 node = sha256(concat(node, self.zero_hashes[height]))
53 size /= 2
54 return node
55
56
57 @public
58 @constant
59 def get_deposit_count() -> bytes[8]:
60 return self.to_little_endian_64(self.deposit_count)
61
62
63 @payable
64 @public
65 def deposit(pubkey: bytes[PUBKEY_LENGTH],
66 withdrawal_credentials: bytes[WITHDRAWAL_CREDENTIALS_LENGTH],
67 signature: bytes[SIGNATURE_LENGTH]):
68 # Avoid overflowing the Merkle tree (and prevent edge case in computing `self.branch`)
69 assert self.deposit_count < MAX_DEPOSIT_COUNT
70
71 # Validate deposit data
72 deposit_amount: uint256 = msg.value / as_wei_value(1, "gwei")
73 assert deposit_amount >= MIN_DEPOSIT_AMOUNT
74 assert len(pubkey) == PUBKEY_LENGTH
75 assert len(withdrawal_credentials) == WITHDRAWAL_CREDENTIALS_LENGTH
76 assert len(signature) == SIGNATURE_LENGTH
77
78 # Emit `Deposit` log
79 amount: bytes[8] = self.to_little_endian_64(deposit_amount)
80 log.Deposit(pubkey, withdrawal_credentials, amount, signature, self.to_little_endian_64(self.deposit_count))
81
82 # Compute `DepositData` root
83 zero_bytes32: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000
84 pubkey_root: bytes32 = sha256(concat(pubkey, slice(zero_bytes32, start=0, len=64 - PUBKEY_LENGTH)))
85 signature_root: bytes32 = sha256(concat(
86 sha256(slice(signature, start=0, len=64)),
87 sha256(concat(slice(signature, start=64, len=SIGNATURE_LENGTH - 64), zero_bytes32)),
88 ))
89 node: bytes32 = sha256(concat(
90 sha256(concat(pubkey_root, withdrawal_credentials)),
91 sha256(concat(amount, slice(zero_bytes32, start=0, len=32 - AMOUNT_LENGTH), signature_root)),
92 ))
93
94 # Add `DepositData` root to Merkle tree (update a single `branch` node)
95 self.deposit_count += 1
96 size: uint256 = self.deposit_count
97 for height in range(DEPOSIT_CONTRACT_TREE_DEPTH):
98 if bitwise_and(size, 1) == 1: # More gas efficient than `size % 2 == 1`
99 self.branch[height] = node
100 break
101 node = sha256(concat(self.branch[height], node))
102 size /= 2
103
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/deposit_contract/contracts/validator_registration.v.py b/deposit_contract/contracts/validator_registration.v.py
--- a/deposit_contract/contracts/validator_registration.v.py
+++ b/deposit_contract/contracts/validator_registration.v.py
@@ -6,7 +6,7 @@
AMOUNT_LENGTH: constant(uint256) = 8 # bytes
SIGNATURE_LENGTH: constant(uint256) = 96 # bytes
-Deposit: event({
+DepositEvent: event({
pubkey: bytes[48],
withdrawal_credentials: bytes[32],
amount: bytes[8],
@@ -42,8 +42,9 @@
@public
@constant
-def get_deposit_root() -> bytes32:
- node: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000
+def get_hash_tree_root() -> bytes32:
+ zero_bytes32: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000
+ node: bytes32 = zero_bytes32
size: uint256 = self.deposit_count
for height in range(DEPOSIT_CONTRACT_TREE_DEPTH):
if bitwise_and(size, 1) == 1: # More gas efficient than `size % 2 == 1`
@@ -51,7 +52,7 @@
else:
node = sha256(concat(node, self.zero_hashes[height]))
size /= 2
- return node
+ return sha256(concat(node, self.to_little_endian_64(self.deposit_count), slice(zero_bytes32, start=0, len=24)))
@public
@@ -75,11 +76,11 @@
assert len(withdrawal_credentials) == WITHDRAWAL_CREDENTIALS_LENGTH
assert len(signature) == SIGNATURE_LENGTH
- # Emit `Deposit` log
+ # Emit `DepositEvent` log
amount: bytes[8] = self.to_little_endian_64(deposit_amount)
- log.Deposit(pubkey, withdrawal_credentials, amount, signature, self.to_little_endian_64(self.deposit_count))
+ log.DepositEvent(pubkey, withdrawal_credentials, amount, signature, self.to_little_endian_64(self.deposit_count))
- # Compute `DepositData` root
+ # Compute `DepositData` hash tree root
zero_bytes32: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000
pubkey_root: bytes32 = sha256(concat(pubkey, slice(zero_bytes32, start=0, len=64 - PUBKEY_LENGTH)))
signature_root: bytes32 = sha256(concat(
@@ -91,7 +92,7 @@
sha256(concat(amount, slice(zero_bytes32, start=0, len=32 - AMOUNT_LENGTH), signature_root)),
))
- # Add `DepositData` root to Merkle tree (update a single `branch` node)
+ # Add `DepositData` hash tree root to Merkle tree (update a single `branch` node)
self.deposit_count += 1
size: uint256 = self.deposit_count
for height in range(DEPOSIT_CONTRACT_TREE_DEPTH):
| {"golden_diff": "diff --git a/deposit_contract/contracts/validator_registration.v.py b/deposit_contract/contracts/validator_registration.v.py\n--- a/deposit_contract/contracts/validator_registration.v.py\n+++ b/deposit_contract/contracts/validator_registration.v.py\n@@ -6,7 +6,7 @@\n AMOUNT_LENGTH: constant(uint256) = 8 # bytes\n SIGNATURE_LENGTH: constant(uint256) = 96 # bytes\n \n-Deposit: event({\n+DepositEvent: event({\n pubkey: bytes[48],\n withdrawal_credentials: bytes[32],\n amount: bytes[8],\n@@ -42,8 +42,9 @@\n \n @public\n @constant\n-def get_deposit_root() -> bytes32:\n- node: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000\n+def get_hash_tree_root() -> bytes32:\n+ zero_bytes32: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000\n+ node: bytes32 = zero_bytes32\n size: uint256 = self.deposit_count\n for height in range(DEPOSIT_CONTRACT_TREE_DEPTH):\n if bitwise_and(size, 1) == 1: # More gas efficient than `size % 2 == 1`\n@@ -51,7 +52,7 @@\n else:\n node = sha256(concat(node, self.zero_hashes[height]))\n size /= 2\n- return node\n+ return sha256(concat(node, self.to_little_endian_64(self.deposit_count), slice(zero_bytes32, start=0, len=24)))\n \n \n @public\n@@ -75,11 +76,11 @@\n assert len(withdrawal_credentials) == WITHDRAWAL_CREDENTIALS_LENGTH\n assert len(signature) == SIGNATURE_LENGTH\n \n- # Emit `Deposit` log\n+ # Emit `DepositEvent` log\n amount: bytes[8] = self.to_little_endian_64(deposit_amount)\n- log.Deposit(pubkey, withdrawal_credentials, amount, signature, self.to_little_endian_64(self.deposit_count))\n+ log.DepositEvent(pubkey, withdrawal_credentials, amount, signature, self.to_little_endian_64(self.deposit_count))\n \n- # Compute `DepositData` root\n+ # Compute `DepositData` hash tree root\n zero_bytes32: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000\n pubkey_root: bytes32 = sha256(concat(pubkey, slice(zero_bytes32, start=0, len=64 - PUBKEY_LENGTH)))\n signature_root: bytes32 = sha256(concat(\n@@ -91,7 +92,7 @@\n sha256(concat(amount, slice(zero_bytes32, start=0, len=32 - AMOUNT_LENGTH), signature_root)),\n ))\n \n- # Add `DepositData` root to Merkle tree (update a single `branch` node)\n+ # Add `DepositData` hash tree root to Merkle tree (update a single `branch` node)\n self.deposit_count += 1\n size: uint256 = self.deposit_count\n for height in range(DEPOSIT_CONTRACT_TREE_DEPTH):\n", "issue": "is_genesis_trigger fails to verify deposit merkle branch\n## Issue\r\nDeposits are processed against empty state with undefined `eth1_data` while `genesis_eth1_data` should be in place.\r\n\n", "before_files": [{"content": "MIN_DEPOSIT_AMOUNT: constant(uint256) = 1000000000 # Gwei\nDEPOSIT_CONTRACT_TREE_DEPTH: constant(uint256) = 32\nMAX_DEPOSIT_COUNT: constant(uint256) = 4294967295 # 2**DEPOSIT_CONTRACT_TREE_DEPTH - 1\nPUBKEY_LENGTH: constant(uint256) = 48 # bytes\nWITHDRAWAL_CREDENTIALS_LENGTH: constant(uint256) = 32 # bytes\nAMOUNT_LENGTH: constant(uint256) = 8 # bytes\nSIGNATURE_LENGTH: constant(uint256) = 96 # bytes\n\nDeposit: event({\n pubkey: bytes[48],\n withdrawal_credentials: bytes[32],\n amount: bytes[8],\n signature: bytes[96],\n index: bytes[8],\n})\n\nbranch: bytes32[DEPOSIT_CONTRACT_TREE_DEPTH]\ndeposit_count: uint256\n\n# Compute hashes in empty sparse Merkle tree\nzero_hashes: bytes32[DEPOSIT_CONTRACT_TREE_DEPTH]\n@public\ndef __init__():\n for i in range(DEPOSIT_CONTRACT_TREE_DEPTH - 1):\n self.zero_hashes[i + 1] = sha256(concat(self.zero_hashes[i], 
self.zero_hashes[i]))\n\n\n@private\n@constant\ndef to_little_endian_64(value: uint256) -> bytes[8]:\n # Reversing bytes using bitwise uint256 manipulations\n # Note: array accesses of bytes[] are not currently supported in Vyper\n # Note: this function is only called when `value < 2**64`\n y: uint256 = 0\n x: uint256 = value\n for _ in range(8):\n y = shift(y, 8)\n y = y + bitwise_and(x, 255)\n x = shift(x, -8)\n return slice(convert(y, bytes32), start=24, len=8)\n\n\n@public\n@constant\ndef get_deposit_root() -> bytes32:\n node: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000\n size: uint256 = self.deposit_count\n for height in range(DEPOSIT_CONTRACT_TREE_DEPTH):\n if bitwise_and(size, 1) == 1: # More gas efficient than `size % 2 == 1`\n node = sha256(concat(self.branch[height], node))\n else:\n node = sha256(concat(node, self.zero_hashes[height]))\n size /= 2\n return node\n\n\n@public\n@constant\ndef get_deposit_count() -> bytes[8]:\n return self.to_little_endian_64(self.deposit_count)\n\n\n@payable\n@public\ndef deposit(pubkey: bytes[PUBKEY_LENGTH],\n withdrawal_credentials: bytes[WITHDRAWAL_CREDENTIALS_LENGTH],\n signature: bytes[SIGNATURE_LENGTH]):\n # Avoid overflowing the Merkle tree (and prevent edge case in computing `self.branch`)\n assert self.deposit_count < MAX_DEPOSIT_COUNT\n\n # Validate deposit data\n deposit_amount: uint256 = msg.value / as_wei_value(1, \"gwei\")\n assert deposit_amount >= MIN_DEPOSIT_AMOUNT\n assert len(pubkey) == PUBKEY_LENGTH\n assert len(withdrawal_credentials) == WITHDRAWAL_CREDENTIALS_LENGTH\n assert len(signature) == SIGNATURE_LENGTH\n\n # Emit `Deposit` log\n amount: bytes[8] = self.to_little_endian_64(deposit_amount)\n log.Deposit(pubkey, withdrawal_credentials, amount, signature, self.to_little_endian_64(self.deposit_count))\n\n # Compute `DepositData` root\n zero_bytes32: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000\n pubkey_root: bytes32 = sha256(concat(pubkey, slice(zero_bytes32, start=0, len=64 - PUBKEY_LENGTH)))\n signature_root: bytes32 = sha256(concat(\n sha256(slice(signature, start=0, len=64)),\n sha256(concat(slice(signature, start=64, len=SIGNATURE_LENGTH - 64), zero_bytes32)),\n ))\n node: bytes32 = sha256(concat(\n sha256(concat(pubkey_root, withdrawal_credentials)),\n sha256(concat(amount, slice(zero_bytes32, start=0, len=32 - AMOUNT_LENGTH), signature_root)),\n ))\n\n # Add `DepositData` root to Merkle tree (update a single `branch` node)\n self.deposit_count += 1\n size: uint256 = self.deposit_count\n for height in range(DEPOSIT_CONTRACT_TREE_DEPTH):\n if bitwise_and(size, 1) == 1: # More gas efficient than `size % 2 == 1`\n self.branch[height] = node\n break\n node = sha256(concat(self.branch[height], node))\n size /= 2\n\n", "path": "deposit_contract/contracts/validator_registration.v.py"}], "after_files": [{"content": "MIN_DEPOSIT_AMOUNT: constant(uint256) = 1000000000 # Gwei\nDEPOSIT_CONTRACT_TREE_DEPTH: constant(uint256) = 32\nMAX_DEPOSIT_COUNT: constant(uint256) = 4294967295 # 2**DEPOSIT_CONTRACT_TREE_DEPTH - 1\nPUBKEY_LENGTH: constant(uint256) = 48 # bytes\nWITHDRAWAL_CREDENTIALS_LENGTH: constant(uint256) = 32 # bytes\nAMOUNT_LENGTH: constant(uint256) = 8 # bytes\nSIGNATURE_LENGTH: constant(uint256) = 96 # bytes\n\nDepositEvent: event({\n pubkey: bytes[48],\n withdrawal_credentials: bytes[32],\n amount: bytes[8],\n signature: bytes[96],\n index: bytes[8],\n})\n\nbranch: bytes32[DEPOSIT_CONTRACT_TREE_DEPTH]\ndeposit_count: uint256\n\n# Compute hashes in empty 
sparse Merkle tree\nzero_hashes: bytes32[DEPOSIT_CONTRACT_TREE_DEPTH]\n@public\ndef __init__():\n for i in range(DEPOSIT_CONTRACT_TREE_DEPTH - 1):\n self.zero_hashes[i + 1] = sha256(concat(self.zero_hashes[i], self.zero_hashes[i]))\n\n\n@private\n@constant\ndef to_little_endian_64(value: uint256) -> bytes[8]:\n # Reversing bytes using bitwise uint256 manipulations\n # Note: array accesses of bytes[] are not currently supported in Vyper\n # Note: this function is only called when `value < 2**64`\n y: uint256 = 0\n x: uint256 = value\n for _ in range(8):\n y = shift(y, 8)\n y = y + bitwise_and(x, 255)\n x = shift(x, -8)\n return slice(convert(y, bytes32), start=24, len=8)\n\n\n@public\n@constant\ndef get_hash_tree_root() -> bytes32:\n zero_bytes32: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000\n node: bytes32 = zero_bytes32\n size: uint256 = self.deposit_count\n for height in range(DEPOSIT_CONTRACT_TREE_DEPTH):\n if bitwise_and(size, 1) == 1: # More gas efficient than `size % 2 == 1`\n node = sha256(concat(self.branch[height], node))\n else:\n node = sha256(concat(node, self.zero_hashes[height]))\n size /= 2\n return sha256(concat(node, self.to_little_endian_64(self.deposit_count), slice(zero_bytes32, start=0, len=24)))\n\n\n@public\n@constant\ndef get_deposit_count() -> bytes[8]:\n return self.to_little_endian_64(self.deposit_count)\n\n\n@payable\n@public\ndef deposit(pubkey: bytes[PUBKEY_LENGTH],\n withdrawal_credentials: bytes[WITHDRAWAL_CREDENTIALS_LENGTH],\n signature: bytes[SIGNATURE_LENGTH]):\n # Avoid overflowing the Merkle tree (and prevent edge case in computing `self.branch`)\n assert self.deposit_count < MAX_DEPOSIT_COUNT\n\n # Validate deposit data\n deposit_amount: uint256 = msg.value / as_wei_value(1, \"gwei\")\n assert deposit_amount >= MIN_DEPOSIT_AMOUNT\n assert len(pubkey) == PUBKEY_LENGTH\n assert len(withdrawal_credentials) == WITHDRAWAL_CREDENTIALS_LENGTH\n assert len(signature) == SIGNATURE_LENGTH\n\n # Emit `DepositEvent` log\n amount: bytes[8] = self.to_little_endian_64(deposit_amount)\n log.DepositEvent(pubkey, withdrawal_credentials, amount, signature, self.to_little_endian_64(self.deposit_count))\n\n # Compute `DepositData` hash tree root\n zero_bytes32: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000\n pubkey_root: bytes32 = sha256(concat(pubkey, slice(zero_bytes32, start=0, len=64 - PUBKEY_LENGTH)))\n signature_root: bytes32 = sha256(concat(\n sha256(slice(signature, start=0, len=64)),\n sha256(concat(slice(signature, start=64, len=SIGNATURE_LENGTH - 64), zero_bytes32)),\n ))\n node: bytes32 = sha256(concat(\n sha256(concat(pubkey_root, withdrawal_credentials)),\n sha256(concat(amount, slice(zero_bytes32, start=0, len=32 - AMOUNT_LENGTH), signature_root)),\n ))\n\n # Add `DepositData` hash tree root to Merkle tree (update a single `branch` node)\n self.deposit_count += 1\n size: uint256 = self.deposit_count\n for height in range(DEPOSIT_CONTRACT_TREE_DEPTH):\n if bitwise_and(size, 1) == 1: # More gas efficient than `size % 2 == 1`\n self.branch[height] = node\n break\n node = sha256(concat(self.branch[height], node))\n size /= 2\n\n", "path": "deposit_contract/contracts/validator_registration.v.py"}]} | 1,762 | 893 |
gh_patches_debug_37446 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1143 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
State Machine types missing Map type
The [current list](https://github.com/aws-cloudformation/cfn-python-lint/blob/master/src/cfnlint/rules/resources/stepfunctions/StateMachine.py#L53) of valid types for state machine tasks is missing the recently-announced [`Map`](https://aws.amazon.com/blogs/aws/new-step-functions-support-for-dynamic-parallelism/) type
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/resources/stepfunctions/StateMachine.py`
Content:
```
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 import json
18 import six
19 from cfnlint.rules import CloudFormationLintRule
20 from cfnlint.rules import RuleMatch
21
22
23 class StateMachine(CloudFormationLintRule):
24 """Check State Machine Definition"""
25 id = 'E2532'
26 shortdesc = 'Check State Machine Definition for proper syntax'
27 description = 'Check the State Machine String Definition to make sure its JSON. ' \
28 'Validate basic syntax of the file to determine validity.'
29 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-stepfunctions-statemachine.html'
30 tags = ['resources', 'stepfunctions']
31
32 def __init__(self):
33 """Init"""
34 super(StateMachine, self).__init__()
35 self.resource_property_types.append('AWS::StepFunctions::StateMachine')
36
37 def _check_state_json(self, def_json, state_name, path):
38 """Check State JSON Definition"""
39 matches = []
40
41 # https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-common-fields.html
42 common_state_keys = [
43 'Next',
44 'End',
45 'Type',
46 'Comment',
47 'InputPath',
48 'OutputPath',
49 ]
50 common_state_required_keys = [
51 'Type',
52 ]
53 state_key_types = {
54 'Pass': ['Result', 'ResultPath', 'Parameters'],
55 'Task': ['Resource', 'ResultPath', 'Retry', 'Catch', 'TimeoutSeconds', 'Parameters', 'HeartbeatSeconds'],
56 'Choice': ['Choices', 'Default'],
57 'Wait': ['Seconds', 'Timestamp', 'SecondsPath', 'TimestampPath'],
58 'Succeed': [],
59 'Fail': ['Cause', 'Error'],
60 'Parallel': ['Branches', 'ResultPath', 'Retry', 'Catch']
61 }
62 state_required_types = {
63 'Pass': [],
64 'Task': ['Resource'],
65 'Choice': ['Choices'],
66 'Wait': [],
67 'Succeed': [],
68 'Fail': [],
69 'Parallel': ['Branches']
70 }
71
72 for req_key in common_state_required_keys:
73 if req_key not in def_json:
74 message = 'State Machine Definition required key (%s) for State (%s) is missing' % (req_key, state_name)
75 matches.append(RuleMatch(path, message))
76 return matches
77
78 state_type = def_json.get('Type')
79
80 if state_type in state_key_types:
81 for state_key, _ in def_json.items():
82 if state_key not in common_state_keys + state_key_types.get(state_type, []):
83 message = 'State Machine Definition key (%s) for State (%s) of Type (%s) is not valid' % (state_key, state_name, state_type)
84 matches.append(RuleMatch(path, message))
85 for req_key in common_state_required_keys + state_required_types.get(state_type, []):
86 if req_key not in def_json:
87 message = 'State Machine Definition required key (%s) for State (%s) of Type (%s) is missing' % (req_key, state_name, state_type)
88 matches.append(RuleMatch(path, message))
89 return matches
90 else:
91 message = 'State Machine Definition Type (%s) is not valid' % (state_type)
92 matches.append(RuleMatch(path, message))
93
94 return matches
95
96 def _check_definition_json(self, def_json, path):
97 """Check JSON Definition"""
98 matches = []
99
100 top_level_keys = [
101 'Comment',
102 'StartAt',
103 'TimeoutSeconds',
104 'Version',
105 'States'
106 ]
107 top_level_required_keys = [
108 'StartAt',
109 'States'
110 ]
111 for top_key, _ in def_json.items():
112 if top_key not in top_level_keys:
113 message = 'State Machine Definition key (%s) is not valid' % top_key
114 matches.append(RuleMatch(path, message))
115
116 for req_key in top_level_required_keys:
117 if req_key not in def_json:
118 message = 'State Machine Definition required key (%s) is missing' % req_key
119 matches.append(RuleMatch(path, message))
120
121 for state_name, state_value in def_json.get('States', {}).items():
122 matches.extend(self._check_state_json(state_value, state_name, path))
123 return matches
124
125 def check_value(self, value, path, fail_on_loads=True):
126 """Check Definition Value"""
127 matches = []
128 try:
129 def_json = json.loads(value)
130 # pylint: disable=W0703
131 except Exception as err:
132 if fail_on_loads:
133 message = 'State Machine Definition needs to be formatted as JSON. Error %s' % err
134 matches.append(RuleMatch(path, message))
135 return matches
136
137 self.logger.debug('State Machine definition could not be parsed. Skipping')
138 return matches
139
140 matches.extend(self._check_definition_json(def_json, path))
141 return matches
142
143 def check_sub(self, value, path):
144 """Check Sub Object"""
145 matches = []
146 if isinstance(value, list):
147 matches.extend(self.check_value(value[0], path, False))
148 elif isinstance(value, six.string_types):
149 matches.extend(self.check_value(value, path, False))
150
151 return matches
152
153 def match_resource_properties(self, properties, _, path, cfn):
154 """Check CloudFormation Properties"""
155 matches = []
156
157 matches.extend(
158 cfn.check_value(
159 obj=properties, key='DefinitionString',
160 path=path[:],
161 check_value=self.check_value,
162 check_sub=self.check_sub
163 ))
164
165 return matches
166
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/resources/stepfunctions/StateMachine.py b/src/cfnlint/rules/resources/stepfunctions/StateMachine.py
--- a/src/cfnlint/rules/resources/stepfunctions/StateMachine.py
+++ b/src/cfnlint/rules/resources/stepfunctions/StateMachine.py
@@ -52,7 +52,10 @@
]
state_key_types = {
'Pass': ['Result', 'ResultPath', 'Parameters'],
- 'Task': ['Resource', 'ResultPath', 'Retry', 'Catch', 'TimeoutSeconds', 'Parameters', 'HeartbeatSeconds'],
+ 'Task': ['Resource', 'ResultPath', 'Retry', 'Catch',
+ 'TimeoutSeconds', 'Parameters', 'HeartbeatSeconds'],
+ 'Map': ['MaxConcurrency', 'Iterator', 'ItemsPath', 'ResultPath',
+ 'Retry', 'Catch'],
'Choice': ['Choices', 'Default'],
'Wait': ['Seconds', 'Timestamp', 'SecondsPath', 'TimestampPath'],
'Succeed': [],
@@ -71,7 +74,8 @@
for req_key in common_state_required_keys:
if req_key not in def_json:
- message = 'State Machine Definition required key (%s) for State (%s) is missing' % (req_key, state_name)
+ message = 'State Machine Definition required key (%s) for State (%s) is missing' % (
+ req_key, state_name)
matches.append(RuleMatch(path, message))
return matches
@@ -80,11 +84,13 @@
if state_type in state_key_types:
for state_key, _ in def_json.items():
if state_key not in common_state_keys + state_key_types.get(state_type, []):
- message = 'State Machine Definition key (%s) for State (%s) of Type (%s) is not valid' % (state_key, state_name, state_type)
+ message = 'State Machine Definition key (%s) for State (%s) of Type (%s) is not valid' % (
+ state_key, state_name, state_type)
matches.append(RuleMatch(path, message))
for req_key in common_state_required_keys + state_required_types.get(state_type, []):
if req_key not in def_json:
- message = 'State Machine Definition required key (%s) for State (%s) of Type (%s) is missing' % (req_key, state_name, state_type)
+ message = 'State Machine Definition required key (%s) for State (%s) of Type (%s) is missing' % (
+ req_key, state_name, state_type)
matches.append(RuleMatch(path, message))
return matches
else:
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/stepfunctions/StateMachine.py b/src/cfnlint/rules/resources/stepfunctions/StateMachine.py\n--- a/src/cfnlint/rules/resources/stepfunctions/StateMachine.py\n+++ b/src/cfnlint/rules/resources/stepfunctions/StateMachine.py\n@@ -52,7 +52,10 @@\n ]\n state_key_types = {\n 'Pass': ['Result', 'ResultPath', 'Parameters'],\n- 'Task': ['Resource', 'ResultPath', 'Retry', 'Catch', 'TimeoutSeconds', 'Parameters', 'HeartbeatSeconds'],\n+ 'Task': ['Resource', 'ResultPath', 'Retry', 'Catch',\n+ 'TimeoutSeconds', 'Parameters', 'HeartbeatSeconds'],\n+ 'Map': ['MaxConcurrency', 'Iterator', 'ItemsPath', 'ResultPath',\n+ 'Retry', 'Catch'],\n 'Choice': ['Choices', 'Default'],\n 'Wait': ['Seconds', 'Timestamp', 'SecondsPath', 'TimestampPath'],\n 'Succeed': [],\n@@ -71,7 +74,8 @@\n \n for req_key in common_state_required_keys:\n if req_key not in def_json:\n- message = 'State Machine Definition required key (%s) for State (%s) is missing' % (req_key, state_name)\n+ message = 'State Machine Definition required key (%s) for State (%s) is missing' % (\n+ req_key, state_name)\n matches.append(RuleMatch(path, message))\n return matches\n \n@@ -80,11 +84,13 @@\n if state_type in state_key_types:\n for state_key, _ in def_json.items():\n if state_key not in common_state_keys + state_key_types.get(state_type, []):\n- message = 'State Machine Definition key (%s) for State (%s) of Type (%s) is not valid' % (state_key, state_name, state_type)\n+ message = 'State Machine Definition key (%s) for State (%s) of Type (%s) is not valid' % (\n+ state_key, state_name, state_type)\n matches.append(RuleMatch(path, message))\n for req_key in common_state_required_keys + state_required_types.get(state_type, []):\n if req_key not in def_json:\n- message = 'State Machine Definition required key (%s) for State (%s) of Type (%s) is missing' % (req_key, state_name, state_type)\n+ message = 'State Machine Definition required key (%s) for State (%s) of Type (%s) is missing' % (\n+ req_key, state_name, state_type)\n matches.append(RuleMatch(path, message))\n return matches\n else:\n", "issue": "State Machine types missing Map type\nThe [current list](https://github.com/aws-cloudformation/cfn-python-lint/blob/master/src/cfnlint/rules/resources/stepfunctions/StateMachine.py#L53) of valid types for state machine tasks is missing the recently-announced [`Map`](https://aws.amazon.com/blogs/aws/new-step-functions-support-for-dynamic-parallelism/) type\n", "before_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport json\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass StateMachine(CloudFormationLintRule):\n \"\"\"Check State Machine Definition\"\"\"\n id = 'E2532'\n shortdesc = 'Check State Machine Definition for proper syntax'\n description = 'Check the State Machine String Definition to make sure its JSON. ' \\\n 'Validate basic syntax of the file to determine validity.'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-stepfunctions-statemachine.html'\n tags = ['resources', 'stepfunctions']\n\n def __init__(self):\n \"\"\"Init\"\"\"\n super(StateMachine, self).__init__()\n self.resource_property_types.append('AWS::StepFunctions::StateMachine')\n\n def _check_state_json(self, def_json, state_name, path):\n \"\"\"Check State JSON Definition\"\"\"\n matches = []\n\n # https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-common-fields.html\n common_state_keys = [\n 'Next',\n 'End',\n 'Type',\n 'Comment',\n 'InputPath',\n 'OutputPath',\n ]\n common_state_required_keys = [\n 'Type',\n ]\n state_key_types = {\n 'Pass': ['Result', 'ResultPath', 'Parameters'],\n 'Task': ['Resource', 'ResultPath', 'Retry', 'Catch', 'TimeoutSeconds', 'Parameters', 'HeartbeatSeconds'],\n 'Choice': ['Choices', 'Default'],\n 'Wait': ['Seconds', 'Timestamp', 'SecondsPath', 'TimestampPath'],\n 'Succeed': [],\n 'Fail': ['Cause', 'Error'],\n 'Parallel': ['Branches', 'ResultPath', 'Retry', 'Catch']\n }\n state_required_types = {\n 'Pass': [],\n 'Task': ['Resource'],\n 'Choice': ['Choices'],\n 'Wait': [],\n 'Succeed': [],\n 'Fail': [],\n 'Parallel': ['Branches']\n }\n\n for req_key in common_state_required_keys:\n if req_key not in def_json:\n message = 'State Machine Definition required key (%s) for State (%s) is missing' % (req_key, state_name)\n matches.append(RuleMatch(path, message))\n return matches\n\n state_type = def_json.get('Type')\n\n if state_type in state_key_types:\n for state_key, _ in def_json.items():\n if state_key not in common_state_keys + state_key_types.get(state_type, []):\n message = 'State Machine Definition key (%s) for State (%s) of Type (%s) is not valid' % (state_key, state_name, state_type)\n matches.append(RuleMatch(path, message))\n for req_key in common_state_required_keys + state_required_types.get(state_type, []):\n if req_key not in def_json:\n message = 'State Machine Definition required key (%s) for State (%s) of Type (%s) is missing' % (req_key, state_name, state_type)\n matches.append(RuleMatch(path, message))\n return matches\n else:\n message = 'State Machine Definition Type (%s) is not valid' % (state_type)\n matches.append(RuleMatch(path, message))\n\n return matches\n\n def _check_definition_json(self, def_json, path):\n \"\"\"Check JSON Definition\"\"\"\n matches = []\n\n top_level_keys = [\n 'Comment',\n 'StartAt',\n 'TimeoutSeconds',\n 'Version',\n 'States'\n ]\n top_level_required_keys = [\n 'StartAt',\n 'States'\n ]\n for top_key, _ in def_json.items():\n if top_key not in top_level_keys:\n message = 'State Machine Definition key (%s) is not valid' % top_key\n matches.append(RuleMatch(path, message))\n\n for req_key in top_level_required_keys:\n if req_key not in def_json:\n message = 
'State Machine Definition required key (%s) is missing' % req_key\n matches.append(RuleMatch(path, message))\n\n for state_name, state_value in def_json.get('States', {}).items():\n matches.extend(self._check_state_json(state_value, state_name, path))\n return matches\n\n def check_value(self, value, path, fail_on_loads=True):\n \"\"\"Check Definition Value\"\"\"\n matches = []\n try:\n def_json = json.loads(value)\n # pylint: disable=W0703\n except Exception as err:\n if fail_on_loads:\n message = 'State Machine Definition needs to be formatted as JSON. Error %s' % err\n matches.append(RuleMatch(path, message))\n return matches\n\n self.logger.debug('State Machine definition could not be parsed. Skipping')\n return matches\n\n matches.extend(self._check_definition_json(def_json, path))\n return matches\n\n def check_sub(self, value, path):\n \"\"\"Check Sub Object\"\"\"\n matches = []\n if isinstance(value, list):\n matches.extend(self.check_value(value[0], path, False))\n elif isinstance(value, six.string_types):\n matches.extend(self.check_value(value, path, False))\n\n return matches\n\n def match_resource_properties(self, properties, _, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n\n matches.extend(\n cfn.check_value(\n obj=properties, key='DefinitionString',\n path=path[:],\n check_value=self.check_value,\n check_sub=self.check_sub\n ))\n\n return matches\n", "path": "src/cfnlint/rules/resources/stepfunctions/StateMachine.py"}], "after_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport json\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass StateMachine(CloudFormationLintRule):\n \"\"\"Check State Machine Definition\"\"\"\n id = 'E2532'\n shortdesc = 'Check State Machine Definition for proper syntax'\n description = 'Check the State Machine String Definition to make sure its JSON. 
' \\\n 'Validate basic syntax of the file to determine validity.'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-stepfunctions-statemachine.html'\n tags = ['resources', 'stepfunctions']\n\n def __init__(self):\n \"\"\"Init\"\"\"\n super(StateMachine, self).__init__()\n self.resource_property_types.append('AWS::StepFunctions::StateMachine')\n\n def _check_state_json(self, def_json, state_name, path):\n \"\"\"Check State JSON Definition\"\"\"\n matches = []\n\n # https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-common-fields.html\n common_state_keys = [\n 'Next',\n 'End',\n 'Type',\n 'Comment',\n 'InputPath',\n 'OutputPath',\n ]\n common_state_required_keys = [\n 'Type',\n ]\n state_key_types = {\n 'Pass': ['Result', 'ResultPath', 'Parameters'],\n 'Task': ['Resource', 'ResultPath', 'Retry', 'Catch',\n 'TimeoutSeconds', 'Parameters', 'HeartbeatSeconds'],\n 'Map': ['MaxConcurrency', 'Iterator', 'ItemsPath', 'ResultPath',\n 'Retry', 'Catch'],\n 'Choice': ['Choices', 'Default'],\n 'Wait': ['Seconds', 'Timestamp', 'SecondsPath', 'TimestampPath'],\n 'Succeed': [],\n 'Fail': ['Cause', 'Error'],\n 'Parallel': ['Branches', 'ResultPath', 'Retry', 'Catch']\n }\n state_required_types = {\n 'Pass': [],\n 'Task': ['Resource'],\n 'Choice': ['Choices'],\n 'Wait': [],\n 'Succeed': [],\n 'Fail': [],\n 'Parallel': ['Branches']\n }\n\n for req_key in common_state_required_keys:\n if req_key not in def_json:\n message = 'State Machine Definition required key (%s) for State (%s) is missing' % (\n req_key, state_name)\n matches.append(RuleMatch(path, message))\n return matches\n\n state_type = def_json.get('Type')\n\n if state_type in state_key_types:\n for state_key, _ in def_json.items():\n if state_key not in common_state_keys + state_key_types.get(state_type, []):\n message = 'State Machine Definition key (%s) for State (%s) of Type (%s) is not valid' % (\n state_key, state_name, state_type)\n matches.append(RuleMatch(path, message))\n for req_key in common_state_required_keys + state_required_types.get(state_type, []):\n if req_key not in def_json:\n message = 'State Machine Definition required key (%s) for State (%s) of Type (%s) is missing' % (\n req_key, state_name, state_type)\n matches.append(RuleMatch(path, message))\n return matches\n else:\n message = 'State Machine Definition Type (%s) is not valid' % (state_type)\n matches.append(RuleMatch(path, message))\n\n return matches\n\n def _check_definition_json(self, def_json, path):\n \"\"\"Check JSON Definition\"\"\"\n matches = []\n\n top_level_keys = [\n 'Comment',\n 'StartAt',\n 'TimeoutSeconds',\n 'Version',\n 'States'\n ]\n top_level_required_keys = [\n 'StartAt',\n 'States'\n ]\n for top_key, _ in def_json.items():\n if top_key not in top_level_keys:\n message = 'State Machine Definition key (%s) is not valid' % top_key\n matches.append(RuleMatch(path, message))\n\n for req_key in top_level_required_keys:\n if req_key not in def_json:\n message = 'State Machine Definition required key (%s) is missing' % req_key\n matches.append(RuleMatch(path, message))\n\n for state_name, state_value in def_json.get('States', {}).items():\n matches.extend(self._check_state_json(state_value, state_name, path))\n return matches\n\n def check_value(self, value, path, fail_on_loads=True):\n \"\"\"Check Definition Value\"\"\"\n matches = []\n try:\n def_json = json.loads(value)\n # pylint: disable=W0703\n except Exception as err:\n if fail_on_loads:\n message = 'State Machine Definition needs to be 
formatted as JSON. Error %s' % err\n matches.append(RuleMatch(path, message))\n return matches\n\n self.logger.debug('State Machine definition could not be parsed. Skipping')\n return matches\n\n matches.extend(self._check_definition_json(def_json, path))\n return matches\n\n def check_sub(self, value, path):\n \"\"\"Check Sub Object\"\"\"\n matches = []\n if isinstance(value, list):\n matches.extend(self.check_value(value[0], path, False))\n elif isinstance(value, six.string_types):\n matches.extend(self.check_value(value, path, False))\n\n return matches\n\n def match_resource_properties(self, properties, _, path, cfn):\n \"\"\"Check CloudFormation Properties\"\"\"\n matches = []\n\n matches.extend(\n cfn.check_value(\n obj=properties, key='DefinitionString',\n path=path[:],\n check_value=self.check_value,\n check_sub=self.check_sub\n ))\n\n return matches\n", "path": "src/cfnlint/rules/resources/stepfunctions/StateMachine.py"}]} | 2,137 | 585 |
gh_patches_debug_17280 | rasdani/github-patches | git_diff | SciTools__cartopy-1983 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add better title for projections.rst?
The title of [projections.rst](https://github.com/SciTools/cartopy/blob/main/docs/source/reference/projections.rst) is currently "PlateCarree" - I think this is sub-optimal (I had trouble finding the overview...). I think the title should be Projections and all other titles moved one level down.
https://github.com/SciTools/cartopy/blob/main/docs/source/reference/projections.rst

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/make_projection.py`
Content:
```
1 # Copyright Cartopy Contributors
2 #
3 # This file is part of Cartopy and is released under the LGPL license.
4 # See COPYING and COPYING.LESSER in the root of the repository for full
5 # licensing details.
6
7 import os
8 import inspect
9 import textwrap
10 import numpy as np
11 import cartopy.crs as ccrs
12
13 #: A dictionary to allow examples to use non-default parameters to the CRS
14 #: constructor.
15 SPECIFIC_PROJECTION_KWARGS = {
16 ccrs.RotatedPole: {'pole_longitude': 177.5, 'pole_latitude': 37.5},
17 ccrs.AzimuthalEquidistant: {'central_latitude': 90},
18 ccrs.NearsidePerspective: {
19 'central_longitude': -3.53, 'central_latitude': 50.72,
20 'satellite_height': 10.0e6},
21 ccrs.OSGB: {'approx': False},
22 ccrs.OSNI: {'approx': False},
23 ccrs.TransverseMercator: {'approx': False},
24 }
25
26
27 def plate_carree_plot():
28 import matplotlib.pyplot as plt
29 import cartopy.crs as ccrs
30
31 nplots = 2
32
33 fig = plt.figure(figsize=(6, 6))
34
35 for i in range(0, nplots):
36 central_longitude = 0 if i == 0 else 180
37 ax = fig.add_subplot(
38 nplots, 1, i+1,
39 projection=ccrs.PlateCarree(central_longitude=central_longitude))
40 ax.coastlines(resolution='110m')
41 ax.gridlines()
42
43
44 def igh_plot():
45 import matplotlib.pyplot as plt
46 import cartopy.crs as ccrs
47
48 fig = plt.figure(figsize=(6.9228, 6))
49
50 ax1 = fig.add_subplot(2, 1, 1,
51 projection=ccrs.InterruptedGoodeHomolosine(
52 emphasis='land'))
53 ax1.coastlines(resolution='110m')
54 ax1.gridlines()
55
56 ax2 = fig.add_subplot(2, 1, 2,
57 projection=ccrs.InterruptedGoodeHomolosine(
58 central_longitude=-160, emphasis='ocean'))
59 ax2.coastlines(resolution='110m')
60 ax2.gridlines()
61
62
63 def utm_plot():
64 import matplotlib.pyplot as plt
65 import cartopy.crs as ccrs
66
67 nplots = 60
68
69 fig = plt.figure(figsize=(10, 3))
70
71 for i in range(0, nplots):
72 ax = fig.add_subplot(1, nplots, i+1,
73 projection=ccrs.UTM(zone=i+1,
74 southern_hemisphere=True))
75 ax.coastlines(resolution='110m')
76 ax.gridlines()
77
78
79 MULTI_PLOT_CASES = {
80 ccrs.PlateCarree: plate_carree_plot,
81 ccrs.InterruptedGoodeHomolosine: igh_plot,
82 ccrs.UTM: utm_plot,
83 }
84
85
86 COASTLINE_RESOLUTION = {ccrs.OSNI: '10m',
87 ccrs.OSGB: '50m',
88 ccrs.EuroPP: '50m'}
89
90
91 PRJ_SORT_ORDER = {'PlateCarree': 1,
92 'Mercator': 2, 'Mollweide': 2, 'Robinson': 2,
93 'TransverseMercator': 2, 'LambertCylindrical': 2,
94 'LambertConformal': 2, 'EquidistantConic': 2,
95 'Stereographic': 2, 'Miller': 2,
96 'Orthographic': 2, 'UTM': 2, 'AlbersEqualArea': 2,
97 'AzimuthalEquidistant': 2, 'Sinusoidal': 2,
98 'InterruptedGoodeHomolosine': 3, 'RotatedPole': 3,
99 'OSGB': 4, 'EuroPP': 5,
100 'Geostationary': 6, 'NearsidePerspective': 7,
101 'EckertI': 8.1, 'EckertII': 8.2, 'EckertIII': 8.3,
102 'EckertIV': 8.4, 'EckertV': 8.5, 'EckertVI': 8.6}
103
104
105 def find_projections():
106 for obj_name, o in vars(ccrs).copy().items():
107 if isinstance(o, type) and issubclass(o, ccrs.Projection) and \
108 not obj_name.startswith('_') and obj_name not in ['Projection']:
109
110 yield o
111
112
113 def create_instance(prj_cls, instance_args):
114 name = prj_cls.__name__
115
116 # Format instance arguments into strings
117 instance_params = ',\n '.join(
118 f'{k}={v}'
119 for k, v in sorted(instance_args.items()))
120
121 if instance_params:
122 instance_params = '\n ' \
123 + instance_params
124
125 instance_creation_code = f'{name}({instance_params})'
126
127 prj_inst = prj(**instance_args)
128
129 return prj_inst, instance_creation_code
130
131
132 if __name__ == '__main__':
133 fname = os.path.join(os.path.dirname(__file__), 'source',
134 'reference', 'projections.rst')
135 table = open(fname, 'w')
136
137 notes = """
138 .. (comment): DO NOT EDIT this file.
139 .. It is auto-generated by running : cartopy/docs/make_projection.py
140 .. Please adjust by making changes there.
141 .. It is included in the repository only to aid detection of changes.
142
143 .. _cartopy_projections:
144
145 """
146 table.write(textwrap.dedent(notes))
147
148 def prj_class_sorter(cls):
149 return (PRJ_SORT_ORDER.get(cls.__name__, 100),
150 cls.__name__)
151
152 for prj in sorted(find_projections(), key=prj_class_sorter):
153 name = prj.__name__
154
155 table.write(name + '\n')
156 table.write('-' * len(name) + '\n\n')
157
158 table.write('.. autoclass:: cartopy.crs.%s\n' % name)
159
160 if prj not in MULTI_PLOT_CASES:
161 # Get instance arguments and number of plots
162 instance_args = SPECIFIC_PROJECTION_KWARGS.get(prj, {})
163
164 prj_inst, instance_repr = create_instance(prj, instance_args)
165
166 aspect = (np.diff(prj_inst.x_limits) /
167 np.diff(prj_inst.y_limits))[0]
168
169 width = 3 * aspect
170 width = f'{width:.4f}'.rstrip('0').rstrip('.')
171
172 # Generate plotting code
173 code = textwrap.dedent("""
174 .. plot::
175
176 import matplotlib.pyplot as plt
177 import cartopy.crs as ccrs
178
179 plt.figure(figsize=({width}, 3))
180 ax = plt.axes(projection=ccrs.{proj_constructor})
181 ax.coastlines(resolution={coastline_resolution!r})
182 ax.gridlines()
183
184
185 """).format(width=width,
186 proj_constructor=instance_repr,
187 coastline_resolution=COASTLINE_RESOLUTION.get(prj,
188 '110m'))
189
190 else:
191 func = MULTI_PLOT_CASES[prj]
192
193 lines = inspect.getsourcelines(func)
194 func_code = "".join(lines[0][1:])
195
196 code = textwrap.dedent("""
197 .. plot::
198
199 {func_code}
200
201 """).format(func_code=func_code)
202
203 table.write(code)
204
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/make_projection.py b/docs/make_projection.py
--- a/docs/make_projection.py
+++ b/docs/make_projection.py
@@ -134,7 +134,7 @@
'reference', 'projections.rst')
table = open(fname, 'w')
- notes = """
+ header = """
.. (comment): DO NOT EDIT this file.
.. It is auto-generated by running : cartopy/docs/make_projection.py
.. Please adjust by making changes there.
@@ -142,8 +142,13 @@
.. _cartopy_projections:
+ Cartopy projection list
+ =======================
+
+
"""
- table.write(textwrap.dedent(notes))
+
+ table.write(textwrap.dedent(header))
def prj_class_sorter(cls):
return (PRJ_SORT_ORDER.get(cls.__name__, 100),
| {"golden_diff": "diff --git a/docs/make_projection.py b/docs/make_projection.py\n--- a/docs/make_projection.py\n+++ b/docs/make_projection.py\n@@ -134,7 +134,7 @@\n 'reference', 'projections.rst')\n table = open(fname, 'w')\n \n- notes = \"\"\"\n+ header = \"\"\"\n .. (comment): DO NOT EDIT this file.\n .. It is auto-generated by running : cartopy/docs/make_projection.py\n .. Please adjust by making changes there.\n@@ -142,8 +142,13 @@\n \n .. _cartopy_projections:\n \n+ Cartopy projection list\n+ =======================\n+\n+\n \"\"\"\n- table.write(textwrap.dedent(notes))\n+\n+ table.write(textwrap.dedent(header))\n \n def prj_class_sorter(cls):\n return (PRJ_SORT_ORDER.get(cls.__name__, 100),\n", "issue": "Add better title for projections.rst?\nThe title of [projections.rst](https://github.com/SciTools/cartopy/blob/main/docs/source/reference/projections.rst) is currently \"PlateCarree\" - I think this is sub-optimal (I had trouble finding the overview...). I think the title should be Projections and all other titles moved one level down.\r\n\r\nhttps://github.com/SciTools/cartopy/blob/main/docs/source/reference/projections.rst\r\n\r\n\n", "before_files": [{"content": "# Copyright Cartopy Contributors\n#\n# This file is part of Cartopy and is released under the LGPL license.\n# See COPYING and COPYING.LESSER in the root of the repository for full\n# licensing details.\n\nimport os\nimport inspect\nimport textwrap\nimport numpy as np\nimport cartopy.crs as ccrs\n\n#: A dictionary to allow examples to use non-default parameters to the CRS\n#: constructor.\nSPECIFIC_PROJECTION_KWARGS = {\n ccrs.RotatedPole: {'pole_longitude': 177.5, 'pole_latitude': 37.5},\n ccrs.AzimuthalEquidistant: {'central_latitude': 90},\n ccrs.NearsidePerspective: {\n 'central_longitude': -3.53, 'central_latitude': 50.72,\n 'satellite_height': 10.0e6},\n ccrs.OSGB: {'approx': False},\n ccrs.OSNI: {'approx': False},\n ccrs.TransverseMercator: {'approx': False},\n}\n\n\ndef plate_carree_plot():\n import matplotlib.pyplot as plt\n import cartopy.crs as ccrs\n\n nplots = 2\n\n fig = plt.figure(figsize=(6, 6))\n\n for i in range(0, nplots):\n central_longitude = 0 if i == 0 else 180\n ax = fig.add_subplot(\n nplots, 1, i+1,\n projection=ccrs.PlateCarree(central_longitude=central_longitude))\n ax.coastlines(resolution='110m')\n ax.gridlines()\n\n\ndef igh_plot():\n import matplotlib.pyplot as plt\n import cartopy.crs as ccrs\n\n fig = plt.figure(figsize=(6.9228, 6))\n\n ax1 = fig.add_subplot(2, 1, 1,\n projection=ccrs.InterruptedGoodeHomolosine(\n emphasis='land'))\n ax1.coastlines(resolution='110m')\n ax1.gridlines()\n\n ax2 = fig.add_subplot(2, 1, 2,\n projection=ccrs.InterruptedGoodeHomolosine(\n central_longitude=-160, emphasis='ocean'))\n ax2.coastlines(resolution='110m')\n ax2.gridlines()\n\n\ndef utm_plot():\n import matplotlib.pyplot as plt\n import cartopy.crs as ccrs\n\n nplots = 60\n\n fig = plt.figure(figsize=(10, 3))\n\n for i in range(0, nplots):\n ax = fig.add_subplot(1, nplots, i+1,\n projection=ccrs.UTM(zone=i+1,\n southern_hemisphere=True))\n ax.coastlines(resolution='110m')\n ax.gridlines()\n\n\nMULTI_PLOT_CASES = {\n ccrs.PlateCarree: plate_carree_plot,\n ccrs.InterruptedGoodeHomolosine: igh_plot,\n ccrs.UTM: utm_plot,\n}\n\n\nCOASTLINE_RESOLUTION = {ccrs.OSNI: '10m',\n ccrs.OSGB: '50m',\n ccrs.EuroPP: '50m'}\n\n\nPRJ_SORT_ORDER = {'PlateCarree': 1,\n 'Mercator': 2, 'Mollweide': 2, 'Robinson': 2,\n 'TransverseMercator': 2, 'LambertCylindrical': 2,\n 'LambertConformal': 2, 'EquidistantConic': 2,\n 
'Stereographic': 2, 'Miller': 2,\n 'Orthographic': 2, 'UTM': 2, 'AlbersEqualArea': 2,\n 'AzimuthalEquidistant': 2, 'Sinusoidal': 2,\n 'InterruptedGoodeHomolosine': 3, 'RotatedPole': 3,\n 'OSGB': 4, 'EuroPP': 5,\n 'Geostationary': 6, 'NearsidePerspective': 7,\n 'EckertI': 8.1, 'EckertII': 8.2, 'EckertIII': 8.3,\n 'EckertIV': 8.4, 'EckertV': 8.5, 'EckertVI': 8.6}\n\n\ndef find_projections():\n for obj_name, o in vars(ccrs).copy().items():\n if isinstance(o, type) and issubclass(o, ccrs.Projection) and \\\n not obj_name.startswith('_') and obj_name not in ['Projection']:\n\n yield o\n\n\ndef create_instance(prj_cls, instance_args):\n name = prj_cls.__name__\n\n # Format instance arguments into strings\n instance_params = ',\\n '.join(\n f'{k}={v}'\n for k, v in sorted(instance_args.items()))\n\n if instance_params:\n instance_params = '\\n ' \\\n + instance_params\n\n instance_creation_code = f'{name}({instance_params})'\n\n prj_inst = prj(**instance_args)\n\n return prj_inst, instance_creation_code\n\n\nif __name__ == '__main__':\n fname = os.path.join(os.path.dirname(__file__), 'source',\n 'reference', 'projections.rst')\n table = open(fname, 'w')\n\n notes = \"\"\"\n .. (comment): DO NOT EDIT this file.\n .. It is auto-generated by running : cartopy/docs/make_projection.py\n .. Please adjust by making changes there.\n .. It is included in the repository only to aid detection of changes.\n\n .. _cartopy_projections:\n\n \"\"\"\n table.write(textwrap.dedent(notes))\n\n def prj_class_sorter(cls):\n return (PRJ_SORT_ORDER.get(cls.__name__, 100),\n cls.__name__)\n\n for prj in sorted(find_projections(), key=prj_class_sorter):\n name = prj.__name__\n\n table.write(name + '\\n')\n table.write('-' * len(name) + '\\n\\n')\n\n table.write('.. autoclass:: cartopy.crs.%s\\n' % name)\n\n if prj not in MULTI_PLOT_CASES:\n # Get instance arguments and number of plots\n instance_args = SPECIFIC_PROJECTION_KWARGS.get(prj, {})\n\n prj_inst, instance_repr = create_instance(prj, instance_args)\n\n aspect = (np.diff(prj_inst.x_limits) /\n np.diff(prj_inst.y_limits))[0]\n\n width = 3 * aspect\n width = f'{width:.4f}'.rstrip('0').rstrip('.')\n\n # Generate plotting code\n code = textwrap.dedent(\"\"\"\n .. plot::\n\n import matplotlib.pyplot as plt\n import cartopy.crs as ccrs\n\n plt.figure(figsize=({width}, 3))\n ax = plt.axes(projection=ccrs.{proj_constructor})\n ax.coastlines(resolution={coastline_resolution!r})\n ax.gridlines()\n\n\n \"\"\").format(width=width,\n proj_constructor=instance_repr,\n coastline_resolution=COASTLINE_RESOLUTION.get(prj,\n '110m'))\n\n else:\n func = MULTI_PLOT_CASES[prj]\n\n lines = inspect.getsourcelines(func)\n func_code = \"\".join(lines[0][1:])\n\n code = textwrap.dedent(\"\"\"\n .. 
plot::\n\n {func_code}\n\n \"\"\").format(func_code=func_code)\n\n table.write(code)\n", "path": "docs/make_projection.py"}], "after_files": [{"content": "# Copyright Cartopy Contributors\n#\n# This file is part of Cartopy and is released under the LGPL license.\n# See COPYING and COPYING.LESSER in the root of the repository for full\n# licensing details.\n\nimport os\nimport inspect\nimport textwrap\nimport numpy as np\nimport cartopy.crs as ccrs\n\n#: A dictionary to allow examples to use non-default parameters to the CRS\n#: constructor.\nSPECIFIC_PROJECTION_KWARGS = {\n ccrs.RotatedPole: {'pole_longitude': 177.5, 'pole_latitude': 37.5},\n ccrs.AzimuthalEquidistant: {'central_latitude': 90},\n ccrs.NearsidePerspective: {\n 'central_longitude': -3.53, 'central_latitude': 50.72,\n 'satellite_height': 10.0e6},\n ccrs.OSGB: {'approx': False},\n ccrs.OSNI: {'approx': False},\n ccrs.TransverseMercator: {'approx': False},\n}\n\n\ndef plate_carree_plot():\n import matplotlib.pyplot as plt\n import cartopy.crs as ccrs\n\n nplots = 2\n\n fig = plt.figure(figsize=(6, 6))\n\n for i in range(0, nplots):\n central_longitude = 0 if i == 0 else 180\n ax = fig.add_subplot(\n nplots, 1, i+1,\n projection=ccrs.PlateCarree(central_longitude=central_longitude))\n ax.coastlines(resolution='110m')\n ax.gridlines()\n\n\ndef igh_plot():\n import matplotlib.pyplot as plt\n import cartopy.crs as ccrs\n\n fig = plt.figure(figsize=(6.9228, 6))\n\n ax1 = fig.add_subplot(2, 1, 1,\n projection=ccrs.InterruptedGoodeHomolosine(\n emphasis='land'))\n ax1.coastlines(resolution='110m')\n ax1.gridlines()\n\n ax2 = fig.add_subplot(2, 1, 2,\n projection=ccrs.InterruptedGoodeHomolosine(\n central_longitude=-160, emphasis='ocean'))\n ax2.coastlines(resolution='110m')\n ax2.gridlines()\n\n\ndef utm_plot():\n import matplotlib.pyplot as plt\n import cartopy.crs as ccrs\n\n nplots = 60\n\n fig = plt.figure(figsize=(10, 3))\n\n for i in range(0, nplots):\n ax = fig.add_subplot(1, nplots, i+1,\n projection=ccrs.UTM(zone=i+1,\n southern_hemisphere=True))\n ax.coastlines(resolution='110m')\n ax.gridlines()\n\n\nMULTI_PLOT_CASES = {\n ccrs.PlateCarree: plate_carree_plot,\n ccrs.InterruptedGoodeHomolosine: igh_plot,\n ccrs.UTM: utm_plot,\n}\n\n\nCOASTLINE_RESOLUTION = {ccrs.OSNI: '10m',\n ccrs.OSGB: '50m',\n ccrs.EuroPP: '50m'}\n\n\nPRJ_SORT_ORDER = {'PlateCarree': 1,\n 'Mercator': 2, 'Mollweide': 2, 'Robinson': 2,\n 'TransverseMercator': 2, 'LambertCylindrical': 2,\n 'LambertConformal': 2, 'EquidistantConic': 2,\n 'Stereographic': 2, 'Miller': 2,\n 'Orthographic': 2, 'UTM': 2, 'AlbersEqualArea': 2,\n 'AzimuthalEquidistant': 2, 'Sinusoidal': 2,\n 'InterruptedGoodeHomolosine': 3, 'RotatedPole': 3,\n 'OSGB': 4, 'EuroPP': 5,\n 'Geostationary': 6, 'NearsidePerspective': 7,\n 'EckertI': 8.1, 'EckertII': 8.2, 'EckertIII': 8.3,\n 'EckertIV': 8.4, 'EckertV': 8.5, 'EckertVI': 8.6}\n\n\ndef find_projections():\n for obj_name, o in vars(ccrs).copy().items():\n if isinstance(o, type) and issubclass(o, ccrs.Projection) and \\\n not obj_name.startswith('_') and obj_name not in ['Projection']:\n\n yield o\n\n\ndef create_instance(prj_cls, instance_args):\n name = prj_cls.__name__\n\n # Format instance arguments into strings\n instance_params = ',\\n '.join(\n f'{k}={v}'\n for k, v in sorted(instance_args.items()))\n\n if instance_params:\n instance_params = '\\n ' \\\n + instance_params\n\n instance_creation_code = f'{name}({instance_params})'\n\n prj_inst = prj(**instance_args)\n\n return prj_inst, instance_creation_code\n\n\nif __name__ == 
'__main__':\n fname = os.path.join(os.path.dirname(__file__), 'source',\n 'reference', 'projections.rst')\n table = open(fname, 'w')\n\n header = \"\"\"\n .. (comment): DO NOT EDIT this file.\n .. It is auto-generated by running : cartopy/docs/make_projection.py\n .. Please adjust by making changes there.\n .. It is included in the repository only to aid detection of changes.\n\n .. _cartopy_projections:\n\n Cartopy projection list\n =======================\n\n\n \"\"\"\n\n table.write(textwrap.dedent(header))\n\n def prj_class_sorter(cls):\n return (PRJ_SORT_ORDER.get(cls.__name__, 100),\n cls.__name__)\n\n for prj in sorted(find_projections(), key=prj_class_sorter):\n name = prj.__name__\n\n table.write(name + '\\n')\n table.write('-' * len(name) + '\\n\\n')\n\n table.write('.. autoclass:: cartopy.crs.%s\\n' % name)\n\n if prj not in MULTI_PLOT_CASES:\n # Get instance arguments and number of plots\n instance_args = SPECIFIC_PROJECTION_KWARGS.get(prj, {})\n\n prj_inst, instance_repr = create_instance(prj, instance_args)\n\n aspect = (np.diff(prj_inst.x_limits) /\n np.diff(prj_inst.y_limits))[0]\n\n width = 3 * aspect\n width = f'{width:.4f}'.rstrip('0').rstrip('.')\n\n # Generate plotting code\n code = textwrap.dedent(\"\"\"\n .. plot::\n\n import matplotlib.pyplot as plt\n import cartopy.crs as ccrs\n\n plt.figure(figsize=({width}, 3))\n ax = plt.axes(projection=ccrs.{proj_constructor})\n ax.coastlines(resolution={coastline_resolution!r})\n ax.gridlines()\n\n\n \"\"\").format(width=width,\n proj_constructor=instance_repr,\n coastline_resolution=COASTLINE_RESOLUTION.get(prj,\n '110m'))\n\n else:\n func = MULTI_PLOT_CASES[prj]\n\n lines = inspect.getsourcelines(func)\n func_code = \"\".join(lines[0][1:])\n\n code = textwrap.dedent(\"\"\"\n .. plot::\n\n {func_code}\n\n \"\"\").format(func_code=func_code)\n\n table.write(code)\n", "path": "docs/make_projection.py"}]} | 2,634 | 204 |
gh_patches_debug_16538 | rasdani/github-patches | git_diff | sopel-irc__sopel-2063 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
isup responds twice
<!-- Before reporting a bug, please search both open *and closed* issues to
see if it has already been reported. If you can, try to reproduce the problem
on an unmodified copy of the `master` branch first, as sometimes bugs are found
and fixed without a report. If the problem is unreported and persists in
`master`, please help us fix it quickly by filling out as much of this
information as you can. Thanks! -->
### Description
when I run .isup or .isupinsecure, the bot responds twice. first is the error (if any) then it says that the site is up
### Reproduction steps
1. setup a sopel bot using the master branch.
2. in the irc channel run .isup with some url that is down
3. it responds twice
### Expected behavior
only responds with an error or else up, not up and error
### Environment
- Sopel `.version`: [e.g. 7.0.0 or d416e19] master branch
- Sopel installed via: [apt, pip, `setup.py install`, source, ?] source
- Python version: [e.g. 3.6.9] 3.7
- Operating system: [e.g. Debian 10] debian buster
- IRCd `/version`: [e.g. InspIRCd 3.0.1] freenode
- Relevant plugins: [adminchannel, weather, custom\_thing.py, ?] isup
### Notes
seems to be because in https://github.com/sopel-irc/sopel/blob/master/sopel/modules/isup.py#L89 none of the except statements return and so the bot.say for "website is up" is always executed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/modules/isup.py`
Content:
```
1 # coding=utf-8
2 """
3 isup.py - Sopel Website Status Check Plugin
4 Copyright 2011, Elsie Powell http://embolalia.com
5 Licensed under the Eiffel Forum License 2.
6
7 https://sopel.chat
8 """
9 from __future__ import absolute_import, division, print_function, unicode_literals
10
11 import requests
12
13 from sopel import plugin
14
15
16 PLUGIN_OUTPUT_PREFIX = '[isup] '
17
18
19 def get_site_url(site):
20 """Get a ``site`` URL
21
22 :param str site: the site to get URL for
23 :return: a valid site URL
24 :raise ValueError: when site is empty, or isn't well formatted
25
26 The ``site`` argument is checked: its scheme must be ``http`` or ``https``,
27 or a :exc:`ValueError` is raised.
28
29 If the ``site`` does not have a scheme, ``http`` is used. If it doesn't
30 have a TLD, a :exc:`ValueError` is raised.
31 """
32 site = site.strip() if site else ''
33 if not site:
34 raise ValueError('What site do you want to check?')
35
36 if not site.startswith(('http://', 'https://')):
37 if '://' in site:
38 protocol = site.split('://')[0] + '://'
39 raise ValueError('Try it again without the %s' % protocol)
40
41 site = 'http://' + site
42
43 domain = site.split('/')[2].split(':')[0]
44 if '.' not in domain:
45 raise ValueError('I need a fully qualified domain name (with a dot).')
46 if domain.endswith(('.local', '.example', '.test', '.invalid', '.localhost')):
47 raise ValueError("I can't check LAN-local or invalid domains.")
48
49 return site
50
51
52 def handle_isup(bot, trigger, secure=True):
53 """Handle the ``bot`` command from ``trigger``
54
55 :param bot: Sopel instance
56 :type bot: :class:`sopel.bot.SopelWrapper`
57 :param trigger: Command's trigger instance
58 :type trigger: :class:`sopel.trigger.Trigger`
59 :param bool secure: Check SSL error if ``True`` (the default)
60 """
61 try:
62 site = get_site_url(trigger.group(2))
63 response = requests.head(site, verify=secure, timeout=(10.0, 5.0))
64 response.raise_for_status()
65 except ValueError as error:
66 bot.reply(str(error))
67 except requests.exceptions.SSLError:
68 bot.say(
69 '{} looks down to me (SSL error). Try using `{}isupinsecure`.'
70 .format(site, bot.config.core.help_prefix))
71 except requests.HTTPError:
72 bot.say(
73 '{} looks down to me (HTTP {} "{}").'
74 .format(site, response.status_code, response.reason))
75 except requests.ConnectTimeout:
76 bot.say(
77 '{} looks down to me (timed out while connecting).'
78 .format(site))
79 except requests.ReadTimeout:
80 bot.say(
81 '{} looks down to me (timed out waiting for reply).'
82 .format(site))
83 except requests.ConnectionError:
84 bot.say(
85 '{} looks down to me (connection error).'
86 .format(site))
87
88 # If no exception happened, the request succeeded.
89 bot.say(site + ' looks fine to me.')
90
91
92 @plugin.command('isupinsecure')
93 @plugin.output_prefix(PLUGIN_OUTPUT_PREFIX)
94 def isup_insecure(bot, trigger):
95 """Check if a website is up (without verifying HTTPS)."""
96 handle_isup(bot, trigger, secure=False)
97
98
99 @plugin.command('isup')
100 @plugin.example('.isup google.com',
101 'http://google.com looks fine to me.',
102 online=True, vcr=True)
103 @plugin.output_prefix(PLUGIN_OUTPUT_PREFIX)
104 def isup(bot, trigger):
105 """Check if a website is up or not."""
106 handle_isup(bot, trigger, secure=True)
107
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sopel/modules/isup.py b/sopel/modules/isup.py
--- a/sopel/modules/isup.py
+++ b/sopel/modules/isup.py
@@ -84,9 +84,9 @@
bot.say(
'{} looks down to me (connection error).'
.format(site))
-
- # If no exception happened, the request succeeded.
- bot.say(site + ' looks fine to me.')
+ else:
+ # If no exception happened, the request must have succeeded.
+ bot.say(site + ' looks fine to me.')
@plugin.command('isupinsecure')
@@ -97,9 +97,7 @@
@plugin.command('isup')
[email protected]('.isup google.com',
- 'http://google.com looks fine to me.',
- online=True, vcr=True)
[email protected]('.isup google.com')
@plugin.output_prefix(PLUGIN_OUTPUT_PREFIX)
def isup(bot, trigger):
"""Check if a website is up or not."""
| {"golden_diff": "diff --git a/sopel/modules/isup.py b/sopel/modules/isup.py\n--- a/sopel/modules/isup.py\n+++ b/sopel/modules/isup.py\n@@ -84,9 +84,9 @@\n bot.say(\n '{} looks down to me (connection error).'\n .format(site))\n-\n- # If no exception happened, the request succeeded.\n- bot.say(site + ' looks fine to me.')\n+ else:\n+ # If no exception happened, the request must have succeeded.\n+ bot.say(site + ' looks fine to me.')\n \n \n @plugin.command('isupinsecure')\n@@ -97,9 +97,7 @@\n \n \n @plugin.command('isup')\[email protected]('.isup google.com',\n- 'http://google.com looks fine to me.',\n- online=True, vcr=True)\[email protected]('.isup google.com')\n @plugin.output_prefix(PLUGIN_OUTPUT_PREFIX)\n def isup(bot, trigger):\n \"\"\"Check if a website is up or not.\"\"\"\n", "issue": "isup responds twice\n<!-- Before reporting a bug, please search both open *and closed* issues to\r\nsee if it has already been reported. If you can, try to reproduce the problem\r\non an unmodified copy of the `master` branch first, as sometimes bugs are found\r\nand fixed without a report. If the problem is unreported and persists in\r\n`master`, please help us fix it quickly by filling out as much of this\r\ninformation as you can. Thanks! -->\r\n\r\n### Description\r\nwhen I run .isup or .isupinsecure, the bot responds twice. first is the error (if any) then it says that the site is up\r\n### Reproduction steps\r\n1. setup a sopel bot using the master branch.\r\n2. in the irc channel run .isup with some url that is down\r\n3. it responds twice\r\n\r\n### Expected behavior\r\nonly responds with an error or else up, not up and error\r\n\r\n### Environment\r\n- Sopel `.version`: [e.g. 7.0.0 or d416e19] master branch\r\n- Sopel installed via: [apt, pip, `setup.py install`, source, ?] source\r\n- Python version: [e.g. 3.6.9] 3.7\r\n- Operating system: [e.g. Debian 10] debian buster\r\n- IRCd `/version`: [e.g. InspIRCd 3.0.1] freenode\r\n- Relevant plugins: [adminchannel, weather, custom\\_thing.py, ?] isup\r\n\r\n### Notes\r\nseems to be because in https://github.com/sopel-irc/sopel/blob/master/sopel/modules/isup.py#L89 none of the except statements return and so the bot.say for \"website is up\" is always executed.\r\n\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"\nisup.py - Sopel Website Status Check Plugin\nCopyright 2011, Elsie Powell http://embolalia.com\nLicensed under the Eiffel Forum License 2.\n\nhttps://sopel.chat\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport requests\n\nfrom sopel import plugin\n\n\nPLUGIN_OUTPUT_PREFIX = '[isup] '\n\n\ndef get_site_url(site):\n \"\"\"Get a ``site`` URL\n\n :param str site: the site to get URL for\n :return: a valid site URL\n :raise ValueError: when site is empty, or isn't well formatted\n\n The ``site`` argument is checked: its scheme must be ``http`` or ``https``,\n or a :exc:`ValueError` is raised.\n\n If the ``site`` does not have a scheme, ``http`` is used. If it doesn't\n have a TLD, a :exc:`ValueError` is raised.\n \"\"\"\n site = site.strip() if site else ''\n if not site:\n raise ValueError('What site do you want to check?')\n\n if not site.startswith(('http://', 'https://')):\n if '://' in site:\n protocol = site.split('://')[0] + '://'\n raise ValueError('Try it again without the %s' % protocol)\n\n site = 'http://' + site\n\n domain = site.split('/')[2].split(':')[0]\n if '.' 
not in domain:\n raise ValueError('I need a fully qualified domain name (with a dot).')\n if domain.endswith(('.local', '.example', '.test', '.invalid', '.localhost')):\n raise ValueError(\"I can't check LAN-local or invalid domains.\")\n\n return site\n\n\ndef handle_isup(bot, trigger, secure=True):\n \"\"\"Handle the ``bot`` command from ``trigger``\n\n :param bot: Sopel instance\n :type bot: :class:`sopel.bot.SopelWrapper`\n :param trigger: Command's trigger instance\n :type trigger: :class:`sopel.trigger.Trigger`\n :param bool secure: Check SSL error if ``True`` (the default)\n \"\"\"\n try:\n site = get_site_url(trigger.group(2))\n response = requests.head(site, verify=secure, timeout=(10.0, 5.0))\n response.raise_for_status()\n except ValueError as error:\n bot.reply(str(error))\n except requests.exceptions.SSLError:\n bot.say(\n '{} looks down to me (SSL error). Try using `{}isupinsecure`.'\n .format(site, bot.config.core.help_prefix))\n except requests.HTTPError:\n bot.say(\n '{} looks down to me (HTTP {} \"{}\").'\n .format(site, response.status_code, response.reason))\n except requests.ConnectTimeout:\n bot.say(\n '{} looks down to me (timed out while connecting).'\n .format(site))\n except requests.ReadTimeout:\n bot.say(\n '{} looks down to me (timed out waiting for reply).'\n .format(site))\n except requests.ConnectionError:\n bot.say(\n '{} looks down to me (connection error).'\n .format(site))\n\n # If no exception happened, the request succeeded.\n bot.say(site + ' looks fine to me.')\n\n\[email protected]('isupinsecure')\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef isup_insecure(bot, trigger):\n \"\"\"Check if a website is up (without verifying HTTPS).\"\"\"\n handle_isup(bot, trigger, secure=False)\n\n\[email protected]('isup')\[email protected]('.isup google.com',\n 'http://google.com looks fine to me.',\n online=True, vcr=True)\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef isup(bot, trigger):\n \"\"\"Check if a website is up or not.\"\"\"\n handle_isup(bot, trigger, secure=True)\n", "path": "sopel/modules/isup.py"}], "after_files": [{"content": "# coding=utf-8\n\"\"\"\nisup.py - Sopel Website Status Check Plugin\nCopyright 2011, Elsie Powell http://embolalia.com\nLicensed under the Eiffel Forum License 2.\n\nhttps://sopel.chat\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport requests\n\nfrom sopel import plugin\n\n\nPLUGIN_OUTPUT_PREFIX = '[isup] '\n\n\ndef get_site_url(site):\n \"\"\"Get a ``site`` URL\n\n :param str site: the site to get URL for\n :return: a valid site URL\n :raise ValueError: when site is empty, or isn't well formatted\n\n The ``site`` argument is checked: its scheme must be ``http`` or ``https``,\n or a :exc:`ValueError` is raised.\n\n If the ``site`` does not have a scheme, ``http`` is used. If it doesn't\n have a TLD, a :exc:`ValueError` is raised.\n \"\"\"\n site = site.strip() if site else ''\n if not site:\n raise ValueError('What site do you want to check?')\n\n if not site.startswith(('http://', 'https://')):\n if '://' in site:\n protocol = site.split('://')[0] + '://'\n raise ValueError('Try it again without the %s' % protocol)\n\n site = 'http://' + site\n\n domain = site.split('/')[2].split(':')[0]\n if '.' 
not in domain:\n raise ValueError('I need a fully qualified domain name (with a dot).')\n if domain.endswith(('.local', '.example', '.test', '.invalid', '.localhost')):\n raise ValueError(\"I can't check LAN-local or invalid domains.\")\n\n return site\n\n\ndef handle_isup(bot, trigger, secure=True):\n \"\"\"Handle the ``bot`` command from ``trigger``\n\n :param bot: Sopel instance\n :type bot: :class:`sopel.bot.SopelWrapper`\n :param trigger: Command's trigger instance\n :type trigger: :class:`sopel.trigger.Trigger`\n :param bool secure: Check SSL error if ``True`` (the default)\n \"\"\"\n try:\n site = get_site_url(trigger.group(2))\n response = requests.head(site, verify=secure, timeout=(10.0, 5.0))\n response.raise_for_status()\n except ValueError as error:\n bot.reply(str(error))\n except requests.exceptions.SSLError:\n bot.say(\n '{} looks down to me (SSL error). Try using `{}isupinsecure`.'\n .format(site, bot.config.core.help_prefix))\n except requests.HTTPError:\n bot.say(\n '{} looks down to me (HTTP {} \"{}\").'\n .format(site, response.status_code, response.reason))\n except requests.ConnectTimeout:\n bot.say(\n '{} looks down to me (timed out while connecting).'\n .format(site))\n except requests.ReadTimeout:\n bot.say(\n '{} looks down to me (timed out waiting for reply).'\n .format(site))\n except requests.ConnectionError:\n bot.say(\n '{} looks down to me (connection error).'\n .format(site))\n else:\n # If no exception happened, the request must have succeeded.\n bot.say(site + ' looks fine to me.')\n\n\[email protected]('isupinsecure')\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef isup_insecure(bot, trigger):\n \"\"\"Check if a website is up (without verifying HTTPS).\"\"\"\n handle_isup(bot, trigger, secure=False)\n\n\[email protected]('isup')\[email protected]('.isup google.com')\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef isup(bot, trigger):\n \"\"\"Check if a website is up or not.\"\"\"\n handle_isup(bot, trigger, secure=True)\n", "path": "sopel/modules/isup.py"}]} | 1,719 | 225 |
gh_patches_debug_24694 | rasdani/github-patches | git_diff | streamlit__streamlit-7018 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Markdown support for radio buttons
### Problem
Colored text and other markdown elements work in the label of `st.radio` but not in the texts of the radio elements. This is a bit weird since we do support it in the texts of checkboxes (where the text next to the checkbox is the label).
### Solution
Allow markdown in the options of `st.radio`.
---
Community voting on feature requests enables the Streamlit team to understand which features are most important to our users.
**If you'd like the Streamlit team to prioritize this feature request, please use the 👍 (thumbs up emoji) reaction in response to the initial post.**
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `e2e/scripts/st_radio.py`
Content:
```
1 # Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import pandas as pd
16
17 import streamlit as st
18 from streamlit import runtime
19 from tests.streamlit import pyspark_mocks
20
21 options = ("female", "male")
22 i1 = st.radio("radio 1", options, 1)
23 st.write("value 1:", i1)
24
25 i2 = st.radio("radio 2", options, 0, format_func=lambda x: x.capitalize())
26 st.write("value 2:", i2)
27
28 i3 = st.radio("radio 3", [])
29 st.write("value 3:", i3)
30
31 i4 = st.radio("radio 4", options, disabled=True)
32 st.write("value 4:", i4)
33
34 i5 = st.radio("radio 5", options, horizontal=True)
35 st.write("value 5:", i5)
36
37 i6 = st.radio("radio 6", pd.DataFrame({"foo": list(options)}))
38 st.write("value 6:", i6)
39
40 i7 = st.radio("radio 7", options, label_visibility="hidden")
41 st.write("value 7:", i7)
42
43 i8 = st.radio("radio 8", options, label_visibility="collapsed")
44 st.write("value 8:", i8)
45
46
47 if runtime.exists():
48
49 def on_change():
50 st.session_state.radio_changed = True
51
52 st.radio("radio 9", options, 1, key="radio9", on_change=on_change)
53 st.write("value 9:", st.session_state.radio9)
54 st.write("radio changed:", "radio_changed" in st.session_state)
55
56 st.radio("PySpark radio", pyspark_mocks.DataFrame()) # type: ignore
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/e2e/scripts/st_radio.py b/e2e/scripts/st_radio.py
--- a/e2e/scripts/st_radio.py
+++ b/e2e/scripts/st_radio.py
@@ -19,6 +19,16 @@
from tests.streamlit import pyspark_mocks
options = ("female", "male")
+markdown_options = (
+ "**bold text**",
+ "*italics text*",
+ "~strikethrough text~",
+ "shortcode: :blush:",
+ # link should not work in radio options
+ "[link text](www.example.com)",
+ "`code text`",
+ ":red[red] :blue[blue] :green[green] :violet[violet] :orange[orange]",
+)
i1 = st.radio("radio 1", options, 1)
st.write("value 1:", i1)
@@ -43,14 +53,16 @@
i8 = st.radio("radio 8", options, label_visibility="collapsed")
st.write("value 8:", i8)
+i9 = st.radio("radio 9", markdown_options)
+st.write("value 9:", i9)
if runtime.exists():
def on_change():
st.session_state.radio_changed = True
- st.radio("radio 9", options, 1, key="radio9", on_change=on_change)
- st.write("value 9:", st.session_state.radio9)
+ st.radio("radio 10", options, 1, key="radio10", on_change=on_change)
+ st.write("value 10:", st.session_state.radio10)
st.write("radio changed:", "radio_changed" in st.session_state)
st.radio("PySpark radio", pyspark_mocks.DataFrame()) # type: ignore
| {"golden_diff": "diff --git a/e2e/scripts/st_radio.py b/e2e/scripts/st_radio.py\n--- a/e2e/scripts/st_radio.py\n+++ b/e2e/scripts/st_radio.py\n@@ -19,6 +19,16 @@\n from tests.streamlit import pyspark_mocks\n \n options = (\"female\", \"male\")\n+markdown_options = (\n+ \"**bold text**\",\n+ \"*italics text*\",\n+ \"~strikethrough text~\",\n+ \"shortcode: :blush:\",\n+ # link should not work in radio options\n+ \"[link text](www.example.com)\",\n+ \"`code text`\",\n+ \":red[red] :blue[blue] :green[green] :violet[violet] :orange[orange]\",\n+)\n i1 = st.radio(\"radio 1\", options, 1)\n st.write(\"value 1:\", i1)\n \n@@ -43,14 +53,16 @@\n i8 = st.radio(\"radio 8\", options, label_visibility=\"collapsed\")\n st.write(\"value 8:\", i8)\n \n+i9 = st.radio(\"radio 9\", markdown_options)\n+st.write(\"value 9:\", i9)\n \n if runtime.exists():\n \n def on_change():\n st.session_state.radio_changed = True\n \n- st.radio(\"radio 9\", options, 1, key=\"radio9\", on_change=on_change)\n- st.write(\"value 9:\", st.session_state.radio9)\n+ st.radio(\"radio 10\", options, 1, key=\"radio10\", on_change=on_change)\n+ st.write(\"value 10:\", st.session_state.radio10)\n st.write(\"radio changed:\", \"radio_changed\" in st.session_state)\n \n st.radio(\"PySpark radio\", pyspark_mocks.DataFrame()) # type: ignore\n", "issue": "Markdown support for radio buttons\n### Problem\r\n\r\nColored text and other markdown elements work in the label of `st.radio` but not in the texts of the radio elements. This is a bit weird since we do support it in the texts of checkboxes (where the text next to the checkbox is the label). \r\n\r\n\r\n### Solution\r\n\r\nAllow markdown in the options of `st.radio`. \r\n\r\n\r\n---\r\n\r\nCommunity voting on feature requests enables the Streamlit team to understand which features are most important to our users.\r\n\r\n**If you'd like the Streamlit team to prioritize this feature request, please use the \ud83d\udc4d (thumbs up emoji) reaction in response to the initial post.**\r\n\n", "before_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. 
(2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport pandas as pd\n\nimport streamlit as st\nfrom streamlit import runtime\nfrom tests.streamlit import pyspark_mocks\n\noptions = (\"female\", \"male\")\ni1 = st.radio(\"radio 1\", options, 1)\nst.write(\"value 1:\", i1)\n\ni2 = st.radio(\"radio 2\", options, 0, format_func=lambda x: x.capitalize())\nst.write(\"value 2:\", i2)\n\ni3 = st.radio(\"radio 3\", [])\nst.write(\"value 3:\", i3)\n\ni4 = st.radio(\"radio 4\", options, disabled=True)\nst.write(\"value 4:\", i4)\n\ni5 = st.radio(\"radio 5\", options, horizontal=True)\nst.write(\"value 5:\", i5)\n\ni6 = st.radio(\"radio 6\", pd.DataFrame({\"foo\": list(options)}))\nst.write(\"value 6:\", i6)\n\ni7 = st.radio(\"radio 7\", options, label_visibility=\"hidden\")\nst.write(\"value 7:\", i7)\n\ni8 = st.radio(\"radio 8\", options, label_visibility=\"collapsed\")\nst.write(\"value 8:\", i8)\n\n\nif runtime.exists():\n\n def on_change():\n st.session_state.radio_changed = True\n\n st.radio(\"radio 9\", options, 1, key=\"radio9\", on_change=on_change)\n st.write(\"value 9:\", st.session_state.radio9)\n st.write(\"radio changed:\", \"radio_changed\" in st.session_state)\n\nst.radio(\"PySpark radio\", pyspark_mocks.DataFrame()) # type: ignore\n", "path": "e2e/scripts/st_radio.py"}], "after_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. 
(2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport pandas as pd\n\nimport streamlit as st\nfrom streamlit import runtime\nfrom tests.streamlit import pyspark_mocks\n\noptions = (\"female\", \"male\")\nmarkdown_options = (\n \"**bold text**\",\n \"*italics text*\",\n \"~strikethrough text~\",\n \"shortcode: :blush:\",\n # link should not work in radio options\n \"[link text](www.example.com)\",\n \"`code text`\",\n \":red[red] :blue[blue] :green[green] :violet[violet] :orange[orange]\",\n)\ni1 = st.radio(\"radio 1\", options, 1)\nst.write(\"value 1:\", i1)\n\ni2 = st.radio(\"radio 2\", options, 0, format_func=lambda x: x.capitalize())\nst.write(\"value 2:\", i2)\n\ni3 = st.radio(\"radio 3\", [])\nst.write(\"value 3:\", i3)\n\ni4 = st.radio(\"radio 4\", options, disabled=True)\nst.write(\"value 4:\", i4)\n\ni5 = st.radio(\"radio 5\", options, horizontal=True)\nst.write(\"value 5:\", i5)\n\ni6 = st.radio(\"radio 6\", pd.DataFrame({\"foo\": list(options)}))\nst.write(\"value 6:\", i6)\n\ni7 = st.radio(\"radio 7\", options, label_visibility=\"hidden\")\nst.write(\"value 7:\", i7)\n\ni8 = st.radio(\"radio 8\", options, label_visibility=\"collapsed\")\nst.write(\"value 8:\", i8)\n\ni9 = st.radio(\"radio 9\", markdown_options)\nst.write(\"value 9:\", i9)\n\nif runtime.exists():\n\n def on_change():\n st.session_state.radio_changed = True\n\n st.radio(\"radio 10\", options, 1, key=\"radio10\", on_change=on_change)\n st.write(\"value 10:\", st.session_state.radio10)\n st.write(\"radio changed:\", \"radio_changed\" in st.session_state)\n\nst.radio(\"PySpark radio\", pyspark_mocks.DataFrame()) # type: ignore\n", "path": "e2e/scripts/st_radio.py"}]} | 1,007 | 400 |
gh_patches_debug_27 | rasdani/github-patches | git_diff | netket__netket-214 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
module 'netket' has no attribute 'MPI'
With the merge #193 we have lost the MPI module
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netket/__init__.py`
Content:
```
1 # Copyright 2019 The Simons Foundation, Inc. - All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16 from . import (
17 _C_netket,
18 dynamics,
19 exact,
20 graph,
21 hilbert,
22 layer,
23 machine,
24 operator,
25 optimizer,
26 output,
27 sampler,
28 stats,
29 supervised,
30 unsupervised,
31 utils,
32 variational,
33 )
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netket/__init__.py b/netket/__init__.py
--- a/netket/__init__.py
+++ b/netket/__init__.py
@@ -31,3 +31,4 @@
utils,
variational,
)
+from ._C_netket import MPI, LookupReal, LookupComplex
| {"golden_diff": "diff --git a/netket/__init__.py b/netket/__init__.py\n--- a/netket/__init__.py\n+++ b/netket/__init__.py\n@@ -31,3 +31,4 @@\n utils,\n variational,\n )\n+from ._C_netket import MPI, LookupReal, LookupComplex\n", "issue": "module 'netket' has no attribute 'MPI'\nWith the merge #193 we have lost the MPI module \n", "before_files": [{"content": "# Copyright 2019 The Simons Foundation, Inc. - All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nfrom . import (\n _C_netket,\n dynamics,\n exact,\n graph,\n hilbert,\n layer,\n machine,\n operator,\n optimizer,\n output,\n sampler,\n stats,\n supervised,\n unsupervised,\n utils,\n variational,\n)\n", "path": "netket/__init__.py"}], "after_files": [{"content": "# Copyright 2019 The Simons Foundation, Inc. - All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nfrom . import (\n _C_netket,\n dynamics,\n exact,\n graph,\n hilbert,\n layer,\n machine,\n operator,\n optimizer,\n output,\n sampler,\n stats,\n supervised,\n unsupervised,\n utils,\n variational,\n)\nfrom ._C_netket import MPI, LookupReal, LookupComplex\n", "path": "netket/__init__.py"}]} | 552 | 71 |
gh_patches_debug_3491 | rasdani/github-patches | git_diff | ESMCI__cime-3863 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
nuopc run complete message location
In commit b9d7b65fd case_run.py was changed to look for the run completed message in drv.log instead of in med.log
for nuopc runs, however the system_tests_common.py was not changed to correspond. PR incoming.
@mvertens @uturuncoglu
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/lib/CIME/SystemTests/nodefail.py`
Content:
```
1 """
2 CIME restart upon failed node test.
3 """
4 from CIME.XML.standard_module_setup import *
5 from CIME.SystemTests.ers import ERS
6 from CIME.utils import get_model
7
8 logger = logging.getLogger(__name__)
9
10 class NODEFAIL(ERS):
11
12 def __init__(self, case):
13 """
14 initialize an object interface to the ERS system test
15 """
16 ERS.__init__(self, case)
17
18 self._fail_sentinel = os.path.join(case.get_value("RUNDIR"), "FAIL_SENTINEL")
19 self._fail_str = case.get_value("NODE_FAIL_REGEX")
20
21 def _restart_fake_phase(self):
22 # Swap out model.exe for one that emits node failures
23 rundir = self._case.get_value("RUNDIR")
24 exeroot = self._case.get_value("EXEROOT")
25 driver = self._case.get_value("COMP_INTERFACE")
26 if driver == "nuopc":
27 logname = "med"
28 else:
29 logname = "cpl"
30 fake_exe = \
31 """#!/bin/bash
32
33 fail_sentinel={0}
34 cpl_log={1}/{4}.log.$LID
35 model_log={1}/{2}.log.$LID
36 touch $cpl_log
37 touch $fail_sentinel
38 declare -i num_fails=$(cat $fail_sentinel | wc -l)
39 declare -i times_to_fail=${{NODEFAIL_NUM_FAILS:-3}}
40
41 if ((num_fails < times_to_fail)); then
42 echo FAKE FAIL >> $cpl_log
43 echo FAIL >> $fail_sentinel
44 echo '{3}' >> $model_log
45 sleep 1
46 exit -1
47 else
48 echo Insta pass
49 echo SUCCESSFUL TERMINATION > $cpl_log
50 fi
51 """.format(self._fail_sentinel, rundir, get_model(), self._fail_str, logname)
52
53 fake_exe_file = os.path.join(exeroot, "fake.sh")
54 with open(fake_exe_file, "w") as fd:
55 fd.write(fake_exe)
56
57 os.chmod(fake_exe_file, 0o755)
58
59 prev_run_exe = self._case.get_value("run_exe")
60 env_mach_specific = self._case.get_env("mach_specific")
61 env_mach_specific.set_value("run_exe", fake_exe_file)
62 self._case.flush(flushall=True)
63
64 # This flag is needed by mpt to run a script under mpiexec
65 mpilib = self._case.get_value("MPILIB")
66 if mpilib == "mpt":
67 os.environ["MPI_SHEPHERD"] = "true"
68
69 self.run_indv(suffix=None)
70
71 if mpilib == "mpt":
72 del os.environ["MPI_SHEPHERD"]
73
74 env_mach_specific = self._case.get_env("mach_specific")
75 env_mach_specific.set_value("run_exe", prev_run_exe)
76 self._case.flush(flushall=True)
77
78 def run_phase(self):
79 self._ers_first_phase()
80 self._restart_fake_phase()
81 self._ers_second_phase()
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/lib/CIME/SystemTests/nodefail.py b/scripts/lib/CIME/SystemTests/nodefail.py
--- a/scripts/lib/CIME/SystemTests/nodefail.py
+++ b/scripts/lib/CIME/SystemTests/nodefail.py
@@ -24,7 +24,7 @@
exeroot = self._case.get_value("EXEROOT")
driver = self._case.get_value("COMP_INTERFACE")
if driver == "nuopc":
- logname = "med"
+ logname = "drv"
else:
logname = "cpl"
fake_exe = \
| {"golden_diff": "diff --git a/scripts/lib/CIME/SystemTests/nodefail.py b/scripts/lib/CIME/SystemTests/nodefail.py\n--- a/scripts/lib/CIME/SystemTests/nodefail.py\n+++ b/scripts/lib/CIME/SystemTests/nodefail.py\n@@ -24,7 +24,7 @@\n exeroot = self._case.get_value(\"EXEROOT\")\n driver = self._case.get_value(\"COMP_INTERFACE\")\n if driver == \"nuopc\":\n- logname = \"med\"\n+ logname = \"drv\"\n else:\n logname = \"cpl\"\n fake_exe = \\\n", "issue": "nuopc run complete message location \nIn commit b9d7b65fd case_run.py was changed to look for the run completed message in drv.log instead of in med.log\r\nfor nuopc runs, however the system_tests_common.py was not changed to correspond. PR incoming.\r\n@mvertens @uturuncoglu \n", "before_files": [{"content": "\"\"\"\nCIME restart upon failed node test.\n\"\"\"\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.SystemTests.ers import ERS\nfrom CIME.utils import get_model\n\nlogger = logging.getLogger(__name__)\n\nclass NODEFAIL(ERS):\n\n def __init__(self, case):\n \"\"\"\n initialize an object interface to the ERS system test\n \"\"\"\n ERS.__init__(self, case)\n\n self._fail_sentinel = os.path.join(case.get_value(\"RUNDIR\"), \"FAIL_SENTINEL\")\n self._fail_str = case.get_value(\"NODE_FAIL_REGEX\")\n\n def _restart_fake_phase(self):\n # Swap out model.exe for one that emits node failures\n rundir = self._case.get_value(\"RUNDIR\")\n exeroot = self._case.get_value(\"EXEROOT\")\n driver = self._case.get_value(\"COMP_INTERFACE\")\n if driver == \"nuopc\":\n logname = \"med\"\n else:\n logname = \"cpl\"\n fake_exe = \\\n\"\"\"#!/bin/bash\n\nfail_sentinel={0}\ncpl_log={1}/{4}.log.$LID\nmodel_log={1}/{2}.log.$LID\ntouch $cpl_log\ntouch $fail_sentinel\ndeclare -i num_fails=$(cat $fail_sentinel | wc -l)\ndeclare -i times_to_fail=${{NODEFAIL_NUM_FAILS:-3}}\n\nif ((num_fails < times_to_fail)); then\n echo FAKE FAIL >> $cpl_log\n echo FAIL >> $fail_sentinel\n echo '{3}' >> $model_log\n sleep 1\n exit -1\nelse\n echo Insta pass\n echo SUCCESSFUL TERMINATION > $cpl_log\nfi\n\"\"\".format(self._fail_sentinel, rundir, get_model(), self._fail_str, logname)\n\n fake_exe_file = os.path.join(exeroot, \"fake.sh\")\n with open(fake_exe_file, \"w\") as fd:\n fd.write(fake_exe)\n\n os.chmod(fake_exe_file, 0o755)\n\n prev_run_exe = self._case.get_value(\"run_exe\")\n env_mach_specific = self._case.get_env(\"mach_specific\")\n env_mach_specific.set_value(\"run_exe\", fake_exe_file)\n self._case.flush(flushall=True)\n\n # This flag is needed by mpt to run a script under mpiexec\n mpilib = self._case.get_value(\"MPILIB\")\n if mpilib == \"mpt\":\n os.environ[\"MPI_SHEPHERD\"] = \"true\"\n\n self.run_indv(suffix=None)\n\n if mpilib == \"mpt\":\n del os.environ[\"MPI_SHEPHERD\"]\n\n env_mach_specific = self._case.get_env(\"mach_specific\")\n env_mach_specific.set_value(\"run_exe\", prev_run_exe)\n self._case.flush(flushall=True)\n\n def run_phase(self):\n self._ers_first_phase()\n self._restart_fake_phase()\n self._ers_second_phase()\n", "path": "scripts/lib/CIME/SystemTests/nodefail.py"}], "after_files": [{"content": "\"\"\"\nCIME restart upon failed node test.\n\"\"\"\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.SystemTests.ers import ERS\nfrom CIME.utils import get_model\n\nlogger = logging.getLogger(__name__)\n\nclass NODEFAIL(ERS):\n\n def __init__(self, case):\n \"\"\"\n initialize an object interface to the ERS system test\n \"\"\"\n ERS.__init__(self, case)\n\n self._fail_sentinel = os.path.join(case.get_value(\"RUNDIR\"), \"FAIL_SENTINEL\")\n 
self._fail_str = case.get_value(\"NODE_FAIL_REGEX\")\n\n def _restart_fake_phase(self):\n # Swap out model.exe for one that emits node failures\n rundir = self._case.get_value(\"RUNDIR\")\n exeroot = self._case.get_value(\"EXEROOT\")\n driver = self._case.get_value(\"COMP_INTERFACE\")\n if driver == \"nuopc\":\n logname = \"drv\"\n else:\n logname = \"cpl\"\n fake_exe = \\\n\"\"\"#!/bin/bash\n\nfail_sentinel={0}\ncpl_log={1}/{4}.log.$LID\nmodel_log={1}/{2}.log.$LID\ntouch $cpl_log\ntouch $fail_sentinel\ndeclare -i num_fails=$(cat $fail_sentinel | wc -l)\ndeclare -i times_to_fail=${{NODEFAIL_NUM_FAILS:-3}}\n\nif ((num_fails < times_to_fail)); then\n echo FAKE FAIL >> $cpl_log\n echo FAIL >> $fail_sentinel\n echo '{3}' >> $model_log\n sleep 1\n exit -1\nelse\n echo Insta pass\n echo SUCCESSFUL TERMINATION > $cpl_log\nfi\n\"\"\".format(self._fail_sentinel, rundir, get_model(), self._fail_str, logname)\n\n fake_exe_file = os.path.join(exeroot, \"fake.sh\")\n with open(fake_exe_file, \"w\") as fd:\n fd.write(fake_exe)\n\n os.chmod(fake_exe_file, 0o755)\n\n prev_run_exe = self._case.get_value(\"run_exe\")\n env_mach_specific = self._case.get_env(\"mach_specific\")\n env_mach_specific.set_value(\"run_exe\", fake_exe_file)\n self._case.flush(flushall=True)\n\n # This flag is needed by mpt to run a script under mpiexec\n mpilib = self._case.get_value(\"MPILIB\")\n if mpilib == \"mpt\":\n os.environ[\"MPI_SHEPHERD\"] = \"true\"\n\n self.run_indv(suffix=None)\n\n if mpilib == \"mpt\":\n del os.environ[\"MPI_SHEPHERD\"]\n\n env_mach_specific = self._case.get_env(\"mach_specific\")\n env_mach_specific.set_value(\"run_exe\", prev_run_exe)\n self._case.flush(flushall=True)\n\n def run_phase(self):\n self._ers_first_phase()\n self._restart_fake_phase()\n self._ers_second_phase()\n", "path": "scripts/lib/CIME/SystemTests/nodefail.py"}]} | 1,150 | 127 |
gh_patches_debug_40801 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1767 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
E2529 error with multiple `AWS::Logs::SubscriptionFilter` resources.
*cfn-lint version: 0.38.0*
*[`E2529`](https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/rules.md#E2529) error with multiple [`AWS::Logs::SubscriptionFilter`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-subscriptionfilter.html) resources.*
I have a CloudFormation template (using AWS SAM) that has multiple SubscriptionFilters. These SubscriptionFilters have the same `LogGroupName`, which causes an `E2529` error despite these filters having separate FilterPatterns. The template passes the [`aws cloudformation validate-template`](https://docs.aws.amazon.com/cli/latest/reference/cloudformation/validate-template.html) command.
[`src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py`](https://github.com/aws-cloudformation/cfn-python-lint/blob/master/src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py)
```
MainFunctionLogGroup:
Type: AWS::Logs::LogGroup
Properties:
RetentionInDays: 14
LogGroupName: !Join ["", [/aws/lambda/, !Ref MainFunction]]
MainFunctionLogFilter:
Type: AWS::Logs::SubscriptionFilter
Properties:
DestinationArn: !Ref LogIngestionARN
FilterPattern: "FilterPattern1"
LogGroupName: !Ref MainFunctionLogGroup
SecondaryLogFilter:
Type: AWS::Logs::SubscriptionFilter
Properties:
DestinationArn: !Ref LogIngestionARN
FilterPattern: "FilterPattern2"
LogGroupName: !Ref MainFunctionLogGroup
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 from cfnlint.rules import CloudFormationLintRule
6 from cfnlint.rules import RuleMatch
7
8
9 class EventsLogGroupName(CloudFormationLintRule):
10 """Check if the settings of multiple subscriptions are included for one LogGroup"""
11 id = 'E2529'
12 shortdesc = 'Check for duplicate Lambda events'
13 description = 'Check if there are any duplicate log groups in the Lambda event trigger element.'
14 source_url = 'https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#user-content-cloudwatchlogs'
15 tags = ['resources', 'lambda']
16
17 def check_events_subscription_duplicated(self, cfn):
18 """Check if Lambda Events Subscription is duplicated"""
19 matches = []
20 message = 'You must specify the AWS::Serverless::Function event correctly. ' \
21 'LogGroups are duplicated. '
22
23 log_group_name_list = self.__get_log_group_name_list(cfn)
24
25 if self.__is_duplicated(log_group_name_list):
26 matches.append(
27 RuleMatch(
28 'path', message.format()
29 )
30 )
31
32 return matches
33
34 def __is_duplicated(self, duplicate_list):
35 unique_list = self.__remove(duplicate_list)
36 return len(unique_list) != len(duplicate_list)
37
38 def __remove(self, duplicate):
39 final_list = []
40 for ele in duplicate:
41 if ele not in final_list:
42 final_list.append(ele)
43 return final_list
44
45 def __get_log_group_name_list(self, cfn):
46 log_group_name_list = []
47 for value in cfn.get_resources('AWS::Logs::SubscriptionFilter').items():
48 prop = value[1].get('Properties')
49 log_group_name_list.append(prop.get('LogGroupName'))
50 return log_group_name_list
51
52 def match(self, cfn):
53 """Check if Lambda Events Subscription is duplicated"""
54 matches = []
55 matches.extend(
56 self.check_events_subscription_duplicated(cfn)
57 )
58 return matches
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py b/src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py
--- a/src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py
+++ b/src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py
@@ -2,6 +2,7 @@
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
SPDX-License-Identifier: MIT-0
"""
+import json
from cfnlint.rules import CloudFormationLintRule
from cfnlint.rules import RuleMatch
@@ -9,45 +10,41 @@
class EventsLogGroupName(CloudFormationLintRule):
"""Check if the settings of multiple subscriptions are included for one LogGroup"""
id = 'E2529'
- shortdesc = 'Check for duplicate Lambda events'
- description = 'Check if there are any duplicate log groups in the Lambda event trigger element.'
+ shortdesc = 'Check for SubscriptionFilters have beyond 2 attachments to a CloudWatch Log Group'
+ description = 'The current limit for a CloudWatch Log Group is they can have 2 subscription filters. ' \
+ 'We will look for duplicate LogGroupNames inside Subscription Filters and make sure they are within 2. ' \
+ 'This doesn\'t account for any other subscription filters getting set.'
source_url = 'https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#user-content-cloudwatchlogs'
tags = ['resources', 'lambda']
+ limit = 2
def check_events_subscription_duplicated(self, cfn):
"""Check if Lambda Events Subscription is duplicated"""
matches = []
- message = 'You must specify the AWS::Serverless::Function event correctly. ' \
- 'LogGroups are duplicated. '
-
- log_group_name_list = self.__get_log_group_name_list(cfn)
-
- if self.__is_duplicated(log_group_name_list):
- matches.append(
- RuleMatch(
- 'path', message.format()
+ message = 'You can only have {} Subscription Filters per CloudWatch Log Group'.format(self.limit)
+
+ log_group_paths = self.__get_log_group_name_list(cfn)
+ for _, c in log_group_paths.items():
+ if len(c) > self.limit:
+ matches.append(
+ RuleMatch(
+ ['Resources', c[2]], message.format()
+ )
)
- )
return matches
- def __is_duplicated(self, duplicate_list):
- unique_list = self.__remove(duplicate_list)
- return len(unique_list) != len(duplicate_list)
-
- def __remove(self, duplicate):
- final_list = []
- for ele in duplicate:
- if ele not in final_list:
- final_list.append(ele)
- return final_list
-
def __get_log_group_name_list(self, cfn):
- log_group_name_list = []
+ log_group_paths = {}
for value in cfn.get_resources('AWS::Logs::SubscriptionFilter').items():
prop = value[1].get('Properties')
- log_group_name_list.append(prop.get('LogGroupName'))
- return log_group_name_list
+ log_group_name = json.dumps(prop.get('LogGroupName'))
+
+ if log_group_name not in log_group_paths:
+ log_group_paths[log_group_name] = []
+
+ log_group_paths[log_group_name].append(value[0])
+ return log_group_paths
def match(self, cfn):
"""Check if Lambda Events Subscription is duplicated"""
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py b/src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py\n--- a/src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py\n+++ b/src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py\n@@ -2,6 +2,7 @@\n Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n SPDX-License-Identifier: MIT-0\n \"\"\"\n+import json\n from cfnlint.rules import CloudFormationLintRule\n from cfnlint.rules import RuleMatch\n \n@@ -9,45 +10,41 @@\n class EventsLogGroupName(CloudFormationLintRule):\n \"\"\"Check if the settings of multiple subscriptions are included for one LogGroup\"\"\"\n id = 'E2529'\n- shortdesc = 'Check for duplicate Lambda events'\n- description = 'Check if there are any duplicate log groups in the Lambda event trigger element.'\n+ shortdesc = 'Check for SubscriptionFilters have beyond 2 attachments to a CloudWatch Log Group'\n+ description = 'The current limit for a CloudWatch Log Group is they can have 2 subscription filters. ' \\\n+ 'We will look for duplicate LogGroupNames inside Subscription Filters and make sure they are within 2. ' \\\n+ 'This doesn\\'t account for any other subscription filters getting set.'\n source_url = 'https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#user-content-cloudwatchlogs'\n tags = ['resources', 'lambda']\n+ limit = 2\n \n def check_events_subscription_duplicated(self, cfn):\n \"\"\"Check if Lambda Events Subscription is duplicated\"\"\"\n matches = []\n- message = 'You must specify the AWS::Serverless::Function event correctly. ' \\\n- 'LogGroups are duplicated. '\n-\n- log_group_name_list = self.__get_log_group_name_list(cfn)\n-\n- if self.__is_duplicated(log_group_name_list):\n- matches.append(\n- RuleMatch(\n- 'path', message.format()\n+ message = 'You can only have {} Subscription Filters per CloudWatch Log Group'.format(self.limit)\n+\n+ log_group_paths = self.__get_log_group_name_list(cfn)\n+ for _, c in log_group_paths.items():\n+ if len(c) > self.limit:\n+ matches.append(\n+ RuleMatch(\n+ ['Resources', c[2]], message.format()\n+ )\n )\n- )\n \n return matches\n \n- def __is_duplicated(self, duplicate_list):\n- unique_list = self.__remove(duplicate_list)\n- return len(unique_list) != len(duplicate_list)\n-\n- def __remove(self, duplicate):\n- final_list = []\n- for ele in duplicate:\n- if ele not in final_list:\n- final_list.append(ele)\n- return final_list\n-\n def __get_log_group_name_list(self, cfn):\n- log_group_name_list = []\n+ log_group_paths = {}\n for value in cfn.get_resources('AWS::Logs::SubscriptionFilter').items():\n prop = value[1].get('Properties')\n- log_group_name_list.append(prop.get('LogGroupName'))\n- return log_group_name_list\n+ log_group_name = json.dumps(prop.get('LogGroupName'))\n+\n+ if log_group_name not in log_group_paths:\n+ log_group_paths[log_group_name] = []\n+\n+ log_group_paths[log_group_name].append(value[0])\n+ return log_group_paths\n \n def match(self, cfn):\n \"\"\"Check if Lambda Events Subscription is duplicated\"\"\"\n", "issue": "E2529 error with multiple `AWS::Logs::SubscriptionFilter` resources.\n*cfn-lint version: 0.38.0*\r\n\r\n*[`E2529`](https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/rules.md#E2529) error with multiple [`AWS::Logs::SubscriptionFilter`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-subscriptionfilter.html) resources.*\r\n\r\nI have a CloudFormation template (using AWS SAM) that has multiple 
SubscriptionFilters. These SubscriptionFilters have the same `LogGroupName`, which causes an `E2529` error despite these filters having separate FilterPatterns. The template passes the [`aws cloudformation validate-template`](https://docs.aws.amazon.com/cli/latest/reference/cloudformation/validate-template.html) command.\r\n\r\n[`src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py`](https://github.com/aws-cloudformation/cfn-python-lint/blob/master/src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py)\r\n\r\n```\r\n MainFunctionLogGroup:\r\n Type: AWS::Logs::LogGroup\r\n Properties:\r\n RetentionInDays: 14\r\n LogGroupName: !Join [\"\", [/aws/lambda/, !Ref MainFunction]]\r\n\r\n MainFunctionLogFilter:\r\n Type: AWS::Logs::SubscriptionFilter\r\n Properties:\r\n DestinationArn: !Ref LogIngestionARN\r\n FilterPattern: \"FilterPattern1\"\r\n LogGroupName: !Ref MainFunctionLogGroup\r\n\r\n SecondaryLogFilter:\r\n Type: AWS::Logs::SubscriptionFilter\r\n Properties:\r\n DestinationArn: !Ref LogIngestionARN\r\n FilterPattern: \"FilterPattern2\"\r\n LogGroupName: !Ref MainFunctionLogGroup\r\n```\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass EventsLogGroupName(CloudFormationLintRule):\n \"\"\"Check if the settings of multiple subscriptions are included for one LogGroup\"\"\"\n id = 'E2529'\n shortdesc = 'Check for duplicate Lambda events'\n description = 'Check if there are any duplicate log groups in the Lambda event trigger element.'\n source_url = 'https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#user-content-cloudwatchlogs'\n tags = ['resources', 'lambda']\n\n def check_events_subscription_duplicated(self, cfn):\n \"\"\"Check if Lambda Events Subscription is duplicated\"\"\"\n matches = []\n message = 'You must specify the AWS::Serverless::Function event correctly. ' \\\n 'LogGroups are duplicated. '\n\n log_group_name_list = self.__get_log_group_name_list(cfn)\n\n if self.__is_duplicated(log_group_name_list):\n matches.append(\n RuleMatch(\n 'path', message.format()\n )\n )\n\n return matches\n\n def __is_duplicated(self, duplicate_list):\n unique_list = self.__remove(duplicate_list)\n return len(unique_list) != len(duplicate_list)\n\n def __remove(self, duplicate):\n final_list = []\n for ele in duplicate:\n if ele not in final_list:\n final_list.append(ele)\n return final_list\n\n def __get_log_group_name_list(self, cfn):\n log_group_name_list = []\n for value in cfn.get_resources('AWS::Logs::SubscriptionFilter').items():\n prop = value[1].get('Properties')\n log_group_name_list.append(prop.get('LogGroupName'))\n return log_group_name_list\n\n def match(self, cfn):\n \"\"\"Check if Lambda Events Subscription is duplicated\"\"\"\n matches = []\n matches.extend(\n self.check_events_subscription_duplicated(cfn)\n )\n return matches\n", "path": "src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport json\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass EventsLogGroupName(CloudFormationLintRule):\n \"\"\"Check if the settings of multiple subscriptions are included for one LogGroup\"\"\"\n id = 'E2529'\n shortdesc = 'Check for SubscriptionFilters have beyond 2 attachments to a CloudWatch Log Group'\n description = 'The current limit for a CloudWatch Log Group is they can have 2 subscription filters. ' \\\n 'We will look for duplicate LogGroupNames inside Subscription Filters and make sure they are within 2. ' \\\n 'This doesn\\'t account for any other subscription filters getting set.'\n source_url = 'https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#user-content-cloudwatchlogs'\n tags = ['resources', 'lambda']\n limit = 2\n\n def check_events_subscription_duplicated(self, cfn):\n \"\"\"Check if Lambda Events Subscription is duplicated\"\"\"\n matches = []\n message = 'You can only have {} Subscription Filters per CloudWatch Log Group'.format(self.limit)\n\n log_group_paths = self.__get_log_group_name_list(cfn)\n for _, c in log_group_paths.items():\n if len(c) > self.limit:\n matches.append(\n RuleMatch(\n ['Resources', c[2]], message.format()\n )\n )\n\n return matches\n\n def __get_log_group_name_list(self, cfn):\n log_group_paths = {}\n for value in cfn.get_resources('AWS::Logs::SubscriptionFilter').items():\n prop = value[1].get('Properties')\n log_group_name = json.dumps(prop.get('LogGroupName'))\n\n if log_group_name not in log_group_paths:\n log_group_paths[log_group_name] = []\n\n log_group_paths[log_group_name].append(value[0])\n return log_group_paths\n\n def match(self, cfn):\n \"\"\"Check if Lambda Events Subscription is duplicated\"\"\"\n matches = []\n matches.extend(\n self.check_events_subscription_duplicated(cfn)\n )\n return matches\n", "path": "src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py"}]} | 1,206 | 791 |
gh_patches_debug_39910 | rasdani/github-patches | git_diff | openstates__openstates-scrapers-1300 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
NY: events aren't scraping
State: NY
Event scraper is broken & needs investigation.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openstates/ny/events.py`
Content:
```
1 import re
2 import datetime as dt
3 import pytz
4 import lxml.html
5 from billy.scrape.events import EventScraper, Event
6 from openstates.utils import LXMLMixin
7
8 url = "http://assembly.state.ny.us/leg/?sh=hear"
9
10
11 class NYEventScraper(EventScraper, LXMLMixin):
12 _tz = pytz.timezone('US/Eastern')
13 jurisdiction = 'ny'
14
15 def lower_parse_page(self, url, session):
16 page = self.lxmlize(url)
17 tables = page.xpath("//table[@class='pubhrgtbl']")
18 date = None
19 ctty = None
20 chamber = 'other'
21 for table in tables:
22 metainf = {}
23 rows = table.xpath(".//tr")
24 for row in rows:
25 tds = row.xpath("./*")
26 if len(tds) < 2:
27 continue
28 key, value = tds
29 if key.tag == 'th':
30 date = key.text_content()
31 date = re.sub("\s+", " ", date)
32 date = re.sub(".*POSTPONED NEW DATE", "", date).strip()
33 ctty = value.xpath(".//strong")[0]
34 ctty = ctty.text_content()
35
36 chamber = 'other'
37 if "senate" in ctty.lower():
38 chamber = 'upper'
39 if "house" in ctty.lower():
40 chamber = 'lower'
41 if "joint" in ctty.lower():
42 chamber = 'joint'
43 elif key.tag == 'td':
44 key = key.text_content().strip()
45 value = value.text_content().strip()
46 value = value.replace(u'\x96', '-')
47 value = re.sub("\s+", " ", value)
48 metainf[key] = value
49
50 time = metainf['Time:']
51 repl = {
52 "A.M.": "AM",
53 "P.M.": "PM",
54 }
55 drepl = {
56 "Sept": "Sep"
57 }
58 for r in repl:
59 time = time.replace(r, repl[r])
60
61 for r in drepl:
62 date = date.replace(r, drepl[r])
63
64 time = re.sub("-.*", "", time)
65 time = time.strip()
66
67 year = dt.datetime.now().year
68
69 date = "%s %s %s" % (
70 date,
71 year,
72 time
73 )
74
75 if "tbd" in date.lower():
76 continue
77
78 date = date.replace(' PLEASE NOTE NEW TIME', '')
79
80 # Check if the event has been postponed.
81 postponed = 'POSTPONED' in date
82 if postponed:
83 date = date.replace(' POSTPONED', '')
84
85 date_formats = ["%B %d %Y %I:%M %p", "%b. %d %Y %I:%M %p"]
86 datetime = None
87 for fmt in date_formats:
88 try:
89 datetime = dt.datetime.strptime(date, fmt)
90 except ValueError:
91 pass
92
93 # If the datetime can't be parsed, bail.
94 if datetime is None:
95 return
96
97 title_key = set(metainf) & set([
98 'Public Hearing:', 'Summit:', 'Roundtable:',
99 'Public Roundtable:', 'Public Meeting:', 'Public Forum:',
100 'Meeting:'])
101 assert len(title_key) == 1, "Couldn't determine event title."
102 title_key = list(title_key).pop()
103 title = metainf[title_key]
104
105 title = re.sub(
106 r"\*\*Click here to view public hearing notice\*\*",
107 "",
108 title
109 )
110
111 # If event was postponed, add a warning to the title.
112 if postponed:
113 title = 'POSTPONED: %s' % title
114
115 event = Event(session, datetime, 'committee:meeting',
116 title,
117 location=metainf['Place:'],
118 contact=metainf['Contact:'])
119 if 'Media Contact:' in metainf:
120 event.update(media_contact=metainf['Media Contact:'])
121 event.add_source(url)
122 event.add_participant('host',
123 ctty,
124 'committee',
125 chamber=chamber)
126
127 self.save_event(event)
128
129 def scrape(self, chamber, session):
130 self.scrape_lower(chamber, session)
131 #self.scrape_upper(chamber, session)
132
133 def scrape_lower(self, chamber, session):
134 if chamber == 'other':
135 self.lower_parse_page(url, session)
136
137 """
138 def scrape_upper(self, chamber, session):
139 if chamber != 'upper':
140 return
141
142 url = (r'http://open.nysenate.gov/legislation/2.0/search.json?'
143 r'term=otype:meeting&pageSize=1000&pageIdx=%d')
144 page_index = 1
145 while True:
146 resp = self.get(url % page_index)
147 if not resp.json():
148 break
149 if not resp.json()['response']['results']:
150 break
151 for obj in resp.json()['response']['results']:
152 event = self.upper_scrape_event(chamber, session, obj)
153 if event:
154 self.save_event(event)
155 page_index += 1
156
157 def upper_scrape_event(self, chamber, session, obj):
158 meeting = obj['data']['meeting']
159 date = int(meeting['meetingDateTime'])
160 date = dt.datetime.fromtimestamp(date / 1000)
161 if str(date.year) not in session:
162 return
163 description = 'Committee Meeting: ' + meeting['committeeName']
164 event = Event(session, date, 'committee:meeting',
165 description=description,
166 location=meeting['location'] or 'No location given.')
167 event.add_source(obj['url'])
168 event.add_participant('chair', meeting['committeeChair'],
169 'legislator', chamber='upper')
170 event.add_participant('host', meeting['committeeName'],
171 'committee', chamber='upper')
172
173 rgx = r'([a-z]+)(\d+)'
174 for bill in meeting['bills']:
175 raw_id = bill['senateBillNo']
176 bill_id = ' '.join(re.search(rgx, raw_id, re.I).groups())
177 event.add_related_bill(
178 bill_id, type='bill',
179 description=bill['summary'] or 'No description given.')
180 return event
181 """
182
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/openstates/ny/events.py b/openstates/ny/events.py
--- a/openstates/ny/events.py
+++ b/openstates/ny/events.py
@@ -26,12 +26,16 @@
if len(tds) < 2:
continue
key, value = tds
- if key.tag == 'th':
+
+ if key.tag == 'th' and key.get("class") == 'hrgdate':
date = key.text_content()
- date = re.sub("\s+", " ", date)
+ date = re.sub(r"\s+", " ", date)
date = re.sub(".*POSTPONED NEW DATE", "", date).strip()
- ctty = value.xpath(".//strong")[0]
- ctty = ctty.text_content()
+
+ # Due to the html structure this shouldn't be an elif
+ # It needs to fire twice in the same loop iteration
+ if value.tag == 'th' and value.get("class") == 'commtitle':
+ ctty = value.text_content()
chamber = 'other'
if "senate" in ctty.lower():
@@ -40,11 +44,14 @@
chamber = 'lower'
if "joint" in ctty.lower():
chamber = 'joint'
+
+ coms = value.xpath('.//div[contains(@class,"comm-txt")]/text()')
+
elif key.tag == 'td':
key = key.text_content().strip()
value = value.text_content().strip()
value = value.replace(u'\x96', '-')
- value = re.sub("\s+", " ", value)
+ value = re.sub(r"\s+", " ", value)
metainf[key] = value
time = metainf['Time:']
@@ -119,10 +126,12 @@
if 'Media Contact:' in metainf:
event.update(media_contact=metainf['Media Contact:'])
event.add_source(url)
- event.add_participant('host',
- ctty,
- 'committee',
- chamber=chamber)
+
+ for com in coms:
+ event.add_participant('host',
+ com.strip(),
+ 'committee',
+ chamber=self.classify_committee(com))
self.save_event(event)
@@ -134,6 +143,16 @@
if chamber == 'other':
self.lower_parse_page(url, session)
+ def classify_committee(self, name):
+ chamber = 'other'
+ if "senate" in name.lower():
+ chamber = 'upper'
+ if "assembly" in name.lower():
+ chamber = 'lower'
+ if "joint" in name.lower():
+ chamber = 'joint'
+ return chamber
+
"""
def scrape_upper(self, chamber, session):
if chamber != 'upper':
| {"golden_diff": "diff --git a/openstates/ny/events.py b/openstates/ny/events.py\n--- a/openstates/ny/events.py\n+++ b/openstates/ny/events.py\n@@ -26,12 +26,16 @@\n if len(tds) < 2:\n continue\n key, value = tds\n- if key.tag == 'th':\n+\n+ if key.tag == 'th' and key.get(\"class\") == 'hrgdate':\n date = key.text_content()\n- date = re.sub(\"\\s+\", \" \", date)\n+ date = re.sub(r\"\\s+\", \" \", date)\n date = re.sub(\".*POSTPONED NEW DATE\", \"\", date).strip()\n- ctty = value.xpath(\".//strong\")[0]\n- ctty = ctty.text_content()\n+\n+ # Due to the html structure this shouldn't be an elif\n+ # It needs to fire twice in the same loop iteration\n+ if value.tag == 'th' and value.get(\"class\") == 'commtitle':\n+ ctty = value.text_content()\n \n chamber = 'other'\n if \"senate\" in ctty.lower():\n@@ -40,11 +44,14 @@\n chamber = 'lower'\n if \"joint\" in ctty.lower():\n chamber = 'joint'\n+\n+ coms = value.xpath('.//div[contains(@class,\"comm-txt\")]/text()')\n+\n elif key.tag == 'td':\n key = key.text_content().strip()\n value = value.text_content().strip()\n value = value.replace(u'\\x96', '-')\n- value = re.sub(\"\\s+\", \" \", value)\n+ value = re.sub(r\"\\s+\", \" \", value)\n metainf[key] = value\n \n time = metainf['Time:']\n@@ -119,10 +126,12 @@\n if 'Media Contact:' in metainf:\n event.update(media_contact=metainf['Media Contact:'])\n event.add_source(url)\n- event.add_participant('host',\n- ctty,\n- 'committee',\n- chamber=chamber)\n+\n+ for com in coms:\n+ event.add_participant('host',\n+ com.strip(),\n+ 'committee',\n+ chamber=self.classify_committee(com))\n \n self.save_event(event)\n \n@@ -134,6 +143,16 @@\n if chamber == 'other':\n self.lower_parse_page(url, session)\n \n+ def classify_committee(self, name):\n+ chamber = 'other'\n+ if \"senate\" in name.lower():\n+ chamber = 'upper'\n+ if \"assembly\" in name.lower():\n+ chamber = 'lower'\n+ if \"joint\" in name.lower():\n+ chamber = 'joint'\n+ return chamber\n+\n \"\"\"\n def scrape_upper(self, chamber, session):\n if chamber != 'upper':\n", "issue": "NY: events aren't scraping\nState: NY\r\n\r\nEvent scraper is broken & needs investigation.\n", "before_files": [{"content": "import re\nimport datetime as dt\nimport pytz\nimport lxml.html\nfrom billy.scrape.events import EventScraper, Event\nfrom openstates.utils import LXMLMixin\n\nurl = \"http://assembly.state.ny.us/leg/?sh=hear\"\n\n\nclass NYEventScraper(EventScraper, LXMLMixin):\n _tz = pytz.timezone('US/Eastern')\n jurisdiction = 'ny'\n\n def lower_parse_page(self, url, session):\n page = self.lxmlize(url)\n tables = page.xpath(\"//table[@class='pubhrgtbl']\")\n date = None\n ctty = None\n chamber = 'other'\n for table in tables:\n metainf = {}\n rows = table.xpath(\".//tr\")\n for row in rows:\n tds = row.xpath(\"./*\")\n if len(tds) < 2:\n continue\n key, value = tds\n if key.tag == 'th':\n date = key.text_content()\n date = re.sub(\"\\s+\", \" \", date)\n date = re.sub(\".*POSTPONED NEW DATE\", \"\", date).strip()\n ctty = value.xpath(\".//strong\")[0]\n ctty = ctty.text_content()\n\n chamber = 'other'\n if \"senate\" in ctty.lower():\n chamber = 'upper'\n if \"house\" in ctty.lower():\n chamber = 'lower'\n if \"joint\" in ctty.lower():\n chamber = 'joint'\n elif key.tag == 'td':\n key = key.text_content().strip()\n value = value.text_content().strip()\n value = value.replace(u'\\x96', '-')\n value = re.sub(\"\\s+\", \" \", value)\n metainf[key] = value\n\n time = metainf['Time:']\n repl = {\n \"A.M.\": \"AM\",\n \"P.M.\": \"PM\",\n }\n drepl = {\n \"Sept\": 
\"Sep\"\n }\n for r in repl:\n time = time.replace(r, repl[r])\n\n for r in drepl:\n date = date.replace(r, drepl[r])\n\n time = re.sub(\"-.*\", \"\", time)\n time = time.strip()\n\n year = dt.datetime.now().year\n\n date = \"%s %s %s\" % (\n date,\n year,\n time\n )\n\n if \"tbd\" in date.lower():\n continue\n\n date = date.replace(' PLEASE NOTE NEW TIME', '')\n\n # Check if the event has been postponed.\n postponed = 'POSTPONED' in date\n if postponed:\n date = date.replace(' POSTPONED', '')\n\n date_formats = [\"%B %d %Y %I:%M %p\", \"%b. %d %Y %I:%M %p\"]\n datetime = None\n for fmt in date_formats:\n try:\n datetime = dt.datetime.strptime(date, fmt)\n except ValueError:\n pass\n\n # If the datetime can't be parsed, bail.\n if datetime is None:\n return\n\n title_key = set(metainf) & set([\n 'Public Hearing:', 'Summit:', 'Roundtable:',\n 'Public Roundtable:', 'Public Meeting:', 'Public Forum:',\n 'Meeting:'])\n assert len(title_key) == 1, \"Couldn't determine event title.\"\n title_key = list(title_key).pop()\n title = metainf[title_key]\n\n title = re.sub(\n r\"\\*\\*Click here to view public hearing notice\\*\\*\",\n \"\",\n title\n )\n\n # If event was postponed, add a warning to the title.\n if postponed:\n title = 'POSTPONED: %s' % title\n\n event = Event(session, datetime, 'committee:meeting',\n title,\n location=metainf['Place:'],\n contact=metainf['Contact:'])\n if 'Media Contact:' in metainf:\n event.update(media_contact=metainf['Media Contact:'])\n event.add_source(url)\n event.add_participant('host',\n ctty,\n 'committee',\n chamber=chamber)\n\n self.save_event(event)\n\n def scrape(self, chamber, session):\n self.scrape_lower(chamber, session)\n #self.scrape_upper(chamber, session)\n\n def scrape_lower(self, chamber, session):\n if chamber == 'other':\n self.lower_parse_page(url, session)\n\n \"\"\"\n def scrape_upper(self, chamber, session):\n if chamber != 'upper':\n return\n\n url = (r'http://open.nysenate.gov/legislation/2.0/search.json?'\n r'term=otype:meeting&pageSize=1000&pageIdx=%d')\n page_index = 1\n while True:\n resp = self.get(url % page_index)\n if not resp.json():\n break\n if not resp.json()['response']['results']:\n break\n for obj in resp.json()['response']['results']:\n event = self.upper_scrape_event(chamber, session, obj)\n if event:\n self.save_event(event)\n page_index += 1\n\n def upper_scrape_event(self, chamber, session, obj):\n meeting = obj['data']['meeting']\n date = int(meeting['meetingDateTime'])\n date = dt.datetime.fromtimestamp(date / 1000)\n if str(date.year) not in session:\n return\n description = 'Committee Meeting: ' + meeting['committeeName']\n event = Event(session, date, 'committee:meeting',\n description=description,\n location=meeting['location'] or 'No location given.')\n event.add_source(obj['url'])\n event.add_participant('chair', meeting['committeeChair'],\n 'legislator', chamber='upper')\n event.add_participant('host', meeting['committeeName'],\n 'committee', chamber='upper')\n\n rgx = r'([a-z]+)(\\d+)'\n for bill in meeting['bills']:\n raw_id = bill['senateBillNo']\n bill_id = ' '.join(re.search(rgx, raw_id, re.I).groups())\n event.add_related_bill(\n bill_id, type='bill',\n description=bill['summary'] or 'No description given.')\n return event\n \"\"\"\n", "path": "openstates/ny/events.py"}], "after_files": [{"content": "import re\nimport datetime as dt\nimport pytz\nimport lxml.html\nfrom billy.scrape.events import EventScraper, Event\nfrom openstates.utils import LXMLMixin\n\nurl = 
\"http://assembly.state.ny.us/leg/?sh=hear\"\n\n\nclass NYEventScraper(EventScraper, LXMLMixin):\n _tz = pytz.timezone('US/Eastern')\n jurisdiction = 'ny'\n\n def lower_parse_page(self, url, session):\n page = self.lxmlize(url)\n tables = page.xpath(\"//table[@class='pubhrgtbl']\")\n date = None\n ctty = None\n chamber = 'other'\n for table in tables:\n metainf = {}\n rows = table.xpath(\".//tr\")\n for row in rows:\n tds = row.xpath(\"./*\")\n if len(tds) < 2:\n continue\n key, value = tds\n\n if key.tag == 'th' and key.get(\"class\") == 'hrgdate':\n date = key.text_content()\n date = re.sub(r\"\\s+\", \" \", date)\n date = re.sub(\".*POSTPONED NEW DATE\", \"\", date).strip()\n\n # Due to the html structure this shouldn't be an elif\n # It needs to fire twice in the same loop iteration\n if value.tag == 'th' and value.get(\"class\") == 'commtitle':\n ctty = value.text_content()\n\n chamber = 'other'\n if \"senate\" in ctty.lower():\n chamber = 'upper'\n if \"house\" in ctty.lower():\n chamber = 'lower'\n if \"joint\" in ctty.lower():\n chamber = 'joint'\n\n coms = value.xpath('.//div[contains(@class,\"comm-txt\")]/text()')\n\n elif key.tag == 'td':\n key = key.text_content().strip()\n value = value.text_content().strip()\n value = value.replace(u'\\x96', '-')\n value = re.sub(r\"\\s+\", \" \", value)\n metainf[key] = value\n\n time = metainf['Time:']\n repl = {\n \"A.M.\": \"AM\",\n \"P.M.\": \"PM\",\n }\n drepl = {\n \"Sept\": \"Sep\"\n }\n for r in repl:\n time = time.replace(r, repl[r])\n\n for r in drepl:\n date = date.replace(r, drepl[r])\n\n time = re.sub(\"-.*\", \"\", time)\n time = time.strip()\n\n year = dt.datetime.now().year\n\n date = \"%s %s %s\" % (\n date,\n year,\n time\n )\n\n if \"tbd\" in date.lower():\n continue\n\n date = date.replace(' PLEASE NOTE NEW TIME', '')\n\n # Check if the event has been postponed.\n postponed = 'POSTPONED' in date\n if postponed:\n date = date.replace(' POSTPONED', '')\n\n date_formats = [\"%B %d %Y %I:%M %p\", \"%b. 
%d %Y %I:%M %p\"]\n datetime = None\n for fmt in date_formats:\n try:\n datetime = dt.datetime.strptime(date, fmt)\n except ValueError:\n pass\n\n # If the datetime can't be parsed, bail.\n if datetime is None:\n return\n\n title_key = set(metainf) & set([\n 'Public Hearing:', 'Summit:', 'Roundtable:',\n 'Public Roundtable:', 'Public Meeting:', 'Public Forum:',\n 'Meeting:'])\n assert len(title_key) == 1, \"Couldn't determine event title.\"\n title_key = list(title_key).pop()\n title = metainf[title_key]\n\n title = re.sub(\n r\"\\*\\*Click here to view public hearing notice\\*\\*\",\n \"\",\n title\n )\n\n # If event was postponed, add a warning to the title.\n if postponed:\n title = 'POSTPONED: %s' % title\n\n event = Event(session, datetime, 'committee:meeting',\n title,\n location=metainf['Place:'],\n contact=metainf['Contact:'])\n if 'Media Contact:' in metainf:\n event.update(media_contact=metainf['Media Contact:'])\n event.add_source(url)\n\n for com in coms:\n event.add_participant('host',\n com.strip(),\n 'committee',\n chamber=self.classify_committee(com))\n\n self.save_event(event)\n\n def scrape(self, chamber, session):\n self.scrape_lower(chamber, session)\n #self.scrape_upper(chamber, session)\n\n def scrape_lower(self, chamber, session):\n if chamber == 'other':\n self.lower_parse_page(url, session)\n\n def classify_committee(self, name):\n chamber = 'other'\n if \"senate\" in name.lower():\n chamber = 'upper'\n if \"assembly\" in name.lower():\n chamber = 'lower'\n if \"joint\" in name.lower():\n chamber = 'joint'\n return chamber\n\n \"\"\"\n def scrape_upper(self, chamber, session):\n if chamber != 'upper':\n return\n\n url = (r'http://open.nysenate.gov/legislation/2.0/search.json?'\n r'term=otype:meeting&pageSize=1000&pageIdx=%d')\n page_index = 1\n while True:\n resp = self.get(url % page_index)\n if not resp.json():\n break\n if not resp.json()['response']['results']:\n break\n for obj in resp.json()['response']['results']:\n event = self.upper_scrape_event(chamber, session, obj)\n if event:\n self.save_event(event)\n page_index += 1\n\n def upper_scrape_event(self, chamber, session, obj):\n meeting = obj['data']['meeting']\n date = int(meeting['meetingDateTime'])\n date = dt.datetime.fromtimestamp(date / 1000)\n if str(date.year) not in session:\n return\n description = 'Committee Meeting: ' + meeting['committeeName']\n event = Event(session, date, 'committee:meeting',\n description=description,\n location=meeting['location'] or 'No location given.')\n event.add_source(obj['url'])\n event.add_participant('chair', meeting['committeeChair'],\n 'legislator', chamber='upper')\n event.add_participant('host', meeting['committeeName'],\n 'committee', chamber='upper')\n\n rgx = r'([a-z]+)(\\d+)'\n for bill in meeting['bills']:\n raw_id = bill['senateBillNo']\n bill_id = ' '.join(re.search(rgx, raw_id, re.I).groups())\n event.add_related_bill(\n bill_id, type='bill',\n description=bill['summary'] or 'No description given.')\n return event\n \"\"\"\n", "path": "openstates/ny/events.py"}]} | 2,092 | 643 |
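The Assembly hearing page had changed its markup, so the old parser never found a committee cell and the scrape failed; the patch above keys off the `hrgdate`/`commtitle` header classes and pulls each host committee from the `comm-txt` divs before classifying its chamber. A rough, self-contained sketch of that extraction against simplified sample markup (the HTML string is illustrative, not the live page):

```python
import lxml.html

# Simplified markup in the shape the patched scraper expects (illustrative only).
SAMPLE_ROW = """
<table class="pubhrgtbl">
  <tr>
    <th class="hrgdate">October 3</th>
    <th class="commtitle">
      <div class="comm-txt">Senate Standing Committee on Finance</div>
      <div class="comm-txt">Assembly Committee on Ways and Means</div>
    </th>
  </tr>
</table>
"""


def classify_committee(name):
    """Mirror the patch's keyword-based chamber classification, defaulting to 'other'."""
    lowered = name.lower()
    chamber = "other"
    if "senate" in lowered:
        chamber = "upper"
    if "assembly" in lowered:
        chamber = "lower"
    if "joint" in lowered:
        chamber = "joint"
    return chamber


def parse_row(html):
    doc = lxml.html.fromstring(html)
    date = doc.xpath('//th[@class="hrgdate"]/text()')[0].strip()
    committees = [c.strip() for c in doc.xpath('//div[contains(@class,"comm-txt")]/text()')]
    return date, [(c, classify_committee(c)) for c in committees]


if __name__ == "__main__":
    print(parse_row(SAMPLE_ROW))
```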
gh_patches_debug_2259 | rasdani/github-patches | git_diff | zigpy__zha-device-handlers-184 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Philips Remote DIM_DOWN typo?
https://github.com/dmulcahey/zha-device-handlers/blob/833ee24710496d317a03b0f0b9f61df31291d75b/zhaquirks/philips/rwl021.py#L137
It seems that it should be:
`ARGS: [1, 30, 9],`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zhaquirks/philips/rwl021.py`
Content:
```
1 """Phillips RWL021 device."""
2 from zigpy.profiles import zha, zll
3 from zigpy.quirks import CustomCluster, CustomDevice
4 import zigpy.types as t
5 from zigpy.zcl.clusters.general import (
6 Basic,
7 BinaryInput,
8 Groups,
9 Identify,
10 LevelControl,
11 OnOff,
12 Ota,
13 PowerConfiguration,
14 Scenes,
15 )
16
17 from ..const import (
18 ARGS,
19 CLUSTER_ID,
20 COMMAND,
21 COMMAND_OFF_WITH_EFFECT,
22 COMMAND_ON,
23 COMMAND_STEP,
24 DEVICE_TYPE,
25 DIM_DOWN,
26 DIM_UP,
27 ENDPOINT_ID,
28 ENDPOINTS,
29 INPUT_CLUSTERS,
30 LONG_PRESS,
31 OUTPUT_CLUSTERS,
32 PROFILE_ID,
33 SHORT_PRESS,
34 TURN_OFF,
35 TURN_ON,
36 )
37
38 DIAGNOSTICS_CLUSTER_ID = 0x0B05 # decimal = 2821
39
40
41 class BasicCluster(CustomCluster, Basic):
42 """Centralite acceleration cluster."""
43
44 def __init__(self, *args, **kwargs):
45 """Init."""
46 super().__init__(*args, **kwargs)
47 self.attributes = super().attributes.copy()
48 self.attributes.update({0x0031: ("phillips", t.bitmap16)})
49
50
51 class PhilipsRWL021(CustomDevice):
52 """Phillips RWL021 device."""
53
54 signature = {
55 # <SimpleDescriptor endpoint=1 profile=49246 device_type=2096
56 # device_version=2
57 # input_clusters=[0]
58 # output_clusters=[0, 3, 4, 6, 8, 5]>
59 ENDPOINTS: {
60 1: {
61 PROFILE_ID: zll.PROFILE_ID,
62 DEVICE_TYPE: zll.DeviceType.SCENE_CONTROLLER,
63 INPUT_CLUSTERS: [Basic.cluster_id],
64 OUTPUT_CLUSTERS: [
65 Basic.cluster_id,
66 Identify.cluster_id,
67 Groups.cluster_id,
68 OnOff.cluster_id,
69 LevelControl.cluster_id,
70 Scenes.cluster_id,
71 ],
72 },
73 # <SimpleDescriptor endpoint=2 profile=260 device_type=12
74 # device_version=0
75 # input_clusters=[0, 1, 3, 15, 64512]
76 # output_clusters=[25]>
77 2: {
78 PROFILE_ID: zha.PROFILE_ID,
79 DEVICE_TYPE: zha.DeviceType.SIMPLE_SENSOR,
80 INPUT_CLUSTERS: [
81 Basic.cluster_id,
82 PowerConfiguration.cluster_id,
83 Identify.cluster_id,
84 BinaryInput.cluster_id,
85 64512,
86 ],
87 OUTPUT_CLUSTERS: [Ota.cluster_id],
88 },
89 }
90 }
91
92 replacement = {
93 ENDPOINTS: {
94 1: {
95 INPUT_CLUSTERS: [Basic.cluster_id],
96 OUTPUT_CLUSTERS: [
97 Basic.cluster_id,
98 Identify.cluster_id,
99 Groups.cluster_id,
100 OnOff.cluster_id,
101 LevelControl.cluster_id,
102 Scenes.cluster_id,
103 ],
104 },
105 2: {
106 INPUT_CLUSTERS: [
107 BasicCluster,
108 PowerConfiguration.cluster_id,
109 Identify.cluster_id,
110 BinaryInput.cluster_id,
111 64512,
112 ],
113 OUTPUT_CLUSTERS: [Ota.cluster_id],
114 },
115 }
116 }
117
118 device_automation_triggers = {
119 (SHORT_PRESS, TURN_ON): {COMMAND: COMMAND_ON},
120 (LONG_PRESS, TURN_OFF): {COMMAND: COMMAND_OFF_WITH_EFFECT},
121 (SHORT_PRESS, DIM_UP): {
122 COMMAND: COMMAND_STEP,
123 CLUSTER_ID: 8,
124 ENDPOINT_ID: 1,
125 ARGS: [0, 30, 9],
126 },
127 (LONG_PRESS, DIM_UP): {
128 COMMAND: COMMAND_STEP,
129 CLUSTER_ID: 8,
130 ENDPOINT_ID: 1,
131 ARGS: [0, 56, 9],
132 },
133 (SHORT_PRESS, DIM_DOWN): {
134 COMMAND: COMMAND_STEP,
135 CLUSTER_ID: 8,
136 ENDPOINT_ID: 1,
137 ARGS: [1, 56, 9],
138 },
139 (LONG_PRESS, DIM_DOWN): {
140 COMMAND: COMMAND_STEP,
141 CLUSTER_ID: 8,
142 ENDPOINT_ID: 1,
143 ARGS: [1, 56, 9],
144 },
145 }
146
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/zhaquirks/philips/rwl021.py b/zhaquirks/philips/rwl021.py
--- a/zhaquirks/philips/rwl021.py
+++ b/zhaquirks/philips/rwl021.py
@@ -134,7 +134,7 @@
COMMAND: COMMAND_STEP,
CLUSTER_ID: 8,
ENDPOINT_ID: 1,
- ARGS: [1, 56, 9],
+ ARGS: [1, 30, 9],
},
(LONG_PRESS, DIM_DOWN): {
COMMAND: COMMAND_STEP,
| {"golden_diff": "diff --git a/zhaquirks/philips/rwl021.py b/zhaquirks/philips/rwl021.py\n--- a/zhaquirks/philips/rwl021.py\n+++ b/zhaquirks/philips/rwl021.py\n@@ -134,7 +134,7 @@\n COMMAND: COMMAND_STEP,\n CLUSTER_ID: 8,\n ENDPOINT_ID: 1,\n- ARGS: [1, 56, 9],\n+ ARGS: [1, 30, 9],\n },\n (LONG_PRESS, DIM_DOWN): {\n COMMAND: COMMAND_STEP,\n", "issue": "Philips Remote DIM_DOWN typo?\nhttps://github.com/dmulcahey/zha-device-handlers/blob/833ee24710496d317a03b0f0b9f61df31291d75b/zhaquirks/philips/rwl021.py#L137\r\n\r\nIt seems that it should be:\r\n`ARGS: [1, 30, 9],`\n", "before_files": [{"content": "\"\"\"Phillips RWL021 device.\"\"\"\nfrom zigpy.profiles import zha, zll\nfrom zigpy.quirks import CustomCluster, CustomDevice\nimport zigpy.types as t\nfrom zigpy.zcl.clusters.general import (\n Basic,\n BinaryInput,\n Groups,\n Identify,\n LevelControl,\n OnOff,\n Ota,\n PowerConfiguration,\n Scenes,\n)\n\nfrom ..const import (\n ARGS,\n CLUSTER_ID,\n COMMAND,\n COMMAND_OFF_WITH_EFFECT,\n COMMAND_ON,\n COMMAND_STEP,\n DEVICE_TYPE,\n DIM_DOWN,\n DIM_UP,\n ENDPOINT_ID,\n ENDPOINTS,\n INPUT_CLUSTERS,\n LONG_PRESS,\n OUTPUT_CLUSTERS,\n PROFILE_ID,\n SHORT_PRESS,\n TURN_OFF,\n TURN_ON,\n)\n\nDIAGNOSTICS_CLUSTER_ID = 0x0B05 # decimal = 2821\n\n\nclass BasicCluster(CustomCluster, Basic):\n \"\"\"Centralite acceleration cluster.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Init.\"\"\"\n super().__init__(*args, **kwargs)\n self.attributes = super().attributes.copy()\n self.attributes.update({0x0031: (\"phillips\", t.bitmap16)})\n\n\nclass PhilipsRWL021(CustomDevice):\n \"\"\"Phillips RWL021 device.\"\"\"\n\n signature = {\n # <SimpleDescriptor endpoint=1 profile=49246 device_type=2096\n # device_version=2\n # input_clusters=[0]\n # output_clusters=[0, 3, 4, 6, 8, 5]>\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zll.PROFILE_ID,\n DEVICE_TYPE: zll.DeviceType.SCENE_CONTROLLER,\n INPUT_CLUSTERS: [Basic.cluster_id],\n OUTPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n ],\n },\n # <SimpleDescriptor endpoint=2 profile=260 device_type=12\n # device_version=0\n # input_clusters=[0, 1, 3, 15, 64512]\n # output_clusters=[25]>\n 2: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.SIMPLE_SENSOR,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n PowerConfiguration.cluster_id,\n Identify.cluster_id,\n BinaryInput.cluster_id,\n 64512,\n ],\n OUTPUT_CLUSTERS: [Ota.cluster_id],\n },\n }\n }\n\n replacement = {\n ENDPOINTS: {\n 1: {\n INPUT_CLUSTERS: [Basic.cluster_id],\n OUTPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n ],\n },\n 2: {\n INPUT_CLUSTERS: [\n BasicCluster,\n PowerConfiguration.cluster_id,\n Identify.cluster_id,\n BinaryInput.cluster_id,\n 64512,\n ],\n OUTPUT_CLUSTERS: [Ota.cluster_id],\n },\n }\n }\n\n device_automation_triggers = {\n (SHORT_PRESS, TURN_ON): {COMMAND: COMMAND_ON},\n (LONG_PRESS, TURN_OFF): {COMMAND: COMMAND_OFF_WITH_EFFECT},\n (SHORT_PRESS, DIM_UP): {\n COMMAND: COMMAND_STEP,\n CLUSTER_ID: 8,\n ENDPOINT_ID: 1,\n ARGS: [0, 30, 9],\n },\n (LONG_PRESS, DIM_UP): {\n COMMAND: COMMAND_STEP,\n CLUSTER_ID: 8,\n ENDPOINT_ID: 1,\n ARGS: [0, 56, 9],\n },\n (SHORT_PRESS, DIM_DOWN): {\n COMMAND: COMMAND_STEP,\n CLUSTER_ID: 8,\n ENDPOINT_ID: 1,\n ARGS: [1, 56, 9],\n },\n (LONG_PRESS, DIM_DOWN): {\n COMMAND: COMMAND_STEP,\n CLUSTER_ID: 8,\n ENDPOINT_ID: 1,\n ARGS: [1, 56, 9],\n },\n }\n", "path": 
"zhaquirks/philips/rwl021.py"}], "after_files": [{"content": "\"\"\"Phillips RWL021 device.\"\"\"\nfrom zigpy.profiles import zha, zll\nfrom zigpy.quirks import CustomCluster, CustomDevice\nimport zigpy.types as t\nfrom zigpy.zcl.clusters.general import (\n Basic,\n BinaryInput,\n Groups,\n Identify,\n LevelControl,\n OnOff,\n Ota,\n PowerConfiguration,\n Scenes,\n)\n\nfrom ..const import (\n ARGS,\n CLUSTER_ID,\n COMMAND,\n COMMAND_OFF_WITH_EFFECT,\n COMMAND_ON,\n COMMAND_STEP,\n DEVICE_TYPE,\n DIM_DOWN,\n DIM_UP,\n ENDPOINT_ID,\n ENDPOINTS,\n INPUT_CLUSTERS,\n LONG_PRESS,\n OUTPUT_CLUSTERS,\n PROFILE_ID,\n SHORT_PRESS,\n TURN_OFF,\n TURN_ON,\n)\n\nDIAGNOSTICS_CLUSTER_ID = 0x0B05 # decimal = 2821\n\n\nclass BasicCluster(CustomCluster, Basic):\n \"\"\"Centralite acceleration cluster.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Init.\"\"\"\n super().__init__(*args, **kwargs)\n self.attributes = super().attributes.copy()\n self.attributes.update({0x0031: (\"phillips\", t.bitmap16)})\n\n\nclass PhilipsRWL021(CustomDevice):\n \"\"\"Phillips RWL021 device.\"\"\"\n\n signature = {\n # <SimpleDescriptor endpoint=1 profile=49246 device_type=2096\n # device_version=2\n # input_clusters=[0]\n # output_clusters=[0, 3, 4, 6, 8, 5]>\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zll.PROFILE_ID,\n DEVICE_TYPE: zll.DeviceType.SCENE_CONTROLLER,\n INPUT_CLUSTERS: [Basic.cluster_id],\n OUTPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n ],\n },\n # <SimpleDescriptor endpoint=2 profile=260 device_type=12\n # device_version=0\n # input_clusters=[0, 1, 3, 15, 64512]\n # output_clusters=[25]>\n 2: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.SIMPLE_SENSOR,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n PowerConfiguration.cluster_id,\n Identify.cluster_id,\n BinaryInput.cluster_id,\n 64512,\n ],\n OUTPUT_CLUSTERS: [Ota.cluster_id],\n },\n }\n }\n\n replacement = {\n ENDPOINTS: {\n 1: {\n INPUT_CLUSTERS: [Basic.cluster_id],\n OUTPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n ],\n },\n 2: {\n INPUT_CLUSTERS: [\n BasicCluster,\n PowerConfiguration.cluster_id,\n Identify.cluster_id,\n BinaryInput.cluster_id,\n 64512,\n ],\n OUTPUT_CLUSTERS: [Ota.cluster_id],\n },\n }\n }\n\n device_automation_triggers = {\n (SHORT_PRESS, TURN_ON): {COMMAND: COMMAND_ON},\n (LONG_PRESS, TURN_OFF): {COMMAND: COMMAND_OFF_WITH_EFFECT},\n (SHORT_PRESS, DIM_UP): {\n COMMAND: COMMAND_STEP,\n CLUSTER_ID: 8,\n ENDPOINT_ID: 1,\n ARGS: [0, 30, 9],\n },\n (LONG_PRESS, DIM_UP): {\n COMMAND: COMMAND_STEP,\n CLUSTER_ID: 8,\n ENDPOINT_ID: 1,\n ARGS: [0, 56, 9],\n },\n (SHORT_PRESS, DIM_DOWN): {\n COMMAND: COMMAND_STEP,\n CLUSTER_ID: 8,\n ENDPOINT_ID: 1,\n ARGS: [1, 30, 9],\n },\n (LONG_PRESS, DIM_DOWN): {\n COMMAND: COMMAND_STEP,\n CLUSTER_ID: 8,\n ENDPOINT_ID: 1,\n ARGS: [1, 56, 9],\n },\n }\n", "path": "zhaquirks/philips/rwl021.py"}]} | 1,649 | 145 |
gh_patches_debug_13705 | rasdani/github-patches | git_diff | angr__angr-3210 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
stdin dependent symbolic execution is broken by gcc 11.2
<!--
*Disclaimer:
The angr suite is maintained by a small team of volunteers.
While we cannot guarantee any timeliness for fixes and enhancements, we will do our best.
For more real-time help with angr, from us and the community, join our [Slack.](https://angr.io/invite/)*
-->
---
**Describe the bug.**
<!--
Please include a clear and concise description of what the bug is.
-->
When working with binaries compiled with gcc 11.2, symbolic execution always takes only one path, as if the stdin it should depend on wasn't symbolic at all.
**Environment Information.**
<!--
Many common issues are caused by problems with the local Python environment.
Before submitting, double-check that your versions of all modules in the angr suite (angr, cle, pyvex, ...) are up to date.
Please include the output of `python -m angr.misc.bug_report` here.
-->
Tested on arch linux and fedora, freshly installed angr environment from pip (angr 9.1.11752)
**To Reproduce.**
<!--
Please include *both a script to reproduce the crash, and attach the binary used, if possible*
-->
Compile this test code with gcc:
```c
#include <stdio.h>
int main() {
int a;
scanf("%d", &a);
if(a == 0)
printf("do this\n");
else
printf("do that\n");
return 0;
}
```
The angr script:
```py
import angr, claripy
p = angr.Project('test')
entry = p.factory.entry_state()
simgr = p.factory.simgr(entry)
simgr.run()
import IPython
IPython.embed()
```
When compiled with gcc 11.1:
```
In [1]: simgr
Out[1]: <SimulationManager with 2 deadended>
```
When compiled with gcc 11.2:
```
In [1]: simgr
Out[1]: <SimulationManager with all stashes empty (1 errored)>
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `angr/procedures/glibc/__libc_start_main.py`
Content:
```
1
2 import logging
3
4 import angr
5
6 l = logging.getLogger(name=__name__)
7
8 ######################################
9 # __libc_start_main
10 ######################################
11 class __libc_start_main(angr.SimProcedure):
12 #pylint:disable=arguments-differ,unused-argument,attribute-defined-outside-init
13
14 ADDS_EXITS = True
15 NO_RET = True
16 local_vars = ('main', 'argc', 'argv', 'init', 'fini')
17
18 def _initialize_b_loc_table(self):
19 """
20 Initialize ptable for ctype
21
22 See __ctype_b_loc.c in libc implementation
23 """
24 malloc = angr.SIM_PROCEDURES['libc']['malloc']
25 table = self.inline_call(malloc, 768).ret_expr
26 table_ptr = self.inline_call(malloc, self.state.arch.bytes).ret_expr
27
28 for pos, c in enumerate(self.state.libc.LOCALE_ARRAY):
29 # Each entry is 2 bytes
30 self.state.memory.store(table + (pos*2),
31 self.state.solver.BVV(c, 16),
32 inspect=False,
33 disable_actions=True,
34 )
35 # Offset for negative chars
36 # 256 because 2 bytes each, -128 * 2
37 table += 256
38 self.state.memory.store(table_ptr,
39 table,
40 size=self.state.arch.bytes,
41 endness=self.state.arch.memory_endness,
42 inspect=False,
43 disable_actions=True,
44 )
45
46 self.state.libc.ctype_b_loc_table_ptr = table_ptr
47
48 def _initialize_tolower_loc_table(self):
49 """
50 Initialize ptable for ctype
51
52 See __ctype_tolower_loc.c in libc implementation
53 """
54 malloc = angr.SIM_PROCEDURES['libc']['malloc']
55 # 384 entries, 4 bytes each
56 table = self.inline_call(malloc, 384*4).ret_expr
57 table_ptr = self.inline_call(malloc, self.state.arch.bytes).ret_expr
58
59 for pos, c in enumerate(self.state.libc.TOLOWER_LOC_ARRAY):
60 self.state.memory.store(table + (pos * 4),
61 self.state.solver.BVV(c, 32),
62 endness=self.state.arch.memory_endness,
63 inspect=False,
64 disable_actions=True,
65 )
66
67 # Offset for negative chars: -128 index (4 bytes per index)
68 table += (128 * 4)
69 self.state.memory.store(table_ptr,
70 table,
71 size=self.state.arch.bytes,
72 endness=self.state.arch.memory_endness,
73 inspect=False,
74 disable_actions=True,
75 )
76
77 self.state.libc.ctype_tolower_loc_table_ptr = table_ptr
78
79 def _initialize_toupper_loc_table(self):
80 """
81 Initialize ptable for ctype
82
83 See __ctype_toupper_loc.c in libc implementation
84 """
85 malloc = angr.SIM_PROCEDURES['libc']['malloc']
86 # 384 entries, 4 bytes each
87 table = self.inline_call(malloc, 384*4).ret_expr
88 table_ptr = self.inline_call(malloc, self.state.arch.bytes).ret_expr
89
90 for pos, c in enumerate(self.state.libc.TOUPPER_LOC_ARRAY):
91 self.state.memory.store(table + (pos * 4),
92 self.state.solver.BVV(c, 32),
93 endness=self.state.arch.memory_endness,
94 inspect=False,
95 disable_actions=True,
96 )
97
98 # Offset for negative chars: -128 index (4 bytes per index)
99 table += (128 * 4)
100 self.state.memory.store(table_ptr,
101 table,
102 size=self.state.arch.bytes,
103 endness=self.state.arch.memory_endness,
104 inspect=False,
105 disable_actions=True,
106 )
107
108 self.state.libc.ctype_toupper_loc_table_ptr = table_ptr
109
110 def _initialize_ctype_table(self):
111 self._initialize_b_loc_table()
112 self._initialize_tolower_loc_table()
113 self._initialize_toupper_loc_table()
114
115 def _initialize_errno(self):
116 malloc = angr.SIM_PROCEDURES['libc']['malloc']
117 errno_loc = self.inline_call(malloc, self.state.arch.bytes).ret_expr
118
119 self.state.libc.errno_location = errno_loc
120 self.state.memory.store(errno_loc, self.state.solver.BVV(0, self.state.arch.bits))
121
122 @property
123 def envp(self):
124 return self.argv + (self.argc+1)*self.state.arch.bytes
125
126 def run(self, main, argc, argv, init, fini):
127 # TODO: handle symbolic and static modes
128
129 self._initialize_ctype_table()
130 self._initialize_errno()
131
132 self.main, self.argc, self.argv, self.init, self.fini = self._extract_args(self.state, main, argc, argv, init,
133 fini)
134
135 # TODO: __cxa_atexit calls for various at-exit needs
136
137 self.call(self.init, (self.argc[31:0], self.argv, self.envp), 'after_init',
138 prototype = 'int main(int argc, char **argv, char **envp)')
139
140 def after_init(self, main, argc, argv, init, fini, exit_addr=0):
141 self.call(self.main, (self.argc[31:0], self.argv, self.envp), 'after_main',
142 prototype='int main(int argc, char **argv, char **envp)')
143
144 def after_main(self, main, argc, argv, init, fini, exit_addr=0):
145 self.exit(0)
146
147 def static_exits(self, blocks, cfg=None, **kwargs):
148 # Execute those blocks with a blank state, and then dump the arguments
149 blank_state = angr.SimState(project=self.project, mode="fastpath", cle_memory_backer=self.project.loader.memory,
150 add_options={angr.options.SYMBOL_FILL_UNCONSTRAINED_MEMORY,
151 angr.options.SYMBOL_FILL_UNCONSTRAINED_REGISTERS})
152 # set up the stack pointer
153 blank_state.regs.sp = 0x7ffffff0
154
155 # special handling for x86 PIE GCC binaries
156 #
157 # 08049C70 xor ebp, ebp
158 # 08049C72 pop esi
159 # 08049C73 mov ecx, esp
160 # 08049C75 and esp, 0FFFFFFF0h
161 # 08049C78 push eax
162 # 08049C79 push esp ; stack_end
163 # 08049C7A push edx ; rtld_fini
164 # 08049C7B call sub_8049CA3 // this is the get_pc function
165 # // first block starts here
166 # 08049C80 add ebx, (offset off_806B000 - $)
167 # 08049C86 lea eax, (nullsub_2 - 806B000h)[ebx]
168 # 08049C8C push eax ; fini
169 # 08049C8D lea eax, (sub_805F530 - 806B000h)[ebx]
170 # 08049C93 push eax ; init
171 # 08049C94 push ecx ; ubp_av
172 # 08049C95 push esi ; argc
173 # 08049C96 mov eax, offset main
174 # 08049C9C push eax ; main
175 # 08049C9D call ___libc_start_main
176 if cfg is not None and self.arch.name == "X86":
177 first_block = blocks[0]
178 first_node = cfg.model.get_any_node(first_block.addr)
179 if first_node is not None:
180 caller_nodes = cfg.model.get_predecessors(first_node, excluding_fakeret=False)
181 if len(caller_nodes) == 1:
182 caller_node = caller_nodes[0]
183 succ_and_jks = caller_node.successors_and_jumpkinds()
184 if len(succ_and_jks) == 1 and succ_and_jks[0][1] == 'Ijk_Call':
185 # get_pc
186 getpc_func = cfg.functions.get_by_addr(succ_and_jks[0][0].addr)
187 if getpc_func is not None and 'get_pc' in getpc_func.info:
188 # GCC-generated x86-pie binary confirmed.
189 # initialize the specified register with the block address
190 get_pc_reg = getpc_func.info['get_pc']
191 setattr(blank_state.regs, "_" + get_pc_reg, first_block.addr)
192
193 # Execute each block
194 state = blank_state
195 for b in blocks:
196 irsb = self.project.factory.default_engine.process(state, irsb=b, force_addr=b.addr)
197 if irsb.successors:
198 state = irsb.successors[0]
199 else:
200 break
201
202 cc = angr.DEFAULT_CC[self.arch.name](self.arch)
203 ty = angr.sim_type.parse_signature('void x(void*, void*, void*, void*, void*)').with_arch(self.arch)
204 args = cc.get_args(state, ty)
205 main, _, _, init, fini = self._extract_args(blank_state, *args)
206
207 all_exits = [
208 {'address': init, 'jumpkind': 'Ijk_Call', 'namehint': 'init'},
209 {'address': main, 'jumpkind': 'Ijk_Call', 'namehint': 'main'},
210 {'address': fini, 'jumpkind': 'Ijk_Call', 'namehint': 'fini'},
211 ]
212
213 return all_exits
214
215 @staticmethod
216 def _extract_args(state, main, argc, argv, init, fini):
217 """
218 Extract arguments and set them to
219
220 :param angr.sim_state.SimState state: The program state.
221 :param main: An argument to __libc_start_main.
222 :param argc: An argument to __libc_start_main.
223 :param argv: An argument to __libc_start_main.
224 :param init: An argument to __libc_start_main.
225 :param fini: An argument to __libc_start_main.
226 :return: A tuple of five elements: (main, argc, argv, init, fini)
227 :rtype: tuple
228 """
229
230 main_ = main
231 argc_ = argc
232 argv_ = argv
233 init_ = init
234 fini_ = fini
235
236 if state.arch.name == "PPC32":
237 # for some dumb reason, PPC passes arguments to libc_start_main in some completely absurd way
238 argv_ = argc_
239 argc_ = main_
240 main_ = state.mem[state.regs.r8 + 4:].int.resolved
241 init_ = state.mem[state.regs.r8 + 8:].int.resolved
242 fini_ = state.mem[state.regs.r8 + 12:].int.resolved
243
244 elif state.arch.name == "PPC64":
245 main_ = state.mem[state.regs.r8 + 8:].long.resolved
246 init_ = state.mem[state.regs.r8 + 16:].long.resolved
247 fini_ = state.mem[state.regs.r8 + 24:].long.resolved
248
249 return main_, argc_, argv_, init_, fini_
250
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/angr/procedures/glibc/__libc_start_main.py b/angr/procedures/glibc/__libc_start_main.py
--- a/angr/procedures/glibc/__libc_start_main.py
+++ b/angr/procedures/glibc/__libc_start_main.py
@@ -134,8 +134,11 @@
# TODO: __cxa_atexit calls for various at-exit needs
- self.call(self.init, (self.argc[31:0], self.argv, self.envp), 'after_init',
- prototype = 'int main(int argc, char **argv, char **envp)')
+ if not self.state.solver.is_true(self.init == 0):
+ self.call(self.init, (self.argc[31:0], self.argv, self.envp), 'after_init',
+ prototype = 'int main(int argc, char **argv, char **envp)')
+ else:
+ self.after_init(main, argc, argv, init, fini)
def after_init(self, main, argc, argv, init, fini, exit_addr=0):
self.call(self.main, (self.argc[31:0], self.argv, self.envp), 'after_main',
| {"golden_diff": "diff --git a/angr/procedures/glibc/__libc_start_main.py b/angr/procedures/glibc/__libc_start_main.py\n--- a/angr/procedures/glibc/__libc_start_main.py\n+++ b/angr/procedures/glibc/__libc_start_main.py\n@@ -134,8 +134,11 @@\n \n # TODO: __cxa_atexit calls for various at-exit needs\n \n- self.call(self.init, (self.argc[31:0], self.argv, self.envp), 'after_init',\n- prototype = 'int main(int argc, char **argv, char **envp)')\n+ if not self.state.solver.is_true(self.init == 0):\n+ self.call(self.init, (self.argc[31:0], self.argv, self.envp), 'after_init',\n+ prototype = 'int main(int argc, char **argv, char **envp)')\n+ else:\n+ self.after_init(main, argc, argv, init, fini)\n \n def after_init(self, main, argc, argv, init, fini, exit_addr=0):\n self.call(self.main, (self.argc[31:0], self.argv, self.envp), 'after_main',\n", "issue": "stdin dependent symbolic execution is broken by gcc 11.2\n<!--\r\n*Disclaimer:\r\nThe angr suite is maintained by a small team of volunteers.\r\nWhile we cannot guarantee any timeliness for fixes and enhancements, we will do our best.\r\nFor more real-time help with angr, from us and the community, join our [Slack.](https://angr.io/invite/)*\r\n-->\r\n---\r\n\r\n**Describe the bug.**\r\n\r\n<!--\r\nPlease include a clear and concise description of what the bug is.\r\n-->\r\n\r\nWhen working with binaries compiled with gcc 11.2 symbolic execution always takes only one path as if the stdin which they should depend on wasn't symbolic at all.\r\n\r\n**Environment Information.**\r\n<!--\r\nMany common issues are caused by problems with the local Python environment.\r\nBefore submitting, double-check that your versions of all modules in the angr suite (angr, cle, pyvex, ...) are up to date.\r\nPlease include the output of `python -m angr.misc.bug_report` here.\r\n-->\r\nTested on arch linux and fedora, freshly installed angr environment from pip (angr 9.1.11752)\r\n\r\n\r\n**To Reproduce.**\r\n\r\n<!--\r\nPlease include *both a script to reproduce the crash, and attach the binary used, if possible*\r\n-->\r\nCompile this test code with gcc:\r\n```c\r\n#include <stdio.h>\r\nint main() {\r\n int a;\r\n scanf(\"%d\", &a);\r\n if(a == 0)\r\n printf(\"do this\\n\");\r\n else\r\n printf(\"do that\\n\");\r\n return 0;\r\n}\r\n```\r\nThe angr script:\r\n```py\r\nimport angr, claripy\r\n\r\np = angr.Project('test')\r\n\r\nentry = p.factory.entry_state()\r\nsimgr = p.factory.simgr(entry)\r\n\r\nsimgr.run()\r\n\r\nimport IPython\r\nIPython.embed()\r\n```\r\nWhen complied with gcc 11.1:\r\n```\r\nIn [1]: simgr\r\nOut[1]: <SimulationManager with 2 deadended>\r\n```\r\nWhen compiled with gcc 11.2:\r\n```\r\nIn [1]: simgr\r\nOut[1]: <SimulationManager with all stashes empty (1 errored)>\r\n```\r\n\n", "before_files": [{"content": "\nimport logging\n\nimport angr\n\nl = logging.getLogger(name=__name__)\n\n######################################\n# __libc_start_main\n######################################\nclass __libc_start_main(angr.SimProcedure):\n #pylint:disable=arguments-differ,unused-argument,attribute-defined-outside-init\n\n ADDS_EXITS = True\n NO_RET = True\n local_vars = ('main', 'argc', 'argv', 'init', 'fini')\n\n def _initialize_b_loc_table(self):\n \"\"\"\n Initialize ptable for ctype\n\n See __ctype_b_loc.c in libc implementation\n \"\"\"\n malloc = angr.SIM_PROCEDURES['libc']['malloc']\n table = self.inline_call(malloc, 768).ret_expr\n table_ptr = self.inline_call(malloc, self.state.arch.bytes).ret_expr\n\n for pos, c in 
enumerate(self.state.libc.LOCALE_ARRAY):\n # Each entry is 2 bytes\n self.state.memory.store(table + (pos*2),\n self.state.solver.BVV(c, 16),\n inspect=False,\n disable_actions=True,\n )\n # Offset for negative chars\n # 256 because 2 bytes each, -128 * 2\n table += 256\n self.state.memory.store(table_ptr,\n table,\n size=self.state.arch.bytes,\n endness=self.state.arch.memory_endness,\n inspect=False,\n disable_actions=True,\n )\n\n self.state.libc.ctype_b_loc_table_ptr = table_ptr\n\n def _initialize_tolower_loc_table(self):\n \"\"\"\n Initialize ptable for ctype\n\n See __ctype_tolower_loc.c in libc implementation\n \"\"\"\n malloc = angr.SIM_PROCEDURES['libc']['malloc']\n # 384 entries, 4 bytes each\n table = self.inline_call(malloc, 384*4).ret_expr\n table_ptr = self.inline_call(malloc, self.state.arch.bytes).ret_expr\n\n for pos, c in enumerate(self.state.libc.TOLOWER_LOC_ARRAY):\n self.state.memory.store(table + (pos * 4),\n self.state.solver.BVV(c, 32),\n endness=self.state.arch.memory_endness,\n inspect=False,\n disable_actions=True,\n )\n\n # Offset for negative chars: -128 index (4 bytes per index)\n table += (128 * 4)\n self.state.memory.store(table_ptr,\n table,\n size=self.state.arch.bytes,\n endness=self.state.arch.memory_endness,\n inspect=False,\n disable_actions=True,\n )\n\n self.state.libc.ctype_tolower_loc_table_ptr = table_ptr\n\n def _initialize_toupper_loc_table(self):\n \"\"\"\n Initialize ptable for ctype\n\n See __ctype_toupper_loc.c in libc implementation\n \"\"\"\n malloc = angr.SIM_PROCEDURES['libc']['malloc']\n # 384 entries, 4 bytes each\n table = self.inline_call(malloc, 384*4).ret_expr\n table_ptr = self.inline_call(malloc, self.state.arch.bytes).ret_expr\n\n for pos, c in enumerate(self.state.libc.TOUPPER_LOC_ARRAY):\n self.state.memory.store(table + (pos * 4),\n self.state.solver.BVV(c, 32),\n endness=self.state.arch.memory_endness,\n inspect=False,\n disable_actions=True,\n )\n\n # Offset for negative chars: -128 index (4 bytes per index)\n table += (128 * 4)\n self.state.memory.store(table_ptr,\n table,\n size=self.state.arch.bytes,\n endness=self.state.arch.memory_endness,\n inspect=False,\n disable_actions=True,\n )\n\n self.state.libc.ctype_toupper_loc_table_ptr = table_ptr\n\n def _initialize_ctype_table(self):\n self._initialize_b_loc_table()\n self._initialize_tolower_loc_table()\n self._initialize_toupper_loc_table()\n\n def _initialize_errno(self):\n malloc = angr.SIM_PROCEDURES['libc']['malloc']\n errno_loc = self.inline_call(malloc, self.state.arch.bytes).ret_expr\n\n self.state.libc.errno_location = errno_loc\n self.state.memory.store(errno_loc, self.state.solver.BVV(0, self.state.arch.bits))\n\n @property\n def envp(self):\n return self.argv + (self.argc+1)*self.state.arch.bytes\n\n def run(self, main, argc, argv, init, fini):\n # TODO: handle symbolic and static modes\n\n self._initialize_ctype_table()\n self._initialize_errno()\n\n self.main, self.argc, self.argv, self.init, self.fini = self._extract_args(self.state, main, argc, argv, init,\n fini)\n\n # TODO: __cxa_atexit calls for various at-exit needs\n\n self.call(self.init, (self.argc[31:0], self.argv, self.envp), 'after_init',\n prototype = 'int main(int argc, char **argv, char **envp)')\n\n def after_init(self, main, argc, argv, init, fini, exit_addr=0):\n self.call(self.main, (self.argc[31:0], self.argv, self.envp), 'after_main',\n prototype='int main(int argc, char **argv, char **envp)')\n\n def after_main(self, main, argc, argv, init, fini, exit_addr=0):\n self.exit(0)\n\n def 
static_exits(self, blocks, cfg=None, **kwargs):\n # Execute those blocks with a blank state, and then dump the arguments\n blank_state = angr.SimState(project=self.project, mode=\"fastpath\", cle_memory_backer=self.project.loader.memory,\n add_options={angr.options.SYMBOL_FILL_UNCONSTRAINED_MEMORY,\n angr.options.SYMBOL_FILL_UNCONSTRAINED_REGISTERS})\n # set up the stack pointer\n blank_state.regs.sp = 0x7ffffff0\n\n # special handling for x86 PIE GCC binaries\n #\n # 08049C70 xor ebp, ebp\n # 08049C72 pop esi\n # 08049C73 mov ecx, esp\n # 08049C75 and esp, 0FFFFFFF0h\n # 08049C78 push eax\n # 08049C79 push esp ; stack_end\n # 08049C7A push edx ; rtld_fini\n # 08049C7B call sub_8049CA3 // this is the get_pc function\n # // first block starts here\n # 08049C80 add ebx, (offset off_806B000 - $)\n # 08049C86 lea eax, (nullsub_2 - 806B000h)[ebx]\n # 08049C8C push eax ; fini\n # 08049C8D lea eax, (sub_805F530 - 806B000h)[ebx]\n # 08049C93 push eax ; init\n # 08049C94 push ecx ; ubp_av\n # 08049C95 push esi ; argc\n # 08049C96 mov eax, offset main\n # 08049C9C push eax ; main\n # 08049C9D call ___libc_start_main\n if cfg is not None and self.arch.name == \"X86\":\n first_block = blocks[0]\n first_node = cfg.model.get_any_node(first_block.addr)\n if first_node is not None:\n caller_nodes = cfg.model.get_predecessors(first_node, excluding_fakeret=False)\n if len(caller_nodes) == 1:\n caller_node = caller_nodes[0]\n succ_and_jks = caller_node.successors_and_jumpkinds()\n if len(succ_and_jks) == 1 and succ_and_jks[0][1] == 'Ijk_Call':\n # get_pc\n getpc_func = cfg.functions.get_by_addr(succ_and_jks[0][0].addr)\n if getpc_func is not None and 'get_pc' in getpc_func.info:\n # GCC-generated x86-pie binary confirmed.\n # initialize the specified register with the block address\n get_pc_reg = getpc_func.info['get_pc']\n setattr(blank_state.regs, \"_\" + get_pc_reg, first_block.addr)\n\n # Execute each block\n state = blank_state\n for b in blocks:\n irsb = self.project.factory.default_engine.process(state, irsb=b, force_addr=b.addr)\n if irsb.successors:\n state = irsb.successors[0]\n else:\n break\n\n cc = angr.DEFAULT_CC[self.arch.name](self.arch)\n ty = angr.sim_type.parse_signature('void x(void*, void*, void*, void*, void*)').with_arch(self.arch)\n args = cc.get_args(state, ty)\n main, _, _, init, fini = self._extract_args(blank_state, *args)\n\n all_exits = [\n {'address': init, 'jumpkind': 'Ijk_Call', 'namehint': 'init'},\n {'address': main, 'jumpkind': 'Ijk_Call', 'namehint': 'main'},\n {'address': fini, 'jumpkind': 'Ijk_Call', 'namehint': 'fini'},\n ]\n\n return all_exits\n\n @staticmethod\n def _extract_args(state, main, argc, argv, init, fini):\n \"\"\"\n Extract arguments and set them to\n\n :param angr.sim_state.SimState state: The program state.\n :param main: An argument to __libc_start_main.\n :param argc: An argument to __libc_start_main.\n :param argv: An argument to __libc_start_main.\n :param init: An argument to __libc_start_main.\n :param fini: An argument to __libc_start_main.\n :return: A tuple of five elements: (main, argc, argv, init, fini)\n :rtype: tuple\n \"\"\"\n\n main_ = main\n argc_ = argc\n argv_ = argv\n init_ = init\n fini_ = fini\n\n if state.arch.name == \"PPC32\":\n # for some dumb reason, PPC passes arguments to libc_start_main in some completely absurd way\n argv_ = argc_\n argc_ = main_\n main_ = state.mem[state.regs.r8 + 4:].int.resolved\n init_ = state.mem[state.regs.r8 + 8:].int.resolved\n fini_ = state.mem[state.regs.r8 + 12:].int.resolved\n\n elif 
state.arch.name == \"PPC64\":\n main_ = state.mem[state.regs.r8 + 8:].long.resolved\n init_ = state.mem[state.regs.r8 + 16:].long.resolved\n fini_ = state.mem[state.regs.r8 + 24:].long.resolved\n\n return main_, argc_, argv_, init_, fini_\n", "path": "angr/procedures/glibc/__libc_start_main.py"}], "after_files": [{"content": "\nimport logging\n\nimport angr\n\nl = logging.getLogger(name=__name__)\n\n######################################\n# __libc_start_main\n######################################\nclass __libc_start_main(angr.SimProcedure):\n #pylint:disable=arguments-differ,unused-argument,attribute-defined-outside-init\n\n ADDS_EXITS = True\n NO_RET = True\n local_vars = ('main', 'argc', 'argv', 'init', 'fini')\n\n def _initialize_b_loc_table(self):\n \"\"\"\n Initialize ptable for ctype\n\n See __ctype_b_loc.c in libc implementation\n \"\"\"\n malloc = angr.SIM_PROCEDURES['libc']['malloc']\n table = self.inline_call(malloc, 768).ret_expr\n table_ptr = self.inline_call(malloc, self.state.arch.bytes).ret_expr\n\n for pos, c in enumerate(self.state.libc.LOCALE_ARRAY):\n # Each entry is 2 bytes\n self.state.memory.store(table + (pos*2),\n self.state.solver.BVV(c, 16),\n inspect=False,\n disable_actions=True,\n )\n # Offset for negative chars\n # 256 because 2 bytes each, -128 * 2\n table += 256\n self.state.memory.store(table_ptr,\n table,\n size=self.state.arch.bytes,\n endness=self.state.arch.memory_endness,\n inspect=False,\n disable_actions=True,\n )\n\n self.state.libc.ctype_b_loc_table_ptr = table_ptr\n\n def _initialize_tolower_loc_table(self):\n \"\"\"\n Initialize ptable for ctype\n\n See __ctype_tolower_loc.c in libc implementation\n \"\"\"\n malloc = angr.SIM_PROCEDURES['libc']['malloc']\n # 384 entries, 4 bytes each\n table = self.inline_call(malloc, 384*4).ret_expr\n table_ptr = self.inline_call(malloc, self.state.arch.bytes).ret_expr\n\n for pos, c in enumerate(self.state.libc.TOLOWER_LOC_ARRAY):\n self.state.memory.store(table + (pos * 4),\n self.state.solver.BVV(c, 32),\n endness=self.state.arch.memory_endness,\n inspect=False,\n disable_actions=True,\n )\n\n # Offset for negative chars: -128 index (4 bytes per index)\n table += (128 * 4)\n self.state.memory.store(table_ptr,\n table,\n size=self.state.arch.bytes,\n endness=self.state.arch.memory_endness,\n inspect=False,\n disable_actions=True,\n )\n\n self.state.libc.ctype_tolower_loc_table_ptr = table_ptr\n\n def _initialize_toupper_loc_table(self):\n \"\"\"\n Initialize ptable for ctype\n\n See __ctype_toupper_loc.c in libc implementation\n \"\"\"\n malloc = angr.SIM_PROCEDURES['libc']['malloc']\n # 384 entries, 4 bytes each\n table = self.inline_call(malloc, 384*4).ret_expr\n table_ptr = self.inline_call(malloc, self.state.arch.bytes).ret_expr\n\n for pos, c in enumerate(self.state.libc.TOUPPER_LOC_ARRAY):\n self.state.memory.store(table + (pos * 4),\n self.state.solver.BVV(c, 32),\n endness=self.state.arch.memory_endness,\n inspect=False,\n disable_actions=True,\n )\n\n # Offset for negative chars: -128 index (4 bytes per index)\n table += (128 * 4)\n self.state.memory.store(table_ptr,\n table,\n size=self.state.arch.bytes,\n endness=self.state.arch.memory_endness,\n inspect=False,\n disable_actions=True,\n )\n\n self.state.libc.ctype_toupper_loc_table_ptr = table_ptr\n\n def _initialize_ctype_table(self):\n self._initialize_b_loc_table()\n self._initialize_tolower_loc_table()\n self._initialize_toupper_loc_table()\n\n def _initialize_errno(self):\n malloc = angr.SIM_PROCEDURES['libc']['malloc']\n errno_loc = 
self.inline_call(malloc, self.state.arch.bytes).ret_expr\n\n self.state.libc.errno_location = errno_loc\n self.state.memory.store(errno_loc, self.state.solver.BVV(0, self.state.arch.bits))\n\n @property\n def envp(self):\n return self.argv + (self.argc+1)*self.state.arch.bytes\n\n def run(self, main, argc, argv, init, fini):\n # TODO: handle symbolic and static modes\n\n self._initialize_ctype_table()\n self._initialize_errno()\n\n self.main, self.argc, self.argv, self.init, self.fini = self._extract_args(self.state, main, argc, argv, init,\n fini)\n\n # TODO: __cxa_atexit calls for various at-exit needs\n\n if not self.state.solver.is_true(self.init == 0):\n self.call(self.init, (self.argc[31:0], self.argv, self.envp), 'after_init',\n prototype = 'int main(int argc, char **argv, char **envp)')\n else:\n self.after_init(main, argc, argv, init, fini)\n\n def after_init(self, main, argc, argv, init, fini, exit_addr=0):\n self.call(self.main, (self.argc[31:0], self.argv, self.envp), 'after_main',\n prototype='int main(int argc, char **argv, char **envp)')\n\n def after_main(self, main, argc, argv, init, fini, exit_addr=0):\n self.exit(0)\n\n def static_exits(self, blocks, cfg=None, **kwargs):\n # Execute those blocks with a blank state, and then dump the arguments\n blank_state = angr.SimState(project=self.project, mode=\"fastpath\", cle_memory_backer=self.project.loader.memory,\n add_options={angr.options.SYMBOL_FILL_UNCONSTRAINED_MEMORY,\n angr.options.SYMBOL_FILL_UNCONSTRAINED_REGISTERS})\n # set up the stack pointer\n blank_state.regs.sp = 0x7ffffff0\n\n # special handling for x86 PIE GCC binaries\n #\n # 08049C70 xor ebp, ebp\n # 08049C72 pop esi\n # 08049C73 mov ecx, esp\n # 08049C75 and esp, 0FFFFFFF0h\n # 08049C78 push eax\n # 08049C79 push esp ; stack_end\n # 08049C7A push edx ; rtld_fini\n # 08049C7B call sub_8049CA3 // this is the get_pc function\n # // first block starts here\n # 08049C80 add ebx, (offset off_806B000 - $)\n # 08049C86 lea eax, (nullsub_2 - 806B000h)[ebx]\n # 08049C8C push eax ; fini\n # 08049C8D lea eax, (sub_805F530 - 806B000h)[ebx]\n # 08049C93 push eax ; init\n # 08049C94 push ecx ; ubp_av\n # 08049C95 push esi ; argc\n # 08049C96 mov eax, offset main\n # 08049C9C push eax ; main\n # 08049C9D call ___libc_start_main\n if cfg is not None and self.arch.name == \"X86\":\n first_block = blocks[0]\n first_node = cfg.model.get_any_node(first_block.addr)\n if first_node is not None:\n caller_nodes = cfg.model.get_predecessors(first_node, excluding_fakeret=False)\n if len(caller_nodes) == 1:\n caller_node = caller_nodes[0]\n succ_and_jks = caller_node.successors_and_jumpkinds()\n if len(succ_and_jks) == 1 and succ_and_jks[0][1] == 'Ijk_Call':\n # get_pc\n getpc_func = cfg.functions.get_by_addr(succ_and_jks[0][0].addr)\n if getpc_func is not None and 'get_pc' in getpc_func.info:\n # GCC-generated x86-pie binary confirmed.\n # initialize the specified register with the block address\n get_pc_reg = getpc_func.info['get_pc']\n setattr(blank_state.regs, \"_\" + get_pc_reg, first_block.addr)\n\n # Execute each block\n state = blank_state\n for b in blocks:\n irsb = self.project.factory.default_engine.process(state, irsb=b, force_addr=b.addr)\n if irsb.successors:\n state = irsb.successors[0]\n else:\n break\n\n cc = angr.DEFAULT_CC[self.arch.name](self.arch)\n ty = angr.sim_type.parse_signature('void x(void*, void*, void*, void*, void*)').with_arch(self.arch)\n args = cc.get_args(state, ty)\n main, _, _, init, fini = self._extract_args(blank_state, *args)\n\n all_exits = [\n 
{'address': init, 'jumpkind': 'Ijk_Call', 'namehint': 'init'},\n {'address': main, 'jumpkind': 'Ijk_Call', 'namehint': 'main'},\n {'address': fini, 'jumpkind': 'Ijk_Call', 'namehint': 'fini'},\n ]\n\n return all_exits\n\n @staticmethod\n def _extract_args(state, main, argc, argv, init, fini):\n \"\"\"\n Extract arguments and set them to\n\n :param angr.sim_state.SimState state: The program state.\n :param main: An argument to __libc_start_main.\n :param argc: An argument to __libc_start_main.\n :param argv: An argument to __libc_start_main.\n :param init: An argument to __libc_start_main.\n :param fini: An argument to __libc_start_main.\n :return: A tuple of five elements: (main, argc, argv, init, fini)\n :rtype: tuple\n \"\"\"\n\n main_ = main\n argc_ = argc\n argv_ = argv\n init_ = init\n fini_ = fini\n\n if state.arch.name == \"PPC32\":\n # for some dumb reason, PPC passes arguments to libc_start_main in some completely absurd way\n argv_ = argc_\n argc_ = main_\n main_ = state.mem[state.regs.r8 + 4:].int.resolved\n init_ = state.mem[state.regs.r8 + 8:].int.resolved\n fini_ = state.mem[state.regs.r8 + 12:].int.resolved\n\n elif state.arch.name == \"PPC64\":\n main_ = state.mem[state.regs.r8 + 8:].long.resolved\n init_ = state.mem[state.regs.r8 + 16:].long.resolved\n fini_ = state.mem[state.regs.r8 + 24:].long.resolved\n\n return main_, argc_, argv_, init_, fini_\n", "path": "angr/procedures/glibc/__libc_start_main.py"}]} | 3,923 | 273 |
gh_patches_debug_9681 | rasdani/github-patches | git_diff | streamlit__streamlit-149 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
On rerun, faded elements should become opaque one by one. Instead they become opaque all at once.
Steps to repro:
1. Run `examples/reference.py`
2. When done, rerun it.
**Expected:** on rerun, all elements fade out and then become opaque one by one even before the run is done.
**Actual:** on rerun, all elements fade out and only become opaque when the entire run is done.
I believe this bug was introduced with the Sidebar code.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # Configuration file for the Sphinx documentation builder.
2 # -*- coding: utf-8 -*-
3
4 import recommonmark
5 from recommonmark.transform import AutoStructify
6
7 # -- Path setup --------------------------------------------------------------
8
9 # If extensions (or modules to document with autodoc) are in another directory,
10 # add these directories to sys.path here. If the directory is relative to the
11 # documentation root, use os.path.abspath to make it absolute, like shown here.
12
13 import os
14 import sys
15
16 sys.path.insert(0, os.path.abspath(".."))
17 sys.path.append(os.path.abspath("./_ext"))
18
19
20 # -- Project information -----------------------------------------------------
21
22 project = u"Streamlit"
23 copyright = u"2019, Streamlit Inc."
24 author = u"Streamlit Inc."
25 display_version = True
26
27 # The full version, including alpha/beta/rc tags
28 import streamlit as st
29
30 release = st.__version__
31
32 # The short X.Y version
33 version = ".".join(st.__version__.split(".")[0:2])
34
35
36 # -- General configuration ---------------------------------------------------
37
38 # If your documentation needs a minimal Sphinx version, state it here.
39 #
40 # needs_sphinx = '1.0'
41
42 # Add any Sphinx extension module names here, as strings. They can be
43 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
44 # ones.
45 extensions = [
46 "sphinx.ext.autodoc",
47 "sphinx.ext.autosummary",
48 "sphinx.ext.githubpages",
49 "sphinx.ext.napoleon",
50 "recommonmark",
51 "stoutput",
52 ]
53
54 # Add any paths that contain templates here, relative to this directory.
55 templates_path = ["_templates"]
56
57 # The suffix(es) of source filenames.
58 # You can specify multiple suffix as a list of string:
59 source_suffix = [".rst", ".md"]
60
61 # The master toctree document.
62 master_doc = "index"
63
64 # The language for content autogenerated by Sphinx. Refer to documentation
65 # for a list of supported languages.
66 #
67 # This is also used if you do content translation via gettext catalogs.
68 # Usually you set "language" from the command line for these cases.
69 language = None
70
71 # List of patterns, relative to source directory, that match files and
72 # directories to ignore when looking for source files.
73 # This pattern also affects html_static_path and html_extra_path.
74 with open(os.path.abspath("./exclude")) as exclude_file:
75 exclude_patterns = [line.strip() for line in exclude_file.readlines()]
76
77 # The name of the Pygments (syntax highlighting) style to use.
78 pygments_style = None
79
80
81 # -- Options for HTML output -------------------------------------------------
82
83 # The theme to use for HTML and HTML Help pages. See the documentation for
84 # a list of builtin themes.
85 #
86 # html_theme = 'alabaster'
87 html_theme = "sphinx_rtd_theme"
88
89
90 # Theme options are theme-specific and customize the look and feel of a theme
91 # further. For a list of options available for each theme, see the
92 # documentation.
93 #
94 # html_theme_options = {}
95
96 # Add any paths that contain custom static files (such as style sheets) here,
97 # relative to this directory. They are copied after the builtin static files,
98 # so a file named "default.css" will overwrite the builtin "default.css".
99 html_static_path = ["_static"]
100
101 # Custom sidebar templates, must be a dictionary that maps document names
102 # to template names.
103 #
104 # The default sidebars (for documents that don't match any pattern) are
105 # defined by theme itself. Builtin themes are using these templates by
106 # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
107 # 'searchbox.html']``.
108 #
109 # html_sidebars = {}
110
111 html_show_sourcelink = False
112
113
114 # -- Options for HTMLHelp output ---------------------------------------------
115
116 # Output file base name for HTML help builder.
117 htmlhelp_basename = "Streamlitdoc"
118
119
120 # -- Options for LaTeX output ------------------------------------------------
121
122 latex_elements = {
123 # The paper size ('letterpaper' or 'a4paper').
124 #
125 # 'papersize': 'letterpaper',
126 # The font size ('10pt', '11pt' or '12pt').
127 #
128 # 'pointsize': '10pt',
129 # Additional stuff for the LaTeX preamble.
130 #
131 # 'preamble': '',
132 # Latex figure (float) alignment
133 #
134 # 'figure_align': 'htbp',
135 }
136
137 # Grouping the document tree into LaTeX files. List of tuples
138 # (source start file, target name, title,
139 # author, documentclass [howto, manual, or own class]).
140 latex_documents = [
141 (
142 master_doc,
143 "Streamlit.tex",
144 u"Streamlit Documentation",
145 u"Streamlit Inc.",
146 "manual",
147 )
148 ]
149
150
151 # -- Options for manual page output ------------------------------------------
152
153 # One entry per manual page. List of tuples
154 # (source start file, name, description, authors, manual section).
155 man_pages = [(master_doc, "streamlit", u"Streamlit Documentation", [author], 1)]
156
157
158 # -- Options for Texinfo output ----------------------------------------------
159
160 # Grouping the document tree into Texinfo files. List of tuples
161 # (source start file, target name, title, author,
162 # dir menu entry, description, category)
163 texinfo_documents = [
164 (
165 master_doc,
166 "Streamlit",
167 u"Streamlit Documentation",
168 author,
169 "Streamlit",
170 "One line description of project.",
171 "Miscellaneous",
172 )
173 ]
174
175
176 # -- Options for Epub output -------------------------------------------------
177
178 # Bibliographic Dublin Core info.
179 epub_title = project
180
181 # The unique identifier of the text. This can be a ISBN number
182 # or the project homepage.
183 #
184 # epub_identifier = ''
185
186 # A unique identification for the text.
187 #
188 # epub_uid = ''
189
190 # A list of files that should not be packed into the epub file.
191 epub_exclude_files = ["search.html"]
192
193
194 # -- Extension configuration -------------------------------------------------
195
196
197 autodoc_default_flags = ["members"]
198 autosummary_generate = True
199 html_copy_source = False
200
201 # At the bottom of conf.py
202 def setup(app):
203 app.add_config_value(
204 "recommonmark_config",
205 {
206 "url_resolver": lambda url: github_doc_root + url,
207 "auto_toc_tree_section": "Contents",
208 },
209 True,
210 )
211 app.add_transform(AutoStructify)
212
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -90,8 +90,10 @@
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
-#
-# html_theme_options = {}
+
+html_theme_options = {
+ "analytics_id": "UA-122023594-3",
+}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -90,8 +90,10 @@\n # Theme options are theme-specific and customize the look and feel of a theme\n # further. For a list of options available for each theme, see the\n # documentation.\n-#\n-# html_theme_options = {}\n+\n+html_theme_options = {\n+ \"analytics_id\": \"UA-122023594-3\",\n+}\n \n # Add any paths that contain custom static files (such as style sheets) here,\n # relative to this directory. They are copied after the builtin static files,\n", "issue": "On rerun, faded elements are becoming opaque one by one. Instead they become opaque all at once.\nSteps to repro:\r\n1. Run `examples/reference.py`\r\n2. When done, rerun it.\r\n\r\n**Expected:** on rerun, all elements fade out and then become opaque one by one even before the run is done.\r\n**Actual:** on rerun, all elements fade out and only become opaque when the entire run is done.\r\n\r\nI believe this bug was introduced with the Sidebar code.\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n# -*- coding: utf-8 -*-\n\nimport recommonmark\nfrom recommonmark.transform import AutoStructify\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nimport os\nimport sys\n\nsys.path.insert(0, os.path.abspath(\"..\"))\nsys.path.append(os.path.abspath(\"./_ext\"))\n\n\n# -- Project information -----------------------------------------------------\n\nproject = u\"Streamlit\"\ncopyright = u\"2019, Streamlit Inc.\"\nauthor = u\"Streamlit Inc.\"\ndisplay_version = True\n\n# The full version, including alpha/beta/rc tags\nimport streamlit as st\n\nrelease = st.__version__\n\n# The short X.Y version\nversion = \".\".join(st.__version__.split(\".\")[0:2])\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.githubpages\",\n \"sphinx.ext.napoleon\",\n \"recommonmark\",\n \"stoutput\",\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\nsource_suffix = [\".rst\", \".md\"]\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nwith open(os.path.abspath(\"./exclude\")) as exclude_file:\n exclude_patterns = [line.strip() for line in exclude_file.readlines()]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = None\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\n# html_theme = 'alabaster'\nhtml_theme = \"sphinx_rtd_theme\"\n\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\nhtml_show_sourcelink = False\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"Streamlitdoc\"\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n \"Streamlit.tex\",\n u\"Streamlit Documentation\",\n u\"Streamlit Inc.\",\n \"manual\",\n )\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"streamlit\", u\"Streamlit Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"Streamlit\",\n u\"Streamlit Documentation\",\n author,\n \"Streamlit\",\n \"One line description of project.\",\n \"Miscellaneous\",\n )\n]\n\n\n# -- Options for Epub output -------------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = project\n\n# The unique identifier of the text. 
This can be a ISBN number\n# or the project homepage.\n#\n# epub_identifier = ''\n\n# A unique identification for the text.\n#\n# epub_uid = ''\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = [\"search.html\"]\n\n\n# -- Extension configuration -------------------------------------------------\n\n\nautodoc_default_flags = [\"members\"]\nautosummary_generate = True\nhtml_copy_source = False\n\n# At the bottom of conf.py\ndef setup(app):\n app.add_config_value(\n \"recommonmark_config\",\n {\n \"url_resolver\": lambda url: github_doc_root + url,\n \"auto_toc_tree_section\": \"Contents\",\n },\n True,\n )\n app.add_transform(AutoStructify)\n", "path": "docs/conf.py"}], "after_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n# -*- coding: utf-8 -*-\n\nimport recommonmark\nfrom recommonmark.transform import AutoStructify\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nimport os\nimport sys\n\nsys.path.insert(0, os.path.abspath(\"..\"))\nsys.path.append(os.path.abspath(\"./_ext\"))\n\n\n# -- Project information -----------------------------------------------------\n\nproject = u\"Streamlit\"\ncopyright = u\"2019, Streamlit Inc.\"\nauthor = u\"Streamlit Inc.\"\ndisplay_version = True\n\n# The full version, including alpha/beta/rc tags\nimport streamlit as st\n\nrelease = st.__version__\n\n# The short X.Y version\nversion = \".\".join(st.__version__.split(\".\")[0:2])\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.githubpages\",\n \"sphinx.ext.napoleon\",\n \"recommonmark\",\n \"stoutput\",\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\nsource_suffix = [\".rst\", \".md\"]\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nwith open(os.path.abspath(\"./exclude\")) as exclude_file:\n exclude_patterns = [line.strip() for line in exclude_file.readlines()]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = None\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\n#\n# html_theme = 'alabaster'\nhtml_theme = \"sphinx_rtd_theme\"\n\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n\nhtml_theme_options = {\n \"analytics_id\": \"UA-122023594-3\",\n}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\nhtml_show_sourcelink = False\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"Streamlitdoc\"\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n \"Streamlit.tex\",\n u\"Streamlit Documentation\",\n u\"Streamlit Inc.\",\n \"manual\",\n )\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"streamlit\", u\"Streamlit Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"Streamlit\",\n u\"Streamlit Documentation\",\n author,\n \"Streamlit\",\n \"One line description of project.\",\n \"Miscellaneous\",\n )\n]\n\n\n# -- Options for Epub output -------------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = project\n\n# The unique identifier of the text. This can be a ISBN number\n# or the project homepage.\n#\n# epub_identifier = ''\n\n# A unique identification for the text.\n#\n# epub_uid = ''\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = [\"search.html\"]\n\n\n# -- Extension configuration -------------------------------------------------\n\n\nautodoc_default_flags = [\"members\"]\nautosummary_generate = True\nhtml_copy_source = False\n\n# At the bottom of conf.py\ndef setup(app):\n app.add_config_value(\n \"recommonmark_config\",\n {\n \"url_resolver\": lambda url: github_doc_root + url,\n \"auto_toc_tree_section\": \"Contents\",\n },\n True,\n )\n app.add_transform(AutoStructify)\n", "path": "docs/conf.py"}]} | 2,269 | 145 |
gh_patches_debug_17277 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-247 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Create packages, automate releases
Create new `opentelemetry-` packages for the API and SDK, and consider doing the same for all OC contrib packages.
Configure CI to build and release these packages when we tag a new release.
See #6 for CI bootstrapping.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-api/setup.py`
Content:
```
1 # Copyright 2019, OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16
17 import setuptools
18
19 BASE_DIR = os.path.dirname(__file__)
20 VERSION_FILENAME = os.path.join(
21 BASE_DIR, "src", "opentelemetry", "util", "version.py"
22 )
23 PACKAGE_INFO = {}
24 with open(VERSION_FILENAME) as f:
25 exec(f.read(), PACKAGE_INFO)
26
27 setuptools.setup(
28 name="opentelemetry-api",
29 version=PACKAGE_INFO["__version__"],
30 author="OpenTelemetry Authors",
31 author_email="[email protected]",
32 classifiers=[
33 "Development Status :: 3 - Alpha",
34 "Intended Audience :: Developers",
35 "License :: OSI Approved :: Apache Software License",
36 "Programming Language :: Python",
37 "Programming Language :: Python :: 3",
38 "Programming Language :: Python :: 3.4",
39 "Programming Language :: Python :: 3.5",
40 "Programming Language :: Python :: 3.6",
41 "Programming Language :: Python :: 3.7",
42 ],
43 description="OpenTelemetry Python API",
44 include_package_data=True,
45 long_description=open("README.rst").read(),
46 install_requires=["typing; python_version<'3.5'"],
47 extras_require={},
48 license="Apache-2.0",
49 package_dir={"": "src"},
50 packages=setuptools.find_namespace_packages(
51 where="src", include="opentelemetry.*"
52 ),
53 url=(
54 "https://github.com/open-telemetry/opentelemetry-python"
55 "/tree/master/opentelemetry-api"
56 ),
57 zip_safe=False,
58 )
59
```
Path: `opentelemetry-sdk/setup.py`
Content:
```
1 # Copyright 2019, OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16
17 import setuptools
18
19 BASE_DIR = os.path.dirname(__file__)
20 VERSION_FILENAME = os.path.join(
21 BASE_DIR, "src", "opentelemetry", "sdk", "version.py"
22 )
23 PACKAGE_INFO = {}
24 with open(VERSION_FILENAME) as f:
25 exec(f.read(), PACKAGE_INFO)
26
27 setuptools.setup(
28 name="opentelemetry-sdk",
29 version=PACKAGE_INFO["__version__"],
30 author="OpenTelemetry Authors",
31 author_email="[email protected]",
32 classifiers=[
33 "Development Status :: 3 - Alpha",
34 "Intended Audience :: Developers",
35 "License :: OSI Approved :: Apache Software License",
36 "Programming Language :: Python",
37 "Programming Language :: Python :: 3",
38 "Programming Language :: Python :: 3.4",
39 "Programming Language :: Python :: 3.5",
40 "Programming Language :: Python :: 3.6",
41 "Programming Language :: Python :: 3.7",
42 ],
43 description="OpenTelemetry Python SDK",
44 include_package_data=True,
45 long_description=open("README.rst").read(),
46 install_requires=["opentelemetry-api==0.1.dev0"],
47 extras_require={},
48 license="Apache-2.0",
49 package_dir={"": "src"},
50 packages=setuptools.find_namespace_packages(
51 where="src", include="opentelemetry.sdk.*"
52 ),
53 url=(
54 "https://github.com/open-telemetry/opentelemetry-python"
55 "/tree/master/opentelemetry-sdk"
56 ),
57 zip_safe=False,
58 )
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opentelemetry-api/setup.py b/opentelemetry-api/setup.py
--- a/opentelemetry-api/setup.py
+++ b/opentelemetry-api/setup.py
@@ -43,6 +43,7 @@
description="OpenTelemetry Python API",
include_package_data=True,
long_description=open("README.rst").read(),
+ long_description_content_type="text/x-rst",
install_requires=["typing; python_version<'3.5'"],
extras_require={},
license="Apache-2.0",
diff --git a/opentelemetry-sdk/setup.py b/opentelemetry-sdk/setup.py
--- a/opentelemetry-sdk/setup.py
+++ b/opentelemetry-sdk/setup.py
@@ -43,6 +43,7 @@
description="OpenTelemetry Python SDK",
include_package_data=True,
long_description=open("README.rst").read(),
+ long_description_content_type="text/x-rst",
install_requires=["opentelemetry-api==0.1.dev0"],
extras_require={},
license="Apache-2.0",
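The one-line change matters mainly for PyPI metadata: `long_description_content_type` declares how the long description should be rendered. A minimal sketch of the relevant `setup()` call is shown below; the version string is illustrative (the real packages read it from `version.py`), and the remaining arguments mirror the files above.
```python
import setuptools

setuptools.setup(
    name="opentelemetry-api",
    version="0.1.dev0",  # illustrative; the real value is read from version.py
    description="OpenTelemetry Python API",
    long_description=open("README.rst").read(),
    # Declare that the long description is reStructuredText.
    long_description_content_type="text/x-rst",
    install_requires=["typing; python_version<'3.5'"],
)
```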
| {"golden_diff": "diff --git a/opentelemetry-api/setup.py b/opentelemetry-api/setup.py\n--- a/opentelemetry-api/setup.py\n+++ b/opentelemetry-api/setup.py\n@@ -43,6 +43,7 @@\n description=\"OpenTelemetry Python API\",\n include_package_data=True,\n long_description=open(\"README.rst\").read(),\n+ long_description_content_type=\"text/x-rst\",\n install_requires=[\"typing; python_version<'3.5'\"],\n extras_require={},\n license=\"Apache-2.0\",\ndiff --git a/opentelemetry-sdk/setup.py b/opentelemetry-sdk/setup.py\n--- a/opentelemetry-sdk/setup.py\n+++ b/opentelemetry-sdk/setup.py\n@@ -43,6 +43,7 @@\n description=\"OpenTelemetry Python SDK\",\n include_package_data=True,\n long_description=open(\"README.rst\").read(),\n+ long_description_content_type=\"text/x-rst\",\n install_requires=[\"opentelemetry-api==0.1.dev0\"],\n extras_require={},\n license=\"Apache-2.0\",\n", "issue": "Create packages, automate releases\nCreate new `opentelemetry-` packages for the API and SDK, and consider doing the same for all OC contrib packages.\r\n\r\nConfigure CI to build and release these packages when we tag a new release.\r\n\r\nSee #6 for CI bootstrapping.\n", "before_files": [{"content": "# Copyright 2019, OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nimport setuptools\n\nBASE_DIR = os.path.dirname(__file__)\nVERSION_FILENAME = os.path.join(\n BASE_DIR, \"src\", \"opentelemetry\", \"util\", \"version.py\"\n)\nPACKAGE_INFO = {}\nwith open(VERSION_FILENAME) as f:\n exec(f.read(), PACKAGE_INFO)\n\nsetuptools.setup(\n name=\"opentelemetry-api\",\n version=PACKAGE_INFO[\"__version__\"],\n author=\"OpenTelemetry Authors\",\n author_email=\"[email protected]\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n description=\"OpenTelemetry Python API\",\n include_package_data=True,\n long_description=open(\"README.rst\").read(),\n install_requires=[\"typing; python_version<'3.5'\"],\n extras_require={},\n license=\"Apache-2.0\",\n package_dir={\"\": \"src\"},\n packages=setuptools.find_namespace_packages(\n where=\"src\", include=\"opentelemetry.*\"\n ),\n url=(\n \"https://github.com/open-telemetry/opentelemetry-python\"\n \"/tree/master/opentelemetry-api\"\n ),\n zip_safe=False,\n)\n", "path": "opentelemetry-api/setup.py"}, {"content": "# Copyright 2019, OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License 
is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nimport setuptools\n\nBASE_DIR = os.path.dirname(__file__)\nVERSION_FILENAME = os.path.join(\n BASE_DIR, \"src\", \"opentelemetry\", \"sdk\", \"version.py\"\n)\nPACKAGE_INFO = {}\nwith open(VERSION_FILENAME) as f:\n exec(f.read(), PACKAGE_INFO)\n\nsetuptools.setup(\n name=\"opentelemetry-sdk\",\n version=PACKAGE_INFO[\"__version__\"],\n author=\"OpenTelemetry Authors\",\n author_email=\"[email protected]\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n description=\"OpenTelemetry Python SDK\",\n include_package_data=True,\n long_description=open(\"README.rst\").read(),\n install_requires=[\"opentelemetry-api==0.1.dev0\"],\n extras_require={},\n license=\"Apache-2.0\",\n package_dir={\"\": \"src\"},\n packages=setuptools.find_namespace_packages(\n where=\"src\", include=\"opentelemetry.sdk.*\"\n ),\n url=(\n \"https://github.com/open-telemetry/opentelemetry-python\"\n \"/tree/master/opentelemetry-sdk\"\n ),\n zip_safe=False,\n)\n", "path": "opentelemetry-sdk/setup.py"}], "after_files": [{"content": "# Copyright 2019, OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nimport setuptools\n\nBASE_DIR = os.path.dirname(__file__)\nVERSION_FILENAME = os.path.join(\n BASE_DIR, \"src\", \"opentelemetry\", \"util\", \"version.py\"\n)\nPACKAGE_INFO = {}\nwith open(VERSION_FILENAME) as f:\n exec(f.read(), PACKAGE_INFO)\n\nsetuptools.setup(\n name=\"opentelemetry-api\",\n version=PACKAGE_INFO[\"__version__\"],\n author=\"OpenTelemetry Authors\",\n author_email=\"[email protected]\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n description=\"OpenTelemetry Python API\",\n include_package_data=True,\n long_description=open(\"README.rst\").read(),\n long_description_content_type=\"text/x-rst\",\n install_requires=[\"typing; python_version<'3.5'\"],\n extras_require={},\n license=\"Apache-2.0\",\n package_dir={\"\": \"src\"},\n packages=setuptools.find_namespace_packages(\n where=\"src\", include=\"opentelemetry.*\"\n ),\n url=(\n \"https://github.com/open-telemetry/opentelemetry-python\"\n 
\"/tree/master/opentelemetry-api\"\n ),\n zip_safe=False,\n)\n", "path": "opentelemetry-api/setup.py"}, {"content": "# Copyright 2019, OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nimport setuptools\n\nBASE_DIR = os.path.dirname(__file__)\nVERSION_FILENAME = os.path.join(\n BASE_DIR, \"src\", \"opentelemetry\", \"sdk\", \"version.py\"\n)\nPACKAGE_INFO = {}\nwith open(VERSION_FILENAME) as f:\n exec(f.read(), PACKAGE_INFO)\n\nsetuptools.setup(\n name=\"opentelemetry-sdk\",\n version=PACKAGE_INFO[\"__version__\"],\n author=\"OpenTelemetry Authors\",\n author_email=\"[email protected]\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n description=\"OpenTelemetry Python SDK\",\n include_package_data=True,\n long_description=open(\"README.rst\").read(),\n long_description_content_type=\"text/x-rst\",\n install_requires=[\"opentelemetry-api==0.1.dev0\"],\n extras_require={},\n license=\"Apache-2.0\",\n package_dir={\"\": \"src\"},\n packages=setuptools.find_namespace_packages(\n where=\"src\", include=\"opentelemetry.sdk.*\"\n ),\n url=(\n \"https://github.com/open-telemetry/opentelemetry-python\"\n \"/tree/master/opentelemetry-sdk\"\n ),\n zip_safe=False,\n)\n", "path": "opentelemetry-sdk/setup.py"}]} | 1,483 | 229 |
gh_patches_debug_24081 | rasdani/github-patches | git_diff | horovod__horovod-562 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Very FATAL BUG about hvd.broadcast_optimizer_state
It seems that hvd.broadcast_optimizer_state **cannot broadcast the optimizer state to the other GPUs, whether on the same machine or on other machines.**
I ran `pytorch_imagenet_resnet50.py`, and when I load a pretrained checkpoint it seems that only the first GPU on rank 0 has its learning rate restored from the checkpoint, while the others are **still using the initial learning rate!!!**
My experiment results are as follows:
=> loading checkpoint 'horovod_imagenet_v6/checkpoint-180.pth.tar'
SGD (
Parameter Group 0
dampening: 0
lr: 0.1
momentum: 0.9
nesterov: False
weight_decay: 5e-05
)
SGD (
Parameter Group 0
dampening: 0
lr: 0.1
momentum: 0.9
nesterov: False
weight_decay: 5e-05
)
SGD (
Parameter Group 0
dampening: 0
lr: 0.1
momentum: 0.9
nesterov: False
weight_decay: 5e-05
)
SGD (
Parameter Group 0
dampening: 0
lr: 0.1
momentum: 0.9
nesterov: False
weight_decay: 5e-05
)
SGD (
Parameter Group 0
dampening: 0
lr: 2.0796608879436464e-11
momentum: 0.9
nesterov: False
weight_decay: 5e-05
)
SGD (
Parameter Group 0
dampening: 0
lr: 0.1
momentum: 0.9
nesterov: False
weight_decay: 5e-05
)
SGD (
Parameter Group 0
dampening: 0
lr: 0.1
momentum: 0.9
nesterov: False
weight_decay: 5e-05
)
/search/speech/yh/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Train Epoch #1: 0%| | 0/5005 [00:00<?, ?it/s]SGD (
Parameter Group 0
dampening: 0
lr: 0.1
momentum: 0.9
nesterov: False
weight_decay: 5e-05
)
As we can see above, only one GPU's lr is `2.0796608879436464e-11`, while the others are still 0.1, which is the initial lr.
But your code contains ` # Horovod: broadcast parameters & optimizer state.
    hvd.broadcast_parameters(model.state_dict(), root_rank=0)
    hvd.broadcast_optimizer_state(optimizer, root_rank=0)`, whose comment says it broadcasts both the parameters and the optimizer state. So maybe this is a very serious bug???
Please confirm it as soon as possible. Thanks a lot! :)
--- END ISSUE ---
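A minimal script along the lines of the sketch below can reproduce the reported symptom; the tiny `Linear` model and the hard-coded learning-rate value are stand-ins for the real ResNet-50 checkpoint, so treat it as an illustration rather than the reporter's exact setup.
```python
import torch
import horovod.torch as hvd

hvd.init()

# Hypothetical stand-in for ResNet-50; the bug does not depend on the model.
model = torch.nn.Linear(10, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# Pretend rank 0 restored a checkpoint whose schedule had decayed the lr.
if hvd.rank() == 0:
    optimizer.param_groups[0]['lr'] = 2.0796608879436464e-11

hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

# With the bug, only rank 0 prints the restored lr; after the fix every rank
# should print the same value.
print(hvd.rank(), optimizer.param_groups[0]['lr'])
```
Launched with `mpirun -np 2 python repro.py` (or the equivalent Horovod launcher), the buggy version prints different learning rates per rank.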
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `horovod/torch/__init__.py`
Content:
```
1 # Copyright 2018 Uber Technologies, Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15
16 from __future__ import absolute_import
17 from __future__ import division
18 from __future__ import print_function
19
20 from horovod.common import check_extension
21
22 try:
23 check_extension('horovod.torch', 'HOROVOD_WITH_PYTORCH',
24 __file__, 'mpi_lib_v2')
25 except:
26 check_extension('horovod.torch', 'HOROVOD_WITH_PYTORCH',
27 __file__, 'mpi_lib', '_mpi_lib')
28
29 from horovod.torch.compression import Compression
30 from horovod.torch.mpi_ops import allreduce, allreduce_async, allreduce_, allreduce_async_
31 from horovod.torch.mpi_ops import allgather, allgather_async
32 from horovod.torch.mpi_ops import broadcast, broadcast_async, broadcast_, broadcast_async_
33 from horovod.torch.mpi_ops import poll, synchronize
34 from horovod.torch.mpi_ops import init, shutdown
35 from horovod.torch.mpi_ops import size, local_size, rank, local_rank
36 from horovod.torch.mpi_ops import mpi_threads_supported
37
38 import torch
39 import collections
40
41
42 class _DistributedOptimizer(torch.optim.Optimizer):
43 def __init__(self, params, named_parameters, compression):
44 super(self.__class__, self).__init__(params)
45 self._compression = compression
46
47 if named_parameters is not None:
48 named_parameters = list(named_parameters)
49 else:
50 named_parameters = []
51
52 # make sure that named_parameters are tuples
53 if any([not isinstance(p, tuple) for p in named_parameters]):
54 raise ValueError('named_parameters should be a sequence of '
55 'tuples (name, parameter), usually produced by '
56 'model.named_parameters().')
57
58 self._parameter_names = {v: k for k, v
59 in sorted(named_parameters)}
60 self._handles = {}
61 self._grad_accs = []
62
63 if size() > 1:
64 self._register_hooks()
65
66 def _register_hooks(self):
67 for param_group in self.param_groups:
68 for p in param_group['params']:
69 if p.requires_grad:
70 p_tmp = p.expand_as(p)
71 grad_acc = p_tmp.grad_fn.next_functions[0][0]
72 grad_acc.register_hook(self._make_hook(p))
73 self._grad_accs.append(grad_acc)
74
75 def _make_hook(self, p):
76 def hook(*ignore):
77 assert p not in self._handles
78 assert not p.grad.requires_grad
79 name = self._parameter_names.get(p)
80
81 tensor = p.grad.data
82 tensor_compressed, ctx = self._compression.compress(tensor)
83
84 handle = allreduce_async_(tensor_compressed, average=True, name=name)
85 self._handles[p] = (handle, ctx)
86 return hook
87
88 def synchronize(self):
89 for p, value in self._handles.items():
90 handle, ctx = value
91 output = synchronize(handle)
92 p.grad.data.set_(self._compression.decompress(output, ctx))
93 self._handles.clear()
94
95 def step(self, closure=None):
96 self.synchronize()
97 return super(self.__class__, self).step(closure)
98
99
100 def DistributedOptimizer(optimizer, named_parameters=None, compression=Compression.none):
101 """
102 An optimizer that wraps another torch.optim.Optimizer, using an allreduce to
103 average gradient values before applying gradients to model weights.
104
105 Allreduce operations are executed after each gradient is computed by `loss.backward()`
106 in parallel with each other. The `step()` method ensures that all allreduce operations are
107 finished before applying gradients to the model.
108
109 DistributedOptimizer exposes the `synchronize()` method, which forces allreduce operations
110 to finish before continuing the execution. It's useful in conjunction with gradient
111 clipping, or other operations that modify gradients in place before `step()` is executed.
112
113 Example of gradient clipping:
114 ```
115 output = model(data)
116 loss = F.nll_loss(output, target)
117 loss.backward()
118 optimizer.synchronize()
119 torch.nn.utils.clip_grad_norm(model.parameters(), args.clip)
120 optimizer.step()
121 ```
122
123 Arguments:
124 optimizer: Optimizer to use for computing gradients and applying updates.
125 named_parameters: A mapping between parameter names and values. Used for naming of
126 allreduce operations. Typically just `model.named_parameters()`.
127 compression: Compression algorithm used during allreduce to reduce the amount
128 of data sent during the each parameter update step. Defaults to
129 not using compression.
130 """
131 # We dynamically create a new class that inherits from the optimizer that was passed in.
132 # The goal is to override the `step()` method with an allreduce implementation.
133 cls = type(optimizer.__class__.__name__, (optimizer.__class__,),
134 dict(_DistributedOptimizer.__dict__))
135 return cls(optimizer.param_groups, named_parameters, compression)
136
137
138 def broadcast_parameters(params, root_rank):
139 """
140 Broadcasts the parameters from root rank to all other processes.
141 Typical usage is to broadcast the `model.state_dict()`,
142 `model.named_parameters()`, or `model.parameters()`.
143
144 Arguments:
145 params: One of the following:
146 - list of parameters to broadcast
147 - dict of parameters to broadcast
148 root_rank: The rank of the process from which parameters will be
149 broadcasted to all other processes.
150 """
151 if isinstance(params, dict):
152 params = sorted(params.items())
153 elif isinstance(params, list):
154 # support both named_parameters() and regular parameters()
155 params = [p if isinstance(p, tuple) else (None, p) for p in params]
156 else:
157 raise ValueError('invalid params of type: %s' % type(params))
158
159 # Run asynchronous broadcasts.
160 handles = []
161 for name, p in params:
162 handle = broadcast_async_(p, root_rank, name)
163 handles.append(handle)
164
165 # Wait for completion.
166 for handle in handles:
167 synchronize(handle)
168
169
170 def broadcast_optimizer_state(optimizer, root_rank):
171 """
172 Broadcasts an optimizer state from root rank to all other processes.
173
174 Arguments:
175 optimizer: An optimizer.
176 root_rank: The rank of the process from which the optimizer will be
177 broadcasted to all other processes.
178 """
179 if isinstance(optimizer, torch.optim.LBFGS):
180 # TODO(travis): L-BFGS cannot be easily supported without serializing
181 # the entire state_dict, as its structure is deeply nested and contains
182 # None type parameter values
183 raise ValueError('cannot broadcast torch.optim.LBFGS state')
184
185 state_dict = optimizer.state_dict()
186
187 # Newly created optimizers will not have their state initialized, so
188 # do that initialization here
189 if len(state_dict['state']) == 0:
190 for group in optimizer.param_groups:
191 for p in group['params']:
192 p.grad = p.data.new(p.size()).zero_()
193 optimizer.step()
194 state_dict = optimizer.state_dict()
195
196 # If the state_dict is still empty after initialization, then
197 # the optimizer is stateless, and there is nothing to broadcast.
198 # Furthermore, attempting to access the state dict would result in
199 # an error.
200 if len(state_dict['state']) == 0:
201 return
202
203 params = []
204 callbacks = {}
205 occurrences = collections.defaultdict(int)
206
207 # Some optimizer parameters may be represented as scalars instead of
208 # tensors. In such cases, we need to wrap the scalar in a tensor, then
209 # broadcast, then update the appropriate value in the state_dict with the
210 # new unwrapped scalar value via a callback.
211 def _create_callback(pid, name, t, p):
212 def _from_tensor():
213 state_dict['state'][pid][name] = t(p.numpy()[0])
214 return _from_tensor
215
216 # Groups are unordered, but their params will be distinct
217 for group in state_dict['param_groups']:
218 # The params list here is ordered by the layers in the model
219 for pid in group['params']:
220 param_state = state_dict['state'][pid]
221 for name, p in param_state.items():
222 # Some parameter names may appear more than once, in which
223 # case we ensure they have a unique identifier defined by
224 # their order
225 occurrences[name] += 1
226 key = '%s.%d' % (str(name), occurrences[name])
227
228 if not torch.is_tensor(p):
229 # Wrap the scalar in a FloatTensor, and remember its type
230 # so we can cast it back after unwrapping
231 t = type(p)
232 p = torch.Tensor([p])
233 callbacks[key] = _create_callback(pid, name, t, p)
234
235 params.append((key, p))
236
237 # Synchronized broadcast of all parameters
238 broadcast_parameters(params, root_rank)
239
240 # Post-broadcast clenaup for non-tensor parameters
241 for key, p in params:
242 if key in callbacks:
243 callbacks[key]()
244
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/horovod/torch/__init__.py b/horovod/torch/__init__.py
--- a/horovod/torch/__init__.py
+++ b/horovod/torch/__init__.py
@@ -213,8 +213,27 @@
state_dict['state'][pid][name] = t(p.numpy()[0])
return _from_tensor
- # Groups are unordered, but their params will be distinct
- for group in state_dict['param_groups']:
+ def _create_option_callback(index, option_key, option_tensor, dtype):
+ def _from_tensor():
+ optimizer.param_groups[index][option_key] = dtype(option_tensor.numpy()[0])
+ return _from_tensor
+
+ # Param groups are an ordered list, normally there is only one per model,
+ # but users can add additional param groups for example to train
+ # previously frozen layers
+ for index, group in enumerate(state_dict['param_groups']):
+ # Broadcast options like learning rate
+ for option_key, option_value in group.items():
+ if option_key == 'params':
+ continue
+
+ # Options like the learning rate are scalar, and need to be wrapped in tensors
+ key = '%s.%d' % (option_key, index)
+ dtype = type(option_value)
+ option_tensor = torch.Tensor([option_value])
+ callbacks[key] = _create_option_callback(index, option_key, option_tensor, dtype)
+ params.append((key, option_tensor))
+
# The params list here is ordered by the layers in the model
for pid in group['params']:
param_state = state_dict['state'][pid]
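The patch above works by wrapping each scalar param-group option (learning rate, momentum, weight decay, ...) in a one-element tensor, broadcasting it, and writing the value back through a callback. On Horovod versions without the fix, roughly the same effect can be obtained manually; the helper below is only a sketch built on the `broadcast_` primitive the module already exposes, and the function name is invented for illustration.
```python
import torch
import horovod.torch as hvd

def broadcast_param_group_options(optimizer, root_rank=0):
    # Hypothetical workaround: push scalar options such as 'lr' from
    # root_rank to every other rank, one tiny tensor per option.
    for index, group in enumerate(optimizer.param_groups):
        for key, value in group.items():
            if key == 'params' or not isinstance(value, (int, float)):
                continue
            t = torch.Tensor([value])
            hvd.broadcast_(t, root_rank, name='opt.%s.%d' % (key, index))
            group[key] = type(value)(t.item())
```
Called right after `hvd.broadcast_optimizer_state(optimizer, root_rank=0)`, this keeps options like the learning rate consistent across ranks.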
| {"golden_diff": "diff --git a/horovod/torch/__init__.py b/horovod/torch/__init__.py\n--- a/horovod/torch/__init__.py\n+++ b/horovod/torch/__init__.py\n@@ -213,8 +213,27 @@\n state_dict['state'][pid][name] = t(p.numpy()[0])\n return _from_tensor\n \n- # Groups are unordered, but their params will be distinct\n- for group in state_dict['param_groups']:\n+ def _create_option_callback(index, option_key, option_tensor, dtype):\n+ def _from_tensor():\n+ optimizer.param_groups[index][option_key] = dtype(option_tensor.numpy()[0])\n+ return _from_tensor\n+\n+ # Param groups are an ordered list, normally there is only one per model,\n+ # but users can add additional param groups for example to train\n+ # previously frozen layers\n+ for index, group in enumerate(state_dict['param_groups']):\n+ # Broadcast options like learning rate\n+ for option_key, option_value in group.items():\n+ if option_key == 'params':\n+ continue\n+\n+ # Options like the learning rate are scalar, and need to be wrapped in tensors\n+ key = '%s.%d' % (option_key, index)\n+ dtype = type(option_value)\n+ option_tensor = torch.Tensor([option_value])\n+ callbacks[key] = _create_option_callback(index, option_key, option_tensor, dtype)\n+ params.append((key, option_tensor))\n+\n # The params list here is ordered by the layers in the model\n for pid in group['params']:\n param_state = state_dict['state'][pid]\n", "issue": "Very FATAL BUG about hvd.broadcast_optimizer_state\nIt seems that hvd.broadcast_optimizer_state **cannot broadcast optimizer state to other gpus or other machines gpus.**\r\n\r\ni have running `pytorch_imagenet_resnet50.py`, when i load a pretrained checkpoint, it seems that only my 1st gpu on my rank0 's learning have been changed to pretrained checkpoint. but others are **still init learning!!!**\r\nmy experiments result are as follows:\r\n\r\n=> loading checkpoint 'horovod_imagenet_v6/checkpoint-180.pth.tar'\r\nSGD (\r\nParameter Group 0\r\n dampening: 0\r\n lr: 0.1\r\n momentum: 0.9\r\n nesterov: False\r\n weight_decay: 5e-05\r\n)\r\nSGD (\r\nParameter Group 0\r\n dampening: 0\r\n lr: 0.1\r\n momentum: 0.9\r\n nesterov: False\r\n weight_decay: 5e-05\r\n)\r\nSGD (\r\nParameter Group 0\r\n dampening: 0\r\n lr: 0.1\r\n momentum: 0.9\r\n nesterov: False\r\n weight_decay: 5e-05\r\n)\r\nSGD (\r\nParameter Group 0\r\n dampening: 0\r\n lr: 0.1\r\n momentum: 0.9\r\n nesterov: False\r\n weight_decay: 5e-05\r\n)\r\nSGD (\r\nParameter Group 0\r\n dampening: 0\r\n lr: 2.0796608879436464e-11\r\n momentum: 0.9\r\n nesterov: False\r\n weight_decay: 5e-05\r\n)\r\nSGD (\r\nParameter Group 0\r\n dampening: 0\r\n lr: 0.1\r\n momentum: 0.9\r\n nesterov: False\r\n weight_decay: 5e-05\r\n)\r\nSGD (\r\nParameter Group 0\r\n dampening: 0\r\n lr: 0.1\r\n momentum: 0.9\r\n nesterov: False\r\n weight_decay: 5e-05\r\n)\r\n/search/speech/yh/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. 
In future, it will be treated as `np.float64 == np.dtype(float).type`.\r\n from ._conv import register_converters as _register_converters\r\nTrain Epoch #1: 0%| | 0/5005 [00:00<?, ?it/s]SGD (\r\nParameter Group 0\r\n dampening: 0\r\n lr: 0.1\r\n momentum: 0.9\r\n nesterov: False\r\n weight_decay: 5e-05\r\n)\r\n\r\n\r\n\r\n\r\nAs we can see above, only one gpu's lr is ` lr: 2.0796608879436464e-11`, but others is still 0.1 which is the init lr.\r\n\r\nbut in your code , ` # Horovod: broadcast parameters & optimizer state.\r\n hvd.broadcast_parameters(model.state_dict(), root_rank=0)\r\n hvd.broadcast_optimizer_state(optimizer, root_rank=0)` which says : Horovod: broadcast parameters & optimizer state. So maybe this is a very serious bug???\r\n\r\nPlease confirm it as soon as possible. Thanks a lot! :)\n", "before_files": [{"content": "# Copyright 2018 Uber Technologies, Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom horovod.common import check_extension\n\ntry:\n check_extension('horovod.torch', 'HOROVOD_WITH_PYTORCH',\n __file__, 'mpi_lib_v2')\nexcept:\n check_extension('horovod.torch', 'HOROVOD_WITH_PYTORCH',\n __file__, 'mpi_lib', '_mpi_lib')\n\nfrom horovod.torch.compression import Compression\nfrom horovod.torch.mpi_ops import allreduce, allreduce_async, allreduce_, allreduce_async_\nfrom horovod.torch.mpi_ops import allgather, allgather_async\nfrom horovod.torch.mpi_ops import broadcast, broadcast_async, broadcast_, broadcast_async_\nfrom horovod.torch.mpi_ops import poll, synchronize\nfrom horovod.torch.mpi_ops import init, shutdown\nfrom horovod.torch.mpi_ops import size, local_size, rank, local_rank\nfrom horovod.torch.mpi_ops import mpi_threads_supported\n\nimport torch\nimport collections\n\n\nclass _DistributedOptimizer(torch.optim.Optimizer):\n def __init__(self, params, named_parameters, compression):\n super(self.__class__, self).__init__(params)\n self._compression = compression\n\n if named_parameters is not None:\n named_parameters = list(named_parameters)\n else:\n named_parameters = []\n\n # make sure that named_parameters are tuples\n if any([not isinstance(p, tuple) for p in named_parameters]):\n raise ValueError('named_parameters should be a sequence of '\n 'tuples (name, parameter), usually produced by '\n 'model.named_parameters().')\n\n self._parameter_names = {v: k for k, v\n in sorted(named_parameters)}\n self._handles = {}\n self._grad_accs = []\n\n if size() > 1:\n self._register_hooks()\n\n def _register_hooks(self):\n for param_group in self.param_groups:\n for p in param_group['params']:\n if p.requires_grad:\n p_tmp = p.expand_as(p)\n grad_acc = p_tmp.grad_fn.next_functions[0][0]\n grad_acc.register_hook(self._make_hook(p))\n self._grad_accs.append(grad_acc)\n\n def _make_hook(self, p):\n def hook(*ignore):\n assert p not in 
self._handles\n assert not p.grad.requires_grad\n name = self._parameter_names.get(p)\n\n tensor = p.grad.data\n tensor_compressed, ctx = self._compression.compress(tensor)\n\n handle = allreduce_async_(tensor_compressed, average=True, name=name)\n self._handles[p] = (handle, ctx)\n return hook\n\n def synchronize(self):\n for p, value in self._handles.items():\n handle, ctx = value\n output = synchronize(handle)\n p.grad.data.set_(self._compression.decompress(output, ctx))\n self._handles.clear()\n\n def step(self, closure=None):\n self.synchronize()\n return super(self.__class__, self).step(closure)\n\n\ndef DistributedOptimizer(optimizer, named_parameters=None, compression=Compression.none):\n \"\"\"\n An optimizer that wraps another torch.optim.Optimizer, using an allreduce to\n average gradient values before applying gradients to model weights.\n\n Allreduce operations are executed after each gradient is computed by `loss.backward()`\n in parallel with each other. The `step()` method ensures that all allreduce operations are\n finished before applying gradients to the model.\n\n DistributedOptimizer exposes the `synchronize()` method, which forces allreduce operations\n to finish before continuing the execution. It's useful in conjunction with gradient\n clipping, or other operations that modify gradients in place before `step()` is executed.\n\n Example of gradient clipping:\n ```\n output = model(data)\n loss = F.nll_loss(output, target)\n loss.backward()\n optimizer.synchronize()\n torch.nn.utils.clip_grad_norm(model.parameters(), args.clip)\n optimizer.step()\n ```\n\n Arguments:\n optimizer: Optimizer to use for computing gradients and applying updates.\n named_parameters: A mapping between parameter names and values. Used for naming of\n allreduce operations. Typically just `model.named_parameters()`.\n compression: Compression algorithm used during allreduce to reduce the amount\n of data sent during the each parameter update step. 
Defaults to\n not using compression.\n \"\"\"\n # We dynamically create a new class that inherits from the optimizer that was passed in.\n # The goal is to override the `step()` method with an allreduce implementation.\n cls = type(optimizer.__class__.__name__, (optimizer.__class__,),\n dict(_DistributedOptimizer.__dict__))\n return cls(optimizer.param_groups, named_parameters, compression)\n\n\ndef broadcast_parameters(params, root_rank):\n \"\"\"\n Broadcasts the parameters from root rank to all other processes.\n Typical usage is to broadcast the `model.state_dict()`,\n `model.named_parameters()`, or `model.parameters()`.\n\n Arguments:\n params: One of the following:\n - list of parameters to broadcast\n - dict of parameters to broadcast\n root_rank: The rank of the process from which parameters will be\n broadcasted to all other processes.\n \"\"\"\n if isinstance(params, dict):\n params = sorted(params.items())\n elif isinstance(params, list):\n # support both named_parameters() and regular parameters()\n params = [p if isinstance(p, tuple) else (None, p) for p in params]\n else:\n raise ValueError('invalid params of type: %s' % type(params))\n\n # Run asynchronous broadcasts.\n handles = []\n for name, p in params:\n handle = broadcast_async_(p, root_rank, name)\n handles.append(handle)\n\n # Wait for completion.\n for handle in handles:\n synchronize(handle)\n\n\ndef broadcast_optimizer_state(optimizer, root_rank):\n \"\"\"\n Broadcasts an optimizer state from root rank to all other processes.\n\n Arguments:\n optimizer: An optimizer.\n root_rank: The rank of the process from which the optimizer will be\n broadcasted to all other processes.\n \"\"\"\n if isinstance(optimizer, torch.optim.LBFGS):\n # TODO(travis): L-BFGS cannot be easily supported without serializing\n # the entire state_dict, as its structure is deeply nested and contains\n # None type parameter values\n raise ValueError('cannot broadcast torch.optim.LBFGS state')\n\n state_dict = optimizer.state_dict()\n\n # Newly created optimizers will not have their state initialized, so\n # do that initialization here\n if len(state_dict['state']) == 0:\n for group in optimizer.param_groups:\n for p in group['params']:\n p.grad = p.data.new(p.size()).zero_()\n optimizer.step()\n state_dict = optimizer.state_dict()\n\n # If the state_dict is still empty after initialization, then\n # the optimizer is stateless, and there is nothing to broadcast.\n # Furthermore, attempting to access the state dict would result in\n # an error.\n if len(state_dict['state']) == 0:\n return\n\n params = []\n callbacks = {}\n occurrences = collections.defaultdict(int)\n\n # Some optimizer parameters may be represented as scalars instead of\n # tensors. 
In such cases, we need to wrap the scalar in a tensor, then\n # broadcast, then update the appropriate value in the state_dict with the\n # new unwrapped scalar value via a callback.\n def _create_callback(pid, name, t, p):\n def _from_tensor():\n state_dict['state'][pid][name] = t(p.numpy()[0])\n return _from_tensor\n\n # Groups are unordered, but their params will be distinct\n for group in state_dict['param_groups']:\n # The params list here is ordered by the layers in the model\n for pid in group['params']:\n param_state = state_dict['state'][pid]\n for name, p in param_state.items():\n # Some parameter names may appear more than once, in which\n # case we ensure they have a unique identifier defined by\n # their order\n occurrences[name] += 1\n key = '%s.%d' % (str(name), occurrences[name])\n\n if not torch.is_tensor(p):\n # Wrap the scalar in a FloatTensor, and remember its type\n # so we can cast it back after unwrapping\n t = type(p)\n p = torch.Tensor([p])\n callbacks[key] = _create_callback(pid, name, t, p)\n\n params.append((key, p))\n\n # Synchronized broadcast of all parameters\n broadcast_parameters(params, root_rank)\n\n # Post-broadcast clenaup for non-tensor parameters\n for key, p in params:\n if key in callbacks:\n callbacks[key]()\n", "path": "horovod/torch/__init__.py"}], "after_files": [{"content": "# Copyright 2018 Uber Technologies, Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom horovod.common import check_extension\n\ntry:\n check_extension('horovod.torch', 'HOROVOD_WITH_PYTORCH',\n __file__, 'mpi_lib_v2')\nexcept:\n check_extension('horovod.torch', 'HOROVOD_WITH_PYTORCH',\n __file__, 'mpi_lib', '_mpi_lib')\n\nfrom horovod.torch.compression import Compression\nfrom horovod.torch.mpi_ops import allreduce, allreduce_async, allreduce_, allreduce_async_\nfrom horovod.torch.mpi_ops import allgather, allgather_async\nfrom horovod.torch.mpi_ops import broadcast, broadcast_async, broadcast_, broadcast_async_\nfrom horovod.torch.mpi_ops import poll, synchronize\nfrom horovod.torch.mpi_ops import init, shutdown\nfrom horovod.torch.mpi_ops import size, local_size, rank, local_rank\nfrom horovod.torch.mpi_ops import mpi_threads_supported\n\nimport torch\nimport collections\n\n\nclass _DistributedOptimizer(torch.optim.Optimizer):\n def __init__(self, params, named_parameters, compression):\n super(self.__class__, self).__init__(params)\n self._compression = compression\n\n if named_parameters is not None:\n named_parameters = list(named_parameters)\n else:\n named_parameters = []\n\n # make sure that named_parameters are tuples\n if any([not isinstance(p, tuple) for p in named_parameters]):\n raise ValueError('named_parameters should be a sequence of '\n 'tuples (name, parameter), usually produced by '\n 'model.named_parameters().')\n\n 
self._parameter_names = {v: k for k, v\n in sorted(named_parameters)}\n self._handles = {}\n self._grad_accs = []\n\n if size() > 1:\n self._register_hooks()\n\n def _register_hooks(self):\n for param_group in self.param_groups:\n for p in param_group['params']:\n if p.requires_grad:\n p_tmp = p.expand_as(p)\n grad_acc = p_tmp.grad_fn.next_functions[0][0]\n grad_acc.register_hook(self._make_hook(p))\n self._grad_accs.append(grad_acc)\n\n def _make_hook(self, p):\n def hook(*ignore):\n assert p not in self._handles\n assert not p.grad.requires_grad\n name = self._parameter_names.get(p)\n\n tensor = p.grad.data\n tensor_compressed, ctx = self._compression.compress(tensor)\n\n handle = allreduce_async_(tensor_compressed, average=True, name=name)\n self._handles[p] = (handle, ctx)\n return hook\n\n def synchronize(self):\n for p, value in self._handles.items():\n handle, ctx = value\n output = synchronize(handle)\n p.grad.data.set_(self._compression.decompress(output, ctx))\n self._handles.clear()\n\n def step(self, closure=None):\n self.synchronize()\n return super(self.__class__, self).step(closure)\n\n\ndef DistributedOptimizer(optimizer, named_parameters=None, compression=Compression.none):\n \"\"\"\n An optimizer that wraps another torch.optim.Optimizer, using an allreduce to\n average gradient values before applying gradients to model weights.\n\n Allreduce operations are executed after each gradient is computed by `loss.backward()`\n in parallel with each other. The `step()` method ensures that all allreduce operations are\n finished before applying gradients to the model.\n\n DistributedOptimizer exposes the `synchronize()` method, which forces allreduce operations\n to finish before continuing the execution. It's useful in conjunction with gradient\n clipping, or other operations that modify gradients in place before `step()` is executed.\n\n Example of gradient clipping:\n ```\n output = model(data)\n loss = F.nll_loss(output, target)\n loss.backward()\n optimizer.synchronize()\n torch.nn.utils.clip_grad_norm(model.parameters(), args.clip)\n optimizer.step()\n ```\n\n Arguments:\n optimizer: Optimizer to use for computing gradients and applying updates.\n named_parameters: A mapping between parameter names and values. Used for naming of\n allreduce operations. Typically just `model.named_parameters()`.\n compression: Compression algorithm used during allreduce to reduce the amount\n of data sent during the each parameter update step. 
Defaults to\n not using compression.\n \"\"\"\n # We dynamically create a new class that inherits from the optimizer that was passed in.\n # The goal is to override the `step()` method with an allreduce implementation.\n cls = type(optimizer.__class__.__name__, (optimizer.__class__,),\n dict(_DistributedOptimizer.__dict__))\n return cls(optimizer.param_groups, named_parameters, compression)\n\n\ndef broadcast_parameters(params, root_rank):\n \"\"\"\n Broadcasts the parameters from root rank to all other processes.\n Typical usage is to broadcast the `model.state_dict()`,\n `model.named_parameters()`, or `model.parameters()`.\n\n Arguments:\n params: One of the following:\n - list of parameters to broadcast\n - dict of parameters to broadcast\n root_rank: The rank of the process from which parameters will be\n broadcasted to all other processes.\n \"\"\"\n if isinstance(params, dict):\n params = sorted(params.items())\n elif isinstance(params, list):\n # support both named_parameters() and regular parameters()\n params = [p if isinstance(p, tuple) else (None, p) for p in params]\n else:\n raise ValueError('invalid params of type: %s' % type(params))\n\n # Run asynchronous broadcasts.\n handles = []\n for name, p in params:\n handle = broadcast_async_(p, root_rank, name)\n handles.append(handle)\n\n # Wait for completion.\n for handle in handles:\n synchronize(handle)\n\n\ndef broadcast_optimizer_state(optimizer, root_rank):\n \"\"\"\n Broadcasts an optimizer state from root rank to all other processes.\n\n Arguments:\n optimizer: An optimizer.\n root_rank: The rank of the process from which the optimizer will be\n broadcasted to all other processes.\n \"\"\"\n if isinstance(optimizer, torch.optim.LBFGS):\n # TODO(travis): L-BFGS cannot be easily supported without serializing\n # the entire state_dict, as its structure is deeply nested and contains\n # None type parameter values\n raise ValueError('cannot broadcast torch.optim.LBFGS state')\n\n state_dict = optimizer.state_dict()\n\n # Newly created optimizers will not have their state initialized, so\n # do that initialization here\n if len(state_dict['state']) == 0:\n for group in optimizer.param_groups:\n for p in group['params']:\n p.grad = p.data.new(p.size()).zero_()\n optimizer.step()\n state_dict = optimizer.state_dict()\n\n # If the state_dict is still empty after initialization, then\n # the optimizer is stateless, and there is nothing to broadcast.\n # Furthermore, attempting to access the state dict would result in\n # an error.\n if len(state_dict['state']) == 0:\n return\n\n params = []\n callbacks = {}\n occurrences = collections.defaultdict(int)\n\n # Some optimizer parameters may be represented as scalars instead of\n # tensors. 
In such cases, we need to wrap the scalar in a tensor, then\n # broadcast, then update the appropriate value in the state_dict with the\n # new unwrapped scalar value via a callback.\n def _create_callback(pid, name, t, p):\n def _from_tensor():\n state_dict['state'][pid][name] = t(p.numpy()[0])\n return _from_tensor\n\n def _create_option_callback(index, option_key, option_tensor, dtype):\n def _from_tensor():\n optimizer.param_groups[index][option_key] = dtype(option_tensor.numpy()[0])\n return _from_tensor\n\n # Param groups are an ordered list, normally there is only one per model,\n # but users can add additional param groups for example to train\n # previously frozen layers\n for index, group in enumerate(state_dict['param_groups']):\n # Broadcast options like learning rate\n for option_key, option_value in group.items():\n if option_key == 'params':\n continue\n\n # Options like the learning rate are scalar, and need to be wrapped in tensors\n key = '%s.%d' % (option_key, index)\n dtype = type(option_value)\n option_tensor = torch.Tensor([option_value])\n callbacks[key] = _create_option_callback(index, option_key, option_tensor, dtype)\n params.append((key, option_tensor))\n\n # The params list here is ordered by the layers in the model\n for pid in group['params']:\n param_state = state_dict['state'][pid]\n for name, p in param_state.items():\n # Some parameter names may appear more than once, in which\n # case we ensure they have a unique identifier defined by\n # their order\n occurrences[name] += 1\n key = '%s.%d' % (str(name), occurrences[name])\n\n if not torch.is_tensor(p):\n # Wrap the scalar in a FloatTensor, and remember its type\n # so we can cast it back after unwrapping\n t = type(p)\n p = torch.Tensor([p])\n callbacks[key] = _create_callback(pid, name, t, p)\n\n params.append((key, p))\n\n # Synchronized broadcast of all parameters\n broadcast_parameters(params, root_rank)\n\n # Post-broadcast clenaup for non-tensor parameters\n for key, p in params:\n if key in callbacks:\n callbacks[key]()\n", "path": "horovod/torch/__init__.py"}]} | 3,734 | 373 |
gh_patches_debug_7031 | rasdani/github-patches | git_diff | facebookresearch__hydra-1961 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[CI] hydra_nevergrad_sweeper tests are failing on main branch.
example failure https://app.circleci.com/pipelines/github/facebookresearch/hydra/11235/workflows/273a5296-bebf-4808-8e68-14b9889b63a5/jobs/102829
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/hydra_nevergrad_sweeper/setup.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 # type: ignore
3 from pathlib import Path
4
5 from read_version import read_version
6 from setuptools import find_namespace_packages, setup
7
8 setup(
9 name="hydra-nevergrad-sweeper",
10 version=read_version("hydra_plugins/hydra_nevergrad_sweeper", "__init__.py"),
11 author="Jeremy Rapin, Omry Yadan, Jieru Hu",
12 author_email="[email protected], [email protected], [email protected]",
13 description="Hydra Nevergrad Sweeper plugin",
14 long_description=(Path(__file__).parent / "README.md").read_text(),
15 long_description_content_type="text/markdown",
16 url="https://github.com/facebookresearch/hydra/",
17 packages=find_namespace_packages(include=["hydra_plugins.*"]),
18 classifiers=[
19 "License :: OSI Approved :: MIT License",
20 "Programming Language :: Python :: 3.6",
21 "Programming Language :: Python :: 3.7",
22 "Programming Language :: Python :: 3.8",
23 "Programming Language :: Python :: 3.9",
24 "Operating System :: OS Independent",
25 "Development Status :: 4 - Beta",
26 ],
27 install_requires=[
28 "hydra-core>=1.1.0.dev7",
29 "nevergrad>=0.4.3.post2,<0.4.3.post7", # https://github.com/facebookresearch/hydra/issues/1768
30 "cma==3.0.3", # https://github.com/facebookresearch/hydra/issues/1684
31 "numpy<1.20.0", # remove once nevergrad is upgraded to support numpy 1.20
32 ],
33 include_package_data=True,
34 )
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugins/hydra_nevergrad_sweeper/setup.py b/plugins/hydra_nevergrad_sweeper/setup.py
--- a/plugins/hydra_nevergrad_sweeper/setup.py
+++ b/plugins/hydra_nevergrad_sweeper/setup.py
@@ -26,7 +26,7 @@
],
install_requires=[
"hydra-core>=1.1.0.dev7",
- "nevergrad>=0.4.3.post2,<0.4.3.post7", # https://github.com/facebookresearch/hydra/issues/1768
+ "nevergrad>=0.4.3.post9",
"cma==3.0.3", # https://github.com/facebookresearch/hydra/issues/1684
"numpy<1.20.0", # remove once nevergrad is upgraded to support numpy 1.20
],
| {"golden_diff": "diff --git a/plugins/hydra_nevergrad_sweeper/setup.py b/plugins/hydra_nevergrad_sweeper/setup.py\n--- a/plugins/hydra_nevergrad_sweeper/setup.py\n+++ b/plugins/hydra_nevergrad_sweeper/setup.py\n@@ -26,7 +26,7 @@\n ],\n install_requires=[\n \"hydra-core>=1.1.0.dev7\",\n- \"nevergrad>=0.4.3.post2,<0.4.3.post7\", # https://github.com/facebookresearch/hydra/issues/1768\n+ \"nevergrad>=0.4.3.post9\",\n \"cma==3.0.3\", # https://github.com/facebookresearch/hydra/issues/1684\n \"numpy<1.20.0\", # remove once nevergrad is upgraded to support numpy 1.20\n ],\n", "issue": "[CI] hydra_nevergrad_sweeper tests are failing on main branch.\nexample failure https://app.circleci.com/pipelines/github/facebookresearch/hydra/11235/workflows/273a5296-bebf-4808-8e68-14b9889b63a5/jobs/102829\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# type: ignore\nfrom pathlib import Path\n\nfrom read_version import read_version\nfrom setuptools import find_namespace_packages, setup\n\nsetup(\n name=\"hydra-nevergrad-sweeper\",\n version=read_version(\"hydra_plugins/hydra_nevergrad_sweeper\", \"__init__.py\"),\n author=\"Jeremy Rapin, Omry Yadan, Jieru Hu\",\n author_email=\"[email protected], [email protected], [email protected]\",\n description=\"Hydra Nevergrad Sweeper plugin\",\n long_description=(Path(__file__).parent / \"README.md\").read_text(),\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/facebookresearch/hydra/\",\n packages=find_namespace_packages(include=[\"hydra_plugins.*\"]),\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 4 - Beta\",\n ],\n install_requires=[\n \"hydra-core>=1.1.0.dev7\",\n \"nevergrad>=0.4.3.post2,<0.4.3.post7\", # https://github.com/facebookresearch/hydra/issues/1768\n \"cma==3.0.3\", # https://github.com/facebookresearch/hydra/issues/1684\n \"numpy<1.20.0\", # remove once nevergrad is upgraded to support numpy 1.20\n ],\n include_package_data=True,\n)\n", "path": "plugins/hydra_nevergrad_sweeper/setup.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\n# type: ignore\nfrom pathlib import Path\n\nfrom read_version import read_version\nfrom setuptools import find_namespace_packages, setup\n\nsetup(\n name=\"hydra-nevergrad-sweeper\",\n version=read_version(\"hydra_plugins/hydra_nevergrad_sweeper\", \"__init__.py\"),\n author=\"Jeremy Rapin, Omry Yadan, Jieru Hu\",\n author_email=\"[email protected], [email protected], [email protected]\",\n description=\"Hydra Nevergrad Sweeper plugin\",\n long_description=(Path(__file__).parent / \"README.md\").read_text(),\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/facebookresearch/hydra/\",\n packages=find_namespace_packages(include=[\"hydra_plugins.*\"]),\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 4 - Beta\",\n ],\n install_requires=[\n \"hydra-core>=1.1.0.dev7\",\n \"nevergrad>=0.4.3.post9\",\n \"cma==3.0.3\", # https://github.com/facebookresearch/hydra/issues/1684\n \"numpy<1.20.0\", # remove once nevergrad is upgraded to support numpy 1.20\n ],\n include_package_data=True,\n)\n", "path": "plugins/hydra_nevergrad_sweeper/setup.py"}]} | 803 | 205 |
gh_patches_debug_6394 | rasdani/github-patches | git_diff | pantsbuild__pants-20657 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Docker: base image build arg detection does not support parametrized targets
**Describe the bug**
If the base image is parametrized, the dependency inference does not pick it up.
**Pants version**
`2.19.0`
**OS**
Any.
**Additional info**
[Reported](https://chat.pantsbuild.org/t/16633559/i-have-a-structure-where-i-have-a-set-of-containers-that-are#0e224a89-4839-45a4-91c5-bd9c8fa88c27) by @rbuckland
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/pants/backend/docker/subsystems/dockerfile_wrapper_script.py`
Content:
```
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import json
7 import re
8 import sys
9 from dataclasses import asdict, dataclass
10 from itertools import chain
11 from typing import Iterator
12
13 #
14 # Note: This file is used as a pex entry point in the execution sandbox.
15 #
16
17
18 @dataclass(frozen=True)
19 class ParsedDockerfileInfo:
20 """Keep fields in sync with `dockerfile_parser.py:DockerfileInfo`."""
21
22 source: str
23 build_args: tuple[str, ...] # "ARG_NAME=VALUE", ...
24 copy_source_paths: tuple[str, ...]
25 from_image_build_args: tuple[str, ...] # "ARG_NAME=UPSTREAM_TARGET_ADDRESS", ...
26 version_tags: tuple[str, ...] # "STAGE TAG", ...
27
28
29 _address_regexp = re.compile(
30 r"""
31 (?://)?[^:# ]*:[^:#!@?/\= ]+(?:\#[^:#!@?= ]+)?$
32 """,
33 re.VERBOSE,
34 )
35
36
37 def valid_address(value: str) -> bool:
38 """Checks if `value` may pass as an address."""
39 return bool(re.match(_address_regexp, value))
40
41
42 _image_ref_regexp = re.compile(
43 r"""
44 ^
45 # Optional registry.
46 ((?P<registry>[^/:_ ]+:?[^/:_ ]*)/)?
47 # Repository.
48 (?P<repository>[^:@ \t\n\r\f\v]+)
49 # Optionally with `:tag`.
50 (:(?P<tag>[^@ ]+))?
51 # Optionally with `@digest`.
52 (@(?P<digest>\S+))?
53 $
54 """,
55 re.VERBOSE,
56 )
57
58
59 def main(*dockerfile_names: str) -> Iterator[ParsedDockerfileInfo]:
60 # import here to allow the rest of the file to be tested without a dependency on dockerfile
61 from dockerfile import Command, parse_file, parse_string # pants: no-infer-dep
62
63 @dataclass(frozen=True)
64 class ParsedDockerfile:
65 filename: str
66 commands: tuple[Command, ...]
67
68 @classmethod
69 def from_file(cls, dockerfile: str) -> ParsedDockerfile:
70 return cls(dockerfile, parse_file(dockerfile))
71
72 @classmethod
73 def from_string(cls, dockerfile_contents: str) -> ParsedDockerfile:
74 return cls("<text>", parse_string(dockerfile_contents))
75
76 def get_info(self) -> ParsedDockerfileInfo:
77 return ParsedDockerfileInfo(
78 source=self.filename,
79 build_args=self.build_args(),
80 copy_source_paths=self.copy_source_paths(),
81 from_image_build_args=self.from_image_build_args(),
82 version_tags=self.baseimage_tags(),
83 )
84
85 def get_all(self, command_name: str) -> Iterator[Command]:
86 for command in self.commands:
87 if command.cmd.upper() == command_name:
88 yield command
89
90 def from_image_build_args(self) -> tuple[str, ...]:
91 build_args = {
92 key: value.strip("\"'")
93 for key, has_value, value in [
94 build_arg.partition("=") for build_arg in self.build_args()
95 ]
96 if has_value and valid_address(value)
97 }
98
99 return tuple(
100 f"{image_build_arg}={build_args[image_build_arg]}"
101 for image_build_arg in self.from_image_build_arg_names()
102 if image_build_arg in build_args
103 )
104
105 @staticmethod
106 def _get_image_ref_build_arg(image_ref: str) -> str | None:
107 build_arg = re.match(r"\$\{?([a-zA-Z0-9_]+)\}?$", image_ref)
108 return build_arg.group(1) if build_arg else None
109
110 def from_image_build_arg_names(self) -> Iterator[str]:
111 """Return build args used as the image ref in `FROM` instructions.
112
113 Example:
114
115 ARG BASE_IMAGE
116 FROM ${BASE_IMAGE}
117 """
118 for cmd in self.get_all("FROM"):
119 build_arg = self._get_image_ref_build_arg(cmd.value[0])
120 if build_arg:
121 yield build_arg
122
123 def from_baseimages(self) -> Iterator[tuple[str, tuple[str, ...]]]:
124 for idx, cmd in enumerate(self.get_all("FROM")):
125 name_parts = cmd.value[0].split("/")
126 if len(cmd.value) == 3 and cmd.value[1].upper() == "AS":
127 stage = cmd.value[2]
128 else:
129 stage = f"stage{idx}"
130 yield stage, name_parts
131
132 def baseimage_tags(self) -> tuple[str, ...]:
133 """Return all base image tags, prefix with the stage alias or index.
134
135 In case the base image is entirely made up of a build arg, use that with a `build-arg:`
136 prefix.
137
138 Example:
139
140 FROM base:1.0 AS build
141 ...
142 FROM interim
143 FROM $argname as dynamic
144 ...
145 FROM final as out
146
147 Gives:
148
149 build 1.0
150 stage1 latest
151 dynamic build-arg:argname
152 out latest
153 """
154
155 def _get_tag(image_ref: str) -> str | None:
156 """The image ref is in the form `registry/repo/name[/...][:tag][@digest]` and where
157 `digest` is `sha256:hex value`, or a build arg reference with $ARG."""
158 if image_ref.startswith("$"):
159 build_arg = self._get_image_ref_build_arg(image_ref)
160 if build_arg:
161 return f"build-arg:{build_arg}"
162 parsed = re.match(_image_ref_regexp, image_ref)
163 if not parsed:
164 return None
165 tag = parsed.group("tag")
166 if tag:
167 return tag
168 if not parsed.group("digest"):
169 return "latest"
170 return None
171
172 return tuple(
173 f"{stage} {tag}"
174 for stage, name_parts in self.from_baseimages()
175 for tag in [_get_tag(name_parts[-1])]
176 if tag
177 )
178
179 def build_args(self) -> tuple[str, ...]:
180 """Return all defined build args, including any default values."""
181 return tuple(cmd.original[4:].strip() for cmd in self.get_all("ARG"))
182
183 def copy_source_paths(self) -> tuple[str, ...]:
184 """Return all files referenced from the build context using COPY instruction."""
185 # Exclude COPY --from instructions, as they don't refer to files from the build context.
186 return tuple(
187 chain(
188 *(
189 cmd.value[:-1]
190 for cmd in self.get_all("COPY")
191 if all("--from" not in flag for flag in cmd.flags)
192 )
193 )
194 )
195
196 for parsed in map(ParsedDockerfile.from_file, dockerfile_names):
197 yield parsed.get_info()
198
199
200 if __name__ == "__main__":
201 if len(sys.argv) > 1:
202 print(json.dumps([asdict(info) for info in main(*sys.argv[1:])]))
203 else:
204 print(f"Not enough arguments.\nUsage: {sys.argv[0]} [DOCKERFILE ...]")
205 sys.exit(1)
206
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/python/pants/backend/docker/subsystems/dockerfile_wrapper_script.py b/src/python/pants/backend/docker/subsystems/dockerfile_wrapper_script.py
--- a/src/python/pants/backend/docker/subsystems/dockerfile_wrapper_script.py
+++ b/src/python/pants/backend/docker/subsystems/dockerfile_wrapper_script.py
@@ -28,7 +28,22 @@
_address_regexp = re.compile(
r"""
- (?://)?[^:# ]*:[^:#!@?/\= ]+(?:\#[^:#!@?= ]+)?$
+ # Optionally root:ed.
+ (?://)?
+ # Optional path.
+ [^:# ]*
+ # Optional target name.
+ (?::[^:#!@?/\= ]+)?
+ # Optional generated name.
+ (?:\#[^:#!@?= ]+)?
+ # Optional parametrizations.
+ (?:@
+ # key=value
+ [^=: ]+=[^,: ]*
+ # Optional additional `,key=value`s
+ (?:,[^=: ]+=[^,: ]*)*
+ )?
+ $
""",
re.VERBOSE,
)
| {"golden_diff": "diff --git a/src/python/pants/backend/docker/subsystems/dockerfile_wrapper_script.py b/src/python/pants/backend/docker/subsystems/dockerfile_wrapper_script.py\n--- a/src/python/pants/backend/docker/subsystems/dockerfile_wrapper_script.py\n+++ b/src/python/pants/backend/docker/subsystems/dockerfile_wrapper_script.py\n@@ -28,7 +28,22 @@\n \n _address_regexp = re.compile(\n r\"\"\"\n- (?://)?[^:# ]*:[^:#!@?/\\= ]+(?:\\#[^:#!@?= ]+)?$\n+ # Optionally root:ed.\n+ (?://)?\n+ # Optional path.\n+ [^:# ]*\n+ # Optional target name.\n+ (?::[^:#!@?/\\= ]+)?\n+ # Optional generated name.\n+ (?:\\#[^:#!@?= ]+)?\n+ # Optional parametrizations.\n+ (?:@\n+ # key=value\n+ [^=: ]+=[^,: ]*\n+ # Optional additional `,key=value`s\n+ (?:,[^=: ]+=[^,: ]*)*\n+ )?\n+ $\n \"\"\",\n re.VERBOSE,\n )\n", "issue": "Docker: base image build arg detection does not support parametrized targets\n**Describe the bug**\r\nIf the base image is parametrized, the dependency inference does not pick it up.\r\n\r\n**Pants version**\r\n`2.19.0`\r\n\r\n**OS**\r\nAny.\r\n\r\n**Additional info**\r\n[Reported](https://chat.pantsbuild.org/t/16633559/i-have-a-structure-where-i-have-a-set-of-containers-that-are#0e224a89-4839-45a4-91c5-bd9c8fa88c27) by @rbuckland \r\n\n", "before_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nimport json\nimport re\nimport sys\nfrom dataclasses import asdict, dataclass\nfrom itertools import chain\nfrom typing import Iterator\n\n#\n# Note: This file is used as a pex entry point in the execution sandbox.\n#\n\n\n@dataclass(frozen=True)\nclass ParsedDockerfileInfo:\n \"\"\"Keep fields in sync with `dockerfile_parser.py:DockerfileInfo`.\"\"\"\n\n source: str\n build_args: tuple[str, ...] # \"ARG_NAME=VALUE\", ...\n copy_source_paths: tuple[str, ...]\n from_image_build_args: tuple[str, ...] # \"ARG_NAME=UPSTREAM_TARGET_ADDRESS\", ...\n version_tags: tuple[str, ...] 
# \"STAGE TAG\", ...\n\n\n_address_regexp = re.compile(\n r\"\"\"\n (?://)?[^:# ]*:[^:#!@?/\\= ]+(?:\\#[^:#!@?= ]+)?$\n \"\"\",\n re.VERBOSE,\n)\n\n\ndef valid_address(value: str) -> bool:\n \"\"\"Checks if `value` may pass as an address.\"\"\"\n return bool(re.match(_address_regexp, value))\n\n\n_image_ref_regexp = re.compile(\n r\"\"\"\n ^\n # Optional registry.\n ((?P<registry>[^/:_ ]+:?[^/:_ ]*)/)?\n # Repository.\n (?P<repository>[^:@ \\t\\n\\r\\f\\v]+)\n # Optionally with `:tag`.\n (:(?P<tag>[^@ ]+))?\n # Optionally with `@digest`.\n (@(?P<digest>\\S+))?\n $\n \"\"\",\n re.VERBOSE,\n)\n\n\ndef main(*dockerfile_names: str) -> Iterator[ParsedDockerfileInfo]:\n # import here to allow the rest of the file to be tested without a dependency on dockerfile\n from dockerfile import Command, parse_file, parse_string # pants: no-infer-dep\n\n @dataclass(frozen=True)\n class ParsedDockerfile:\n filename: str\n commands: tuple[Command, ...]\n\n @classmethod\n def from_file(cls, dockerfile: str) -> ParsedDockerfile:\n return cls(dockerfile, parse_file(dockerfile))\n\n @classmethod\n def from_string(cls, dockerfile_contents: str) -> ParsedDockerfile:\n return cls(\"<text>\", parse_string(dockerfile_contents))\n\n def get_info(self) -> ParsedDockerfileInfo:\n return ParsedDockerfileInfo(\n source=self.filename,\n build_args=self.build_args(),\n copy_source_paths=self.copy_source_paths(),\n from_image_build_args=self.from_image_build_args(),\n version_tags=self.baseimage_tags(),\n )\n\n def get_all(self, command_name: str) -> Iterator[Command]:\n for command in self.commands:\n if command.cmd.upper() == command_name:\n yield command\n\n def from_image_build_args(self) -> tuple[str, ...]:\n build_args = {\n key: value.strip(\"\\\"'\")\n for key, has_value, value in [\n build_arg.partition(\"=\") for build_arg in self.build_args()\n ]\n if has_value and valid_address(value)\n }\n\n return tuple(\n f\"{image_build_arg}={build_args[image_build_arg]}\"\n for image_build_arg in self.from_image_build_arg_names()\n if image_build_arg in build_args\n )\n\n @staticmethod\n def _get_image_ref_build_arg(image_ref: str) -> str | None:\n build_arg = re.match(r\"\\$\\{?([a-zA-Z0-9_]+)\\}?$\", image_ref)\n return build_arg.group(1) if build_arg else None\n\n def from_image_build_arg_names(self) -> Iterator[str]:\n \"\"\"Return build args used as the image ref in `FROM` instructions.\n\n Example:\n\n ARG BASE_IMAGE\n FROM ${BASE_IMAGE}\n \"\"\"\n for cmd in self.get_all(\"FROM\"):\n build_arg = self._get_image_ref_build_arg(cmd.value[0])\n if build_arg:\n yield build_arg\n\n def from_baseimages(self) -> Iterator[tuple[str, tuple[str, ...]]]:\n for idx, cmd in enumerate(self.get_all(\"FROM\")):\n name_parts = cmd.value[0].split(\"/\")\n if len(cmd.value) == 3 and cmd.value[1].upper() == \"AS\":\n stage = cmd.value[2]\n else:\n stage = f\"stage{idx}\"\n yield stage, name_parts\n\n def baseimage_tags(self) -> tuple[str, ...]:\n \"\"\"Return all base image tags, prefix with the stage alias or index.\n\n In case the base image is entirely made up of a build arg, use that with a `build-arg:`\n prefix.\n\n Example:\n\n FROM base:1.0 AS build\n ...\n FROM interim\n FROM $argname as dynamic\n ...\n FROM final as out\n\n Gives:\n\n build 1.0\n stage1 latest\n dynamic build-arg:argname\n out latest\n \"\"\"\n\n def _get_tag(image_ref: str) -> str | None:\n \"\"\"The image ref is in the form `registry/repo/name[/...][:tag][@digest]` and where\n `digest` is `sha256:hex value`, or a build arg reference with $ARG.\"\"\"\n if 
image_ref.startswith(\"$\"):\n build_arg = self._get_image_ref_build_arg(image_ref)\n if build_arg:\n return f\"build-arg:{build_arg}\"\n parsed = re.match(_image_ref_regexp, image_ref)\n if not parsed:\n return None\n tag = parsed.group(\"tag\")\n if tag:\n return tag\n if not parsed.group(\"digest\"):\n return \"latest\"\n return None\n\n return tuple(\n f\"{stage} {tag}\"\n for stage, name_parts in self.from_baseimages()\n for tag in [_get_tag(name_parts[-1])]\n if tag\n )\n\n def build_args(self) -> tuple[str, ...]:\n \"\"\"Return all defined build args, including any default values.\"\"\"\n return tuple(cmd.original[4:].strip() for cmd in self.get_all(\"ARG\"))\n\n def copy_source_paths(self) -> tuple[str, ...]:\n \"\"\"Return all files referenced from the build context using COPY instruction.\"\"\"\n # Exclude COPY --from instructions, as they don't refer to files from the build context.\n return tuple(\n chain(\n *(\n cmd.value[:-1]\n for cmd in self.get_all(\"COPY\")\n if all(\"--from\" not in flag for flag in cmd.flags)\n )\n )\n )\n\n for parsed in map(ParsedDockerfile.from_file, dockerfile_names):\n yield parsed.get_info()\n\n\nif __name__ == \"__main__\":\n if len(sys.argv) > 1:\n print(json.dumps([asdict(info) for info in main(*sys.argv[1:])]))\n else:\n print(f\"Not enough arguments.\\nUsage: {sys.argv[0]} [DOCKERFILE ...]\")\n sys.exit(1)\n", "path": "src/python/pants/backend/docker/subsystems/dockerfile_wrapper_script.py"}], "after_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nimport json\nimport re\nimport sys\nfrom dataclasses import asdict, dataclass\nfrom itertools import chain\nfrom typing import Iterator\n\n#\n# Note: This file is used as a pex entry point in the execution sandbox.\n#\n\n\n@dataclass(frozen=True)\nclass ParsedDockerfileInfo:\n \"\"\"Keep fields in sync with `dockerfile_parser.py:DockerfileInfo`.\"\"\"\n\n source: str\n build_args: tuple[str, ...] # \"ARG_NAME=VALUE\", ...\n copy_source_paths: tuple[str, ...]\n from_image_build_args: tuple[str, ...] # \"ARG_NAME=UPSTREAM_TARGET_ADDRESS\", ...\n version_tags: tuple[str, ...] 
# \"STAGE TAG\", ...\n\n\n_address_regexp = re.compile(\n r\"\"\"\n # Optionally root:ed.\n (?://)?\n # Optional path.\n [^:# ]*\n # Optional target name.\n (?::[^:#!@?/\\= ]+)?\n # Optional generated name.\n (?:\\#[^:#!@?= ]+)?\n # Optional parametrizations.\n (?:@\n # key=value\n [^=: ]+=[^,: ]*\n # Optional additional `,key=value`s\n (?:,[^=: ]+=[^,: ]*)*\n )?\n $\n \"\"\",\n re.VERBOSE,\n)\n\n\ndef valid_address(value: str) -> bool:\n \"\"\"Checks if `value` may pass as an address.\"\"\"\n return bool(re.match(_address_regexp, value))\n\n\n_image_ref_regexp = re.compile(\n r\"\"\"\n ^\n # Optional registry.\n ((?P<registry>[^/:_ ]+:?[^/:_ ]*)/)?\n # Repository.\n (?P<repository>[^:@ \\t\\n\\r\\f\\v]+)\n # Optionally with `:tag`.\n (:(?P<tag>[^@ ]+))?\n # Optionally with `@digest`.\n (@(?P<digest>\\S+))?\n $\n \"\"\",\n re.VERBOSE,\n)\n\n\ndef main(*dockerfile_names: str) -> Iterator[ParsedDockerfileInfo]:\n # import here to allow the rest of the file to be tested without a dependency on dockerfile\n from dockerfile import Command, parse_file, parse_string # pants: no-infer-dep\n\n @dataclass(frozen=True)\n class ParsedDockerfile:\n filename: str\n commands: tuple[Command, ...]\n\n @classmethod\n def from_file(cls, dockerfile: str) -> ParsedDockerfile:\n return cls(dockerfile, parse_file(dockerfile))\n\n @classmethod\n def from_string(cls, dockerfile_contents: str) -> ParsedDockerfile:\n return cls(\"<text>\", parse_string(dockerfile_contents))\n\n def get_info(self) -> ParsedDockerfileInfo:\n return ParsedDockerfileInfo(\n source=self.filename,\n build_args=self.build_args(),\n copy_source_paths=self.copy_source_paths(),\n from_image_build_args=self.from_image_build_args(),\n version_tags=self.baseimage_tags(),\n )\n\n def get_all(self, command_name: str) -> Iterator[Command]:\n for command in self.commands:\n if command.cmd.upper() == command_name:\n yield command\n\n def from_image_build_args(self) -> tuple[str, ...]:\n build_args = {\n key: value.strip(\"\\\"'\")\n for key, has_value, value in [\n build_arg.partition(\"=\") for build_arg in self.build_args()\n ]\n if has_value and valid_address(value)\n }\n\n return tuple(\n f\"{image_build_arg}={build_args[image_build_arg]}\"\n for image_build_arg in self.from_image_build_arg_names()\n if image_build_arg in build_args\n )\n\n @staticmethod\n def _get_image_ref_build_arg(image_ref: str) -> str | None:\n build_arg = re.match(r\"\\$\\{?([a-zA-Z0-9_]+)\\}?$\", image_ref)\n return build_arg.group(1) if build_arg else None\n\n def from_image_build_arg_names(self) -> Iterator[str]:\n \"\"\"Return build args used as the image ref in `FROM` instructions.\n\n Example:\n\n ARG BASE_IMAGE\n FROM ${BASE_IMAGE}\n \"\"\"\n for cmd in self.get_all(\"FROM\"):\n build_arg = self._get_image_ref_build_arg(cmd.value[0])\n if build_arg:\n yield build_arg\n\n def from_baseimages(self) -> Iterator[tuple[str, tuple[str, ...]]]:\n for idx, cmd in enumerate(self.get_all(\"FROM\")):\n name_parts = cmd.value[0].split(\"/\")\n if len(cmd.value) == 3 and cmd.value[1].upper() == \"AS\":\n stage = cmd.value[2]\n else:\n stage = f\"stage{idx}\"\n yield stage, name_parts\n\n def baseimage_tags(self) -> tuple[str, ...]:\n \"\"\"Return all base image tags, prefix with the stage alias or index.\n\n In case the base image is entirely made up of a build arg, use that with a `build-arg:`\n prefix.\n\n Example:\n\n FROM base:1.0 AS build\n ...\n FROM interim\n FROM $argname as dynamic\n ...\n FROM final as out\n\n Gives:\n\n build 1.0\n stage1 latest\n dynamic 
build-arg:argname\n out latest\n \"\"\"\n\n def _get_tag(image_ref: str) -> str | None:\n \"\"\"The image ref is in the form `registry/repo/name[/...][:tag][@digest]` and where\n `digest` is `sha256:hex value`, or a build arg reference with $ARG.\"\"\"\n if image_ref.startswith(\"$\"):\n build_arg = self._get_image_ref_build_arg(image_ref)\n if build_arg:\n return f\"build-arg:{build_arg}\"\n parsed = re.match(_image_ref_regexp, image_ref)\n if not parsed:\n return None\n tag = parsed.group(\"tag\")\n if tag:\n return tag\n if not parsed.group(\"digest\"):\n return \"latest\"\n return None\n\n return tuple(\n f\"{stage} {tag}\"\n for stage, name_parts in self.from_baseimages()\n for tag in [_get_tag(name_parts[-1])]\n if tag\n )\n\n def build_args(self) -> tuple[str, ...]:\n \"\"\"Return all defined build args, including any default values.\"\"\"\n return tuple(cmd.original[4:].strip() for cmd in self.get_all(\"ARG\"))\n\n def copy_source_paths(self) -> tuple[str, ...]:\n \"\"\"Return all files referenced from the build context using COPY instruction.\"\"\"\n # Exclude COPY --from instructions, as they don't refer to files from the build context.\n return tuple(\n chain(\n *(\n cmd.value[:-1]\n for cmd in self.get_all(\"COPY\")\n if all(\"--from\" not in flag for flag in cmd.flags)\n )\n )\n )\n\n for parsed in map(ParsedDockerfile.from_file, dockerfile_names):\n yield parsed.get_info()\n\n\nif __name__ == \"__main__\":\n if len(sys.argv) > 1:\n print(json.dumps([asdict(info) for info in main(*sys.argv[1:])]))\n else:\n print(f\"Not enough arguments.\\nUsage: {sys.argv[0]} [DOCKERFILE ...]\")\n sys.exit(1)\n", "path": "src/python/pants/backend/docker/subsystems/dockerfile_wrapper_script.py"}]} | 2,502 | 260 |
gh_patches_debug_11560 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-4282 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Azure - Docs missing for VM Images
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/c7n_azure/c7n_azure/resources/image.py`
Content:
```
1 # Copyright 2018 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from c7n_azure.provider import resources
16 from c7n_azure.resources.arm import ArmResourceManager
17
18
19 @resources.register('image')
20 class Image(ArmResourceManager):
21 class resource_type(ArmResourceManager.resource_type):
22 service = 'azure.mgmt.compute'
23 client = 'ComputeManagementClient'
24 enum_spec = ('images', 'list', None)
25 default_report_fields = (
26 'name',
27 'location',
28 'resourceGroup',
29 )
30 resource_type = 'Microsoft.Compute/images'
31
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/c7n_azure/c7n_azure/resources/image.py b/tools/c7n_azure/c7n_azure/resources/image.py
--- a/tools/c7n_azure/c7n_azure/resources/image.py
+++ b/tools/c7n_azure/c7n_azure/resources/image.py
@@ -18,6 +18,23 @@
@resources.register('image')
class Image(ArmResourceManager):
+ """Virtual Machine Image
+
+ :example:
+ Returns all virtual machine images named my-test-vm-image
+
+ .. code-block:: yaml
+
+ policies:
+ - name: get-vm-image
+ resource: azure.image
+ filters:
+ - type: value
+ key: name
+ op: eq
+ value: my-test-vm-image
+
+ """
class resource_type(ArmResourceManager.resource_type):
service = 'azure.mgmt.compute'
client = 'ComputeManagementClient'
| {"golden_diff": "diff --git a/tools/c7n_azure/c7n_azure/resources/image.py b/tools/c7n_azure/c7n_azure/resources/image.py\n--- a/tools/c7n_azure/c7n_azure/resources/image.py\n+++ b/tools/c7n_azure/c7n_azure/resources/image.py\n@@ -18,6 +18,23 @@\n \n @resources.register('image')\n class Image(ArmResourceManager):\n+ \"\"\"Virtual Machine Image\n+\n+ :example:\n+ Returns all virtual machine images named my-test-vm-image\n+\n+ .. code-block:: yaml\n+\n+ policies:\n+ - name: get-vm-image\n+ resource: azure.image\n+ filters:\n+ - type: value\n+ key: name\n+ op: eq\n+ value: my-test-vm-image\n+\n+ \"\"\"\n class resource_type(ArmResourceManager.resource_type):\n service = 'azure.mgmt.compute'\n client = 'ComputeManagementClient'\n", "issue": "Azure - Docs missing for VM Images\n\n", "before_files": [{"content": "# Copyright 2018 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom c7n_azure.provider import resources\nfrom c7n_azure.resources.arm import ArmResourceManager\n\n\[email protected]('image')\nclass Image(ArmResourceManager):\n class resource_type(ArmResourceManager.resource_type):\n service = 'azure.mgmt.compute'\n client = 'ComputeManagementClient'\n enum_spec = ('images', 'list', None)\n default_report_fields = (\n 'name',\n 'location',\n 'resourceGroup',\n )\n resource_type = 'Microsoft.Compute/images'\n", "path": "tools/c7n_azure/c7n_azure/resources/image.py"}], "after_files": [{"content": "# Copyright 2018 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom c7n_azure.provider import resources\nfrom c7n_azure.resources.arm import ArmResourceManager\n\n\[email protected]('image')\nclass Image(ArmResourceManager):\n \"\"\"Virtual Machine Image\n\n :example:\n Returns all virtual machine images named my-test-vm-image\n\n .. code-block:: yaml\n\n policies:\n - name: get-vm-image\n resource: azure.image\n filters:\n - type: value\n key: name\n op: eq\n value: my-test-vm-image\n\n \"\"\"\n class resource_type(ArmResourceManager.resource_type):\n service = 'azure.mgmt.compute'\n client = 'ComputeManagementClient'\n enum_spec = ('images', 'list', None)\n default_report_fields = (\n 'name',\n 'location',\n 'resourceGroup',\n )\n resource_type = 'Microsoft.Compute/images'\n", "path": "tools/c7n_azure/c7n_azure/resources/image.py"}]} | 567 | 216 |
gh_patches_debug_22249 | rasdani/github-patches | git_diff | pypa__pipenv-5032 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CLI docs broken
### Issue description
[CLI docs](https://pipenv.pypa.io/en/latest/cli/)
[CLI docs source](https://pipenv.pypa.io/en/latest/_sources/cli.rst.txt)
### Expected result
CLI docs
### Actual result
Empty page
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 #
2 # pipenv documentation build configuration file, created by
3 # sphinx-quickstart on Mon Jan 30 13:28:36 2017.
4 #
5 # This file is execfile()d with the current directory set to its
6 # containing dir.
7 #
8 # Note that not all possible configuration values are present in this
9 # autogenerated file.
10 #
11 # All configuration values have a default; values that are commented out
12 # serve to show the default.
13
14 # If extensions (or modules to document with autodoc) are in another directory,
15 # add these directories to sys.path here. If the directory is relative to the
16 # documentation root, use os.path.abspath to make it absolute, like shown here.
17 #
18 import os
19
20 # Path hackery to get current version number.
21 here = os.path.abspath(os.path.dirname(__file__))
22
23 about = {}
24 with open(os.path.join(here, "..", "pipenv", "__version__.py")) as f:
25 exec(f.read(), about)
26
27 # -- General configuration ------------------------------------------------
28
29 # If your documentation needs a minimal Sphinx version, state it here.
30 #
31 # needs_sphinx = '1.0'
32
33 # Add any Sphinx extension module names here, as strings. They can be
34 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
35 # ones.
36 extensions = [
37 "sphinx.ext.autodoc",
38 "sphinx.ext.todo",
39 "sphinx.ext.coverage",
40 "sphinx.ext.viewcode",
41 "sphinx_click.ext",
42 ]
43
44 # Add any paths that contain templates here, relative to this directory.
45 templates_path = ["_templates"]
46
47 # The suffix(es) of source filenames.
48 # You can specify multiple suffix as a list of string:
49 #
50 # source_suffix = ['.rst', '.md']
51 source_suffix = ".rst"
52
53 # The master toctree document.
54 master_doc = "index"
55
56 # General information about the project.
57 project = "pipenv"
58 copyright = '2020. A project founded by <a href="http://kennethreitz.com/pages/open-projects.html">Kenneth Reitz</a>'
59 author = "Python Packaging Authority"
60
61 # The version info for the project you're documenting, acts as replacement for
62 # |version| and |release|, also used in various other places throughout the
63 # built documents.
64 #
65 # The short X.Y version.
66 version = about["__version__"]
67 # The full version, including alpha/beta/rc tags.
68 release = about["__version__"]
69
70 # The language for content autogenerated by Sphinx. Refer to documentation
71 # for a list of supported languages.
72 #
73 # This is also used if you do content translation via gettext catalogs.
74 # Usually you set "language" from the command line for these cases.
75 language = None
76
77 # List of patterns, relative to source directory, that match files and
78 # directories to ignore when looking for source files.
79 # This patterns also effect to html_static_path and html_extra_path
80 exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
81
82 # The name of the Pygments (syntax highlighting) style to use.
83 pygments_style = "sphinx"
84
85 # If true, `todo` and `todoList` produce output, else they produce nothing.
86 todo_include_todos = True
87
88 # -- Options for HTML output ----------------------------------------------
89
90 # The theme to use for HTML and HTML Help pages. See the documentation for
91 # a list of builtin themes.
92 #
93 html_theme = "alabaster"
94
95 # Theme options are theme-specific and customize the look and feel of a theme
96 # further. For a list of options available for each theme, see the
97 # documentation.
98 #
99 html_theme_options = {
100 "show_powered_by": False,
101 "github_user": "pypa",
102 "github_repo": "pipenv",
103 "github_banner": False,
104 "show_related": False,
105 }
106
107 html_sidebars = {
108 "index": ["sidebarintro.html", "sourcelink.html", "searchbox.html", "hacks.html"],
109 "**": [
110 "sidebarlogo.html",
111 "localtoc.html",
112 "relations.html",
113 "sourcelink.html",
114 "searchbox.html",
115 "hacks.html",
116 ],
117 }
118
119
120 # Add any paths that contain custom static files (such as style sheets) here,
121 # relative to this directory. They are copied after the builtin static files,
122 # so a file named "default.css" will overwrite the builtin "default.css".
123 html_static_path = ["_static"]
124
125
126 def setup(app):
127 app.add_stylesheet("custom.css")
128
129
130 # -- Options for HTMLHelp output ------------------------------------------
131
132 # Output file base name for HTML help builder.
133 htmlhelp_basename = "pipenvdoc"
134
135
136 # -- Options for LaTeX output ---------------------------------------------
137
138 latex_elements = {
139 # The paper size ('letterpaper' or 'a4paper').
140 #
141 # 'papersize': 'letterpaper',
142 # The font size ('10pt', '11pt' or '12pt').
143 #
144 # 'pointsize': '10pt',
145 # Additional stuff for the LaTeX preamble.
146 #
147 # 'preamble': '',
148 # Latex figure (float) alignment
149 #
150 # 'figure_align': 'htbp',
151 }
152
153 # Grouping the document tree into LaTeX files. List of tuples
154 # (source start file, target name, title,
155 # author, documentclass [howto, manual, or own class]).
156 latex_documents = [
157 (master_doc, "pipenv.tex", "pipenv Documentation", "Kenneth Reitz", "manual"),
158 ]
159
160
161 # -- Options for manual page output ---------------------------------------
162
163 # One entry per manual page. List of tuples
164 # (source start file, name, description, authors, manual section).
165 man_pages = [(master_doc, "pipenv", "pipenv Documentation", [author], 1)]
166
167
168 # -- Options for Texinfo output -------------------------------------------
169
170 # Grouping the document tree into Texinfo files. List of tuples
171 # (source start file, target name, title, author,
172 # dir menu entry, description, category)
173 texinfo_documents = [
174 (
175 master_doc,
176 "pipenv",
177 "pipenv Documentation",
178 author,
179 "pipenv",
180 "One line description of project.",
181 "Miscellaneous",
182 ),
183 ]
184
185
186 # -- Options for Epub output ----------------------------------------------
187
188 # Bibliographic Dublin Core info.
189 epub_title = project
190 epub_author = author
191 epub_publisher = author
192 epub_copyright = copyright
193
194 # The unique identifier of the text. This can be a ISBN number
195 # or the project homepage.
196 #
197 # epub_identifier = ''
198
199 # A unique identification for the text.
200 #
201 # epub_uid = ''
202
203 # A list of files that should not be packed into the epub file.
204 epub_exclude_files = ["search.html"]
205
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -24,6 +24,13 @@
with open(os.path.join(here, "..", "pipenv", "__version__.py")) as f:
exec(f.read(), about)
+# Hackery to get the CLI docs to generate
+import click
+
+import pipenv.vendor.click
+
+click.BaseCommand = pipenv.vendor.click.BaseCommand
+
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
@@ -38,7 +45,7 @@
"sphinx.ext.todo",
"sphinx.ext.coverage",
"sphinx.ext.viewcode",
- "sphinx_click.ext",
+ "sphinx_click",
]
# Add any paths that contain templates here, relative to this directory.
@@ -124,7 +131,7 @@
def setup(app):
- app.add_stylesheet("custom.css")
+ app.add_css_file("custom.css")
# -- Options for HTMLHelp output ------------------------------------------
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -24,6 +24,13 @@\n with open(os.path.join(here, \"..\", \"pipenv\", \"__version__.py\")) as f:\n exec(f.read(), about)\n \n+# Hackery to get the CLI docs to generate\n+import click\n+\n+import pipenv.vendor.click\n+\n+click.BaseCommand = pipenv.vendor.click.BaseCommand\n+\n # -- General configuration ------------------------------------------------\n \n # If your documentation needs a minimal Sphinx version, state it here.\n@@ -38,7 +45,7 @@\n \"sphinx.ext.todo\",\n \"sphinx.ext.coverage\",\n \"sphinx.ext.viewcode\",\n- \"sphinx_click.ext\",\n+ \"sphinx_click\",\n ]\n \n # Add any paths that contain templates here, relative to this directory.\n@@ -124,7 +131,7 @@\n \n \n def setup(app):\n- app.add_stylesheet(\"custom.css\")\n+ app.add_css_file(\"custom.css\")\n \n \n # -- Options for HTMLHelp output ------------------------------------------\n", "issue": "CLI docs broken\n### Issue description\r\n[CLI docs](https://pipenv.pypa.io/en/latest/cli/)\r\n[CLI docs source](https://pipenv.pypa.io/en/latest/_sources/cli.rst.txt)\r\n\r\n### Expected result\r\nCLI docs\r\n\r\n### Actual result\r\nEmpty page\n", "before_files": [{"content": "#\n# pipenv documentation build configuration file, created by\n# sphinx-quickstart on Mon Jan 30 13:28:36 2017.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\n\n# Path hackery to get current version number.\nhere = os.path.abspath(os.path.dirname(__file__))\n\nabout = {}\nwith open(os.path.join(here, \"..\", \"pipenv\", \"__version__.py\")) as f:\n exec(f.read(), about)\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.coverage\",\n \"sphinx.ext.viewcode\",\n \"sphinx_click.ext\",\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"pipenv\"\ncopyright = '2020. A project founded by <a href=\"http://kennethreitz.com/pages/open-projects.html\">Kenneth Reitz</a>'\nauthor = \"Python Packaging Authority\"\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = about[\"__version__\"]\n# The full version, including alpha/beta/rc tags.\nrelease = about[\"__version__\"]\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"alabaster\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n \"show_powered_by\": False,\n \"github_user\": \"pypa\",\n \"github_repo\": \"pipenv\",\n \"github_banner\": False,\n \"show_related\": False,\n}\n\nhtml_sidebars = {\n \"index\": [\"sidebarintro.html\", \"sourcelink.html\", \"searchbox.html\", \"hacks.html\"],\n \"**\": [\n \"sidebarlogo.html\",\n \"localtoc.html\",\n \"relations.html\",\n \"sourcelink.html\",\n \"searchbox.html\",\n \"hacks.html\",\n ],\n}\n\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n\ndef setup(app):\n app.add_stylesheet(\"custom.css\")\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"pipenvdoc\"\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, \"pipenv.tex\", \"pipenv Documentation\", \"Kenneth Reitz\", \"manual\"),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"pipenv\", \"pipenv Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"pipenv\",\n \"pipenv Documentation\",\n author,\n \"pipenv\",\n \"One line description of project.\",\n \"Miscellaneous\",\n ),\n]\n\n\n# -- Options for Epub output ----------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = project\nepub_author = author\nepub_publisher = author\nepub_copyright = copyright\n\n# The unique identifier of the text. This can be a ISBN number\n# or the project homepage.\n#\n# epub_identifier = ''\n\n# A unique identification for the text.\n#\n# epub_uid = ''\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = [\"search.html\"]\n", "path": "docs/conf.py"}], "after_files": [{"content": "#\n# pipenv documentation build configuration file, created by\n# sphinx-quickstart on Mon Jan 30 13:28:36 2017.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\n\n# Path hackery to get current version number.\nhere = os.path.abspath(os.path.dirname(__file__))\n\nabout = {}\nwith open(os.path.join(here, \"..\", \"pipenv\", \"__version__.py\")) as f:\n exec(f.read(), about)\n\n# Hackery to get the CLI docs to generate\nimport click\n\nimport pipenv.vendor.click\n\nclick.BaseCommand = pipenv.vendor.click.BaseCommand\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.coverage\",\n \"sphinx.ext.viewcode\",\n \"sphinx_click\",\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"pipenv\"\ncopyright = '2020. A project founded by <a href=\"http://kennethreitz.com/pages/open-projects.html\">Kenneth Reitz</a>'\nauthor = \"Python Packaging Authority\"\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = about[\"__version__\"]\n# The full version, including alpha/beta/rc tags.\nrelease = about[\"__version__\"]\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"alabaster\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n \"show_powered_by\": False,\n \"github_user\": \"pypa\",\n \"github_repo\": \"pipenv\",\n \"github_banner\": False,\n \"show_related\": False,\n}\n\nhtml_sidebars = {\n \"index\": [\"sidebarintro.html\", \"sourcelink.html\", \"searchbox.html\", \"hacks.html\"],\n \"**\": [\n \"sidebarlogo.html\",\n \"localtoc.html\",\n \"relations.html\",\n \"sourcelink.html\",\n \"searchbox.html\",\n \"hacks.html\",\n ],\n}\n\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n\ndef setup(app):\n app.add_css_file(\"custom.css\")\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"pipenvdoc\"\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, \"pipenv.tex\", \"pipenv Documentation\", \"Kenneth Reitz\", \"manual\"),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"pipenv\", \"pipenv Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"pipenv\",\n \"pipenv Documentation\",\n author,\n \"pipenv\",\n \"One line description of project.\",\n \"Miscellaneous\",\n ),\n]\n\n\n# -- Options for Epub output ----------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = project\nepub_author = author\nepub_publisher = author\nepub_copyright = copyright\n\n# The unique identifier of the text. This can be a ISBN number\n# or the project homepage.\n#\n# epub_identifier = ''\n\n# A unique identification for the text.\n#\n# epub_uid = ''\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = [\"search.html\"]\n", "path": "docs/conf.py"}]} | 2,284 | 239 |