problem_id (stringlengths 18-22) | source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-10.2k) | golden_diff (stringlengths 151-4.94k) | verification_info (stringlengths 582-21k) | num_tokens (int64 271-2.05k) | num_tokens_diff (int64 47-1.02k)
---|---|---|---|---|---|---|---|---|
gh_patches_debug_41534 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-8855 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[CT-3107] [Bug] nested dependencies not installed when package is a tarball
### Is this a new bug in dbt-core?
- [X] I believe this is a new bug in dbt-core
- [X] I have searched the existing issues, and I could not find an existing issue for this bug
### Current Behavior
when running `dbt deps` to install a package specified as a tarball, dbt doesn't install nested dependencies (i.e. packages specified in the imported package's `packages.yml` file) as it does when installing a package from local, git or the dbt hub.
### Expected Behavior
consistent behaviour across import methods regarding nested dependencies. dbt should install any dependencies specified in the tarball project's packages.yml file.
### Steps To Reproduce
this can be reproduced by importing the tarball of a package with nested dependencies. In this case, importing dbt_expectations should cause dbt_date to be installed, as it's included in the package's dependencies here: https://github.com/calogica/dbt-expectations/blob/0.9.0/packages.yml
Steps:
1. create a `packages.yml` file in a project with the following structure:
``` yaml
packages:
- tarball: "https://github.com/calogica/dbt-expectations/archive/refs/tags/0.9.0.tar.gz"
name: "dbt_expectations"
```
2. run `dbt deps`
running dbt deps will only install dbt_expectations:
```
20:08:55 Running with dbt=1.5.6
20:08:55 Installing dbt_expectations
20:08:56 Installed from tarball (url: https://github.com/calogica/dbt-expectations/archive/refs/tags/0.9.0.tar.gz)
```
compare this to installing the same package from dbt hub, with the following `packages.yml`:
``` yaml
packages:
- package: calogica/dbt_expectations
version: "0.9.0"
```
```
20:14:24 Running with dbt=1.5.6
20:14:24 Installing calogica/dbt_expectations
20:14:25 Installed from version 0.9.0
20:14:25 Up to date!
20:14:25 Installing calogica/dbt_date
20:14:25 Installed from version 0.8.1
20:14:25 Updated version available: 0.9.1
20:14:25
20:14:25 Updates available for packages: ['calogica/dbt_date']
Update your versions in packages.yml, then run dbt deps
```
### Relevant log output
_No response_
### Environment
```markdown
- OS: Mac OS 13.5.2 (22G91)
- Python: 3.9
- dbt: 1.5.6
```
### Which database adapter are you using with dbt?
snowflake
### Additional Context
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/dbt/deps/tarball.py`
Content:
```
1 from typing import Dict
2
3 from dbt.contracts.project import RegistryPackageMetadata, TarballPackage
4 from dbt.deps.base import PinnedPackage, UnpinnedPackage
5
6
7 class TarballPackageMixin:
8 def __init__(self, tarball: str) -> None:
9 super().__init__()
10 self.tarball = tarball
11
12 @property
13 def name(self):
14 return self.tarball
15
16 def source_type(self) -> str:
17 return "tarball"
18
19
20 class TarballPinnedPackage(TarballPackageMixin, PinnedPackage):
21 def __init__(self, tarball: str, package: str) -> None:
22 super().__init__(tarball)
23 # setup to recycle RegistryPinnedPackage fns
24 self.package = package
25 self.version = "tarball"
26
27 @property
28 def name(self):
29 return self.package
30
31 def to_dict(self) -> Dict[str, str]:
32 return {
33 "tarball": self.tarball,
34 "version": self.version,
35 "package": self.package,
36 }
37
38 def get_version(self):
39 return self.version
40
41 def nice_version_name(self):
42 return f"tarball (url: {self.tarball})"
43
44 def _fetch_metadata(self, project, renderer):
45 """
46 recycle RegistryPackageMetadata so that we can use the install and
47 download_and_untar from RegistryPinnedPackage next.
48 build RegistryPackageMetadata from info passed via packages.yml since no
49 'metadata' service exists in this case.
50 """
51
52 dct = {
53 "name": self.package,
54 "packages": [], # note: required by RegistryPackageMetadata
55 "downloads": {"tarball": self.tarball},
56 }
57
58 return RegistryPackageMetadata.from_dict(dct)
59
60 def install(self, project, renderer):
61 self._install(project, renderer)
62
63
64 class TarballUnpinnedPackage(TarballPackageMixin, UnpinnedPackage[TarballPinnedPackage]):
65 def __init__(
66 self,
67 tarball: str,
68 package: str,
69 ) -> None:
70 super().__init__(tarball)
71 # setup to recycle RegistryPinnedPackage fns
72 self.package = package
73 self.version = "tarball"
74
75 @classmethod
76 def from_contract(cls, contract: TarballPackage) -> "TarballUnpinnedPackage":
77 return cls(tarball=contract.tarball, package=contract.name)
78
79 def incorporate(self, other: "TarballUnpinnedPackage") -> "TarballUnpinnedPackage":
80 return TarballUnpinnedPackage(tarball=self.tarball, package=self.package)
81
82 def resolved(self) -> TarballPinnedPackage:
83 return TarballPinnedPackage(tarball=self.tarball, package=self.package)
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/dbt/deps/tarball.py b/core/dbt/deps/tarball.py
--- a/core/dbt/deps/tarball.py
+++ b/core/dbt/deps/tarball.py
@@ -1,7 +1,14 @@
+import functools
+import os
+from pathlib import Path
from typing import Dict
-from dbt.contracts.project import RegistryPackageMetadata, TarballPackage
-from dbt.deps.base import PinnedPackage, UnpinnedPackage
+from dbt.clients import system
+from dbt.config.project import PartialProject
+from dbt.contracts.project import TarballPackage
+from dbt.deps.base import PinnedPackage, UnpinnedPackage, get_downloads_path
+from dbt.exceptions import DependencyError
+from dbt.utils import _connection_exception_retry as connection_exception_retry
class TarballPackageMixin:
@@ -20,9 +27,10 @@
class TarballPinnedPackage(TarballPackageMixin, PinnedPackage):
def __init__(self, tarball: str, package: str) -> None:
super().__init__(tarball)
- # setup to recycle RegistryPinnedPackage fns
self.package = package
self.version = "tarball"
+ self.tar_path = os.path.join(Path(get_downloads_path()), self.package)
+ self.untarred_path = f"{self.tar_path}_untarred"
@property
def name(self):
@@ -31,8 +39,7 @@
def to_dict(self) -> Dict[str, str]:
return {
"tarball": self.tarball,
- "version": self.version,
- "package": self.package,
+ "name": self.package,
}
def get_version(self):
@@ -42,23 +49,38 @@
return f"tarball (url: {self.tarball})"
def _fetch_metadata(self, project, renderer):
- """
- recycle RegistryPackageMetadata so that we can use the install and
- download_and_untar from RegistryPinnedPackage next.
- build RegistryPackageMetadata from info passed via packages.yml since no
- 'metadata' service exists in this case.
- """
-
- dct = {
- "name": self.package,
- "packages": [], # note: required by RegistryPackageMetadata
- "downloads": {"tarball": self.tarball},
- }
-
- return RegistryPackageMetadata.from_dict(dct)
+ """Download and untar the project and parse metadata from the project folder."""
+ download_untar_fn = functools.partial(
+ self.download_and_untar, self.tarball, self.tar_path, self.untarred_path, self.name
+ )
+ connection_exception_retry(download_untar_fn, 5)
+
+ tar_contents = os.listdir(self.untarred_path)
+ if len(tar_contents) != 1:
+ raise DependencyError(
+ f"Incorrect structure for package extracted from {self.tarball}."
+ f"The extracted package needs to follow the structure {self.name}/<package_contents>."
+ )
+ child_folder = os.listdir(self.untarred_path)[0]
+
+ self.untarred_path = os.path.join(self.untarred_path, child_folder)
+ partial = PartialProject.from_project_root(self.untarred_path)
+ metadata = partial.render_package_metadata(renderer)
+ metadata.name = self.package if self.package else metadata.name
+ return metadata
def install(self, project, renderer):
- self._install(project, renderer)
+ download_untar_fn = functools.partial(
+ self.download_and_untar, self.tarball, self.tar_path, self.untarred_path, self.name
+ )
+ connection_exception_retry(download_untar_fn, 5)
+ dest_path = self.get_installation_path(project, renderer)
+ if os.path.exists(dest_path):
+ if system.path_is_symlink(dest_path):
+ system.remove_file(dest_path)
+ else:
+ system.rmdir(dest_path)
+ system.move(self.untarred_path, dest_path)
class TarballUnpinnedPackage(TarballPackageMixin, UnpinnedPackage[TarballPinnedPackage]):
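For readability, a rough sketch follows (an editorial simplification, not code taken verbatim from the patch) of what the patched `_fetch_metadata` does: download and untar the archive, then parse the untarred project itself, so that the project's own `packages.yml`, and therefore its nested dependencies, ends up in the metadata that dbt's resolver sees.

```python
# Editorial sketch only: condensed from the golden diff above, with retries and
# error handling omitted. Names mirror the patched TarballPinnedPackage.
import os
from dbt.config.project import PartialProject

def fetch_tarball_metadata(pinned, renderer):
    # download + untar into pinned.untarred_path (wrapped in connection retries in the real code)
    pinned.download_and_untar(pinned.tarball, pinned.tar_path, pinned.untarred_path, pinned.name)
    child = os.listdir(pinned.untarred_path)[0]               # the single extracted folder
    project_root = os.path.join(pinned.untarred_path, child)
    partial = PartialProject.from_project_root(project_root)
    return partial.render_package_metadata(renderer)          # metadata now includes nested packages
```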
| {"golden_diff": "diff --git a/core/dbt/deps/tarball.py b/core/dbt/deps/tarball.py\n--- a/core/dbt/deps/tarball.py\n+++ b/core/dbt/deps/tarball.py\n@@ -1,7 +1,14 @@\n+import functools\n+import os\n+from pathlib import Path\n from typing import Dict\n \n-from dbt.contracts.project import RegistryPackageMetadata, TarballPackage\n-from dbt.deps.base import PinnedPackage, UnpinnedPackage\n+from dbt.clients import system\n+from dbt.config.project import PartialProject\n+from dbt.contracts.project import TarballPackage\n+from dbt.deps.base import PinnedPackage, UnpinnedPackage, get_downloads_path\n+from dbt.exceptions import DependencyError\n+from dbt.utils import _connection_exception_retry as connection_exception_retry\n \n \n class TarballPackageMixin:\n@@ -20,9 +27,10 @@\n class TarballPinnedPackage(TarballPackageMixin, PinnedPackage):\n def __init__(self, tarball: str, package: str) -> None:\n super().__init__(tarball)\n- # setup to recycle RegistryPinnedPackage fns\n self.package = package\n self.version = \"tarball\"\n+ self.tar_path = os.path.join(Path(get_downloads_path()), self.package)\n+ self.untarred_path = f\"{self.tar_path}_untarred\"\n \n @property\n def name(self):\n@@ -31,8 +39,7 @@\n def to_dict(self) -> Dict[str, str]:\n return {\n \"tarball\": self.tarball,\n- \"version\": self.version,\n- \"package\": self.package,\n+ \"name\": self.package,\n }\n \n def get_version(self):\n@@ -42,23 +49,38 @@\n return f\"tarball (url: {self.tarball})\"\n \n def _fetch_metadata(self, project, renderer):\n- \"\"\"\n- recycle RegistryPackageMetadata so that we can use the install and\n- download_and_untar from RegistryPinnedPackage next.\n- build RegistryPackageMetadata from info passed via packages.yml since no\n- 'metadata' service exists in this case.\n- \"\"\"\n-\n- dct = {\n- \"name\": self.package,\n- \"packages\": [], # note: required by RegistryPackageMetadata\n- \"downloads\": {\"tarball\": self.tarball},\n- }\n-\n- return RegistryPackageMetadata.from_dict(dct)\n+ \"\"\"Download and untar the project and parse metadata from the project folder.\"\"\"\n+ download_untar_fn = functools.partial(\n+ self.download_and_untar, self.tarball, self.tar_path, self.untarred_path, self.name\n+ )\n+ connection_exception_retry(download_untar_fn, 5)\n+\n+ tar_contents = os.listdir(self.untarred_path)\n+ if len(tar_contents) != 1:\n+ raise DependencyError(\n+ f\"Incorrect structure for package extracted from {self.tarball}.\"\n+ f\"The extracted package needs to follow the structure {self.name}/<package_contents>.\"\n+ )\n+ child_folder = os.listdir(self.untarred_path)[0]\n+\n+ self.untarred_path = os.path.join(self.untarred_path, child_folder)\n+ partial = PartialProject.from_project_root(self.untarred_path)\n+ metadata = partial.render_package_metadata(renderer)\n+ metadata.name = self.package if self.package else metadata.name\n+ return metadata\n \n def install(self, project, renderer):\n- self._install(project, renderer)\n+ download_untar_fn = functools.partial(\n+ self.download_and_untar, self.tarball, self.tar_path, self.untarred_path, self.name\n+ )\n+ connection_exception_retry(download_untar_fn, 5)\n+ dest_path = self.get_installation_path(project, renderer)\n+ if os.path.exists(dest_path):\n+ if system.path_is_symlink(dest_path):\n+ system.remove_file(dest_path)\n+ else:\n+ system.rmdir(dest_path)\n+ system.move(self.untarred_path, dest_path)\n \n \n class TarballUnpinnedPackage(TarballPackageMixin, UnpinnedPackage[TarballPinnedPackage]):\n", "issue": "[CT-3107] [Bug] nested dependencies 
not installed when package is a tarball\n### Is this a new bug in dbt-core?\r\n\r\n- [X] I believe this is a new bug in dbt-core\r\n- [X] I have searched the existing issues, and I could not find an existing issue for this bug\r\n\r\n### Current Behavior\r\n\r\nwhen running `dbt deps` to install a package specified as a tarball, dbt doesn't install nested dependencies (i.e. packages specified in the imported package's `packages.yml` file) as it does when installing a package from local, git or the dbt hub.\r\n\r\n### Expected Behavior\r\n\r\nconsistent behaviour across import methods regarding nested dependencies. dbt should install any dependencies specified in the tarball project's packages.yml file.\r\n\r\n\r\n### Steps To Reproduce\r\n\r\nthis can be reproduced by importing the tarball of a package with nested dependencies. In this case, importing dbt_expectations should cause dbt_date to be installed, as its included in the package's dependencies here: https://github.com/calogica/dbt-expectations/blob/0.9.0/packages.yml\r\n\r\nSteps:\r\n1. create a `packages.yml` file in a project with the following structure:\r\n``` yaml\r\npackages:\r\n - tarball: \"https://github.com/calogica/dbt-expectations/archive/refs/tags/0.9.0.tar.gz\"\r\n name: \"dbt_expectations\"\r\n```\r\n2. run `dbt deps`\r\n\r\n\r\nrunning dbt deps will only install dbt_expectations:\r\n```\r\n20:08:55 Running with dbt=1.5.6\r\n20:08:55 Installing dbt_expectations\r\n20:08:56 Installed from tarball (url: https://github.com/calogica/dbt-expectations/archive/refs/tags/0.9.0.tar.gz)\r\n```\r\ncompare this to installing the same package from dbt hub, with the following `packages.yml`:\r\n``` yaml\r\npackages:\r\n - package: calogica/dbt_expectations\r\n version: \"0.9.0\"\r\n```\r\n```\r\n20:14:24 Running with dbt=1.5.6\r\n20:14:24 Installing calogica/dbt_expectations\r\n20:14:25 Installed from version 0.9.0\r\n20:14:25 Up to date!\r\n20:14:25 Installing calogica/dbt_date\r\n20:14:25 Installed from version 0.8.1\r\n20:14:25 Updated version available: 0.9.1\r\n20:14:25 \r\n20:14:25 Updates available for packages: ['calogica/dbt_date'] \r\nUpdate your versions in packages.yml, then run dbt deps\r\n```\r\n\r\n### Relevant log output\r\n\r\n_No response_\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS: Mac OS 13.5.2 (22G91)\r\n- Python: 3.9\r\n- dbt: 1.5.6\r\n```\r\n\r\n\r\n### Which database adapter are you using with dbt?\r\n\r\nsnowflake\r\n\r\n### Additional Context\r\n\r\n_No response_\n", "before_files": [{"content": "from typing import Dict\n\nfrom dbt.contracts.project import RegistryPackageMetadata, TarballPackage\nfrom dbt.deps.base import PinnedPackage, UnpinnedPackage\n\n\nclass TarballPackageMixin:\n def __init__(self, tarball: str) -> None:\n super().__init__()\n self.tarball = tarball\n\n @property\n def name(self):\n return self.tarball\n\n def source_type(self) -> str:\n return \"tarball\"\n\n\nclass TarballPinnedPackage(TarballPackageMixin, PinnedPackage):\n def __init__(self, tarball: str, package: str) -> None:\n super().__init__(tarball)\n # setup to recycle RegistryPinnedPackage fns\n self.package = package\n self.version = \"tarball\"\n\n @property\n def name(self):\n return self.package\n\n def to_dict(self) -> Dict[str, str]:\n return {\n \"tarball\": self.tarball,\n \"version\": self.version,\n \"package\": self.package,\n }\n\n def get_version(self):\n return self.version\n\n def nice_version_name(self):\n return f\"tarball (url: {self.tarball})\"\n\n def _fetch_metadata(self, project, renderer):\n 
\"\"\"\n recycle RegistryPackageMetadata so that we can use the install and\n download_and_untar from RegistryPinnedPackage next.\n build RegistryPackageMetadata from info passed via packages.yml since no\n 'metadata' service exists in this case.\n \"\"\"\n\n dct = {\n \"name\": self.package,\n \"packages\": [], # note: required by RegistryPackageMetadata\n \"downloads\": {\"tarball\": self.tarball},\n }\n\n return RegistryPackageMetadata.from_dict(dct)\n\n def install(self, project, renderer):\n self._install(project, renderer)\n\n\nclass TarballUnpinnedPackage(TarballPackageMixin, UnpinnedPackage[TarballPinnedPackage]):\n def __init__(\n self,\n tarball: str,\n package: str,\n ) -> None:\n super().__init__(tarball)\n # setup to recycle RegistryPinnedPackage fns\n self.package = package\n self.version = \"tarball\"\n\n @classmethod\n def from_contract(cls, contract: TarballPackage) -> \"TarballUnpinnedPackage\":\n return cls(tarball=contract.tarball, package=contract.name)\n\n def incorporate(self, other: \"TarballUnpinnedPackage\") -> \"TarballUnpinnedPackage\":\n return TarballUnpinnedPackage(tarball=self.tarball, package=self.package)\n\n def resolved(self) -> TarballPinnedPackage:\n return TarballPinnedPackage(tarball=self.tarball, package=self.package)\n", "path": "core/dbt/deps/tarball.py"}], "after_files": [{"content": "import functools\nimport os\nfrom pathlib import Path\nfrom typing import Dict\n\nfrom dbt.clients import system\nfrom dbt.config.project import PartialProject\nfrom dbt.contracts.project import TarballPackage\nfrom dbt.deps.base import PinnedPackage, UnpinnedPackage, get_downloads_path\nfrom dbt.exceptions import DependencyError\nfrom dbt.utils import _connection_exception_retry as connection_exception_retry\n\n\nclass TarballPackageMixin:\n def __init__(self, tarball: str) -> None:\n super().__init__()\n self.tarball = tarball\n\n @property\n def name(self):\n return self.tarball\n\n def source_type(self) -> str:\n return \"tarball\"\n\n\nclass TarballPinnedPackage(TarballPackageMixin, PinnedPackage):\n def __init__(self, tarball: str, package: str) -> None:\n super().__init__(tarball)\n self.package = package\n self.version = \"tarball\"\n self.tar_path = os.path.join(Path(get_downloads_path()), self.package)\n self.untarred_path = f\"{self.tar_path}_untarred\"\n\n @property\n def name(self):\n return self.package\n\n def to_dict(self) -> Dict[str, str]:\n return {\n \"tarball\": self.tarball,\n \"name\": self.package,\n }\n\n def get_version(self):\n return self.version\n\n def nice_version_name(self):\n return f\"tarball (url: {self.tarball})\"\n\n def _fetch_metadata(self, project, renderer):\n \"\"\"Download and untar the project and parse metadata from the project folder.\"\"\"\n download_untar_fn = functools.partial(\n self.download_and_untar, self.tarball, self.tar_path, self.untarred_path, self.name\n )\n connection_exception_retry(download_untar_fn, 5)\n\n tar_contents = os.listdir(self.untarred_path)\n if len(tar_contents) != 1:\n raise DependencyError(\n f\"Incorrect structure for package extracted from {self.tarball}.\"\n f\"The extracted package needs to follow the structure {self.name}/<package_contents>.\"\n )\n child_folder = os.listdir(self.untarred_path)[0]\n\n self.untarred_path = os.path.join(self.untarred_path, child_folder)\n partial = PartialProject.from_project_root(self.untarred_path)\n metadata = partial.render_package_metadata(renderer)\n metadata.name = self.package if self.package else metadata.name\n return metadata\n\n def 
install(self, project, renderer):\n download_untar_fn = functools.partial(\n self.download_and_untar, self.tarball, self.tar_path, self.untarred_path, self.name\n )\n connection_exception_retry(download_untar_fn, 5)\n dest_path = self.get_installation_path(project, renderer)\n if os.path.exists(dest_path):\n if system.path_is_symlink(dest_path):\n system.remove_file(dest_path)\n else:\n system.rmdir(dest_path)\n system.move(self.untarred_path, dest_path)\n\n\nclass TarballUnpinnedPackage(TarballPackageMixin, UnpinnedPackage[TarballPinnedPackage]):\n def __init__(\n self,\n tarball: str,\n package: str,\n ) -> None:\n super().__init__(tarball)\n # setup to recycle RegistryPinnedPackage fns\n self.package = package\n self.version = \"tarball\"\n\n @classmethod\n def from_contract(cls, contract: TarballPackage) -> \"TarballUnpinnedPackage\":\n return cls(tarball=contract.tarball, package=contract.name)\n\n def incorporate(self, other: \"TarballUnpinnedPackage\") -> \"TarballUnpinnedPackage\":\n return TarballUnpinnedPackage(tarball=self.tarball, package=self.package)\n\n def resolved(self) -> TarballPinnedPackage:\n return TarballPinnedPackage(tarball=self.tarball, package=self.package)\n", "path": "core/dbt/deps/tarball.py"}]} | 1,732 | 944 |
gh_patches_debug_27699 | rasdani/github-patches | git_diff | cowrie__cowrie-638 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
csirtg plugin no longer working
I'm not sure exactly when this happened, but just happened to check the logs today, and noticed the csirtg plugin has some errors.
```
2017-11-02T17:05:41-0400 [cowrie.telnet.transport.HoneyPotTelnetFactory] New connection: 45.32.221.61:59776 (x.x.x.x:23) [session: TT0]
2017-11-02T17:05:41-0400 [twisted.logger._observer#critical] Temporarily disabling observer LegacyLogObserverWrapper(<bound method Output.emit of <cowrie.output.csirtg.Output object at 0x7f3a5ce9bb50>>) due to exception: [Failure instance: Traceback: <type 'exceptions.TypeError'>: string indices must be integers
/home/cowrie/cowrie/cowrie/telnet/transport.py:218:connectionMade
/usr/local/lib/python2.7/dist-packages/twisted/python/threadable.py:53:sync
/usr/local/lib/python2.7/dist-packages/twisted/python/log.py:286:msg
/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py:154:publishToNewObserver
--- <exception caught here> ---
/usr/local/lib/python2.7/dist-packages/twisted/logger/_observer.py:131:__call__
/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py:93:__call__
/home/cowrie/cowrie/cowrie/core/output.py:190:emit
/home/cowrie/cowrie/cowrie/output/csirtg.py:82:write
]
Traceback (most recent call last):
File "/home/cowrie/cowrie/cowrie/telnet/transport.py", line 218, in connectionMade
session=self.transportId, sessionno='T'+str(sessionno))
File "/usr/local/lib/python2.7/dist-packages/twisted/python/threadable.py", line 53, in sync
return function(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/twisted/python/log.py", line 286, in msg
_publishNew(self._publishPublisher, actualEventDict, textFromEventDict)
File "/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py", line 154, in publishToNewObserver
observer(eventDict)
--- <exception caught here> ---
File "/usr/local/lib/python2.7/dist-packages/twisted/logger/_observer.py", line 131, in __call__
observer(event)
File "/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py", line 93, in __call__
self.legacyObserver(event)
File "/home/cowrie/cowrie/cowrie/core/output.py", line 190, in emit
self.write(ev)
File "/home/cowrie/cowrie/cowrie/output/csirtg.py", line 82, in write
logger.info('logged to csirtg %s ' % ret['indicator']['location'])
exceptions.TypeError: string indices must be integers
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cowrie/output/csirtg.py`
Content:
```
1 from __future__ import division, absolute_import
2
3 import cowrie.core.output
4
5 from csirtgsdk.indicator import Indicator
6 from csirtgsdk.client import Client
7 from datetime import datetime
8 import logging
9 import os
10
11 logger = logging.getLogger(__name__)
12
13 USERNAME = os.environ.get('CSIRTG_USER')
14 FEED = os.environ.get('CSIRTG_FEED')
15 TOKEN = os.environ.get('CSIRG_TOKEN')
16 DESCRIPTION = os.environ.get('CSIRTG_DESCRIPTION', 'random scanning activity')
17
18
19 class Output(cowrie.core.output.Output):
20 def __init__(self, cfg):
21 cowrie.core.output.Output.__init__(self, cfg)
22 self.user = cfg.get('output_csirtg', 'username') or USERNAME
23 self.feed = cfg.get('output_csirtg', 'feed') or FEED
24 self.token = cfg.get('output_csirtg', 'token') or TOKEN
25 try:
26 self.description = cfg.get('output_csirtg', 'description')
27 except Exception:
28 self.description = DESCRIPTION
29 self.context = {}
30 self.client = Client(token=self.token)
31
32 def start(self,):
33 pass
34
35 def stop(self):
36 pass
37
38 def write(self, e):
39 sid = e['session']
40 peerIP = e['src_ip']
41 ts = e['timestamp']
42 system = e['system']
43
44 if system not in ['cowrie.ssh.factory.CowrieSSHFactory', 'cowrie.telnet.transport.HoneyPotTelnetFactory']:
45 logger.debug('skipping {}'.format(system))
46 return
47
48 today = str(datetime.now().date())
49
50 if not self.context.get(today):
51 logger.debug('resetting context for %s' % today)
52 self.context = {}
53 self.context[today] = set()
54
55 key = ','.join([peerIP, system])
56
57 if key in self.context[today]:
58 logger.debug('skipping {}'.format(key))
59 return
60
61 self.context[today].add(key)
62
63 tags = 'scanner,ssh'
64 port = 22
65 if e['system'] == 'cowrie.telnet.transport.HoneyPotTelnetFactory':
66 tags = 'scanner,telnet'
67 port = 23
68
69 i = {
70 'user': self.user,
71 'feed': self.feed,
72 'indicator': peerIP,
73 'portlist': port,
74 'protocol': 'tcp',
75 'tags': tags,
76 'firsttime': ts,
77 'lasttime': ts,
78 'description': self.description
79 }
80
81 ret = Indicator(self.client, i).submit()
82 logger.info('logged to csirtg %s ' % ret['indicator']['location'])
83
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cowrie/output/csirtg.py b/cowrie/output/csirtg.py
--- a/cowrie/output/csirtg.py
+++ b/cowrie/output/csirtg.py
@@ -7,8 +7,7 @@
from datetime import datetime
import logging
import os
-
-logger = logging.getLogger(__name__)
+from twisted.python import log
USERNAME = os.environ.get('CSIRTG_USER')
FEED = os.environ.get('CSIRTG_FEED')
@@ -42,20 +41,17 @@
system = e['system']
if system not in ['cowrie.ssh.factory.CowrieSSHFactory', 'cowrie.telnet.transport.HoneyPotTelnetFactory']:
- logger.debug('skipping {}'.format(system))
return
today = str(datetime.now().date())
if not self.context.get(today):
- logger.debug('resetting context for %s' % today)
self.context = {}
self.context[today] = set()
key = ','.join([peerIP, system])
if key in self.context[today]:
- logger.debug('skipping {}'.format(key))
return
self.context[today].add(key)
@@ -79,5 +75,5 @@
}
ret = Indicator(self.client, i).submit()
- logger.info('logged to csirtg %s ' % ret['indicator']['location'])
+ log.msg('logged to csirtg %s ' % ret['location'])
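A short illustration of why the original logging line failed and what the patch logs instead. The exact return shape of `Indicator(client, i).submit()` is an assumption inferred from the traceback and from the patch, not taken from csirtgsdk documentation:

```python
# Hypothetical shape of the submit() return value, inferred from the TypeError:
# 'indicator' maps to the submitted string, so indexing it with a str key fails.
ret = {'indicator': '45.32.221.61', 'location': '<url of the created indicator>'}

ret['indicator']['location']   # TypeError: string indices must be integers
ret['location']                # the key the patched code logs via twisted.python.log
```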
| {"golden_diff": "diff --git a/cowrie/output/csirtg.py b/cowrie/output/csirtg.py\n--- a/cowrie/output/csirtg.py\n+++ b/cowrie/output/csirtg.py\n@@ -7,8 +7,7 @@\n from datetime import datetime\n import logging\n import os\n-\n-logger = logging.getLogger(__name__)\n+from twisted.python import log\n \n USERNAME = os.environ.get('CSIRTG_USER')\n FEED = os.environ.get('CSIRTG_FEED')\n@@ -42,20 +41,17 @@\n system = e['system']\n \n if system not in ['cowrie.ssh.factory.CowrieSSHFactory', 'cowrie.telnet.transport.HoneyPotTelnetFactory']:\n- logger.debug('skipping {}'.format(system))\n return\n \n today = str(datetime.now().date())\n \n if not self.context.get(today):\n- logger.debug('resetting context for %s' % today)\n self.context = {}\n self.context[today] = set()\n \n key = ','.join([peerIP, system])\n \n if key in self.context[today]:\n- logger.debug('skipping {}'.format(key))\n return\n \n self.context[today].add(key)\n@@ -79,5 +75,5 @@\n }\n \n ret = Indicator(self.client, i).submit()\n- logger.info('logged to csirtg %s ' % ret['indicator']['location'])\n+ log.msg('logged to csirtg %s ' % ret['location'])\n", "issue": "csirtg plugin no longer working\nI'm not sure exactly when this happened, but just happend to check the logs today, and noticed the csirtg plugin has some errors.\r\n\r\n```\r\n2017-11-02T17:05:41-0400 [cowrie.telnet.transport.HoneyPotTelnetFactory] New connection: 45.32.221.61:59776 (x.x.x.x:23) [session: TT0]\r\n2017-11-02T17:05:41-0400 [twisted.logger._observer#critical] Temporarily disabling observer LegacyLogObserverWrapper(<bound method Output.emit of <cowrie.output.csirtg.Output object at 0x7f3a5ce9bb50>>) due to exception: [Failure instance: Traceback: <type 'exceptions.TypeError'>: string indices must be integers\r\n\t/home/cowrie/cowrie/cowrie/telnet/transport.py:218:connectionMade\r\n\t/usr/local/lib/python2.7/dist-packages/twisted/python/threadable.py:53:sync\r\n\t/usr/local/lib/python2.7/dist-packages/twisted/python/log.py:286:msg\r\n\t/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py:154:publishToNewObserver\r\n\t--- <exception caught here> ---\r\n\t/usr/local/lib/python2.7/dist-packages/twisted/logger/_observer.py:131:__call__\r\n\t/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py:93:__call__\r\n\t/home/cowrie/cowrie/cowrie/core/output.py:190:emit\r\n\t/home/cowrie/cowrie/cowrie/output/csirtg.py:82:write\r\n\t]\r\n\tTraceback (most recent call last):\r\n\t File \"/home/cowrie/cowrie/cowrie/telnet/transport.py\", line 218, in connectionMade\r\n\t session=self.transportId, sessionno='T'+str(sessionno))\r\n\t File \"/usr/local/lib/python2.7/dist-packages/twisted/python/threadable.py\", line 53, in sync\r\n\t return function(self, *args, **kwargs)\r\n\t File \"/usr/local/lib/python2.7/dist-packages/twisted/python/log.py\", line 286, in msg\r\n\t _publishNew(self._publishPublisher, actualEventDict, textFromEventDict)\r\n\t File \"/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py\", line 154, in publishToNewObserver\r\n\t observer(eventDict)\r\n\t--- <exception caught here> ---\r\n\t File \"/usr/local/lib/python2.7/dist-packages/twisted/logger/_observer.py\", line 131, in __call__\r\n\t observer(event)\r\n\t File \"/usr/local/lib/python2.7/dist-packages/twisted/logger/_legacy.py\", line 93, in __call__\r\n\t self.legacyObserver(event)\r\n\t File \"/home/cowrie/cowrie/cowrie/core/output.py\", line 190, in emit\r\n\t self.write(ev)\r\n\t File \"/home/cowrie/cowrie/cowrie/output/csirtg.py\", line 82, in write\r\n\t 
logger.info('logged to csirtg %s ' % ret['indicator']['location'])\r\n\texceptions.TypeError: string indices must be integers\r\n```\n", "before_files": [{"content": "from __future__ import division, absolute_import\n\nimport cowrie.core.output\n\nfrom csirtgsdk.indicator import Indicator\nfrom csirtgsdk.client import Client\nfrom datetime import datetime\nimport logging\nimport os\n\nlogger = logging.getLogger(__name__)\n\nUSERNAME = os.environ.get('CSIRTG_USER')\nFEED = os.environ.get('CSIRTG_FEED')\nTOKEN = os.environ.get('CSIRG_TOKEN')\nDESCRIPTION = os.environ.get('CSIRTG_DESCRIPTION', 'random scanning activity')\n\n\nclass Output(cowrie.core.output.Output):\n def __init__(self, cfg):\n cowrie.core.output.Output.__init__(self, cfg)\n self.user = cfg.get('output_csirtg', 'username') or USERNAME\n self.feed = cfg.get('output_csirtg', 'feed') or FEED\n self.token = cfg.get('output_csirtg', 'token') or TOKEN\n try:\n self.description = cfg.get('output_csirtg', 'description')\n except Exception:\n self.description = DESCRIPTION\n self.context = {}\n self.client = Client(token=self.token)\n\n def start(self,):\n pass\n\n def stop(self):\n pass\n\n def write(self, e):\n sid = e['session']\n peerIP = e['src_ip']\n ts = e['timestamp']\n system = e['system']\n\n if system not in ['cowrie.ssh.factory.CowrieSSHFactory', 'cowrie.telnet.transport.HoneyPotTelnetFactory']:\n logger.debug('skipping {}'.format(system))\n return\n\n today = str(datetime.now().date())\n\n if not self.context.get(today):\n logger.debug('resetting context for %s' % today)\n self.context = {}\n self.context[today] = set()\n\n key = ','.join([peerIP, system])\n\n if key in self.context[today]:\n logger.debug('skipping {}'.format(key))\n return\n\n self.context[today].add(key)\n\n tags = 'scanner,ssh'\n port = 22\n if e['system'] == 'cowrie.telnet.transport.HoneyPotTelnetFactory':\n tags = 'scanner,telnet'\n port = 23\n\n i = {\n 'user': self.user,\n 'feed': self.feed,\n 'indicator': peerIP,\n 'portlist': port,\n 'protocol': 'tcp',\n 'tags': tags,\n 'firsttime': ts,\n 'lasttime': ts,\n 'description': self.description\n }\n\n ret = Indicator(self.client, i).submit()\n logger.info('logged to csirtg %s ' % ret['indicator']['location'])\n\n", "path": "cowrie/output/csirtg.py"}], "after_files": [{"content": "from __future__ import division, absolute_import\n\nimport cowrie.core.output\n\nfrom csirtgsdk.indicator import Indicator\nfrom csirtgsdk.client import Client\nfrom datetime import datetime\nimport logging\nimport os\nfrom twisted.python import log\n\nUSERNAME = os.environ.get('CSIRTG_USER')\nFEED = os.environ.get('CSIRTG_FEED')\nTOKEN = os.environ.get('CSIRG_TOKEN')\nDESCRIPTION = os.environ.get('CSIRTG_DESCRIPTION', 'random scanning activity')\n\n\nclass Output(cowrie.core.output.Output):\n def __init__(self, cfg):\n cowrie.core.output.Output.__init__(self, cfg)\n self.user = cfg.get('output_csirtg', 'username') or USERNAME\n self.feed = cfg.get('output_csirtg', 'feed') or FEED\n self.token = cfg.get('output_csirtg', 'token') or TOKEN\n try:\n self.description = cfg.get('output_csirtg', 'description')\n except Exception:\n self.description = DESCRIPTION\n self.context = {}\n self.client = Client(token=self.token)\n\n def start(self,):\n pass\n\n def stop(self):\n pass\n\n def write(self, e):\n sid = e['session']\n peerIP = e['src_ip']\n ts = e['timestamp']\n system = e['system']\n\n if system not in ['cowrie.ssh.factory.CowrieSSHFactory', 'cowrie.telnet.transport.HoneyPotTelnetFactory']:\n return\n\n today = 
str(datetime.now().date())\n\n if not self.context.get(today):\n self.context = {}\n self.context[today] = set()\n\n key = ','.join([peerIP, system])\n\n if key in self.context[today]:\n return\n\n self.context[today].add(key)\n\n tags = 'scanner,ssh'\n port = 22\n if e['system'] == 'cowrie.telnet.transport.HoneyPotTelnetFactory':\n tags = 'scanner,telnet'\n port = 23\n\n i = {\n 'user': self.user,\n 'feed': self.feed,\n 'indicator': peerIP,\n 'portlist': port,\n 'protocol': 'tcp',\n 'tags': tags,\n 'firsttime': ts,\n 'lasttime': ts,\n 'description': self.description\n }\n\n ret = Indicator(self.client, i).submit()\n log.msg('logged to csirtg %s ' % ret['location'])\n\n", "path": "cowrie/output/csirtg.py"}]} | 1,759 | 326 |
gh_patches_debug_26662 | rasdani/github-patches | git_diff | chainer__chainer-903 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Stream object should have .ptr set to 0, not None.
The event object expects the stream.ptr to be an integer (size_t) here:
https://github.com/pfnet/chainer/blob/master/cupy/cuda/stream.py#L56
https://github.com/pfnet/chainer/blob/master/cupy/cuda/runtime.pyx#L309
In trunk at the moment, recording events with default stream fails via:
Traceback (most recent call last):
File "train_imagenet.py", line 85, in <module>
train_loop()
File "train_imagenet.py", line 67, in train_loop
start.record()
File "/home/awesomebox/anaconda/lib/python2.7/site-packages/chainer-1.5.1-py2.7-linux-x86_64.egg/cupy/cuda/stream.py", line 56, in record
runtime.eventRecord(self.ptr, stream.ptr)
File "cupy/cuda/runtime.pyx", line 309, in cupy.cuda.runtime.eventRecord (cupy/cuda/runtime.cpp:6139)
TypeError: an integer is required
The fix seems simple:
https://github.com/pfnet/chainer/blob/master/cupy/cuda/stream.py#L103
self.ptr = 0
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cupy/cuda/stream.py`
Content:
```
1 from cupy.cuda import runtime
2
3
4 class Event(object):
5
6 """CUDA event, a synchronization point of CUDA streams.
7
8 This class handles the CUDA event handle in RAII way, i.e., when an Event
9 instance is destroyed by the GC, its handle is also destroyed.
10
11 Args:
12 block (bool): If True, the event blocks on the
13 :meth:`~cupy.cuda.Event.synchronize` method.
14 disable_timing (bool): If True, the event does not prepare the timing
15 data.
16 interprocess (bool): If True, the event can be passed to other
17 processes.
18
19 Attributes:
20 ptr (cupy.cuda.runtime.Stream): Raw stream handle. It can be passed to
21 the CUDA Runtime API via ctypes.
22
23 """
24 def __init__(self, block=False, disable_timing=False, interprocess=False):
25 self.ptr = None
26
27 if interprocess and not disable_timing:
28 raise ValueError('Timing must be disabled for interprocess events')
29 flag = ((block and runtime.eventBlockingSync) |
30 (disable_timing and runtime.eventDisableTiming) |
31 (interprocess and runtime.eventInterprocess))
32 self.ptr = runtime.eventCreateWithFlags(flag)
33
34 def __del__(self):
35 if self.ptr:
36 runtime.eventDestroy(self.ptr)
37 self.ptr = None
38
39 @property
40 def done(self):
41 """True if the event is done."""
42 return bool(runtime.eventQuery(self.ptr))
43
44 def record(self, stream=None):
45 """Records the event to a stream.
46
47 Args:
48 stream (cupy.cuda.Stream): CUDA stream to record event. The null
49 stream is used by default.
50
51 .. seealso:: :meth:`cupy.cuda.Stream.record`
52
53 """
54 if stream is None:
55 stream = Stream(null=True)
56 runtime.eventRecord(self.ptr, stream.ptr)
57
58 def synchronize(self):
59 """Synchronizes all device work to the event.
60
61 If the event is created as a blocking event, it also blocks the CPU
62 thread until the event is done.
63
64 """
65 runtime.eventSynchronize(self.ptr)
66
67
68 def get_elapsed_time(start_event, end_event):
69 """Gets the elapsed time between two events.
70
71 Args:
72 start_event (Event): Earlier event.
73 end_event (Event): Later event.
74
75 Returns:
76 float: Elapsed time in milliseconds.
77
78 """
79 return runtime.eventElapsedTime(start_event.ptr, end_event.ptr)
80
81
82 class Stream(object):
83
84 """CUDA stream.
85
86 This class handles the CUDA stream handle in RAII way, i.e., when an Stream
87 instance is destroyed by the GC, its handle is also destroyed.
88
89 Args:
90 null (bool): If True, the stream is a null stream (i.e. the default
91 stream that synchronizes with all streams). Otherwise, a plain new
92 stream is created.
93 non_blocking (bool): If True, the stream does not synchronize with the
94 NULL stream.
95
96 Attributes:
97 ptr (cupy.cuda.runtime.Stream): Raw stream handle. It can be passed to
98 the CUDA Runtime API via ctypes.
99
100 """
101 def __init__(self, null=False, non_blocking=False):
102 if null:
103 self.ptr = None
104 elif non_blocking:
105 self.ptr = runtime.streamCreateWithFlags(runtime.streamNonBlocking)
106 else:
107 self.ptr = runtime.streamCreate()
108
109 def __del__(self):
110 if self.ptr:
111 runtime.streamDestroy(self.ptr)
112 self.ptr = None
113
114 @property
115 def done(self):
116 """True if all work on this stream has been done."""
117 return bool(runtime.streamQuery(self.ptr))
118
119 def synchronize(self):
120 """Waits for the stream completing all queued work."""
121 runtime.streamSynchronize(self.ptr)
122
123 def add_callback(self, callback, arg):
124 """Adds a callback that is called when all queued work is done.
125
126 Args:
127 callback (function): Callback function. It must take three
128 arguments (Stream object, int error status, and user data
129 object), and returns nothing.
130 arg (object): Argument to the callback.
131
132 """
133 runtime.streamAddCallback(self.ptr, callback, arg)
134
135 def record(self, event=None):
136 """Records an event on the stream.
137
138 Args:
139 event (None or cupy.cuda.Event): CUDA event. If None, then a new
140 plain event is created and used.
141
142 Returns:
143 cupy.cuda.Event: The recorded event.
144
145 .. seealso:: :meth:`cupy.cuda.Event.record`
146
147 """
148 if event is None:
149 event = Event()
150 runtime.eventRecord(event.ptr, self.ptr)
151 return event
152
153 def wait_event(self, event):
154 """Makes the stream wait for an event.
155
156 The future work on this stream will be done after the event.
157
158 Args:
159 event (cupy.cuda.Event): CUDA event.
160
161 """
162 runtime.streamWaitEvent(self.ptr, event)
163
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cupy/cuda/stream.py b/cupy/cuda/stream.py
--- a/cupy/cuda/stream.py
+++ b/cupy/cuda/stream.py
@@ -22,7 +22,7 @@
"""
def __init__(self, block=False, disable_timing=False, interprocess=False):
- self.ptr = None
+ self.ptr = 0
if interprocess and not disable_timing:
raise ValueError('Timing must be disabled for interprocess events')
@@ -34,7 +34,6 @@
def __del__(self):
if self.ptr:
runtime.eventDestroy(self.ptr)
- self.ptr = None
@property
def done(self):
@@ -100,7 +99,7 @@
"""
def __init__(self, null=False, non_blocking=False):
if null:
- self.ptr = None
+ self.ptr = 0
elif non_blocking:
self.ptr = runtime.streamCreateWithFlags(runtime.streamNonBlocking)
else:
@@ -109,7 +108,6 @@
def __del__(self):
if self.ptr:
runtime.streamDestroy(self.ptr)
- self.ptr = None
@property
def done(self):
| {"golden_diff": "diff --git a/cupy/cuda/stream.py b/cupy/cuda/stream.py\n--- a/cupy/cuda/stream.py\n+++ b/cupy/cuda/stream.py\n@@ -22,7 +22,7 @@\n \n \"\"\"\n def __init__(self, block=False, disable_timing=False, interprocess=False):\n- self.ptr = None\n+ self.ptr = 0\n \n if interprocess and not disable_timing:\n raise ValueError('Timing must be disabled for interprocess events')\n@@ -34,7 +34,6 @@\n def __del__(self):\n if self.ptr:\n runtime.eventDestroy(self.ptr)\n- self.ptr = None\n \n @property\n def done(self):\n@@ -100,7 +99,7 @@\n \"\"\"\n def __init__(self, null=False, non_blocking=False):\n if null:\n- self.ptr = None\n+ self.ptr = 0\n elif non_blocking:\n self.ptr = runtime.streamCreateWithFlags(runtime.streamNonBlocking)\n else:\n@@ -109,7 +108,6 @@\n def __del__(self):\n if self.ptr:\n runtime.streamDestroy(self.ptr)\n- self.ptr = None\n \n @property\n def done(self):\n", "issue": "Stream object should have .ptr set to 0, not None.\nThe event object expects the stream.ptr to be an integer (size_t) here:\nhttps://github.com/pfnet/chainer/blob/master/cupy/cuda/stream.py#L56\nhttps://github.com/pfnet/chainer/blob/master/cupy/cuda/runtime.pyx#L309\n\nIn trunk at the moment, recording events with default stream fails via:\nTraceback (most recent call last):\n File \"train_imagenet.py\", line 85, in <module>\n train_loop()\n File \"train_imagenet.py\", line 67, in train_loop\n start.record()\n File \"/home/awesomebox/anaconda/lib/python2.7/site-packages/chainer-1.5.1-py2.7-linux-x86_64.egg/cupy/cuda/stream.py\", line 56, in record\n runtime.eventRecord(self.ptr, stream.ptr)\n File \"cupy/cuda/runtime.pyx\", line 309, in cupy.cuda.runtime.eventRecord (cupy/cuda/runtime.cpp:6139)\nTypeError: an integer is required\n\nThe fix seems simple:\n\nhttps://github.com/pfnet/chainer/blob/master/cupy/cuda/stream.py#L103\nself.ptr = 0\n\n", "before_files": [{"content": "from cupy.cuda import runtime\n\n\nclass Event(object):\n\n \"\"\"CUDA event, a synchronization point of CUDA streams.\n\n This class handles the CUDA event handle in RAII way, i.e., when an Event\n instance is destroyed by the GC, its handle is also destroyed.\n\n Args:\n block (bool): If True, the event blocks on the\n :meth:`~cupy.cuda.Event.synchronize` method.\n disable_timing (bool): If True, the event does not prepare the timing\n data.\n interprocess (bool): If True, the event can be passed to other\n processes.\n\n Attributes:\n ptr (cupy.cuda.runtime.Stream): Raw stream handle. It can be passed to\n the CUDA Runtime API via ctypes.\n\n \"\"\"\n def __init__(self, block=False, disable_timing=False, interprocess=False):\n self.ptr = None\n\n if interprocess and not disable_timing:\n raise ValueError('Timing must be disabled for interprocess events')\n flag = ((block and runtime.eventBlockingSync) |\n (disable_timing and runtime.eventDisableTiming) |\n (interprocess and runtime.eventInterprocess))\n self.ptr = runtime.eventCreateWithFlags(flag)\n\n def __del__(self):\n if self.ptr:\n runtime.eventDestroy(self.ptr)\n self.ptr = None\n\n @property\n def done(self):\n \"\"\"True if the event is done.\"\"\"\n return bool(runtime.eventQuery(self.ptr))\n\n def record(self, stream=None):\n \"\"\"Records the event to a stream.\n\n Args:\n stream (cupy.cuda.Stream): CUDA stream to record event. The null\n stream is used by default.\n\n .. 
seealso:: :meth:`cupy.cuda.Stream.record`\n\n \"\"\"\n if stream is None:\n stream = Stream(null=True)\n runtime.eventRecord(self.ptr, stream.ptr)\n\n def synchronize(self):\n \"\"\"Synchronizes all device work to the event.\n\n If the event is created as a blocking event, it also blocks the CPU\n thread until the event is done.\n\n \"\"\"\n runtime.eventSynchronize(self.ptr)\n\n\ndef get_elapsed_time(start_event, end_event):\n \"\"\"Gets the elapsed time between two events.\n\n Args:\n start_event (Event): Earlier event.\n end_event (Event): Later event.\n\n Returns:\n float: Elapsed time in milliseconds.\n\n \"\"\"\n return runtime.eventElapsedTime(start_event.ptr, end_event.ptr)\n\n\nclass Stream(object):\n\n \"\"\"CUDA stream.\n\n This class handles the CUDA stream handle in RAII way, i.e., when an Stream\n instance is destroyed by the GC, its handle is also destroyed.\n\n Args:\n null (bool): If True, the stream is a null stream (i.e. the default\n stream that synchronizes with all streams). Otherwise, a plain new\n stream is created.\n non_blocking (bool): If True, the stream does not synchronize with the\n NULL stream.\n\n Attributes:\n ptr (cupy.cuda.runtime.Stream): Raw stream handle. It can be passed to\n the CUDA Runtime API via ctypes.\n\n \"\"\"\n def __init__(self, null=False, non_blocking=False):\n if null:\n self.ptr = None\n elif non_blocking:\n self.ptr = runtime.streamCreateWithFlags(runtime.streamNonBlocking)\n else:\n self.ptr = runtime.streamCreate()\n\n def __del__(self):\n if self.ptr:\n runtime.streamDestroy(self.ptr)\n self.ptr = None\n\n @property\n def done(self):\n \"\"\"True if all work on this stream has been done.\"\"\"\n return bool(runtime.streamQuery(self.ptr))\n\n def synchronize(self):\n \"\"\"Waits for the stream completing all queued work.\"\"\"\n runtime.streamSynchronize(self.ptr)\n\n def add_callback(self, callback, arg):\n \"\"\"Adds a callback that is called when all queued work is done.\n\n Args:\n callback (function): Callback function. It must take three\n arguments (Stream object, int error status, and user data\n object), and returns nothing.\n arg (object): Argument to the callback.\n\n \"\"\"\n runtime.streamAddCallback(self.ptr, callback, arg)\n\n def record(self, event=None):\n \"\"\"Records an event on the stream.\n\n Args:\n event (None or cupy.cuda.Event): CUDA event. If None, then a new\n plain event is created and used.\n\n Returns:\n cupy.cuda.Event: The recorded event.\n\n .. seealso:: :meth:`cupy.cuda.Event.record`\n\n \"\"\"\n if event is None:\n event = Event()\n runtime.eventRecord(event.ptr, self.ptr)\n return event\n\n def wait_event(self, event):\n \"\"\"Makes the stream wait for an event.\n\n The future work on this stream will be done after the event.\n\n Args:\n event (cupy.cuda.Event): CUDA event.\n\n \"\"\"\n runtime.streamWaitEvent(self.ptr, event)\n", "path": "cupy/cuda/stream.py"}], "after_files": [{"content": "from cupy.cuda import runtime\n\n\nclass Event(object):\n\n \"\"\"CUDA event, a synchronization point of CUDA streams.\n\n This class handles the CUDA event handle in RAII way, i.e., when an Event\n instance is destroyed by the GC, its handle is also destroyed.\n\n Args:\n block (bool): If True, the event blocks on the\n :meth:`~cupy.cuda.Event.synchronize` method.\n disable_timing (bool): If True, the event does not prepare the timing\n data.\n interprocess (bool): If True, the event can be passed to other\n processes.\n\n Attributes:\n ptr (cupy.cuda.runtime.Stream): Raw stream handle. 
It can be passed to\n the CUDA Runtime API via ctypes.\n\n \"\"\"\n def __init__(self, block=False, disable_timing=False, interprocess=False):\n self.ptr = 0\n\n if interprocess and not disable_timing:\n raise ValueError('Timing must be disabled for interprocess events')\n flag = ((block and runtime.eventBlockingSync) |\n (disable_timing and runtime.eventDisableTiming) |\n (interprocess and runtime.eventInterprocess))\n self.ptr = runtime.eventCreateWithFlags(flag)\n\n def __del__(self):\n if self.ptr:\n runtime.eventDestroy(self.ptr)\n\n @property\n def done(self):\n \"\"\"True if the event is done.\"\"\"\n return bool(runtime.eventQuery(self.ptr))\n\n def record(self, stream=None):\n \"\"\"Records the event to a stream.\n\n Args:\n stream (cupy.cuda.Stream): CUDA stream to record event. The null\n stream is used by default.\n\n .. seealso:: :meth:`cupy.cuda.Stream.record`\n\n \"\"\"\n if stream is None:\n stream = Stream(null=True)\n runtime.eventRecord(self.ptr, stream.ptr)\n\n def synchronize(self):\n \"\"\"Synchronizes all device work to the event.\n\n If the event is created as a blocking event, it also blocks the CPU\n thread until the event is done.\n\n \"\"\"\n runtime.eventSynchronize(self.ptr)\n\n\ndef get_elapsed_time(start_event, end_event):\n \"\"\"Gets the elapsed time between two events.\n\n Args:\n start_event (Event): Earlier event.\n end_event (Event): Later event.\n\n Returns:\n float: Elapsed time in milliseconds.\n\n \"\"\"\n return runtime.eventElapsedTime(start_event.ptr, end_event.ptr)\n\n\nclass Stream(object):\n\n \"\"\"CUDA stream.\n\n This class handles the CUDA stream handle in RAII way, i.e., when an Stream\n instance is destroyed by the GC, its handle is also destroyed.\n\n Args:\n null (bool): If True, the stream is a null stream (i.e. the default\n stream that synchronizes with all streams). Otherwise, a plain new\n stream is created.\n non_blocking (bool): If True, the stream does not synchronize with the\n NULL stream.\n\n Attributes:\n ptr (cupy.cuda.runtime.Stream): Raw stream handle. It can be passed to\n the CUDA Runtime API via ctypes.\n\n \"\"\"\n def __init__(self, null=False, non_blocking=False):\n if null:\n self.ptr = 0\n elif non_blocking:\n self.ptr = runtime.streamCreateWithFlags(runtime.streamNonBlocking)\n else:\n self.ptr = runtime.streamCreate()\n\n def __del__(self):\n if self.ptr:\n runtime.streamDestroy(self.ptr)\n\n @property\n def done(self):\n \"\"\"True if all work on this stream has been done.\"\"\"\n return bool(runtime.streamQuery(self.ptr))\n\n def synchronize(self):\n \"\"\"Waits for the stream completing all queued work.\"\"\"\n runtime.streamSynchronize(self.ptr)\n\n def add_callback(self, callback, arg):\n \"\"\"Adds a callback that is called when all queued work is done.\n\n Args:\n callback (function): Callback function. It must take three\n arguments (Stream object, int error status, and user data\n object), and returns nothing.\n arg (object): Argument to the callback.\n\n \"\"\"\n runtime.streamAddCallback(self.ptr, callback, arg)\n\n def record(self, event=None):\n \"\"\"Records an event on the stream.\n\n Args:\n event (None or cupy.cuda.Event): CUDA event. If None, then a new\n plain event is created and used.\n\n Returns:\n cupy.cuda.Event: The recorded event.\n\n .. 
seealso:: :meth:`cupy.cuda.Event.record`\n\n \"\"\"\n if event is None:\n event = Event()\n runtime.eventRecord(event.ptr, self.ptr)\n return event\n\n def wait_event(self, event):\n \"\"\"Makes the stream wait for an event.\n\n The future work on this stream will be done after the event.\n\n Args:\n event (cupy.cuda.Event): CUDA event.\n\n \"\"\"\n runtime.streamWaitEvent(self.ptr, event)\n", "path": "cupy/cuda/stream.py"}]} | 1,995 | 272 |
gh_patches_debug_18933 | rasdani/github-patches | git_diff | Qiskit__qiskit-1720 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BackendConfiguration fails validation if backend supports pulse
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Informations
- **Qiskit Terra version**: 0.8.0
- **Python version**: 3.6.6
- **Operating system**: OSX
### What is the current behavior?
If a backend sets `open_pulse=true` in its configuration Qiskit will raise a validation error when creating a `BackendConfigurationSchema`
### Steps to reproduce the problem
Create a backend with `open_pulse=true` set in its configuration.
### What is the expected behavior?
Should not fail.
### Suggested solutions
Allow `open_pulse=true` to be valid.
--- END ISSUE ---
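As an editorial note on the suggested solution: the failing constraint is the `Equal(False)` validator on `open_pulse` in `BackendConfigurationSchema`, shown in the file below. A minimal sketch of the kind of change the issue asks for, assuming the rest of the schema stays unchanged, would be:

```python
# Hypothetical relaxation of the schema field (illustrative only, not the actual patch):
open_pulse = Boolean(required=True)   # accept True as well, instead of validate=Equal(False)
```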
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qiskit/providers/models/backendconfiguration.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2018, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """Model and schema for backend configuration."""
9
10 from marshmallow.validate import Equal, Length, OneOf, Range, Regexp
11
12 from qiskit.validation import BaseModel, BaseSchema, bind_schema
13 from qiskit.validation.fields import Boolean, DateTime, Integer, List, Nested, String
14
15
16 class GateConfigSchema(BaseSchema):
17 """Schema for GateConfig."""
18
19 # Required properties.
20 name = String(required=True)
21 parameters = List(String(), required=True)
22 qasm_def = String(required=True)
23
24 # Optional properties.
25 coupling_map = List(List(Integer(),
26 validate=Length(min=1)),
27 validate=Length(min=1))
28 latency_map = List(List(Integer(validate=OneOf([0, 1])),
29 validate=Length(min=1)),
30 validate=Length(min=1))
31 conditional = Boolean()
32 description = String()
33
34
35 class BackendConfigurationSchema(BaseSchema):
36 """Schema for BackendConfiguration."""
37
38 # Required properties.
39 backend_name = String(required=True)
40 backend_version = String(required=True,
41 validate=Regexp("[0-9]+.[0-9]+.[0-9]+$"))
42 n_qubits = Integer(required=True, validate=Range(min=1))
43 basis_gates = List(String(), required=True,
44 validate=Length(min=1))
45 gates = Nested(GateConfigSchema, required=True, many=True,
46 validate=Length(min=1))
47 local = Boolean(required=True)
48 simulator = Boolean(required=True)
49 conditional = Boolean(required=True)
50 open_pulse = Boolean(required=True, validate=Equal(False))
51 memory = Boolean(required=True)
52 max_shots = Integer(required=True, validate=Range(min=1))
53
54 # Optional properties.
55 max_experiments = Integer(validate=Range(min=1))
56 sample_name = String()
57 coupling_map = List(List(Integer(),
58 validate=Length(min=1)),
59 validate=Length(min=1))
60 n_registers = Integer(validate=Range(min=1))
61 register_map = List(List(Integer(validate=OneOf([0, 1])),
62 validate=Length(min=1)),
63 validate=Length(min=1))
64 configurable = Boolean()
65 credits_required = Boolean()
66 online_date = DateTime()
67 display_name = String()
68 description = String()
69 tags = List(String())
70
71
72 @bind_schema(GateConfigSchema)
73 class GateConfig(BaseModel):
74 """Model for GateConfig.
75
76 Please note that this class only describes the required fields. For the
77 full description of the model, please check ``GateConfigSchema``.
78
79 Attributes:
80 name (str): the gate name as it will be referred to in QASM.
81 parameters (list[str]): variable names for the gate parameters (if any).
82 qasm_def (str): definition of this gate in terms of QASM primitives U
83 and CX.
84 """
85
86 def __init__(self, name, parameters, qasm_def, **kwargs):
87 self.name = name
88 self.parameters = parameters
89 self.qasm_def = qasm_def
90
91 super().__init__(**kwargs)
92
93
94 @bind_schema(BackendConfigurationSchema)
95 class BackendConfiguration(BaseModel):
96 """Model for BackendConfiguration.
97
98 Please note that this class only describes the required fields. For the
99 full description of the model, please check ``BackendConfigurationSchema``.
100 Attributes:
101 backend_name (str): backend name.
102 backend_version (str): backend version in the form X.Y.Z.
103 n_qubits (int): number of qubits.
104 basis_gates (list[str]): list of basis gates names on the backend.
105 gates (GateConfig): list of basis gates on the backend.
106 local (bool): backend is local or remote.
107 simulator (bool): backend is a simulator.
108 conditional (bool): backend supports conditional operations.
109 open_pulse (bool): backend supports open pulse.
110 memory (bool): backend supports memory.
111 max_shots (int): maximum number of shots supported.
112 """
113
114 def __init__(self, backend_name, backend_version, n_qubits, basis_gates,
115 gates, local, simulator, conditional, open_pulse, memory,
116 max_shots, **kwargs):
117 self.backend_name = backend_name
118 self.backend_version = backend_version
119 self.n_qubits = n_qubits
120 self.basis_gates = basis_gates
121 self.gates = gates
122 self.local = local
123 self.simulator = simulator
124 self.conditional = conditional
125 self.open_pulse = open_pulse
126 self.memory = memory
127 self.max_shots = max_shots
128
129 super().__init__(**kwargs)
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/qiskit/providers/models/backendconfiguration.py b/qiskit/providers/models/backendconfiguration.py
--- a/qiskit/providers/models/backendconfiguration.py
+++ b/qiskit/providers/models/backendconfiguration.py
@@ -7,7 +7,7 @@
"""Model and schema for backend configuration."""
-from marshmallow.validate import Equal, Length, OneOf, Range, Regexp
+from marshmallow.validate import Length, OneOf, Range, Regexp
from qiskit.validation import BaseModel, BaseSchema, bind_schema
from qiskit.validation.fields import Boolean, DateTime, Integer, List, Nested, String
@@ -47,7 +47,7 @@
local = Boolean(required=True)
simulator = Boolean(required=True)
conditional = Boolean(required=True)
- open_pulse = Boolean(required=True, validate=Equal(False))
+ open_pulse = Boolean(required=True)
memory = Boolean(required=True)
max_shots = Integer(required=True, validate=Range(min=1))
| {"golden_diff": "diff --git a/qiskit/providers/models/backendconfiguration.py b/qiskit/providers/models/backendconfiguration.py\n--- a/qiskit/providers/models/backendconfiguration.py\n+++ b/qiskit/providers/models/backendconfiguration.py\n@@ -7,7 +7,7 @@\n \n \"\"\"Model and schema for backend configuration.\"\"\"\n \n-from marshmallow.validate import Equal, Length, OneOf, Range, Regexp\n+from marshmallow.validate import Length, OneOf, Range, Regexp\n \n from qiskit.validation import BaseModel, BaseSchema, bind_schema\n from qiskit.validation.fields import Boolean, DateTime, Integer, List, Nested, String\n@@ -47,7 +47,7 @@\n local = Boolean(required=True)\n simulator = Boolean(required=True)\n conditional = Boolean(required=True)\n- open_pulse = Boolean(required=True, validate=Equal(False))\n+ open_pulse = Boolean(required=True)\n memory = Boolean(required=True)\n max_shots = Integer(required=True, validate=Range(min=1))\n", "issue": "BackendConfiguration fails validation if backend supports pulse\n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed -->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues -->\r\n\r\n### Informations\r\n\r\n- **Qiskit Terra version**:0.8.0\r\n- **Python version**3.6.6\r\n- **Operating system**:OSX\r\n\r\n### What is the current behavior?\r\nIf a backend sets `open_pulse=true` in its configuration Qiskit will raise a validation error when creating a `BackendConfigurationSchema`\r\n\r\n\r\n### Steps to reproduce the problem\r\nCreate a backend with `open_pulse=true` set in its configuration.\r\n\r\n\r\n### What is the expected behavior?\r\nShould not fail.\r\n\r\n\r\n### Suggested solutions\r\nAllow `open_pulse=true` to be valid.\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright 2018, IBM.\n#\n# This source code is licensed under the Apache License, Version 2.0 found in\n# the LICENSE.txt file in the root directory of this source tree.\n\n\"\"\"Model and schema for backend configuration.\"\"\"\n\nfrom marshmallow.validate import Equal, Length, OneOf, Range, Regexp\n\nfrom qiskit.validation import BaseModel, BaseSchema, bind_schema\nfrom qiskit.validation.fields import Boolean, DateTime, Integer, List, Nested, String\n\n\nclass GateConfigSchema(BaseSchema):\n \"\"\"Schema for GateConfig.\"\"\"\n\n # Required properties.\n name = String(required=True)\n parameters = List(String(), required=True)\n qasm_def = String(required=True)\n\n # Optional properties.\n coupling_map = List(List(Integer(),\n validate=Length(min=1)),\n validate=Length(min=1))\n latency_map = List(List(Integer(validate=OneOf([0, 1])),\n validate=Length(min=1)),\n validate=Length(min=1))\n conditional = Boolean()\n description = String()\n\n\nclass BackendConfigurationSchema(BaseSchema):\n \"\"\"Schema for BackendConfiguration.\"\"\"\n\n # Required properties.\n backend_name = String(required=True)\n backend_version = String(required=True,\n validate=Regexp(\"[0-9]+.[0-9]+.[0-9]+$\"))\n n_qubits = Integer(required=True, validate=Range(min=1))\n basis_gates = List(String(), required=True,\n validate=Length(min=1))\n gates = Nested(GateConfigSchema, required=True, many=True,\n validate=Length(min=1))\n local = Boolean(required=True)\n simulator = Boolean(required=True)\n conditional = Boolean(required=True)\n open_pulse = Boolean(required=True, validate=Equal(False))\n memory = Boolean(required=True)\n max_shots = Integer(required=True, validate=Range(min=1))\n\n # Optional properties.\n max_experiments = 
Integer(validate=Range(min=1))\n sample_name = String()\n coupling_map = List(List(Integer(),\n validate=Length(min=1)),\n validate=Length(min=1))\n n_registers = Integer(validate=Range(min=1))\n register_map = List(List(Integer(validate=OneOf([0, 1])),\n validate=Length(min=1)),\n validate=Length(min=1))\n configurable = Boolean()\n credits_required = Boolean()\n online_date = DateTime()\n display_name = String()\n description = String()\n tags = List(String())\n\n\n@bind_schema(GateConfigSchema)\nclass GateConfig(BaseModel):\n \"\"\"Model for GateConfig.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``GateConfigSchema``.\n\n Attributes:\n name (str): the gate name as it will be referred to in QASM.\n parameters (list[str]): variable names for the gate parameters (if any).\n qasm_def (str): definition of this gate in terms of QASM primitives U\n and CX.\n \"\"\"\n\n def __init__(self, name, parameters, qasm_def, **kwargs):\n self.name = name\n self.parameters = parameters\n self.qasm_def = qasm_def\n\n super().__init__(**kwargs)\n\n\n@bind_schema(BackendConfigurationSchema)\nclass BackendConfiguration(BaseModel):\n \"\"\"Model for BackendConfiguration.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``BackendConfigurationSchema``.\n Attributes:\n backend_name (str): backend name.\n backend_version (str): backend version in the form X.Y.Z.\n n_qubits (int): number of qubits.\n basis_gates (list[str]): list of basis gates names on the backend.\n gates (GateConfig): list of basis gates on the backend.\n local (bool): backend is local or remote.\n simulator (bool): backend is a simulator.\n conditional (bool): backend supports conditional operations.\n open_pulse (bool): backend supports open pulse.\n memory (bool): backend supports memory.\n max_shots (int): maximum number of shots supported.\n \"\"\"\n\n def __init__(self, backend_name, backend_version, n_qubits, basis_gates,\n gates, local, simulator, conditional, open_pulse, memory,\n max_shots, **kwargs):\n self.backend_name = backend_name\n self.backend_version = backend_version\n self.n_qubits = n_qubits\n self.basis_gates = basis_gates\n self.gates = gates\n self.local = local\n self.simulator = simulator\n self.conditional = conditional\n self.open_pulse = open_pulse\n self.memory = memory\n self.max_shots = max_shots\n\n super().__init__(**kwargs)\n", "path": "qiskit/providers/models/backendconfiguration.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright 2018, IBM.\n#\n# This source code is licensed under the Apache License, Version 2.0 found in\n# the LICENSE.txt file in the root directory of this source tree.\n\n\"\"\"Model and schema for backend configuration.\"\"\"\n\nfrom marshmallow.validate import Length, OneOf, Range, Regexp\n\nfrom qiskit.validation import BaseModel, BaseSchema, bind_schema\nfrom qiskit.validation.fields import Boolean, DateTime, Integer, List, Nested, String\n\n\nclass GateConfigSchema(BaseSchema):\n \"\"\"Schema for GateConfig.\"\"\"\n\n # Required properties.\n name = String(required=True)\n parameters = List(String(), required=True)\n qasm_def = String(required=True)\n\n # Optional properties.\n coupling_map = List(List(Integer(),\n validate=Length(min=1)),\n validate=Length(min=1))\n latency_map = List(List(Integer(validate=OneOf([0, 1])),\n validate=Length(min=1)),\n validate=Length(min=1))\n conditional = Boolean()\n description 
= String()\n\n\nclass BackendConfigurationSchema(BaseSchema):\n \"\"\"Schema for BackendConfiguration.\"\"\"\n\n # Required properties.\n backend_name = String(required=True)\n backend_version = String(required=True,\n validate=Regexp(\"[0-9]+.[0-9]+.[0-9]+$\"))\n n_qubits = Integer(required=True, validate=Range(min=1))\n basis_gates = List(String(), required=True,\n validate=Length(min=1))\n gates = Nested(GateConfigSchema, required=True, many=True,\n validate=Length(min=1))\n local = Boolean(required=True)\n simulator = Boolean(required=True)\n conditional = Boolean(required=True)\n open_pulse = Boolean(required=True)\n memory = Boolean(required=True)\n max_shots = Integer(required=True, validate=Range(min=1))\n\n # Optional properties.\n max_experiments = Integer(validate=Range(min=1))\n sample_name = String()\n coupling_map = List(List(Integer(),\n validate=Length(min=1)),\n validate=Length(min=1))\n n_registers = Integer(validate=Range(min=1))\n register_map = List(List(Integer(validate=OneOf([0, 1])),\n validate=Length(min=1)),\n validate=Length(min=1))\n configurable = Boolean()\n credits_required = Boolean()\n online_date = DateTime()\n display_name = String()\n description = String()\n tags = List(String())\n\n\n@bind_schema(GateConfigSchema)\nclass GateConfig(BaseModel):\n \"\"\"Model for GateConfig.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``GateConfigSchema``.\n\n Attributes:\n name (str): the gate name as it will be referred to in QASM.\n parameters (list[str]): variable names for the gate parameters (if any).\n qasm_def (str): definition of this gate in terms of QASM primitives U\n and CX.\n \"\"\"\n\n def __init__(self, name, parameters, qasm_def, **kwargs):\n self.name = name\n self.parameters = parameters\n self.qasm_def = qasm_def\n\n super().__init__(**kwargs)\n\n\n@bind_schema(BackendConfigurationSchema)\nclass BackendConfiguration(BaseModel):\n \"\"\"Model for BackendConfiguration.\n\n Please note that this class only describes the required fields. For the\n full description of the model, please check ``BackendConfigurationSchema``.\n Attributes:\n backend_name (str): backend name.\n backend_version (str): backend version in the form X.Y.Z.\n n_qubits (int): number of qubits.\n basis_gates (list[str]): list of basis gates names on the backend.\n gates (GateConfig): list of basis gates on the backend.\n local (bool): backend is local or remote.\n simulator (bool): backend is a simulator.\n conditional (bool): backend supports conditional operations.\n open_pulse (bool): backend supports open pulse.\n memory (bool): backend supports memory.\n max_shots (int): maximum number of shots supported.\n \"\"\"\n\n def __init__(self, backend_name, backend_version, n_qubits, basis_gates,\n gates, local, simulator, conditional, open_pulse, memory,\n max_shots, **kwargs):\n self.backend_name = backend_name\n self.backend_version = backend_version\n self.n_qubits = n_qubits\n self.basis_gates = basis_gates\n self.gates = gates\n self.local = local\n self.simulator = simulator\n self.conditional = conditional\n self.open_pulse = open_pulse\n self.memory = memory\n self.max_shots = max_shots\n\n super().__init__(**kwargs)\n", "path": "qiskit/providers/models/backendconfiguration.py"}]} | 1,745 | 209 |
gh_patches_debug_23149 | rasdani/github-patches | git_diff | frappe__frappe-26301 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Typing validations should be ignored for tests
## Description of the issue
https://github.com/frappe/frappe/blob/010aa4636ace30a9df4c09f0ca991169f34274b9/frappe/utils/typing_validations.py#L164
If you're writing Frappe tests using the `unittest.mock` module, there might be cases where the argument object is replaced with a `Mock` or `MagicMock` object. This breaks typing validations when running CI tests using the `develop` branch.
I think a reasonable approach could be to either ignore all validations during tests, and/or allow configuring this behaviour per-test (with the default being "ignore").
## Context
**Output of `bench version`**
```
frappe 14.14.2
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `frappe/utils/typing_validations.py`
Content:
```
1 from collections.abc import Callable
2 from functools import lru_cache, wraps
3 from inspect import _empty, isclass, signature
4 from types import EllipsisType
5 from typing import ForwardRef, TypeVar, Union
6
7 from pydantic import ConfigDict
8
9 from frappe.exceptions import FrappeTypeError
10
11 SLACK_DICT = {
12 bool: (int, bool, float),
13 }
14 T = TypeVar("T")
15
16
17 FrappePydanticConfig = ConfigDict(arbitrary_types_allowed=True)
18
19
20 def validate_argument_types(func: Callable, apply_condition: Callable = lambda: True):
21 @wraps(func)
22 def wrapper(*args, **kwargs):
23 """Validate argument types of whitelisted functions.
24
25 :param args: Function arguments.
26 :param kwargs: Function keyword arguments."""
27
28 if apply_condition():
29 args, kwargs = transform_parameter_types(func, args, kwargs)
30
31 return func(*args, **kwargs)
32
33 return wrapper
34
35
36 def qualified_name(obj) -> str:
37 """
38 Return the qualified name (e.g. package.module.Type) for the given object.
39
40 Builtins and types from the :mod:typing package get special treatment by having the module
41 name stripped from the generated name.
42
43 """
44 discovered_type = obj if isclass(obj) else type(obj)
45 module, qualname = discovered_type.__module__, discovered_type.__qualname__
46
47 if module in {"typing", "types"}:
48 return obj
49 elif module in {"builtins"}:
50 return qualname
51 else:
52 return f"{module}.{qualname}"
53
54
55 def raise_type_error(
56 arg_name: str, arg_type: type, arg_value: object, current_exception: Exception | None = None
57 ):
58 """
59 Raise a TypeError with a message that includes the name of the argument, the expected type
60 and the actual type of the value passed.
61
62 """
63 raise FrappeTypeError(
64 f"Argument '{arg_name}' should be of type '{qualified_name(arg_type)}' but got "
65 f"'{qualified_name(arg_value)}' instead."
66 ) from current_exception
67
68
69 @lru_cache(maxsize=2048)
70 def TypeAdapter(type_):
71 from pydantic import TypeAdapter as PyTypeAdapter
72
73 return PyTypeAdapter(type_, config=FrappePydanticConfig)
74
75
76 def transform_parameter_types(func: Callable, args: tuple, kwargs: dict):
77 """
78 Validate the types of the arguments passed to a function with the type annotations
79 defined on the function.
80
81 """
82 if not (args or kwargs) or not func.__annotations__:
83 return args, kwargs
84
85 from pydantic import ValidationError as PyValidationError
86
87 annotations = func.__annotations__
88 new_args, new_kwargs = list(args), kwargs
89
90 # generate kwargs dict from args
91 arg_names = func.__code__.co_varnames[: func.__code__.co_argcount]
92
93 if not args:
94 prepared_args = kwargs
95
96 elif kwargs:
97 arg_values = args or func.__defaults__ or []
98 prepared_args = dict(zip(arg_names, arg_values, strict=False))
99 prepared_args.update(kwargs)
100
101 else:
102 prepared_args = dict(zip(arg_names, args, strict=False))
103
104 # check if type hints dont match the default values
105 func_signature = signature(func)
106 func_params = dict(func_signature.parameters)
107
108 # check if the argument types are correct
109 for current_arg, current_arg_type in annotations.items():
110 if current_arg not in prepared_args:
111 continue
112
113 current_arg_value = prepared_args[current_arg]
114
115 # if the type is a ForwardRef or str, ignore it
116 if isinstance(current_arg_type, ForwardRef | str):
117 continue
118 elif any(isinstance(x, ForwardRef | str) for x in getattr(current_arg_type, "__args__", [])):
119 continue
120
121 # allow slack for Frappe types
122 if current_arg_type in SLACK_DICT:
123 current_arg_type = SLACK_DICT[current_arg_type]
124
125 param_def = func_params.get(current_arg)
126
127 # add default value's type in acceptable types
128 if param_def.default is not _empty:
129 if isinstance(current_arg_type, tuple):
130 if type(param_def.default) not in current_arg_type:
131 current_arg_type += (type(param_def.default),)
132 current_arg_type = Union[current_arg_type] # noqa: UP007
133
134 elif param_def.default != current_arg_type:
135 current_arg_type = Union[current_arg_type, type(param_def.default)] # noqa: UP007
136 elif isinstance(current_arg_type, tuple):
137 current_arg_type = Union[current_arg_type] # noqa: UP007
138
139 # validate the type set using pydantic - raise a TypeError if Validation is raised or Ellipsis is returned
140 try:
141 current_arg_value_after = TypeAdapter(current_arg_type).validate_python(current_arg_value)
142 except (TypeError, PyValidationError) as e:
143 raise_type_error(current_arg, current_arg_type, current_arg_value, current_exception=e)
144
145 if isinstance(current_arg_value_after, EllipsisType):
146 raise_type_error(current_arg, current_arg_type, current_arg_value)
147
148 # update the args and kwargs with possibly casted value
149 if current_arg in kwargs:
150 new_kwargs[current_arg] = current_arg_value_after
151 else:
152 new_args[arg_names.index(current_arg)] = current_arg_value_after
153
154 return new_args, new_kwargs
155
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/frappe/utils/typing_validations.py b/frappe/utils/typing_validations.py
--- a/frappe/utils/typing_validations.py
+++ b/frappe/utils/typing_validations.py
@@ -3,6 +3,7 @@
from inspect import _empty, isclass, signature
from types import EllipsisType
from typing import ForwardRef, TypeVar, Union
+from unittest import mock
from pydantic import ConfigDict
@@ -77,8 +78,8 @@
"""
Validate the types of the arguments passed to a function with the type annotations
defined on the function.
-
"""
+
if not (args or kwargs) or not func.__annotations__:
return args, kwargs
@@ -117,6 +118,9 @@
continue
elif any(isinstance(x, ForwardRef | str) for x in getattr(current_arg_type, "__args__", [])):
continue
+ # ignore unittest.mock objects
+ elif isinstance(current_arg_value, mock.Mock):
+ continue
# allow slack for Frappe types
if current_arg_type in SLACK_DICT:
| {"golden_diff": "diff --git a/frappe/utils/typing_validations.py b/frappe/utils/typing_validations.py\n--- a/frappe/utils/typing_validations.py\n+++ b/frappe/utils/typing_validations.py\n@@ -3,6 +3,7 @@\n from inspect import _empty, isclass, signature\n from types import EllipsisType\n from typing import ForwardRef, TypeVar, Union\n+from unittest import mock\n \n from pydantic import ConfigDict\n \n@@ -77,8 +78,8 @@\n \t\"\"\"\n \tValidate the types of the arguments passed to a function with the type annotations\n \tdefined on the function.\n-\n \t\"\"\"\n+\n \tif not (args or kwargs) or not func.__annotations__:\n \t\treturn args, kwargs\n \n@@ -117,6 +118,9 @@\n \t\t\tcontinue\n \t\telif any(isinstance(x, ForwardRef | str) for x in getattr(current_arg_type, \"__args__\", [])):\n \t\t\tcontinue\n+\t\t# ignore unittest.mock objects\n+\t\telif isinstance(current_arg_value, mock.Mock):\n+\t\t\tcontinue\n \n \t\t# allow slack for Frappe types\n \t\tif current_arg_type in SLACK_DICT:\n", "issue": "Typing validations should be ignored for tests\n## Description of the issue\r\nhttps://github.com/frappe/frappe/blob/010aa4636ace30a9df4c09f0ca991169f34274b9/frappe/utils/typing_validations.py#L164\r\n\r\nIf you're writing Frappe tests using the `unittest.mock` module, there might be cases where the argument object is replaced with a `Mock` or `MagicMock` object. This breaks typing validations when running CI tests using the `develop` branch.\r\n\r\nI think a reasonable approach could be to either ignore all validations during tests, and/or allow configuring this behaviour per-test (with the default being \"ignore\").\r\n\r\n## Context\r\n\r\n**Output of `bench version`**\r\n```\r\nfrappe 14.14.2\r\n```\r\n\n", "before_files": [{"content": "from collections.abc import Callable\nfrom functools import lru_cache, wraps\nfrom inspect import _empty, isclass, signature\nfrom types import EllipsisType\nfrom typing import ForwardRef, TypeVar, Union\n\nfrom pydantic import ConfigDict\n\nfrom frappe.exceptions import FrappeTypeError\n\nSLACK_DICT = {\n\tbool: (int, bool, float),\n}\nT = TypeVar(\"T\")\n\n\nFrappePydanticConfig = ConfigDict(arbitrary_types_allowed=True)\n\n\ndef validate_argument_types(func: Callable, apply_condition: Callable = lambda: True):\n\t@wraps(func)\n\tdef wrapper(*args, **kwargs):\n\t\t\"\"\"Validate argument types of whitelisted functions.\n\n\t\t:param args: Function arguments.\n\t\t:param kwargs: Function keyword arguments.\"\"\"\n\n\t\tif apply_condition():\n\t\t\targs, kwargs = transform_parameter_types(func, args, kwargs)\n\n\t\treturn func(*args, **kwargs)\n\n\treturn wrapper\n\n\ndef qualified_name(obj) -> str:\n\t\"\"\"\n\tReturn the qualified name (e.g. 
package.module.Type) for the given object.\n\n\tBuiltins and types from the :mod:typing package get special treatment by having the module\n\tname stripped from the generated name.\n\n\t\"\"\"\n\tdiscovered_type = obj if isclass(obj) else type(obj)\n\tmodule, qualname = discovered_type.__module__, discovered_type.__qualname__\n\n\tif module in {\"typing\", \"types\"}:\n\t\treturn obj\n\telif module in {\"builtins\"}:\n\t\treturn qualname\n\telse:\n\t\treturn f\"{module}.{qualname}\"\n\n\ndef raise_type_error(\n\targ_name: str, arg_type: type, arg_value: object, current_exception: Exception | None = None\n):\n\t\"\"\"\n\tRaise a TypeError with a message that includes the name of the argument, the expected type\n\tand the actual type of the value passed.\n\n\t\"\"\"\n\traise FrappeTypeError(\n\t\tf\"Argument '{arg_name}' should be of type '{qualified_name(arg_type)}' but got \"\n\t\tf\"'{qualified_name(arg_value)}' instead.\"\n\t) from current_exception\n\n\n@lru_cache(maxsize=2048)\ndef TypeAdapter(type_):\n\tfrom pydantic import TypeAdapter as PyTypeAdapter\n\n\treturn PyTypeAdapter(type_, config=FrappePydanticConfig)\n\n\ndef transform_parameter_types(func: Callable, args: tuple, kwargs: dict):\n\t\"\"\"\n\tValidate the types of the arguments passed to a function with the type annotations\n\tdefined on the function.\n\n\t\"\"\"\n\tif not (args or kwargs) or not func.__annotations__:\n\t\treturn args, kwargs\n\n\tfrom pydantic import ValidationError as PyValidationError\n\n\tannotations = func.__annotations__\n\tnew_args, new_kwargs = list(args), kwargs\n\n\t# generate kwargs dict from args\n\targ_names = func.__code__.co_varnames[: func.__code__.co_argcount]\n\n\tif not args:\n\t\tprepared_args = kwargs\n\n\telif kwargs:\n\t\targ_values = args or func.__defaults__ or []\n\t\tprepared_args = dict(zip(arg_names, arg_values, strict=False))\n\t\tprepared_args.update(kwargs)\n\n\telse:\n\t\tprepared_args = dict(zip(arg_names, args, strict=False))\n\n\t# check if type hints dont match the default values\n\tfunc_signature = signature(func)\n\tfunc_params = dict(func_signature.parameters)\n\n\t# check if the argument types are correct\n\tfor current_arg, current_arg_type in annotations.items():\n\t\tif current_arg not in prepared_args:\n\t\t\tcontinue\n\n\t\tcurrent_arg_value = prepared_args[current_arg]\n\n\t\t# if the type is a ForwardRef or str, ignore it\n\t\tif isinstance(current_arg_type, ForwardRef | str):\n\t\t\tcontinue\n\t\telif any(isinstance(x, ForwardRef | str) for x in getattr(current_arg_type, \"__args__\", [])):\n\t\t\tcontinue\n\n\t\t# allow slack for Frappe types\n\t\tif current_arg_type in SLACK_DICT:\n\t\t\tcurrent_arg_type = SLACK_DICT[current_arg_type]\n\n\t\tparam_def = func_params.get(current_arg)\n\n\t\t# add default value's type in acceptable types\n\t\tif param_def.default is not _empty:\n\t\t\tif isinstance(current_arg_type, tuple):\n\t\t\t\tif type(param_def.default) not in current_arg_type:\n\t\t\t\t\tcurrent_arg_type += (type(param_def.default),)\n\t\t\t\tcurrent_arg_type = Union[current_arg_type] # noqa: UP007\n\n\t\t\telif param_def.default != current_arg_type:\n\t\t\t\tcurrent_arg_type = Union[current_arg_type, type(param_def.default)] # noqa: UP007\n\t\telif isinstance(current_arg_type, tuple):\n\t\t\tcurrent_arg_type = Union[current_arg_type] # noqa: UP007\n\n\t\t# validate the type set using pydantic - raise a TypeError if Validation is raised or Ellipsis is returned\n\t\ttry:\n\t\t\tcurrent_arg_value_after = 
TypeAdapter(current_arg_type).validate_python(current_arg_value)\n\t\texcept (TypeError, PyValidationError) as e:\n\t\t\traise_type_error(current_arg, current_arg_type, current_arg_value, current_exception=e)\n\n\t\tif isinstance(current_arg_value_after, EllipsisType):\n\t\t\traise_type_error(current_arg, current_arg_type, current_arg_value)\n\n\t\t# update the args and kwargs with possibly casted value\n\t\tif current_arg in kwargs:\n\t\t\tnew_kwargs[current_arg] = current_arg_value_after\n\t\telse:\n\t\t\tnew_args[arg_names.index(current_arg)] = current_arg_value_after\n\n\treturn new_args, new_kwargs\n", "path": "frappe/utils/typing_validations.py"}], "after_files": [{"content": "from collections.abc import Callable\nfrom functools import lru_cache, wraps\nfrom inspect import _empty, isclass, signature\nfrom types import EllipsisType\nfrom typing import ForwardRef, TypeVar, Union\nfrom unittest import mock\n\nfrom pydantic import ConfigDict\n\nfrom frappe.exceptions import FrappeTypeError\n\nSLACK_DICT = {\n\tbool: (int, bool, float),\n}\nT = TypeVar(\"T\")\n\n\nFrappePydanticConfig = ConfigDict(arbitrary_types_allowed=True)\n\n\ndef validate_argument_types(func: Callable, apply_condition: Callable = lambda: True):\n\t@wraps(func)\n\tdef wrapper(*args, **kwargs):\n\t\t\"\"\"Validate argument types of whitelisted functions.\n\n\t\t:param args: Function arguments.\n\t\t:param kwargs: Function keyword arguments.\"\"\"\n\n\t\tif apply_condition():\n\t\t\targs, kwargs = transform_parameter_types(func, args, kwargs)\n\n\t\treturn func(*args, **kwargs)\n\n\treturn wrapper\n\n\ndef qualified_name(obj) -> str:\n\t\"\"\"\n\tReturn the qualified name (e.g. package.module.Type) for the given object.\n\n\tBuiltins and types from the :mod:typing package get special treatment by having the module\n\tname stripped from the generated name.\n\n\t\"\"\"\n\tdiscovered_type = obj if isclass(obj) else type(obj)\n\tmodule, qualname = discovered_type.__module__, discovered_type.__qualname__\n\n\tif module in {\"typing\", \"types\"}:\n\t\treturn obj\n\telif module in {\"builtins\"}:\n\t\treturn qualname\n\telse:\n\t\treturn f\"{module}.{qualname}\"\n\n\ndef raise_type_error(\n\targ_name: str, arg_type: type, arg_value: object, current_exception: Exception | None = None\n):\n\t\"\"\"\n\tRaise a TypeError with a message that includes the name of the argument, the expected type\n\tand the actual type of the value passed.\n\n\t\"\"\"\n\traise FrappeTypeError(\n\t\tf\"Argument '{arg_name}' should be of type '{qualified_name(arg_type)}' but got \"\n\t\tf\"'{qualified_name(arg_value)}' instead.\"\n\t) from current_exception\n\n\n@lru_cache(maxsize=2048)\ndef TypeAdapter(type_):\n\tfrom pydantic import TypeAdapter as PyTypeAdapter\n\n\treturn PyTypeAdapter(type_, config=FrappePydanticConfig)\n\n\ndef transform_parameter_types(func: Callable, args: tuple, kwargs: dict):\n\t\"\"\"\n\tValidate the types of the arguments passed to a function with the type annotations\n\tdefined on the function.\n\t\"\"\"\n\n\tif not (args or kwargs) or not func.__annotations__:\n\t\treturn args, kwargs\n\n\tfrom pydantic import ValidationError as PyValidationError\n\n\tannotations = func.__annotations__\n\tnew_args, new_kwargs = list(args), kwargs\n\n\t# generate kwargs dict from args\n\targ_names = func.__code__.co_varnames[: func.__code__.co_argcount]\n\n\tif not args:\n\t\tprepared_args = kwargs\n\n\telif kwargs:\n\t\targ_values = args or func.__defaults__ or []\n\t\tprepared_args = dict(zip(arg_names, arg_values, 
strict=False))\n\t\tprepared_args.update(kwargs)\n\n\telse:\n\t\tprepared_args = dict(zip(arg_names, args, strict=False))\n\n\t# check if type hints dont match the default values\n\tfunc_signature = signature(func)\n\tfunc_params = dict(func_signature.parameters)\n\n\t# check if the argument types are correct\n\tfor current_arg, current_arg_type in annotations.items():\n\t\tif current_arg not in prepared_args:\n\t\t\tcontinue\n\n\t\tcurrent_arg_value = prepared_args[current_arg]\n\n\t\t# if the type is a ForwardRef or str, ignore it\n\t\tif isinstance(current_arg_type, ForwardRef | str):\n\t\t\tcontinue\n\t\telif any(isinstance(x, ForwardRef | str) for x in getattr(current_arg_type, \"__args__\", [])):\n\t\t\tcontinue\n\t\t# ignore unittest.mock objects\n\t\telif isinstance(current_arg_value, mock.Mock):\n\t\t\tcontinue\n\n\t\t# allow slack for Frappe types\n\t\tif current_arg_type in SLACK_DICT:\n\t\t\tcurrent_arg_type = SLACK_DICT[current_arg_type]\n\n\t\tparam_def = func_params.get(current_arg)\n\n\t\t# add default value's type in acceptable types\n\t\tif param_def.default is not _empty:\n\t\t\tif isinstance(current_arg_type, tuple):\n\t\t\t\tif type(param_def.default) not in current_arg_type:\n\t\t\t\t\tcurrent_arg_type += (type(param_def.default),)\n\t\t\t\tcurrent_arg_type = Union[current_arg_type] # noqa: UP007\n\n\t\t\telif param_def.default != current_arg_type:\n\t\t\t\tcurrent_arg_type = Union[current_arg_type, type(param_def.default)] # noqa: UP007\n\t\telif isinstance(current_arg_type, tuple):\n\t\t\tcurrent_arg_type = Union[current_arg_type] # noqa: UP007\n\n\t\t# validate the type set using pydantic - raise a TypeError if Validation is raised or Ellipsis is returned\n\t\ttry:\n\t\t\tcurrent_arg_value_after = TypeAdapter(current_arg_type).validate_python(current_arg_value)\n\t\texcept (TypeError, PyValidationError) as e:\n\t\t\traise_type_error(current_arg, current_arg_type, current_arg_value, current_exception=e)\n\n\t\tif isinstance(current_arg_value_after, EllipsisType):\n\t\t\traise_type_error(current_arg, current_arg_type, current_arg_value)\n\n\t\t# update the args and kwargs with possibly casted value\n\t\tif current_arg in kwargs:\n\t\t\tnew_kwargs[current_arg] = current_arg_value_after\n\t\telse:\n\t\t\tnew_args[arg_names.index(current_arg)] = current_arg_value_after\n\n\treturn new_args, new_kwargs\n", "path": "frappe/utils/typing_validations.py"}]} | 2,006 | 251 |
gh_patches_debug_16021 | rasdani/github-patches | git_diff | wagtail__wagtail-8270 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ThumbnailMixin does not display in header the value defined under thumb_col_header_text
<!--
Found a bug? Please fill out the sections below. 👍
-->
### Issue Summary
When adding ThumbnailMixin to a ModelAdmin, and giving it the `thumb_col_header_text` attribute, should display that on the list header for the thumbnail. but it always uses the default defined 'image'


### Steps to Reproduce
1. (for example) Start a new project with `wagtail start myproject`
2. in models.py add a new model (non page) with a forignkey to wagtailimages.Image
3. add model admin definition in wagtail_hooks.py
4. add ThumbnailMixin to model admin super classes
5. add some value to thumb_col_header_text
6. register new model admin
7. load app
8. add new instance of your new model with an image
9. in list header for your image it will say 'image' not what you defined in thumb_col_header_text
Any other relevant information. For example, why do you consider this a bug and what did you expect to happen instead?
* I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: (yes)
* i already know why this is happening and will submit a pull request shortly
### Technical details
* Python version: 3.9.7
* Django version: 4.0.3
* Wagtail version: 2.16.1
* Browser version: Chrome Version 100.0.4896.60 (Official Build) (x86_64)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/contrib/modeladmin/mixins.py`
Content:
```
1 from django.conf import settings
2 from django.core.exceptions import ImproperlyConfigured
3 from django.forms.utils import flatatt
4 from django.utils.safestring import mark_safe
5 from django.utils.translation import gettext_lazy as _
6
7
8 class ThumbnailMixin:
9 """
10 Mixin class to help display thumbnail images in ModelAdmin listing results.
11 `thumb_image_field_name` must be overridden to name a ForeignKey field on
12 your model, linking to `wagtailimages.Image`.
13 """
14
15 thumb_image_field_name = "image"
16 thumb_image_filter_spec = "fill-100x100"
17 thumb_image_width = 50
18 thumb_classname = "admin-thumb"
19 thumb_col_header_text = _("image")
20 thumb_default = None
21
22 def __init__(self, *args, **kwargs):
23 if "wagtail.images" not in settings.INSTALLED_APPS:
24 raise ImproperlyConfigured(
25 "The `wagtail.images` app must be installed in order "
26 "to use the `ThumbnailMixin` class."
27 )
28 super().__init__(*args, **kwargs)
29
30 def admin_thumb(self, obj):
31 try:
32 image = getattr(obj, self.thumb_image_field_name, None)
33 except AttributeError:
34 raise ImproperlyConfigured(
35 "The `thumb_image_field_name` attribute on your `%s` class "
36 "must name a field on your model." % self.__class__.__name__
37 )
38
39 img_attrs = {
40 "src": self.thumb_default,
41 "width": self.thumb_image_width,
42 "class": self.thumb_classname,
43 }
44 if not image:
45 if self.thumb_default:
46 return mark_safe("<img{}>".format(flatatt(img_attrs)))
47 return ""
48
49 # try to get a rendition of the image to use
50 from wagtail.images.shortcuts import get_rendition_or_not_found
51
52 spec = self.thumb_image_filter_spec
53 rendition = get_rendition_or_not_found(image, spec)
54 img_attrs.update({"src": rendition.url})
55 return mark_safe("<img{}>".format(flatatt(img_attrs)))
56
57 admin_thumb.short_description = thumb_col_header_text
58
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/contrib/modeladmin/mixins.py b/wagtail/contrib/modeladmin/mixins.py
--- a/wagtail/contrib/modeladmin/mixins.py
+++ b/wagtail/contrib/modeladmin/mixins.py
@@ -25,6 +25,7 @@
"The `wagtail.images` app must be installed in order "
"to use the `ThumbnailMixin` class."
)
+ self.__class__.admin_thumb.short_description = self.thumb_col_header_text
super().__init__(*args, **kwargs)
def admin_thumb(self, obj):
@@ -53,5 +54,3 @@
rendition = get_rendition_or_not_found(image, spec)
img_attrs.update({"src": rendition.url})
return mark_safe("<img{}>".format(flatatt(img_attrs)))
-
- admin_thumb.short_description = thumb_col_header_text
| {"golden_diff": "diff --git a/wagtail/contrib/modeladmin/mixins.py b/wagtail/contrib/modeladmin/mixins.py\n--- a/wagtail/contrib/modeladmin/mixins.py\n+++ b/wagtail/contrib/modeladmin/mixins.py\n@@ -25,6 +25,7 @@\n \"The `wagtail.images` app must be installed in order \"\n \"to use the `ThumbnailMixin` class.\"\n )\n+ self.__class__.admin_thumb.short_description = self.thumb_col_header_text\n super().__init__(*args, **kwargs)\n \n def admin_thumb(self, obj):\n@@ -53,5 +54,3 @@\n rendition = get_rendition_or_not_found(image, spec)\n img_attrs.update({\"src\": rendition.url})\n return mark_safe(\"<img{}>\".format(flatatt(img_attrs)))\n-\n- admin_thumb.short_description = thumb_col_header_text\n", "issue": "ThumbnailMixin does not display in header the value defined under thumb_col_header_text \n<!--\r\nFound a bug? Please fill out the sections below. \ud83d\udc4d\r\n-->\r\n\r\n### Issue Summary\r\n\r\nWhen adding ThumbnailMixin to a ModelAdmin, and giving it the `thumb_col_header_text` attribute, should display that on the list header for the thumbnail. but it always uses the default defined 'image' \r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n### Steps to Reproduce\r\n\r\n1. (for example) Start a new project with `wagtail start myproject`\r\n2. in models.py add a new model (non page) with a forignkey to wagtailimages.Image \r\n3. add model admin definition in wagtail_hooks.py\r\n4. add ThumbnailMixin to model admin super classes\r\n5. add some value to thumb_col_header_text\r\n6. register new model admin\r\n7. load app\r\n8. add new instance of your new model with an image\r\n9. in list header for your image it will say 'image' not what you defined in thumb_col_header_text\r\n\r\nAny other relevant information. For example, why do you consider this a bug and what did you expect to happen instead?\r\n\r\n* I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: (yes)\r\n* i already know why this is happening and will submit a pull request shortly\r\n\r\n\r\n### Technical details\r\n\r\n* Python version: 3.9.7\r\n* Django version: 4.0.3\r\n* Wagtail version: 2.16.1\r\n* Browser version: Chrome Version 100.0.4896.60 (Official Build) (x86_64)\r\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.forms.utils import flatatt\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import gettext_lazy as _\n\n\nclass ThumbnailMixin:\n \"\"\"\n Mixin class to help display thumbnail images in ModelAdmin listing results.\n `thumb_image_field_name` must be overridden to name a ForeignKey field on\n your model, linking to `wagtailimages.Image`.\n \"\"\"\n\n thumb_image_field_name = \"image\"\n thumb_image_filter_spec = \"fill-100x100\"\n thumb_image_width = 50\n thumb_classname = \"admin-thumb\"\n thumb_col_header_text = _(\"image\")\n thumb_default = None\n\n def __init__(self, *args, **kwargs):\n if \"wagtail.images\" not in settings.INSTALLED_APPS:\n raise ImproperlyConfigured(\n \"The `wagtail.images` app must be installed in order \"\n \"to use the `ThumbnailMixin` class.\"\n )\n super().__init__(*args, **kwargs)\n\n def admin_thumb(self, obj):\n try:\n image = getattr(obj, self.thumb_image_field_name, None)\n except AttributeError:\n raise ImproperlyConfigured(\n \"The `thumb_image_field_name` attribute on your `%s` class \"\n \"must name a field on your model.\" % self.__class__.__name__\n )\n\n img_attrs = {\n \"src\": self.thumb_default,\n \"width\": 
self.thumb_image_width,\n \"class\": self.thumb_classname,\n }\n if not image:\n if self.thumb_default:\n return mark_safe(\"<img{}>\".format(flatatt(img_attrs)))\n return \"\"\n\n # try to get a rendition of the image to use\n from wagtail.images.shortcuts import get_rendition_or_not_found\n\n spec = self.thumb_image_filter_spec\n rendition = get_rendition_or_not_found(image, spec)\n img_attrs.update({\"src\": rendition.url})\n return mark_safe(\"<img{}>\".format(flatatt(img_attrs)))\n\n admin_thumb.short_description = thumb_col_header_text\n", "path": "wagtail/contrib/modeladmin/mixins.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.forms.utils import flatatt\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import gettext_lazy as _\n\n\nclass ThumbnailMixin:\n \"\"\"\n Mixin class to help display thumbnail images in ModelAdmin listing results.\n `thumb_image_field_name` must be overridden to name a ForeignKey field on\n your model, linking to `wagtailimages.Image`.\n \"\"\"\n\n thumb_image_field_name = \"image\"\n thumb_image_filter_spec = \"fill-100x100\"\n thumb_image_width = 50\n thumb_classname = \"admin-thumb\"\n thumb_col_header_text = _(\"image\")\n thumb_default = None\n\n def __init__(self, *args, **kwargs):\n if \"wagtail.images\" not in settings.INSTALLED_APPS:\n raise ImproperlyConfigured(\n \"The `wagtail.images` app must be installed in order \"\n \"to use the `ThumbnailMixin` class.\"\n )\n self.__class__.admin_thumb.short_description = self.thumb_col_header_text\n super().__init__(*args, **kwargs)\n\n def admin_thumb(self, obj):\n try:\n image = getattr(obj, self.thumb_image_field_name, None)\n except AttributeError:\n raise ImproperlyConfigured(\n \"The `thumb_image_field_name` attribute on your `%s` class \"\n \"must name a field on your model.\" % self.__class__.__name__\n )\n\n img_attrs = {\n \"src\": self.thumb_default,\n \"width\": self.thumb_image_width,\n \"class\": self.thumb_classname,\n }\n if not image:\n if self.thumb_default:\n return mark_safe(\"<img{}>\".format(flatatt(img_attrs)))\n return \"\"\n\n # try to get a rendition of the image to use\n from wagtail.images.shortcuts import get_rendition_or_not_found\n\n spec = self.thumb_image_filter_spec\n rendition = get_rendition_or_not_found(image, spec)\n img_attrs.update({\"src\": rendition.url})\n return mark_safe(\"<img{}>\".format(flatatt(img_attrs)))\n", "path": "wagtail/contrib/modeladmin/mixins.py"}]} | 1,307 | 197 |
gh_patches_debug_39253 | rasdani/github-patches | git_diff | lightly-ai__lightly-1531 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug in `GatherLayer.backward`
Hi,
We've been implementing a model at [cellarium-ml](https://github.com/cellarium-ai/cellarium-ml) using your `NTXentLoss`. Comparing the model training with a single GPU and two GPUs we noticed that they do not match. By investigating it we found an apparent bug in the `GatherLayer.backward` where gradients are not sum-reduced over GPUs. Here is our fixed version (https://github.com/cellarium-ai/cellarium-ml/blob/main/cellarium/ml/distributed/gather.py#L17-L21):
```py
@staticmethod
def backward(ctx, *grads) -> torch.Tensor:
grad_out = grads[dist.get_rank()].contiguous()
dist.all_reduce(grad_out, op=dist.ReduceOp.SUM)
return grad_out
```
and the [test](https://github.com/cellarium-ai/cellarium-ml/blob/main/tests/distributed/test_gather.py) we wrote. Would you agree that this is indeed a bug? I would be happy to contribute a PR with the fix.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lightly/utils/dist.py`
Content:
```
1 from typing import Optional, Tuple
2
3 import torch
4 import torch.distributed as dist
5
6
7 class GatherLayer(torch.autograd.Function):
8 """Gather tensors from all processes, supporting backward propagation.
9
10 This code was taken and adapted from here:
11 https://github.com/Spijkervet/SimCLR
12
13 """
14
15 @staticmethod
16 def forward(ctx, input: torch.Tensor) -> Tuple[torch.Tensor, ...]:
17 ctx.save_for_backward(input)
18 output = [torch.empty_like(input) for _ in range(dist.get_world_size())]
19 dist.all_gather(output, input)
20 return tuple(output)
21
22 @staticmethod
23 def backward(ctx, *grads: torch.Tensor) -> torch.Tensor:
24 (input,) = ctx.saved_tensors
25 grad_out = torch.empty_like(input)
26 grad_out[:] = grads[dist.get_rank()]
27 return grad_out
28
29
30 def rank() -> int:
31 """Returns the rank of the current process."""
32 return dist.get_rank() if dist.is_initialized() else 0
33
34
35 def world_size() -> int:
36 """Returns the current world size (number of distributed processes)."""
37 return dist.get_world_size() if dist.is_initialized() else 1
38
39
40 def gather(input: torch.Tensor) -> Tuple[torch.Tensor]:
41 """Gathers this tensor from all processes. Supports backprop."""
42 return GatherLayer.apply(input)
43
44
45 def eye_rank(n: int, device: Optional[torch.device] = None) -> torch.Tensor:
46 """Returns an (n, n * world_size) zero matrix with the diagonal for the rank
47 of this process set to 1.
48
49 Example output where n=3, the current process has rank 1, and there are
50 4 processes in total:
51
52 rank0 rank1 rank2 rank3
53 0 0 0 | 1 0 0 | 0 0 0 | 0 0 0
54 0 0 0 | 0 1 0 | 0 0 0 | 0 0 0
55 0 0 0 | 0 0 1 | 0 0 0 | 0 0 0
56
57 Equivalent to torch.eye for undistributed settings or if world size == 1.
58
59 Args:
60 n:
61 Size of the square matrix on a single process.
62 device:
63 Device on which the matrix should be created.
64
65 """
66 rows = torch.arange(n, device=device, dtype=torch.long)
67 cols = rows + rank() * n
68 diag_mask = torch.zeros((n, n * world_size()), dtype=torch.bool)
69 diag_mask[(rows, cols)] = True
70 return diag_mask
71
72
73 def rank_zero_only(fn):
74 """Decorator that only runs the function on the process with rank 0.
75
76 Example:
77 >>> @rank_zero_only
78 >>> def print_rank_zero(message: str):
79 >>> print(message)
80 >>>
81 >>> print_rank_zero("Hello from rank 0!")
82
83 """
84
85 def wrapped(*args, **kwargs):
86 if rank() == 0:
87 return fn(*args, **kwargs)
88
89 return wrapped
90
91
92 @rank_zero_only
93 def print_rank_zero(*args, **kwargs) -> None:
94 """Equivalent to print, but only runs on the process with rank 0."""
95 print(*args, **kwargs)
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lightly/utils/dist.py b/lightly/utils/dist.py
--- a/lightly/utils/dist.py
+++ b/lightly/utils/dist.py
@@ -1,29 +1,29 @@
-from typing import Optional, Tuple
+from typing import Any, Callable, Optional, Tuple, TypeVar
import torch
import torch.distributed as dist
+from torch.autograd.function import FunctionCtx
class GatherLayer(torch.autograd.Function):
"""Gather tensors from all processes, supporting backward propagation.
This code was taken and adapted from here:
- https://github.com/Spijkervet/SimCLR
+ https://github.com/vturrisi/solo-learn/blob/b69b4bd27472593919956d9ac58902a301537a4d/solo/utils/misc.py#L187
"""
@staticmethod
- def forward(ctx, input: torch.Tensor) -> Tuple[torch.Tensor, ...]:
- ctx.save_for_backward(input)
+ def forward(ctx, input: torch.Tensor) -> Tuple[torch.Tensor, ...]: # type: ignore
output = [torch.empty_like(input) for _ in range(dist.get_world_size())]
dist.all_gather(output, input)
return tuple(output)
@staticmethod
- def backward(ctx, *grads: torch.Tensor) -> torch.Tensor:
- (input,) = ctx.saved_tensors
- grad_out = torch.empty_like(input)
- grad_out[:] = grads[dist.get_rank()]
+ def backward(ctx, *grads) -> torch.Tensor: # type: ignore
+ all_gradients = torch.stack(grads)
+ dist.all_reduce(all_gradients)
+ grad_out = all_gradients[dist.get_rank()]
return grad_out
@@ -39,7 +39,7 @@
def gather(input: torch.Tensor) -> Tuple[torch.Tensor]:
"""Gathers this tensor from all processes. Supports backprop."""
- return GatherLayer.apply(input)
+ return GatherLayer.apply(input) # type: ignore[no-any-return]
def eye_rank(n: int, device: Optional[torch.device] = None) -> torch.Tensor:
@@ -70,7 +70,10 @@
return diag_mask
-def rank_zero_only(fn):
+R = TypeVar("R")
+
+
+def rank_zero_only(fn: Callable[..., R]) -> Callable[..., Optional[R]]:
"""Decorator that only runs the function on the process with rank 0.
Example:
@@ -79,17 +82,17 @@
>>> print(message)
>>>
>>> print_rank_zero("Hello from rank 0!")
-
"""
- def wrapped(*args, **kwargs):
+ def wrapped(*args: Any, **kwargs: Any) -> Optional[R]:
if rank() == 0:
return fn(*args, **kwargs)
+ return None
return wrapped
@rank_zero_only
-def print_rank_zero(*args, **kwargs) -> None:
+def print_rank_zero(*args: Any, **kwargs: Any) -> None: # type: ignore[misc]
"""Equivalent to print, but only runs on the process with rank 0."""
print(*args, **kwargs)
| {"golden_diff": "diff --git a/lightly/utils/dist.py b/lightly/utils/dist.py\n--- a/lightly/utils/dist.py\n+++ b/lightly/utils/dist.py\n@@ -1,29 +1,29 @@\n-from typing import Optional, Tuple\n+from typing import Any, Callable, Optional, Tuple, TypeVar\n \n import torch\n import torch.distributed as dist\n+from torch.autograd.function import FunctionCtx\n \n \n class GatherLayer(torch.autograd.Function):\n \"\"\"Gather tensors from all processes, supporting backward propagation.\n \n This code was taken and adapted from here:\n- https://github.com/Spijkervet/SimCLR\n+ https://github.com/vturrisi/solo-learn/blob/b69b4bd27472593919956d9ac58902a301537a4d/solo/utils/misc.py#L187\n \n \"\"\"\n \n @staticmethod\n- def forward(ctx, input: torch.Tensor) -> Tuple[torch.Tensor, ...]:\n- ctx.save_for_backward(input)\n+ def forward(ctx, input: torch.Tensor) -> Tuple[torch.Tensor, ...]: # type: ignore\n output = [torch.empty_like(input) for _ in range(dist.get_world_size())]\n dist.all_gather(output, input)\n return tuple(output)\n \n @staticmethod\n- def backward(ctx, *grads: torch.Tensor) -> torch.Tensor:\n- (input,) = ctx.saved_tensors\n- grad_out = torch.empty_like(input)\n- grad_out[:] = grads[dist.get_rank()]\n+ def backward(ctx, *grads) -> torch.Tensor: # type: ignore\n+ all_gradients = torch.stack(grads)\n+ dist.all_reduce(all_gradients)\n+ grad_out = all_gradients[dist.get_rank()]\n return grad_out\n \n \n@@ -39,7 +39,7 @@\n \n def gather(input: torch.Tensor) -> Tuple[torch.Tensor]:\n \"\"\"Gathers this tensor from all processes. Supports backprop.\"\"\"\n- return GatherLayer.apply(input)\n+ return GatherLayer.apply(input) # type: ignore[no-any-return]\n \n \n def eye_rank(n: int, device: Optional[torch.device] = None) -> torch.Tensor:\n@@ -70,7 +70,10 @@\n return diag_mask\n \n \n-def rank_zero_only(fn):\n+R = TypeVar(\"R\")\n+\n+\n+def rank_zero_only(fn: Callable[..., R]) -> Callable[..., Optional[R]]:\n \"\"\"Decorator that only runs the function on the process with rank 0.\n \n Example:\n@@ -79,17 +82,17 @@\n >>> print(message)\n >>>\n >>> print_rank_zero(\"Hello from rank 0!\")\n-\n \"\"\"\n \n- def wrapped(*args, **kwargs):\n+ def wrapped(*args: Any, **kwargs: Any) -> Optional[R]:\n if rank() == 0:\n return fn(*args, **kwargs)\n+ return None\n \n return wrapped\n \n \n @rank_zero_only\n-def print_rank_zero(*args, **kwargs) -> None:\n+def print_rank_zero(*args: Any, **kwargs: Any) -> None: # type: ignore[misc]\n \"\"\"Equivalent to print, but only runs on the process with rank 0.\"\"\"\n print(*args, **kwargs)\n", "issue": "Bug in `GatherLayer.backward`\nHi,\r\n\r\nWe've been implementing a model at [cellarium-ml](https://github.com/cellarium-ai/cellarium-ml) using your `NTXentLoss`. Comparing the model training with a single GPU and two GPUs we noticed that they do not match. By investigating it we found an apparent bug in the `GatherLayer.backward` where gradients are not sum-reduced over GPUs. Here is our fixed version (https://github.com/cellarium-ai/cellarium-ml/blob/main/cellarium/ml/distributed/gather.py#L17-L21):\r\n\r\n```py\r\n @staticmethod\r\n def backward(ctx, *grads) -> torch.Tensor:\r\n grad_out = grads[dist.get_rank()].contiguous()\r\n dist.all_reduce(grad_out, op=dist.ReduceOp.SUM)\r\n return grad_out\r\n```\r\n\r\nand the [test](https://github.com/cellarium-ai/cellarium-ml/blob/main/tests/distributed/test_gather.py) we wrote. Would you agree that this is indeed a bug? 
I would be happy to contribute a PR with the fix.\n", "before_files": [{"content": "from typing import Optional, Tuple\n\nimport torch\nimport torch.distributed as dist\n\n\nclass GatherLayer(torch.autograd.Function):\n \"\"\"Gather tensors from all processes, supporting backward propagation.\n\n This code was taken and adapted from here:\n https://github.com/Spijkervet/SimCLR\n\n \"\"\"\n\n @staticmethod\n def forward(ctx, input: torch.Tensor) -> Tuple[torch.Tensor, ...]:\n ctx.save_for_backward(input)\n output = [torch.empty_like(input) for _ in range(dist.get_world_size())]\n dist.all_gather(output, input)\n return tuple(output)\n\n @staticmethod\n def backward(ctx, *grads: torch.Tensor) -> torch.Tensor:\n (input,) = ctx.saved_tensors\n grad_out = torch.empty_like(input)\n grad_out[:] = grads[dist.get_rank()]\n return grad_out\n\n\ndef rank() -> int:\n \"\"\"Returns the rank of the current process.\"\"\"\n return dist.get_rank() if dist.is_initialized() else 0\n\n\ndef world_size() -> int:\n \"\"\"Returns the current world size (number of distributed processes).\"\"\"\n return dist.get_world_size() if dist.is_initialized() else 1\n\n\ndef gather(input: torch.Tensor) -> Tuple[torch.Tensor]:\n \"\"\"Gathers this tensor from all processes. Supports backprop.\"\"\"\n return GatherLayer.apply(input)\n\n\ndef eye_rank(n: int, device: Optional[torch.device] = None) -> torch.Tensor:\n \"\"\"Returns an (n, n * world_size) zero matrix with the diagonal for the rank\n of this process set to 1.\n\n Example output where n=3, the current process has rank 1, and there are\n 4 processes in total:\n\n rank0 rank1 rank2 rank3\n 0 0 0 | 1 0 0 | 0 0 0 | 0 0 0\n 0 0 0 | 0 1 0 | 0 0 0 | 0 0 0\n 0 0 0 | 0 0 1 | 0 0 0 | 0 0 0\n\n Equivalent to torch.eye for undistributed settings or if world size == 1.\n\n Args:\n n:\n Size of the square matrix on a single process.\n device:\n Device on which the matrix should be created.\n\n \"\"\"\n rows = torch.arange(n, device=device, dtype=torch.long)\n cols = rows + rank() * n\n diag_mask = torch.zeros((n, n * world_size()), dtype=torch.bool)\n diag_mask[(rows, cols)] = True\n return diag_mask\n\n\ndef rank_zero_only(fn):\n \"\"\"Decorator that only runs the function on the process with rank 0.\n\n Example:\n >>> @rank_zero_only\n >>> def print_rank_zero(message: str):\n >>> print(message)\n >>>\n >>> print_rank_zero(\"Hello from rank 0!\")\n\n \"\"\"\n\n def wrapped(*args, **kwargs):\n if rank() == 0:\n return fn(*args, **kwargs)\n\n return wrapped\n\n\n@rank_zero_only\ndef print_rank_zero(*args, **kwargs) -> None:\n \"\"\"Equivalent to print, but only runs on the process with rank 0.\"\"\"\n print(*args, **kwargs)\n", "path": "lightly/utils/dist.py"}], "after_files": [{"content": "from typing import Any, Callable, Optional, Tuple, TypeVar\n\nimport torch\nimport torch.distributed as dist\nfrom torch.autograd.function import FunctionCtx\n\n\nclass GatherLayer(torch.autograd.Function):\n \"\"\"Gather tensors from all processes, supporting backward propagation.\n\n This code was taken and adapted from here:\n https://github.com/vturrisi/solo-learn/blob/b69b4bd27472593919956d9ac58902a301537a4d/solo/utils/misc.py#L187\n\n \"\"\"\n\n @staticmethod\n def forward(ctx, input: torch.Tensor) -> Tuple[torch.Tensor, ...]: # type: ignore\n output = [torch.empty_like(input) for _ in range(dist.get_world_size())]\n dist.all_gather(output, input)\n return tuple(output)\n\n @staticmethod\n def backward(ctx, *grads) -> torch.Tensor: # type: ignore\n all_gradients = torch.stack(grads)\n 
dist.all_reduce(all_gradients)\n grad_out = all_gradients[dist.get_rank()]\n return grad_out\n\n\ndef rank() -> int:\n \"\"\"Returns the rank of the current process.\"\"\"\n return dist.get_rank() if dist.is_initialized() else 0\n\n\ndef world_size() -> int:\n \"\"\"Returns the current world size (number of distributed processes).\"\"\"\n return dist.get_world_size() if dist.is_initialized() else 1\n\n\ndef gather(input: torch.Tensor) -> Tuple[torch.Tensor]:\n \"\"\"Gathers this tensor from all processes. Supports backprop.\"\"\"\n return GatherLayer.apply(input) # type: ignore[no-any-return]\n\n\ndef eye_rank(n: int, device: Optional[torch.device] = None) -> torch.Tensor:\n \"\"\"Returns an (n, n * world_size) zero matrix with the diagonal for the rank\n of this process set to 1.\n\n Example output where n=3, the current process has rank 1, and there are\n 4 processes in total:\n\n rank0 rank1 rank2 rank3\n 0 0 0 | 1 0 0 | 0 0 0 | 0 0 0\n 0 0 0 | 0 1 0 | 0 0 0 | 0 0 0\n 0 0 0 | 0 0 1 | 0 0 0 | 0 0 0\n\n Equivalent to torch.eye for undistributed settings or if world size == 1.\n\n Args:\n n:\n Size of the square matrix on a single process.\n device:\n Device on which the matrix should be created.\n\n \"\"\"\n rows = torch.arange(n, device=device, dtype=torch.long)\n cols = rows + rank() * n\n diag_mask = torch.zeros((n, n * world_size()), dtype=torch.bool)\n diag_mask[(rows, cols)] = True\n return diag_mask\n\n\nR = TypeVar(\"R\")\n\n\ndef rank_zero_only(fn: Callable[..., R]) -> Callable[..., Optional[R]]:\n \"\"\"Decorator that only runs the function on the process with rank 0.\n\n Example:\n >>> @rank_zero_only\n >>> def print_rank_zero(message: str):\n >>> print(message)\n >>>\n >>> print_rank_zero(\"Hello from rank 0!\")\n \"\"\"\n\n def wrapped(*args: Any, **kwargs: Any) -> Optional[R]:\n if rank() == 0:\n return fn(*args, **kwargs)\n return None\n\n return wrapped\n\n\n@rank_zero_only\ndef print_rank_zero(*args: Any, **kwargs: Any) -> None: # type: ignore[misc]\n \"\"\"Equivalent to print, but only runs on the process with rank 0.\"\"\"\n print(*args, **kwargs)\n", "path": "lightly/utils/dist.py"}]} | 1,415 | 723 |
gh_patches_debug_23382 | rasdani/github-patches | git_diff | coala__coala-1290 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`DocstyleDefinition`: Accept a single marker set also
Via the normal constructor or a class method.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `coalib/bearlib/languages/documentation/DocstyleDefinition.py`
Content:
```
1 import os.path
2
3 from coalib.misc.Decorators import generate_eq, generate_repr, enforce_signature
4 from coalib.parsing.ConfParser import ConfParser
5
6
7 @generate_repr()
8 @generate_eq("language", "docstyle", "markers")
9 class DocstyleDefinition:
10 """
11 The DocstyleDefinition class holds values that identify a certain type of
12 documentation comment (for which language, documentation style/tool used
13 etc.).
14 """
15
16 @enforce_signature
17 def __init__(self, language: str, docstyle: str, markers):
18 """
19 Instantiates a new DocstyleDefinition.
20
21 :param language: The case insensitive programming language of the
22 documentation comment, e.g. `"CPP"` for C++ or
23 `"PYTHON3"`.
24 :param docstyle: The case insensitive documentation style/tool used
25 to document code, e.g. `"default"` or `"doxygen"`.
26 :param markers: An iterable of marker/delimiter string iterables that
27 identify a documentation comment. See `markers`
28 property for more details on markers.
29 """
30 self._language = language.lower()
31 self._docstyle = docstyle.lower()
32 self._markers = tuple(tuple(marker_set) for marker_set in markers)
33
34 # Check marker set dimensions.
35 for marker_set in self._markers:
36 length = len(marker_set)
37 if length != 3:
38 raise ValueError("Length of a given marker set was not 3 (was "
39 "actually {}).".format(length))
40
41 @property
42 def language(self):
43 """
44 The programming language.
45
46 :return: A lower-case string defining the programming language (i.e.
47 "cpp" or "python").
48 """
49 return self._language
50
51 @property
52 def docstyle(self):
53 """
54 The documentation style/tool used to document code.
55
56 :return: A lower-case string defining the docstyle (i.e. "default" or
57 "doxygen").
58 """
59 return self._docstyle
60
61 @property
62 def markers(self):
63 """
64 A tuple of marker sets that identify a documentation comment.
65
66 Marker sets consist of 3 entries where the first is the start-marker,
67 the second one the each-line marker and the last one the end-marker.
68 For example a marker tuple with a single marker set
69 `(("/**", "*", "*/"),)` would match following documentation comment:
70
71 ```
72 /**
73 * This is documentation.
74 */
75 ```
76
77 It's also possible to supply an empty each-line marker
78 (`("/**", "", "*/")`):
79
80 ```
81 /**
82 This is more documentation.
83 */
84 ```
85
86 Markers are matched "greedy", that means it will match as many
87 each-line markers as possible. I.e. for `("///", "///", "///")`):
88
89 ```
90 /// Brief documentation.
91 ///
92 /// Detailed documentation.
93 ```
94
95 :return: A tuple of marker/delimiter string tuples that identify a
96 documentation comment.
97 """
98 return self._markers
99
100 @classmethod
101 @enforce_signature
102 def load(cls, language: str, docstyle: str):
103 """
104 Loads a `DocstyleDefinition` from the coala docstyle definition files.
105
106 This function considers all settings inside the according coalang-files
107 as markers.
108
109 :param language: The case insensitive programming language of
110 the documentation comment as a string.
111 :param docstyle: The case insensitive documentation
112 style/tool used to document code, e.g.
113 `"default"` or `"doxygen"`.
114 :raises FileNotFoundError: Raised when the given docstyle was not
115 found.
116 :raises KeyError: Raised when the given language is not
117 defined for given docstyle.
118 :return: The `DocstyleDefinition` for given language
119 and docstyle.
120 """
121
122 docstyle = docstyle.lower()
123
124 language_config_parser = ConfParser(remove_empty_iter_elements=False)
125 try:
126 docstyle_settings = language_config_parser.parse(
127 os.path.dirname(__file__) + "/" + docstyle + ".coalang")
128 except FileNotFoundError:
129 raise FileNotFoundError("Docstyle definition " + repr(docstyle) +
130 " not found.")
131
132 language = language.lower()
133
134 try:
135 docstyle_settings = docstyle_settings[language]
136 except KeyError:
137 raise KeyError("Language {} is not defined for docstyle {}."
138 .format(repr(language), repr(docstyle)))
139
140 marker_sets = (tuple(value)
141 for key, value in
142 filter(lambda kv: not kv[0].startswith("comment"),
143 docstyle_settings.contents.items()))
144
145 return cls(language, docstyle, marker_sets)
146
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/coalib/bearlib/languages/documentation/DocstyleDefinition.py b/coalib/bearlib/languages/documentation/DocstyleDefinition.py
--- a/coalib/bearlib/languages/documentation/DocstyleDefinition.py
+++ b/coalib/bearlib/languages/documentation/DocstyleDefinition.py
@@ -23,12 +23,19 @@
`"PYTHON3"`.
:param docstyle: The case insensitive documentation style/tool used
to document code, e.g. `"default"` or `"doxygen"`.
- :param markers: An iterable of marker/delimiter string iterables that
+ :param markers: An iterable of marker/delimiter string iterables
+ or a single marker/delimiter string iterable that
identify a documentation comment. See `markers`
property for more details on markers.
"""
self._language = language.lower()
self._docstyle = docstyle.lower()
+
+ # Check and modify tuple if only one marker_set exists.
+ markers = tuple(markers)
+ if len(markers) == 3 and all(isinstance(x, str) for x in markers):
+ markers = (markers,)
+
self._markers = tuple(tuple(marker_set) for marker_set in markers)
# Check marker set dimensions.
| {"golden_diff": "diff --git a/coalib/bearlib/languages/documentation/DocstyleDefinition.py b/coalib/bearlib/languages/documentation/DocstyleDefinition.py\n--- a/coalib/bearlib/languages/documentation/DocstyleDefinition.py\n+++ b/coalib/bearlib/languages/documentation/DocstyleDefinition.py\n@@ -23,12 +23,19 @@\n `\"PYTHON3\"`.\n :param docstyle: The case insensitive documentation style/tool used\n to document code, e.g. `\"default\"` or `\"doxygen\"`.\n- :param markers: An iterable of marker/delimiter string iterables that\n+ :param markers: An iterable of marker/delimiter string iterables\n+ or a single marker/delimiter string iterable that\n identify a documentation comment. See `markers`\n property for more details on markers.\n \"\"\"\n self._language = language.lower()\n self._docstyle = docstyle.lower()\n+\n+ # Check and modify tuple if only one marker_set exists.\n+ markers = tuple(markers)\n+ if len(markers) == 3 and all(isinstance(x, str) for x in markers):\n+ markers = (markers,)\n+\n self._markers = tuple(tuple(marker_set) for marker_set in markers)\n \n # Check marker set dimensions.\n", "issue": "`DocstyleDefinition`: Accept a single marker set also\nVia the normal constructor or a class method.\n\n", "before_files": [{"content": "import os.path\n\nfrom coalib.misc.Decorators import generate_eq, generate_repr, enforce_signature\nfrom coalib.parsing.ConfParser import ConfParser\n\n\n@generate_repr()\n@generate_eq(\"language\", \"docstyle\", \"markers\")\nclass DocstyleDefinition:\n \"\"\"\n The DocstyleDefinition class holds values that identify a certain type of\n documentation comment (for which language, documentation style/tool used\n etc.).\n \"\"\"\n\n @enforce_signature\n def __init__(self, language: str, docstyle: str, markers):\n \"\"\"\n Instantiates a new DocstyleDefinition.\n\n :param language: The case insensitive programming language of the\n documentation comment, e.g. `\"CPP\"` for C++ or\n `\"PYTHON3\"`.\n :param docstyle: The case insensitive documentation style/tool used\n to document code, e.g. `\"default\"` or `\"doxygen\"`.\n :param markers: An iterable of marker/delimiter string iterables that\n identify a documentation comment. See `markers`\n property for more details on markers.\n \"\"\"\n self._language = language.lower()\n self._docstyle = docstyle.lower()\n self._markers = tuple(tuple(marker_set) for marker_set in markers)\n\n # Check marker set dimensions.\n for marker_set in self._markers:\n length = len(marker_set)\n if length != 3:\n raise ValueError(\"Length of a given marker set was not 3 (was \"\n \"actually {}).\".format(length))\n\n @property\n def language(self):\n \"\"\"\n The programming language.\n\n :return: A lower-case string defining the programming language (i.e.\n \"cpp\" or \"python\").\n \"\"\"\n return self._language\n\n @property\n def docstyle(self):\n \"\"\"\n The documentation style/tool used to document code.\n\n :return: A lower-case string defining the docstyle (i.e. 
\"default\" or\n \"doxygen\").\n \"\"\"\n return self._docstyle\n\n @property\n def markers(self):\n \"\"\"\n A tuple of marker sets that identify a documentation comment.\n\n Marker sets consist of 3 entries where the first is the start-marker,\n the second one the each-line marker and the last one the end-marker.\n For example a marker tuple with a single marker set\n `((\"/**\", \"*\", \"*/\"),)` would match following documentation comment:\n\n ```\n /**\n * This is documentation.\n */\n ```\n\n It's also possible to supply an empty each-line marker\n (`(\"/**\", \"\", \"*/\")`):\n\n ```\n /**\n This is more documentation.\n */\n ```\n\n Markers are matched \"greedy\", that means it will match as many\n each-line markers as possible. I.e. for `(\"///\", \"///\", \"///\")`):\n\n ```\n /// Brief documentation.\n ///\n /// Detailed documentation.\n ```\n\n :return: A tuple of marker/delimiter string tuples that identify a\n documentation comment.\n \"\"\"\n return self._markers\n\n @classmethod\n @enforce_signature\n def load(cls, language: str, docstyle: str):\n \"\"\"\n Loads a `DocstyleDefinition` from the coala docstyle definition files.\n\n This function considers all settings inside the according coalang-files\n as markers.\n\n :param language: The case insensitive programming language of\n the documentation comment as a string.\n :param docstyle: The case insensitive documentation\n style/tool used to document code, e.g.\n `\"default\"` or `\"doxygen\"`.\n :raises FileNotFoundError: Raised when the given docstyle was not\n found.\n :raises KeyError: Raised when the given language is not\n defined for given docstyle.\n :return: The `DocstyleDefinition` for given language\n and docstyle.\n \"\"\"\n\n docstyle = docstyle.lower()\n\n language_config_parser = ConfParser(remove_empty_iter_elements=False)\n try:\n docstyle_settings = language_config_parser.parse(\n os.path.dirname(__file__) + \"/\" + docstyle + \".coalang\")\n except FileNotFoundError:\n raise FileNotFoundError(\"Docstyle definition \" + repr(docstyle) +\n \" not found.\")\n\n language = language.lower()\n\n try:\n docstyle_settings = docstyle_settings[language]\n except KeyError:\n raise KeyError(\"Language {} is not defined for docstyle {}.\"\n .format(repr(language), repr(docstyle)))\n\n marker_sets = (tuple(value)\n for key, value in\n filter(lambda kv: not kv[0].startswith(\"comment\"),\n docstyle_settings.contents.items()))\n\n return cls(language, docstyle, marker_sets)\n", "path": "coalib/bearlib/languages/documentation/DocstyleDefinition.py"}], "after_files": [{"content": "import os.path\n\nfrom coalib.misc.Decorators import generate_eq, generate_repr, enforce_signature\nfrom coalib.parsing.ConfParser import ConfParser\n\n\n@generate_repr()\n@generate_eq(\"language\", \"docstyle\", \"markers\")\nclass DocstyleDefinition:\n \"\"\"\n The DocstyleDefinition class holds values that identify a certain type of\n documentation comment (for which language, documentation style/tool used\n etc.).\n \"\"\"\n\n @enforce_signature\n def __init__(self, language: str, docstyle: str, markers):\n \"\"\"\n Instantiates a new DocstyleDefinition.\n\n :param language: The case insensitive programming language of the\n documentation comment, e.g. `\"CPP\"` for C++ or\n `\"PYTHON3\"`.\n :param docstyle: The case insensitive documentation style/tool used\n to document code, e.g. 
`\"default\"` or `\"doxygen\"`.\n :param markers: An iterable of marker/delimiter string iterables\n or a single marker/delimiter string iterable that\n identify a documentation comment. See `markers`\n property for more details on markers.\n \"\"\"\n self._language = language.lower()\n self._docstyle = docstyle.lower()\n\n # Check and modify tuple if only one marker_set exists.\n markers = tuple(markers)\n if len(markers) == 3 and all(isinstance(x, str) for x in markers):\n markers = (markers,)\n\n self._markers = tuple(tuple(marker_set) for marker_set in markers)\n\n # Check marker set dimensions.\n for marker_set in self._markers:\n length = len(marker_set)\n if length != 3:\n raise ValueError(\"Length of a given marker set was not 3 (was \"\n \"actually {}).\".format(length))\n\n @property\n def language(self):\n \"\"\"\n The programming language.\n\n :return: A lower-case string defining the programming language (i.e.\n \"cpp\" or \"python\").\n \"\"\"\n return self._language\n\n @property\n def docstyle(self):\n \"\"\"\n The documentation style/tool used to document code.\n\n :return: A lower-case string defining the docstyle (i.e. \"default\" or\n \"doxygen\").\n \"\"\"\n return self._docstyle\n\n @property\n def markers(self):\n \"\"\"\n A tuple of marker sets that identify a documentation comment.\n\n Marker sets consist of 3 entries where the first is the start-marker,\n the second one the each-line marker and the last one the end-marker.\n For example a marker tuple with a single marker set\n `((\"/**\", \"*\", \"*/\"),)` would match following documentation comment:\n\n ```\n /**\n * This is documentation.\n */\n ```\n\n It's also possible to supply an empty each-line marker\n (`(\"/**\", \"\", \"*/\")`):\n\n ```\n /**\n This is more documentation.\n */\n ```\n\n Markers are matched \"greedy\", that means it will match as many\n each-line markers as possible. I.e. 
for `(\"///\", \"///\", \"///\")`):\n\n ```\n /// Brief documentation.\n ///\n /// Detailed documentation.\n ```\n\n :return: A tuple of marker/delimiter string tuples that identify a\n documentation comment.\n \"\"\"\n return self._markers\n\n @classmethod\n @enforce_signature\n def load(cls, language: str, docstyle: str):\n \"\"\"\n Loads a `DocstyleDefinition` from the coala docstyle definition files.\n\n This function considers all settings inside the according coalang-files\n as markers.\n\n :param language: The case insensitive programming language of\n the documentation comment as a string.\n :param docstyle: The case insensitive documentation\n style/tool used to document code, e.g.\n `\"default\"` or `\"doxygen\"`.\n :raises FileNotFoundError: Raised when the given docstyle was not\n found.\n :raises KeyError: Raised when the given language is not\n defined for given docstyle.\n :return: The `DocstyleDefinition` for given language\n and docstyle.\n \"\"\"\n\n docstyle = docstyle.lower()\n\n language_config_parser = ConfParser(remove_empty_iter_elements=False)\n try:\n docstyle_settings = language_config_parser.parse(\n os.path.dirname(__file__) + \"/\" + docstyle + \".coalang\")\n except FileNotFoundError:\n raise FileNotFoundError(\"Docstyle definition \" + repr(docstyle) +\n \" not found.\")\n\n language = language.lower()\n\n try:\n docstyle_settings = docstyle_settings[language]\n except KeyError:\n raise KeyError(\"Language {} is not defined for docstyle {}.\"\n .format(repr(language), repr(docstyle)))\n\n marker_sets = (tuple(value)\n for key, value in\n filter(lambda kv: not kv[0].startswith(\"comment\"),\n docstyle_settings.contents.items()))\n\n return cls(language, docstyle, marker_sets)\n", "path": "coalib/bearlib/languages/documentation/DocstyleDefinition.py"}]} | 1,632 | 284 |
gh_patches_debug_4297 | rasdani/github-patches | git_diff | NVIDIA-Merlin__NVTabular-1312 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Getting error when loading the TF4Rec PyTorch model to the TIS
**Describe the bug**
I am getting the following error when I load a trained TF4Rec PyTorch model to TIS:
```
| t4r_pytorch_pt | 1 | UNAVAILABLE: Internal: ImportError: cannot import name '_convert_string2pytorch_dty |
| | | pe' from 'nvtabular.inference.triton' (/nvtabular/nvtabular/inference/triton/__init |
| | | __.py) |
| | | |
| | | At: |
| | | /workspace/models/t4r_pytorch_pt/1/model.py(42): <module> |
| | | <frozen importlib._bootstrap>(219): _call_with_frames_removed |
| | | <frozen importlib._bootstrap_external>(848): exec_module |
| | | <frozen importlib._bootstrap>(686): _load_unlocked |
| | | <frozen importlib._bootstrap>(975): _find_and_load_unlocked |
| | | <frozen importlib._bootstrap>(991): _find_and_load |
+-----------------+---------+---------------------------------------------------------
```
**Steps/Code to reproduce bug**
Run notebooks 02 and 03 of the Transformers4Rec tutorial [notebooks](https://github.com/NVIDIA-Merlin/Transformers4Rec/tree/main/examples/tutorial) to train the model. Then serve the model to TIS based on the instructions given in the [inference notebook](https://github.com/NVIDIA-Merlin/Transformers4Rec/blob/main/examples/tutorial/04-Inference-with-Triton.ipynb).
`Oct-2019.parquet` Dataset can be downloaded from here: https://drive.google.com/drive/u/0/folders/1nTuG6UHWOEaZnBJj7YSIVvnphE1zGc1h
**Expected behavior**
Model should be loaded to the TIS without issue.
**Environment details (please complete the following information):**
- Environment location: [Bare-metal, Docker, Cloud(specify cloud provider)] : Docker
 - Method of NVTabular install: [conda, Docker, or from source]: Docker `merlin-inference:21.11` and `merlin-pytorch-training:21.11`
Please do `git pull origin main` && `pip install -e .` to pull the latest main branch.
- If method of install is [Docker], provide `docker pull` & `docker run` commands used
This issue was also submitted by a user on the TF4Rec GH repo: https://github.com/NVIDIA-Merlin/Transformers4Rec/issues/339
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nvtabular/inference/triton/__init__.py`
Content:
```
1 # Copyright (c) 2021, NVIDIA CORPORATION.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 import json
16 import os
17
18 import pandas as pd
19
20 # this needs to be before any modules that import protobuf
21 os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"
22
23 import tritonclient.grpc as grpcclient # noqa
24 from tritonclient.utils import np_to_triton_dtype # noqa
25
26 from nvtabular.dispatch import _is_list_dtype, _is_string_dtype, _make_df # noqa
27 from nvtabular.inference.triton.ensemble import ( # noqa
28 export_hugectr_ensemble,
29 export_pytorch_ensemble,
30 export_tensorflow_ensemble,
31 generate_hugectr_model,
32 generate_nvtabular_model,
33 )
34
35
36 def convert_df_to_triton_input(column_names, batch, input_class=grpcclient.InferInput):
37 columns = [(col, batch[col]) for col in column_names]
38 inputs = []
39 for i, (name, col) in enumerate(columns):
40 if _is_list_dtype(col):
41 if isinstance(col, pd.Series):
42 raise ValueError("this function doesn't support CPU list values yet")
43 inputs.append(
44 _convert_column_to_triton_input(
45 col._column.offsets.values_host.astype("int64"), name + "__nnzs", input_class
46 )
47 )
48 inputs.append(
49 _convert_column_to_triton_input(
50 col.list.leaves.values_host.astype("int64"), name + "__values", input_class
51 )
52 )
53 else:
54 values = col.values if isinstance(col, pd.Series) else col.values_host
55 inputs.append(_convert_column_to_triton_input(values, name, input_class))
56 return inputs
57
58
59 def _convert_column_to_triton_input(col, name, input_class=grpcclient.InferInput):
60 col = col.reshape(len(col), 1)
61 input_tensor = input_class(name, col.shape, np_to_triton_dtype(col.dtype))
62 input_tensor.set_data_from_numpy(col)
63 return input_tensor
64
65
66 def convert_triton_output_to_df(columns, response):
67 return _make_df({col: response.as_numpy(col) for col in columns})
68
69
70 def get_column_types(path):
71 return json.load(open(os.path.join(path, "column_types.json")))
72
73
74 def _convert_tensor(t):
75 out = t.as_numpy()
76 if len(out.shape) == 2:
77 out = out[:, 0]
78 # cudf doesn't seem to handle dtypes like |S15 or object that well
79 if _is_string_dtype(out.dtype):
80 out = out.astype("str")
81 return out
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nvtabular/inference/triton/__init__.py b/nvtabular/inference/triton/__init__.py
--- a/nvtabular/inference/triton/__init__.py
+++ b/nvtabular/inference/triton/__init__.py
@@ -25,6 +25,7 @@
from nvtabular.dispatch import _is_list_dtype, _is_string_dtype, _make_df # noqa
from nvtabular.inference.triton.ensemble import ( # noqa
+ _convert_string2pytorch_dtype,
export_hugectr_ensemble,
export_pytorch_ensemble,
export_tensorflow_ensemble,
| {"golden_diff": "diff --git a/nvtabular/inference/triton/__init__.py b/nvtabular/inference/triton/__init__.py\n--- a/nvtabular/inference/triton/__init__.py\n+++ b/nvtabular/inference/triton/__init__.py\n@@ -25,6 +25,7 @@\n \n from nvtabular.dispatch import _is_list_dtype, _is_string_dtype, _make_df # noqa\n from nvtabular.inference.triton.ensemble import ( # noqa\n+ _convert_string2pytorch_dtype,\n export_hugectr_ensemble,\n export_pytorch_ensemble,\n export_tensorflow_ensemble,\n", "issue": "[BUG] Getting error when loading the TF4Rec PyTorch model to the TIS\n**Describe the bug**\r\nI am getting the following error when I load a trained TF4Rec PyTorch to TIS:\r\n\r\n```\r\n | t4r_pytorch_pt | 1 | UNAVAILABLE: Internal: ImportError: cannot import name '_convert_string2pytorch_dty |\r\n| | | pe' from 'nvtabular.inference.triton' (/nvtabular/nvtabular/inference/triton/__init |\r\n| | | __.py) |\r\n| | | |\r\n| | | At: |\r\n| | | /workspace/models/t4r_pytorch_pt/1/model.py(42): <module> |\r\n| | | <frozen importlib._bootstrap>(219): _call_with_frames_removed |\r\n| | | <frozen importlib._bootstrap_external>(848): exec_module |\r\n| | | <frozen importlib._bootstrap>(686): _load_unlocked |\r\n| | | <frozen importlib._bootstrap>(975): _find_and_load_unlocked |\r\n| | | <frozen importlib._bootstrap>(991): _find_and_load |\r\n+-----------------+---------+---------------------------------------------------------\r\n```\r\n\r\n**Steps/Code to reproduce bug**\r\n\r\nRun the 02 and 03 notebooks Transformers4Rec tutorial [notebooks](https://github.com/NVIDIA-Merlin/Transformers4Rec/tree/main/examples/tutorial) to train the model. Then serve the model to TIS based on the instructions given on the [inference notebook](https://github.com/NVIDIA-Merlin/Transformers4Rec/blob/main/examples/tutorial/04-Inference-with-Triton.ipynb).\r\n\r\n`Oct-2019.parquet` Dataset can be downloaded from here: https://drive.google.com/drive/u/0/folders/1nTuG6UHWOEaZnBJj7YSIVvnphE1zGc1h\r\n\r\n**Expected behavior**\r\nModel should be loaded to the TIS without issue.\r\n\r\n**Environment details (please complete the following information):**\r\n - Environment location: [Bare-metal, Docker, Cloud(specify cloud provider)] : Docker\r\n - Method of NVTabular install: [conda, Docker, or from source]: Docker `merlin-inference:21.11` and `merlin-pytoch-training:21.11` `\r\n Please do `git pull origin main` && `pip install -e .` to pull the latest main branch.\r\n - If method of install is [Docker], provide `docker pull` & `docker run` commands used\r\n \r\nThis issue was also submitted by a user on TF4Rec GH repo- https://github.com/NVIDIA-Merlin/Transformers4Rec/issues/339\r\n\n", "before_files": [{"content": "# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport json\nimport os\n\nimport pandas as pd\n\n# this needs to be before any modules that import protobuf\nos.environ[\"PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION\"] = \"python\"\n\nimport tritonclient.grpc as grpcclient # noqa\nfrom 
tritonclient.utils import np_to_triton_dtype # noqa\n\nfrom nvtabular.dispatch import _is_list_dtype, _is_string_dtype, _make_df # noqa\nfrom nvtabular.inference.triton.ensemble import ( # noqa\n export_hugectr_ensemble,\n export_pytorch_ensemble,\n export_tensorflow_ensemble,\n generate_hugectr_model,\n generate_nvtabular_model,\n)\n\n\ndef convert_df_to_triton_input(column_names, batch, input_class=grpcclient.InferInput):\n columns = [(col, batch[col]) for col in column_names]\n inputs = []\n for i, (name, col) in enumerate(columns):\n if _is_list_dtype(col):\n if isinstance(col, pd.Series):\n raise ValueError(\"this function doesn't support CPU list values yet\")\n inputs.append(\n _convert_column_to_triton_input(\n col._column.offsets.values_host.astype(\"int64\"), name + \"__nnzs\", input_class\n )\n )\n inputs.append(\n _convert_column_to_triton_input(\n col.list.leaves.values_host.astype(\"int64\"), name + \"__values\", input_class\n )\n )\n else:\n values = col.values if isinstance(col, pd.Series) else col.values_host\n inputs.append(_convert_column_to_triton_input(values, name, input_class))\n return inputs\n\n\ndef _convert_column_to_triton_input(col, name, input_class=grpcclient.InferInput):\n col = col.reshape(len(col), 1)\n input_tensor = input_class(name, col.shape, np_to_triton_dtype(col.dtype))\n input_tensor.set_data_from_numpy(col)\n return input_tensor\n\n\ndef convert_triton_output_to_df(columns, response):\n return _make_df({col: response.as_numpy(col) for col in columns})\n\n\ndef get_column_types(path):\n return json.load(open(os.path.join(path, \"column_types.json\")))\n\n\ndef _convert_tensor(t):\n out = t.as_numpy()\n if len(out.shape) == 2:\n out = out[:, 0]\n # cudf doesn't seem to handle dtypes like |S15 or object that well\n if _is_string_dtype(out.dtype):\n out = out.astype(\"str\")\n return out\n", "path": "nvtabular/inference/triton/__init__.py"}], "after_files": [{"content": "# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport json\nimport os\n\nimport pandas as pd\n\n# this needs to be before any modules that import protobuf\nos.environ[\"PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION\"] = \"python\"\n\nimport tritonclient.grpc as grpcclient # noqa\nfrom tritonclient.utils import np_to_triton_dtype # noqa\n\nfrom nvtabular.dispatch import _is_list_dtype, _is_string_dtype, _make_df # noqa\nfrom nvtabular.inference.triton.ensemble import ( # noqa\n _convert_string2pytorch_dtype,\n export_hugectr_ensemble,\n export_pytorch_ensemble,\n export_tensorflow_ensemble,\n generate_hugectr_model,\n generate_nvtabular_model,\n)\n\n\ndef convert_df_to_triton_input(column_names, batch, input_class=grpcclient.InferInput):\n columns = [(col, batch[col]) for col in column_names]\n inputs = []\n for i, (name, col) in enumerate(columns):\n if _is_list_dtype(col):\n if isinstance(col, pd.Series):\n raise ValueError(\"this function doesn't support CPU list values yet\")\n inputs.append(\n _convert_column_to_triton_input(\n 
col._column.offsets.values_host.astype(\"int64\"), name + \"__nnzs\", input_class\n )\n )\n inputs.append(\n _convert_column_to_triton_input(\n col.list.leaves.values_host.astype(\"int64\"), name + \"__values\", input_class\n )\n )\n else:\n values = col.values if isinstance(col, pd.Series) else col.values_host\n inputs.append(_convert_column_to_triton_input(values, name, input_class))\n return inputs\n\n\ndef _convert_column_to_triton_input(col, name, input_class=grpcclient.InferInput):\n col = col.reshape(len(col), 1)\n input_tensor = input_class(name, col.shape, np_to_triton_dtype(col.dtype))\n input_tensor.set_data_from_numpy(col)\n return input_tensor\n\n\ndef convert_triton_output_to_df(columns, response):\n return _make_df({col: response.as_numpy(col) for col in columns})\n\n\ndef get_column_types(path):\n return json.load(open(os.path.join(path, \"column_types.json\")))\n\n\ndef _convert_tensor(t):\n out = t.as_numpy()\n if len(out.shape) == 2:\n out = out[:, 0]\n # cudf doesn't seem to handle dtypes like |S15 or object that well\n if _is_string_dtype(out.dtype):\n out = out.astype(\"str\")\n return out\n", "path": "nvtabular/inference/triton/__init__.py"}]} | 1,744 | 145 |
gh_patches_debug_28929 | rasdani/github-patches | git_diff | iterative__dvc-7729 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
dvc list: Error on empty directory.
# Bug Report
Got an error message on an empty directory; shouldn't it show nothing, like the `ls` command?
<!--
## Issue name
Issue names must follow the pattern `command: description` where the command is the dvc command that you are trying to run. The description should describe the consequence of the bug.
Example: `repro: doesn't detect input changes`
-->
## Description

Error when listing an empty path; strange behavior.
Might relate to https://github.com/iterative/dvc/blob/daf07451f8e8f3e76a791c696b0ea175e8ed3ac1/dvc/repo/ls.py#L40-L41
<!--
A clear and concise description of what the bug is.
-->
### Reproduce
1. git init
2. dvc init
3. mkdir empty
4. dvc list . empty
<!--
Step list of how to reproduce the bug
-->
<!--
Example:
1. dvc init
2. Copy dataset.zip to the directory
3. dvc add dataset.zip
4. dvc run -d dataset.zip -o model ./train.sh
5. modify dataset.zip
6. dvc repro
-->
### Expected
Show nothing, like the `ls` command

<!--
A clear and concise description of what you expect to happen.
-->
### Environment information
DVC version: 2.0.17+7e4851
---------------------------------
Platform: Python 3.8.8 on macOS-10.16-x86_64-i386-64bit
Supports: All remotes
Cache types: <https://error.dvc.org/no-dvc-cache>
Caches: local
Remotes: None
Workspace directory: apfs on /dev/disk3s1s1
Repo: dvc, git
<!--
This is required to ensure that we can reproduce the bug.
-->
**Output of `dvc doctor`:**
```console
$ dvc doctor
```
**Additional Information (if any):**
<!--
Please check https://github.com/iterative/dvc/wiki/Debugging-DVC on ways to gather more information regarding the issue.
If applicable, please also provide a `--verbose` output of the command, eg: `dvc add --verbose`.
If the issue is regarding the performance, please attach the profiling information and the benchmark comparisons.
-->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/repo/ls.py`
Content:
```
1 import os
2 from itertools import chain
3
4 from dvc.exceptions import PathMissingError
5
6
7 def ls(url, path=None, rev=None, recursive=None, dvc_only=False):
8 """Methods for getting files and outputs for the repo.
9
10 Args:
11 url (str): the repo url
12 path (str, optional): relative path into the repo
13 rev (str, optional): SHA commit, branch or tag name
14 recursive (bool, optional): recursively walk the repo
15 dvc_only (bool, optional): show only DVC-artifacts
16
17 Returns:
18 list of `entry`
19
20 Notes:
21 `entry` is a dictionary with structure
22 {
23 "path": str,
24 "isout": bool,
25 "isdir": bool,
26 "isexec": bool,
27 }
28 """
29 from . import Repo
30
31 with Repo.open(url, rev=rev, subrepos=True, uninitialized=True) as repo:
32 path = path or ""
33
34 ret = _ls(repo.repo_fs, path, recursive, dvc_only)
35
36 if path and not ret:
37 raise PathMissingError(path, repo, dvc_only=dvc_only)
38
39 ret_list = []
40 for path, info in ret.items():
41 info["path"] = path
42 ret_list.append(info)
43 ret_list.sort(key=lambda f: f["path"])
44 return ret_list
45
46
47 def _ls(fs, path, recursive=None, dvc_only=False):
48 fs_path = fs.from_os_path(path)
49
50 try:
51 fs_path = fs.info(fs_path)["name"]
52 except FileNotFoundError:
53 return {}
54
55 infos = {}
56 for root, dirs, files in fs.walk(
57 fs_path, dvcfiles=True, dvc_only=dvc_only
58 ):
59 entries = chain(files, dirs) if not recursive else files
60
61 for entry in entries:
62 entry_fs_path = fs.path.join(root, entry)
63 relparts = fs.path.relparts(entry_fs_path, fs_path)
64 name = os.path.join(*relparts)
65 infos[name] = fs.info(entry_fs_path)
66
67 if not recursive:
68 break
69
70 if not infos and fs.isfile(fs_path):
71 infos[os.path.basename(path)] = fs.info(fs_path)
72
73 ret = {}
74 for name, info in infos.items():
75 dvc_info = info.get("dvc_info", {})
76 if dvc_info.get("outs") or not dvc_only:
77 ret[name] = {
78 "isout": dvc_info.get("isout", False),
79 "isdir": info["type"] == "directory",
80 "isexec": info.get("isexec", False),
81 }
82
83 return ret
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dvc/repo/ls.py b/dvc/repo/ls.py
--- a/dvc/repo/ls.py
+++ b/dvc/repo/ls.py
@@ -1,10 +1,22 @@
import os
from itertools import chain
+from typing import TYPE_CHECKING, Optional
from dvc.exceptions import PathMissingError
+if TYPE_CHECKING:
+ from dvc.fs.repo import RepoFileSystem
-def ls(url, path=None, rev=None, recursive=None, dvc_only=False):
+ from . import Repo
+
+
+def ls(
+ url: str,
+ path: Optional[str] = None,
+ rev: str = None,
+ recursive: bool = None,
+ dvc_only: bool = False,
+):
"""Methods for getting files and outputs for the repo.
Args:
@@ -31,10 +43,7 @@
with Repo.open(url, rev=rev, subrepos=True, uninitialized=True) as repo:
path = path or ""
- ret = _ls(repo.repo_fs, path, recursive, dvc_only)
-
- if path and not ret:
- raise PathMissingError(path, repo, dvc_only=dvc_only)
+ ret = _ls(repo, path, recursive, dvc_only)
ret_list = []
for path, info in ret.items():
@@ -44,13 +53,16 @@
return ret_list
-def _ls(fs, path, recursive=None, dvc_only=False):
+def _ls(
+ repo: "Repo", path: str, recursive: bool = None, dvc_only: bool = False
+):
+ fs: "RepoFileSystem" = repo.repo_fs
fs_path = fs.from_os_path(path)
try:
fs_path = fs.info(fs_path)["name"]
except FileNotFoundError:
- return {}
+ raise PathMissingError(path, repo, dvc_only=dvc_only)
infos = {}
for root, dirs, files in fs.walk(
| {"golden_diff": "diff --git a/dvc/repo/ls.py b/dvc/repo/ls.py\n--- a/dvc/repo/ls.py\n+++ b/dvc/repo/ls.py\n@@ -1,10 +1,22 @@\n import os\n from itertools import chain\n+from typing import TYPE_CHECKING, Optional\n \n from dvc.exceptions import PathMissingError\n \n+if TYPE_CHECKING:\n+ from dvc.fs.repo import RepoFileSystem\n \n-def ls(url, path=None, rev=None, recursive=None, dvc_only=False):\n+ from . import Repo\n+\n+\n+def ls(\n+ url: str,\n+ path: Optional[str] = None,\n+ rev: str = None,\n+ recursive: bool = None,\n+ dvc_only: bool = False,\n+):\n \"\"\"Methods for getting files and outputs for the repo.\n \n Args:\n@@ -31,10 +43,7 @@\n with Repo.open(url, rev=rev, subrepos=True, uninitialized=True) as repo:\n path = path or \"\"\n \n- ret = _ls(repo.repo_fs, path, recursive, dvc_only)\n-\n- if path and not ret:\n- raise PathMissingError(path, repo, dvc_only=dvc_only)\n+ ret = _ls(repo, path, recursive, dvc_only)\n \n ret_list = []\n for path, info in ret.items():\n@@ -44,13 +53,16 @@\n return ret_list\n \n \n-def _ls(fs, path, recursive=None, dvc_only=False):\n+def _ls(\n+ repo: \"Repo\", path: str, recursive: bool = None, dvc_only: bool = False\n+):\n+ fs: \"RepoFileSystem\" = repo.repo_fs\n fs_path = fs.from_os_path(path)\n \n try:\n fs_path = fs.info(fs_path)[\"name\"]\n except FileNotFoundError:\n- return {}\n+ raise PathMissingError(path, repo, dvc_only=dvc_only)\n \n infos = {}\n for root, dirs, files in fs.walk(\n", "issue": "dvc list: Error on empty directory. \n# Bug Report\r\n\r\nGot error message on an empty directory, shouldn't it show nothing? like ls command.\r\n\r\n\r\n<!--\r\n## Issue name\r\n\r\nIssue names must follow the pattern `command: description` where the command is the dvc command that you are trying to run. The description should describe the consequence of the bug. \r\n\r\nExample: `repro: doesn't detect input changes`\r\n-->\r\n\r\n## Description\r\n\r\nError when list a empty path, strange behavior.\r\nMight relate to https://github.com/iterative/dvc/blob/daf07451f8e8f3e76a791c696b0ea175e8ed3ac1/dvc/repo/ls.py#L40-L41\r\n\r\n<!--\r\nA clear and concise description of what the bug is.\r\n-->\r\n\r\n### Reproduce\r\n\r\n1. git init\r\n2. dvc init\r\n3. mkdir empty\r\n4. dvc list . empty\r\n\r\n<!--\r\nStep list of how to reproduce the bug\r\n-->\r\n\r\n<!--\r\nExample:\r\n\r\n1. dvc init\r\n2. Copy dataset.zip to the directory\r\n3. dvc add dataset.zip\r\n4. dvc run -d dataset.zip -o model ./train.sh\r\n5. modify dataset.zip\r\n6. 
dvc repro\r\n-->\r\n\r\n### Expected\r\nShow nothing like ls command\r\n\r\n\r\n<!--\r\nA clear and concise description of what you expect to happen.\r\n-->\r\n\r\n### Environment information\r\nDVC version: 2.0.17+7e4851\r\n---------------------------------\r\nPlatform: Python 3.8.8 on macOS-10.16-x86_64-i386-64bit\r\nSupports: All remotes\r\nCache types: <https://error.dvc.org/no-dvc-cache>\r\nCaches: local\r\nRemotes: None\r\nWorkspace directory: apfs on /dev/disk3s1s1\r\nRepo: dvc, git\r\n<!--\r\nThis is required to ensure that we can reproduce the bug.\r\n-->\r\n\r\n**Output of `dvc doctor`:**\r\n\r\n```console\r\n$ dvc doctor\r\n```\r\n\r\n**Additional Information (if any):**\r\n\r\n<!--\r\nPlease check https://github.com/iterative/dvc/wiki/Debugging-DVC on ways to gather more information regarding the issue.\r\n\r\nIf applicable, please also provide a `--verbose` output of the command, eg: `dvc add --verbose`.\r\nIf the issue is regarding the performance, please attach the profiling information and the benchmark comparisons.\r\n-->\r\n\n", "before_files": [{"content": "import os\nfrom itertools import chain\n\nfrom dvc.exceptions import PathMissingError\n\n\ndef ls(url, path=None, rev=None, recursive=None, dvc_only=False):\n \"\"\"Methods for getting files and outputs for the repo.\n\n Args:\n url (str): the repo url\n path (str, optional): relative path into the repo\n rev (str, optional): SHA commit, branch or tag name\n recursive (bool, optional): recursively walk the repo\n dvc_only (bool, optional): show only DVC-artifacts\n\n Returns:\n list of `entry`\n\n Notes:\n `entry` is a dictionary with structure\n {\n \"path\": str,\n \"isout\": bool,\n \"isdir\": bool,\n \"isexec\": bool,\n }\n \"\"\"\n from . import Repo\n\n with Repo.open(url, rev=rev, subrepos=True, uninitialized=True) as repo:\n path = path or \"\"\n\n ret = _ls(repo.repo_fs, path, recursive, dvc_only)\n\n if path and not ret:\n raise PathMissingError(path, repo, dvc_only=dvc_only)\n\n ret_list = []\n for path, info in ret.items():\n info[\"path\"] = path\n ret_list.append(info)\n ret_list.sort(key=lambda f: f[\"path\"])\n return ret_list\n\n\ndef _ls(fs, path, recursive=None, dvc_only=False):\n fs_path = fs.from_os_path(path)\n\n try:\n fs_path = fs.info(fs_path)[\"name\"]\n except FileNotFoundError:\n return {}\n\n infos = {}\n for root, dirs, files in fs.walk(\n fs_path, dvcfiles=True, dvc_only=dvc_only\n ):\n entries = chain(files, dirs) if not recursive else files\n\n for entry in entries:\n entry_fs_path = fs.path.join(root, entry)\n relparts = fs.path.relparts(entry_fs_path, fs_path)\n name = os.path.join(*relparts)\n infos[name] = fs.info(entry_fs_path)\n\n if not recursive:\n break\n\n if not infos and fs.isfile(fs_path):\n infos[os.path.basename(path)] = fs.info(fs_path)\n\n ret = {}\n for name, info in infos.items():\n dvc_info = info.get(\"dvc_info\", {})\n if dvc_info.get(\"outs\") or not dvc_only:\n ret[name] = {\n \"isout\": dvc_info.get(\"isout\", False),\n \"isdir\": info[\"type\"] == \"directory\",\n \"isexec\": info.get(\"isexec\", False),\n }\n\n return ret\n", "path": "dvc/repo/ls.py"}], "after_files": [{"content": "import os\nfrom itertools import chain\nfrom typing import TYPE_CHECKING, Optional\n\nfrom dvc.exceptions import PathMissingError\n\nif TYPE_CHECKING:\n from dvc.fs.repo import RepoFileSystem\n\n from . 
import Repo\n\n\ndef ls(\n url: str,\n path: Optional[str] = None,\n rev: str = None,\n recursive: bool = None,\n dvc_only: bool = False,\n):\n \"\"\"Methods for getting files and outputs for the repo.\n\n Args:\n url (str): the repo url\n path (str, optional): relative path into the repo\n rev (str, optional): SHA commit, branch or tag name\n recursive (bool, optional): recursively walk the repo\n dvc_only (bool, optional): show only DVC-artifacts\n\n Returns:\n list of `entry`\n\n Notes:\n `entry` is a dictionary with structure\n {\n \"path\": str,\n \"isout\": bool,\n \"isdir\": bool,\n \"isexec\": bool,\n }\n \"\"\"\n from . import Repo\n\n with Repo.open(url, rev=rev, subrepos=True, uninitialized=True) as repo:\n path = path or \"\"\n\n ret = _ls(repo, path, recursive, dvc_only)\n\n ret_list = []\n for path, info in ret.items():\n info[\"path\"] = path\n ret_list.append(info)\n ret_list.sort(key=lambda f: f[\"path\"])\n return ret_list\n\n\ndef _ls(\n repo: \"Repo\", path: str, recursive: bool = None, dvc_only: bool = False\n):\n fs: \"RepoFileSystem\" = repo.repo_fs\n fs_path = fs.from_os_path(path)\n\n try:\n fs_path = fs.info(fs_path)[\"name\"]\n except FileNotFoundError:\n raise PathMissingError(path, repo, dvc_only=dvc_only)\n\n infos = {}\n for root, dirs, files in fs.walk(\n fs_path, dvcfiles=True, dvc_only=dvc_only\n ):\n entries = chain(files, dirs) if not recursive else files\n\n for entry in entries:\n entry_fs_path = fs.path.join(root, entry)\n relparts = fs.path.relparts(entry_fs_path, fs_path)\n name = os.path.join(*relparts)\n infos[name] = fs.info(entry_fs_path)\n\n if not recursive:\n break\n\n if not infos and fs.isfile(fs_path):\n infos[os.path.basename(path)] = fs.info(fs_path)\n\n ret = {}\n for name, info in infos.items():\n dvc_info = info.get(\"dvc_info\", {})\n if dvc_info.get(\"outs\") or not dvc_only:\n ret[name] = {\n \"isout\": dvc_info.get(\"isout\", False),\n \"isdir\": info[\"type\"] == \"directory\",\n \"isexec\": info.get(\"isexec\", False),\n }\n\n return ret\n", "path": "dvc/repo/ls.py"}]} | 1,662 | 448 |
gh_patches_debug_14771 | rasdani/github-patches | git_diff | litestar-org__litestar-992 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: Running `starlite run` after installing starlite[cli] gives error about missing cryptography package
The error is here:
```
Traceback (most recent call last):
File "C:\Users\hanne\Documents\Programme\analyze-wiktionary\.venv\lib\site-packages\starlite\middleware\session\cookie_backend.py", line 20,
in <module>
from cryptography.exceptions import InvalidTag
ModuleNotFoundError: No module named 'cryptography'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\hanne\Documents\Programme\analyze-wiktionary\.venv\Scripts\starlite.exe\__main__.py", line 4, in <module>
File "C:\Users\hanne\Documents\Programme\analyze-wiktionary\.venv\lib\site-packages\starlite\cli.py", line 41, in <module>
from starlite.middleware.session import SessionMiddleware
File "C:\Users\hanne\Documents\Programme\analyze-wiktionary\.venv\lib\site-packages\starlite\middleware\session\__init__.py", line 2, in <module>
from .cookie_backend import (
File "C:\Users\hanne\Documents\Programme\analyze-wiktionary\.venv\lib\site-packages\starlite\middleware\session\cookie_backend.py", line 23,
in <module>
raise MissingDependencyException("cryptography is not installed") from e
starlite.exceptions.base_exceptions.MissingDependencyException: cryptography is not installed
```
I thought it might be a good idea to install the package automatically with the CLI extra. (Or to update the [docs](https://starlite-api.github.io/starlite/usage/19-cli/?h=uvicorn) if I'm missing something).
My versions: Windows, Python 3.10, starlite 1.46.0
PS: Thank you all for the great amount of effort you spend on this project!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlite/middleware/session/__init__.py`
Content:
```
1 from .base import SessionMiddleware
2 from .cookie_backend import (
3 CookieBackendConfig as SessionCookieConfig, # backwards compatible export
4 )
5
6 __all__ = [
7 "SessionMiddleware",
8 "SessionCookieConfig",
9 ]
10
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlite/middleware/session/__init__.py b/starlite/middleware/session/__init__.py
--- a/starlite/middleware/session/__init__.py
+++ b/starlite/middleware/session/__init__.py
@@ -1,9 +1,27 @@
+from typing import Any
+
+from starlite.utils import warn_deprecation
+
from .base import SessionMiddleware
-from .cookie_backend import (
- CookieBackendConfig as SessionCookieConfig, # backwards compatible export
-)
-
-__all__ = [
- "SessionMiddleware",
- "SessionCookieConfig",
-]
+
+
+def __getattr__(name: str) -> Any:
+ """Provide lazy importing as per https://peps.python.org/pep-0562/"""
+
+ if name != "SessionCookieConfig":
+ raise AttributeError(f"Module {__package__} has no attribute {name}")
+
+ from .cookie_backend import CookieBackendConfig
+
+ warn_deprecation(
+ deprecated_name=f"{name} from {__package__}",
+ kind="import",
+ alternative="'from startlite.middleware.sessions.cookie_backend import CookieBackendConfig'",
+ version="1.47.0",
+ )
+
+ globals()[name] = CookieBackendConfig
+ return CookieBackendConfig
+
+
+__all__ = ["SessionMiddleware"]
| {"golden_diff": "diff --git a/starlite/middleware/session/__init__.py b/starlite/middleware/session/__init__.py\n--- a/starlite/middleware/session/__init__.py\n+++ b/starlite/middleware/session/__init__.py\n@@ -1,9 +1,27 @@\n+from typing import Any\n+\n+from starlite.utils import warn_deprecation\n+\n from .base import SessionMiddleware\n-from .cookie_backend import (\n- CookieBackendConfig as SessionCookieConfig, # backwards compatible export\n-)\n-\n-__all__ = [\n- \"SessionMiddleware\",\n- \"SessionCookieConfig\",\n-]\n+\n+\n+def __getattr__(name: str) -> Any:\n+ \"\"\"Provide lazy importing as per https://peps.python.org/pep-0562/\"\"\"\n+\n+ if name != \"SessionCookieConfig\":\n+ raise AttributeError(f\"Module {__package__} has no attribute {name}\")\n+\n+ from .cookie_backend import CookieBackendConfig\n+\n+ warn_deprecation(\n+ deprecated_name=f\"{name} from {__package__}\",\n+ kind=\"import\",\n+ alternative=\"'from startlite.middleware.sessions.cookie_backend import CookieBackendConfig'\",\n+ version=\"1.47.0\",\n+ )\n+\n+ globals()[name] = CookieBackendConfig\n+ return CookieBackendConfig\n+\n+\n+__all__ = [\"SessionMiddleware\"]\n", "issue": "Bug: Running `starlite run` after installing starlite[cli] gives error about missing cryptography package\nThe error is here:\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\hanne\\Documents\\Programme\\analyze-wiktionary\\.venv\\lib\\site-packages\\starlite\\middleware\\session\\cookie_backend.py\", line 20, \r\nin <module>\r\n from cryptography.exceptions import InvalidTag\r\nModuleNotFoundError: No module named 'cryptography'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Python310\\lib\\runpy.py\", line 196, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"C:\\Python310\\lib\\runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\hanne\\Documents\\Programme\\analyze-wiktionary\\.venv\\Scripts\\starlite.exe\\__main__.py\", line 4, in <module>\r\n File \"C:\\Users\\hanne\\Documents\\Programme\\analyze-wiktionary\\.venv\\lib\\site-packages\\starlite\\cli.py\", line 41, in <module>\r\n from starlite.middleware.session import SessionMiddleware\r\n File \"C:\\Users\\hanne\\Documents\\Programme\\analyze-wiktionary\\.venv\\lib\\site-packages\\starlite\\middleware\\session\\__init__.py\", line 2, in <module>\r\n from .cookie_backend import (\r\n File \"C:\\Users\\hanne\\Documents\\Programme\\analyze-wiktionary\\.venv\\lib\\site-packages\\starlite\\middleware\\session\\cookie_backend.py\", line 23, \r\nin <module>\r\n raise MissingDependencyException(\"cryptography is not installed\") from e\r\nstarlite.exceptions.base_exceptions.MissingDependencyException: cryptography is not installed\r\n```\r\n\r\nI thought it might be a good idea to install the package automatically with the CLI extra. 
(Or to update the [docs](https://starlite-api.github.io/starlite/usage/19-cli/?h=uvicorn) if I'm missing something).\r\n\r\nMy versions: Windows, Python 3.10, starlite 1.46.0 \r\n\r\nPS: Thank you all for the great amount of effort you spend on this project!\n", "before_files": [{"content": "from .base import SessionMiddleware\nfrom .cookie_backend import (\n CookieBackendConfig as SessionCookieConfig, # backwards compatible export\n)\n\n__all__ = [\n \"SessionMiddleware\",\n \"SessionCookieConfig\",\n]\n", "path": "starlite/middleware/session/__init__.py"}], "after_files": [{"content": "from typing import Any\n\nfrom starlite.utils import warn_deprecation\n\nfrom .base import SessionMiddleware\n\n\ndef __getattr__(name: str) -> Any:\n \"\"\"Provide lazy importing as per https://peps.python.org/pep-0562/\"\"\"\n\n if name != \"SessionCookieConfig\":\n raise AttributeError(f\"Module {__package__} has no attribute {name}\")\n\n from .cookie_backend import CookieBackendConfig\n\n warn_deprecation(\n deprecated_name=f\"{name} from {__package__}\",\n kind=\"import\",\n alternative=\"'from startlite.middleware.sessions.cookie_backend import CookieBackendConfig'\",\n version=\"1.47.0\",\n )\n\n globals()[name] = CookieBackendConfig\n return CookieBackendConfig\n\n\n__all__ = [\"SessionMiddleware\"]\n", "path": "starlite/middleware/session/__init__.py"}]} | 848 | 296 |
gh_patches_debug_19426 | rasdani/github-patches | git_diff | nautobot__nautobot-975 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`::1/128` is not a valid prefix
<!--
NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.
This form is only for reporting reproducible bugs. If you need assistance
with Nautobot installation, or if you have a general question, please start a
discussion instead: https://github.com/nautobot/nautobot/discussions
Please describe the environment in which you are running Nautobot. Be sure
that you are running an unmodified instance of the latest stable release
before submitting a bug report, and that any plugins have been disabled.
-->
### Environment
* Python version: 3.6
* Nautobot version: 1.1.3
<!--
Describe in detail the exact steps that someone else can take to reproduce
this bug using the current stable release of Nautobot. Begin with the
creation of any necessary database objects and call out every operation
being performed explicitly. If reporting a bug in the REST API, be sure to
reconstruct the raw HTTP request(s) being made: Don't rely on a client
library such as pynautobot.
-->
When trying to create the prefix `::1/128` I get the following error:
```no-highlight
<class 'netaddr.core.AddrFormatError'>
invalid IPNetwork 0.0.0.1/128
```
Both Python netaddr and ipaddress modules see this as a valid IPNetwork.
### Steps to Reproduce
1. Create a prefix or aggregate using the prefix `::1/128`
<!-- What did you expect to happen? -->
### Expected Behavior
Prefix created
<!-- What happened instead? -->
### Observed Behavior
```
invalid IPNetwork 0.0.0.1/128
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nautobot/ipam/fields.py`
Content:
```
1 from django.core.exceptions import ValidationError
2 from django.db import models
3 from django.utils.datastructures import DictWrapper
4 import netaddr
5
6 from .formfields import IPNetworkFormField
7
8
9 class VarbinaryIPField(models.BinaryField):
10 """
11 IP network address
12 """
13
14 description = "IP network address"
15
16 def __init__(self, **kwargs):
17 super().__init__(**kwargs)
18
19 def db_type(self, connection):
20 """Returns the correct field type for a given database vendor."""
21
22 # Use 'bytea' type for PostgreSQL.
23 if connection.vendor == "postgresql":
24 return "bytea"
25
26 # Or 'varbinary' for everyone else.
27 return "varbinary(16)"
28
29 def value_to_string(self, obj):
30 """IPField is serialized as str(IPAddress())"""
31 value = self.value_from_object(obj)
32 if not value:
33 return value
34
35 return str(self._parse_address(value))
36
37 def _parse_address(self, value):
38 """
39 Parse `str`, `bytes` (varbinary), or `netaddr.IPAddress to `netaddr.IPAddress`.
40 """
41 try:
42 value = int.from_bytes(value, "big")
43 except TypeError:
44 pass # It's a string
45
46 try:
47 return netaddr.IPAddress(value)
48 except netaddr.AddrFormatError:
49 raise ValidationError("Invalid IP address format: {}".format(value))
50 except (TypeError, ValueError) as e:
51 raise ValidationError(e)
52
53 def from_db_value(self, value, expression, connection):
54 """Converts DB (varbinary) to Python (str)."""
55 return self.to_python(value)
56
57 def to_python(self, value):
58 """Converts `value` to Python (str)."""
59 if isinstance(value, netaddr.IPAddress):
60 return str(value)
61
62 if value is None:
63 return value
64
65 return str(self._parse_address(value))
66
67 def get_db_prep_value(self, value, connection, prepared=False):
68 """Converts Python (str) to DB (varbinary)."""
69 if value is None:
70 return value
71
72 # Parse the address and then pack it to binary.
73 value = self._parse_address(value).packed
74
75 # Use defaults for PostgreSQL
76 if connection.vendor == "postgresql":
77 return super().get_db_prep_value(value, connection, prepared)
78
79 return value
80
81 def form_class(self):
82 return IPNetworkFormField
83
84 def formfield(self, **kwargs):
85 defaults = {"form_class": self.form_class()}
86 defaults.update(kwargs)
87 return super().formfield(**defaults)
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nautobot/ipam/fields.py b/nautobot/ipam/fields.py
--- a/nautobot/ipam/fields.py
+++ b/nautobot/ipam/fields.py
@@ -39,12 +39,17 @@
Parse `str`, `bytes` (varbinary), or `netaddr.IPAddress to `netaddr.IPAddress`.
"""
try:
- value = int.from_bytes(value, "big")
+ int_value = int.from_bytes(value, "big")
+ # Distinguish between
+ # \x00\x00\x00\x01 (IPv4 0.0.0.1) and
+ # \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01 (IPv6 ::1), among other cases
+ version = 4 if len(value) == 4 else 6
+ value = int_value
except TypeError:
- pass # It's a string
+ version = None # It's a string, IP version should be self-evident
try:
- return netaddr.IPAddress(value)
+ return netaddr.IPAddress(value, version=version)
except netaddr.AddrFormatError:
raise ValidationError("Invalid IP address format: {}".format(value))
except (TypeError, ValueError) as e:
| {"golden_diff": "diff --git a/nautobot/ipam/fields.py b/nautobot/ipam/fields.py\n--- a/nautobot/ipam/fields.py\n+++ b/nautobot/ipam/fields.py\n@@ -39,12 +39,17 @@\n Parse `str`, `bytes` (varbinary), or `netaddr.IPAddress to `netaddr.IPAddress`.\n \"\"\"\n try:\n- value = int.from_bytes(value, \"big\")\n+ int_value = int.from_bytes(value, \"big\")\n+ # Distinguish between\n+ # \\x00\\x00\\x00\\x01 (IPv4 0.0.0.1) and\n+ # \\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01 (IPv6 ::1), among other cases\n+ version = 4 if len(value) == 4 else 6\n+ value = int_value\n except TypeError:\n- pass # It's a string\n+ version = None # It's a string, IP version should be self-evident\n \n try:\n- return netaddr.IPAddress(value)\n+ return netaddr.IPAddress(value, version=version)\n except netaddr.AddrFormatError:\n raise ValidationError(\"Invalid IP address format: {}\".format(value))\n except (TypeError, ValueError) as e:\n", "issue": "`::1/128` is not a valid prefix\n<!--\r\n NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.\r\n\r\n This form is only for reporting reproducible bugs. If you need assistance\r\n with Nautobot installation, or if you have a general question, please start a\r\n discussion instead: https://github.com/nautobot/nautobot/discussions\r\n\r\n Please describe the environment in which you are running Nautobot. Be sure\r\n that you are running an unmodified instance of the latest stable release\r\n before submitting a bug report, and that any plugins have been disabled.\r\n-->\r\n### Environment\r\n* Python version: 3.6\r\n* Nautobot version: 1.1.3\r\n\r\n<!--\r\n Describe in detail the exact steps that someone else can take to reproduce\r\n this bug using the current stable release of Nautobot. Begin with the\r\n creation of any necessary database objects and call out every operation\r\n being performed explicitly. If reporting a bug in the REST API, be sure to\r\n reconstruct the raw HTTP request(s) being made: Don't rely on a client\r\n library such as pynautobot.\r\n-->\r\n\r\nWhen trying to create the prefix `::1/128` I get the following error:\r\n\r\n```no-highlight\r\n<class 'netaddr.core.AddrFormatError'>\r\n\r\ninvalid IPNetwork 0.0.0.1/128\r\n```\r\n\r\nBoth Python netaddr and ipaddress modules see this as a valid IPNetwork. \r\n\r\n### Steps to Reproduce\r\n1. Create a prefix or aggregate using the prefix `::1/128`\r\n\r\n<!-- What did you expect to happen? -->\r\n### Expected Behavior\r\n\r\nPrefix created\r\n\r\n<!-- What happened instead? 
-->\r\n### Observed Behavior\r\n\r\n```\r\ninvalid IPNetwork 0.0.0.1/128\r\n```\n", "before_files": [{"content": "from django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.utils.datastructures import DictWrapper\nimport netaddr\n\nfrom .formfields import IPNetworkFormField\n\n\nclass VarbinaryIPField(models.BinaryField):\n \"\"\"\n IP network address\n \"\"\"\n\n description = \"IP network address\"\n\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n\n def db_type(self, connection):\n \"\"\"Returns the correct field type for a given database vendor.\"\"\"\n\n # Use 'bytea' type for PostgreSQL.\n if connection.vendor == \"postgresql\":\n return \"bytea\"\n\n # Or 'varbinary' for everyone else.\n return \"varbinary(16)\"\n\n def value_to_string(self, obj):\n \"\"\"IPField is serialized as str(IPAddress())\"\"\"\n value = self.value_from_object(obj)\n if not value:\n return value\n\n return str(self._parse_address(value))\n\n def _parse_address(self, value):\n \"\"\"\n Parse `str`, `bytes` (varbinary), or `netaddr.IPAddress to `netaddr.IPAddress`.\n \"\"\"\n try:\n value = int.from_bytes(value, \"big\")\n except TypeError:\n pass # It's a string\n\n try:\n return netaddr.IPAddress(value)\n except netaddr.AddrFormatError:\n raise ValidationError(\"Invalid IP address format: {}\".format(value))\n except (TypeError, ValueError) as e:\n raise ValidationError(e)\n\n def from_db_value(self, value, expression, connection):\n \"\"\"Converts DB (varbinary) to Python (str).\"\"\"\n return self.to_python(value)\n\n def to_python(self, value):\n \"\"\"Converts `value` to Python (str).\"\"\"\n if isinstance(value, netaddr.IPAddress):\n return str(value)\n\n if value is None:\n return value\n\n return str(self._parse_address(value))\n\n def get_db_prep_value(self, value, connection, prepared=False):\n \"\"\"Converts Python (str) to DB (varbinary).\"\"\"\n if value is None:\n return value\n\n # Parse the address and then pack it to binary.\n value = self._parse_address(value).packed\n\n # Use defaults for PostgreSQL\n if connection.vendor == \"postgresql\":\n return super().get_db_prep_value(value, connection, prepared)\n\n return value\n\n def form_class(self):\n return IPNetworkFormField\n\n def formfield(self, **kwargs):\n defaults = {\"form_class\": self.form_class()}\n defaults.update(kwargs)\n return super().formfield(**defaults)\n", "path": "nautobot/ipam/fields.py"}], "after_files": [{"content": "from django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.utils.datastructures import DictWrapper\nimport netaddr\n\nfrom .formfields import IPNetworkFormField\n\n\nclass VarbinaryIPField(models.BinaryField):\n \"\"\"\n IP network address\n \"\"\"\n\n description = \"IP network address\"\n\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n\n def db_type(self, connection):\n \"\"\"Returns the correct field type for a given database vendor.\"\"\"\n\n # Use 'bytea' type for PostgreSQL.\n if connection.vendor == \"postgresql\":\n return \"bytea\"\n\n # Or 'varbinary' for everyone else.\n return \"varbinary(16)\"\n\n def value_to_string(self, obj):\n \"\"\"IPField is serialized as str(IPAddress())\"\"\"\n value = self.value_from_object(obj)\n if not value:\n return value\n\n return str(self._parse_address(value))\n\n def _parse_address(self, value):\n \"\"\"\n Parse `str`, `bytes` (varbinary), or `netaddr.IPAddress to `netaddr.IPAddress`.\n \"\"\"\n try:\n int_value = int.from_bytes(value, \"big\")\n # Distinguish 
between\n # \\x00\\x00\\x00\\x01 (IPv4 0.0.0.1) and\n # \\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01 (IPv6 ::1), among other cases\n version = 4 if len(value) == 4 else 6\n value = int_value\n except TypeError:\n version = None # It's a string, IP version should be self-evident\n\n try:\n return netaddr.IPAddress(value, version=version)\n except netaddr.AddrFormatError:\n raise ValidationError(\"Invalid IP address format: {}\".format(value))\n except (TypeError, ValueError) as e:\n raise ValidationError(e)\n\n def from_db_value(self, value, expression, connection):\n \"\"\"Converts DB (varbinary) to Python (str).\"\"\"\n return self.to_python(value)\n\n def to_python(self, value):\n \"\"\"Converts `value` to Python (str).\"\"\"\n if isinstance(value, netaddr.IPAddress):\n return str(value)\n\n if value is None:\n return value\n\n return str(self._parse_address(value))\n\n def get_db_prep_value(self, value, connection, prepared=False):\n \"\"\"Converts Python (str) to DB (varbinary).\"\"\"\n if value is None:\n return value\n\n # Parse the address and then pack it to binary.\n value = self._parse_address(value).packed\n\n # Use defaults for PostgreSQL\n if connection.vendor == \"postgresql\":\n return super().get_db_prep_value(value, connection, prepared)\n\n return value\n\n def form_class(self):\n return IPNetworkFormField\n\n def formfield(self, **kwargs):\n defaults = {\"form_class\": self.form_class()}\n defaults.update(kwargs)\n return super().formfield(**defaults)\n", "path": "nautobot/ipam/fields.py"}]} | 1,382 | 327 |
gh_patches_debug_5419 | rasdani/github-patches | git_diff | scrapy__scrapy-475 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Ability to not send specific headers in HTTP requests
Some web servers behave differently when they are receive or don't receive specific headers.
For example FeedBurner (http://feeds.feedburner.com/someblog) sends out XML RSS feeds **only is you do not set the "Referer" header.**
The idea would be to use the `headers` dict with some keys with a `None` value, and skip these headers when sending the HTTP request.
Currently, for the "Referer" example:
- `headers={"Referer": None}` sends "Referer: None"
- `headers={"Referer": ""}` sends "Referer: " (which works for the FeedBurner case, but is not satisfactory)
- disable `RefererMiddleware` but that feels a bit heavy
(for this FeedBurner thing, apparently adding `?format=xml` also does the trick)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/http/headers.py`
Content:
```
1 from w3lib.http import headers_dict_to_raw
2 from scrapy.utils.datatypes import CaselessDict
3
4
5 class Headers(CaselessDict):
6 """Case insensitive http headers dictionary"""
7
8 def __init__(self, seq=None, encoding='utf-8'):
9 self.encoding = encoding
10 super(Headers, self).__init__(seq)
11
12 def normkey(self, key):
13 """Headers must not be unicode"""
14 if isinstance(key, unicode):
15 return key.title().encode(self.encoding)
16 return key.title()
17
18 def normvalue(self, value):
19 """Headers must not be unicode"""
20 if not hasattr(value, '__iter__'):
21 value = [value]
22 return [x.encode(self.encoding) if isinstance(x, unicode) else x \
23 for x in value]
24
25 def __getitem__(self, key):
26 try:
27 return super(Headers, self).__getitem__(key)[-1]
28 except IndexError:
29 return None
30
31 def get(self, key, def_val=None):
32 try:
33 return super(Headers, self).get(key, def_val)[-1]
34 except IndexError:
35 return None
36
37 def getlist(self, key, def_val=None):
38 try:
39 return super(Headers, self).__getitem__(key)
40 except KeyError:
41 if def_val is not None:
42 return self.normvalue(def_val)
43 return []
44
45 def setlist(self, key, list_):
46 self[key] = list_
47
48 def setlistdefault(self, key, default_list=()):
49 return self.setdefault(key, default_list)
50
51 def appendlist(self, key, value):
52 lst = self.getlist(key)
53 lst.extend(self.normvalue(value))
54 self[key] = lst
55
56 def items(self):
57 return list(self.iteritems())
58
59 def iteritems(self):
60 return ((k, self.getlist(k)) for k in self.keys())
61
62 def values(self):
63 return [self[k] for k in self.keys()]
64
65 def to_string(self):
66 return headers_dict_to_raw(self)
67
68 def __copy__(self):
69 return self.__class__(self)
70 copy = __copy__
71
72
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scrapy/http/headers.py b/scrapy/http/headers.py
--- a/scrapy/http/headers.py
+++ b/scrapy/http/headers.py
@@ -17,7 +17,9 @@
def normvalue(self, value):
"""Headers must not be unicode"""
- if not hasattr(value, '__iter__'):
+ if value is None:
+ value = []
+ elif not hasattr(value, '__iter__'):
value = [value]
return [x.encode(self.encoding) if isinstance(x, unicode) else x \
for x in value]
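For readers porting this mentally to Python 3 (the file above still targets Python 2's `unicode`), the patched rule can be restated as a small hypothetical helper: `None` becomes an empty value list, which is what makes the header disappear from the serialized request.

```python
def normvalue(value, encoding="utf-8"):
    if value is None:
        return []                                   # empty list == header is omitted
    if isinstance(value, (str, bytes)) or not hasattr(value, "__iter__"):
        value = [value]
    return [v.encode(encoding) if isinstance(v, str) else v for v in value]

assert normvalue(None) == []
assert normvalue("x") == [b"x"]
assert normvalue(["a", b"b"]) == [b"a", b"b"]
```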
| {"golden_diff": "diff --git a/scrapy/http/headers.py b/scrapy/http/headers.py\n--- a/scrapy/http/headers.py\n+++ b/scrapy/http/headers.py\n@@ -17,7 +17,9 @@\n \n def normvalue(self, value):\n \"\"\"Headers must not be unicode\"\"\"\n- if not hasattr(value, '__iter__'):\n+ if value is None:\n+ value = []\n+ elif not hasattr(value, '__iter__'):\n value = [value]\n return [x.encode(self.encoding) if isinstance(x, unicode) else x \\\n for x in value]\n", "issue": "Ability to not send specific headers in HTTP requests\nSome web servers behave differently when they are receive or don't receive specific headers.\n\nFor example FeedBurner (http://feeds.feedburner.com/someblog) sends out XML RSS feeds **only is you do not set the \"Referer\" header.**\n\nThe idea would be to use the `headers` dict with some keys with a `None` value, and skip these headers when sending the HTTP request.\n\nCurrently, for the \"Referer\" example:\n- `headers={\"Referer\": None}` sends \"Referer: None\"\n- `headers={\"Referer\": \"\"}` sends \"Referer: \" (which works for the FeedBurner case, but is not satisfactory)\n- disable `RefererMiddleware` but that feels a bit heavy\n\n(for this FeedBurner thing, apparently adding `?format=xml` also does the trick)\n\n", "before_files": [{"content": "from w3lib.http import headers_dict_to_raw\nfrom scrapy.utils.datatypes import CaselessDict\n\n\nclass Headers(CaselessDict):\n \"\"\"Case insensitive http headers dictionary\"\"\"\n\n def __init__(self, seq=None, encoding='utf-8'):\n self.encoding = encoding\n super(Headers, self).__init__(seq)\n\n def normkey(self, key):\n \"\"\"Headers must not be unicode\"\"\"\n if isinstance(key, unicode):\n return key.title().encode(self.encoding)\n return key.title()\n\n def normvalue(self, value):\n \"\"\"Headers must not be unicode\"\"\"\n if not hasattr(value, '__iter__'):\n value = [value]\n return [x.encode(self.encoding) if isinstance(x, unicode) else x \\\n for x in value]\n\n def __getitem__(self, key):\n try:\n return super(Headers, self).__getitem__(key)[-1]\n except IndexError:\n return None\n\n def get(self, key, def_val=None):\n try:\n return super(Headers, self).get(key, def_val)[-1]\n except IndexError:\n return None\n\n def getlist(self, key, def_val=None):\n try:\n return super(Headers, self).__getitem__(key)\n except KeyError:\n if def_val is not None:\n return self.normvalue(def_val)\n return []\n\n def setlist(self, key, list_):\n self[key] = list_\n\n def setlistdefault(self, key, default_list=()):\n return self.setdefault(key, default_list)\n\n def appendlist(self, key, value):\n lst = self.getlist(key)\n lst.extend(self.normvalue(value))\n self[key] = lst\n\n def items(self):\n return list(self.iteritems())\n\n def iteritems(self):\n return ((k, self.getlist(k)) for k in self.keys())\n\n def values(self):\n return [self[k] for k in self.keys()]\n\n def to_string(self):\n return headers_dict_to_raw(self)\n\n def __copy__(self):\n return self.__class__(self)\n copy = __copy__\n\n\n", "path": "scrapy/http/headers.py"}], "after_files": [{"content": "from w3lib.http import headers_dict_to_raw\nfrom scrapy.utils.datatypes import CaselessDict\n\n\nclass Headers(CaselessDict):\n \"\"\"Case insensitive http headers dictionary\"\"\"\n\n def __init__(self, seq=None, encoding='utf-8'):\n self.encoding = encoding\n super(Headers, self).__init__(seq)\n\n def normkey(self, key):\n \"\"\"Headers must not be unicode\"\"\"\n if isinstance(key, unicode):\n return key.title().encode(self.encoding)\n return key.title()\n\n def 
normvalue(self, value):\n \"\"\"Headers must not be unicode\"\"\"\n if value is None:\n value = []\n elif not hasattr(value, '__iter__'):\n value = [value]\n return [x.encode(self.encoding) if isinstance(x, unicode) else x \\\n for x in value]\n\n def __getitem__(self, key):\n try:\n return super(Headers, self).__getitem__(key)[-1]\n except IndexError:\n return None\n\n def get(self, key, def_val=None):\n try:\n return super(Headers, self).get(key, def_val)[-1]\n except IndexError:\n return None\n\n def getlist(self, key, def_val=None):\n try:\n return super(Headers, self).__getitem__(key)\n except KeyError:\n if def_val is not None:\n return self.normvalue(def_val)\n return []\n\n def setlist(self, key, list_):\n self[key] = list_\n\n def setlistdefault(self, key, default_list=()):\n return self.setdefault(key, default_list)\n\n def appendlist(self, key, value):\n lst = self.getlist(key)\n lst.extend(self.normvalue(value))\n self[key] = lst\n\n def items(self):\n return list(self.iteritems())\n\n def iteritems(self):\n return ((k, self.getlist(k)) for k in self.keys())\n\n def values(self):\n return [self[k] for k in self.keys()]\n\n def to_string(self):\n return headers_dict_to_raw(self)\n\n def __copy__(self):\n return self.__class__(self)\n copy = __copy__\n\n\n", "path": "scrapy/http/headers.py"}]} | 1,040 | 128 |
gh_patches_debug_6813 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-436 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make master exist when there are no tasks left.
Currently, master exists when there are no tasks left AND all workers are gone. It might left hanging if a worker got preempted.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticdl/python/elasticdl/master/main.py`
Content:
```
1 import logging
2 import time
3 import argparse
4 import os
5
6 import grpc
7 import tensorflow as tf
8
9 tf.enable_eager_execution()
10
11 from concurrent import futures
12 from recordio import File
13 from elasticdl.proto import master_pb2_grpc
14 from elasticdl.master.servicer import MasterServicer
15 from elasticdl.master.task_queue import _TaskQueue
16 from elasticdl.master.k8s_worker_manager import WorkerManager
17 from elasticdl.common.model_helper import load_user_model, build_model
18
19
20 def _make_task_queue(data_dir, record_per_task, num_epoch):
21 f_records = {}
22 for f in os.listdir(data_dir):
23 p = os.path.join(data_dir, f)
24 with File(p, "r") as rio:
25 f_records[p] = rio.count()
26 return _TaskQueue(f_records, record_per_task, num_epoch)
27
28
29 def _parse_args():
30 parser = argparse.ArgumentParser(description="ElasticDL Master")
31 parser.add_argument(
32 "--model_file",
33 help="Full file path of user defined neural model",
34 required=True,
35 )
36 parser.add_argument(
37 "--train_data_dir",
38 help="Training data directory. Files should be in RecordIO format",
39 required=True,
40 )
41 parser.add_argument("--record_per_task", type=int, required=True)
42 parser.add_argument("--num_epoch", type=int, required=True)
43 parser.add_argument(
44 "--grads_to_wait",
45 type=int,
46 help="Number of gradients to wait before updating model",
47 required=True,
48 )
49 parser.add_argument(
50 "--minibatch_size",
51 type=int,
52 help="Minibatch size used by workers to compute gradients",
53 required=True,
54 )
55 parser.add_argument(
56 "--num_worker",
57 type=int,
58 help="the number of workers used in training",
59 default=0,
60 )
61 parser.add_argument(
62 "--worker_cpu_request",
63 help="the minimal cpu required by worker in training",
64 default="1000m",
65 )
66 parser.add_argument(
67 "--worker_cpu_limit",
68 help="the maximal cpu used by worker in training",
69 default="1000m",
70 )
71 parser.add_argument(
72 "--worker_memory_request",
73 help="the minimal memory required by worker in training",
74 default="4096Mi",
75 )
76 parser.add_argument(
77 "--worker_memory_limit",
78 help="the maximal memory used by worker in training",
79 default="4096Mi",
80 )
81 parser.add_argument(
82 "--worker_pod_priority",
83 help="the requested priority of worker pod")
84 parser.add_argument(
85 "--worker_image", help="docker image for worker", default=None
86 )
87 parser.add_argument("--job_name", help="job name", required=True)
88 parser.add_argument(
89 "--codec_type",
90 default="bytes",
91 choices=["tf_example", "bytes"],
92 help="Type of codec(tf_example or bytes)",
93 )
94 return parser.parse_args()
95
96
97 def main():
98 # TODO: pass port via flags.
99 PORT = 50001
100 logger = logging.getLogger("master")
101 args = _parse_args()
102 task_q = _make_task_queue(
103 args.train_data_dir, args.record_per_task, args.num_epoch
104 )
105 model_module = load_user_model(args.model_file)
106 model_inst = model_module.model
107 build_model(model_inst, model_module.feature_columns())
108 optimizer = model_module.optimizer()
109
110 server = grpc.server(futures.ThreadPoolExecutor(max_workers=64))
111 master_pb2_grpc.add_MasterServicer_to_server(
112 MasterServicer(
113 logger,
114 args.grads_to_wait,
115 args.minibatch_size,
116 optimizer,
117 task_q,
118 init_var=model_inst.trainable_variables,
119 ),
120 server,
121 )
122 server.add_insecure_port("[::]:{}".format(PORT))
123 server.start()
124 logger.warning("Server started at port: %d", PORT)
125
126 if args.num_worker:
127 master_addr = "%s:%d" % (os.getenv("MY_POD_IP", "localhost"), PORT)
128 worker_command = ["python"]
129 worker_args = [
130 "-m",
131 "elasticdl.worker.main",
132 "--model_file",
133 args.model_file,
134 "--master_addr",
135 master_addr,
136 "--codec_type",
137 args.codec_type
138 ]
139
140 worker_manager = WorkerManager(
141 job_name=args.job_name,
142 worker_image=args.worker_image,
143 command=worker_command,
144 args=worker_args,
145 namespace="default",
146 num_worker=args.num_worker,
147 cpu_request=args.worker_cpu_request,
148 cpu_limit=args.worker_cpu_limit,
149 memory_request=args.worker_memory_request,
150 memory_limit=args.worker_memory_limit,
151 pod_priority=args.worker_pod_priority,
152 )
153 worker_manager.start_workers(restart_policy="Never")
154
155 try:
156 while True:
157 if task_q.finished():
158 break
159 time.sleep(30)
160 except KeyboardInterrupt:
161 logger.warning("Server stopping")
162
163 if args.num_worker:
164 # TODO: worker_manager.remove_workers supports synchronized call
165 worker_manager.remove_workers()
166 # wait for worker pod to be deleted
167 max_check_num = 10
168 for _ in range(max_check_num):
169 time.sleep(3)
170 counters = worker_manager.get_counters()
171 if not counters:
172 break
173 server.stop(0)
174
175
176 if __name__ == "__main__":
177 logging.basicConfig()
178 main()
179
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticdl/python/elasticdl/master/main.py b/elasticdl/python/elasticdl/master/main.py
--- a/elasticdl/python/elasticdl/master/main.py
+++ b/elasticdl/python/elasticdl/master/main.py
@@ -163,13 +163,7 @@
if args.num_worker:
# TODO: worker_manager.remove_workers supports synchronized call
worker_manager.remove_workers()
- # wait for worker pod to be deleted
- max_check_num = 10
- for _ in range(max_check_num):
- time.sleep(3)
- counters = worker_manager.get_counters()
- if not counters:
- break
+
server.stop(0)
| {"golden_diff": "diff --git a/elasticdl/python/elasticdl/master/main.py b/elasticdl/python/elasticdl/master/main.py\n--- a/elasticdl/python/elasticdl/master/main.py\n+++ b/elasticdl/python/elasticdl/master/main.py\n@@ -163,13 +163,7 @@\n if args.num_worker:\n # TODO: worker_manager.remove_workers supports synchronized call\n worker_manager.remove_workers()\n- # wait for worker pod to be deleted\n- max_check_num = 10\n- for _ in range(max_check_num):\n- time.sleep(3)\n- counters = worker_manager.get_counters()\n- if not counters:\n- break\n+\n server.stop(0)\n", "issue": "Make master exist when there are no tasks left.\nCurrently, master exists when there are no tasks left AND all workers are gone. It might left hanging if a worker got preempted.\n", "before_files": [{"content": "import logging\nimport time\nimport argparse\nimport os\n\nimport grpc\nimport tensorflow as tf\n\ntf.enable_eager_execution()\n\nfrom concurrent import futures\nfrom recordio import File\nfrom elasticdl.proto import master_pb2_grpc\nfrom elasticdl.master.servicer import MasterServicer\nfrom elasticdl.master.task_queue import _TaskQueue\nfrom elasticdl.master.k8s_worker_manager import WorkerManager\nfrom elasticdl.common.model_helper import load_user_model, build_model\n\n\ndef _make_task_queue(data_dir, record_per_task, num_epoch):\n f_records = {}\n for f in os.listdir(data_dir):\n p = os.path.join(data_dir, f)\n with File(p, \"r\") as rio:\n f_records[p] = rio.count()\n return _TaskQueue(f_records, record_per_task, num_epoch)\n\n\ndef _parse_args():\n parser = argparse.ArgumentParser(description=\"ElasticDL Master\")\n parser.add_argument(\n \"--model_file\",\n help=\"Full file path of user defined neural model\",\n required=True,\n )\n parser.add_argument(\n \"--train_data_dir\",\n help=\"Training data directory. 
Files should be in RecordIO format\",\n required=True,\n )\n parser.add_argument(\"--record_per_task\", type=int, required=True)\n parser.add_argument(\"--num_epoch\", type=int, required=True)\n parser.add_argument(\n \"--grads_to_wait\",\n type=int,\n help=\"Number of gradients to wait before updating model\",\n required=True,\n )\n parser.add_argument(\n \"--minibatch_size\",\n type=int,\n help=\"Minibatch size used by workers to compute gradients\",\n required=True,\n )\n parser.add_argument(\n \"--num_worker\",\n type=int,\n help=\"the number of workers used in training\",\n default=0,\n )\n parser.add_argument(\n \"--worker_cpu_request\",\n help=\"the minimal cpu required by worker in training\",\n default=\"1000m\",\n )\n parser.add_argument(\n \"--worker_cpu_limit\",\n help=\"the maximal cpu used by worker in training\",\n default=\"1000m\",\n )\n parser.add_argument(\n \"--worker_memory_request\",\n help=\"the minimal memory required by worker in training\",\n default=\"4096Mi\",\n )\n parser.add_argument(\n \"--worker_memory_limit\",\n help=\"the maximal memory used by worker in training\",\n default=\"4096Mi\",\n )\n parser.add_argument(\n \"--worker_pod_priority\",\n help=\"the requested priority of worker pod\")\n parser.add_argument(\n \"--worker_image\", help=\"docker image for worker\", default=None\n )\n parser.add_argument(\"--job_name\", help=\"job name\", required=True)\n parser.add_argument(\n \"--codec_type\",\n default=\"bytes\",\n choices=[\"tf_example\", \"bytes\"],\n help=\"Type of codec(tf_example or bytes)\",\n )\n return parser.parse_args()\n\n\ndef main():\n # TODO: pass port via flags.\n PORT = 50001\n logger = logging.getLogger(\"master\")\n args = _parse_args()\n task_q = _make_task_queue(\n args.train_data_dir, args.record_per_task, args.num_epoch\n )\n model_module = load_user_model(args.model_file)\n model_inst = model_module.model\n build_model(model_inst, model_module.feature_columns())\n optimizer = model_module.optimizer()\n\n server = grpc.server(futures.ThreadPoolExecutor(max_workers=64))\n master_pb2_grpc.add_MasterServicer_to_server(\n MasterServicer(\n logger,\n args.grads_to_wait,\n args.minibatch_size,\n optimizer,\n task_q,\n init_var=model_inst.trainable_variables,\n ),\n server,\n )\n server.add_insecure_port(\"[::]:{}\".format(PORT))\n server.start()\n logger.warning(\"Server started at port: %d\", PORT)\n\n if args.num_worker:\n master_addr = \"%s:%d\" % (os.getenv(\"MY_POD_IP\", \"localhost\"), PORT)\n worker_command = [\"python\"]\n worker_args = [\n \"-m\",\n \"elasticdl.worker.main\",\n \"--model_file\",\n args.model_file,\n \"--master_addr\",\n master_addr,\n \"--codec_type\",\n args.codec_type\n ]\n\n worker_manager = WorkerManager(\n job_name=args.job_name,\n worker_image=args.worker_image,\n command=worker_command,\n args=worker_args,\n namespace=\"default\",\n num_worker=args.num_worker,\n cpu_request=args.worker_cpu_request,\n cpu_limit=args.worker_cpu_limit,\n memory_request=args.worker_memory_request,\n memory_limit=args.worker_memory_limit,\n pod_priority=args.worker_pod_priority,\n )\n worker_manager.start_workers(restart_policy=\"Never\")\n\n try:\n while True:\n if task_q.finished():\n break\n time.sleep(30)\n except KeyboardInterrupt:\n logger.warning(\"Server stopping\")\n\n if args.num_worker:\n # TODO: worker_manager.remove_workers supports synchronized call\n worker_manager.remove_workers()\n # wait for worker pod to be deleted\n max_check_num = 10\n for _ in range(max_check_num):\n time.sleep(3)\n counters = 
worker_manager.get_counters()\n if not counters:\n break\n server.stop(0)\n\n\nif __name__ == \"__main__\":\n logging.basicConfig()\n main()\n", "path": "elasticdl/python/elasticdl/master/main.py"}], "after_files": [{"content": "import logging\nimport time\nimport argparse\nimport os\n\nimport grpc\nimport tensorflow as tf\n\ntf.enable_eager_execution()\n\nfrom concurrent import futures\nfrom recordio import File\nfrom elasticdl.proto import master_pb2_grpc\nfrom elasticdl.master.servicer import MasterServicer\nfrom elasticdl.master.task_queue import _TaskQueue\nfrom elasticdl.master.k8s_worker_manager import WorkerManager\nfrom elasticdl.common.model_helper import load_user_model, build_model\n\n\ndef _make_task_queue(data_dir, record_per_task, num_epoch):\n f_records = {}\n for f in os.listdir(data_dir):\n p = os.path.join(data_dir, f)\n with File(p, \"r\") as rio:\n f_records[p] = rio.count()\n return _TaskQueue(f_records, record_per_task, num_epoch)\n\n\ndef _parse_args():\n parser = argparse.ArgumentParser(description=\"ElasticDL Master\")\n parser.add_argument(\n \"--model_file\",\n help=\"Full file path of user defined neural model\",\n required=True,\n )\n parser.add_argument(\n \"--train_data_dir\",\n help=\"Training data directory. Files should be in RecordIO format\",\n required=True,\n )\n parser.add_argument(\"--record_per_task\", type=int, required=True)\n parser.add_argument(\"--num_epoch\", type=int, required=True)\n parser.add_argument(\n \"--grads_to_wait\",\n type=int,\n help=\"Number of gradients to wait before updating model\",\n required=True,\n )\n parser.add_argument(\n \"--minibatch_size\",\n type=int,\n help=\"Minibatch size used by workers to compute gradients\",\n required=True,\n )\n parser.add_argument(\n \"--num_worker\",\n type=int,\n help=\"the number of workers used in training\",\n default=0,\n )\n parser.add_argument(\n \"--worker_cpu_request\",\n help=\"the minimal cpu required by worker in training\",\n default=\"1000m\",\n )\n parser.add_argument(\n \"--worker_cpu_limit\",\n help=\"the maximal cpu used by worker in training\",\n default=\"1000m\",\n )\n parser.add_argument(\n \"--worker_memory_request\",\n help=\"the minimal memory required by worker in training\",\n default=\"4096Mi\",\n )\n parser.add_argument(\n \"--worker_memory_limit\",\n help=\"the maximal memory used by worker in training\",\n default=\"4096Mi\",\n )\n parser.add_argument(\n \"--worker_pod_priority\",\n help=\"the requested priority of worker pod\")\n parser.add_argument(\n \"--worker_image\", help=\"docker image for worker\", default=None\n )\n parser.add_argument(\"--job_name\", help=\"job name\", required=True)\n parser.add_argument(\n \"--codec_type\",\n default=\"bytes\",\n choices=[\"tf_example\", \"bytes\"],\n help=\"Type of codec(tf_example or bytes)\",\n )\n return parser.parse_args()\n\n\ndef main():\n # TODO: pass port via flags.\n PORT = 50001\n logger = logging.getLogger(\"master\")\n args = _parse_args()\n task_q = _make_task_queue(\n args.train_data_dir, args.record_per_task, args.num_epoch\n )\n model_module = load_user_model(args.model_file)\n model_inst = model_module.model\n build_model(model_inst, model_module.feature_columns())\n optimizer = model_module.optimizer()\n\n server = grpc.server(futures.ThreadPoolExecutor(max_workers=64))\n master_pb2_grpc.add_MasterServicer_to_server(\n MasterServicer(\n logger,\n args.grads_to_wait,\n args.minibatch_size,\n optimizer,\n task_q,\n init_var=model_inst.trainable_variables,\n ),\n server,\n )\n 
server.add_insecure_port(\"[::]:{}\".format(PORT))\n server.start()\n logger.warning(\"Server started at port: %d\", PORT)\n\n if args.num_worker:\n master_addr = \"%s:%d\" % (os.getenv(\"MY_POD_IP\", \"localhost\"), PORT)\n worker_command = [\"python\"]\n worker_args = [\n \"-m\",\n \"elasticdl.worker.main\",\n \"--model_file\",\n args.model_file,\n \"--master_addr\",\n master_addr,\n \"--codec_type\",\n args.codec_type\n ]\n\n worker_manager = WorkerManager(\n job_name=args.job_name,\n worker_image=args.worker_image,\n command=worker_command,\n args=worker_args,\n namespace=\"default\",\n num_worker=args.num_worker,\n cpu_request=args.worker_cpu_request,\n cpu_limit=args.worker_cpu_limit,\n memory_request=args.worker_memory_request,\n memory_limit=args.worker_memory_limit,\n pod_priority=args.worker_pod_priority,\n )\n worker_manager.start_workers(restart_policy=\"Never\")\n\n try:\n while True:\n if task_q.finished():\n break\n time.sleep(30)\n except KeyboardInterrupt:\n logger.warning(\"Server stopping\")\n\n if args.num_worker:\n # TODO: worker_manager.remove_workers supports synchronized call\n worker_manager.remove_workers()\n\n server.stop(0)\n\n\nif __name__ == \"__main__\":\n logging.basicConfig()\n main()\n", "path": "elasticdl/python/elasticdl/master/main.py"}]} | 1,879 | 155 |
gh_patches_debug_38762 | rasdani/github-patches | git_diff | nilearn__nilearn-1225 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
remove examples/03_connectivity/plot_power_connectome.py ?
- Signal extraction from spheres placed on Power coordinates is already done in `examples/03_connectivity/plot_seed_based_connectome.py`
- Sparse inverse covariance estimation is already explained in `examples/03_connectivity/plot_inverse_covariance_connectome.py` for MSDL atlas. For me, it doesn't really make a difference estimating it on timeseries extracted from probabilistic maps or spheric ROIs.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/03_connectivity/plot_power_connectome.py`
Content:
```
1 """
2 Extracting signals and plotting a connectome for the Power-264 seed-region atlas
3 ================================================================================
4
5 This example shows how to extract signals from spherical seed-regions based
6 on the Power-264 atlas (Power, 2011) and estimating a connectome using sparse
7 inverse covariance.
8
9 Power, Jonathan D., et al. "Functional network organization of the
10 human brain." Neuron 72.4 (2011): 665-678.
11
12 """
13
14 import numpy as np
15 import matplotlib.pyplot as plt
16 from nilearn import datasets, connectome, plotting, input_data
17
18
19 ###############################################################################
20 # Atlas and dataset fetching
21
22 # Fetch the coordinates of power atlas
23 power = datasets.fetch_coords_power_2011()
24 power_coords = np.vstack((
25 power.rois['x'],
26 power.rois['y'],
27 power.rois['z'],
28 )).T
29
30 # Fetch the first subject of ADHD dataset
31 adhd = datasets.fetch_adhd(n_subjects=1)
32
33
34 ###############################################################################
35 # Masking: taking the signal in a sphere of radius 5mm around Power coords
36
37 masker = input_data.NiftiSpheresMasker(seeds=power_coords,
38 smoothing_fwhm=4,
39 radius=5.,
40 standardize=True,
41 detrend=True,
42 low_pass=0.1,
43 high_pass=0.01,
44 t_r=2.5)
45
46 timeseries = masker.fit_transform(adhd.func[0], confounds=adhd.confounds[0])
47
48 ###############################################################################
49 # Extract and plot correlation matrix
50
51 # calculate connectivity and plot Power-264 correlation matrix
52 connectivity = connectome.ConnectivityMeasure(kind='correlation')
53 corr_matrix = connectivity.fit_transform([timeseries])[0]
54 np.fill_diagonal(corr_matrix, 0)
55 plt.imshow(corr_matrix, vmin=-1., vmax=1., cmap='RdBu_r')
56 plt.colorbar()
57 plt.title('Power 264 Connectivity')
58
59 # Plot the connectome
60
61 plotting.plot_connectome(corr_matrix,
62 power_coords,
63 edge_threshold='99.8%',
64 node_size=20)
65
66
67 ###############################################################################
68 # Extract and plot covariance and sparse covariance
69
70 # Compute the sparse inverse covariance
71 from sklearn.covariance import GraphLassoCV
72
73 estimator = GraphLassoCV()
74 estimator.fit(timeseries)
75
76 # Display the covariance
77 plt.figure(figsize=(5, 5))
78 plt.imshow(estimator.covariance_, interpolation="nearest",
79 vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)
80 plt.title('Covariance matrix')
81
82 # display the corresponding graph
83 plotting.plot_connectome(estimator.covariance_,
84 power_coords,
85 title='Covariance connectome',
86 edge_threshold='99.8%',
87 node_size=20)
88
89 # Display the sparse inverse covariance
90 plt.figure(figsize=(5, 5))
91 plt.imshow(estimator.precision_, interpolation="nearest",
92 vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)
93 plt.title('Precision matrix')
94
95 # And now display the corresponding graph
96 plotting.plot_connectome(estimator.precision_, power_coords,
97 title='Precision connectome',
98 edge_threshold="99.8%",
99 node_size=20)
100 plotting.show()
101
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/03_connectivity/plot_power_connectome.py b/examples/03_connectivity/plot_power_connectome.py
deleted file mode 100644
--- a/examples/03_connectivity/plot_power_connectome.py
+++ /dev/null
@@ -1,100 +0,0 @@
-"""
-Extracting signals and plotting a connectome for the Power-264 seed-region atlas
-================================================================================
-
-This example shows how to extract signals from spherical seed-regions based
-on the Power-264 atlas (Power, 2011) and estimating a connectome using sparse
-inverse covariance.
-
-Power, Jonathan D., et al. "Functional network organization of the
-human brain." Neuron 72.4 (2011): 665-678.
-
-"""
-
-import numpy as np
-import matplotlib.pyplot as plt
-from nilearn import datasets, connectome, plotting, input_data
-
-
-###############################################################################
-# Atlas and dataset fetching
-
-# Fetch the coordinates of power atlas
-power = datasets.fetch_coords_power_2011()
-power_coords = np.vstack((
- power.rois['x'],
- power.rois['y'],
- power.rois['z'],
-)).T
-
-# Fetch the first subject of ADHD dataset
-adhd = datasets.fetch_adhd(n_subjects=1)
-
-
-###############################################################################
-# Masking: taking the signal in a sphere of radius 5mm around Power coords
-
-masker = input_data.NiftiSpheresMasker(seeds=power_coords,
- smoothing_fwhm=4,
- radius=5.,
- standardize=True,
- detrend=True,
- low_pass=0.1,
- high_pass=0.01,
- t_r=2.5)
-
-timeseries = masker.fit_transform(adhd.func[0], confounds=adhd.confounds[0])
-
-###############################################################################
-# Extract and plot correlation matrix
-
-# calculate connectivity and plot Power-264 correlation matrix
-connectivity = connectome.ConnectivityMeasure(kind='correlation')
-corr_matrix = connectivity.fit_transform([timeseries])[0]
-np.fill_diagonal(corr_matrix, 0)
-plt.imshow(corr_matrix, vmin=-1., vmax=1., cmap='RdBu_r')
-plt.colorbar()
-plt.title('Power 264 Connectivity')
-
-# Plot the connectome
-
-plotting.plot_connectome(corr_matrix,
- power_coords,
- edge_threshold='99.8%',
- node_size=20)
-
-
-###############################################################################
-# Extract and plot covariance and sparse covariance
-
-# Compute the sparse inverse covariance
-from sklearn.covariance import GraphLassoCV
-
-estimator = GraphLassoCV()
-estimator.fit(timeseries)
-
-# Display the covariance
-plt.figure(figsize=(5, 5))
-plt.imshow(estimator.covariance_, interpolation="nearest",
- vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)
-plt.title('Covariance matrix')
-
-# display the corresponding graph
-plotting.plot_connectome(estimator.covariance_,
- power_coords,
- title='Covariance connectome',
- edge_threshold='99.8%',
- node_size=20)
-
-# Display the sparse inverse covariance
-plt.figure(figsize=(5, 5))
-plt.imshow(estimator.precision_, interpolation="nearest",
- vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)
-plt.title('Precision matrix')
-
-# And now display the corresponding graph
-plotting.plot_connectome(estimator.precision_, power_coords,
- title='Precision connectome',
- edge_threshold="99.8%",
- node_size=20)
-plotting.show()
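The example is deleted wholesale because both of its ingredients survive elsewhere in the gallery; condensed, the equivalent workflow is only a few lines. The sketch below reuses the calls from the deleted file and assumes the nilearn/scikit-learn versions of that era (`GraphLassoCV` was later renamed `GraphicalLassoCV`).

```python
import numpy as np
from nilearn import datasets, input_data, plotting
from sklearn.covariance import GraphLassoCV

power = datasets.fetch_coords_power_2011()
coords = np.vstack((power.rois['x'], power.rois['y'], power.rois['z'])).T
adhd = datasets.fetch_adhd(n_subjects=1)

masker = input_data.NiftiSpheresMasker(seeds=coords, radius=5., standardize=True)
timeseries = masker.fit_transform(adhd.func[0], confounds=adhd.confounds[0])

estimator = GraphLassoCV().fit(timeseries)
plotting.plot_connectome(estimator.precision_, coords, edge_threshold='99.8%')
plotting.show()
```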
| {"golden_diff": "diff --git a/examples/03_connectivity/plot_power_connectome.py b/examples/03_connectivity/plot_power_connectome.py\ndeleted file mode 100644\n--- a/examples/03_connectivity/plot_power_connectome.py\n+++ /dev/null\n@@ -1,100 +0,0 @@\n-\"\"\"\n-Extracting signals and plotting a connectome for the Power-264 seed-region atlas\n-================================================================================\n-\n-This example shows how to extract signals from spherical seed-regions based\n-on the Power-264 atlas (Power, 2011) and estimating a connectome using sparse\n-inverse covariance.\n-\n-Power, Jonathan D., et al. \"Functional network organization of the\n-human brain.\" Neuron 72.4 (2011): 665-678.\n-\n-\"\"\"\n-\n-import numpy as np\n-import matplotlib.pyplot as plt\n-from nilearn import datasets, connectome, plotting, input_data\n-\n-\n-###############################################################################\n-# Atlas and dataset fetching\n-\n-# Fetch the coordinates of power atlas\n-power = datasets.fetch_coords_power_2011()\n-power_coords = np.vstack((\n- power.rois['x'],\n- power.rois['y'],\n- power.rois['z'],\n-)).T\n-\n-# Fetch the first subject of ADHD dataset\n-adhd = datasets.fetch_adhd(n_subjects=1)\n-\n-\n-###############################################################################\n-# Masking: taking the signal in a sphere of radius 5mm around Power coords\n-\n-masker = input_data.NiftiSpheresMasker(seeds=power_coords,\n- smoothing_fwhm=4,\n- radius=5.,\n- standardize=True,\n- detrend=True,\n- low_pass=0.1,\n- high_pass=0.01,\n- t_r=2.5)\n-\n-timeseries = masker.fit_transform(adhd.func[0], confounds=adhd.confounds[0])\n-\n-###############################################################################\n-# Extract and plot correlation matrix\n-\n-# calculate connectivity and plot Power-264 correlation matrix\n-connectivity = connectome.ConnectivityMeasure(kind='correlation')\n-corr_matrix = connectivity.fit_transform([timeseries])[0]\n-np.fill_diagonal(corr_matrix, 0)\n-plt.imshow(corr_matrix, vmin=-1., vmax=1., cmap='RdBu_r')\n-plt.colorbar()\n-plt.title('Power 264 Connectivity')\n-\n-# Plot the connectome\n-\n-plotting.plot_connectome(corr_matrix,\n- power_coords,\n- edge_threshold='99.8%',\n- node_size=20)\n-\n-\n-###############################################################################\n-# Extract and plot covariance and sparse covariance\n-\n-# Compute the sparse inverse covariance\n-from sklearn.covariance import GraphLassoCV\n-\n-estimator = GraphLassoCV()\n-estimator.fit(timeseries)\n-\n-# Display the covariance\n-plt.figure(figsize=(5, 5))\n-plt.imshow(estimator.covariance_, interpolation=\"nearest\",\n- vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)\n-plt.title('Covariance matrix')\n-\n-# display the corresponding graph\n-plotting.plot_connectome(estimator.covariance_,\n- power_coords,\n- title='Covariance connectome',\n- edge_threshold='99.8%',\n- node_size=20)\n-\n-# Display the sparse inverse covariance\n-plt.figure(figsize=(5, 5))\n-plt.imshow(estimator.precision_, interpolation=\"nearest\",\n- vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)\n-plt.title('Precision matrix')\n-\n-# And now display the corresponding graph\n-plotting.plot_connectome(estimator.precision_, power_coords,\n- title='Precision connectome',\n- edge_threshold=\"99.8%\",\n- node_size=20)\n-plotting.show()\n", "issue": "remove examples/03_connectivity/plot_power_connectome.py ?\n- Signal extraction from spheres placed on Power coordinates is already done in 
`examples/03_connectivity/plot_seed_based_connectome.py`\n- Sparse inverse covariance estimation is already explained in `examples/03_connectivity/plot_inverse_covariance_connectome.py` for MSDL atlas. For me, it doesn't really make a difference estimating it on timeseries extracted from probabilistic maps or spheric ROIs.\n\n", "before_files": [{"content": "\"\"\"\nExtracting signals and plotting a connectome for the Power-264 seed-region atlas\n================================================================================\n\nThis example shows how to extract signals from spherical seed-regions based\non the Power-264 atlas (Power, 2011) and estimating a connectome using sparse\ninverse covariance.\n\nPower, Jonathan D., et al. \"Functional network organization of the\nhuman brain.\" Neuron 72.4 (2011): 665-678.\n\n\"\"\"\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom nilearn import datasets, connectome, plotting, input_data\n\n\n###############################################################################\n# Atlas and dataset fetching\n\n# Fetch the coordinates of power atlas\npower = datasets.fetch_coords_power_2011()\npower_coords = np.vstack((\n power.rois['x'],\n power.rois['y'],\n power.rois['z'],\n)).T\n\n# Fetch the first subject of ADHD dataset\nadhd = datasets.fetch_adhd(n_subjects=1)\n\n\n###############################################################################\n# Masking: taking the signal in a sphere of radius 5mm around Power coords\n\nmasker = input_data.NiftiSpheresMasker(seeds=power_coords,\n smoothing_fwhm=4,\n radius=5.,\n standardize=True,\n detrend=True,\n low_pass=0.1,\n high_pass=0.01,\n t_r=2.5)\n\ntimeseries = masker.fit_transform(adhd.func[0], confounds=adhd.confounds[0])\n\n###############################################################################\n# Extract and plot correlation matrix\n\n# calculate connectivity and plot Power-264 correlation matrix\nconnectivity = connectome.ConnectivityMeasure(kind='correlation')\ncorr_matrix = connectivity.fit_transform([timeseries])[0]\nnp.fill_diagonal(corr_matrix, 0)\nplt.imshow(corr_matrix, vmin=-1., vmax=1., cmap='RdBu_r')\nplt.colorbar()\nplt.title('Power 264 Connectivity')\n\n# Plot the connectome\n\nplotting.plot_connectome(corr_matrix,\n power_coords,\n edge_threshold='99.8%',\n node_size=20)\n\n\n###############################################################################\n# Extract and plot covariance and sparse covariance\n\n# Compute the sparse inverse covariance\nfrom sklearn.covariance import GraphLassoCV\n\nestimator = GraphLassoCV()\nestimator.fit(timeseries)\n\n# Display the covariance\nplt.figure(figsize=(5, 5))\nplt.imshow(estimator.covariance_, interpolation=\"nearest\",\n vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)\nplt.title('Covariance matrix')\n\n# display the corresponding graph\nplotting.plot_connectome(estimator.covariance_,\n power_coords,\n title='Covariance connectome',\n edge_threshold='99.8%',\n node_size=20)\n\n# Display the sparse inverse covariance\nplt.figure(figsize=(5, 5))\nplt.imshow(estimator.precision_, interpolation=\"nearest\",\n vmax=1, vmin=-1, cmap=plt.cm.RdBu_r)\nplt.title('Precision matrix')\n\n# And now display the corresponding graph\nplotting.plot_connectome(estimator.precision_, power_coords,\n title='Precision connectome',\n edge_threshold=\"99.8%\",\n node_size=20)\nplotting.show()\n", "path": "examples/03_connectivity/plot_power_connectome.py"}], "after_files": [{"content": null, "path": "examples/03_connectivity/plot_power_connectome.py"}]} | 
1,261 | 840 |
gh_patches_debug_30188 | rasdani/github-patches | git_diff | internetarchive__openlibrary-8966 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support different seeds for random.hourly sort
These carousels are all sorted by random.hourly, but we want them to have a different random subset!

### Proposal & Constraints
Expand `random.hourly` sorting to support a custom seed like `random`
### Additional context
<!-- Add any other context or screenshots about the feature request here. -->
### Stakeholders
@RayBB
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openlibrary/plugins/worksearch/schemes/__init__.py`
Content:
```
1 import logging
2 from collections.abc import Callable
3
4 import luqum.tree
5 from luqum.exceptions import ParseError
6 from openlibrary.solr.query_utils import (
7 escape_unknown_fields,
8 fully_escape_query,
9 luqum_parser,
10 )
11
12 logger = logging.getLogger("openlibrary.worksearch")
13
14
15 class SearchScheme:
16 # Set of queries that define the universe of this scheme
17 universe: list[str]
18 # All actual solr fields that can be in a user query
19 all_fields: set[str]
20 # These fields are fetched for facets and can also be url params
21 facet_fields: set[str]
22 # Mapping of user-only fields to solr fields
23 field_name_map: dict[str, str]
24 # Mapping of user sort to solr sort
25 sorts: dict[str, str | Callable[[], str]]
26 # Default
27 default_fetched_fields: set[str]
28 # Fields that should be rewritten
29 facet_rewrites: dict[tuple[str, str], str | Callable[[], str]]
30
31 def is_search_field(self, field: str):
32 return field in self.all_fields or field in self.field_name_map
33
34 def process_user_sort(self, user_sort: str) -> str:
35 """
36 Convert a user-provided sort to a solr sort
37
38 >>> from openlibrary.plugins.worksearch.schemes.works import WorkSearchScheme
39 >>> scheme = WorkSearchScheme()
40 >>> scheme.process_user_sort('editions')
41 'edition_count desc'
42 >>> scheme.process_user_sort('editions, new')
43 'edition_count desc,first_publish_year desc'
44 >>> scheme.process_user_sort('random')
45 'random_1 asc'
46 >>> scheme.process_user_sort('random_custom_seed')
47 'random_custom_seed asc'
48 >>> scheme.process_user_sort('random_custom_seed desc')
49 'random_custom_seed desc'
50 >>> scheme.process_user_sort('random_custom_seed asc')
51 'random_custom_seed asc'
52 """
53
54 def process_individual_sort(sort: str):
55 if sort.startswith('random_'):
56 # Allow custom randoms; so anything random_* is allowed
57 return sort if ' ' in sort else f'{sort} asc'
58 else:
59 solr_sort = self.sorts[sort]
60 return solr_sort() if callable(solr_sort) else solr_sort
61
62 return ','.join(
63 process_individual_sort(s.strip()) for s in user_sort.split(',')
64 )
65
66 def process_user_query(self, q_param: str) -> str:
67 if q_param == '*:*':
68 # This is a special solr syntax; don't process
69 return q_param
70
71 try:
72 q_param = escape_unknown_fields(
73 (
74 # Solr 4+ has support for regexes (eg `key:/foo.*/`)! But for now,
75 # let's not expose that and escape all '/'. Otherwise
76 # `key:/works/OL1W` is interpreted as a regex.
77 q_param.strip()
78 .replace('/', '\\/')
79 # Also escape unexposed lucene features
80 .replace('?', '\\?')
81 .replace('~', '\\~')
82 ),
83 self.is_search_field,
84 lower=True,
85 )
86 q_tree = luqum_parser(q_param)
87 except ParseError:
88 # This isn't a syntactically valid lucene query
89 logger.warning("Invalid lucene query", exc_info=True)
90 # Escape everything we can
91 q_tree = luqum_parser(fully_escape_query(q_param))
92
93 q_tree = self.transform_user_query(q_param, q_tree)
94 return str(q_tree)
95
96 def transform_user_query(
97 self,
98 user_query: str,
99 q_tree: luqum.tree.Item,
100 ) -> luqum.tree.Item:
101 return q_tree
102
103 def build_q_from_params(self, params: dict) -> str | None:
104 return None
105
106 def q_to_solr_params(
107 self,
108 q: str,
109 solr_fields: set[str],
110 cur_solr_params: list[tuple[str, str]],
111 ) -> list[tuple[str, str]]:
112 return [('q', q)]
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/openlibrary/plugins/worksearch/schemes/__init__.py b/openlibrary/plugins/worksearch/schemes/__init__.py
--- a/openlibrary/plugins/worksearch/schemes/__init__.py
+++ b/openlibrary/plugins/worksearch/schemes/__init__.py
@@ -44,17 +44,27 @@
>>> scheme.process_user_sort('random')
'random_1 asc'
>>> scheme.process_user_sort('random_custom_seed')
- 'random_custom_seed asc'
+ 'random_1_custom_seed asc'
>>> scheme.process_user_sort('random_custom_seed desc')
- 'random_custom_seed desc'
+ 'random_1_custom_seed desc'
>>> scheme.process_user_sort('random_custom_seed asc')
- 'random_custom_seed asc'
+ 'random_1_custom_seed asc'
"""
- def process_individual_sort(sort: str):
- if sort.startswith('random_'):
+ def process_individual_sort(sort: str) -> str:
+ if sort.startswith(('random_', 'random.hourly_', 'random.daily_')):
# Allow custom randoms; so anything random_* is allowed
- return sort if ' ' in sort else f'{sort} asc'
+ # Also Allow custom time randoms to allow carousels with overlapping
+ # books to have a fresh ordering when on the same collection
+ sort_order: str | None = None
+ if ' ' in sort:
+ sort, sort_order = sort.split(' ', 1)
+ random_type, random_seed = sort.split('_', 1)
+ solr_sort = self.sorts[random_type]
+ solr_sort_str = solr_sort() if callable(solr_sort) else solr_sort
+ solr_sort_field, solr_sort_order = solr_sort_str.split(' ', 1)
+ sort_order = sort_order or solr_sort_order
+ return f'{solr_sort_field}_{random_seed} {sort_order}'
else:
solr_sort = self.sorts[sort]
return solr_sort() if callable(solr_sort) else solr_sort
| {"golden_diff": "diff --git a/openlibrary/plugins/worksearch/schemes/__init__.py b/openlibrary/plugins/worksearch/schemes/__init__.py\n--- a/openlibrary/plugins/worksearch/schemes/__init__.py\n+++ b/openlibrary/plugins/worksearch/schemes/__init__.py\n@@ -44,17 +44,27 @@\n >>> scheme.process_user_sort('random')\n 'random_1 asc'\n >>> scheme.process_user_sort('random_custom_seed')\n- 'random_custom_seed asc'\n+ 'random_1_custom_seed asc'\n >>> scheme.process_user_sort('random_custom_seed desc')\n- 'random_custom_seed desc'\n+ 'random_1_custom_seed desc'\n >>> scheme.process_user_sort('random_custom_seed asc')\n- 'random_custom_seed asc'\n+ 'random_1_custom_seed asc'\n \"\"\"\n \n- def process_individual_sort(sort: str):\n- if sort.startswith('random_'):\n+ def process_individual_sort(sort: str) -> str:\n+ if sort.startswith(('random_', 'random.hourly_', 'random.daily_')):\n # Allow custom randoms; so anything random_* is allowed\n- return sort if ' ' in sort else f'{sort} asc'\n+ # Also Allow custom time randoms to allow carousels with overlapping\n+ # books to have a fresh ordering when on the same collection\n+ sort_order: str | None = None\n+ if ' ' in sort:\n+ sort, sort_order = sort.split(' ', 1)\n+ random_type, random_seed = sort.split('_', 1)\n+ solr_sort = self.sorts[random_type]\n+ solr_sort_str = solr_sort() if callable(solr_sort) else solr_sort\n+ solr_sort_field, solr_sort_order = solr_sort_str.split(' ', 1)\n+ sort_order = sort_order or solr_sort_order\n+ return f'{solr_sort_field}_{random_seed} {sort_order}'\n else:\n solr_sort = self.sorts[sort]\n return solr_sort() if callable(solr_sort) else solr_sort\n", "issue": "Support different seeds for random.hourly sort\nThese carousels are all sorted by random.hourly, but we want them to have a different random subset!\r\n\r\n\r\n\r\n\r\n### Proposal & Constraints\r\nExpand `random.hourly` sorting to support a custom seed like `random`\r\n\r\n### Additional context\r\n<!-- Add any other context or screenshots about the feature request here. 
-->\r\n\r\n### Stakeholders\r\n@RayBB \n", "before_files": [{"content": "import logging\nfrom collections.abc import Callable\n\nimport luqum.tree\nfrom luqum.exceptions import ParseError\nfrom openlibrary.solr.query_utils import (\n escape_unknown_fields,\n fully_escape_query,\n luqum_parser,\n)\n\nlogger = logging.getLogger(\"openlibrary.worksearch\")\n\n\nclass SearchScheme:\n # Set of queries that define the universe of this scheme\n universe: list[str]\n # All actual solr fields that can be in a user query\n all_fields: set[str]\n # These fields are fetched for facets and can also be url params\n facet_fields: set[str]\n # Mapping of user-only fields to solr fields\n field_name_map: dict[str, str]\n # Mapping of user sort to solr sort\n sorts: dict[str, str | Callable[[], str]]\n # Default\n default_fetched_fields: set[str]\n # Fields that should be rewritten\n facet_rewrites: dict[tuple[str, str], str | Callable[[], str]]\n\n def is_search_field(self, field: str):\n return field in self.all_fields or field in self.field_name_map\n\n def process_user_sort(self, user_sort: str) -> str:\n \"\"\"\n Convert a user-provided sort to a solr sort\n\n >>> from openlibrary.plugins.worksearch.schemes.works import WorkSearchScheme\n >>> scheme = WorkSearchScheme()\n >>> scheme.process_user_sort('editions')\n 'edition_count desc'\n >>> scheme.process_user_sort('editions, new')\n 'edition_count desc,first_publish_year desc'\n >>> scheme.process_user_sort('random')\n 'random_1 asc'\n >>> scheme.process_user_sort('random_custom_seed')\n 'random_custom_seed asc'\n >>> scheme.process_user_sort('random_custom_seed desc')\n 'random_custom_seed desc'\n >>> scheme.process_user_sort('random_custom_seed asc')\n 'random_custom_seed asc'\n \"\"\"\n\n def process_individual_sort(sort: str):\n if sort.startswith('random_'):\n # Allow custom randoms; so anything random_* is allowed\n return sort if ' ' in sort else f'{sort} asc'\n else:\n solr_sort = self.sorts[sort]\n return solr_sort() if callable(solr_sort) else solr_sort\n\n return ','.join(\n process_individual_sort(s.strip()) for s in user_sort.split(',')\n )\n\n def process_user_query(self, q_param: str) -> str:\n if q_param == '*:*':\n # This is a special solr syntax; don't process\n return q_param\n\n try:\n q_param = escape_unknown_fields(\n (\n # Solr 4+ has support for regexes (eg `key:/foo.*/`)! But for now,\n # let's not expose that and escape all '/'. 
Otherwise\n # `key:/works/OL1W` is interpreted as a regex.\n q_param.strip()\n .replace('/', '\\\\/')\n # Also escape unexposed lucene features\n .replace('?', '\\\\?')\n .replace('~', '\\\\~')\n ),\n self.is_search_field,\n lower=True,\n )\n q_tree = luqum_parser(q_param)\n except ParseError:\n # This isn't a syntactically valid lucene query\n logger.warning(\"Invalid lucene query\", exc_info=True)\n # Escape everything we can\n q_tree = luqum_parser(fully_escape_query(q_param))\n\n q_tree = self.transform_user_query(q_param, q_tree)\n return str(q_tree)\n\n def transform_user_query(\n self,\n user_query: str,\n q_tree: luqum.tree.Item,\n ) -> luqum.tree.Item:\n return q_tree\n\n def build_q_from_params(self, params: dict) -> str | None:\n return None\n\n def q_to_solr_params(\n self,\n q: str,\n solr_fields: set[str],\n cur_solr_params: list[tuple[str, str]],\n ) -> list[tuple[str, str]]:\n return [('q', q)]\n", "path": "openlibrary/plugins/worksearch/schemes/__init__.py"}], "after_files": [{"content": "import logging\nfrom collections.abc import Callable\n\nimport luqum.tree\nfrom luqum.exceptions import ParseError\nfrom openlibrary.solr.query_utils import (\n escape_unknown_fields,\n fully_escape_query,\n luqum_parser,\n)\n\nlogger = logging.getLogger(\"openlibrary.worksearch\")\n\n\nclass SearchScheme:\n # Set of queries that define the universe of this scheme\n universe: list[str]\n # All actual solr fields that can be in a user query\n all_fields: set[str]\n # These fields are fetched for facets and can also be url params\n facet_fields: set[str]\n # Mapping of user-only fields to solr fields\n field_name_map: dict[str, str]\n # Mapping of user sort to solr sort\n sorts: dict[str, str | Callable[[], str]]\n # Default\n default_fetched_fields: set[str]\n # Fields that should be rewritten\n facet_rewrites: dict[tuple[str, str], str | Callable[[], str]]\n\n def is_search_field(self, field: str):\n return field in self.all_fields or field in self.field_name_map\n\n def process_user_sort(self, user_sort: str) -> str:\n \"\"\"\n Convert a user-provided sort to a solr sort\n\n >>> from openlibrary.plugins.worksearch.schemes.works import WorkSearchScheme\n >>> scheme = WorkSearchScheme()\n >>> scheme.process_user_sort('editions')\n 'edition_count desc'\n >>> scheme.process_user_sort('editions, new')\n 'edition_count desc,first_publish_year desc'\n >>> scheme.process_user_sort('random')\n 'random_1 asc'\n >>> scheme.process_user_sort('random_custom_seed')\n 'random_1_custom_seed asc'\n >>> scheme.process_user_sort('random_custom_seed desc')\n 'random_1_custom_seed desc'\n >>> scheme.process_user_sort('random_custom_seed asc')\n 'random_1_custom_seed asc'\n \"\"\"\n\n def process_individual_sort(sort: str) -> str:\n if sort.startswith(('random_', 'random.hourly_', 'random.daily_')):\n # Allow custom randoms; so anything random_* is allowed\n # Also Allow custom time randoms to allow carousels with overlapping\n # books to have a fresh ordering when on the same collection\n sort_order: str | None = None\n if ' ' in sort:\n sort, sort_order = sort.split(' ', 1)\n random_type, random_seed = sort.split('_', 1)\n solr_sort = self.sorts[random_type]\n solr_sort_str = solr_sort() if callable(solr_sort) else solr_sort\n solr_sort_field, solr_sort_order = solr_sort_str.split(' ', 1)\n sort_order = sort_order or solr_sort_order\n return f'{solr_sort_field}_{random_seed} {sort_order}'\n else:\n solr_sort = self.sorts[sort]\n return solr_sort() if callable(solr_sort) else solr_sort\n\n return ','.join(\n 
process_individual_sort(s.strip()) for s in user_sort.split(',')\n )\n\n def process_user_query(self, q_param: str) -> str:\n if q_param == '*:*':\n # This is a special solr syntax; don't process\n return q_param\n\n try:\n q_param = escape_unknown_fields(\n (\n # Solr 4+ has support for regexes (eg `key:/foo.*/`)! But for now,\n # let's not expose that and escape all '/'. Otherwise\n # `key:/works/OL1W` is interpreted as a regex.\n q_param.strip()\n .replace('/', '\\\\/')\n # Also escape unexposed lucene features\n .replace('?', '\\\\?')\n .replace('~', '\\\\~')\n ),\n self.is_search_field,\n lower=True,\n )\n q_tree = luqum_parser(q_param)\n except ParseError:\n # This isn't a syntactically valid lucene query\n logger.warning(\"Invalid lucene query\", exc_info=True)\n # Escape everything we can\n q_tree = luqum_parser(fully_escape_query(q_param))\n\n q_tree = self.transform_user_query(q_param, q_tree)\n return str(q_tree)\n\n def transform_user_query(\n self,\n user_query: str,\n q_tree: luqum.tree.Item,\n ) -> luqum.tree.Item:\n return q_tree\n\n def build_q_from_params(self, params: dict) -> str | None:\n return None\n\n def q_to_solr_params(\n self,\n q: str,\n solr_fields: set[str],\n cur_solr_params: list[tuple[str, str]],\n ) -> list[tuple[str, str]]:\n return [('q', q)]\n", "path": "openlibrary/plugins/worksearch/schemes/__init__.py"}]} | 1,515 | 454 |
gh_patches_debug_11300 | rasdani/github-patches | git_diff | pypa__setuptools-1986 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Deprecated distutils bdist_wininst is going to be removed
I proposed to remove the bdist_winstinst command from distutils in Python 3.9:
* https://bugs.python.org/issue39541
* https://discuss.python.org/t/remove-distutils-bdist-wininst-command/3115
* https://github.com/python/cpython/pull/18329
Problem: setuptools always uses it on all platforms at: setuptools/command/install_scripts.py, line 35:
```
bw_cmd = self.get_finalized_command("bdist_wininst")
```
See #857 which is a closed duplicated which proposed different options to fix the issue.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setuptools/command/install_scripts.py`
Content:
```
1 from distutils import log
2 import distutils.command.install_scripts as orig
3 import os
4 import sys
5
6 from pkg_resources import Distribution, PathMetadata, ensure_directory
7
8
9 class install_scripts(orig.install_scripts):
10 """Do normal script install, plus any egg_info wrapper scripts"""
11
12 def initialize_options(self):
13 orig.install_scripts.initialize_options(self)
14 self.no_ep = False
15
16 def run(self):
17 import setuptools.command.easy_install as ei
18
19 self.run_command("egg_info")
20 if self.distribution.scripts:
21 orig.install_scripts.run(self) # run first to set up self.outfiles
22 else:
23 self.outfiles = []
24 if self.no_ep:
25 # don't install entry point scripts into .egg file!
26 return
27
28 ei_cmd = self.get_finalized_command("egg_info")
29 dist = Distribution(
30 ei_cmd.egg_base, PathMetadata(ei_cmd.egg_base, ei_cmd.egg_info),
31 ei_cmd.egg_name, ei_cmd.egg_version,
32 )
33 bs_cmd = self.get_finalized_command('build_scripts')
34 exec_param = getattr(bs_cmd, 'executable', None)
35 bw_cmd = self.get_finalized_command("bdist_wininst")
36 is_wininst = getattr(bw_cmd, '_is_running', False)
37 writer = ei.ScriptWriter
38 if is_wininst:
39 exec_param = "python.exe"
40 writer = ei.WindowsScriptWriter
41 if exec_param == sys.executable:
42 # In case the path to the Python executable contains a space, wrap
43 # it so it's not split up.
44 exec_param = [exec_param]
45 # resolve the writer to the environment
46 writer = writer.best()
47 cmd = writer.command_spec_class.best().from_param(exec_param)
48 for args in writer.get_args(dist, cmd.as_header()):
49 self.write_script(*args)
50
51 def write_script(self, script_name, contents, mode="t", *ignored):
52 """Write an executable file to the scripts directory"""
53 from setuptools.command.easy_install import chmod, current_umask
54
55 log.info("Installing %s script to %s", script_name, self.install_dir)
56 target = os.path.join(self.install_dir, script_name)
57 self.outfiles.append(target)
58
59 mask = current_umask()
60 if not self.dry_run:
61 ensure_directory(target)
62 f = open(target, "w" + mode)
63 f.write(contents)
64 f.close()
65 chmod(target, 0o777 - mask)
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setuptools/command/install_scripts.py b/setuptools/command/install_scripts.py
--- a/setuptools/command/install_scripts.py
+++ b/setuptools/command/install_scripts.py
@@ -32,8 +32,11 @@
)
bs_cmd = self.get_finalized_command('build_scripts')
exec_param = getattr(bs_cmd, 'executable', None)
- bw_cmd = self.get_finalized_command("bdist_wininst")
- is_wininst = getattr(bw_cmd, '_is_running', False)
+ try:
+ bw_cmd = self.get_finalized_command("bdist_wininst")
+ is_wininst = getattr(bw_cmd, '_is_running', False)
+ except ImportError:
+ is_wininst = False
writer = ei.ScriptWriter
if is_wininst:
exec_param = "python.exe"
| {"golden_diff": "diff --git a/setuptools/command/install_scripts.py b/setuptools/command/install_scripts.py\n--- a/setuptools/command/install_scripts.py\n+++ b/setuptools/command/install_scripts.py\n@@ -32,8 +32,11 @@\n )\n bs_cmd = self.get_finalized_command('build_scripts')\n exec_param = getattr(bs_cmd, 'executable', None)\n- bw_cmd = self.get_finalized_command(\"bdist_wininst\")\n- is_wininst = getattr(bw_cmd, '_is_running', False)\n+ try:\n+ bw_cmd = self.get_finalized_command(\"bdist_wininst\")\n+ is_wininst = getattr(bw_cmd, '_is_running', False)\n+ except ImportError:\n+ is_wininst = False\n writer = ei.ScriptWriter\n if is_wininst:\n exec_param = \"python.exe\"\n", "issue": "Deprecated distutils bdist_wininst is going to be removed\nI proposed to remove the bdist_winstinst command from distutils in Python 3.9:\r\n\r\n* https://bugs.python.org/issue39541\r\n* https://discuss.python.org/t/remove-distutils-bdist-wininst-command/3115\r\n* https://github.com/python/cpython/pull/18329\r\n\r\nProblem: setuptools always uses it on all platforms at: setuptools/command/install_scripts.py, line 35:\r\n\r\n```\r\n bw_cmd = self.get_finalized_command(\"bdist_wininst\")\r\n```\r\n\r\nSee #857 which is a closed duplicated which proposed different options to fix the issue.\n", "before_files": [{"content": "from distutils import log\nimport distutils.command.install_scripts as orig\nimport os\nimport sys\n\nfrom pkg_resources import Distribution, PathMetadata, ensure_directory\n\n\nclass install_scripts(orig.install_scripts):\n \"\"\"Do normal script install, plus any egg_info wrapper scripts\"\"\"\n\n def initialize_options(self):\n orig.install_scripts.initialize_options(self)\n self.no_ep = False\n\n def run(self):\n import setuptools.command.easy_install as ei\n\n self.run_command(\"egg_info\")\n if self.distribution.scripts:\n orig.install_scripts.run(self) # run first to set up self.outfiles\n else:\n self.outfiles = []\n if self.no_ep:\n # don't install entry point scripts into .egg file!\n return\n\n ei_cmd = self.get_finalized_command(\"egg_info\")\n dist = Distribution(\n ei_cmd.egg_base, PathMetadata(ei_cmd.egg_base, ei_cmd.egg_info),\n ei_cmd.egg_name, ei_cmd.egg_version,\n )\n bs_cmd = self.get_finalized_command('build_scripts')\n exec_param = getattr(bs_cmd, 'executable', None)\n bw_cmd = self.get_finalized_command(\"bdist_wininst\")\n is_wininst = getattr(bw_cmd, '_is_running', False)\n writer = ei.ScriptWriter\n if is_wininst:\n exec_param = \"python.exe\"\n writer = ei.WindowsScriptWriter\n if exec_param == sys.executable:\n # In case the path to the Python executable contains a space, wrap\n # it so it's not split up.\n exec_param = [exec_param]\n # resolve the writer to the environment\n writer = writer.best()\n cmd = writer.command_spec_class.best().from_param(exec_param)\n for args in writer.get_args(dist, cmd.as_header()):\n self.write_script(*args)\n\n def write_script(self, script_name, contents, mode=\"t\", *ignored):\n \"\"\"Write an executable file to the scripts directory\"\"\"\n from setuptools.command.easy_install import chmod, current_umask\n\n log.info(\"Installing %s script to %s\", script_name, self.install_dir)\n target = os.path.join(self.install_dir, script_name)\n self.outfiles.append(target)\n\n mask = current_umask()\n if not self.dry_run:\n ensure_directory(target)\n f = open(target, \"w\" + mode)\n f.write(contents)\n f.close()\n chmod(target, 0o777 - mask)\n", "path": "setuptools/command/install_scripts.py"}], "after_files": [{"content": "from distutils 
import log\nimport distutils.command.install_scripts as orig\nimport os\nimport sys\n\nfrom pkg_resources import Distribution, PathMetadata, ensure_directory\n\n\nclass install_scripts(orig.install_scripts):\n \"\"\"Do normal script install, plus any egg_info wrapper scripts\"\"\"\n\n def initialize_options(self):\n orig.install_scripts.initialize_options(self)\n self.no_ep = False\n\n def run(self):\n import setuptools.command.easy_install as ei\n\n self.run_command(\"egg_info\")\n if self.distribution.scripts:\n orig.install_scripts.run(self) # run first to set up self.outfiles\n else:\n self.outfiles = []\n if self.no_ep:\n # don't install entry point scripts into .egg file!\n return\n\n ei_cmd = self.get_finalized_command(\"egg_info\")\n dist = Distribution(\n ei_cmd.egg_base, PathMetadata(ei_cmd.egg_base, ei_cmd.egg_info),\n ei_cmd.egg_name, ei_cmd.egg_version,\n )\n bs_cmd = self.get_finalized_command('build_scripts')\n exec_param = getattr(bs_cmd, 'executable', None)\n try:\n bw_cmd = self.get_finalized_command(\"bdist_wininst\")\n is_wininst = getattr(bw_cmd, '_is_running', False)\n except ImportError:\n is_wininst = False\n writer = ei.ScriptWriter\n if is_wininst:\n exec_param = \"python.exe\"\n writer = ei.WindowsScriptWriter\n if exec_param == sys.executable:\n # In case the path to the Python executable contains a space, wrap\n # it so it's not split up.\n exec_param = [exec_param]\n # resolve the writer to the environment\n writer = writer.best()\n cmd = writer.command_spec_class.best().from_param(exec_param)\n for args in writer.get_args(dist, cmd.as_header()):\n self.write_script(*args)\n\n def write_script(self, script_name, contents, mode=\"t\", *ignored):\n \"\"\"Write an executable file to the scripts directory\"\"\"\n from setuptools.command.easy_install import chmod, current_umask\n\n log.info(\"Installing %s script to %s\", script_name, self.install_dir)\n target = os.path.join(self.install_dir, script_name)\n self.outfiles.append(target)\n\n mask = current_umask()\n if not self.dry_run:\n ensure_directory(target)\n f = open(target, \"w\" + mode)\n f.write(contents)\n f.close()\n chmod(target, 0o777 - mask)\n", "path": "setuptools/command/install_scripts.py"}]} | 1,062 | 181 |
gh_patches_debug_39724 | rasdani/github-patches | git_diff | ephios-dev__ephios-178 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Event creation mails do not include event description
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ephios/event_management/mail.py`
Content:
```
1 from django.core import mail
2 from django.core.mail import EmailMultiAlternatives
3 from django.template.loader import render_to_string
4 from django.utils.translation import gettext as _
5 from guardian.shortcuts import get_users_with_perms
6
7 from ephios.event_management.models import AbstractParticipation
8 from ephios.extra.permissions import get_groups_with_perms
9 from ephios.settings import SITE_URL
10 from ephios.user_management.models import UserProfile
11
12
13 def new_event(event):
14 messages = []
15 users = UserProfile.objects.filter(
16 groups__in=get_groups_with_perms(event, only_with_perms_in=["view_event"]), is_active=True
17 ).distinct()
18 responsible_users = get_users_with_perms(event, only_with_perms_in=["change_event"]).distinct()
19 responsible_persons_mails = list(responsible_users.values_list("email", flat=True))
20
21 subject = _("New {type}: {title}").format(type=event.type, title=event.title)
22 text_content = _(
23 "A new {type} ({title}) has been added. \n You can view it here: {link}"
24 ).format(type=event.type, title=event.title, link=event.get_absolute_url())
25 html_content = render_to_string(
26 "event_management/mails/new_event.html", {"event": event, "site_url": SITE_URL}
27 )
28
29 for user in users:
30 message = EmailMultiAlternatives(
31 to=[user.email], subject=subject, body=text_content, reply_to=responsible_persons_mails
32 )
33 message.attach_alternative(html_content, "text/html")
34 messages.append(message)
35 mail.get_connection().send_messages(messages)
36
37
38 def participation_state_changed(participation: AbstractParticipation):
39 if participation.state != AbstractParticipation.States.USER_DECLINED:
40 messages = []
41
42 # send mail to the participant whose participation has been changed
43 if participation.participant.email is not None:
44 text_content = _(
45 "The status for your participation for {shift} has changed. It is now {status}."
46 ).format(shift=participation.shift, status=participation.get_state_display())
47 html_content = render_to_string("email_base.html", {"message_text": text_content})
48 message = EmailMultiAlternatives(
49 to=[participation.participant.email],
50 subject=_("Your participation state changed"),
51 body=text_content,
52 )
53 message.attach_alternative(html_content, "text/html")
54 messages.append(message)
55
56 # send mail to responsible users
57 responsible_users = get_users_with_perms(
58 participation.shift.event, only_with_perms_in=["change_event"]
59 ).distinct()
60 subject = _("Participation was changed for your event")
61 text_content = _(
62 "The participation of {participant} for {shift} was changed. The status is now {status}"
63 ).format(
64 participant=participation.participant,
65 shift=participation.shift,
66 status=participation.get_state_display(),
67 )
68 html_content = render_to_string("email_base.html", {"message_text": text_content})
69 for user in responsible_users:
70 message = EmailMultiAlternatives(to=[user.email], subject=subject, body=text_content)
71 message.attach_alternative(html_content, "text/html")
72 messages.append(message)
73
74 mail.get_connection().send_messages(messages)
75
```
Path: `ephios/user_management/mail.py`
Content:
```
1 from django.contrib.auth.tokens import default_token_generator
2 from django.core.mail import EmailMultiAlternatives
3 from django.template.loader import render_to_string
4 from django.urls import reverse
5 from django.utils.encoding import force_bytes
6 from django.utils.http import urlsafe_base64_encode
7 from django.utils.translation import gettext as _
8
9 from ephios.settings import SITE_URL
10
11
12 def send_account_creation_info(userprofile):
13 subject = _("Welcome to ephios!")
14 uid = urlsafe_base64_encode(force_bytes(userprofile.id))
15 token = default_token_generator.make_token(userprofile)
16 reset_link = reverse("password_reset_confirm", kwargs={"uidb64": uid, "token": token})
17 text_content = _(
18 "You're receiving this email because a new account has been created for you at ephios.\n"
19 "Please go to the following page and choose a password: {url}{reset_link}\n"
20 "Your username is your email address: {email}\n"
21 ).format(url=SITE_URL, reset_link=reset_link, email=userprofile.email)
22
23 html_content = render_to_string(
24 "user_management/new_account_email.html",
25 {"uid": uid, "token": token, "site_url": SITE_URL, "email": userprofile.email},
26 )
27 message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)
28 message.attach_alternative(html_content, "text/html")
29 message.send()
30
31
32 def send_account_update_info(userprofile):
33 subject = _("ephios account updated")
34 url = reverse("user_management:profile")
35 text_content = _(
36 "You're receiving this email because your account at ephios has been updated.\n"
37 "You can see the changes in your profile: {site_url}{url}\n"
38 "Your username is your email address: {email}\n"
39 ).format(site_url=SITE_URL, url=url, email=userprofile.email)
40
41 html_content = render_to_string(
42 "user_management/account_updated_email.html",
43 {"site_url": SITE_URL, "url": url, "email": userprofile.email},
44 )
45 message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)
46 message.attach_alternative(html_content, "text/html")
47 message.send()
48
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ephios/event_management/mail.py b/ephios/event_management/mail.py
--- a/ephios/event_management/mail.py
+++ b/ephios/event_management/mail.py
@@ -1,3 +1,5 @@
+from urllib.parse import urljoin
+
from django.core import mail
from django.core.mail import EmailMultiAlternatives
from django.template.loader import render_to_string
@@ -20,8 +22,16 @@
subject = _("New {type}: {title}").format(type=event.type, title=event.title)
text_content = _(
- "A new {type} ({title}) has been added. \n You can view it here: {link}"
- ).format(type=event.type, title=event.title, link=event.get_absolute_url())
+ "A new {type} ({title}, {location}) has been added.\n"
+ "Further information: {description}\n"
+ "You can view the event here: {url}"
+ ).format(
+ type=event.type,
+ title=event.title,
+ location=event.location,
+ description=event.description,
+ url=urljoin(SITE_URL, event.get_absolute_url()),
+ )
html_content = render_to_string(
"event_management/mails/new_event.html", {"event": event, "site_url": SITE_URL}
)
diff --git a/ephios/user_management/mail.py b/ephios/user_management/mail.py
--- a/ephios/user_management/mail.py
+++ b/ephios/user_management/mail.py
@@ -1,3 +1,5 @@
+from urllib.parse import urljoin
+
from django.contrib.auth.tokens import default_token_generator
from django.core.mail import EmailMultiAlternatives
from django.template.loader import render_to_string
@@ -16,9 +18,9 @@
reset_link = reverse("password_reset_confirm", kwargs={"uidb64": uid, "token": token})
text_content = _(
"You're receiving this email because a new account has been created for you at ephios.\n"
- "Please go to the following page and choose a password: {url}{reset_link}\n"
+ "Please go to the following page and choose a password: {url}\n"
"Your username is your email address: {email}\n"
- ).format(url=SITE_URL, reset_link=reset_link, email=userprofile.email)
+ ).format(url=urljoin(SITE_URL, reset_link), email=userprofile.email)
html_content = render_to_string(
"user_management/new_account_email.html",
@@ -34,9 +36,9 @@
url = reverse("user_management:profile")
text_content = _(
"You're receiving this email because your account at ephios has been updated.\n"
- "You can see the changes in your profile: {site_url}{url}\n"
+ "You can see the changes in your profile: {url}\n"
"Your username is your email address: {email}\n"
- ).format(site_url=SITE_URL, url=url, email=userprofile.email)
+ ).format(url=urljoin(SITE_URL, url), email=userprofile.email)
html_content = render_to_string(
"user_management/account_updated_email.html",
| {"golden_diff": "diff --git a/ephios/event_management/mail.py b/ephios/event_management/mail.py\n--- a/ephios/event_management/mail.py\n+++ b/ephios/event_management/mail.py\n@@ -1,3 +1,5 @@\n+from urllib.parse import urljoin\n+\n from django.core import mail\n from django.core.mail import EmailMultiAlternatives\n from django.template.loader import render_to_string\n@@ -20,8 +22,16 @@\n \n subject = _(\"New {type}: {title}\").format(type=event.type, title=event.title)\n text_content = _(\n- \"A new {type} ({title}) has been added. \\n You can view it here: {link}\"\n- ).format(type=event.type, title=event.title, link=event.get_absolute_url())\n+ \"A new {type} ({title}, {location}) has been added.\\n\"\n+ \"Further information: {description}\\n\"\n+ \"You can view the event here: {url}\"\n+ ).format(\n+ type=event.type,\n+ title=event.title,\n+ location=event.location,\n+ description=event.description,\n+ url=urljoin(SITE_URL, event.get_absolute_url()),\n+ )\n html_content = render_to_string(\n \"event_management/mails/new_event.html\", {\"event\": event, \"site_url\": SITE_URL}\n )\ndiff --git a/ephios/user_management/mail.py b/ephios/user_management/mail.py\n--- a/ephios/user_management/mail.py\n+++ b/ephios/user_management/mail.py\n@@ -1,3 +1,5 @@\n+from urllib.parse import urljoin\n+\n from django.contrib.auth.tokens import default_token_generator\n from django.core.mail import EmailMultiAlternatives\n from django.template.loader import render_to_string\n@@ -16,9 +18,9 @@\n reset_link = reverse(\"password_reset_confirm\", kwargs={\"uidb64\": uid, \"token\": token})\n text_content = _(\n \"You're receiving this email because a new account has been created for you at ephios.\\n\"\n- \"Please go to the following page and choose a password: {url}{reset_link}\\n\"\n+ \"Please go to the following page and choose a password: {url}\\n\"\n \"Your username is your email address: {email}\\n\"\n- ).format(url=SITE_URL, reset_link=reset_link, email=userprofile.email)\n+ ).format(url=urljoin(SITE_URL, reset_link), email=userprofile.email)\n \n html_content = render_to_string(\n \"user_management/new_account_email.html\",\n@@ -34,9 +36,9 @@\n url = reverse(\"user_management:profile\")\n text_content = _(\n \"You're receiving this email because your account at ephios has been updated.\\n\"\n- \"You can see the changes in your profile: {site_url}{url}\\n\"\n+ \"You can see the changes in your profile: {url}\\n\"\n \"Your username is your email address: {email}\\n\"\n- ).format(site_url=SITE_URL, url=url, email=userprofile.email)\n+ ).format(url=urljoin(SITE_URL, url), email=userprofile.email)\n \n html_content = render_to_string(\n \"user_management/account_updated_email.html\",\n", "issue": "Event creation mails do not include event description\n\n", "before_files": [{"content": "from django.core import mail\nfrom django.core.mail import EmailMultiAlternatives\nfrom django.template.loader import render_to_string\nfrom django.utils.translation import gettext as _\nfrom guardian.shortcuts import get_users_with_perms\n\nfrom ephios.event_management.models import AbstractParticipation\nfrom ephios.extra.permissions import get_groups_with_perms\nfrom ephios.settings import SITE_URL\nfrom ephios.user_management.models import UserProfile\n\n\ndef new_event(event):\n messages = []\n users = UserProfile.objects.filter(\n groups__in=get_groups_with_perms(event, only_with_perms_in=[\"view_event\"]), is_active=True\n ).distinct()\n responsible_users = get_users_with_perms(event, 
only_with_perms_in=[\"change_event\"]).distinct()\n responsible_persons_mails = list(responsible_users.values_list(\"email\", flat=True))\n\n subject = _(\"New {type}: {title}\").format(type=event.type, title=event.title)\n text_content = _(\n \"A new {type} ({title}) has been added. \\n You can view it here: {link}\"\n ).format(type=event.type, title=event.title, link=event.get_absolute_url())\n html_content = render_to_string(\n \"event_management/mails/new_event.html\", {\"event\": event, \"site_url\": SITE_URL}\n )\n\n for user in users:\n message = EmailMultiAlternatives(\n to=[user.email], subject=subject, body=text_content, reply_to=responsible_persons_mails\n )\n message.attach_alternative(html_content, \"text/html\")\n messages.append(message)\n mail.get_connection().send_messages(messages)\n\n\ndef participation_state_changed(participation: AbstractParticipation):\n if participation.state != AbstractParticipation.States.USER_DECLINED:\n messages = []\n\n # send mail to the participant whose participation has been changed\n if participation.participant.email is not None:\n text_content = _(\n \"The status for your participation for {shift} has changed. It is now {status}.\"\n ).format(shift=participation.shift, status=participation.get_state_display())\n html_content = render_to_string(\"email_base.html\", {\"message_text\": text_content})\n message = EmailMultiAlternatives(\n to=[participation.participant.email],\n subject=_(\"Your participation state changed\"),\n body=text_content,\n )\n message.attach_alternative(html_content, \"text/html\")\n messages.append(message)\n\n # send mail to responsible users\n responsible_users = get_users_with_perms(\n participation.shift.event, only_with_perms_in=[\"change_event\"]\n ).distinct()\n subject = _(\"Participation was changed for your event\")\n text_content = _(\n \"The participation of {participant} for {shift} was changed. 
The status is now {status}\"\n ).format(\n participant=participation.participant,\n shift=participation.shift,\n status=participation.get_state_display(),\n )\n html_content = render_to_string(\"email_base.html\", {\"message_text\": text_content})\n for user in responsible_users:\n message = EmailMultiAlternatives(to=[user.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n messages.append(message)\n\n mail.get_connection().send_messages(messages)\n", "path": "ephios/event_management/mail.py"}, {"content": "from django.contrib.auth.tokens import default_token_generator\nfrom django.core.mail import EmailMultiAlternatives\nfrom django.template.loader import render_to_string\nfrom django.urls import reverse\nfrom django.utils.encoding import force_bytes\nfrom django.utils.http import urlsafe_base64_encode\nfrom django.utils.translation import gettext as _\n\nfrom ephios.settings import SITE_URL\n\n\ndef send_account_creation_info(userprofile):\n subject = _(\"Welcome to ephios!\")\n uid = urlsafe_base64_encode(force_bytes(userprofile.id))\n token = default_token_generator.make_token(userprofile)\n reset_link = reverse(\"password_reset_confirm\", kwargs={\"uidb64\": uid, \"token\": token})\n text_content = _(\n \"You're receiving this email because a new account has been created for you at ephios.\\n\"\n \"Please go to the following page and choose a password: {url}{reset_link}\\n\"\n \"Your username is your email address: {email}\\n\"\n ).format(url=SITE_URL, reset_link=reset_link, email=userprofile.email)\n\n html_content = render_to_string(\n \"user_management/new_account_email.html\",\n {\"uid\": uid, \"token\": token, \"site_url\": SITE_URL, \"email\": userprofile.email},\n )\n message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n message.send()\n\n\ndef send_account_update_info(userprofile):\n subject = _(\"ephios account updated\")\n url = reverse(\"user_management:profile\")\n text_content = _(\n \"You're receiving this email because your account at ephios has been updated.\\n\"\n \"You can see the changes in your profile: {site_url}{url}\\n\"\n \"Your username is your email address: {email}\\n\"\n ).format(site_url=SITE_URL, url=url, email=userprofile.email)\n\n html_content = render_to_string(\n \"user_management/account_updated_email.html\",\n {\"site_url\": SITE_URL, \"url\": url, \"email\": userprofile.email},\n )\n message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n message.send()\n", "path": "ephios/user_management/mail.py"}], "after_files": [{"content": "from urllib.parse import urljoin\n\nfrom django.core import mail\nfrom django.core.mail import EmailMultiAlternatives\nfrom django.template.loader import render_to_string\nfrom django.utils.translation import gettext as _\nfrom guardian.shortcuts import get_users_with_perms\n\nfrom ephios.event_management.models import AbstractParticipation\nfrom ephios.extra.permissions import get_groups_with_perms\nfrom ephios.settings import SITE_URL\nfrom ephios.user_management.models import UserProfile\n\n\ndef new_event(event):\n messages = []\n users = UserProfile.objects.filter(\n groups__in=get_groups_with_perms(event, only_with_perms_in=[\"view_event\"]), is_active=True\n ).distinct()\n responsible_users = get_users_with_perms(event, only_with_perms_in=[\"change_event\"]).distinct()\n 
responsible_persons_mails = list(responsible_users.values_list(\"email\", flat=True))\n\n subject = _(\"New {type}: {title}\").format(type=event.type, title=event.title)\n text_content = _(\n \"A new {type} ({title}, {location}) has been added.\\n\"\n \"Further information: {description}\\n\"\n \"You can view the event here: {url}\"\n ).format(\n type=event.type,\n title=event.title,\n location=event.location,\n description=event.description,\n url=urljoin(SITE_URL, event.get_absolute_url()),\n )\n html_content = render_to_string(\n \"event_management/mails/new_event.html\", {\"event\": event, \"site_url\": SITE_URL}\n )\n\n for user in users:\n message = EmailMultiAlternatives(\n to=[user.email], subject=subject, body=text_content, reply_to=responsible_persons_mails\n )\n message.attach_alternative(html_content, \"text/html\")\n messages.append(message)\n mail.get_connection().send_messages(messages)\n\n\ndef participation_state_changed(participation: AbstractParticipation):\n if participation.state != AbstractParticipation.States.USER_DECLINED:\n messages = []\n\n # send mail to the participant whose participation has been changed\n if participation.participant.email is not None:\n text_content = _(\n \"The status for your participation for {shift} has changed. It is now {status}.\"\n ).format(shift=participation.shift, status=participation.get_state_display())\n html_content = render_to_string(\"email_base.html\", {\"message_text\": text_content})\n message = EmailMultiAlternatives(\n to=[participation.participant.email],\n subject=_(\"Your participation state changed\"),\n body=text_content,\n )\n message.attach_alternative(html_content, \"text/html\")\n messages.append(message)\n\n # send mail to responsible users\n responsible_users = get_users_with_perms(\n participation.shift.event, only_with_perms_in=[\"change_event\"]\n ).distinct()\n subject = _(\"Participation was changed for your event\")\n text_content = _(\n \"The participation of {participant} for {shift} was changed. 
The status is now {status}\"\n ).format(\n participant=participation.participant,\n shift=participation.shift,\n status=participation.get_state_display(),\n )\n html_content = render_to_string(\"email_base.html\", {\"message_text\": text_content})\n for user in responsible_users:\n message = EmailMultiAlternatives(to=[user.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n messages.append(message)\n\n mail.get_connection().send_messages(messages)\n", "path": "ephios/event_management/mail.py"}, {"content": "from urllib.parse import urljoin\n\nfrom django.contrib.auth.tokens import default_token_generator\nfrom django.core.mail import EmailMultiAlternatives\nfrom django.template.loader import render_to_string\nfrom django.urls import reverse\nfrom django.utils.encoding import force_bytes\nfrom django.utils.http import urlsafe_base64_encode\nfrom django.utils.translation import gettext as _\n\nfrom ephios.settings import SITE_URL\n\n\ndef send_account_creation_info(userprofile):\n subject = _(\"Welcome to ephios!\")\n uid = urlsafe_base64_encode(force_bytes(userprofile.id))\n token = default_token_generator.make_token(userprofile)\n reset_link = reverse(\"password_reset_confirm\", kwargs={\"uidb64\": uid, \"token\": token})\n text_content = _(\n \"You're receiving this email because a new account has been created for you at ephios.\\n\"\n \"Please go to the following page and choose a password: {url}\\n\"\n \"Your username is your email address: {email}\\n\"\n ).format(url=urljoin(SITE_URL, reset_link), email=userprofile.email)\n\n html_content = render_to_string(\n \"user_management/new_account_email.html\",\n {\"uid\": uid, \"token\": token, \"site_url\": SITE_URL, \"email\": userprofile.email},\n )\n message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n message.send()\n\n\ndef send_account_update_info(userprofile):\n subject = _(\"ephios account updated\")\n url = reverse(\"user_management:profile\")\n text_content = _(\n \"You're receiving this email because your account at ephios has been updated.\\n\"\n \"You can see the changes in your profile: {url}\\n\"\n \"Your username is your email address: {email}\\n\"\n ).format(url=urljoin(SITE_URL, url), email=userprofile.email)\n\n html_content = render_to_string(\n \"user_management/account_updated_email.html\",\n {\"site_url\": SITE_URL, \"url\": url, \"email\": userprofile.email},\n )\n message = EmailMultiAlternatives(to=[userprofile.email], subject=subject, body=text_content)\n message.attach_alternative(html_content, \"text/html\")\n message.send()\n", "path": "ephios/user_management/mail.py"}]} | 1,686 | 709 |
gh_patches_debug_535 | rasdani/github-patches | git_diff | neptune-ai__neptune-client-155 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
create_experiment() fails on windows 10
Hi there,
I enjoy neptune very much and on my macbook everything works fine. But when I run the same code on my Windows 10 machine, I get an error when calling create_experiment().
`Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\ProgramData\Anaconda3\envs\rl_insurance\lib\site-packages\neptune\__init__.py", line 177, in create_experiment
notebook_id=notebook_id
File "C:\ProgramData\Anaconda3\envs\rl_insurance\lib\site-packages\neptune\projects.py", line 400, in create_experiment
click.echo(str(experiment.id))
File "C:\ProgramData\Anaconda3\envs\rl_insurance\lib\site-packages\click\utils.py", line 218, in echo
file = _default_text_stdout()
File "C:\ProgramData\Anaconda3\envs\rl_insurance\lib\site-packages\click\_compat.py", line 675, in func
rv = wrapper_func()
File "C:\ProgramData\Anaconda3\envs\rl_insurance\lib\site-packages\click\_compat.py", line 436, in get_text_stdout
rv = _get_windows_console_stream(sys.stdout, encoding, errors)
File "C:\ProgramData\Anaconda3\envs\rl_insurance\lib\site-packages\click\_winconsole.py", line 295, in _get_windows_console_stream
func = _stream_factories.get(f.fileno())
AttributeError: 'StdOutWithUpload' object has no attribute 'fileno'`
It happens when I run:
`import neptune `
`import cfg`
`neptune.init(api_token=cfg.neptune_token, project_qualified_name=cfg.neptune_project_name) `
`neptune.create_experiment()`
I run it in conda environments both times.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `neptune/internal/streams/stdstream_uploader.py`
Content:
```
1 #
2 # Copyright (c) 2019, Neptune Labs Sp. z o.o.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 #
16 import sys
17
18 from neptune.internal.channels.channels import ChannelNamespace
19 from neptune.internal.streams.channel_writer import ChannelWriter
20
21
22 class StdStreamWithUpload(object):
23
24 def __init__(self, experiment, channel_name, stream):
25 # pylint:disable=protected-access
26 self._channel = experiment._get_channel(channel_name, 'text', ChannelNamespace.SYSTEM)
27 self._channel_writer = ChannelWriter(experiment, channel_name, ChannelNamespace.SYSTEM)
28 self._stream = stream
29
30 def write(self, data):
31 self._stream.write(data)
32 try:
33 self._channel_writer.write(data)
34 # pylint:disable=bare-except
35 except:
36 pass
37
38 def isatty(self):
39 return hasattr(self._stream, 'isatty') and self._stream.isatty()
40
41 def flush(self):
42 self._stream.flush()
43
44
45 class StdOutWithUpload(StdStreamWithUpload):
46
47 def __init__(self, experiment):
48 super(StdOutWithUpload, self).__init__(experiment, 'stdout', sys.__stdout__)
49 sys.stdout = self
50
51 def close(self):
52 sys.stdout = sys.__stdout__
53
54
55 class StdErrWithUpload(StdStreamWithUpload):
56
57 def __init__(self, experiment):
58 super(StdErrWithUpload, self).__init__(experiment, 'stderr', sys.__stderr__)
59 sys.stderr = self
60
61 def close(self):
62 sys.stderr = sys.__stderr__
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/neptune/internal/streams/stdstream_uploader.py b/neptune/internal/streams/stdstream_uploader.py
--- a/neptune/internal/streams/stdstream_uploader.py
+++ b/neptune/internal/streams/stdstream_uploader.py
@@ -41,6 +41,9 @@
def flush(self):
self._stream.flush()
+ def fileno(self):
+ return self._stream.fileno()
+
class StdOutWithUpload(StdStreamWithUpload):
| {"golden_diff": "diff --git a/neptune/internal/streams/stdstream_uploader.py b/neptune/internal/streams/stdstream_uploader.py\n--- a/neptune/internal/streams/stdstream_uploader.py\n+++ b/neptune/internal/streams/stdstream_uploader.py\n@@ -41,6 +41,9 @@\n def flush(self):\n self._stream.flush()\n \n+ def fileno(self):\n+ return self._stream.fileno()\n+\n \n class StdOutWithUpload(StdStreamWithUpload):\n", "issue": "create_experiment() fails on windows 10\nHi there, \r\n\r\nI enjoy neptune very much and on my macbook everything works fine. But when I run the same code on my Windows 10 machine, I get an error when calling create_experiment().\r\n\r\n`Traceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\rl_insurance\\lib\\site-packages\\neptune\\__init__.py\", line 177, in create_experiment\r\n notebook_id=notebook_id\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\rl_insurance\\lib\\site-packages\\neptune\\projects.py\", line 400, in create_experiment\r\n click.echo(str(experiment.id))\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\rl_insurance\\lib\\site-packages\\click\\utils.py\", line 218, in echo\r\n file = _default_text_stdout()\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\rl_insurance\\lib\\site-packages\\click\\_compat.py\", line 675, in func\r\n rv = wrapper_func()\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\rl_insurance\\lib\\site-packages\\click\\_compat.py\", line 436, in get_text_stdout\r\n rv = _get_windows_console_stream(sys.stdout, encoding, errors)\r\n File \"C:\\ProgramData\\Anaconda3\\envs\\rl_insurance\\lib\\site-packages\\click\\_winconsole.py\", line 295, in _get_windows_console_stream\r\n func = _stream_factories.get(f.fileno())\r\nAttributeError: 'StdOutWithUpload' object has no attribute 'fileno'`\r\n\r\nIt happens when I run:\r\n\r\n`import neptune `\r\n`import cfg`\r\n`neptune.init(api_token=cfg.neptune_token, project_qualified_name=cfg.neptune_project_name) `\r\n`neptune.create_experiment()`\r\n\r\nI run it in conda environments both times.\r\n\n", "before_files": [{"content": "#\n# Copyright (c) 2019, Neptune Labs Sp. 
z o.o.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport sys\n\nfrom neptune.internal.channels.channels import ChannelNamespace\nfrom neptune.internal.streams.channel_writer import ChannelWriter\n\n\nclass StdStreamWithUpload(object):\n\n def __init__(self, experiment, channel_name, stream):\n # pylint:disable=protected-access\n self._channel = experiment._get_channel(channel_name, 'text', ChannelNamespace.SYSTEM)\n self._channel_writer = ChannelWriter(experiment, channel_name, ChannelNamespace.SYSTEM)\n self._stream = stream\n\n def write(self, data):\n self._stream.write(data)\n try:\n self._channel_writer.write(data)\n # pylint:disable=bare-except\n except:\n pass\n\n def isatty(self):\n return hasattr(self._stream, 'isatty') and self._stream.isatty()\n\n def flush(self):\n self._stream.flush()\n\n\nclass StdOutWithUpload(StdStreamWithUpload):\n\n def __init__(self, experiment):\n super(StdOutWithUpload, self).__init__(experiment, 'stdout', sys.__stdout__)\n sys.stdout = self\n\n def close(self):\n sys.stdout = sys.__stdout__\n\n\nclass StdErrWithUpload(StdStreamWithUpload):\n\n def __init__(self, experiment):\n super(StdErrWithUpload, self).__init__(experiment, 'stderr', sys.__stderr__)\n sys.stderr = self\n\n def close(self):\n sys.stderr = sys.__stderr__\n", "path": "neptune/internal/streams/stdstream_uploader.py"}], "after_files": [{"content": "#\n# Copyright (c) 2019, Neptune Labs Sp. 
z o.o.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport sys\n\nfrom neptune.internal.channels.channels import ChannelNamespace\nfrom neptune.internal.streams.channel_writer import ChannelWriter\n\n\nclass StdStreamWithUpload(object):\n\n def __init__(self, experiment, channel_name, stream):\n # pylint:disable=protected-access\n self._channel = experiment._get_channel(channel_name, 'text', ChannelNamespace.SYSTEM)\n self._channel_writer = ChannelWriter(experiment, channel_name, ChannelNamespace.SYSTEM)\n self._stream = stream\n\n def write(self, data):\n self._stream.write(data)\n try:\n self._channel_writer.write(data)\n # pylint:disable=bare-except\n except:\n pass\n\n def isatty(self):\n return hasattr(self._stream, 'isatty') and self._stream.isatty()\n\n def flush(self):\n self._stream.flush()\n\n def fileno(self):\n return self._stream.fileno()\n\n\nclass StdOutWithUpload(StdStreamWithUpload):\n\n def __init__(self, experiment):\n super(StdOutWithUpload, self).__init__(experiment, 'stdout', sys.__stdout__)\n sys.stdout = self\n\n def close(self):\n sys.stdout = sys.__stdout__\n\n\nclass StdErrWithUpload(StdStreamWithUpload):\n\n def __init__(self, experiment):\n super(StdErrWithUpload, self).__init__(experiment, 'stderr', sys.__stderr__)\n sys.stderr = self\n\n def close(self):\n sys.stderr = sys.__stderr__\n", "path": "neptune/internal/streams/stdstream_uploader.py"}]} | 1,283 | 105 |
gh_patches_debug_32741 | rasdani/github-patches | git_diff | WeblateOrg__weblate-9260 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Document weblate migrate command
### Describe the problem
The `weblate migrate` command is mentioned in the docs, but not actually documented.
It is also used inconsistently:
1. https://docs.weblate.org/en/latest/admin/languages.html#built-in-language-definitions
2. https://docs.weblate.org/en/latest/admin/install.html#filling-up-the-database
### Describe the solution you'd like
document the usage and link it in mentioned occurrences.
### Describe alternatives you've considered
_No response_
### Screenshots
_No response_
### Additional context
_No response_
--- END ISSUE ---
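For context, a short Sphinx-extension sketch of the shape such a fix can take (the `weblate-admin` directive and `wladmin` role names mirror the patch further down in this record; only Sphinx's public `add_object_type` API is assumed): registering a dedicated object type lets `weblate migrate` be documented once and cross-referenced from both places mentioned in the issue.
```python
# Hypothetical Sphinx extension sketch; directive/role names follow the patch below.
from sphinx import addnodes


def parse_weblate_admin_node(env, sig, signode):
    # "migrate --check" -> program context "migrate", rendered as "weblate migrate --check"
    command = sig.split(" ")[0]
    env.ref_context["std:program"] = command
    title = f"weblate {sig}"
    signode += addnodes.desc_name(title, title)
    return command


def setup(app):
    app.add_object_type(
        directivename="weblate-admin",
        rolename="wladmin",
        indextemplate="pair: %s; weblate admin command",
        parse_node=parse_weblate_admin_node,
    )
    return {"parallel_read_safe": True}
```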
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/_ext/djangodocs.py`
Content:
```
1 """Sphinx plugins for Weblate documentation."""
2 import re
3
4 from sphinx import addnodes
5 from sphinx.domains.std import Cmdoption
6
7 # RE for option descriptions without a '--' prefix
8 simple_option_desc_re = re.compile(r"([-_a-zA-Z0-9]+)(\s*.*?)(?=,\s+(?:/|-|--)|$)")
9
10
11 def setup(app):
12 app.add_crossref_type(
13 directivename="setting", rolename="setting", indextemplate="pair: %s; setting"
14 )
15 app.add_object_type(
16 directivename="django-admin",
17 rolename="djadmin",
18 indextemplate="pair: %s; weblate admin command",
19 parse_node=parse_django_admin_node,
20 )
21 app.add_directive("django-admin-option", Cmdoption)
22
23
24 def parse_django_admin_node(env, sig, signode):
25 command = sig.split(" ")[0]
26 env.ref_context["std:program"] = command
27 title = f"weblate {sig}"
28 signode += addnodes.desc_name(title, title)
29 return command
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/_ext/djangodocs.py b/docs/_ext/djangodocs.py
--- a/docs/_ext/djangodocs.py
+++ b/docs/_ext/djangodocs.py
@@ -1,6 +1,7 @@
"""Sphinx plugins for Weblate documentation."""
import re
+from docutils.nodes import literal
from sphinx import addnodes
from sphinx.domains.std import Cmdoption
@@ -8,22 +9,45 @@
simple_option_desc_re = re.compile(r"([-_a-zA-Z0-9]+)(\s*.*?)(?=,\s+(?:/|-|--)|$)")
+class WeblateCommandLiteral(literal):
+ def __init__(self, rawsource="", text="", *children, **attributes):
+ if not text:
+ text = "weblate "
+ super().__init__(rawsource, text, *children, **attributes)
+
+
def setup(app):
app.add_crossref_type(
directivename="setting", rolename="setting", indextemplate="pair: %s; setting"
)
+ app.add_object_type(
+ directivename="weblate-admin",
+ rolename="wladmin",
+ indextemplate="pair: %s; weblate admin command",
+ parse_node=parse_weblate_admin_node,
+ ref_nodeclass=WeblateCommandLiteral,
+ )
+ app.add_directive("weblate-admin-option", Cmdoption)
app.add_object_type(
directivename="django-admin",
rolename="djadmin",
- indextemplate="pair: %s; weblate admin command",
+ indextemplate="pair: %s; django-admin command",
parse_node=parse_django_admin_node,
)
- app.add_directive("django-admin-option", Cmdoption)
-def parse_django_admin_node(env, sig, signode):
+def parse_weblate_admin_node(env, sig, signode):
command = sig.split(" ")[0]
+ # Context for options
env.ref_context["std:program"] = command
title = f"weblate {sig}"
signode += addnodes.desc_name(title, title)
return command
+
+
+def parse_django_admin_node(env, sig, signode):
+ command = sig.split(" ")[0]
+ env.ref_context["std:program"] = command
+ title = "django-admin %s" % sig
+ signode += addnodes.desc_name(title, title)
+ return command
| {"golden_diff": "diff --git a/docs/_ext/djangodocs.py b/docs/_ext/djangodocs.py\n--- a/docs/_ext/djangodocs.py\n+++ b/docs/_ext/djangodocs.py\n@@ -1,6 +1,7 @@\n \"\"\"Sphinx plugins for Weblate documentation.\"\"\"\n import re\n \n+from docutils.nodes import literal\n from sphinx import addnodes\n from sphinx.domains.std import Cmdoption\n \n@@ -8,22 +9,45 @@\n simple_option_desc_re = re.compile(r\"([-_a-zA-Z0-9]+)(\\s*.*?)(?=,\\s+(?:/|-|--)|$)\")\n \n \n+class WeblateCommandLiteral(literal):\n+ def __init__(self, rawsource=\"\", text=\"\", *children, **attributes):\n+ if not text:\n+ text = \"weblate \"\n+ super().__init__(rawsource, text, *children, **attributes)\n+\n+\n def setup(app):\n app.add_crossref_type(\n directivename=\"setting\", rolename=\"setting\", indextemplate=\"pair: %s; setting\"\n )\n+ app.add_object_type(\n+ directivename=\"weblate-admin\",\n+ rolename=\"wladmin\",\n+ indextemplate=\"pair: %s; weblate admin command\",\n+ parse_node=parse_weblate_admin_node,\n+ ref_nodeclass=WeblateCommandLiteral,\n+ )\n+ app.add_directive(\"weblate-admin-option\", Cmdoption)\n app.add_object_type(\n directivename=\"django-admin\",\n rolename=\"djadmin\",\n- indextemplate=\"pair: %s; weblate admin command\",\n+ indextemplate=\"pair: %s; django-admin command\",\n parse_node=parse_django_admin_node,\n )\n- app.add_directive(\"django-admin-option\", Cmdoption)\n \n \n-def parse_django_admin_node(env, sig, signode):\n+def parse_weblate_admin_node(env, sig, signode):\n command = sig.split(\" \")[0]\n+ # Context for options\n env.ref_context[\"std:program\"] = command\n title = f\"weblate {sig}\"\n signode += addnodes.desc_name(title, title)\n return command\n+\n+\n+def parse_django_admin_node(env, sig, signode):\n+ command = sig.split(\" \")[0]\n+ env.ref_context[\"std:program\"] = command\n+ title = \"django-admin %s\" % sig\n+ signode += addnodes.desc_name(title, title)\n+ return command\n", "issue": "Document weblate migrate command\n### Describe the problem\n\n`weblate migrate` command is mentioned in the docs, but not actually documented. \r\n\r\nIt is also used inconsistently:\r\n1. https://docs.weblate.org/en/latest/admin/languages.html#built-in-language-definitions\r\n2. 
https://docs.weblate.org/en/latest/admin/install.html#filling-up-the-database\n\n### Describe the solution you'd like\n\ndocument the usage and link it in mentioned occurrences.\n\n### Describe alternatives you've considered\n\n_No response_\n\n### Screenshots\n\n_No response_\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "\"\"\"Sphinx plugins for Weblate documentation.\"\"\"\nimport re\n\nfrom sphinx import addnodes\nfrom sphinx.domains.std import Cmdoption\n\n# RE for option descriptions without a '--' prefix\nsimple_option_desc_re = re.compile(r\"([-_a-zA-Z0-9]+)(\\s*.*?)(?=,\\s+(?:/|-|--)|$)\")\n\n\ndef setup(app):\n app.add_crossref_type(\n directivename=\"setting\", rolename=\"setting\", indextemplate=\"pair: %s; setting\"\n )\n app.add_object_type(\n directivename=\"django-admin\",\n rolename=\"djadmin\",\n indextemplate=\"pair: %s; weblate admin command\",\n parse_node=parse_django_admin_node,\n )\n app.add_directive(\"django-admin-option\", Cmdoption)\n\n\ndef parse_django_admin_node(env, sig, signode):\n command = sig.split(\" \")[0]\n env.ref_context[\"std:program\"] = command\n title = f\"weblate {sig}\"\n signode += addnodes.desc_name(title, title)\n return command\n", "path": "docs/_ext/djangodocs.py"}], "after_files": [{"content": "\"\"\"Sphinx plugins for Weblate documentation.\"\"\"\nimport re\n\nfrom docutils.nodes import literal\nfrom sphinx import addnodes\nfrom sphinx.domains.std import Cmdoption\n\n# RE for option descriptions without a '--' prefix\nsimple_option_desc_re = re.compile(r\"([-_a-zA-Z0-9]+)(\\s*.*?)(?=,\\s+(?:/|-|--)|$)\")\n\n\nclass WeblateCommandLiteral(literal):\n def __init__(self, rawsource=\"\", text=\"\", *children, **attributes):\n if not text:\n text = \"weblate \"\n super().__init__(rawsource, text, *children, **attributes)\n\n\ndef setup(app):\n app.add_crossref_type(\n directivename=\"setting\", rolename=\"setting\", indextemplate=\"pair: %s; setting\"\n )\n app.add_object_type(\n directivename=\"weblate-admin\",\n rolename=\"wladmin\",\n indextemplate=\"pair: %s; weblate admin command\",\n parse_node=parse_weblate_admin_node,\n ref_nodeclass=WeblateCommandLiteral,\n )\n app.add_directive(\"weblate-admin-option\", Cmdoption)\n app.add_object_type(\n directivename=\"django-admin\",\n rolename=\"djadmin\",\n indextemplate=\"pair: %s; django-admin command\",\n parse_node=parse_django_admin_node,\n )\n\n\ndef parse_weblate_admin_node(env, sig, signode):\n command = sig.split(\" \")[0]\n # Context for options\n env.ref_context[\"std:program\"] = command\n title = f\"weblate {sig}\"\n signode += addnodes.desc_name(title, title)\n return command\n\n\ndef parse_django_admin_node(env, sig, signode):\n command = sig.split(\" \")[0]\n env.ref_context[\"std:program\"] = command\n title = \"django-admin %s\" % sig\n signode += addnodes.desc_name(title, title)\n return command\n", "path": "docs/_ext/djangodocs.py"}]} | 679 | 559 |
gh_patches_debug_32632 | rasdani/github-patches | git_diff | docker__docker-py-727 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
split_port() does not properly handle ":80" or "127.0.0.1:"
Initially reported as https://github.com/docker/compose/issues/1887
Example:
``` python
def test_port_only_with_colon(self):
self.assertRaises(ValueError,
lambda: split_port(":80"))
def test_host_only_with_colon(self):
self.assertRaises(ValueError,
lambda: split_port("localhost:"))
```
Results:
```
======================================================================
ERROR: test_host_only_with_colon (__main__.UtilsTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tests/utils_test.py", line 428, in test_host_only_with_colon
lambda: split_port("localhost:"))
File "/usr/lib/python2.7/unittest/case.py", line 473, in assertRaises
callableObj(*args, **kwargs)
File "tests/utils_test.py", line 428, in <lambda>
lambda: split_port("localhost:"))
File "/home/mark/Projects/docker-py/docker/utils/ports/ports.py", line 77, in split_port
if len(internal_range) != len(external_range):
TypeError: object of type 'NoneType' has no len()
======================================================================
ERROR: test_port_only_with_colon (__main__.UtilsTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tests/utils_test.py", line 424, in test_port_only_with_colon
lambda: split_port(":80"))
File "/usr/lib/python2.7/unittest/case.py", line 473, in assertRaises
callableObj(*args, **kwargs)
File "tests/utils_test.py", line 424, in <lambda>
lambda: split_port(":80"))
File "/home/mark/Projects/docker-py/docker/utils/ports/ports.py", line 77, in split_port
if len(internal_range) != len(external_range):
TypeError: object of type 'NoneType' has no len()
```
--- END ISSUE ---
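For illustration only, a self-contained sketch (not docker-py's code; `split_port_strict` is a made-up helper) of the validation the issue asks for: an empty host or port segment around ":" should surface as a clean ValueError instead of a TypeError deep inside the range logic.
```python
# Standalone illustration; split_port_strict is hypothetical, not docker-py's API.
def split_port_strict(port_spec):
    parts = str(port_spec).split(":")
    if not 1 <= len(parts) <= 3:
        raise ValueError('Invalid port "%s"' % port_spec)
    if any(part == "" for part in parts):
        raise ValueError('Invalid port "%s": empty host or port segment' % port_spec)
    return parts


if __name__ == "__main__":
    for spec in ("8080", "127.0.0.1:8080:80", ":80", "localhost:"):
        try:
            print(spec, "->", split_port_strict(spec))
        except ValueError as exc:
            print(spec, "-> ValueError:", exc)
```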
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/utils/ports/ports.py`
Content:
```
1
2
3 def add_port_mapping(port_bindings, internal_port, external):
4 if internal_port in port_bindings:
5 port_bindings[internal_port].append(external)
6 else:
7 port_bindings[internal_port] = [external]
8
9
10 def add_port(port_bindings, internal_port_range, external_range):
11 if external_range is None:
12 for internal_port in internal_port_range:
13 add_port_mapping(port_bindings, internal_port, None)
14 else:
15 ports = zip(internal_port_range, external_range)
16 for internal_port, external_port in ports:
17 add_port_mapping(port_bindings, internal_port, external_port)
18
19
20 def build_port_bindings(ports):
21 port_bindings = {}
22 for port in ports:
23 internal_port_range, external_range = split_port(port)
24 add_port(port_bindings, internal_port_range, external_range)
25 return port_bindings
26
27
28 def to_port_range(port):
29 if not port:
30 return None
31
32 protocol = ""
33 if "/" in port:
34 parts = port.split("/")
35 if len(parts) != 2:
36 raise ValueError('Invalid port "%s", should be '
37 '[[remote_ip:]remote_port[-remote_port]:]'
38 'port[/protocol]' % port)
39 port, protocol = parts
40 protocol = "/" + protocol
41
42 parts = str(port).split('-')
43
44 if len(parts) == 1:
45 return ["%s%s" % (port, protocol)]
46
47 if len(parts) == 2:
48 full_port_range = range(int(parts[0]), int(parts[1]) + 1)
49 return ["%s%s" % (p, protocol) for p in full_port_range]
50
51 raise ValueError('Invalid port range "%s", should be '
52 'port or startport-endport' % port)
53
54
55 def split_port(port):
56 parts = str(port).split(':')
57 if not 1 <= len(parts) <= 3:
58 raise ValueError('Invalid port "%s", should be '
59 '[[remote_ip:]remote_port:]port[/protocol]' % port)
60
61 if len(parts) == 1:
62 internal_port, = parts
63 return to_port_range(internal_port), None
64 if len(parts) == 2:
65 external_port, internal_port = parts
66
67 internal_range = to_port_range(internal_port)
68 external_range = to_port_range(external_port)
69 if len(internal_range) != len(external_range):
70 raise ValueError('Port ranges don\'t match in length')
71
72 return internal_range, external_range
73
74 external_ip, external_port, internal_port = parts
75 internal_range = to_port_range(internal_port)
76 external_range = to_port_range(external_port)
77 if not external_range:
78 external_range = [None] * len(internal_range)
79
80 if len(internal_range) != len(external_range):
81 raise ValueError('Port ranges don\'t match in length')
82
83 return internal_range, [(external_ip, ex_port or None)
84 for ex_port in external_range]
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docker/utils/ports/ports.py b/docker/utils/ports/ports.py
--- a/docker/utils/ports/ports.py
+++ b/docker/utils/ports/ports.py
@@ -1,5 +1,4 @@
-
def add_port_mapping(port_bindings, internal_port, external):
if internal_port in port_bindings:
port_bindings[internal_port].append(external)
@@ -33,9 +32,8 @@
if "/" in port:
parts = port.split("/")
if len(parts) != 2:
- raise ValueError('Invalid port "%s", should be '
- '[[remote_ip:]remote_port[-remote_port]:]'
- 'port[/protocol]' % port)
+ _raise_invalid_port(port)
+
port, protocol = parts
protocol = "/" + protocol
@@ -52,11 +50,17 @@
'port or startport-endport' % port)
+def _raise_invalid_port(port):
+ raise ValueError('Invalid port "%s", should be '
+ '[[remote_ip:]remote_port[-remote_port]:]'
+ 'port[/protocol]' % port)
+
+
def split_port(port):
parts = str(port).split(':')
+
if not 1 <= len(parts) <= 3:
- raise ValueError('Invalid port "%s", should be '
- '[[remote_ip:]remote_port:]port[/protocol]' % port)
+ _raise_invalid_port(port)
if len(parts) == 1:
internal_port, = parts
@@ -66,6 +70,10 @@
internal_range = to_port_range(internal_port)
external_range = to_port_range(external_port)
+
+ if internal_range is None or external_range is None:
+ _raise_invalid_port(port)
+
if len(internal_range) != len(external_range):
raise ValueError('Port ranges don\'t match in length')
| {"golden_diff": "diff --git a/docker/utils/ports/ports.py b/docker/utils/ports/ports.py\n--- a/docker/utils/ports/ports.py\n+++ b/docker/utils/ports/ports.py\n@@ -1,5 +1,4 @@\n \n-\n def add_port_mapping(port_bindings, internal_port, external):\n if internal_port in port_bindings:\n port_bindings[internal_port].append(external)\n@@ -33,9 +32,8 @@\n if \"/\" in port:\n parts = port.split(\"/\")\n if len(parts) != 2:\n- raise ValueError('Invalid port \"%s\", should be '\n- '[[remote_ip:]remote_port[-remote_port]:]'\n- 'port[/protocol]' % port)\n+ _raise_invalid_port(port)\n+\n port, protocol = parts\n protocol = \"/\" + protocol\n \n@@ -52,11 +50,17 @@\n 'port or startport-endport' % port)\n \n \n+def _raise_invalid_port(port):\n+ raise ValueError('Invalid port \"%s\", should be '\n+ '[[remote_ip:]remote_port[-remote_port]:]'\n+ 'port[/protocol]' % port)\n+\n+\n def split_port(port):\n parts = str(port).split(':')\n+\n if not 1 <= len(parts) <= 3:\n- raise ValueError('Invalid port \"%s\", should be '\n- '[[remote_ip:]remote_port:]port[/protocol]' % port)\n+ _raise_invalid_port(port)\n \n if len(parts) == 1:\n internal_port, = parts\n@@ -66,6 +70,10 @@\n \n internal_range = to_port_range(internal_port)\n external_range = to_port_range(external_port)\n+\n+ if internal_range is None or external_range is None:\n+ _raise_invalid_port(port)\n+\n if len(internal_range) != len(external_range):\n raise ValueError('Port ranges don\\'t match in length')\n", "issue": "split_port() does not properly handle \":80\" or \"127.0.0.1:\" properly\nInitially reported as https://github.com/docker/compose/issues/1887 \n\nExample:\n\n``` python\n def test_port_only_with_colon(self):\n self.assertRaises(ValueError,\n lambda: split_port(\":80\"))\n\n def test_host_only_with_colon(self):\n self.assertRaises(ValueError,\n lambda: split_port(\"localhost:\"))\n```\n\nResults:\n\n```\n======================================================================\nERROR: test_host_only_with_colon (__main__.UtilsTest)\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"tests/utils_test.py\", line 428, in test_host_only_with_colon\n lambda: split_port(\"localhost:\"))\n File \"/usr/lib/python2.7/unittest/case.py\", line 473, in assertRaises\n callableObj(*args, **kwargs)\n File \"tests/utils_test.py\", line 428, in <lambda>\n lambda: split_port(\"localhost:\"))\n File \"/home/mark/Projects/docker-py/docker/utils/ports/ports.py\", line 77, in split_port\n if len(internal_range) != len(external_range):\nTypeError: object of type 'NoneType' has no len()\n\n======================================================================\nERROR: test_port_only_with_colon (__main__.UtilsTest)\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"tests/utils_test.py\", line 424, in test_port_only_with_colon\n lambda: split_port(\":80\"))\n File \"/usr/lib/python2.7/unittest/case.py\", line 473, in assertRaises\n callableObj(*args, **kwargs)\n File \"tests/utils_test.py\", line 424, in <lambda>\n lambda: split_port(\":80\"))\n File \"/home/mark/Projects/docker-py/docker/utils/ports/ports.py\", line 77, in split_port\n if len(internal_range) != len(external_range):\nTypeError: object of type 'NoneType' has no len()\n```\n\n", "before_files": [{"content": "\n\ndef add_port_mapping(port_bindings, internal_port, external):\n if internal_port in port_bindings:\n port_bindings[internal_port].append(external)\n else:\n 
port_bindings[internal_port] = [external]\n\n\ndef add_port(port_bindings, internal_port_range, external_range):\n if external_range is None:\n for internal_port in internal_port_range:\n add_port_mapping(port_bindings, internal_port, None)\n else:\n ports = zip(internal_port_range, external_range)\n for internal_port, external_port in ports:\n add_port_mapping(port_bindings, internal_port, external_port)\n\n\ndef build_port_bindings(ports):\n port_bindings = {}\n for port in ports:\n internal_port_range, external_range = split_port(port)\n add_port(port_bindings, internal_port_range, external_range)\n return port_bindings\n\n\ndef to_port_range(port):\n if not port:\n return None\n\n protocol = \"\"\n if \"/\" in port:\n parts = port.split(\"/\")\n if len(parts) != 2:\n raise ValueError('Invalid port \"%s\", should be '\n '[[remote_ip:]remote_port[-remote_port]:]'\n 'port[/protocol]' % port)\n port, protocol = parts\n protocol = \"/\" + protocol\n\n parts = str(port).split('-')\n\n if len(parts) == 1:\n return [\"%s%s\" % (port, protocol)]\n\n if len(parts) == 2:\n full_port_range = range(int(parts[0]), int(parts[1]) + 1)\n return [\"%s%s\" % (p, protocol) for p in full_port_range]\n\n raise ValueError('Invalid port range \"%s\", should be '\n 'port or startport-endport' % port)\n\n\ndef split_port(port):\n parts = str(port).split(':')\n if not 1 <= len(parts) <= 3:\n raise ValueError('Invalid port \"%s\", should be '\n '[[remote_ip:]remote_port:]port[/protocol]' % port)\n\n if len(parts) == 1:\n internal_port, = parts\n return to_port_range(internal_port), None\n if len(parts) == 2:\n external_port, internal_port = parts\n\n internal_range = to_port_range(internal_port)\n external_range = to_port_range(external_port)\n if len(internal_range) != len(external_range):\n raise ValueError('Port ranges don\\'t match in length')\n\n return internal_range, external_range\n\n external_ip, external_port, internal_port = parts\n internal_range = to_port_range(internal_port)\n external_range = to_port_range(external_port)\n if not external_range:\n external_range = [None] * len(internal_range)\n\n if len(internal_range) != len(external_range):\n raise ValueError('Port ranges don\\'t match in length')\n\n return internal_range, [(external_ip, ex_port or None)\n for ex_port in external_range]\n", "path": "docker/utils/ports/ports.py"}], "after_files": [{"content": "\ndef add_port_mapping(port_bindings, internal_port, external):\n if internal_port in port_bindings:\n port_bindings[internal_port].append(external)\n else:\n port_bindings[internal_port] = [external]\n\n\ndef add_port(port_bindings, internal_port_range, external_range):\n if external_range is None:\n for internal_port in internal_port_range:\n add_port_mapping(port_bindings, internal_port, None)\n else:\n ports = zip(internal_port_range, external_range)\n for internal_port, external_port in ports:\n add_port_mapping(port_bindings, internal_port, external_port)\n\n\ndef build_port_bindings(ports):\n port_bindings = {}\n for port in ports:\n internal_port_range, external_range = split_port(port)\n add_port(port_bindings, internal_port_range, external_range)\n return port_bindings\n\n\ndef to_port_range(port):\n if not port:\n return None\n\n protocol = \"\"\n if \"/\" in port:\n parts = port.split(\"/\")\n if len(parts) != 2:\n _raise_invalid_port(port)\n\n port, protocol = parts\n protocol = \"/\" + protocol\n\n parts = str(port).split('-')\n\n if len(parts) == 1:\n return [\"%s%s\" % (port, protocol)]\n\n if len(parts) == 2:\n 
full_port_range = range(int(parts[0]), int(parts[1]) + 1)\n return [\"%s%s\" % (p, protocol) for p in full_port_range]\n\n raise ValueError('Invalid port range \"%s\", should be '\n 'port or startport-endport' % port)\n\n\ndef _raise_invalid_port(port):\n raise ValueError('Invalid port \"%s\", should be '\n '[[remote_ip:]remote_port[-remote_port]:]'\n 'port[/protocol]' % port)\n\n\ndef split_port(port):\n parts = str(port).split(':')\n\n if not 1 <= len(parts) <= 3:\n _raise_invalid_port(port)\n\n if len(parts) == 1:\n internal_port, = parts\n return to_port_range(internal_port), None\n if len(parts) == 2:\n external_port, internal_port = parts\n\n internal_range = to_port_range(internal_port)\n external_range = to_port_range(external_port)\n\n if internal_range is None or external_range is None:\n _raise_invalid_port(port)\n\n if len(internal_range) != len(external_range):\n raise ValueError('Port ranges don\\'t match in length')\n\n return internal_range, external_range\n\n external_ip, external_port, internal_port = parts\n internal_range = to_port_range(internal_port)\n external_range = to_port_range(external_port)\n if not external_range:\n external_range = [None] * len(internal_range)\n\n if len(internal_range) != len(external_range):\n raise ValueError('Port ranges don\\'t match in length')\n\n return internal_range, [(external_ip, ex_port or None)\n for ex_port in external_range]\n", "path": "docker/utils/ports/ports.py"}]} | 1,499 | 416 |
gh_patches_debug_35677 | rasdani/github-patches | git_diff | medtagger__MedTagger-519 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Log in user after registration
## Current Behavior
User needs to log in after first registration.
## Expected Behavior
User should be logged into MedTagger right after filling in the registration form.
## Steps to Reproduce the Problem
1. Register new user.
2. You will be redirected to the login page.
3. Type your login once again...
--- END ISSUE ---
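A tiny, framework-free sketch of the desired flow; the names `create_account` and `issue_token` are invented for illustration and are not MedTagger's API (its real logic is in the auth business layer shown below). The key point is that registration stores the user and hands back an authentication token in the same response, so the client never needs a separate sign-in call.
```python
# Self-contained illustration; not MedTagger's actual implementation.
import secrets

_users = {}


def issue_token(email):
    # Stand-in for a real signed/expiring token.
    return secrets.token_hex(16)


def create_account(email, password_hash):
    if email in _users:
        raise ValueError("User with this email already exists")
    _users[email] = {"password_hash": password_hash}
    # Key point: return an auth token together with the new user's id.
    return {"id": len(_users), "token": issue_token(email)}


if __name__ == "__main__":
    print(create_account("mod@example.com", "hashed-secret"))
```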
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/medtagger/api/auth/business.py`
Content:
```
1 """Module responsible for business logic in all Auth endpoint."""
2 from medtagger.api import InvalidArgumentsException
3 from medtagger.api.security import hash_password, verify_user_password, generate_auth_token
4 from medtagger.database.models import User
5 from medtagger.repositories import roles as RolesRepository, users as UsersRepository
6
7
8 def create_user(email: str, password: str, first_name: str, last_name: str) -> int:
9 """Create user with the given user information. Password is being hashed.
10
11 :param email: user email in string format
12 :param password: user password in string format
13 :param first_name: user first name in string format
14 :param last_name: user last name in string format
15
16 :return: id of the new user
17 """
18 user = UsersRepository.get_user_by_email(email)
19 if user:
20 raise InvalidArgumentsException('User with this email already exists')
21 password_hash = hash_password(password)
22 new_user = User(email, password_hash, first_name, last_name)
23 role = RolesRepository.get_role_with_name('volunteer')
24 if not role:
25 raise InvalidArgumentsException('Role does not exist.')
26 new_user.roles.append(role)
27 return UsersRepository.add_new_user(new_user)
28
29
30 def sign_in_user(email: str, password: str) -> str:
31 """Sign in user using given username and password.
32
33 :param email: user email in string format
34 :param password: user password in string format
35
36 :return: authentication token
37 """
38 user = UsersRepository.get_user_by_email(email)
39 if not user:
40 raise InvalidArgumentsException('User does not exist.')
41 if not verify_user_password(user, password):
42 raise InvalidArgumentsException('Password does not match.')
43 return generate_auth_token(user)
44
```
Path: `backend/medtagger/api/auth/service.py`
Content:
```
1 """Module responsible for definition of Auth service."""
2 from typing import Any
3
4 from flask import request
5 from flask_restplus import Resource
6
7 from medtagger.api import api
8 from medtagger.api.auth.business import create_user, sign_in_user
9 from medtagger.api.auth import serializers
10
11 auth_ns = api.namespace('auth', 'Auth methods')
12
13
14 @auth_ns.route('/register')
15 class Register(Resource):
16 """Register user endpoint."""
17
18 @staticmethod
19 @api.expect(serializers.new_user)
20 @api.doc(responses={201: 'User created', 400: 'Invalid arguments'})
21 def post() -> Any:
22 """Register the user."""
23 user = request.json
24 user_id = create_user(user['email'], user['password'], user['firstName'], user['lastName'])
25 return {'id': user_id}, 201
26
27
28 @auth_ns.route('/sign-in')
29 class SignIn(Resource):
30 """Sign in endpoint."""
31
32 @staticmethod
33 @api.expect(serializers.sign_in)
34 @api.doc(responses={200: 'Signed in', 400: 'User does not exist or wrong password was provided'})
35 def post() -> Any:
36 """Sign in the user."""
37 sign_in = request.json
38 token = sign_in_user(sign_in['email'], sign_in['password'])
39 return {"token": token}, 200
40
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/backend/medtagger/api/auth/business.py b/backend/medtagger/api/auth/business.py
--- a/backend/medtagger/api/auth/business.py
+++ b/backend/medtagger/api/auth/business.py
@@ -1,11 +1,12 @@
"""Module responsible for business logic in all Auth endpoint."""
+from typing import Tuple
from medtagger.api import InvalidArgumentsException
from medtagger.api.security import hash_password, verify_user_password, generate_auth_token
from medtagger.database.models import User
from medtagger.repositories import roles as RolesRepository, users as UsersRepository
-def create_user(email: str, password: str, first_name: str, last_name: str) -> int:
+def create_user(email: str, password: str, first_name: str, last_name: str) -> Tuple[int, str]:
"""Create user with the given user information. Password is being hashed.
:param email: user email in string format
@@ -13,7 +14,7 @@
:param first_name: user first name in string format
:param last_name: user last name in string format
- :return: id of the new user
+ :return: tuple with user id and authentication token
"""
user = UsersRepository.get_user_by_email(email)
if user:
@@ -24,7 +25,9 @@
if not role:
raise InvalidArgumentsException('Role does not exist.')
new_user.roles.append(role)
- return UsersRepository.add_new_user(new_user)
+ user_id = UsersRepository.add_new_user(new_user)
+ user_token = generate_auth_token(new_user)
+ return user_id, user_token
def sign_in_user(email: str, password: str) -> str:
diff --git a/backend/medtagger/api/auth/service.py b/backend/medtagger/api/auth/service.py
--- a/backend/medtagger/api/auth/service.py
+++ b/backend/medtagger/api/auth/service.py
@@ -21,8 +21,8 @@
def post() -> Any:
"""Register the user."""
user = request.json
- user_id = create_user(user['email'], user['password'], user['firstName'], user['lastName'])
- return {'id': user_id}, 201
+ user_id, user_token = create_user(user['email'], user['password'], user['firstName'], user['lastName'])
+ return {'id': user_id, 'token': user_token}, 201
@auth_ns.route('/sign-in')
| {"golden_diff": "diff --git a/backend/medtagger/api/auth/business.py b/backend/medtagger/api/auth/business.py\n--- a/backend/medtagger/api/auth/business.py\n+++ b/backend/medtagger/api/auth/business.py\n@@ -1,11 +1,12 @@\n \"\"\"Module responsible for business logic in all Auth endpoint.\"\"\"\n+from typing import Tuple\n from medtagger.api import InvalidArgumentsException\n from medtagger.api.security import hash_password, verify_user_password, generate_auth_token\n from medtagger.database.models import User\n from medtagger.repositories import roles as RolesRepository, users as UsersRepository\n \n \n-def create_user(email: str, password: str, first_name: str, last_name: str) -> int:\n+def create_user(email: str, password: str, first_name: str, last_name: str) -> Tuple[int, str]:\n \"\"\"Create user with the given user information. Password is being hashed.\n \n :param email: user email in string format\n@@ -13,7 +14,7 @@\n :param first_name: user first name in string format\n :param last_name: user last name in string format\n \n- :return: id of the new user\n+ :return: tuple with user id and authentication token\n \"\"\"\n user = UsersRepository.get_user_by_email(email)\n if user:\n@@ -24,7 +25,9 @@\n if not role:\n raise InvalidArgumentsException('Role does not exist.')\n new_user.roles.append(role)\n- return UsersRepository.add_new_user(new_user)\n+ user_id = UsersRepository.add_new_user(new_user)\n+ user_token = generate_auth_token(new_user)\n+ return user_id, user_token\n \n \n def sign_in_user(email: str, password: str) -> str:\ndiff --git a/backend/medtagger/api/auth/service.py b/backend/medtagger/api/auth/service.py\n--- a/backend/medtagger/api/auth/service.py\n+++ b/backend/medtagger/api/auth/service.py\n@@ -21,8 +21,8 @@\n def post() -> Any:\n \"\"\"Register the user.\"\"\"\n user = request.json\n- user_id = create_user(user['email'], user['password'], user['firstName'], user['lastName'])\n- return {'id': user_id}, 201\n+ user_id, user_token = create_user(user['email'], user['password'], user['firstName'], user['lastName'])\n+ return {'id': user_id, 'token': user_token}, 201\n \n \n @auth_ns.route('/sign-in')\n", "issue": "Log in user after registration\n## Current Behavior\r\n\r\nUser needs to log in after first registration.\r\n\r\n## Expected Behavior\r\n\r\nUser should be logged into MedTagger right after filling registration form.\r\n\r\n## Steps to Reproduce the Problem\r\n\r\n 1. Register new user.\r\n 2. You will be redirected to the login page.\r\n 3. Type your login once again...\r\n\n", "before_files": [{"content": "\"\"\"Module responsible for business logic in all Auth endpoint.\"\"\"\nfrom medtagger.api import InvalidArgumentsException\nfrom medtagger.api.security import hash_password, verify_user_password, generate_auth_token\nfrom medtagger.database.models import User\nfrom medtagger.repositories import roles as RolesRepository, users as UsersRepository\n\n\ndef create_user(email: str, password: str, first_name: str, last_name: str) -> int:\n \"\"\"Create user with the given user information. 
Password is being hashed.\n\n :param email: user email in string format\n :param password: user password in string format\n :param first_name: user first name in string format\n :param last_name: user last name in string format\n\n :return: id of the new user\n \"\"\"\n user = UsersRepository.get_user_by_email(email)\n if user:\n raise InvalidArgumentsException('User with this email already exists')\n password_hash = hash_password(password)\n new_user = User(email, password_hash, first_name, last_name)\n role = RolesRepository.get_role_with_name('volunteer')\n if not role:\n raise InvalidArgumentsException('Role does not exist.')\n new_user.roles.append(role)\n return UsersRepository.add_new_user(new_user)\n\n\ndef sign_in_user(email: str, password: str) -> str:\n \"\"\"Sign in user using given username and password.\n\n :param email: user email in string format\n :param password: user password in string format\n\n :return: authentication token\n \"\"\"\n user = UsersRepository.get_user_by_email(email)\n if not user:\n raise InvalidArgumentsException('User does not exist.')\n if not verify_user_password(user, password):\n raise InvalidArgumentsException('Password does not match.')\n return generate_auth_token(user)\n", "path": "backend/medtagger/api/auth/business.py"}, {"content": "\"\"\"Module responsible for definition of Auth service.\"\"\"\nfrom typing import Any\n\nfrom flask import request\nfrom flask_restplus import Resource\n\nfrom medtagger.api import api\nfrom medtagger.api.auth.business import create_user, sign_in_user\nfrom medtagger.api.auth import serializers\n\nauth_ns = api.namespace('auth', 'Auth methods')\n\n\n@auth_ns.route('/register')\nclass Register(Resource):\n \"\"\"Register user endpoint.\"\"\"\n\n @staticmethod\n @api.expect(serializers.new_user)\n @api.doc(responses={201: 'User created', 400: 'Invalid arguments'})\n def post() -> Any:\n \"\"\"Register the user.\"\"\"\n user = request.json\n user_id = create_user(user['email'], user['password'], user['firstName'], user['lastName'])\n return {'id': user_id}, 201\n\n\n@auth_ns.route('/sign-in')\nclass SignIn(Resource):\n \"\"\"Sign in endpoint.\"\"\"\n\n @staticmethod\n @api.expect(serializers.sign_in)\n @api.doc(responses={200: 'Signed in', 400: 'User does not exist or wrong password was provided'})\n def post() -> Any:\n \"\"\"Sign in the user.\"\"\"\n sign_in = request.json\n token = sign_in_user(sign_in['email'], sign_in['password'])\n return {\"token\": token}, 200\n", "path": "backend/medtagger/api/auth/service.py"}], "after_files": [{"content": "\"\"\"Module responsible for business logic in all Auth endpoint.\"\"\"\nfrom typing import Tuple\nfrom medtagger.api import InvalidArgumentsException\nfrom medtagger.api.security import hash_password, verify_user_password, generate_auth_token\nfrom medtagger.database.models import User\nfrom medtagger.repositories import roles as RolesRepository, users as UsersRepository\n\n\ndef create_user(email: str, password: str, first_name: str, last_name: str) -> Tuple[int, str]:\n \"\"\"Create user with the given user information. 
Password is being hashed.\n\n :param email: user email in string format\n :param password: user password in string format\n :param first_name: user first name in string format\n :param last_name: user last name in string format\n\n :return: tuple with user id and authentication token\n \"\"\"\n user = UsersRepository.get_user_by_email(email)\n if user:\n raise InvalidArgumentsException('User with this email already exists')\n password_hash = hash_password(password)\n new_user = User(email, password_hash, first_name, last_name)\n role = RolesRepository.get_role_with_name('volunteer')\n if not role:\n raise InvalidArgumentsException('Role does not exist.')\n new_user.roles.append(role)\n user_id = UsersRepository.add_new_user(new_user)\n user_token = generate_auth_token(new_user)\n return user_id, user_token\n\n\ndef sign_in_user(email: str, password: str) -> str:\n \"\"\"Sign in user using given username and password.\n\n :param email: user email in string format\n :param password: user password in string format\n\n :return: authentication token\n \"\"\"\n user = UsersRepository.get_user_by_email(email)\n if not user:\n raise InvalidArgumentsException('User does not exist.')\n if not verify_user_password(user, password):\n raise InvalidArgumentsException('Password does not match.')\n return generate_auth_token(user)\n", "path": "backend/medtagger/api/auth/business.py"}, {"content": "\"\"\"Module responsible for definition of Auth service.\"\"\"\nfrom typing import Any\n\nfrom flask import request\nfrom flask_restplus import Resource\n\nfrom medtagger.api import api\nfrom medtagger.api.auth.business import create_user, sign_in_user\nfrom medtagger.api.auth import serializers\n\nauth_ns = api.namespace('auth', 'Auth methods')\n\n\n@auth_ns.route('/register')\nclass Register(Resource):\n \"\"\"Register user endpoint.\"\"\"\n\n @staticmethod\n @api.expect(serializers.new_user)\n @api.doc(responses={201: 'User created', 400: 'Invalid arguments'})\n def post() -> Any:\n \"\"\"Register the user.\"\"\"\n user = request.json\n user_id, user_token = create_user(user['email'], user['password'], user['firstName'], user['lastName'])\n return {'id': user_id, 'token': user_token}, 201\n\n\n@auth_ns.route('/sign-in')\nclass SignIn(Resource):\n \"\"\"Sign in endpoint.\"\"\"\n\n @staticmethod\n @api.expect(serializers.sign_in)\n @api.doc(responses={200: 'Signed in', 400: 'User does not exist or wrong password was provided'})\n def post() -> Any:\n \"\"\"Sign in the user.\"\"\"\n sign_in = request.json\n token = sign_in_user(sign_in['email'], sign_in['password'])\n return {\"token\": token}, 200\n", "path": "backend/medtagger/api/auth/service.py"}]} | 1,188 | 555 |
gh_patches_debug_23773 | rasdani/github-patches | git_diff | mirumee__ariadne-481 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unexpected Snake Case for Acronyms
The snake case conversion of the `snake_case_fallback_resolvers` yields unexpected results for words with multiple uppercase letters in a row, e.g.
- `getHTTPResponse` is converted to `get_h_t_t_p_response`, or
- `externalID` is converted to `external_i_d`.
These are unlikely names for python attributes and I would expect the resolver to look for `get_http_response` / `external_id` instead.
Possible implementations for the camel to snake case conversions are discussed here: https://stackoverflow.com/questions/1175208/elegant-python-function-to-convert-camelcase-to-snake-case
--- END ISSUE ---
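One of the regex-based conversions discussed in the linked Stack Overflow thread, shown here purely to illustrate the expected output (it is not necessarily the variant Ariadne adopted): insert an underscore before capitals while treating runs of capitals such as "HTTP" as a single word.
```python
# Illustrative converter; not Ariadne's implementation.
import re


def to_snake_case(name: str) -> str:
    name = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", name)   # getHTTPResponse -> getHTTP_Response
    name = re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", name)   # getHTTP_Response -> get_HTTP_Response
    return name.lower()


if __name__ == "__main__":
    assert to_snake_case("getHTTPResponse") == "get_http_response"
    assert to_snake_case("externalID") == "external_id"
    print(to_snake_case("getHTTPResponse"), to_snake_case("externalID"))
```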
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ariadne/utils.py`
Content:
```
1 import asyncio
2 from functools import wraps
3 from typing import Optional, Union, Callable, Dict, Any
4
5 from graphql import GraphQLError, parse
6
7
8 def convert_camel_case_to_snake(graphql_name: str) -> str:
9 python_name = ""
10 for i, c in enumerate(graphql_name.lower()):
11 if (
12 i > 0
13 and (
14 all(
15 (
16 c != graphql_name[i],
17 graphql_name[i - 1] != "_",
18 graphql_name[i - 1] == python_name[-1],
19 )
20 )
21 )
22 or all((c.isdigit(), graphql_name[i - 1].isdigit() is False))
23 ):
24 python_name += "_"
25 python_name += c
26 return python_name
27
28
29 def gql(value: str) -> str:
30 parse(value)
31 return value
32
33
34 def unwrap_graphql_error(
35 error: Union[GraphQLError, Optional[Exception]]
36 ) -> Optional[Exception]:
37 if isinstance(error, GraphQLError):
38 return unwrap_graphql_error(error.original_error)
39 return error
40
41
42 def convert_kwargs_to_snake_case(func: Callable) -> Callable:
43 def convert_to_snake_case(d: Dict) -> Dict:
44 converted: Dict = {}
45 for k, v in d.items():
46 if isinstance(v, dict):
47 v = convert_to_snake_case(v)
48 if isinstance(v, list):
49 v = [convert_to_snake_case(i) if isinstance(i, dict) else i for i in v]
50 converted[convert_camel_case_to_snake(k)] = v
51 return converted
52
53 if asyncio.iscoroutinefunction(func):
54
55 @wraps(func)
56 async def async_wrapper(*args: Any, **kwargs: Any) -> Any:
57 return await func(*args, **convert_to_snake_case(kwargs))
58
59 return async_wrapper
60
61 @wraps(func)
62 def wrapper(*args: Any, **kwargs: Any) -> Any:
63 return func(*args, **convert_to_snake_case(kwargs))
64
65 return wrapper
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ariadne/utils.py b/ariadne/utils.py
--- a/ariadne/utils.py
+++ b/ariadne/utils.py
@@ -6,20 +6,29 @@
def convert_camel_case_to_snake(graphql_name: str) -> str:
+ # pylint: disable=too-many-boolean-expressions
+ max_index = len(graphql_name) - 1
+ lowered_name = graphql_name.lower()
+
python_name = ""
- for i, c in enumerate(graphql_name.lower()):
- if (
- i > 0
- and (
- all(
- (
- c != graphql_name[i],
- graphql_name[i - 1] != "_",
- graphql_name[i - 1] == python_name[-1],
- )
- )
+ for i, c in enumerate(lowered_name):
+ if i > 0 and (
+ # testWord -> test_word
+ (
+ c != graphql_name[i]
+ and graphql_name[i - 1] != "_"
+ and graphql_name[i - 1] == python_name[-1]
+ )
+ # TESTWord -> test_word
+ or (
+ i < max_index
+ and graphql_name[i] != lowered_name[i]
+ and graphql_name[i + 1] == lowered_name[i + 1]
)
- or all((c.isdigit(), graphql_name[i - 1].isdigit() is False))
+ # test134 -> test_134
+ or (c.isdigit() and not graphql_name[i - 1].isdigit())
+ # 134test -> 134_test
+ or (not c.isdigit() and graphql_name[i - 1].isdigit())
):
python_name += "_"
python_name += c
| {"golden_diff": "diff --git a/ariadne/utils.py b/ariadne/utils.py\n--- a/ariadne/utils.py\n+++ b/ariadne/utils.py\n@@ -6,20 +6,29 @@\n \n \n def convert_camel_case_to_snake(graphql_name: str) -> str:\n+ # pylint: disable=too-many-boolean-expressions\n+ max_index = len(graphql_name) - 1\n+ lowered_name = graphql_name.lower()\n+\n python_name = \"\"\n- for i, c in enumerate(graphql_name.lower()):\n- if (\n- i > 0\n- and (\n- all(\n- (\n- c != graphql_name[i],\n- graphql_name[i - 1] != \"_\",\n- graphql_name[i - 1] == python_name[-1],\n- )\n- )\n+ for i, c in enumerate(lowered_name):\n+ if i > 0 and (\n+ # testWord -> test_word\n+ (\n+ c != graphql_name[i]\n+ and graphql_name[i - 1] != \"_\"\n+ and graphql_name[i - 1] == python_name[-1]\n+ )\n+ # TESTWord -> test_word\n+ or (\n+ i < max_index\n+ and graphql_name[i] != lowered_name[i]\n+ and graphql_name[i + 1] == lowered_name[i + 1]\n )\n- or all((c.isdigit(), graphql_name[i - 1].isdigit() is False))\n+ # test134 -> test_134\n+ or (c.isdigit() and not graphql_name[i - 1].isdigit())\n+ # 134test -> 134_test\n+ or (not c.isdigit() and graphql_name[i - 1].isdigit())\n ):\n python_name += \"_\"\n python_name += c\n", "issue": "Unexpected Snake Case for Acronyms\nThe snake case conversion of the `snake_case_fallback_resolvers` yields unexpected results for words with multiple uppercase letters in a row, e.g.\r\n - `getHTTPResponse` is converted to `get_h_t_t_p_response`, or\r\n - `externalID` is converted to \"external_i_d`. \r\n\r\nThese are unlikely names for python attributes and I would expect the resolver to look for `get_http_response` / `external_id` instead.\r\n\r\nPossible implementations for the camel to snake case conversions are discussed here: https://stackoverflow.com/questions/1175208/elegant-python-function-to-convert-camelcase-to-snake-case\nUnexpected Snake Case for Acronyms\nThe snake case conversion of the `snake_case_fallback_resolvers` yields unexpected results for words with multiple uppercase letters in a row, e.g.\r\n - `getHTTPResponse` is converted to `get_h_t_t_p_response`, or\r\n - `externalID` is converted to \"external_i_d`. 
\r\n\r\nThese are unlikely names for python attributes and I would expect the resolver to look for `get_http_response` / `external_id` instead.\r\n\r\nPossible implementations for the camel to snake case conversions are discussed here: https://stackoverflow.com/questions/1175208/elegant-python-function-to-convert-camelcase-to-snake-case\n", "before_files": [{"content": "import asyncio\nfrom functools import wraps\nfrom typing import Optional, Union, Callable, Dict, Any\n\nfrom graphql import GraphQLError, parse\n\n\ndef convert_camel_case_to_snake(graphql_name: str) -> str:\n python_name = \"\"\n for i, c in enumerate(graphql_name.lower()):\n if (\n i > 0\n and (\n all(\n (\n c != graphql_name[i],\n graphql_name[i - 1] != \"_\",\n graphql_name[i - 1] == python_name[-1],\n )\n )\n )\n or all((c.isdigit(), graphql_name[i - 1].isdigit() is False))\n ):\n python_name += \"_\"\n python_name += c\n return python_name\n\n\ndef gql(value: str) -> str:\n parse(value)\n return value\n\n\ndef unwrap_graphql_error(\n error: Union[GraphQLError, Optional[Exception]]\n) -> Optional[Exception]:\n if isinstance(error, GraphQLError):\n return unwrap_graphql_error(error.original_error)\n return error\n\n\ndef convert_kwargs_to_snake_case(func: Callable) -> Callable:\n def convert_to_snake_case(d: Dict) -> Dict:\n converted: Dict = {}\n for k, v in d.items():\n if isinstance(v, dict):\n v = convert_to_snake_case(v)\n if isinstance(v, list):\n v = [convert_to_snake_case(i) if isinstance(i, dict) else i for i in v]\n converted[convert_camel_case_to_snake(k)] = v\n return converted\n\n if asyncio.iscoroutinefunction(func):\n\n @wraps(func)\n async def async_wrapper(*args: Any, **kwargs: Any) -> Any:\n return await func(*args, **convert_to_snake_case(kwargs))\n\n return async_wrapper\n\n @wraps(func)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n return func(*args, **convert_to_snake_case(kwargs))\n\n return wrapper\n", "path": "ariadne/utils.py"}], "after_files": [{"content": "import asyncio\nfrom functools import wraps\nfrom typing import Optional, Union, Callable, Dict, Any\n\nfrom graphql import GraphQLError, parse\n\n\ndef convert_camel_case_to_snake(graphql_name: str) -> str:\n # pylint: disable=too-many-boolean-expressions\n max_index = len(graphql_name) - 1\n lowered_name = graphql_name.lower()\n\n python_name = \"\"\n for i, c in enumerate(lowered_name):\n if i > 0 and (\n # testWord -> test_word\n (\n c != graphql_name[i]\n and graphql_name[i - 1] != \"_\"\n and graphql_name[i - 1] == python_name[-1]\n )\n # TESTWord -> test_word\n or (\n i < max_index\n and graphql_name[i] != lowered_name[i]\n and graphql_name[i + 1] == lowered_name[i + 1]\n )\n # test134 -> test_134\n or (c.isdigit() and not graphql_name[i - 1].isdigit())\n # 134test -> 134_test\n or (not c.isdigit() and graphql_name[i - 1].isdigit())\n ):\n python_name += \"_\"\n python_name += c\n return python_name\n\n\ndef gql(value: str) -> str:\n parse(value)\n return value\n\n\ndef unwrap_graphql_error(\n error: Union[GraphQLError, Optional[Exception]]\n) -> Optional[Exception]:\n if isinstance(error, GraphQLError):\n return unwrap_graphql_error(error.original_error)\n return error\n\n\ndef convert_kwargs_to_snake_case(func: Callable) -> Callable:\n def convert_to_snake_case(d: Dict) -> Dict:\n converted: Dict = {}\n for k, v in d.items():\n if isinstance(v, dict):\n v = convert_to_snake_case(v)\n if isinstance(v, list):\n v = [convert_to_snake_case(i) if isinstance(i, dict) else i for i in v]\n converted[convert_camel_case_to_snake(k)] = 
v\n return converted\n\n if asyncio.iscoroutinefunction(func):\n\n @wraps(func)\n async def async_wrapper(*args: Any, **kwargs: Any) -> Any:\n return await func(*args, **convert_to_snake_case(kwargs))\n\n return async_wrapper\n\n @wraps(func)\n def wrapper(*args: Any, **kwargs: Any) -> Any:\n return func(*args, **convert_to_snake_case(kwargs))\n\n return wrapper\n", "path": "ariadne/utils.py"}]} | 1,101 | 409 |
gh_patches_debug_23742 | rasdani/github-patches | git_diff | python-discord__bot-1556 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use embed timestamp in mod pings off
When a mod turns off mod pings, a confirmation is sent to inform the user that their pings have successfully been turned off.
In this confirmation, we currently include the time at which the role is due to be re-applied, and this time is in UTC.
I propose we refactor this part of the code to instead use an Embed with the timestamp field.
https://github.com/python-discord/bot/blob/ce819ade482e82ecbc474bce5fb8ac9dd8b37b40/bot/exts/moderation/modpings.py#L107
This would mean that the time would automatically get converted to the user's current time zone by Discord.
--- END ISSUE ---
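A minimal sketch of the suggested confirmation, assuming discord.py's public `Embed` API; the helper name, colour and footer text are placeholders rather than the bot's actual constants. Putting the expiry into `Embed(timestamp=...)` is what lets Discord localise the time for each viewer.
```python
# Illustrative only; helper name and styling are assumptions.
import datetime

import discord


def pings_off_confirmation(expiry: datetime.datetime) -> discord.Embed:
    # Discord renders the embed timestamp in the viewer's local time zone.
    embed = discord.Embed(colour=discord.Colour.green(), timestamp=expiry)
    embed.set_footer(text="Moderators role has been removed until")
    return embed
```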
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bot/exts/moderation/modpings.py`
Content:
```
1 import datetime
2 import logging
3
4 from async_rediscache import RedisCache
5 from dateutil.parser import isoparse
6 from discord import Member
7 from discord.ext.commands import Cog, Context, group, has_any_role
8
9 from bot.bot import Bot
10 from bot.constants import Emojis, Guild, MODERATION_ROLES, Roles
11 from bot.converters import Expiry
12 from bot.utils.scheduling import Scheduler
13
14 log = logging.getLogger(__name__)
15
16
17 class ModPings(Cog):
18 """Commands for a moderator to turn moderator pings on and off."""
19
20 # RedisCache[discord.Member.id, 'Naïve ISO 8601 string']
21 # The cache's keys are mods who have pings off.
22 # The cache's values are the times when the role should be re-applied to them, stored in ISO format.
23 pings_off_mods = RedisCache()
24
25 def __init__(self, bot: Bot):
26 self.bot = bot
27 self._role_scheduler = Scheduler(self.__class__.__name__)
28
29 self.guild = None
30 self.moderators_role = None
31
32 self.reschedule_task = self.bot.loop.create_task(self.reschedule_roles(), name="mod-pings-reschedule")
33
34 async def reschedule_roles(self) -> None:
35 """Reschedule moderators role re-apply times."""
36 await self.bot.wait_until_guild_available()
37 self.guild = self.bot.get_guild(Guild.id)
38 self.moderators_role = self.guild.get_role(Roles.moderators)
39
40 mod_team = self.guild.get_role(Roles.mod_team)
41 pings_on = self.moderators_role.members
42 pings_off = await self.pings_off_mods.to_dict()
43
44 log.trace("Applying the moderators role to the mod team where necessary.")
45 for mod in mod_team.members:
46 if mod in pings_on: # Make sure that on-duty mods aren't in the cache.
47 if mod in pings_off:
48 await self.pings_off_mods.delete(mod.id)
49 continue
50
51 # Keep the role off only for those in the cache.
52 if mod.id not in pings_off:
53 await self.reapply_role(mod)
54 else:
55 expiry = isoparse(pings_off[mod.id]).replace(tzinfo=None)
56 self._role_scheduler.schedule_at(expiry, mod.id, self.reapply_role(mod))
57
58 async def reapply_role(self, mod: Member) -> None:
59 """Reapply the moderator's role to the given moderator."""
60 log.trace(f"Re-applying role to mod with ID {mod.id}.")
61 await mod.add_roles(self.moderators_role, reason="Pings off period expired.")
62
63 @group(name='modpings', aliases=('modping',), invoke_without_command=True)
64 @has_any_role(*MODERATION_ROLES)
65 async def modpings_group(self, ctx: Context) -> None:
66 """Allow the removal and re-addition of the pingable moderators role."""
67 await ctx.send_help(ctx.command)
68
69 @modpings_group.command(name='off')
70 @has_any_role(*MODERATION_ROLES)
71 async def off_command(self, ctx: Context, duration: Expiry) -> None:
72 """
73 Temporarily removes the pingable moderators role for a set amount of time.
74
75 A unit of time should be appended to the duration.
76 Units (∗case-sensitive):
77 \u2003`y` - years
78 \u2003`m` - months∗
79 \u2003`w` - weeks
80 \u2003`d` - days
81 \u2003`h` - hours
82 \u2003`M` - minutes∗
83 \u2003`s` - seconds
84
85 Alternatively, an ISO 8601 timestamp can be provided for the duration.
86
87 The duration cannot be longer than 30 days.
88 """
89 duration: datetime.datetime
90 delta = duration - datetime.datetime.utcnow()
91 if delta > datetime.timedelta(days=30):
92 await ctx.send(":x: Cannot remove the role for longer than 30 days.")
93 return
94
95 mod = ctx.author
96
97 until_date = duration.replace(microsecond=0).isoformat() # Looks noisy with microseconds.
98 await mod.remove_roles(self.moderators_role, reason=f"Turned pings off until {until_date}.")
99
100 await self.pings_off_mods.set(mod.id, duration.isoformat())
101
102 # Allow rescheduling the task without cancelling it separately via the `on` command.
103 if mod.id in self._role_scheduler:
104 self._role_scheduler.cancel(mod.id)
105 self._role_scheduler.schedule_at(duration, mod.id, self.reapply_role(mod))
106
107 await ctx.send(f"{Emojis.check_mark} Moderators role has been removed until {until_date}.")
108
109 @modpings_group.command(name='on')
110 @has_any_role(*MODERATION_ROLES)
111 async def on_command(self, ctx: Context) -> None:
112 """Re-apply the pingable moderators role."""
113 mod = ctx.author
114 if mod in self.moderators_role.members:
115 await ctx.send(":question: You already have the role.")
116 return
117
118 await mod.add_roles(self.moderators_role, reason="Pings off period canceled.")
119
120 await self.pings_off_mods.delete(mod.id)
121
122 # We assume the task exists. Lack of it may indicate a bug.
123 self._role_scheduler.cancel(mod.id)
124
125 await ctx.send(f"{Emojis.check_mark} Moderators role has been re-applied.")
126
127 def cog_unload(self) -> None:
128 """Cancel role tasks when the cog unloads."""
129 log.trace("Cog unload: canceling role tasks.")
130 self.reschedule_task.cancel()
131 self._role_scheduler.cancel_all()
132
133
134 def setup(bot: Bot) -> None:
135 """Load the ModPings cog."""
136 bot.add_cog(ModPings(bot))
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bot/exts/moderation/modpings.py b/bot/exts/moderation/modpings.py
--- a/bot/exts/moderation/modpings.py
+++ b/bot/exts/moderation/modpings.py
@@ -3,11 +3,11 @@
from async_rediscache import RedisCache
from dateutil.parser import isoparse
-from discord import Member
+from discord import Embed, Member
from discord.ext.commands import Cog, Context, group, has_any_role
from bot.bot import Bot
-from bot.constants import Emojis, Guild, MODERATION_ROLES, Roles
+from bot.constants import Colours, Emojis, Guild, Icons, MODERATION_ROLES, Roles
from bot.converters import Expiry
from bot.utils.scheduling import Scheduler
@@ -104,7 +104,9 @@
self._role_scheduler.cancel(mod.id)
self._role_scheduler.schedule_at(duration, mod.id, self.reapply_role(mod))
- await ctx.send(f"{Emojis.check_mark} Moderators role has been removed until {until_date}.")
+ embed = Embed(timestamp=duration, colour=Colours.bright_green)
+ embed.set_footer(text="Moderators role has been removed until", icon_url=Icons.green_checkmark)
+ await ctx.send(embed=embed)
@modpings_group.command(name='on')
@has_any_role(*MODERATION_ROLES)
| {"golden_diff": "diff --git a/bot/exts/moderation/modpings.py b/bot/exts/moderation/modpings.py\n--- a/bot/exts/moderation/modpings.py\n+++ b/bot/exts/moderation/modpings.py\n@@ -3,11 +3,11 @@\n \n from async_rediscache import RedisCache\n from dateutil.parser import isoparse\n-from discord import Member\n+from discord import Embed, Member\n from discord.ext.commands import Cog, Context, group, has_any_role\n \n from bot.bot import Bot\n-from bot.constants import Emojis, Guild, MODERATION_ROLES, Roles\n+from bot.constants import Colours, Emojis, Guild, Icons, MODERATION_ROLES, Roles\n from bot.converters import Expiry\n from bot.utils.scheduling import Scheduler\n \n@@ -104,7 +104,9 @@\n self._role_scheduler.cancel(mod.id)\n self._role_scheduler.schedule_at(duration, mod.id, self.reapply_role(mod))\n \n- await ctx.send(f\"{Emojis.check_mark} Moderators role has been removed until {until_date}.\")\n+ embed = Embed(timestamp=duration, colour=Colours.bright_green)\n+ embed.set_footer(text=\"Moderators role has been removed until\", icon_url=Icons.green_checkmark)\n+ await ctx.send(embed=embed)\n \n @modpings_group.command(name='on')\n @has_any_role(*MODERATION_ROLES)\n", "issue": "Use embed timestamp in mod pings off\nWhen a mods turns off mod pings, a confirmation is sent to inform the user that their pings have successfully been turned off.\r\n\r\nIn this confirmation, we currently include the time at which it is due to be sent, this time is in UTC.\r\n\r\nI propose we refactor this part of the code to instead use an Embed, with a the timestamp field.\r\nhttps://github.com/python-discord/bot/blob/ce819ade482e82ecbc474bce5fb8ac9dd8b37b40/bot/exts/moderation/modpings.py#L107\r\nThis would mean that the time would automatically get converted to the user's current time zone by Discord.\n", "before_files": [{"content": "import datetime\nimport logging\n\nfrom async_rediscache import RedisCache\nfrom dateutil.parser import isoparse\nfrom discord import Member\nfrom discord.ext.commands import Cog, Context, group, has_any_role\n\nfrom bot.bot import Bot\nfrom bot.constants import Emojis, Guild, MODERATION_ROLES, Roles\nfrom bot.converters import Expiry\nfrom bot.utils.scheduling import Scheduler\n\nlog = logging.getLogger(__name__)\n\n\nclass ModPings(Cog):\n \"\"\"Commands for a moderator to turn moderator pings on and off.\"\"\"\n\n # RedisCache[discord.Member.id, 'Na\u00efve ISO 8601 string']\n # The cache's keys are mods who have pings off.\n # The cache's values are the times when the role should be re-applied to them, stored in ISO format.\n pings_off_mods = RedisCache()\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self._role_scheduler = Scheduler(self.__class__.__name__)\n\n self.guild = None\n self.moderators_role = None\n\n self.reschedule_task = self.bot.loop.create_task(self.reschedule_roles(), name=\"mod-pings-reschedule\")\n\n async def reschedule_roles(self) -> None:\n \"\"\"Reschedule moderators role re-apply times.\"\"\"\n await self.bot.wait_until_guild_available()\n self.guild = self.bot.get_guild(Guild.id)\n self.moderators_role = self.guild.get_role(Roles.moderators)\n\n mod_team = self.guild.get_role(Roles.mod_team)\n pings_on = self.moderators_role.members\n pings_off = await self.pings_off_mods.to_dict()\n\n log.trace(\"Applying the moderators role to the mod team where necessary.\")\n for mod in mod_team.members:\n if mod in pings_on: # Make sure that on-duty mods aren't in the cache.\n if mod in pings_off:\n await self.pings_off_mods.delete(mod.id)\n continue\n\n 
# Keep the role off only for those in the cache.\n if mod.id not in pings_off:\n await self.reapply_role(mod)\n else:\n expiry = isoparse(pings_off[mod.id]).replace(tzinfo=None)\n self._role_scheduler.schedule_at(expiry, mod.id, self.reapply_role(mod))\n\n async def reapply_role(self, mod: Member) -> None:\n \"\"\"Reapply the moderator's role to the given moderator.\"\"\"\n log.trace(f\"Re-applying role to mod with ID {mod.id}.\")\n await mod.add_roles(self.moderators_role, reason=\"Pings off period expired.\")\n\n @group(name='modpings', aliases=('modping',), invoke_without_command=True)\n @has_any_role(*MODERATION_ROLES)\n async def modpings_group(self, ctx: Context) -> None:\n \"\"\"Allow the removal and re-addition of the pingable moderators role.\"\"\"\n await ctx.send_help(ctx.command)\n\n @modpings_group.command(name='off')\n @has_any_role(*MODERATION_ROLES)\n async def off_command(self, ctx: Context, duration: Expiry) -> None:\n \"\"\"\n Temporarily removes the pingable moderators role for a set amount of time.\n\n A unit of time should be appended to the duration.\n Units (\u2217case-sensitive):\n \\u2003`y` - years\n \\u2003`m` - months\u2217\n \\u2003`w` - weeks\n \\u2003`d` - days\n \\u2003`h` - hours\n \\u2003`M` - minutes\u2217\n \\u2003`s` - seconds\n\n Alternatively, an ISO 8601 timestamp can be provided for the duration.\n\n The duration cannot be longer than 30 days.\n \"\"\"\n duration: datetime.datetime\n delta = duration - datetime.datetime.utcnow()\n if delta > datetime.timedelta(days=30):\n await ctx.send(\":x: Cannot remove the role for longer than 30 days.\")\n return\n\n mod = ctx.author\n\n until_date = duration.replace(microsecond=0).isoformat() # Looks noisy with microseconds.\n await mod.remove_roles(self.moderators_role, reason=f\"Turned pings off until {until_date}.\")\n\n await self.pings_off_mods.set(mod.id, duration.isoformat())\n\n # Allow rescheduling the task without cancelling it separately via the `on` command.\n if mod.id in self._role_scheduler:\n self._role_scheduler.cancel(mod.id)\n self._role_scheduler.schedule_at(duration, mod.id, self.reapply_role(mod))\n\n await ctx.send(f\"{Emojis.check_mark} Moderators role has been removed until {until_date}.\")\n\n @modpings_group.command(name='on')\n @has_any_role(*MODERATION_ROLES)\n async def on_command(self, ctx: Context) -> None:\n \"\"\"Re-apply the pingable moderators role.\"\"\"\n mod = ctx.author\n if mod in self.moderators_role.members:\n await ctx.send(\":question: You already have the role.\")\n return\n\n await mod.add_roles(self.moderators_role, reason=\"Pings off period canceled.\")\n\n await self.pings_off_mods.delete(mod.id)\n\n # We assume the task exists. 
Lack of it may indicate a bug.\n self._role_scheduler.cancel(mod.id)\n\n await ctx.send(f\"{Emojis.check_mark} Moderators role has been re-applied.\")\n\n def cog_unload(self) -> None:\n \"\"\"Cancel role tasks when the cog unloads.\"\"\"\n log.trace(\"Cog unload: canceling role tasks.\")\n self.reschedule_task.cancel()\n self._role_scheduler.cancel_all()\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the ModPings cog.\"\"\"\n bot.add_cog(ModPings(bot))\n", "path": "bot/exts/moderation/modpings.py"}], "after_files": [{"content": "import datetime\nimport logging\n\nfrom async_rediscache import RedisCache\nfrom dateutil.parser import isoparse\nfrom discord import Embed, Member\nfrom discord.ext.commands import Cog, Context, group, has_any_role\n\nfrom bot.bot import Bot\nfrom bot.constants import Colours, Emojis, Guild, Icons, MODERATION_ROLES, Roles\nfrom bot.converters import Expiry\nfrom bot.utils.scheduling import Scheduler\n\nlog = logging.getLogger(__name__)\n\n\nclass ModPings(Cog):\n \"\"\"Commands for a moderator to turn moderator pings on and off.\"\"\"\n\n # RedisCache[discord.Member.id, 'Na\u00efve ISO 8601 string']\n # The cache's keys are mods who have pings off.\n # The cache's values are the times when the role should be re-applied to them, stored in ISO format.\n pings_off_mods = RedisCache()\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self._role_scheduler = Scheduler(self.__class__.__name__)\n\n self.guild = None\n self.moderators_role = None\n\n self.reschedule_task = self.bot.loop.create_task(self.reschedule_roles(), name=\"mod-pings-reschedule\")\n\n async def reschedule_roles(self) -> None:\n \"\"\"Reschedule moderators role re-apply times.\"\"\"\n await self.bot.wait_until_guild_available()\n self.guild = self.bot.get_guild(Guild.id)\n self.moderators_role = self.guild.get_role(Roles.moderators)\n\n mod_team = self.guild.get_role(Roles.mod_team)\n pings_on = self.moderators_role.members\n pings_off = await self.pings_off_mods.to_dict()\n\n log.trace(\"Applying the moderators role to the mod team where necessary.\")\n for mod in mod_team.members:\n if mod in pings_on: # Make sure that on-duty mods aren't in the cache.\n if mod in pings_off:\n await self.pings_off_mods.delete(mod.id)\n continue\n\n # Keep the role off only for those in the cache.\n if mod.id not in pings_off:\n await self.reapply_role(mod)\n else:\n expiry = isoparse(pings_off[mod.id]).replace(tzinfo=None)\n self._role_scheduler.schedule_at(expiry, mod.id, self.reapply_role(mod))\n\n async def reapply_role(self, mod: Member) -> None:\n \"\"\"Reapply the moderator's role to the given moderator.\"\"\"\n log.trace(f\"Re-applying role to mod with ID {mod.id}.\")\n await mod.add_roles(self.moderators_role, reason=\"Pings off period expired.\")\n\n @group(name='modpings', aliases=('modping',), invoke_without_command=True)\n @has_any_role(*MODERATION_ROLES)\n async def modpings_group(self, ctx: Context) -> None:\n \"\"\"Allow the removal and re-addition of the pingable moderators role.\"\"\"\n await ctx.send_help(ctx.command)\n\n @modpings_group.command(name='off')\n @has_any_role(*MODERATION_ROLES)\n async def off_command(self, ctx: Context, duration: Expiry) -> None:\n \"\"\"\n Temporarily removes the pingable moderators role for a set amount of time.\n\n A unit of time should be appended to the duration.\n Units (\u2217case-sensitive):\n \\u2003`y` - years\n \\u2003`m` - months\u2217\n \\u2003`w` - weeks\n \\u2003`d` - days\n \\u2003`h` - hours\n \\u2003`M` - minutes\u2217\n \\u2003`s` - seconds\n\n 
Alternatively, an ISO 8601 timestamp can be provided for the duration.\n\n The duration cannot be longer than 30 days.\n \"\"\"\n duration: datetime.datetime\n delta = duration - datetime.datetime.utcnow()\n if delta > datetime.timedelta(days=30):\n await ctx.send(\":x: Cannot remove the role for longer than 30 days.\")\n return\n\n mod = ctx.author\n\n until_date = duration.replace(microsecond=0).isoformat() # Looks noisy with microseconds.\n await mod.remove_roles(self.moderators_role, reason=f\"Turned pings off until {until_date}.\")\n\n await self.pings_off_mods.set(mod.id, duration.isoformat())\n\n # Allow rescheduling the task without cancelling it separately via the `on` command.\n if mod.id in self._role_scheduler:\n self._role_scheduler.cancel(mod.id)\n self._role_scheduler.schedule_at(duration, mod.id, self.reapply_role(mod))\n\n embed = Embed(timestamp=duration, colour=Colours.bright_green)\n embed.set_footer(text=\"Moderators role has been removed until\", icon_url=Icons.green_checkmark)\n await ctx.send(embed=embed)\n\n @modpings_group.command(name='on')\n @has_any_role(*MODERATION_ROLES)\n async def on_command(self, ctx: Context) -> None:\n \"\"\"Re-apply the pingable moderators role.\"\"\"\n mod = ctx.author\n if mod in self.moderators_role.members:\n await ctx.send(\":question: You already have the role.\")\n return\n\n await mod.add_roles(self.moderators_role, reason=\"Pings off period canceled.\")\n\n await self.pings_off_mods.delete(mod.id)\n\n # We assume the task exists. Lack of it may indicate a bug.\n self._role_scheduler.cancel(mod.id)\n\n await ctx.send(f\"{Emojis.check_mark} Moderators role has been re-applied.\")\n\n def cog_unload(self) -> None:\n \"\"\"Cancel role tasks when the cog unloads.\"\"\"\n log.trace(\"Cog unload: canceling role tasks.\")\n self.reschedule_task.cancel()\n self._role_scheduler.cancel_all()\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the ModPings cog.\"\"\"\n bot.add_cog(ModPings(bot))\n", "path": "bot/exts/moderation/modpings.py"}]} | 2,018 | 313 |
gh_patches_debug_8649 | rasdani/github-patches | git_diff | mlcommons__GaNDLF-390 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
GaNDLF is not running on macOS
**Describe the bug**
Currently, we are requiring `torch==1.8.2`:
https://github.com/CBICA/GaNDLF/blob/e8f922266ec7af1c3fac36439290d22a5e63866d/setup.py#L56
Which is not supported by PyTorch on macOS[[ref](https://pytorch.org/get-started/locally/)].
**To Reproduce**
Steps to reproduce the behavior: https://cbica.github.io/GaNDLF/setup
**Expected behavior**
The only reason for us to drop support of an OS should be if something major is breaking.
**Screenshots**
N.A.
**GaNDLF Version**
<!-- Put the output of the following command:
python -c 'import GANDLF as g;print(g.__version__)'
-->
0.0.14-dev
**Desktop (please complete the following information):**
- OS: macOS
- Version: N.A.
**Additional context**
Reported by @Sofia-Mouchtaris
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 """The setup script."""
4
5
6 import os
7 from setuptools import setup, find_packages
8 from setuptools.command.install import install
9 from setuptools.command.develop import develop
10 from setuptools.command.egg_info import egg_info
11
12 with open("README.md") as readme_file:
13 readme = readme_file.read()
14
15
16 def git_submodule_update():
17 ## submodule update
18 os.system("git submodule update --init --recursive")
19
20
21 class CustomInstallCommand(install):
22 def run(self):
23 install.run(self)
24 git_submodule_update()
25
26
27 class CustomDevelopCommand(develop):
28 def run(self):
29 develop.run(self)
30 git_submodule_update()
31
32
33 class CustomEggInfoCommand(egg_info):
34 def run(self):
35 egg_info.run(self)
36 git_submodule_update()
37
38
39 # read version.py
40 import sys, re
41
42 try:
43 filepath = "GANDLF/version.py"
44 version_file = open(filepath)
45 (__version__,) = re.findall('__version__ = "(.*)"', version_file.read())
46
47 except Exception as error:
48 __version__ = "0.0.1"
49 sys.stderr.write("Warning: Could not open '%s' due %s\n" % (filepath, error))
50
51 requirements = [
52 "black",
53 "numpy==1.21.0",
54 "scipy",
55 "SimpleITK!=2.0.*",
56 "torch==1.8.2",
57 "torchvision",
58 "tqdm",
59 "torchio==0.18.57",
60 "pandas",
61 "pylint",
62 "scikit-learn>=0.23.2",
63 "pickle5>=0.0.11",
64 "setuptools",
65 "seaborn",
66 "pyyaml",
67 "tiffslide",
68 "scikit-image",
69 "matplotlib",
70 "requests>=2.25.0",
71 "pyvips",
72 "pytest",
73 "coverage",
74 "pytest-cov",
75 "psutil",
76 "medcam",
77 "opencv-python",
78 "torchmetrics",
79 "OpenPatchMiner==0.1.6",
80 "zarr==2.10.3",
81 "pydicom",
82 "onnx",
83 ]
84
85 setup(
86 name="GANDLF",
87 version=__version__,
88 author="Jose Agraz, Vinayak Ahluwalia, Bhakti Baheti, Spyridon Bakas, Ujjwal Baid, Megh Bhalerao, Brandon Edwards, Karol Gotkowski, Caleb Grenko, Orhun Güley, Ibrahim Ethem Hamamci, Sarthak Pati, Micah Sheller, Juliia Skobleva, Siddhesh Thakur, Spiros Thermos", # alphabetical order
89 author_email="[email protected]",
90 python_requires=">=3.7",
91 packages=find_packages(),
92 cmdclass={ # this ensures git_submodule_update is called during install
93 "install": CustomInstallCommand,
94 "develop": CustomDevelopCommand,
95 "egg_info": CustomEggInfoCommand,
96 },
97 scripts=[
98 "gandlf_run",
99 "gandlf_constructCSV",
100 "gandlf_collectStats",
101 "gandlf_patchMiner",
102 "gandlf_preprocess",
103 "gandlf_anonymizer",
104 "gandlf_verifyInstall",
105 ],
106 classifiers=[
107 "Development Status :: 3 - Alpha",
108 "Intended Audience :: Science/Research",
109 "License :: OSI Approved :: BSD License",
110 "Natural Language :: English",
111 "Operating System :: OS Independent",
112 "Programming Language :: Python :: 3.7",
113 "Programming Language :: Python :: 3.8",
114 "Programming Language :: Python :: 3.9",
115 "Topic :: Scientific/Engineering :: Medical Science Apps",
116 ],
117 description=(
118 "PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging."
119 ),
120 install_requires=requirements,
121 license="BSD-3-Clause License",
122 long_description=readme,
123 long_description_content_type="text/markdown",
124 include_package_data=True,
125 keywords="semantic, segmentation, regression, classification, data-augmentation, medical-imaging",
126 zip_safe=False,
127 )
128
129 ## windows vips installation
130 if os.name == "nt": # proceed for windows
131 from pathlib import Path
132
133 # download and extract if main dll is absent
134 if not Path("./vips/vips-dev-8.10/bin/libvips-42.dll").exists():
135 print("Downloading and extracting VIPS for Windows")
136 url = "https://github.com/libvips/libvips/releases/download/v8.10.2/vips-dev-w64-all-8.10.2.zip"
137 zip_to_extract = "./vips.zip"
138 import urllib.request, zipfile
139
140 urllib.request.urlretrieve(url, zip_to_extract)
141 z = zipfile.ZipFile(zip_to_extract)
142 z.extractall("./vips")
143 z.close()
144 os.remove(zip_to_extract)
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -53,7 +53,6 @@
"numpy==1.21.0",
"scipy",
"SimpleITK!=2.0.*",
- "torch==1.8.2",
"torchvision",
"tqdm",
"torchio==0.18.57",
@@ -82,6 +81,12 @@
"onnx",
]
+# pytorch doesn't have LTS support on OSX - https://github.com/CBICA/GaNDLF/issues/389
+if sys.platform == "darwin":
+ requirements.append("torch==1.9.0")
+else:
+ requirements.append("torch==1.8.2")
+
setup(
name="GANDLF",
version=__version__,
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -53,7 +53,6 @@\n \"numpy==1.21.0\",\n \"scipy\",\n \"SimpleITK!=2.0.*\",\n- \"torch==1.8.2\",\n \"torchvision\",\n \"tqdm\",\n \"torchio==0.18.57\",\n@@ -82,6 +81,12 @@\n \"onnx\",\n ]\n \n+# pytorch doesn't have LTS support on OSX - https://github.com/CBICA/GaNDLF/issues/389\n+if sys.platform == \"darwin\":\n+ requirements.append(\"torch==1.9.0\")\n+else:\n+ requirements.append(\"torch==1.8.2\")\n+\n setup(\n name=\"GANDLF\",\n version=__version__,\n", "issue": "GaNDLF is not running on macOS\n**Describe the bug**\r\nCurrently, we are requiring `torch==1.8.2`:\r\nhttps://github.com/CBICA/GaNDLF/blob/e8f922266ec7af1c3fac36439290d22a5e63866d/setup.py#L56\r\nWhich is not supported by PyTorch on macOS[[ref](https://pytorch.org/get-started/locally/)].\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior: https://cbica.github.io/GaNDLF/setup\r\n\r\n**Expected behavior**\r\nThe only reason for us to drop support of an OS should be if something major is breaking.\r\n\r\n**Screenshots**\r\nN.A.\r\n\r\n**GaNDLF Version**\r\n<!-- Put the output of the following command:\r\npython -c 'import GANDLF as g;print(g.__version__)'\r\n-->\r\n0.0.14-dev\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: macOS\r\n - Version: N.A.\r\n\r\n**Additional context**\r\nReported by @Sofia-Mouchtaris \r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"The setup script.\"\"\"\n\n\nimport os\nfrom setuptools import setup, find_packages\nfrom setuptools.command.install import install\nfrom setuptools.command.develop import develop\nfrom setuptools.command.egg_info import egg_info\n\nwith open(\"README.md\") as readme_file:\n readme = readme_file.read()\n\n\ndef git_submodule_update():\n ## submodule update\n os.system(\"git submodule update --init --recursive\")\n\n\nclass CustomInstallCommand(install):\n def run(self):\n install.run(self)\n git_submodule_update()\n\n\nclass CustomDevelopCommand(develop):\n def run(self):\n develop.run(self)\n git_submodule_update()\n\n\nclass CustomEggInfoCommand(egg_info):\n def run(self):\n egg_info.run(self)\n git_submodule_update()\n\n\n# read version.py\nimport sys, re\n\ntry:\n filepath = \"GANDLF/version.py\"\n version_file = open(filepath)\n (__version__,) = re.findall('__version__ = \"(.*)\"', version_file.read())\n\nexcept Exception as error:\n __version__ = \"0.0.1\"\n sys.stderr.write(\"Warning: Could not open '%s' due %s\\n\" % (filepath, error))\n\nrequirements = [\n \"black\",\n \"numpy==1.21.0\",\n \"scipy\",\n \"SimpleITK!=2.0.*\",\n \"torch==1.8.2\",\n \"torchvision\",\n \"tqdm\",\n \"torchio==0.18.57\",\n \"pandas\",\n \"pylint\",\n \"scikit-learn>=0.23.2\",\n \"pickle5>=0.0.11\",\n \"setuptools\",\n \"seaborn\",\n \"pyyaml\",\n \"tiffslide\",\n \"scikit-image\",\n \"matplotlib\",\n \"requests>=2.25.0\",\n \"pyvips\",\n \"pytest\",\n \"coverage\",\n \"pytest-cov\",\n \"psutil\",\n \"medcam\",\n \"opencv-python\",\n \"torchmetrics\",\n \"OpenPatchMiner==0.1.6\",\n \"zarr==2.10.3\",\n \"pydicom\",\n \"onnx\",\n]\n\nsetup(\n name=\"GANDLF\",\n version=__version__,\n author=\"Jose Agraz, Vinayak Ahluwalia, Bhakti Baheti, Spyridon Bakas, Ujjwal Baid, Megh Bhalerao, Brandon Edwards, Karol Gotkowski, Caleb Grenko, Orhun G\u00fcley, Ibrahim Ethem Hamamci, Sarthak Pati, Micah Sheller, Juliia Skobleva, Siddhesh Thakur, Spiros Thermos\", # alphabetical order\n author_email=\"[email protected]\",\n python_requires=\">=3.7\",\n 
packages=find_packages(),\n cmdclass={ # this ensures git_submodule_update is called during install\n \"install\": CustomInstallCommand,\n \"develop\": CustomDevelopCommand,\n \"egg_info\": CustomEggInfoCommand,\n },\n scripts=[\n \"gandlf_run\",\n \"gandlf_constructCSV\",\n \"gandlf_collectStats\",\n \"gandlf_patchMiner\",\n \"gandlf_preprocess\",\n \"gandlf_anonymizer\",\n \"gandlf_verifyInstall\",\n ],\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Scientific/Engineering :: Medical Science Apps\",\n ],\n description=(\n \"PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging.\"\n ),\n install_requires=requirements,\n license=\"BSD-3-Clause License\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n include_package_data=True,\n keywords=\"semantic, segmentation, regression, classification, data-augmentation, medical-imaging\",\n zip_safe=False,\n)\n\n## windows vips installation\nif os.name == \"nt\": # proceed for windows\n from pathlib import Path\n\n # download and extract if main dll is absent\n if not Path(\"./vips/vips-dev-8.10/bin/libvips-42.dll\").exists():\n print(\"Downloading and extracting VIPS for Windows\")\n url = \"https://github.com/libvips/libvips/releases/download/v8.10.2/vips-dev-w64-all-8.10.2.zip\"\n zip_to_extract = \"./vips.zip\"\n import urllib.request, zipfile\n\n urllib.request.urlretrieve(url, zip_to_extract)\n z = zipfile.ZipFile(zip_to_extract)\n z.extractall(\"./vips\")\n z.close()\n os.remove(zip_to_extract)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"The setup script.\"\"\"\n\n\nimport os\nfrom setuptools import setup, find_packages\nfrom setuptools.command.install import install\nfrom setuptools.command.develop import develop\nfrom setuptools.command.egg_info import egg_info\n\nwith open(\"README.md\") as readme_file:\n readme = readme_file.read()\n\n\ndef git_submodule_update():\n ## submodule update\n os.system(\"git submodule update --init --recursive\")\n\n\nclass CustomInstallCommand(install):\n def run(self):\n install.run(self)\n git_submodule_update()\n\n\nclass CustomDevelopCommand(develop):\n def run(self):\n develop.run(self)\n git_submodule_update()\n\n\nclass CustomEggInfoCommand(egg_info):\n def run(self):\n egg_info.run(self)\n git_submodule_update()\n\n\n# read version.py\nimport sys, re\n\ntry:\n filepath = \"GANDLF/version.py\"\n version_file = open(filepath)\n (__version__,) = re.findall('__version__ = \"(.*)\"', version_file.read())\n\nexcept Exception as error:\n __version__ = \"0.0.1\"\n sys.stderr.write(\"Warning: Could not open '%s' due %s\\n\" % (filepath, error))\n\nrequirements = [\n \"black\",\n \"numpy==1.21.0\",\n \"scipy\",\n \"SimpleITK!=2.0.*\",\n \"torchvision\",\n \"tqdm\",\n \"torchio==0.18.57\",\n \"pandas\",\n \"pylint\",\n \"scikit-learn>=0.23.2\",\n \"pickle5>=0.0.11\",\n \"setuptools\",\n \"seaborn\",\n \"pyyaml\",\n \"tiffslide\",\n \"scikit-image\",\n \"matplotlib\",\n \"requests>=2.25.0\",\n \"pyvips\",\n \"pytest\",\n \"coverage\",\n \"pytest-cov\",\n \"psutil\",\n \"medcam\",\n \"opencv-python\",\n \"torchmetrics\",\n \"OpenPatchMiner==0.1.6\",\n 
\"zarr==2.10.3\",\n \"pydicom\",\n \"onnx\",\n]\n\n# pytorch doesn't have LTS support on OSX - https://github.com/CBICA/GaNDLF/issues/389\nif sys.platform == \"darwin\":\n requirements.append(\"torch==1.9.0\")\nelse:\n requirements.append(\"torch==1.8.2\")\n\nsetup(\n name=\"GANDLF\",\n version=__version__,\n author=\"Jose Agraz, Vinayak Ahluwalia, Bhakti Baheti, Spyridon Bakas, Ujjwal Baid, Megh Bhalerao, Brandon Edwards, Karol Gotkowski, Caleb Grenko, Orhun G\u00fcley, Ibrahim Ethem Hamamci, Sarthak Pati, Micah Sheller, Juliia Skobleva, Siddhesh Thakur, Spiros Thermos\", # alphabetical order\n author_email=\"[email protected]\",\n python_requires=\">=3.7\",\n packages=find_packages(),\n cmdclass={ # this ensures git_submodule_update is called during install\n \"install\": CustomInstallCommand,\n \"develop\": CustomDevelopCommand,\n \"egg_info\": CustomEggInfoCommand,\n },\n scripts=[\n \"gandlf_run\",\n \"gandlf_constructCSV\",\n \"gandlf_collectStats\",\n \"gandlf_patchMiner\",\n \"gandlf_preprocess\",\n \"gandlf_anonymizer\",\n \"gandlf_verifyInstall\",\n ],\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Scientific/Engineering :: Medical Science Apps\",\n ],\n description=(\n \"PyTorch-based framework that handles segmentation/regression/classification using various DL architectures for medical imaging.\"\n ),\n install_requires=requirements,\n license=\"BSD-3-Clause License\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n include_package_data=True,\n keywords=\"semantic, segmentation, regression, classification, data-augmentation, medical-imaging\",\n zip_safe=False,\n)\n\n## windows vips installation\nif os.name == \"nt\": # proceed for windows\n from pathlib import Path\n\n # download and extract if main dll is absent\n if not Path(\"./vips/vips-dev-8.10/bin/libvips-42.dll\").exists():\n print(\"Downloading and extracting VIPS for Windows\")\n url = \"https://github.com/libvips/libvips/releases/download/v8.10.2/vips-dev-w64-all-8.10.2.zip\"\n zip_to_extract = \"./vips.zip\"\n import urllib.request, zipfile\n\n urllib.request.urlretrieve(url, zip_to_extract)\n z = zipfile.ZipFile(zip_to_extract)\n z.extractall(\"./vips\")\n z.close()\n os.remove(zip_to_extract)\n", "path": "setup.py"}]} | 1,927 | 194 |
gh_patches_debug_18920 | rasdani/github-patches | git_diff | google__turbinia-1098 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Set a default file size limit for PlasoTask hashers
Currently, all PlasoTask instances will attempt to hash files of any size, potentially very large ones. This could lead to unusually long processing times.
This is a small part of a larger effort to try to optimize how Turbinia configures Plaso tasks to better utilize inherent parallel processing capabilities.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `turbinia/workers/plaso.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright 2015 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Task for running Plaso."""
16
17 from __future__ import unicode_literals
18
19 import os
20 import logging
21
22 from turbinia import config
23 from turbinia.evidence import EvidenceState as state
24 from turbinia.evidence import PlasoFile
25 from turbinia.workers import TurbiniaTask
26 from turbinia.lib import file_helpers
27
28
29 class PlasoTask(TurbiniaTask):
30 """Task to run Plaso (log2timeline)."""
31
32 # Plaso requires the Disk to be attached, but doesn't require it be mounted.
33 REQUIRED_STATES = [
34 state.ATTACHED, state.DECOMPRESSED, state.CONTAINER_MOUNTED
35 ]
36
37 TASK_CONFIG = {
38 # 'none' as indicated in the options for status_view within
39 # the Plaso documentation
40 'status_view': 'none',
41 'hashers': 'all',
42 'partitions': 'all',
43 'vss_stores': 'none',
44 # artifact_filters and file_filter are mutually exclusive
45 # parameters and Plaso will error out if both parameters are used.
46 'artifact_filters': None,
47 'file_filter': None,
48 'custom_artifact_definitions': None,
49 'parsers': None,
50 'yara_rules': None
51 }
52
53 def build_plaso_command(self, base_command, conf):
54 """Builds a typical plaso command, contains logic specific to log2timeline.
55
56 Args:
57 base_command (str): Command to invoke log2timeline (e.g. log2timeline.py)
58 conf (dict): Dynamic config containing the parameters for the command.
59
60 Returns:
61 String for valid Log2timeline command.
62 """
63 self.result.log(
64 'Generating Plaso command line from arguments: {0!s}'.format(conf),
65 level=logging.DEBUG)
66 cmd = [base_command]
67 for k, v in conf.items():
68 cli_args = [
69 'status_view', 'hashers', 'partitions', 'vss_stores',
70 'custom_artifact_definitions', 'parsers', 'artifact_filters',
71 'file_filter', 'yara_rules'
72 ]
73 if (k not in cli_args or not v):
74 continue
75 prepend = '-'
76 if len(k) > 1:
77 prepend = '--'
78 if k == 'file_filter':
79 file_path = file_helpers.write_list_to_temp_file(
80 v, preferred_dir=self.tmp_dir)
81 cmd.extend(['-f', file_path])
82 elif k == 'yara_rules':
83 file_path = file_helpers.write_str_to_temp_file(
84 v, preferred_dir=self.tmp_dir)
85 cmd.extend(['--yara_rules', file_path])
86 elif isinstance(v, list):
87 cmd.extend([prepend + k, ','.join(v)])
88 elif isinstance(v, bool):
89 cmd.append(prepend + k)
90 elif isinstance(v, str):
91 cmd.extend([prepend + k, v])
92 return cmd
93
94 def run(self, evidence, result):
95 """Task that process data with Plaso.
96
97 Args:
98 evidence (Evidence object): The evidence we will process.
99 result (TurbiniaTaskResult): The object to place task results into.
100
101 Returns:
102 TurbiniaTaskResult object.
103 """
104
105 config.LoadConfig()
106
107 # Write plaso file into tmp_dir because sqlite has issues with some shared
108 # filesystems (e.g NFS).
109 plaso_file = os.path.join(self.tmp_dir, '{0:s}.plaso'.format(self.id))
110 plaso_evidence = PlasoFile(source_path=plaso_file)
111 plaso_log = os.path.join(self.output_dir, '{0:s}.log'.format(self.id))
112
113 cmd = self.build_plaso_command('log2timeline.py', self.task_config)
114
115 if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):
116 cmd.append('-d')
117
118 if evidence.credentials:
119 for credential_type, credential_data in evidence.credentials:
120 cmd.extend([
121 '--credential', '{0:s}:{1:s}'.format(
122 credential_type, credential_data)
123 ])
124
125 cmd.extend(['--temporary_directory', self.tmp_dir])
126 cmd.extend(['--logfile', plaso_log])
127 cmd.extend(['--unattended'])
128 cmd.extend(['--storage_file', plaso_file])
129 cmd.extend([evidence.local_path])
130
131 result.log('Running plaso as [{0:s}]'.format(' '.join(cmd)))
132 self.execute(
133 cmd, result, log_files=[plaso_log], new_evidence=[plaso_evidence],
134 close=True)
135
136 return result
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/turbinia/workers/plaso.py b/turbinia/workers/plaso.py
--- a/turbinia/workers/plaso.py
+++ b/turbinia/workers/plaso.py
@@ -39,6 +39,7 @@
# the Plaso documentation
'status_view': 'none',
'hashers': 'all',
+ 'hasher_file_size_limit': '1073741824',
'partitions': 'all',
'vss_stores': 'none',
# artifact_filters and file_filter are mutually exclusive
@@ -66,9 +67,9 @@
cmd = [base_command]
for k, v in conf.items():
cli_args = [
- 'status_view', 'hashers', 'partitions', 'vss_stores',
- 'custom_artifact_definitions', 'parsers', 'artifact_filters',
- 'file_filter', 'yara_rules'
+ 'status_view', 'hashers', 'hasher_file_size_limit', 'partitions',
+ 'vss_stores', 'custom_artifact_definitions', 'parsers',
+ 'artifact_filters', 'file_filter', 'yara_rules'
]
if (k not in cli_args or not v):
continue
| {"golden_diff": "diff --git a/turbinia/workers/plaso.py b/turbinia/workers/plaso.py\n--- a/turbinia/workers/plaso.py\n+++ b/turbinia/workers/plaso.py\n@@ -39,6 +39,7 @@\n # the Plaso documentation\n 'status_view': 'none',\n 'hashers': 'all',\n+ 'hasher_file_size_limit': '1073741824',\n 'partitions': 'all',\n 'vss_stores': 'none',\n # artifact_filters and file_filter are mutually exclusive\n@@ -66,9 +67,9 @@\n cmd = [base_command]\n for k, v in conf.items():\n cli_args = [\n- 'status_view', 'hashers', 'partitions', 'vss_stores',\n- 'custom_artifact_definitions', 'parsers', 'artifact_filters',\n- 'file_filter', 'yara_rules'\n+ 'status_view', 'hashers', 'hasher_file_size_limit', 'partitions',\n+ 'vss_stores', 'custom_artifact_definitions', 'parsers',\n+ 'artifact_filters', 'file_filter', 'yara_rules'\n ]\n if (k not in cli_args or not v):\n continue\n", "issue": "Set a default file size limit for PlasoTask hashers\nCurrently, all PlasoTask instances will attempt to hash files of any size, potentially very large ones .This could lead to unusually long processing times.\r\n\r\nThis is a small part of a larger effort to try to optimize how Turbinia configures Plaso tasks to better utilize inherent parallel processing capabilities.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task for running Plaso.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\nimport logging\n\nfrom turbinia import config\nfrom turbinia.evidence import EvidenceState as state\nfrom turbinia.evidence import PlasoFile\nfrom turbinia.workers import TurbiniaTask\nfrom turbinia.lib import file_helpers\n\n\nclass PlasoTask(TurbiniaTask):\n \"\"\"Task to run Plaso (log2timeline).\"\"\"\n\n # Plaso requires the Disk to be attached, but doesn't require it be mounted.\n REQUIRED_STATES = [\n state.ATTACHED, state.DECOMPRESSED, state.CONTAINER_MOUNTED\n ]\n\n TASK_CONFIG = {\n # 'none' as indicated in the options for status_view within\n # the Plaso documentation\n 'status_view': 'none',\n 'hashers': 'all',\n 'partitions': 'all',\n 'vss_stores': 'none',\n # artifact_filters and file_filter are mutually exclusive\n # parameters and Plaso will error out if both parameters are used.\n 'artifact_filters': None,\n 'file_filter': None,\n 'custom_artifact_definitions': None,\n 'parsers': None,\n 'yara_rules': None\n }\n\n def build_plaso_command(self, base_command, conf):\n \"\"\"Builds a typical plaso command, contains logic specific to log2timeline.\n\n Args:\n base_command (str): Command to invoke log2timeline (e.g. 
log2timeline.py)\n conf (dict): Dynamic config containing the parameters for the command.\n\n Returns:\n String for valid Log2timeline command.\n \"\"\"\n self.result.log(\n 'Generating Plaso command line from arguments: {0!s}'.format(conf),\n level=logging.DEBUG)\n cmd = [base_command]\n for k, v in conf.items():\n cli_args = [\n 'status_view', 'hashers', 'partitions', 'vss_stores',\n 'custom_artifact_definitions', 'parsers', 'artifact_filters',\n 'file_filter', 'yara_rules'\n ]\n if (k not in cli_args or not v):\n continue\n prepend = '-'\n if len(k) > 1:\n prepend = '--'\n if k == 'file_filter':\n file_path = file_helpers.write_list_to_temp_file(\n v, preferred_dir=self.tmp_dir)\n cmd.extend(['-f', file_path])\n elif k == 'yara_rules':\n file_path = file_helpers.write_str_to_temp_file(\n v, preferred_dir=self.tmp_dir)\n cmd.extend(['--yara_rules', file_path])\n elif isinstance(v, list):\n cmd.extend([prepend + k, ','.join(v)])\n elif isinstance(v, bool):\n cmd.append(prepend + k)\n elif isinstance(v, str):\n cmd.extend([prepend + k, v])\n return cmd\n\n def run(self, evidence, result):\n \"\"\"Task that process data with Plaso.\n\n Args:\n evidence (Evidence object): The evidence we will process.\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n\n config.LoadConfig()\n\n # Write plaso file into tmp_dir because sqlite has issues with some shared\n # filesystems (e.g NFS).\n plaso_file = os.path.join(self.tmp_dir, '{0:s}.plaso'.format(self.id))\n plaso_evidence = PlasoFile(source_path=plaso_file)\n plaso_log = os.path.join(self.output_dir, '{0:s}.log'.format(self.id))\n\n cmd = self.build_plaso_command('log2timeline.py', self.task_config)\n\n if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):\n cmd.append('-d')\n\n if evidence.credentials:\n for credential_type, credential_data in evidence.credentials:\n cmd.extend([\n '--credential', '{0:s}:{1:s}'.format(\n credential_type, credential_data)\n ])\n\n cmd.extend(['--temporary_directory', self.tmp_dir])\n cmd.extend(['--logfile', plaso_log])\n cmd.extend(['--unattended'])\n cmd.extend(['--storage_file', plaso_file])\n cmd.extend([evidence.local_path])\n\n result.log('Running plaso as [{0:s}]'.format(' '.join(cmd)))\n self.execute(\n cmd, result, log_files=[plaso_log], new_evidence=[plaso_evidence],\n close=True)\n\n return result\n", "path": "turbinia/workers/plaso.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task for running Plaso.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\nimport logging\n\nfrom turbinia import config\nfrom turbinia.evidence import EvidenceState as state\nfrom turbinia.evidence import PlasoFile\nfrom turbinia.workers import TurbiniaTask\nfrom turbinia.lib import file_helpers\n\n\nclass PlasoTask(TurbiniaTask):\n \"\"\"Task to run Plaso (log2timeline).\"\"\"\n\n # Plaso requires the Disk to be attached, but doesn't require 
it be mounted.\n REQUIRED_STATES = [\n state.ATTACHED, state.DECOMPRESSED, state.CONTAINER_MOUNTED\n ]\n\n TASK_CONFIG = {\n # 'none' as indicated in the options for status_view within\n # the Plaso documentation\n 'status_view': 'none',\n 'hashers': 'all',\n 'hasher_file_size_limit': '1073741824',\n 'partitions': 'all',\n 'vss_stores': 'none',\n # artifact_filters and file_filter are mutually exclusive\n # parameters and Plaso will error out if both parameters are used.\n 'artifact_filters': None,\n 'file_filter': None,\n 'custom_artifact_definitions': None,\n 'parsers': None,\n 'yara_rules': None\n }\n\n def build_plaso_command(self, base_command, conf):\n \"\"\"Builds a typical plaso command, contains logic specific to log2timeline.\n\n Args:\n base_command (str): Command to invoke log2timeline (e.g. log2timeline.py)\n conf (dict): Dynamic config containing the parameters for the command.\n\n Returns:\n String for valid Log2timeline command.\n \"\"\"\n self.result.log(\n 'Generating Plaso command line from arguments: {0!s}'.format(conf),\n level=logging.DEBUG)\n cmd = [base_command]\n for k, v in conf.items():\n cli_args = [\n 'status_view', 'hashers', 'hasher_file_size_limit', 'partitions',\n 'vss_stores', 'custom_artifact_definitions', 'parsers',\n 'artifact_filters', 'file_filter', 'yara_rules'\n ]\n if (k not in cli_args or not v):\n continue\n prepend = '-'\n if len(k) > 1:\n prepend = '--'\n if k == 'file_filter':\n file_path = file_helpers.write_list_to_temp_file(\n v, preferred_dir=self.tmp_dir)\n cmd.extend(['-f', file_path])\n elif k == 'yara_rules':\n file_path = file_helpers.write_str_to_temp_file(\n v, preferred_dir=self.tmp_dir)\n cmd.extend(['--yara_rules', file_path])\n elif isinstance(v, list):\n cmd.extend([prepend + k, ','.join(v)])\n elif isinstance(v, bool):\n cmd.append(prepend + k)\n elif isinstance(v, str):\n cmd.extend([prepend + k, v])\n return cmd\n\n def run(self, evidence, result):\n \"\"\"Task that process data with Plaso.\n\n Args:\n evidence (Evidence object): The evidence we will process.\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n\n config.LoadConfig()\n\n # Write plaso file into tmp_dir because sqlite has issues with some shared\n # filesystems (e.g NFS).\n plaso_file = os.path.join(self.tmp_dir, '{0:s}.plaso'.format(self.id))\n plaso_evidence = PlasoFile(source_path=plaso_file)\n plaso_log = os.path.join(self.output_dir, '{0:s}.log'.format(self.id))\n\n cmd = self.build_plaso_command('log2timeline.py', self.task_config)\n\n if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):\n cmd.append('-d')\n\n if evidence.credentials:\n for credential_type, credential_data in evidence.credentials:\n cmd.extend([\n '--credential', '{0:s}:{1:s}'.format(\n credential_type, credential_data)\n ])\n\n cmd.extend(['--temporary_directory', self.tmp_dir])\n cmd.extend(['--logfile', plaso_log])\n cmd.extend(['--unattended'])\n cmd.extend(['--storage_file', plaso_file])\n cmd.extend([evidence.local_path])\n\n result.log('Running plaso as [{0:s}]'.format(' '.join(cmd)))\n self.execute(\n cmd, result, log_files=[plaso_log], new_evidence=[plaso_evidence],\n close=True)\n\n return result\n", "path": "turbinia/workers/plaso.py"}]} | 1,751 | 282 |
gh_patches_debug_2514 | rasdani/github-patches | git_diff | liberapay__liberapay.com-173 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Changing organization type doesn't work
In the identity tab, when I change the organization type to Organization instead of Business, my changes are not saved.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `liberapay/security/authentication.py`
Content:
```
1 """Defines website authentication helpers.
2 """
3 import binascii
4
5 from six.moves.urllib.parse import urlencode
6
7 from aspen import Response
8
9 from liberapay.constants import SESSION, SESSION_TIMEOUT
10 from liberapay.exceptions import AuthRequired
11 from liberapay.models.participant import Participant
12
13
14 class _ANON(object):
15 ANON = True
16 is_admin = False
17 id = None
18 __bool__ = __nonzero__ = lambda *a: False
19 get_tip_to = lambda self, tippee: Participant._zero_tip_dict(tippee)
20 __repr__ = lambda self: '<ANON>'
21
22
23 ANON = _ANON()
24
25
26 def _get_body(request):
27 try:
28 body = request.body
29 except Response:
30 return
31 if not isinstance(body, dict):
32 return
33 return body
34
35
36 def sign_in_with_form_data(body, state):
37 p = None
38 _, website = state['_'], state['website']
39
40 if body.get('log-in.id'):
41 id = body.pop('log-in.id')
42 k = 'email' if '@' in id else 'username'
43 p = Participant.authenticate(
44 k, 'password',
45 id, body.pop('log-in.password')
46 )
47 if p and p.status == 'closed':
48 p.update_status('active')
49
50 elif body.get('sign-in.username'):
51 if body.pop('sign-in.terms') != 'agree':
52 raise Response(400, 'you have to agree to the terms')
53 kind = body.pop('sign-in.kind')
54 if kind not in ('individual', 'organization'):
55 raise Response(400, 'bad kind')
56 with website.db.get_cursor() as c:
57 p = Participant.make_active(
58 body.pop('sign-in.username'), kind, body.pop('sign-in.password'),
59 cursor=c
60 )
61 p.add_email(body.pop('sign-in.email'), cursor=c)
62 p.authenticated = True
63
64 elif body.get('email-login.email'):
65 email = body.pop('email-login.email')
66 p = Participant._from_thing('email', email)
67 if p:
68 p.start_session()
69 qs = {'log-in.id': p.id, 'log-in.token': p.session_token}
70 p.send_email(
71 'password_reset',
72 email=email,
73 link=p.url('settings/', qs),
74 link_validity=SESSION_TIMEOUT,
75 )
76 state['email-login.sent-to'] = email
77 else:
78 state['sign-in.error'] = _(
79 "We didn't find any account whose primary email address is {0}.",
80 email
81 )
82 p = None
83
84 return p
85
86
87 def start_user_as_anon():
88 """Make sure we always have a user object, regardless of exceptions during authentication.
89 """
90 return {'user': ANON}
91
92
93 def authenticate_user_if_possible(request, state, user, _):
94 """This signs the user in.
95 """
96 if request.line.uri.startswith('/assets/'):
97 return
98
99 # HTTP auth
100 if 'Authorization' in request.headers:
101 header = request.headers['authorization']
102 if not header.startswith('Basic '):
103 raise Response(401, 'Unsupported authentication method')
104 try:
105 creds = binascii.a2b_base64(header[len('Basic '):]).split(':', 1)
106 except binascii.Error:
107 raise Response(400, 'Malformed "Authorization" header')
108 participant = Participant.authenticate('id', 'password', *creds)
109 if not participant:
110 raise Response(401)
111 return {'user': participant}
112
113 # Cookie and form auth
114 # We want to try cookie auth first, but we want form auth to supersede it
115 p = None
116 response = state.setdefault('response', Response())
117 if SESSION in request.headers.cookie:
118 creds = request.headers.cookie[SESSION].value.split(':', 1)
119 p = Participant.authenticate('id', 'session', *creds)
120 if p:
121 state['user'] = p
122 session_p, p = p, None
123 session_suffix = ''
124 redirect_url = request.line.uri
125 if request.method == 'POST':
126 body = _get_body(request)
127 if body:
128 p = sign_in_with_form_data(body, state)
129 carry_on = body.pop('email-login.carry-on', None)
130 if not p and carry_on:
131 p_email = session_p and (
132 session_p.email or session_p.get_emails()[0].address
133 )
134 if p_email != carry_on:
135 state['email-login.carry-on'] = carry_on
136 raise AuthRequired
137 elif request.method == 'GET' and request.qs.get('log-in.id'):
138 id, token = request.qs.pop('log-in.id'), request.qs.pop('log-in.token')
139 p = Participant.authenticate('id', 'session', id, token)
140 if not p and (not session_p or session_p.id != id):
141 raise Response(400, _("This login link is expired or invalid."))
142 else:
143 qs = '?' + urlencode(request.qs, doseq=True) if request.qs else ''
144 redirect_url = request.path.raw + qs
145 session_p = p
146 session_suffix = '.em'
147 if p:
148 if session_p:
149 session_p.sign_out(response.headers.cookie)
150 p.sign_in(response.headers.cookie, session_suffix)
151 state['user'] = p
152 if request.body.pop('form.repost', None) != 'true':
153 response.redirect(redirect_url)
154
155
156 def add_auth_to_response(response, request=None, user=ANON):
157 if request is None:
158 return # early parsing must've failed
159 if request.line.uri.startswith('/assets/'):
160 return # assets never get auth headers
161
162 if SESSION in request.headers.cookie:
163 if not user.ANON:
164 user.keep_signed_in(response.headers.cookie)
165
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/liberapay/security/authentication.py b/liberapay/security/authentication.py
--- a/liberapay/security/authentication.py
+++ b/liberapay/security/authentication.py
@@ -44,6 +44,8 @@
k, 'password',
id, body.pop('log-in.password')
)
+ if not p:
+ state['sign-in.error'] = _("Bad username or password.")
if p and p.status == 'closed':
p.update_status('active')
| {"golden_diff": "diff --git a/liberapay/security/authentication.py b/liberapay/security/authentication.py\n--- a/liberapay/security/authentication.py\n+++ b/liberapay/security/authentication.py\n@@ -44,6 +44,8 @@\n k, 'password',\n id, body.pop('log-in.password')\n )\n+ if not p:\n+ state['sign-in.error'] = _(\"Bad username or password.\")\n if p and p.status == 'closed':\n p.update_status('active')\n", "issue": "Changing organization type doesn't work\nIn identity tab, when I change the organization type to set Organization instead of Business, my changes are not saved. \n\n", "before_files": [{"content": "\"\"\"Defines website authentication helpers.\n\"\"\"\nimport binascii\n\nfrom six.moves.urllib.parse import urlencode\n\nfrom aspen import Response\n\nfrom liberapay.constants import SESSION, SESSION_TIMEOUT\nfrom liberapay.exceptions import AuthRequired\nfrom liberapay.models.participant import Participant\n\n\nclass _ANON(object):\n ANON = True\n is_admin = False\n id = None\n __bool__ = __nonzero__ = lambda *a: False\n get_tip_to = lambda self, tippee: Participant._zero_tip_dict(tippee)\n __repr__ = lambda self: '<ANON>'\n\n\nANON = _ANON()\n\n\ndef _get_body(request):\n try:\n body = request.body\n except Response:\n return\n if not isinstance(body, dict):\n return\n return body\n\n\ndef sign_in_with_form_data(body, state):\n p = None\n _, website = state['_'], state['website']\n\n if body.get('log-in.id'):\n id = body.pop('log-in.id')\n k = 'email' if '@' in id else 'username'\n p = Participant.authenticate(\n k, 'password',\n id, body.pop('log-in.password')\n )\n if p and p.status == 'closed':\n p.update_status('active')\n\n elif body.get('sign-in.username'):\n if body.pop('sign-in.terms') != 'agree':\n raise Response(400, 'you have to agree to the terms')\n kind = body.pop('sign-in.kind')\n if kind not in ('individual', 'organization'):\n raise Response(400, 'bad kind')\n with website.db.get_cursor() as c:\n p = Participant.make_active(\n body.pop('sign-in.username'), kind, body.pop('sign-in.password'),\n cursor=c\n )\n p.add_email(body.pop('sign-in.email'), cursor=c)\n p.authenticated = True\n\n elif body.get('email-login.email'):\n email = body.pop('email-login.email')\n p = Participant._from_thing('email', email)\n if p:\n p.start_session()\n qs = {'log-in.id': p.id, 'log-in.token': p.session_token}\n p.send_email(\n 'password_reset',\n email=email,\n link=p.url('settings/', qs),\n link_validity=SESSION_TIMEOUT,\n )\n state['email-login.sent-to'] = email\n else:\n state['sign-in.error'] = _(\n \"We didn't find any account whose primary email address is {0}.\",\n email\n )\n p = None\n\n return p\n\n\ndef start_user_as_anon():\n \"\"\"Make sure we always have a user object, regardless of exceptions during authentication.\n \"\"\"\n return {'user': ANON}\n\n\ndef authenticate_user_if_possible(request, state, user, _):\n \"\"\"This signs the user in.\n \"\"\"\n if request.line.uri.startswith('/assets/'):\n return\n\n # HTTP auth\n if 'Authorization' in request.headers:\n header = request.headers['authorization']\n if not header.startswith('Basic '):\n raise Response(401, 'Unsupported authentication method')\n try:\n creds = binascii.a2b_base64(header[len('Basic '):]).split(':', 1)\n except binascii.Error:\n raise Response(400, 'Malformed \"Authorization\" header')\n participant = Participant.authenticate('id', 'password', *creds)\n if not participant:\n raise Response(401)\n return {'user': participant}\n\n # Cookie and form auth\n # We want to try cookie auth first, but we 
want form auth to supersede it\n p = None\n response = state.setdefault('response', Response())\n if SESSION in request.headers.cookie:\n creds = request.headers.cookie[SESSION].value.split(':', 1)\n p = Participant.authenticate('id', 'session', *creds)\n if p:\n state['user'] = p\n session_p, p = p, None\n session_suffix = ''\n redirect_url = request.line.uri\n if request.method == 'POST':\n body = _get_body(request)\n if body:\n p = sign_in_with_form_data(body, state)\n carry_on = body.pop('email-login.carry-on', None)\n if not p and carry_on:\n p_email = session_p and (\n session_p.email or session_p.get_emails()[0].address\n )\n if p_email != carry_on:\n state['email-login.carry-on'] = carry_on\n raise AuthRequired\n elif request.method == 'GET' and request.qs.get('log-in.id'):\n id, token = request.qs.pop('log-in.id'), request.qs.pop('log-in.token')\n p = Participant.authenticate('id', 'session', id, token)\n if not p and (not session_p or session_p.id != id):\n raise Response(400, _(\"This login link is expired or invalid.\"))\n else:\n qs = '?' + urlencode(request.qs, doseq=True) if request.qs else ''\n redirect_url = request.path.raw + qs\n session_p = p\n session_suffix = '.em'\n if p:\n if session_p:\n session_p.sign_out(response.headers.cookie)\n p.sign_in(response.headers.cookie, session_suffix)\n state['user'] = p\n if request.body.pop('form.repost', None) != 'true':\n response.redirect(redirect_url)\n\n\ndef add_auth_to_response(response, request=None, user=ANON):\n if request is None:\n return # early parsing must've failed\n if request.line.uri.startswith('/assets/'):\n return # assets never get auth headers\n\n if SESSION in request.headers.cookie:\n if not user.ANON:\n user.keep_signed_in(response.headers.cookie)\n", "path": "liberapay/security/authentication.py"}], "after_files": [{"content": "\"\"\"Defines website authentication helpers.\n\"\"\"\nimport binascii\n\nfrom six.moves.urllib.parse import urlencode\n\nfrom aspen import Response\n\nfrom liberapay.constants import SESSION, SESSION_TIMEOUT\nfrom liberapay.exceptions import AuthRequired\nfrom liberapay.models.participant import Participant\n\n\nclass _ANON(object):\n ANON = True\n is_admin = False\n id = None\n __bool__ = __nonzero__ = lambda *a: False\n get_tip_to = lambda self, tippee: Participant._zero_tip_dict(tippee)\n __repr__ = lambda self: '<ANON>'\n\n\nANON = _ANON()\n\n\ndef _get_body(request):\n try:\n body = request.body\n except Response:\n return\n if not isinstance(body, dict):\n return\n return body\n\n\ndef sign_in_with_form_data(body, state):\n p = None\n _, website = state['_'], state['website']\n\n if body.get('log-in.id'):\n id = body.pop('log-in.id')\n k = 'email' if '@' in id else 'username'\n p = Participant.authenticate(\n k, 'password',\n id, body.pop('log-in.password')\n )\n if not p:\n state['sign-in.error'] = _(\"Bad username or password.\")\n if p and p.status == 'closed':\n p.update_status('active')\n\n elif body.get('sign-in.username'):\n if body.pop('sign-in.terms') != 'agree':\n raise Response(400, 'you have to agree to the terms')\n kind = body.pop('sign-in.kind')\n if kind not in ('individual', 'organization'):\n raise Response(400, 'bad kind')\n with website.db.get_cursor() as c:\n p = Participant.make_active(\n body.pop('sign-in.username'), kind, body.pop('sign-in.password'),\n cursor=c\n )\n p.add_email(body.pop('sign-in.email'), cursor=c)\n p.authenticated = True\n\n elif body.get('email-login.email'):\n email = body.pop('email-login.email')\n p = 
Participant._from_thing('email', email)\n if p:\n p.start_session()\n qs = {'log-in.id': p.id, 'log-in.token': p.session_token}\n p.send_email(\n 'password_reset',\n email=email,\n link=p.url('settings/', qs),\n link_validity=SESSION_TIMEOUT,\n )\n state['email-login.sent-to'] = email\n else:\n state['sign-in.error'] = _(\n \"We didn't find any account whose primary email address is {0}.\",\n email\n )\n p = None\n\n return p\n\n\ndef start_user_as_anon():\n \"\"\"Make sure we always have a user object, regardless of exceptions during authentication.\n \"\"\"\n return {'user': ANON}\n\n\ndef authenticate_user_if_possible(request, state, user, _):\n \"\"\"This signs the user in.\n \"\"\"\n if request.line.uri.startswith('/assets/'):\n return\n\n # HTTP auth\n if 'Authorization' in request.headers:\n header = request.headers['authorization']\n if not header.startswith('Basic '):\n raise Response(401, 'Unsupported authentication method')\n try:\n creds = binascii.a2b_base64(header[len('Basic '):]).split(':', 1)\n except binascii.Error:\n raise Response(400, 'Malformed \"Authorization\" header')\n participant = Participant.authenticate('id', 'password', *creds)\n if not participant:\n raise Response(401)\n return {'user': participant}\n\n # Cookie and form auth\n # We want to try cookie auth first, but we want form auth to supersede it\n p = None\n response = state.setdefault('response', Response())\n if SESSION in request.headers.cookie:\n creds = request.headers.cookie[SESSION].value.split(':', 1)\n p = Participant.authenticate('id', 'session', *creds)\n if p:\n state['user'] = p\n session_p, p = p, None\n session_suffix = ''\n redirect_url = request.line.uri\n if request.method == 'POST':\n body = _get_body(request)\n if body:\n p = sign_in_with_form_data(body, state)\n carry_on = body.pop('email-login.carry-on', None)\n if not p and carry_on:\n p_email = session_p and (\n session_p.email or session_p.get_emails()[0].address\n )\n if p_email != carry_on:\n state['email-login.carry-on'] = carry_on\n raise AuthRequired\n elif request.method == 'GET' and request.qs.get('log-in.id'):\n id, token = request.qs.pop('log-in.id'), request.qs.pop('log-in.token')\n p = Participant.authenticate('id', 'session', id, token)\n if not p and (not session_p or session_p.id != id):\n raise Response(400, _(\"This login link is expired or invalid.\"))\n else:\n qs = '?' + urlencode(request.qs, doseq=True) if request.qs else ''\n redirect_url = request.path.raw + qs\n session_p = p\n session_suffix = '.em'\n if p:\n if session_p:\n session_p.sign_out(response.headers.cookie)\n p.sign_in(response.headers.cookie, session_suffix)\n state['user'] = p\n if request.body.pop('form.repost', None) != 'true':\n response.redirect(redirect_url)\n\n\ndef add_auth_to_response(response, request=None, user=ANON):\n if request is None:\n return # early parsing must've failed\n if request.line.uri.startswith('/assets/'):\n return # assets never get auth headers\n\n if SESSION in request.headers.cookie:\n if not user.ANON:\n user.keep_signed_in(response.headers.cookie)\n", "path": "liberapay/security/authentication.py"}]} | 1,944 | 108 |
gh_patches_debug_49043 | rasdani/github-patches | git_diff | arviz-devs__arviz-2032 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plot_dot
**Describe the bug**
plot_dot fig size doesn't behave the way I expect: when I set `figsize` on an axes to triple a previous width, the resulting figure is not three times the size. There are also some minor bugs where the dots seem to overlap.
**To Reproduce**
```
samples = stats.beta(2,2).rvs(100)
width = 10
fig, ax = plt.subplots(figsize=(width, 10))
az.plot_dot(samples, ax=ax)
ax.set_title(f"Width: {width}")
ax.set_xlim(0,1)
```
Then try this, and see that the figure is not three times the width
```
width = 30
fig, ax = plt.subplots(figsize=(width, 10))
az.plot_dot(samples, ax=ax)
ax.set_title(f"Width: {width}")
ax.set_xlim(0,1)
```


**Expected behavior**
Figsize from `plt.subplots` is respected
**Additional context**
Arviz '0.12.0'
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `arviz/plots/backends/matplotlib/dotplot.py`
Content:
```
1 """Matplotlib dotplot."""
2 import math
3 import warnings
4 import numpy as np
5 import matplotlib.pyplot as plt
6 from matplotlib import _pylab_helpers
7
8 from ...plot_utils import _scale_fig_size
9 from . import backend_kwarg_defaults, create_axes_grid, backend_show
10 from ...plot_utils import plot_point_interval
11 from ...dotplot import wilkinson_algorithm, layout_stacks
12
13
14 def plot_dot(
15 values,
16 binwidth,
17 dotsize,
18 stackratio,
19 hdi_prob,
20 quartiles,
21 rotated,
22 dotcolor,
23 intervalcolor,
24 markersize,
25 markercolor,
26 marker,
27 figsize,
28 linewidth,
29 point_estimate,
30 nquantiles,
31 point_interval,
32 ax,
33 show,
34 backend_kwargs,
35 plot_kwargs,
36 ):
37 """Matplotlib dotplot."""
38 if backend_kwargs is None:
39 backend_kwargs = {}
40
41 backend_kwargs = {**backend_kwarg_defaults(), **backend_kwargs}
42
43 backend_kwargs.setdefault("figsize", figsize)
44 backend_kwargs["squeeze"] = True
45
46 (figsize, _, _, _, auto_linewidth, auto_markersize) = _scale_fig_size(figsize, None)
47
48 if plot_kwargs is None:
49 plot_kwargs = {}
50 plot_kwargs.setdefault("color", dotcolor)
51
52 if linewidth is None:
53 linewidth = auto_linewidth
54
55 if markersize is None:
56 markersize = auto_markersize
57
58 if ax is None:
59 fig_manager = _pylab_helpers.Gcf.get_active()
60 if fig_manager is not None:
61 ax = fig_manager.canvas.figure.gca()
62 else:
63 _, ax = create_axes_grid(
64 1,
65 backend_kwargs=backend_kwargs,
66 )
67
68 if point_interval:
69 ax = plot_point_interval(
70 ax,
71 values,
72 point_estimate,
73 hdi_prob,
74 quartiles,
75 linewidth,
76 markersize,
77 markercolor,
78 marker,
79 rotated,
80 intervalcolor,
81 "matplotlib",
82 )
83
84 if nquantiles > values.shape[0]:
85 warnings.warn(
86 "nquantiles must be less than or equal to the number of data points", UserWarning
87 )
88 nquantiles = values.shape[0]
89 else:
90 qlist = np.linspace(1 / (2 * nquantiles), 1 - 1 / (2 * nquantiles), nquantiles)
91 values = np.quantile(values, qlist)
92
93 if binwidth is None:
94 binwidth = math.sqrt((values[-1] - values[0] + 1) ** 2 / (2 * nquantiles * np.pi))
95
96 ## Wilkinson's Algorithm
97 stack_locs, stack_count = wilkinson_algorithm(values, binwidth)
98 x, y = layout_stacks(stack_locs, stack_count, binwidth, stackratio, rotated)
99
100 for (x_i, y_i) in zip(x, y):
101 dot = plt.Circle((x_i, y_i), dotsize * binwidth / 2, **plot_kwargs)
102 ax.add_patch(dot)
103
104 if rotated:
105 ax.tick_params(bottom=False, labelbottom=False)
106 else:
107 ax.tick_params(left=False, labelleft=False)
108
109 ax.set_aspect("equal", adjustable="box")
110 ax.autoscale()
111
112 if backend_show(show):
113 plt.show()
114
115 return ax
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/arviz/plots/backends/matplotlib/dotplot.py b/arviz/plots/backends/matplotlib/dotplot.py
--- a/arviz/plots/backends/matplotlib/dotplot.py
+++ b/arviz/plots/backends/matplotlib/dotplot.py
@@ -106,7 +106,7 @@
else:
ax.tick_params(left=False, labelleft=False)
- ax.set_aspect("equal", adjustable="box")
+ ax.set_aspect("equal", adjustable="datalim")
ax.autoscale()
if backend_show(show):
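
A note on why this one-argument change fixes the reported sizing: with `adjustable="box"`, matplotlib honours the equal aspect ratio by shrinking the axes box itself, so widening the figure leaves the drawn plot the same size; with `adjustable="datalim"` the data limits are rescaled instead and the axes keep filling the figure. A minimal, library-only sketch of the difference (plain matplotlib, no ArviZ required; the sample data is arbitrary):

```python
import matplotlib.pyplot as plt

for adjustable in ("box", "datalim"):
    for width in (10, 30):
        fig, ax = plt.subplots(figsize=(width, 10))
        ax.plot([0, 1], [0, 1], "o")
        ax.set_aspect("equal", adjustable=adjustable)
        fig.canvas.draw()  # finalize the layout so the axes extent is meaningful
        axes_width_px = ax.get_window_extent().width
        # "box": both figures give roughly the same axes width (capped by the height).
        # "datalim": the 30-inch figure yields roughly 3x the axes width of the 10-inch one.
        print(adjustable, width, round(axes_width_px))
        plt.close(fig)
```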
| {"golden_diff": "diff --git a/arviz/plots/backends/matplotlib/dotplot.py b/arviz/plots/backends/matplotlib/dotplot.py\n--- a/arviz/plots/backends/matplotlib/dotplot.py\n+++ b/arviz/plots/backends/matplotlib/dotplot.py\n@@ -106,7 +106,7 @@\n else:\n ax.tick_params(left=False, labelleft=False)\n \n- ax.set_aspect(\"equal\", adjustable=\"box\")\n+ ax.set_aspect(\"equal\", adjustable=\"datalim\")\n ax.autoscale()\n \n if backend_show(show):\n", "issue": "plot_dot \n**Describe the bug**\r\nplotdot fig size doesn't behave the way I expect, in that when I set `figsize` in an axes that triple a previous one its not triple the size. There also are some minor bugs where the dots seem to be overlapping some\r\n\r\n**To Reproduce**\r\n```\r\nsamples = stats.beta(2,2).rvs(100)\r\n\r\nwidth = 10\r\nfig, ax = plt.subplots(figsize=(width, 10))\r\naz.plot_dot(samples, ax=ax)\r\nax.set_title(f\"Width: {width}\")\r\nax.set_xlim(0,1)\r\n```\r\n\r\nThen try this, but see that figure is not three times the width\r\n```\r\nwidth = 30\r\nfig, ax = plt.subplots(figsize=(width, 10))\r\naz.plot_dot(samples, ax=ax)\r\nax.set_title(f\"Width: {width}\")\r\nax.set_xlim(0,1)\r\n```\r\n\r\n\r\n\r\n\r\n\r\n**Expected behavior**\r\nFigsize from `plt.subplots` is respected\r\n\r\n**Additional context**\r\nArviz '0.12.0'\r\n\n", "before_files": [{"content": "\"\"\"Matplotlib dotplot.\"\"\"\nimport math\nimport warnings\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import _pylab_helpers\n\nfrom ...plot_utils import _scale_fig_size\nfrom . import backend_kwarg_defaults, create_axes_grid, backend_show\nfrom ...plot_utils import plot_point_interval\nfrom ...dotplot import wilkinson_algorithm, layout_stacks\n\n\ndef plot_dot(\n values,\n binwidth,\n dotsize,\n stackratio,\n hdi_prob,\n quartiles,\n rotated,\n dotcolor,\n intervalcolor,\n markersize,\n markercolor,\n marker,\n figsize,\n linewidth,\n point_estimate,\n nquantiles,\n point_interval,\n ax,\n show,\n backend_kwargs,\n plot_kwargs,\n):\n \"\"\"Matplotlib dotplot.\"\"\"\n if backend_kwargs is None:\n backend_kwargs = {}\n\n backend_kwargs = {**backend_kwarg_defaults(), **backend_kwargs}\n\n backend_kwargs.setdefault(\"figsize\", figsize)\n backend_kwargs[\"squeeze\"] = True\n\n (figsize, _, _, _, auto_linewidth, auto_markersize) = _scale_fig_size(figsize, None)\n\n if plot_kwargs is None:\n plot_kwargs = {}\n plot_kwargs.setdefault(\"color\", dotcolor)\n\n if linewidth is None:\n linewidth = auto_linewidth\n\n if markersize is None:\n markersize = auto_markersize\n\n if ax is None:\n fig_manager = _pylab_helpers.Gcf.get_active()\n if fig_manager is not None:\n ax = fig_manager.canvas.figure.gca()\n else:\n _, ax = create_axes_grid(\n 1,\n backend_kwargs=backend_kwargs,\n )\n\n if point_interval:\n ax = plot_point_interval(\n ax,\n values,\n point_estimate,\n hdi_prob,\n quartiles,\n linewidth,\n markersize,\n markercolor,\n marker,\n rotated,\n intervalcolor,\n \"matplotlib\",\n )\n\n if nquantiles > values.shape[0]:\n warnings.warn(\n \"nquantiles must be less than or equal to the number of data points\", UserWarning\n )\n nquantiles = values.shape[0]\n else:\n qlist = np.linspace(1 / (2 * nquantiles), 1 - 1 / (2 * nquantiles), nquantiles)\n values = np.quantile(values, qlist)\n\n if binwidth is None:\n binwidth = math.sqrt((values[-1] - values[0] + 1) ** 2 / (2 * nquantiles * np.pi))\n\n ## Wilkinson's Algorithm\n stack_locs, stack_count = wilkinson_algorithm(values, binwidth)\n x, y = layout_stacks(stack_locs, stack_count, binwidth, 
stackratio, rotated)\n\n for (x_i, y_i) in zip(x, y):\n dot = plt.Circle((x_i, y_i), dotsize * binwidth / 2, **plot_kwargs)\n ax.add_patch(dot)\n\n if rotated:\n ax.tick_params(bottom=False, labelbottom=False)\n else:\n ax.tick_params(left=False, labelleft=False)\n\n ax.set_aspect(\"equal\", adjustable=\"box\")\n ax.autoscale()\n\n if backend_show(show):\n plt.show()\n\n return ax\n", "path": "arviz/plots/backends/matplotlib/dotplot.py"}], "after_files": [{"content": "\"\"\"Matplotlib dotplot.\"\"\"\nimport math\nimport warnings\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import _pylab_helpers\n\nfrom ...plot_utils import _scale_fig_size\nfrom . import backend_kwarg_defaults, create_axes_grid, backend_show\nfrom ...plot_utils import plot_point_interval\nfrom ...dotplot import wilkinson_algorithm, layout_stacks\n\n\ndef plot_dot(\n values,\n binwidth,\n dotsize,\n stackratio,\n hdi_prob,\n quartiles,\n rotated,\n dotcolor,\n intervalcolor,\n markersize,\n markercolor,\n marker,\n figsize,\n linewidth,\n point_estimate,\n nquantiles,\n point_interval,\n ax,\n show,\n backend_kwargs,\n plot_kwargs,\n):\n \"\"\"Matplotlib dotplot.\"\"\"\n if backend_kwargs is None:\n backend_kwargs = {}\n\n backend_kwargs = {**backend_kwarg_defaults(), **backend_kwargs}\n\n backend_kwargs.setdefault(\"figsize\", figsize)\n backend_kwargs[\"squeeze\"] = True\n\n (figsize, _, _, _, auto_linewidth, auto_markersize) = _scale_fig_size(figsize, None)\n\n if plot_kwargs is None:\n plot_kwargs = {}\n plot_kwargs.setdefault(\"color\", dotcolor)\n\n if linewidth is None:\n linewidth = auto_linewidth\n\n if markersize is None:\n markersize = auto_markersize\n\n if ax is None:\n fig_manager = _pylab_helpers.Gcf.get_active()\n if fig_manager is not None:\n ax = fig_manager.canvas.figure.gca()\n else:\n _, ax = create_axes_grid(\n 1,\n backend_kwargs=backend_kwargs,\n )\n\n if point_interval:\n ax = plot_point_interval(\n ax,\n values,\n point_estimate,\n hdi_prob,\n quartiles,\n linewidth,\n markersize,\n markercolor,\n marker,\n rotated,\n intervalcolor,\n \"matplotlib\",\n )\n\n if nquantiles > values.shape[0]:\n warnings.warn(\n \"nquantiles must be less than or equal to the number of data points\", UserWarning\n )\n nquantiles = values.shape[0]\n else:\n qlist = np.linspace(1 / (2 * nquantiles), 1 - 1 / (2 * nquantiles), nquantiles)\n values = np.quantile(values, qlist)\n\n if binwidth is None:\n binwidth = math.sqrt((values[-1] - values[0] + 1) ** 2 / (2 * nquantiles * np.pi))\n\n ## Wilkinson's Algorithm\n stack_locs, stack_count = wilkinson_algorithm(values, binwidth)\n x, y = layout_stacks(stack_locs, stack_count, binwidth, stackratio, rotated)\n\n for (x_i, y_i) in zip(x, y):\n dot = plt.Circle((x_i, y_i), dotsize * binwidth / 2, **plot_kwargs)\n ax.add_patch(dot)\n\n if rotated:\n ax.tick_params(bottom=False, labelbottom=False)\n else:\n ax.tick_params(left=False, labelleft=False)\n\n ax.set_aspect(\"equal\", adjustable=\"datalim\")\n ax.autoscale()\n\n if backend_show(show):\n plt.show()\n\n return ax\n", "path": "arviz/plots/backends/matplotlib/dotplot.py"}]} | 1,564 | 127 |
gh_patches_debug_9828 | rasdani/github-patches | git_diff | secdev__scapy-3473 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
L2TP post_build is broken
### Brief description
l2tp.py's post_build is supposed to update the length field. However, it only does this if the current length is None, and the length field is initialized to 0, not None, so the length is never updated.
### Scapy version
2.4.5
### Python version
3.8
### Operating system
Ubuntu 20.04
### Additional environment information
_No response_
### How to reproduce
print( (L2TP(header=['control', 'length'], version=2) / 'blahblah').build() )
### Actual result
b'\xc0\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00blahblah'
### Expected result
b'\xc0\x02\x00\x14\x00\x00\x00\x00\x00\x00\x00\x00blahblah'
### Related resources
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scapy/layers/l2tp.py`
Content:
```
1 # This file is part of Scapy
2 # See http://www.secdev.org/projects/scapy for more information
3 # Copyright (C) Philippe Biondi <[email protected]>
4 # This program is published under a GPLv2 license
5
6 """
7 L2TP (Layer 2 Tunneling Protocol) for VPNs.
8
9 [RFC 2661]
10 """
11
12 import struct
13
14 from scapy.packet import Packet, bind_layers, bind_bottom_up
15 from scapy.fields import BitEnumField, ConditionalField, FlagsField, \
16 PadField, ShortField
17 from scapy.layers.inet import UDP
18 from scapy.layers.ppp import PPP
19
20
21 class L2TP(Packet):
22 name = "L2TP"
23 fields_desc = [
24 FlagsField("hdr", 0, 12, ['res00', 'res01', 'res02', 'res03', 'priority', 'offset', # noqa: E501
25 'res06', 'sequence', 'res08', 'res09', 'length', 'control']), # noqa: E501
26 BitEnumField("version", 2, 4, {2: 'L2TPv2'}),
27
28 ConditionalField(ShortField("len", 0),
29 lambda pkt: pkt.hdr & 'control+length'),
30 ShortField("tunnel_id", 0),
31 ShortField("session_id", 0),
32 ConditionalField(ShortField("ns", 0),
33 lambda pkt: pkt.hdr & 'sequence+control'),
34 ConditionalField(ShortField("nr", 0),
35 lambda pkt: pkt.hdr & 'sequence+control'),
36 ConditionalField(
37 PadField(ShortField("offset", 0), 4, b"\x00"),
38 lambda pkt: not (pkt.hdr & 'control') and pkt.hdr & 'offset'
39 )
40 ]
41
42 def post_build(self, pkt, pay):
43 if self.len is None and self.hdr & 'control+length':
44 tmp_len = len(pkt) + len(pay)
45 pkt = pkt[:2] + struct.pack("!H", tmp_len) + pkt[4:]
46 return pkt + pay
47
48
49 bind_bottom_up(UDP, L2TP, dport=1701)
50 bind_bottom_up(UDP, L2TP, sport=1701)
51 bind_layers(UDP, L2TP, dport=1701, sport=1701)
52 bind_layers(L2TP, PPP,)
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scapy/layers/l2tp.py b/scapy/layers/l2tp.py
--- a/scapy/layers/l2tp.py
+++ b/scapy/layers/l2tp.py
@@ -25,7 +25,7 @@
'res06', 'sequence', 'res08', 'res09', 'length', 'control']), # noqa: E501
BitEnumField("version", 2, 4, {2: 'L2TPv2'}),
- ConditionalField(ShortField("len", 0),
+ ConditionalField(ShortField("len", None),
lambda pkt: pkt.hdr & 'control+length'),
ShortField("tunnel_id", 0),
ShortField("session_id", 0),
| {"golden_diff": "diff --git a/scapy/layers/l2tp.py b/scapy/layers/l2tp.py\n--- a/scapy/layers/l2tp.py\n+++ b/scapy/layers/l2tp.py\n@@ -25,7 +25,7 @@\n 'res06', 'sequence', 'res08', 'res09', 'length', 'control']), # noqa: E501\n BitEnumField(\"version\", 2, 4, {2: 'L2TPv2'}),\n \n- ConditionalField(ShortField(\"len\", 0),\n+ ConditionalField(ShortField(\"len\", None),\n lambda pkt: pkt.hdr & 'control+length'),\n ShortField(\"tunnel_id\", 0),\n ShortField(\"session_id\", 0),\n", "issue": "L2TP post_build is broken\n### Brief description\n\nl2tp.py post_build is supposed to update the length. However, it only does this if current length is None, and the length field is initialized to 0, not None, resulting in the length never being updated. \n\n### Scapy version\n\n2.4.5\n\n### Python version\n\n3.8\n\n### Operating system\n\nUbuntu 20.04\n\n### Additional environment information\n\n_No response_\n\n### How to reproduce\n\nprint( (L2TP(header=['control', 'length'], version=2) / 'blahblah').build() )\r\n\n\n### Actual result\n\nb'\\xc0\\x02\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00blahblah'\n\n### Expected result\n\nb'\\xc0\\x02\\x00\\x14\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00blahblah'\n\n### Related resources\n\n_No response_\n", "before_files": [{"content": "# This file is part of Scapy\n# See http://www.secdev.org/projects/scapy for more information\n# Copyright (C) Philippe Biondi <[email protected]>\n# This program is published under a GPLv2 license\n\n\"\"\"\nL2TP (Layer 2 Tunneling Protocol) for VPNs.\n\n[RFC 2661]\n\"\"\"\n\nimport struct\n\nfrom scapy.packet import Packet, bind_layers, bind_bottom_up\nfrom scapy.fields import BitEnumField, ConditionalField, FlagsField, \\\n PadField, ShortField\nfrom scapy.layers.inet import UDP\nfrom scapy.layers.ppp import PPP\n\n\nclass L2TP(Packet):\n name = \"L2TP\"\n fields_desc = [\n FlagsField(\"hdr\", 0, 12, ['res00', 'res01', 'res02', 'res03', 'priority', 'offset', # noqa: E501\n 'res06', 'sequence', 'res08', 'res09', 'length', 'control']), # noqa: E501\n BitEnumField(\"version\", 2, 4, {2: 'L2TPv2'}),\n\n ConditionalField(ShortField(\"len\", 0),\n lambda pkt: pkt.hdr & 'control+length'),\n ShortField(\"tunnel_id\", 0),\n ShortField(\"session_id\", 0),\n ConditionalField(ShortField(\"ns\", 0),\n lambda pkt: pkt.hdr & 'sequence+control'),\n ConditionalField(ShortField(\"nr\", 0),\n lambda pkt: pkt.hdr & 'sequence+control'),\n ConditionalField(\n PadField(ShortField(\"offset\", 0), 4, b\"\\x00\"),\n lambda pkt: not (pkt.hdr & 'control') and pkt.hdr & 'offset'\n )\n ]\n\n def post_build(self, pkt, pay):\n if self.len is None and self.hdr & 'control+length':\n tmp_len = len(pkt) + len(pay)\n pkt = pkt[:2] + struct.pack(\"!H\", tmp_len) + pkt[4:]\n return pkt + pay\n\n\nbind_bottom_up(UDP, L2TP, dport=1701)\nbind_bottom_up(UDP, L2TP, sport=1701)\nbind_layers(UDP, L2TP, dport=1701, sport=1701)\nbind_layers(L2TP, PPP,)\n", "path": "scapy/layers/l2tp.py"}], "after_files": [{"content": "# This file is part of Scapy\n# See http://www.secdev.org/projects/scapy for more information\n# Copyright (C) Philippe Biondi <[email protected]>\n# This program is published under a GPLv2 license\n\n\"\"\"\nL2TP (Layer 2 Tunneling Protocol) for VPNs.\n\n[RFC 2661]\n\"\"\"\n\nimport struct\n\nfrom scapy.packet import Packet, bind_layers, bind_bottom_up\nfrom scapy.fields import BitEnumField, ConditionalField, FlagsField, \\\n PadField, ShortField\nfrom scapy.layers.inet import UDP\nfrom scapy.layers.ppp import PPP\n\n\nclass L2TP(Packet):\n name = 
\"L2TP\"\n fields_desc = [\n FlagsField(\"hdr\", 0, 12, ['res00', 'res01', 'res02', 'res03', 'priority', 'offset', # noqa: E501\n 'res06', 'sequence', 'res08', 'res09', 'length', 'control']), # noqa: E501\n BitEnumField(\"version\", 2, 4, {2: 'L2TPv2'}),\n\n ConditionalField(ShortField(\"len\", None),\n lambda pkt: pkt.hdr & 'control+length'),\n ShortField(\"tunnel_id\", 0),\n ShortField(\"session_id\", 0),\n ConditionalField(ShortField(\"ns\", 0),\n lambda pkt: pkt.hdr & 'sequence+control'),\n ConditionalField(ShortField(\"nr\", 0),\n lambda pkt: pkt.hdr & 'sequence+control'),\n ConditionalField(\n PadField(ShortField(\"offset\", 0), 4, b\"\\x00\"),\n lambda pkt: not (pkt.hdr & 'control') and pkt.hdr & 'offset'\n )\n ]\n\n def post_build(self, pkt, pay):\n if self.len is None and self.hdr & 'control+length':\n tmp_len = len(pkt) + len(pay)\n pkt = pkt[:2] + struct.pack(\"!H\", tmp_len) + pkt[4:]\n return pkt + pay\n\n\nbind_bottom_up(UDP, L2TP, dport=1701)\nbind_bottom_up(UDP, L2TP, sport=1701)\nbind_layers(UDP, L2TP, dport=1701, sport=1701)\nbind_layers(L2TP, PPP,)\n", "path": "scapy/layers/l2tp.py"}]} | 1,130 | 173 |
gh_patches_debug_18028 | rasdani/github-patches | git_diff | Mailu__Mailu-1316 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Rainloop Webmail - Authentication fails if you have a special character in your password
In the admin interface, you can define a new password and you can put a special character like `è`.
It works fine with the admin interface but it doesn't work at all with the Rainloop webmail. If you try to log in, you get a message indicating that authentication failed; see the screenshot in French:

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/admin/mailu/internal/nginx.py`
Content:
```
1 from mailu import models
2 from flask import current_app as app
3
4 import re
5 import urllib
6 import ipaddress
7 import socket
8 import tenacity
9
10
11 SUPPORTED_AUTH_METHODS = ["none", "plain"]
12
13
14 STATUSES = {
15 "authentication": ("Authentication credentials invalid", {
16 "imap": "AUTHENTICATIONFAILED",
17 "smtp": "535 5.7.8",
18 "pop3": "-ERR Authentication failed"
19 }),
20 }
21
22
23 def handle_authentication(headers):
24 """ Handle an HTTP nginx authentication request
25 See: http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html#protocol
26 """
27 method = headers["Auth-Method"]
28 protocol = headers["Auth-Protocol"]
29 # Incoming mail, no authentication
30 if method == "none" and protocol == "smtp":
31 server, port = get_server(headers["Auth-Protocol"], False)
32 return {
33 "Auth-Status": "OK",
34 "Auth-Server": server,
35 "Auth-Port": port
36 }
37 # Authenticated user
38 elif method == "plain":
39 server, port = get_server(headers["Auth-Protocol"], True)
40 user_email = urllib.parse.unquote(headers["Auth-User"])
41 password = urllib.parse.unquote(headers["Auth-Pass"])
42 ip = urllib.parse.unquote(headers["Client-Ip"])
43 user = models.User.query.get(user_email)
44 status = False
45 if user:
46 for token in user.tokens:
47 if (token.check_password(password) and
48 (not token.ip or token.ip == ip)):
49 status = True
50 if user.check_password(password):
51 status = True
52 if status:
53 if protocol == "imap" and not user.enable_imap:
54 status = False
55 elif protocol == "pop3" and not user.enable_pop:
56 status = False
57 if status and user.enabled:
58 return {
59 "Auth-Status": "OK",
60 "Auth-Server": server,
61 "Auth-Port": port
62 }
63 else:
64 status, code = get_status(protocol, "authentication")
65 return {
66 "Auth-Status": status,
67 "Auth-Error-Code": code,
68 "Auth-Wait": 0
69 }
70 # Unexpected
71 return {}
72
73
74 def get_status(protocol, status):
75 """ Return the proper error code depending on the protocol
76 """
77 status, codes = STATUSES[status]
78 return status, codes[protocol]
79
80 def extract_host_port(host_and_port, default_port):
81 host, _, port = re.match('^(.*)(:([0-9]*))?$', host_and_port).groups()
82 return host, int(port) if port else default_port
83
84 def get_server(protocol, authenticated=False):
85 if protocol == "imap":
86 hostname, port = extract_host_port(app.config['IMAP_ADDRESS'], 143)
87 elif protocol == "pop3":
88 hostname, port = extract_host_port(app.config['POP3_ADDRESS'], 110)
89 elif protocol == "smtp":
90 if authenticated:
91 hostname, port = extract_host_port(app.config['AUTHSMTP_ADDRESS'], 10025)
92 else:
93 hostname, port = extract_host_port(app.config['SMTP_ADDRESS'], 25)
94 try:
95 # test if hostname is already resolved to an ip adddress
96 ipaddress.ip_address(hostname)
97 except:
98 # hostname is not an ip address - so we need to resolve it
99 hostname = resolve_hostname(hostname)
100 return hostname, port
101
102 @tenacity.retry(stop=tenacity.stop_after_attempt(100),
103 wait=tenacity.wait_random(min=2, max=5))
104 def resolve_hostname(hostname):
105 """ This function uses system DNS to resolve a hostname.
106 It is capable of retrying in case the host is not immediately available
107 """
108 return socket.gethostbyname(hostname)
109
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/admin/mailu/internal/nginx.py b/core/admin/mailu/internal/nginx.py
--- a/core/admin/mailu/internal/nginx.py
+++ b/core/admin/mailu/internal/nginx.py
@@ -37,8 +37,14 @@
# Authenticated user
elif method == "plain":
server, port = get_server(headers["Auth-Protocol"], True)
- user_email = urllib.parse.unquote(headers["Auth-User"])
- password = urllib.parse.unquote(headers["Auth-Pass"])
+ # According to RFC2616 section 3.7.1 and PEP 3333, HTTP headers should
+ # be ASCII and are generally considered ISO8859-1. However when passing
+ # the password, nginx does not transcode the input UTF string, thus
+ # we need to manually decode.
+ raw_user_email = urllib.parse.unquote(headers["Auth-User"])
+ user_email = raw_user_email.encode("iso8859-1").decode("utf8")
+ raw_password = urllib.parse.unquote(headers["Auth-Pass"])
+ password = raw_password.encode("iso8859-1").decode("utf8")
ip = urllib.parse.unquote(headers["Client-Ip"])
user = models.User.query.get(user_email)
status = False
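
The underlying problem is transcoding rather than authentication: nginx forwards the password as UTF-8 bytes, but WSGI hands header values to Flask as ISO-8859-1 strings, so any non-ASCII character such as `è` arrives garbled and the password check fails. The added `.encode("iso8859-1").decode("utf8")` round-trip recovers the original text. A self-contained illustration of that step (plain Python, no Mailu or nginx involved; the password is made up):

```python
# UTF-8 bytes of the password, as nginx passes them along.
raw = "pastèque".encode("utf8")

# The WSGI layer exposes header values as ISO-8859-1 text, so the app sees mojibake.
seen = raw.decode("iso8859-1")
print(seen)            # 'pastÃ¨que' - not what the user typed

# The patched code re-encodes to get the original bytes back, then decodes as UTF-8.
recovered = seen.encode("iso8859-1").decode("utf8")
assert recovered == "pastèque"
print(recovered)       # 'pastèque'
```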
| {"golden_diff": "diff --git a/core/admin/mailu/internal/nginx.py b/core/admin/mailu/internal/nginx.py\n--- a/core/admin/mailu/internal/nginx.py\n+++ b/core/admin/mailu/internal/nginx.py\n@@ -37,8 +37,14 @@\n # Authenticated user\n elif method == \"plain\":\n server, port = get_server(headers[\"Auth-Protocol\"], True)\n- user_email = urllib.parse.unquote(headers[\"Auth-User\"])\n- password = urllib.parse.unquote(headers[\"Auth-Pass\"])\n+ # According to RFC2616 section 3.7.1 and PEP 3333, HTTP headers should\n+ # be ASCII and are generally considered ISO8859-1. However when passing\n+ # the password, nginx does not transcode the input UTF string, thus\n+ # we need to manually decode.\n+ raw_user_email = urllib.parse.unquote(headers[\"Auth-User\"])\n+ user_email = raw_user_email.encode(\"iso8859-1\").decode(\"utf8\")\n+ raw_password = urllib.parse.unquote(headers[\"Auth-Pass\"])\n+ password = raw_password.encode(\"iso8859-1\").decode(\"utf8\")\n ip = urllib.parse.unquote(headers[\"Client-Ip\"])\n user = models.User.query.get(user_email)\n status = False\n", "issue": "Rainloop Webmail - Authentication fails if you have a special character in your password\nIn the admin interface, you can define a new password and you can put a special character like `\u00e8`.\r\n\r\nIt works fine with admin interface but it doesn't work at all with the Rainloop webmail. If you try to log in, you will have a message to indicate that the authentication fails, see screenshoot in french:\r\n\r\n\r\n\n", "before_files": [{"content": "from mailu import models\nfrom flask import current_app as app\n\nimport re\nimport urllib\nimport ipaddress\nimport socket\nimport tenacity\n\n\nSUPPORTED_AUTH_METHODS = [\"none\", \"plain\"]\n\n\nSTATUSES = {\n \"authentication\": (\"Authentication credentials invalid\", {\n \"imap\": \"AUTHENTICATIONFAILED\",\n \"smtp\": \"535 5.7.8\",\n \"pop3\": \"-ERR Authentication failed\"\n }),\n}\n\n\ndef handle_authentication(headers):\n \"\"\" Handle an HTTP nginx authentication request\n See: http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html#protocol\n \"\"\"\n method = headers[\"Auth-Method\"]\n protocol = headers[\"Auth-Protocol\"]\n # Incoming mail, no authentication\n if method == \"none\" and protocol == \"smtp\":\n server, port = get_server(headers[\"Auth-Protocol\"], False)\n return {\n \"Auth-Status\": \"OK\",\n \"Auth-Server\": server,\n \"Auth-Port\": port\n }\n # Authenticated user\n elif method == \"plain\":\n server, port = get_server(headers[\"Auth-Protocol\"], True)\n user_email = urllib.parse.unquote(headers[\"Auth-User\"])\n password = urllib.parse.unquote(headers[\"Auth-Pass\"])\n ip = urllib.parse.unquote(headers[\"Client-Ip\"])\n user = models.User.query.get(user_email)\n status = False\n if user:\n for token in user.tokens:\n if (token.check_password(password) and\n (not token.ip or token.ip == ip)):\n status = True\n if user.check_password(password):\n status = True\n if status:\n if protocol == \"imap\" and not user.enable_imap:\n status = False\n elif protocol == \"pop3\" and not user.enable_pop:\n status = False\n if status and user.enabled:\n return {\n \"Auth-Status\": \"OK\",\n \"Auth-Server\": server,\n \"Auth-Port\": port\n }\n else:\n status, code = get_status(protocol, \"authentication\")\n return {\n \"Auth-Status\": status,\n \"Auth-Error-Code\": code,\n \"Auth-Wait\": 0\n }\n # Unexpected\n return {}\n\n\ndef get_status(protocol, status):\n \"\"\" Return the proper error code depending on the protocol\n \"\"\"\n status, codes = 
STATUSES[status]\n return status, codes[protocol]\n\ndef extract_host_port(host_and_port, default_port):\n host, _, port = re.match('^(.*)(:([0-9]*))?$', host_and_port).groups()\n return host, int(port) if port else default_port\n\ndef get_server(protocol, authenticated=False):\n if protocol == \"imap\":\n hostname, port = extract_host_port(app.config['IMAP_ADDRESS'], 143)\n elif protocol == \"pop3\":\n hostname, port = extract_host_port(app.config['POP3_ADDRESS'], 110)\n elif protocol == \"smtp\":\n if authenticated:\n hostname, port = extract_host_port(app.config['AUTHSMTP_ADDRESS'], 10025)\n else:\n hostname, port = extract_host_port(app.config['SMTP_ADDRESS'], 25)\n try:\n # test if hostname is already resolved to an ip adddress\n ipaddress.ip_address(hostname)\n except:\n # hostname is not an ip address - so we need to resolve it\n hostname = resolve_hostname(hostname)\n return hostname, port\n\[email protected](stop=tenacity.stop_after_attempt(100),\n wait=tenacity.wait_random(min=2, max=5))\ndef resolve_hostname(hostname):\n \"\"\" This function uses system DNS to resolve a hostname.\n It is capable of retrying in case the host is not immediately available\n \"\"\"\n return socket.gethostbyname(hostname)\n", "path": "core/admin/mailu/internal/nginx.py"}], "after_files": [{"content": "from mailu import models\nfrom flask import current_app as app\n\nimport re\nimport urllib\nimport ipaddress\nimport socket\nimport tenacity\n\n\nSUPPORTED_AUTH_METHODS = [\"none\", \"plain\"]\n\n\nSTATUSES = {\n \"authentication\": (\"Authentication credentials invalid\", {\n \"imap\": \"AUTHENTICATIONFAILED\",\n \"smtp\": \"535 5.7.8\",\n \"pop3\": \"-ERR Authentication failed\"\n }),\n}\n\n\ndef handle_authentication(headers):\n \"\"\" Handle an HTTP nginx authentication request\n See: http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html#protocol\n \"\"\"\n method = headers[\"Auth-Method\"]\n protocol = headers[\"Auth-Protocol\"]\n # Incoming mail, no authentication\n if method == \"none\" and protocol == \"smtp\":\n server, port = get_server(headers[\"Auth-Protocol\"], False)\n return {\n \"Auth-Status\": \"OK\",\n \"Auth-Server\": server,\n \"Auth-Port\": port\n }\n # Authenticated user\n elif method == \"plain\":\n server, port = get_server(headers[\"Auth-Protocol\"], True)\n # According to RFC2616 section 3.7.1 and PEP 3333, HTTP headers should\n # be ASCII and are generally considered ISO8859-1. 
However when passing\n # the password, nginx does not transcode the input UTF string, thus\n # we need to manually decode.\n raw_user_email = urllib.parse.unquote(headers[\"Auth-User\"])\n user_email = raw_user_email.encode(\"iso8859-1\").decode(\"utf8\")\n raw_password = urllib.parse.unquote(headers[\"Auth-Pass\"])\n password = raw_password.encode(\"iso8859-1\").decode(\"utf8\")\n ip = urllib.parse.unquote(headers[\"Client-Ip\"])\n user = models.User.query.get(user_email)\n status = False\n if user:\n for token in user.tokens:\n if (token.check_password(password) and\n (not token.ip or token.ip == ip)):\n status = True\n if user.check_password(password):\n status = True\n if status:\n if protocol == \"imap\" and not user.enable_imap:\n status = False\n elif protocol == \"pop3\" and not user.enable_pop:\n status = False\n if status and user.enabled:\n return {\n \"Auth-Status\": \"OK\",\n \"Auth-Server\": server,\n \"Auth-Port\": port\n }\n else:\n status, code = get_status(protocol, \"authentication\")\n return {\n \"Auth-Status\": status,\n \"Auth-Error-Code\": code,\n \"Auth-Wait\": 0\n }\n # Unexpected\n return {}\n\n\ndef get_status(protocol, status):\n \"\"\" Return the proper error code depending on the protocol\n \"\"\"\n status, codes = STATUSES[status]\n return status, codes[protocol]\n\ndef extract_host_port(host_and_port, default_port):\n host, _, port = re.match('^(.*)(:([0-9]*))?$', host_and_port).groups()\n return host, int(port) if port else default_port\n\ndef get_server(protocol, authenticated=False):\n if protocol == \"imap\":\n hostname, port = extract_host_port(app.config['IMAP_ADDRESS'], 143)\n elif protocol == \"pop3\":\n hostname, port = extract_host_port(app.config['POP3_ADDRESS'], 110)\n elif protocol == \"smtp\":\n if authenticated:\n hostname, port = extract_host_port(app.config['AUTHSMTP_ADDRESS'], 10025)\n else:\n hostname, port = extract_host_port(app.config['SMTP_ADDRESS'], 25)\n try:\n # test if hostname is already resolved to an ip adddress\n ipaddress.ip_address(hostname)\n except:\n # hostname is not an ip address - so we need to resolve it\n hostname = resolve_hostname(hostname)\n return hostname, port\n\[email protected](stop=tenacity.stop_after_attempt(100),\n wait=tenacity.wait_random(min=2, max=5))\ndef resolve_hostname(hostname):\n \"\"\" This function uses system DNS to resolve a hostname.\n It is capable of retrying in case the host is not immediately available\n \"\"\"\n return socket.gethostbyname(hostname)\n", "path": "core/admin/mailu/internal/nginx.py"}]} | 1,481 | 291 |
gh_patches_debug_15801 | rasdani/github-patches | git_diff | pyca__cryptography-1430 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
OpenSSL's HMAC Context isn't marked as implementing MACContext
It ought to be.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cryptography/hazmat/backends/commoncrypto/hmac.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16 from cryptography import utils
17 from cryptography.exceptions import UnsupportedAlgorithm, _Reasons
18 from cryptography.hazmat.primitives import interfaces
19
20
21 @utils.register_interface(interfaces.HashContext)
22 class _HMACContext(object):
23 def __init__(self, backend, key, algorithm, ctx=None):
24 self.algorithm = algorithm
25 self._backend = backend
26 if ctx is None:
27 ctx = self._backend._ffi.new("CCHmacContext *")
28 try:
29 alg = self._backend._supported_hmac_algorithms[algorithm.name]
30 except KeyError:
31 raise UnsupportedAlgorithm(
32 "{0} is not a supported HMAC hash on this backend.".format(
33 algorithm.name),
34 _Reasons.UNSUPPORTED_HASH
35 )
36
37 self._backend._lib.CCHmacInit(ctx, alg, key, len(key))
38
39 self._ctx = ctx
40 self._key = key
41
42 def copy(self):
43 copied_ctx = self._backend._ffi.new("CCHmacContext *")
44 # CommonCrypto has no APIs for copying HMACs, so we have to copy the
45 # underlying struct.
46 copied_ctx[0] = self._ctx[0]
47 return _HMACContext(
48 self._backend, self._key, self.algorithm, ctx=copied_ctx
49 )
50
51 def update(self, data):
52 self._backend._lib.CCHmacUpdate(self._ctx, data, len(data))
53
54 def finalize(self):
55 buf = self._backend._ffi.new("unsigned char[]",
56 self.algorithm.digest_size)
57 self._backend._lib.CCHmacFinal(self._ctx, buf)
58 return self._backend._ffi.buffer(buf)[:]
59
```
Path: `cryptography/hazmat/backends/openssl/hmac.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16
17 from cryptography import utils
18 from cryptography.exceptions import UnsupportedAlgorithm, _Reasons
19 from cryptography.hazmat.primitives import interfaces
20
21
22 @utils.register_interface(interfaces.HashContext)
23 class _HMACContext(object):
24 def __init__(self, backend, key, algorithm, ctx=None):
25 self.algorithm = algorithm
26 self._backend = backend
27
28 if ctx is None:
29 ctx = self._backend._ffi.new("HMAC_CTX *")
30 self._backend._lib.HMAC_CTX_init(ctx)
31 ctx = self._backend._ffi.gc(
32 ctx, self._backend._lib.HMAC_CTX_cleanup
33 )
34 evp_md = self._backend._lib.EVP_get_digestbyname(
35 algorithm.name.encode('ascii'))
36 if evp_md == self._backend._ffi.NULL:
37 raise UnsupportedAlgorithm(
38 "{0} is not a supported hash on this backend.".format(
39 algorithm.name),
40 _Reasons.UNSUPPORTED_HASH
41 )
42 res = self._backend._lib.Cryptography_HMAC_Init_ex(
43 ctx, key, len(key), evp_md, self._backend._ffi.NULL
44 )
45 assert res != 0
46
47 self._ctx = ctx
48 self._key = key
49
50 def copy(self):
51 copied_ctx = self._backend._ffi.new("HMAC_CTX *")
52 self._backend._lib.HMAC_CTX_init(copied_ctx)
53 copied_ctx = self._backend._ffi.gc(
54 copied_ctx, self._backend._lib.HMAC_CTX_cleanup
55 )
56 res = self._backend._lib.Cryptography_HMAC_CTX_copy(
57 copied_ctx, self._ctx
58 )
59 assert res != 0
60 return _HMACContext(
61 self._backend, self._key, self.algorithm, ctx=copied_ctx
62 )
63
64 def update(self, data):
65 res = self._backend._lib.Cryptography_HMAC_Update(
66 self._ctx, data, len(data)
67 )
68 assert res != 0
69
70 def finalize(self):
71 buf = self._backend._ffi.new("unsigned char[]",
72 self._backend._lib.EVP_MAX_MD_SIZE)
73 outlen = self._backend._ffi.new("unsigned int *")
74 res = self._backend._lib.Cryptography_HMAC_Final(
75 self._ctx, buf, outlen
76 )
77 assert res != 0
78 assert outlen[0] == self.algorithm.digest_size
79 self._backend._lib.HMAC_CTX_cleanup(self._ctx)
80 return self._backend._ffi.buffer(buf)[:outlen[0]]
81
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cryptography/hazmat/backends/commoncrypto/hmac.py b/cryptography/hazmat/backends/commoncrypto/hmac.py
--- a/cryptography/hazmat/backends/commoncrypto/hmac.py
+++ b/cryptography/hazmat/backends/commoncrypto/hmac.py
@@ -18,6 +18,7 @@
from cryptography.hazmat.primitives import interfaces
[email protected]_interface(interfaces.MACContext)
@utils.register_interface(interfaces.HashContext)
class _HMACContext(object):
def __init__(self, backend, key, algorithm, ctx=None):
diff --git a/cryptography/hazmat/backends/openssl/hmac.py b/cryptography/hazmat/backends/openssl/hmac.py
--- a/cryptography/hazmat/backends/openssl/hmac.py
+++ b/cryptography/hazmat/backends/openssl/hmac.py
@@ -19,6 +19,7 @@
from cryptography.hazmat.primitives import interfaces
[email protected]_interface(interfaces.MACContext)
@utils.register_interface(interfaces.HashContext)
class _HMACContext(object):
def __init__(self, backend, key, algorithm, ctx=None):
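
The change is purely declarative: `utils.register_interface` registers the class as a virtual subclass of the interface's ABC, so `isinstance(hmac_ctx, interfaces.MACContext)` starts returning True without touching any behaviour. A stripped-down sketch of the same mechanism using only the standard library (the class and decorator names mirror the patch but are placeholders, not the actual cryptography API):

```python
import abc


class MACContext(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def update(self, data):
        """Feed more bytes into the MAC."""

    @abc.abstractmethod
    def finalize(self):
        """Return the finished MAC value."""


def register_interface(iface):
    def decorator(cls):
        iface.register(cls)  # virtual subclass: no inheritance required
        return cls
    return decorator


@register_interface(MACContext)
class _HMACContext(object):
    def update(self, data):
        pass

    def finalize(self):
        return b""


print(isinstance(_HMACContext(), MACContext))  # True
```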
| {"golden_diff": "diff --git a/cryptography/hazmat/backends/commoncrypto/hmac.py b/cryptography/hazmat/backends/commoncrypto/hmac.py\n--- a/cryptography/hazmat/backends/commoncrypto/hmac.py\n+++ b/cryptography/hazmat/backends/commoncrypto/hmac.py\n@@ -18,6 +18,7 @@\n from cryptography.hazmat.primitives import interfaces\n \n \[email protected]_interface(interfaces.MACContext)\n @utils.register_interface(interfaces.HashContext)\n class _HMACContext(object):\n def __init__(self, backend, key, algorithm, ctx=None):\ndiff --git a/cryptography/hazmat/backends/openssl/hmac.py b/cryptography/hazmat/backends/openssl/hmac.py\n--- a/cryptography/hazmat/backends/openssl/hmac.py\n+++ b/cryptography/hazmat/backends/openssl/hmac.py\n@@ -19,6 +19,7 @@\n from cryptography.hazmat.primitives import interfaces\n \n \[email protected]_interface(interfaces.MACContext)\n @utils.register_interface(interfaces.HashContext)\n class _HMACContext(object):\n def __init__(self, backend, key, algorithm, ctx=None):\n", "issue": "OpenSSL's HMAC Context isn't marked as implementing MACContext\nIt ought to be.\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom cryptography import utils\nfrom cryptography.exceptions import UnsupportedAlgorithm, _Reasons\nfrom cryptography.hazmat.primitives import interfaces\n\n\[email protected]_interface(interfaces.HashContext)\nclass _HMACContext(object):\n def __init__(self, backend, key, algorithm, ctx=None):\n self.algorithm = algorithm\n self._backend = backend\n if ctx is None:\n ctx = self._backend._ffi.new(\"CCHmacContext *\")\n try:\n alg = self._backend._supported_hmac_algorithms[algorithm.name]\n except KeyError:\n raise UnsupportedAlgorithm(\n \"{0} is not a supported HMAC hash on this backend.\".format(\n algorithm.name),\n _Reasons.UNSUPPORTED_HASH\n )\n\n self._backend._lib.CCHmacInit(ctx, alg, key, len(key))\n\n self._ctx = ctx\n self._key = key\n\n def copy(self):\n copied_ctx = self._backend._ffi.new(\"CCHmacContext *\")\n # CommonCrypto has no APIs for copying HMACs, so we have to copy the\n # underlying struct.\n copied_ctx[0] = self._ctx[0]\n return _HMACContext(\n self._backend, self._key, self.algorithm, ctx=copied_ctx\n )\n\n def update(self, data):\n self._backend._lib.CCHmacUpdate(self._ctx, data, len(data))\n\n def finalize(self):\n buf = self._backend._ffi.new(\"unsigned char[]\",\n self.algorithm.digest_size)\n self._backend._lib.CCHmacFinal(self._ctx, buf)\n return self._backend._ffi.buffer(buf)[:]\n", "path": "cryptography/hazmat/backends/commoncrypto/hmac.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT 
WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\n\nfrom cryptography import utils\nfrom cryptography.exceptions import UnsupportedAlgorithm, _Reasons\nfrom cryptography.hazmat.primitives import interfaces\n\n\[email protected]_interface(interfaces.HashContext)\nclass _HMACContext(object):\n def __init__(self, backend, key, algorithm, ctx=None):\n self.algorithm = algorithm\n self._backend = backend\n\n if ctx is None:\n ctx = self._backend._ffi.new(\"HMAC_CTX *\")\n self._backend._lib.HMAC_CTX_init(ctx)\n ctx = self._backend._ffi.gc(\n ctx, self._backend._lib.HMAC_CTX_cleanup\n )\n evp_md = self._backend._lib.EVP_get_digestbyname(\n algorithm.name.encode('ascii'))\n if evp_md == self._backend._ffi.NULL:\n raise UnsupportedAlgorithm(\n \"{0} is not a supported hash on this backend.\".format(\n algorithm.name),\n _Reasons.UNSUPPORTED_HASH\n )\n res = self._backend._lib.Cryptography_HMAC_Init_ex(\n ctx, key, len(key), evp_md, self._backend._ffi.NULL\n )\n assert res != 0\n\n self._ctx = ctx\n self._key = key\n\n def copy(self):\n copied_ctx = self._backend._ffi.new(\"HMAC_CTX *\")\n self._backend._lib.HMAC_CTX_init(copied_ctx)\n copied_ctx = self._backend._ffi.gc(\n copied_ctx, self._backend._lib.HMAC_CTX_cleanup\n )\n res = self._backend._lib.Cryptography_HMAC_CTX_copy(\n copied_ctx, self._ctx\n )\n assert res != 0\n return _HMACContext(\n self._backend, self._key, self.algorithm, ctx=copied_ctx\n )\n\n def update(self, data):\n res = self._backend._lib.Cryptography_HMAC_Update(\n self._ctx, data, len(data)\n )\n assert res != 0\n\n def finalize(self):\n buf = self._backend._ffi.new(\"unsigned char[]\",\n self._backend._lib.EVP_MAX_MD_SIZE)\n outlen = self._backend._ffi.new(\"unsigned int *\")\n res = self._backend._lib.Cryptography_HMAC_Final(\n self._ctx, buf, outlen\n )\n assert res != 0\n assert outlen[0] == self.algorithm.digest_size\n self._backend._lib.HMAC_CTX_cleanup(self._ctx)\n return self._backend._ffi.buffer(buf)[:outlen[0]]\n", "path": "cryptography/hazmat/backends/openssl/hmac.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom cryptography import utils\nfrom cryptography.exceptions import UnsupportedAlgorithm, _Reasons\nfrom cryptography.hazmat.primitives import interfaces\n\n\[email protected]_interface(interfaces.MACContext)\[email protected]_interface(interfaces.HashContext)\nclass _HMACContext(object):\n def __init__(self, backend, key, algorithm, ctx=None):\n self.algorithm = algorithm\n self._backend = backend\n if ctx is None:\n ctx = self._backend._ffi.new(\"CCHmacContext *\")\n try:\n alg = self._backend._supported_hmac_algorithms[algorithm.name]\n except KeyError:\n raise UnsupportedAlgorithm(\n \"{0} is not a supported HMAC hash on this backend.\".format(\n 
algorithm.name),\n _Reasons.UNSUPPORTED_HASH\n )\n\n self._backend._lib.CCHmacInit(ctx, alg, key, len(key))\n\n self._ctx = ctx\n self._key = key\n\n def copy(self):\n copied_ctx = self._backend._ffi.new(\"CCHmacContext *\")\n # CommonCrypto has no APIs for copying HMACs, so we have to copy the\n # underlying struct.\n copied_ctx[0] = self._ctx[0]\n return _HMACContext(\n self._backend, self._key, self.algorithm, ctx=copied_ctx\n )\n\n def update(self, data):\n self._backend._lib.CCHmacUpdate(self._ctx, data, len(data))\n\n def finalize(self):\n buf = self._backend._ffi.new(\"unsigned char[]\",\n self.algorithm.digest_size)\n self._backend._lib.CCHmacFinal(self._ctx, buf)\n return self._backend._ffi.buffer(buf)[:]\n", "path": "cryptography/hazmat/backends/commoncrypto/hmac.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\n\nfrom cryptography import utils\nfrom cryptography.exceptions import UnsupportedAlgorithm, _Reasons\nfrom cryptography.hazmat.primitives import interfaces\n\n\[email protected]_interface(interfaces.MACContext)\[email protected]_interface(interfaces.HashContext)\nclass _HMACContext(object):\n def __init__(self, backend, key, algorithm, ctx=None):\n self.algorithm = algorithm\n self._backend = backend\n\n if ctx is None:\n ctx = self._backend._ffi.new(\"HMAC_CTX *\")\n self._backend._lib.HMAC_CTX_init(ctx)\n ctx = self._backend._ffi.gc(\n ctx, self._backend._lib.HMAC_CTX_cleanup\n )\n evp_md = self._backend._lib.EVP_get_digestbyname(\n algorithm.name.encode('ascii'))\n if evp_md == self._backend._ffi.NULL:\n raise UnsupportedAlgorithm(\n \"{0} is not a supported hash on this backend.\".format(\n algorithm.name),\n _Reasons.UNSUPPORTED_HASH\n )\n res = self._backend._lib.Cryptography_HMAC_Init_ex(\n ctx, key, len(key), evp_md, self._backend._ffi.NULL\n )\n assert res != 0\n\n self._ctx = ctx\n self._key = key\n\n def copy(self):\n copied_ctx = self._backend._ffi.new(\"HMAC_CTX *\")\n self._backend._lib.HMAC_CTX_init(copied_ctx)\n copied_ctx = self._backend._ffi.gc(\n copied_ctx, self._backend._lib.HMAC_CTX_cleanup\n )\n res = self._backend._lib.Cryptography_HMAC_CTX_copy(\n copied_ctx, self._ctx\n )\n assert res != 0\n return _HMACContext(\n self._backend, self._key, self.algorithm, ctx=copied_ctx\n )\n\n def update(self, data):\n res = self._backend._lib.Cryptography_HMAC_Update(\n self._ctx, data, len(data)\n )\n assert res != 0\n\n def finalize(self):\n buf = self._backend._ffi.new(\"unsigned char[]\",\n self._backend._lib.EVP_MAX_MD_SIZE)\n outlen = self._backend._ffi.new(\"unsigned int *\")\n res = self._backend._lib.Cryptography_HMAC_Final(\n self._ctx, buf, outlen\n )\n assert res != 0\n assert outlen[0] == self.algorithm.digest_size\n self._backend._lib.HMAC_CTX_cleanup(self._ctx)\n return self._backend._ffi.buffer(buf)[:outlen[0]]\n", "path": "cryptography/hazmat/backends/openssl/hmac.py"}]} | 1,741 | 252 |
gh_patches_debug_23552 | rasdani/github-patches | git_diff | dotkom__onlineweb4-1914 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
G Suite group syncer does not respect group syncer whitelist
## What kind of an issue is this?
- [x] Bug report
## What is the expected behaviour?
If a group is not in the OW4_GSUITE_SETTINGS groups list, it should not be affected by changes to the group.
## What is the current behaviour?
If a group's membership is changed, the group is synced, and people will be removed from the group if they are not using their online mail.
## How do you reproduce this problem?
Add a person to, or remove a person from, a group not set up for G Suite syncing (in settings)
--- END ISSUE ---
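
No golden patch is reproduced for this row, but the fix the issue asks for amounts to a guard that consults the configured group list before any sync work runs. A rough, self-contained sketch of such a whitelist check (the `OW4_GSUITE_SETTINGS` name comes from the issue; the `'GROUPS'` key, the group names, and the call site are assumptions for illustration only):

```python
def group_should_be_synced(group_name, gsuite_settings):
    """Return True only for groups explicitly whitelisted for G Suite syncing."""
    whitelisted = gsuite_settings.get('GROUPS', [])
    return group_name.lower() in (name.lower() for name in whitelisted)


# Stand-in for the real Django setting; keys and values are illustrative.
OW4_GSUITE_SETTINGS = {'GROUPS': ['dotkom', 'hovedstyret']}

print(group_should_be_synced('dotKom', OW4_GSUITE_SETTINGS))  # True
print(group_should_be_synced('trikom', OW4_GSUITE_SETTINGS))  # False

# A membership-changed signal handler would bail out early for non-whitelisted groups:
# if not group_should_be_synced(group.name, settings.OW4_GSUITE_SETTINGS):
#     return
```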
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/gsuite/mail_syncer/main.py`
Content:
```
1 import logging
2
3 from apps.gsuite.mail_syncer.utils import (get_excess_groups_for_user, get_excess_users_in_g_suite,
4 get_g_suite_users_for_group,
5 get_missing_g_suite_group_names_for_user,
6 get_missing_ow4_users_for_g_suite,
7 get_ow4_users_for_group,
8 insert_ow4_user_into_g_suite_group,
9 remove_g_suite_user_from_group)
10
11 logger = logging.getLogger(__name__)
12
13
14 def insert_ow4_users_into_g_suite(domain, group_name, missing_users, suppress_http_errors=False):
15 """
16 Inserts a list of OW4 users into a G Suite group.
17 :param domain: The domain in which to insert a user into a group.
18 :type domain: str
19 :param group_name: The name of the group to insert the user into.
20 :type group_name: str
21 :param missing_users: A list of the missing users to be inserted into said group.
22 :type missing_users: list
23 """
24 for missing_user in missing_users:
25 insert_ow4_user_into_g_suite_group(domain, group_name, missing_user, suppress_http_errors=suppress_http_errors)
26
27
28 def remove_excess_g_suite_users(domain, group_name, g_suite_excess_users, suppress_http_errors=False):
29 """
30 Removes excess users from a G Suite group.
31 :param domain: The domain in which to remove a user from a group.
32 :type domain: str
33 :param group_name: The name of the group to remove the users from.
34 :type group_name: str
35 :param g_suite_excess_users: A list of the excess users to be removed from said group.
36 :type g_suite_excess_users: list
37 """
38 logger.info("Cleaning G Suite group '{group}'.".format(group=group_name),
39 extra={'group': group_name, 'excess_users': g_suite_excess_users})
40
41 for excess_user in g_suite_excess_users:
42 resp = remove_g_suite_user_from_group(domain, group_name, excess_user,
43 suppress_http_errors=suppress_http_errors)
44 logger.debug('Response from cleaning {group_name}: {resp}'.format(group_name=group_name, resp=resp))
45
46
47 def insert_ow4_user_into_groups(domain, user, group_names, suppress_http_errors=False):
48 """
49 Inserts a single OW4 user into a G Suite group.
50 :param domain: The domain in which to insert a user into a group.
51 :type domain: str
52 :param user: The user to update group memberships for.
53 :type user: apps.authentication.models.OnlineUser
54 :param group_names: A list of group names to insert the user into.
55 :type group_names: list
56 :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.
57 :type suppress_http_errors: bool
58 """
59 groups = ["{group}@{domain}".format(group=group_name, domain=domain) for group_name in group_names]
60 if groups:
61 logger.info('Inserting {user} into some new G Suite groups.'.format(user=user),
62 extra={'new_groups': group_names, 'user': user})
63 for group in groups:
64 insert_ow4_user_into_g_suite_group(domain, group, user, suppress_http_errors=suppress_http_errors)
65
66
67 def cleanup_groups_for_user(domain, user, suppress_http_errors=False):
68 """
69 Finds excess groups for a OW4 user, and removes the user from said groups.
70 :param domain: The domain in which to find a users excess group memberships.
71 :type domain: str
72 :param user: The user to remove excess group memberships for.
73 :type user: apps.authentication.models.OnlineUser
74 :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.
75 :type suppress_http_errors: bool
76 """
77 excess_groups = get_excess_groups_for_user(domain, user)
78 if excess_groups:
79 logger.debug('Removing "{user}" from some G Suite groups.'.format(user=user),
80 extra={'user': user, 'excess_groups': excess_groups})
81 for group in excess_groups:
82 remove_g_suite_user_from_group(domain, group, user.online_mail, suppress_http_errors=suppress_http_errors)
83
84
85 def update_g_suite_user(domain, ow4_user, suppress_http_errors=False):
86 """
87 Finds missing and excess groups and adds and removes the user to/from them, respectively.
88 :param domain: The domain in which to update a users group memberships.
89 :type domain: str
90 :param ow4_user: The user to update group memberships for.
91 :type ow4_user: apps.authentication.models.OnlineUser
92 :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.
93 :type suppress_http_errors: bool
94 """
95 cleanup_groups_for_user(domain, ow4_user, suppress_http_errors=suppress_http_errors)
96 insert_ow4_user_into_groups(domain, ow4_user, get_missing_g_suite_group_names_for_user(domain, ow4_user),
97 suppress_http_errors=suppress_http_errors)
98
99
100 def update_g_suite_group(domain, group_name, suppress_http_errors=False):
101 """
102 Finds missing and excess users and adds and removes the users to/from them, respectively.
103 :param domain: The domain in which to find a group's user lists.
104 :type domain: str
105 :param group_name: The name of the group to get group membership status for.
106 :type group_name: str
107 :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.
108 :type suppress_http_errors: bool
109 """
110 g_suite_users = get_g_suite_users_for_group(domain, group_name, suppress_http_errors=suppress_http_errors)
111 ow4_users = get_ow4_users_for_group(group_name)
112
113 excess_users = get_excess_users_in_g_suite(g_suite_users, ow4_users)
114 missing_users = get_missing_ow4_users_for_g_suite(g_suite_users, ow4_users)
115
116 # @ToDo: Look into bulk updates
117 insert_ow4_users_into_g_suite(domain, group_name, missing_users, suppress_http_errors=suppress_http_errors)
118 remove_excess_g_suite_users(domain, group_name, excess_users, suppress_http_errors=suppress_http_errors)
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/gsuite/mail_syncer/main.py b/apps/gsuite/mail_syncer/main.py
--- a/apps/gsuite/mail_syncer/main.py
+++ b/apps/gsuite/mail_syncer/main.py
@@ -1,5 +1,7 @@
import logging
+from django.conf import settings
+
from apps.gsuite.mail_syncer.utils import (get_excess_groups_for_user, get_excess_users_in_g_suite,
get_g_suite_users_for_group,
get_missing_g_suite_group_names_for_user,
@@ -107,6 +109,12 @@
:param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.
:type suppress_http_errors: bool
"""
+
+ if group_name.lower() not in settings.OW4_GSUITE_SYNC.get('GROUPS', {}).keys():
+ logger.debug('Not running group syncer for group {} - group syncing not enabled for this group'
+ .format(group_name))
+ return
+
g_suite_users = get_g_suite_users_for_group(domain, group_name, suppress_http_errors=suppress_http_errors)
ow4_users = get_ow4_users_for_group(group_name)
| {"golden_diff": "diff --git a/apps/gsuite/mail_syncer/main.py b/apps/gsuite/mail_syncer/main.py\n--- a/apps/gsuite/mail_syncer/main.py\n+++ b/apps/gsuite/mail_syncer/main.py\n@@ -1,5 +1,7 @@\n import logging\n \n+from django.conf import settings\n+\n from apps.gsuite.mail_syncer.utils import (get_excess_groups_for_user, get_excess_users_in_g_suite,\n get_g_suite_users_for_group,\n get_missing_g_suite_group_names_for_user,\n@@ -107,6 +109,12 @@\n :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.\n :type suppress_http_errors: bool\n \"\"\"\n+\n+ if group_name.lower() not in settings.OW4_GSUITE_SYNC.get('GROUPS', {}).keys():\n+ logger.debug('Not running group syncer for group {} - group syncing not enabled for this group'\n+ .format(group_name))\n+ return\n+\n g_suite_users = get_g_suite_users_for_group(domain, group_name, suppress_http_errors=suppress_http_errors)\n ow4_users = get_ow4_users_for_group(group_name)\n", "issue": "G Suite group syncer does not respect group syncer whitelist\n## What kind of an issue is this?\r\n\r\n- [x] Bug report\r\n\r\n## What is the expected behaviour?\r\n\r\nIf a group is not in the OW4_GSUITE_SETTINGS groups list, it should not be affected by changes to the group.\r\n\r\n## What is the current behaviour?\r\n\r\nIf a group membership is changed, the group is synced, and people will be removed from the group if not using their online mail.\r\n\r\n\r\n## How do you reproduce this problem? \r\n\r\nAdd or remove a person to a group not set up for G Suite syncing (in settings)\r\n\n", "before_files": [{"content": "import logging\n\nfrom apps.gsuite.mail_syncer.utils import (get_excess_groups_for_user, get_excess_users_in_g_suite,\n get_g_suite_users_for_group,\n get_missing_g_suite_group_names_for_user,\n get_missing_ow4_users_for_g_suite,\n get_ow4_users_for_group,\n insert_ow4_user_into_g_suite_group,\n remove_g_suite_user_from_group)\n\nlogger = logging.getLogger(__name__)\n\n\ndef insert_ow4_users_into_g_suite(domain, group_name, missing_users, suppress_http_errors=False):\n \"\"\"\n Inserts a list of OW4 users into a G Suite group.\n :param domain: The domain in which to insert a user into a group.\n :type domain: str\n :param group_name: The name of the group to insert the user into.\n :type group_name: str\n :param missing_users: A list of the missing users to be inserted into said group.\n :type missing_users: list\n \"\"\"\n for missing_user in missing_users:\n insert_ow4_user_into_g_suite_group(domain, group_name, missing_user, suppress_http_errors=suppress_http_errors)\n\n\ndef remove_excess_g_suite_users(domain, group_name, g_suite_excess_users, suppress_http_errors=False):\n \"\"\"\n Removes excess users from a G Suite group.\n :param domain: The domain in which to remove a user from a group.\n :type domain: str\n :param group_name: The name of the group to remove the users from.\n :type group_name: str\n :param g_suite_excess_users: A list of the excess users to be removed from said group.\n :type g_suite_excess_users: list\n \"\"\"\n logger.info(\"Cleaning G Suite group '{group}'.\".format(group=group_name),\n extra={'group': group_name, 'excess_users': g_suite_excess_users})\n\n for excess_user in g_suite_excess_users:\n resp = remove_g_suite_user_from_group(domain, group_name, excess_user,\n suppress_http_errors=suppress_http_errors)\n logger.debug('Response from cleaning {group_name}: {resp}'.format(group_name=group_name, resp=resp))\n\n\ndef insert_ow4_user_into_groups(domain, user, group_names, 
suppress_http_errors=False):\n \"\"\"\n Inserts a single OW4 user into a G Suite group.\n :param domain: The domain in which to insert a user into a group.\n :type domain: str\n :param user: The user to update group memberships for.\n :type user: apps.authentication.models.OnlineUser\n :param group_names: A list of group names to insert the user into.\n :type group_names: list\n :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.\n :type suppress_http_errors: bool\n \"\"\"\n groups = [\"{group}@{domain}\".format(group=group_name, domain=domain) for group_name in group_names]\n if groups:\n logger.info('Inserting {user} into some new G Suite groups.'.format(user=user),\n extra={'new_groups': group_names, 'user': user})\n for group in groups:\n insert_ow4_user_into_g_suite_group(domain, group, user, suppress_http_errors=suppress_http_errors)\n\n\ndef cleanup_groups_for_user(domain, user, suppress_http_errors=False):\n \"\"\"\n Finds excess groups for a OW4 user, and removes the user from said groups.\n :param domain: The domain in which to find a users excess group memberships.\n :type domain: str\n :param user: The user to remove excess group memberships for.\n :type user: apps.authentication.models.OnlineUser\n :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.\n :type suppress_http_errors: bool\n \"\"\"\n excess_groups = get_excess_groups_for_user(domain, user)\n if excess_groups:\n logger.debug('Removing \"{user}\" from some G Suite groups.'.format(user=user),\n extra={'user': user, 'excess_groups': excess_groups})\n for group in excess_groups:\n remove_g_suite_user_from_group(domain, group, user.online_mail, suppress_http_errors=suppress_http_errors)\n\n\ndef update_g_suite_user(domain, ow4_user, suppress_http_errors=False):\n \"\"\"\n Finds missing and excess groups and adds and removes the user to/from them, respectively.\n :param domain: The domain in which to update a users group memberships.\n :type domain: str\n :param ow4_user: The user to update group memberships for.\n :type ow4_user: apps.authentication.models.OnlineUser\n :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.\n :type suppress_http_errors: bool\n \"\"\"\n cleanup_groups_for_user(domain, ow4_user, suppress_http_errors=suppress_http_errors)\n insert_ow4_user_into_groups(domain, ow4_user, get_missing_g_suite_group_names_for_user(domain, ow4_user),\n suppress_http_errors=suppress_http_errors)\n\n\ndef update_g_suite_group(domain, group_name, suppress_http_errors=False):\n \"\"\"\n Finds missing and excess users and adds and removes the users to/from them, respectively.\n :param domain: The domain in which to find a group's user lists.\n :type domain: str\n :param group_name: The name of the group to get group membership status for.\n :type group_name: str\n :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.\n :type suppress_http_errors: bool\n \"\"\"\n g_suite_users = get_g_suite_users_for_group(domain, group_name, suppress_http_errors=suppress_http_errors)\n ow4_users = get_ow4_users_for_group(group_name)\n\n excess_users = get_excess_users_in_g_suite(g_suite_users, ow4_users)\n missing_users = get_missing_ow4_users_for_g_suite(g_suite_users, ow4_users)\n\n # @ToDo: Look into bulk updates\n insert_ow4_users_into_g_suite(domain, group_name, missing_users, suppress_http_errors=suppress_http_errors)\n remove_excess_g_suite_users(domain, group_name, 
excess_users, suppress_http_errors=suppress_http_errors)\n", "path": "apps/gsuite/mail_syncer/main.py"}], "after_files": [{"content": "import logging\n\nfrom django.conf import settings\n\nfrom apps.gsuite.mail_syncer.utils import (get_excess_groups_for_user, get_excess_users_in_g_suite,\n get_g_suite_users_for_group,\n get_missing_g_suite_group_names_for_user,\n get_missing_ow4_users_for_g_suite,\n get_ow4_users_for_group,\n insert_ow4_user_into_g_suite_group,\n remove_g_suite_user_from_group)\n\nlogger = logging.getLogger(__name__)\n\n\ndef insert_ow4_users_into_g_suite(domain, group_name, missing_users, suppress_http_errors=False):\n \"\"\"\n Inserts a list of OW4 users into a G Suite group.\n :param domain: The domain in which to insert a user into a group.\n :type domain: str\n :param group_name: The name of the group to insert the user into.\n :type group_name: str\n :param missing_users: A list of the missing users to be inserted into said group.\n :type missing_users: list\n \"\"\"\n for missing_user in missing_users:\n insert_ow4_user_into_g_suite_group(domain, group_name, missing_user, suppress_http_errors=suppress_http_errors)\n\n\ndef remove_excess_g_suite_users(domain, group_name, g_suite_excess_users, suppress_http_errors=False):\n \"\"\"\n Removes excess users from a G Suite group.\n :param domain: The domain in which to remove a user from a group.\n :type domain: str\n :param group_name: The name of the group to remove the users from.\n :type group_name: str\n :param g_suite_excess_users: A list of the excess users to be removed from said group.\n :type g_suite_excess_users: list\n \"\"\"\n logger.info(\"Cleaning G Suite group '{group}'.\".format(group=group_name),\n extra={'group': group_name, 'excess_users': g_suite_excess_users})\n\n for excess_user in g_suite_excess_users:\n resp = remove_g_suite_user_from_group(domain, group_name, excess_user,\n suppress_http_errors=suppress_http_errors)\n logger.debug('Response from cleaning {group_name}: {resp}'.format(group_name=group_name, resp=resp))\n\n\ndef insert_ow4_user_into_groups(domain, user, group_names, suppress_http_errors=False):\n \"\"\"\n Inserts a single OW4 user into a G Suite group.\n :param domain: The domain in which to insert a user into a group.\n :type domain: str\n :param user: The user to update group memberships for.\n :type user: apps.authentication.models.OnlineUser\n :param group_names: A list of group names to insert the user into.\n :type group_names: list\n :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.\n :type suppress_http_errors: bool\n \"\"\"\n groups = [\"{group}@{domain}\".format(group=group_name, domain=domain) for group_name in group_names]\n if groups:\n logger.info('Inserting {user} into some new G Suite groups.'.format(user=user),\n extra={'new_groups': group_names, 'user': user})\n for group in groups:\n insert_ow4_user_into_g_suite_group(domain, group, user, suppress_http_errors=suppress_http_errors)\n\n\ndef cleanup_groups_for_user(domain, user, suppress_http_errors=False):\n \"\"\"\n Finds excess groups for a OW4 user, and removes the user from said groups.\n :param domain: The domain in which to find a users excess group memberships.\n :type domain: str\n :param user: The user to remove excess group memberships for.\n :type user: apps.authentication.models.OnlineUser\n :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.\n :type suppress_http_errors: bool\n \"\"\"\n excess_groups = 
get_excess_groups_for_user(domain, user)\n if excess_groups:\n logger.debug('Removing \"{user}\" from some G Suite groups.'.format(user=user),\n extra={'user': user, 'excess_groups': excess_groups})\n for group in excess_groups:\n remove_g_suite_user_from_group(domain, group, user.online_mail, suppress_http_errors=suppress_http_errors)\n\n\ndef update_g_suite_user(domain, ow4_user, suppress_http_errors=False):\n \"\"\"\n Finds missing and excess groups and adds and removes the user to/from them, respectively.\n :param domain: The domain in which to update a users group memberships.\n :type domain: str\n :param ow4_user: The user to update group memberships for.\n :type ow4_user: apps.authentication.models.OnlineUser\n :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.\n :type suppress_http_errors: bool\n \"\"\"\n cleanup_groups_for_user(domain, ow4_user, suppress_http_errors=suppress_http_errors)\n insert_ow4_user_into_groups(domain, ow4_user, get_missing_g_suite_group_names_for_user(domain, ow4_user),\n suppress_http_errors=suppress_http_errors)\n\n\ndef update_g_suite_group(domain, group_name, suppress_http_errors=False):\n \"\"\"\n Finds missing and excess users and adds and removes the users to/from them, respectively.\n :param domain: The domain in which to find a group's user lists.\n :type domain: str\n :param group_name: The name of the group to get group membership status for.\n :type group_name: str\n :param suppress_http_errors: Whether or not to suppress HttpErrors happening during execution.\n :type suppress_http_errors: bool\n \"\"\"\n\n if group_name.lower() not in settings.OW4_GSUITE_SYNC.get('GROUPS', {}).keys():\n logger.debug('Not running group syncer for group {} - group syncing not enabled for this group'\n .format(group_name))\n return\n\n g_suite_users = get_g_suite_users_for_group(domain, group_name, suppress_http_errors=suppress_http_errors)\n ow4_users = get_ow4_users_for_group(group_name)\n\n excess_users = get_excess_users_in_g_suite(g_suite_users, ow4_users)\n missing_users = get_missing_ow4_users_for_g_suite(g_suite_users, ow4_users)\n\n # @ToDo: Look into bulk updates\n insert_ow4_users_into_g_suite(domain, group_name, missing_users, suppress_http_errors=suppress_http_errors)\n remove_excess_g_suite_users(domain, group_name, excess_users, suppress_http_errors=suppress_http_errors)\n", "path": "apps/gsuite/mail_syncer/main.py"}]} | 1,971 | 251 |
gh_patches_debug_6549 | rasdani/github-patches | git_diff | encode__uvicorn-754 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Lifespan scope is missing asgi dict
the lifespan scope (https://asgi.readthedocs.io/en/latest/specs/lifespan.html#scope) is missing the asgi dict
prompted by
_Originally posted by @florimondmanca in https://github.com/encode/uvicorn/pull/751_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `uvicorn/lifespan/on.py`
Content:
```
1 import asyncio
2 import logging
3
4 STATE_TRANSITION_ERROR = "Got invalid state transition on lifespan protocol."
5
6
7 class LifespanOn:
8 def __init__(self, config):
9 if not config.loaded:
10 config.load()
11
12 self.config = config
13 self.logger = logging.getLogger("uvicorn.error")
14 self.startup_event = asyncio.Event()
15 self.shutdown_event = asyncio.Event()
16 self.receive_queue = asyncio.Queue()
17 self.error_occured = False
18 self.startup_failed = False
19 self.should_exit = False
20
21 async def startup(self):
22 self.logger.info("Waiting for application startup.")
23
24 loop = asyncio.get_event_loop()
25 loop.create_task(self.main())
26
27 await self.receive_queue.put({"type": "lifespan.startup"})
28 await self.startup_event.wait()
29
30 if self.startup_failed or (self.error_occured and self.config.lifespan == "on"):
31 self.logger.error("Application startup failed. Exiting.")
32 self.should_exit = True
33 else:
34 self.logger.info("Application startup complete.")
35
36 async def shutdown(self):
37 if self.error_occured:
38 return
39 self.logger.info("Waiting for application shutdown.")
40 await self.receive_queue.put({"type": "lifespan.shutdown"})
41 await self.shutdown_event.wait()
42 self.logger.info("Application shutdown complete.")
43
44 async def main(self):
45 try:
46 app = self.config.loaded_app
47 scope = {"type": "lifespan"}
48 await app(scope, self.receive, self.send)
49 except BaseException as exc:
50 self.asgi = None
51 self.error_occured = True
52 if self.startup_failed:
53 return
54 if self.config.lifespan == "auto":
55 msg = "ASGI 'lifespan' protocol appears unsupported."
56 self.logger.info(msg)
57 else:
58 msg = "Exception in 'lifespan' protocol\n"
59 self.logger.error(msg, exc_info=exc)
60 finally:
61 self.startup_event.set()
62 self.shutdown_event.set()
63
64 async def send(self, message):
65 assert message["type"] in (
66 "lifespan.startup.complete",
67 "lifespan.startup.failed",
68 "lifespan.shutdown.complete",
69 )
70
71 if message["type"] == "lifespan.startup.complete":
72 assert not self.startup_event.is_set(), STATE_TRANSITION_ERROR
73 assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR
74 self.startup_event.set()
75
76 elif message["type"] == "lifespan.startup.failed":
77 assert not self.startup_event.is_set(), STATE_TRANSITION_ERROR
78 assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR
79 self.startup_event.set()
80 self.startup_failed = True
81 if message.get("message"):
82 self.logger.error(message["message"])
83
84 elif message["type"] == "lifespan.shutdown.complete":
85 assert self.startup_event.is_set(), STATE_TRANSITION_ERROR
86 assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR
87 self.shutdown_event.set()
88
89 async def receive(self):
90 return await self.receive_queue.get()
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/uvicorn/lifespan/on.py b/uvicorn/lifespan/on.py
--- a/uvicorn/lifespan/on.py
+++ b/uvicorn/lifespan/on.py
@@ -44,7 +44,10 @@
async def main(self):
try:
app = self.config.loaded_app
- scope = {"type": "lifespan"}
+ scope = {
+ "type": "lifespan",
+ "asgi": {"version": self.config.asgi_version, "spec_version": "2.0"},
+ }
await app(scope, self.receive, self.send)
except BaseException as exc:
self.asgi = None
| {"golden_diff": "diff --git a/uvicorn/lifespan/on.py b/uvicorn/lifespan/on.py\n--- a/uvicorn/lifespan/on.py\n+++ b/uvicorn/lifespan/on.py\n@@ -44,7 +44,10 @@\n async def main(self):\n try:\n app = self.config.loaded_app\n- scope = {\"type\": \"lifespan\"}\n+ scope = {\n+ \"type\": \"lifespan\",\n+ \"asgi\": {\"version\": self.config.asgi_version, \"spec_version\": \"2.0\"},\n+ }\n await app(scope, self.receive, self.send)\n except BaseException as exc:\n self.asgi = None\n", "issue": "Lifespan scope is missing asgi dict\nthe lifespan scope (https://asgi.readthedocs.io/en/latest/specs/lifespan.html#scope) is missing the asgi dict\r\n\r\nprompted by\r\n\r\n_Originally posted by @florimondmanca in https://github.com/encode/uvicorn/pull/751_\n", "before_files": [{"content": "import asyncio\nimport logging\n\nSTATE_TRANSITION_ERROR = \"Got invalid state transition on lifespan protocol.\"\n\n\nclass LifespanOn:\n def __init__(self, config):\n if not config.loaded:\n config.load()\n\n self.config = config\n self.logger = logging.getLogger(\"uvicorn.error\")\n self.startup_event = asyncio.Event()\n self.shutdown_event = asyncio.Event()\n self.receive_queue = asyncio.Queue()\n self.error_occured = False\n self.startup_failed = False\n self.should_exit = False\n\n async def startup(self):\n self.logger.info(\"Waiting for application startup.\")\n\n loop = asyncio.get_event_loop()\n loop.create_task(self.main())\n\n await self.receive_queue.put({\"type\": \"lifespan.startup\"})\n await self.startup_event.wait()\n\n if self.startup_failed or (self.error_occured and self.config.lifespan == \"on\"):\n self.logger.error(\"Application startup failed. Exiting.\")\n self.should_exit = True\n else:\n self.logger.info(\"Application startup complete.\")\n\n async def shutdown(self):\n if self.error_occured:\n return\n self.logger.info(\"Waiting for application shutdown.\")\n await self.receive_queue.put({\"type\": \"lifespan.shutdown\"})\n await self.shutdown_event.wait()\n self.logger.info(\"Application shutdown complete.\")\n\n async def main(self):\n try:\n app = self.config.loaded_app\n scope = {\"type\": \"lifespan\"}\n await app(scope, self.receive, self.send)\n except BaseException as exc:\n self.asgi = None\n self.error_occured = True\n if self.startup_failed:\n return\n if self.config.lifespan == \"auto\":\n msg = \"ASGI 'lifespan' protocol appears unsupported.\"\n self.logger.info(msg)\n else:\n msg = \"Exception in 'lifespan' protocol\\n\"\n self.logger.error(msg, exc_info=exc)\n finally:\n self.startup_event.set()\n self.shutdown_event.set()\n\n async def send(self, message):\n assert message[\"type\"] in (\n \"lifespan.startup.complete\",\n \"lifespan.startup.failed\",\n \"lifespan.shutdown.complete\",\n )\n\n if message[\"type\"] == \"lifespan.startup.complete\":\n assert not self.startup_event.is_set(), STATE_TRANSITION_ERROR\n assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR\n self.startup_event.set()\n\n elif message[\"type\"] == \"lifespan.startup.failed\":\n assert not self.startup_event.is_set(), STATE_TRANSITION_ERROR\n assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR\n self.startup_event.set()\n self.startup_failed = True\n if message.get(\"message\"):\n self.logger.error(message[\"message\"])\n\n elif message[\"type\"] == \"lifespan.shutdown.complete\":\n assert self.startup_event.is_set(), STATE_TRANSITION_ERROR\n assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR\n self.shutdown_event.set()\n\n async def receive(self):\n return await 
self.receive_queue.get()\n", "path": "uvicorn/lifespan/on.py"}], "after_files": [{"content": "import asyncio\nimport logging\n\nSTATE_TRANSITION_ERROR = \"Got invalid state transition on lifespan protocol.\"\n\n\nclass LifespanOn:\n def __init__(self, config):\n if not config.loaded:\n config.load()\n\n self.config = config\n self.logger = logging.getLogger(\"uvicorn.error\")\n self.startup_event = asyncio.Event()\n self.shutdown_event = asyncio.Event()\n self.receive_queue = asyncio.Queue()\n self.error_occured = False\n self.startup_failed = False\n self.should_exit = False\n\n async def startup(self):\n self.logger.info(\"Waiting for application startup.\")\n\n loop = asyncio.get_event_loop()\n loop.create_task(self.main())\n\n await self.receive_queue.put({\"type\": \"lifespan.startup\"})\n await self.startup_event.wait()\n\n if self.startup_failed or (self.error_occured and self.config.lifespan == \"on\"):\n self.logger.error(\"Application startup failed. Exiting.\")\n self.should_exit = True\n else:\n self.logger.info(\"Application startup complete.\")\n\n async def shutdown(self):\n if self.error_occured:\n return\n self.logger.info(\"Waiting for application shutdown.\")\n await self.receive_queue.put({\"type\": \"lifespan.shutdown\"})\n await self.shutdown_event.wait()\n self.logger.info(\"Application shutdown complete.\")\n\n async def main(self):\n try:\n app = self.config.loaded_app\n scope = {\n \"type\": \"lifespan\",\n \"asgi\": {\"version\": self.config.asgi_version, \"spec_version\": \"2.0\"},\n }\n await app(scope, self.receive, self.send)\n except BaseException as exc:\n self.asgi = None\n self.error_occured = True\n if self.startup_failed:\n return\n if self.config.lifespan == \"auto\":\n msg = \"ASGI 'lifespan' protocol appears unsupported.\"\n self.logger.info(msg)\n else:\n msg = \"Exception in 'lifespan' protocol\\n\"\n self.logger.error(msg, exc_info=exc)\n finally:\n self.startup_event.set()\n self.shutdown_event.set()\n\n async def send(self, message):\n assert message[\"type\"] in (\n \"lifespan.startup.complete\",\n \"lifespan.startup.failed\",\n \"lifespan.shutdown.complete\",\n )\n\n if message[\"type\"] == \"lifespan.startup.complete\":\n assert not self.startup_event.is_set(), STATE_TRANSITION_ERROR\n assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR\n self.startup_event.set()\n\n elif message[\"type\"] == \"lifespan.startup.failed\":\n assert not self.startup_event.is_set(), STATE_TRANSITION_ERROR\n assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR\n self.startup_event.set()\n self.startup_failed = True\n if message.get(\"message\"):\n self.logger.error(message[\"message\"])\n\n elif message[\"type\"] == \"lifespan.shutdown.complete\":\n assert self.startup_event.is_set(), STATE_TRANSITION_ERROR\n assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR\n self.shutdown_event.set()\n\n async def receive(self):\n return await self.receive_queue.get()\n", "path": "uvicorn/lifespan/on.py"}]} | 1,173 | 153 |
gh_patches_debug_3306 | rasdani/github-patches | git_diff | kivy__python-for-android-1815 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
packaged python is built with IPv6 disabled
Is there any reason for IPv6 being disabled?
https://github.com/kivy/python-for-android/blob/f16a66590c92bb57c69b214b2b69dfaa6dc1e0ed/pythonforandroid/recipes/python3/__init__.py#L37
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pythonforandroid/recipes/python3/__init__.py`
Content:
```
1 import sh
2 from pythonforandroid.python import GuestPythonRecipe
3 from pythonforandroid.recipe import Recipe
4
5
6 class Python3Recipe(GuestPythonRecipe):
7 '''
8 The python3's recipe.
9
10 .. note:: This recipe can be built only against API 21+. Also, in order to
11 build certain python modules, we need to add some extra recipes to our
12 build requirements:
13
14 - ctypes: you must add the recipe for ``libffi``.
15
16 .. versionchanged:: 0.6.0
17 Refactored into class
18 :class:`~pythonforandroid.python.GuestPythonRecipe`
19 '''
20
21 version = '3.7.1'
22 url = 'https://www.python.org/ftp/python/{version}/Python-{version}.tgz'
23 name = 'python3'
24
25 patches = ["patches/fix-ctypes-util-find-library.patch"]
26
27 if sh.which('lld') is not None:
28 patches = patches + ["patches/remove-fix-cortex-a8.patch"]
29
30 depends = ['hostpython3', 'sqlite3', 'openssl', 'libffi']
31 conflicts = ['python3crystax', 'python2', 'python2legacy']
32
33 configure_args = (
34 '--host={android_host}',
35 '--build={android_build}',
36 '--enable-shared',
37 '--disable-ipv6',
38 'ac_cv_file__dev_ptmx=yes',
39 'ac_cv_file__dev_ptc=no',
40 '--without-ensurepip',
41 'ac_cv_little_endian_double=yes',
42 '--prefix={prefix}',
43 '--exec-prefix={exec_prefix}')
44
45 def set_libs_flags(self, env, arch):
46 env = super(Python3Recipe, self).set_libs_flags(env, arch)
47 if 'openssl' in self.ctx.recipe_build_order:
48 recipe = Recipe.get_recipe('openssl', self.ctx)
49 self.configure_args += \
50 ('--with-openssl=' + recipe.get_build_dir(arch.arch),)
51 return env
52
53
54 recipe = Python3Recipe()
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pythonforandroid/recipes/python3/__init__.py b/pythonforandroid/recipes/python3/__init__.py
--- a/pythonforandroid/recipes/python3/__init__.py
+++ b/pythonforandroid/recipes/python3/__init__.py
@@ -34,7 +34,7 @@
'--host={android_host}',
'--build={android_build}',
'--enable-shared',
- '--disable-ipv6',
+ '--enable-ipv6',
'ac_cv_file__dev_ptmx=yes',
'ac_cv_file__dev_ptc=no',
'--without-ensurepip',
| {"golden_diff": "diff --git a/pythonforandroid/recipes/python3/__init__.py b/pythonforandroid/recipes/python3/__init__.py\n--- a/pythonforandroid/recipes/python3/__init__.py\n+++ b/pythonforandroid/recipes/python3/__init__.py\n@@ -34,7 +34,7 @@\n '--host={android_host}',\n '--build={android_build}',\n '--enable-shared',\n- '--disable-ipv6',\n+ '--enable-ipv6',\n 'ac_cv_file__dev_ptmx=yes',\n 'ac_cv_file__dev_ptc=no',\n '--without-ensurepip',\n", "issue": "packaged python is built with IPv6 disabled\nIs there any reason for IPv6 being disabled?\r\nhttps://github.com/kivy/python-for-android/blob/f16a66590c92bb57c69b214b2b69dfaa6dc1e0ed/pythonforandroid/recipes/python3/__init__.py#L37\n", "before_files": [{"content": "import sh\nfrom pythonforandroid.python import GuestPythonRecipe\nfrom pythonforandroid.recipe import Recipe\n\n\nclass Python3Recipe(GuestPythonRecipe):\n '''\n The python3's recipe.\n\n .. note:: This recipe can be built only against API 21+. Also, in order to\n build certain python modules, we need to add some extra recipes to our\n build requirements:\n\n - ctypes: you must add the recipe for ``libffi``.\n\n .. versionchanged:: 0.6.0\n Refactored into class\n :class:`~pythonforandroid.python.GuestPythonRecipe`\n '''\n\n version = '3.7.1'\n url = 'https://www.python.org/ftp/python/{version}/Python-{version}.tgz'\n name = 'python3'\n\n patches = [\"patches/fix-ctypes-util-find-library.patch\"]\n\n if sh.which('lld') is not None:\n patches = patches + [\"patches/remove-fix-cortex-a8.patch\"]\n\n depends = ['hostpython3', 'sqlite3', 'openssl', 'libffi']\n conflicts = ['python3crystax', 'python2', 'python2legacy']\n\n configure_args = (\n '--host={android_host}',\n '--build={android_build}',\n '--enable-shared',\n '--disable-ipv6',\n 'ac_cv_file__dev_ptmx=yes',\n 'ac_cv_file__dev_ptc=no',\n '--without-ensurepip',\n 'ac_cv_little_endian_double=yes',\n '--prefix={prefix}',\n '--exec-prefix={exec_prefix}')\n\n def set_libs_flags(self, env, arch):\n env = super(Python3Recipe, self).set_libs_flags(env, arch)\n if 'openssl' in self.ctx.recipe_build_order:\n recipe = Recipe.get_recipe('openssl', self.ctx)\n self.configure_args += \\\n ('--with-openssl=' + recipe.get_build_dir(arch.arch),)\n return env\n\n\nrecipe = Python3Recipe()\n", "path": "pythonforandroid/recipes/python3/__init__.py"}], "after_files": [{"content": "import sh\nfrom pythonforandroid.python import GuestPythonRecipe\nfrom pythonforandroid.recipe import Recipe\n\n\nclass Python3Recipe(GuestPythonRecipe):\n '''\n The python3's recipe.\n\n .. note:: This recipe can be built only against API 21+. Also, in order to\n build certain python modules, we need to add some extra recipes to our\n build requirements:\n\n - ctypes: you must add the recipe for ``libffi``.\n\n .. 
versionchanged:: 0.6.0\n Refactored into class\n :class:`~pythonforandroid.python.GuestPythonRecipe`\n '''\n\n version = '3.7.1'\n url = 'https://www.python.org/ftp/python/{version}/Python-{version}.tgz'\n name = 'python3'\n\n patches = [\"patches/fix-ctypes-util-find-library.patch\"]\n\n if sh.which('lld') is not None:\n patches = patches + [\"patches/remove-fix-cortex-a8.patch\"]\n\n depends = ['hostpython3', 'sqlite3', 'openssl', 'libffi']\n conflicts = ['python3crystax', 'python2', 'python2legacy']\n\n configure_args = (\n '--host={android_host}',\n '--build={android_build}',\n '--enable-shared',\n '--enable-ipv6',\n 'ac_cv_file__dev_ptmx=yes',\n 'ac_cv_file__dev_ptc=no',\n '--without-ensurepip',\n 'ac_cv_little_endian_double=yes',\n '--prefix={prefix}',\n '--exec-prefix={exec_prefix}')\n\n def set_libs_flags(self, env, arch):\n env = super(Python3Recipe, self).set_libs_flags(env, arch)\n if 'openssl' in self.ctx.recipe_build_order:\n recipe = Recipe.get_recipe('openssl', self.ctx)\n self.configure_args += \\\n ('--with-openssl=' + recipe.get_build_dir(arch.arch),)\n return env\n\n\nrecipe = Python3Recipe()\n", "path": "pythonforandroid/recipes/python3/__init__.py"}]} | 880 | 132 |
gh_patches_debug_596 | rasdani/github-patches | git_diff | pex-tool__pex-1673 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.72
On the docket:
+ [x] Fix Locker to prune un-downloaded entries. (#1666)
+ [x] Fix venv creation to ignore ambient PEX env vars. #1669
+ [x] Lockfiles: requirement might not be compatible with requested interpreter constraints #1667
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.71"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.71"
+__version__ = "2.1.72"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.71\"\n+__version__ = \"2.1.72\"\n", "issue": "Release 2.1.72\nOn the docket:\r\n+ [x] Fix Locker to prune un-downloaded entries. (#1666)\r\n+ [x] Fix venv creation to ignore ambient PEX env vars. #1669\r\n+ [x] Lockfiles: requirement might not be compatible with requested interpreter constraints #1667 \r\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.71\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.72\"\n", "path": "pex/version.py"}]} | 386 | 96 |
gh_patches_debug_361 | rasdani/github-patches | git_diff | NVIDIA__NVFlare-1314 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Command "nvflare" not found
Unfortunately I can't run nvflare on Ubuntu because it tells me that the command (`nvflare simulator -h`) is not found. Even if I use the Docker solution. Inside the docker it also shows "command not found". Thus unable to run local experiments.
I tried the MWE from https://nvflare.readthedocs.io/en/2.2.1/getting_started.html
Used OS: Ubuntu 18.04.5 LTS
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # Copyright (c) 2021, NVIDIA CORPORATION.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # Configuration file for the Sphinx documentation builder.
16 #
17 # This file only contains a selection of the most common options. For a full
18 # list see the documentation:
19 # http://www.sphinx-doc.org/en/master/config
20
21 # -- Path setup --------------------------------------------------------------
22
23 # If extensions (or modules to document with autodoc) are in another directory,
24 # add these directories to sys.path here. If the directory is relative to the
25 # documentation root, use os.path.abspath to make it absolute, like shown here.
26 #
27 import sphinx_rtd_theme
28 import os
29 import sys
30 from sphinx.domains.python import PythonDomain
31 import subprocess
32
33
34 class PatchedPythonDomain(PythonDomain):
35 def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode):
36 if "refspecific" in node:
37 del node["refspecific"]
38 return super(PatchedPythonDomain, self).resolve_xref(env, fromdocname, builder, typ, target, node, contnode)
39
40
41 sys.path.insert(0, os.path.abspath(".."))
42 print(sys.path)
43
44 # -- Project information -----------------------------------------------------
45
46 project = "NVIDIA FLARE"
47 copyright = "2022, NVIDIA"
48 author = "NVIDIA"
49
50 # The full version, including alpha/beta/rc tags
51 release = "2.2.1"
52 version = "2.2.1"
53
54
55 # -- General configuration ---------------------------------------------------
56
57 # Add any Sphinx extension module names here, as strings. They can be
58 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
59 # ones.
60 # Add napoleon to the extensions list
61 # source_parsers = {'.md': CommonMarkParser}
62
63 templates_path = ["templates"]
64
65 source_suffix = {
66 ".rst": "restructuredtext",
67 ".txt": "restructuredtext",
68 ".md": "markdown",
69 }
70
71 extensions = [
72 "recommonmark",
73 "sphinx.ext.intersphinx",
74 "sphinx.ext.mathjax",
75 "sphinx.ext.napoleon",
76 "sphinx.ext.autodoc",
77 "sphinx.ext.viewcode",
78 "sphinx.ext.autosectionlabel",
79 ]
80
81 autoclass_content = "both"
82 add_module_names = False
83 autosectionlabel_prefix_document = True
84
85 # Add any paths that contain templates here, relative to this directory.
86 # templates_path = ['_templates']
87
88 # List of patterns, relative to source directory, that match files and
89 # directories to ignore when looking for source files.
90 # This pattern also affects html_static_path and html_extra_path.
91 exclude_patterns = []
92
93
94 # -- Options for HTML output -------------------------------------------------
95
96 # The theme to use for HTML and HTML Help pages. See the documentation for
97 # a list of builtin themes.
98 #
99 html_theme = "sphinx_rtd_theme"
100 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
101 html_theme_options = {
102 "collapse_navigation": True,
103 "display_version": True,
104 "navigation_depth": 5,
105 "sticky_navigation": True, # Set to False to disable the sticky nav while scrolling.
106 # 'logo_only': True, # if we have a html_logo below, this shows /only/ the logo with no title text
107 }
108 html_scaled_image_link = False
109 html_show_sourcelink = True
110 html_favicon = "favicon.ico"
111
112 # Add any paths that contain custom static files (such as style sheets) here,
113 # relative to this directory. They are copied after the builtin static files,
114 # so a file named "default.css" will overwrite the builtin "default.css".
115 html_static_path = ["_static"]
116
117
118 def generate_apidocs(*args):
119 """Generate API docs automatically by trawling the available modules"""
120 module_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "nvflare"))
121 output_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "apidocs"))
122 print(f"output_path {output_path}")
123 print(f"module_path {module_path}")
124 subprocess.check_call(
125 [sys.executable, "-m", "sphinx.ext.apidoc", "-f", "-e"]
126 + ["-o", output_path]
127 + [module_path]
128 + [os.path.join(module_path, p) for p in exclude_patterns]
129 )
130
131
132 def setup(app):
133 app.connect("builder-inited", generate_apidocs)
134 app.add_domain(PatchedPythonDomain, override=True)
135 app.add_css_file("css/additions.css")
136
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -76,6 +76,7 @@
"sphinx.ext.autodoc",
"sphinx.ext.viewcode",
"sphinx.ext.autosectionlabel",
+ "sphinx_copybutton",
]
autoclass_content = "both"
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -76,6 +76,7 @@\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.autosectionlabel\",\n+ \"sphinx_copybutton\",\n ]\n \n autoclass_content = \"both\"\n", "issue": "[BUG] Command \"nvflare\" not found \nUnfortunately I can't run nvflare on Ubuntu because it tells me that the command (`nvflare simulator -h`) is not found. Even if I use the Docker solution. Inside the docker it also shows \"command not found\". Thus unable to run local experiments.\r\n\r\nI tried the MWE from https://nvflare.readthedocs.io/en/2.2.1/getting_started.html\r\n\r\nUsed OS: Ubuntu 18.04.5 LTS\r\n\n", "before_files": [{"content": "# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport sphinx_rtd_theme\nimport os\nimport sys\nfrom sphinx.domains.python import PythonDomain\nimport subprocess\n\n\nclass PatchedPythonDomain(PythonDomain):\n def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode):\n if \"refspecific\" in node:\n del node[\"refspecific\"]\n return super(PatchedPythonDomain, self).resolve_xref(env, fromdocname, builder, typ, target, node, contnode)\n\n\nsys.path.insert(0, os.path.abspath(\"..\"))\nprint(sys.path)\n\n# -- Project information -----------------------------------------------------\n\nproject = \"NVIDIA FLARE\"\ncopyright = \"2022, NVIDIA\"\nauthor = \"NVIDIA\"\n\n# The full version, including alpha/beta/rc tags\nrelease = \"2.2.1\"\nversion = \"2.2.1\"\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\n# Add napoleon to the extensions list\n# source_parsers = {'.md': CommonMarkParser}\n\ntemplates_path = [\"templates\"]\n\nsource_suffix = {\n \".rst\": \"restructuredtext\",\n \".txt\": \"restructuredtext\",\n \".md\": \"markdown\",\n}\n\nextensions = [\n \"recommonmark\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.autosectionlabel\",\n]\n\nautoclass_content = \"both\"\nadd_module_names = False\nautosectionlabel_prefix_document = True\n\n# Add any paths that contain templates here, relative to this directory.\n# templates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = []\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\nhtml_theme_options = {\n \"collapse_navigation\": True,\n \"display_version\": True,\n \"navigation_depth\": 5,\n \"sticky_navigation\": True, # Set to False to disable the sticky nav while scrolling.\n # 'logo_only': True, # if we have a html_logo below, this shows /only/ the logo with no title text\n}\nhtml_scaled_image_link = False\nhtml_show_sourcelink = True\nhtml_favicon = \"favicon.ico\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n\ndef generate_apidocs(*args):\n \"\"\"Generate API docs automatically by trawling the available modules\"\"\"\n module_path = os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\", \"nvflare\"))\n output_path = os.path.abspath(os.path.join(os.path.dirname(__file__), \"apidocs\"))\n print(f\"output_path {output_path}\")\n print(f\"module_path {module_path}\")\n subprocess.check_call(\n [sys.executable, \"-m\", \"sphinx.ext.apidoc\", \"-f\", \"-e\"]\n + [\"-o\", output_path]\n + [module_path]\n + [os.path.join(module_path, p) for p in exclude_patterns]\n )\n\n\ndef setup(app):\n app.connect(\"builder-inited\", generate_apidocs)\n app.add_domain(PatchedPythonDomain, override=True)\n app.add_css_file(\"css/additions.css\")\n", "path": "docs/conf.py"}], "after_files": [{"content": "# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. 
For a full\n# list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport sphinx_rtd_theme\nimport os\nimport sys\nfrom sphinx.domains.python import PythonDomain\nimport subprocess\n\n\nclass PatchedPythonDomain(PythonDomain):\n def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode):\n if \"refspecific\" in node:\n del node[\"refspecific\"]\n return super(PatchedPythonDomain, self).resolve_xref(env, fromdocname, builder, typ, target, node, contnode)\n\n\nsys.path.insert(0, os.path.abspath(\"..\"))\nprint(sys.path)\n\n# -- Project information -----------------------------------------------------\n\nproject = \"NVIDIA FLARE\"\ncopyright = \"2022, NVIDIA\"\nauthor = \"NVIDIA\"\n\n# The full version, including alpha/beta/rc tags\nrelease = \"2.2.1\"\nversion = \"2.2.1\"\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\n# Add napoleon to the extensions list\n# source_parsers = {'.md': CommonMarkParser}\n\ntemplates_path = [\"templates\"]\n\nsource_suffix = {\n \".rst\": \"restructuredtext\",\n \".txt\": \"restructuredtext\",\n \".md\": \"markdown\",\n}\n\nextensions = [\n \"recommonmark\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.autosectionlabel\",\n \"sphinx_copybutton\",\n]\n\nautoclass_content = \"both\"\nadd_module_names = False\nautosectionlabel_prefix_document = True\n\n# Add any paths that contain templates here, relative to this directory.\n# templates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = []\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\nhtml_theme_options = {\n \"collapse_navigation\": True,\n \"display_version\": True,\n \"navigation_depth\": 5,\n \"sticky_navigation\": True, # Set to False to disable the sticky nav while scrolling.\n # 'logo_only': True, # if we have a html_logo below, this shows /only/ the logo with no title text\n}\nhtml_scaled_image_link = False\nhtml_show_sourcelink = True\nhtml_favicon = \"favicon.ico\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n\ndef generate_apidocs(*args):\n \"\"\"Generate API docs automatically by trawling the available modules\"\"\"\n module_path = os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\", \"nvflare\"))\n output_path = os.path.abspath(os.path.join(os.path.dirname(__file__), \"apidocs\"))\n print(f\"output_path {output_path}\")\n print(f\"module_path {module_path}\")\n subprocess.check_call(\n [sys.executable, \"-m\", \"sphinx.ext.apidoc\", \"-f\", \"-e\"]\n + [\"-o\", output_path]\n + [module_path]\n + [os.path.join(module_path, p) for p in exclude_patterns]\n )\n\n\ndef setup(app):\n app.connect(\"builder-inited\", generate_apidocs)\n app.add_domain(PatchedPythonDomain, override=True)\n app.add_css_file(\"css/additions.css\")\n", "path": "docs/conf.py"}]} | 1,760 | 80 |
gh_patches_debug_906 | rasdani/github-patches | git_diff | dotkom__onlineweb4-501 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
UserResource in API should not display last login date publicly
Somewhat sensitive information...
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/api/v0/authentication.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 from tastypie import fields
4 from tastypie.resources import ModelResource
5 from tastypie.authorization import Authorization
6
7 from apps.authentication.models import OnlineUser as User
8
9 class UserResource(ModelResource):
10
11 class Meta:
12 queryset = User.objects.all()
13 resource_name = 'user'
14 fields = ['username', 'first_name', 'last_name', 'last_login', 'email', ]
15
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/api/v0/authentication.py b/apps/api/v0/authentication.py
--- a/apps/api/v0/authentication.py
+++ b/apps/api/v0/authentication.py
@@ -11,4 +11,4 @@
class Meta:
queryset = User.objects.all()
resource_name = 'user'
- fields = ['username', 'first_name', 'last_name', 'last_login', 'email', ]
+ fields = ['username', 'first_name', 'last_name', 'email', ]
| {"golden_diff": "diff --git a/apps/api/v0/authentication.py b/apps/api/v0/authentication.py\n--- a/apps/api/v0/authentication.py\n+++ b/apps/api/v0/authentication.py\n@@ -11,4 +11,4 @@\n class Meta:\n queryset = User.objects.all()\n resource_name = 'user'\n- fields = ['username', 'first_name', 'last_name', 'last_login', 'email', ]\n+ fields = ['username', 'first_name', 'last_name', 'email', ]\n", "issue": "UserResource in API should not display last login date publicly\nSomewhat sensitive information...\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom tastypie import fields\nfrom tastypie.resources import ModelResource\nfrom tastypie.authorization import Authorization\n\nfrom apps.authentication.models import OnlineUser as User\n\nclass UserResource(ModelResource):\n\n class Meta:\n queryset = User.objects.all()\n resource_name = 'user'\n fields = ['username', 'first_name', 'last_name', 'last_login', 'email', ]\n", "path": "apps/api/v0/authentication.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom tastypie import fields\nfrom tastypie.resources import ModelResource\nfrom tastypie.authorization import Authorization\n\nfrom apps.authentication.models import OnlineUser as User\n\nclass UserResource(ModelResource):\n\n class Meta:\n queryset = User.objects.all()\n resource_name = 'user'\n fields = ['username', 'first_name', 'last_name', 'email', ]\n", "path": "apps/api/v0/authentication.py"}]} | 390 | 108 |
gh_patches_debug_11661 | rasdani/github-patches | git_diff | projectmesa__mesa-1439 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Seed and random should not be class attributes
**Describe the bug**
<!-- A clear and concise description the bug -->
Every time a model instance is created, it updates the seed and random attribute of every other instance of the same class.
**Expected behavior**
<!-- A clear and concise description of what you expected to happen -->
I would expect each instance to have their own attribute. This way, the behavior of each instance is independent and predictable.
**To Reproduce**
<!-- Steps to reproduce the bug, or a link to a project where the bug is visible -->
```pycon
>>> from mesa import Model
>>> class Ex(Model):
... def __init__(self, seed=2):
... pass
...
>>> a = Ex(seed=1)
>>> print(a._seed, a.random)
1 <random.Random object at 0x282a8d0>
>>> b = Ex(seed=2)
>>> print(a._seed, a.random)
2 <random.Random object at 0x282b2d0>
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mesa/model.py`
Content:
```
1 """
2 The model class for Mesa framework.
3
4 Core Objects: Model
5
6 """
7 # Mypy; for the `|` operator purpose
8 # Remove this __future__ import once the oldest supported Python is 3.10
9 from __future__ import annotations
10
11 import random
12
13 from mesa.datacollection import DataCollector
14
15 # mypy
16 from typing import Any
17
18
19 class Model:
20 """Base class for models."""
21
22 def __new__(cls, *args: Any, **kwargs: Any) -> Any:
23 """Create a new model object and instantiate its RNG automatically."""
24 cls._seed = kwargs.get("seed", None)
25 cls.random = random.Random(cls._seed)
26 return object.__new__(cls)
27
28 def __init__(self, *args: Any, **kwargs: Any) -> None:
29 """Create a new model. Overload this method with the actual code to
30 start the model.
31
32 Attributes:
33 schedule: schedule object
34 running: a bool indicating if the model should continue running
35
36 """
37
38 self.running = True
39 self.schedule = None
40 self.current_id = 0
41
42 def run_model(self) -> None:
43 """Run the model until the end condition is reached. Overload as
44 needed.
45
46 """
47 while self.running:
48 self.step()
49
50 def step(self) -> None:
51 """A single step. Fill in here."""
52 pass
53
54 def next_id(self) -> int:
55 """Return the next unique ID for agents, increment current_id"""
56 self.current_id += 1
57 return self.current_id
58
59 def reset_randomizer(self, seed: int | None = None) -> None:
60 """Reset the model random number generator.
61
62 Args:
63 seed: A new seed for the RNG; if None, reset using the current seed
64 """
65
66 if seed is None:
67 seed = self._seed
68 self.random.seed(seed)
69 self._seed = seed
70
71 def initialize_data_collector(
72 self, model_reporters=None, agent_reporters=None, tables=None
73 ) -> None:
74 if not hasattr(self, "schedule") or self.schedule is None:
75 raise RuntimeError(
76 "You must initialize the scheduler (self.schedule) before initializing the data collector."
77 )
78 if self.schedule.get_agent_count() == 0:
79 raise RuntimeError(
80 "You must add agents to the scheduler before initializing the data collector."
81 )
82 self.datacollector = DataCollector(
83 model_reporters=model_reporters,
84 agent_reporters=agent_reporters,
85 tables=tables,
86 )
87 # Collect data for the first time during initialization.
88 self.datacollector.collect(self)
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mesa/model.py b/mesa/model.py
--- a/mesa/model.py
+++ b/mesa/model.py
@@ -21,9 +21,10 @@
def __new__(cls, *args: Any, **kwargs: Any) -> Any:
"""Create a new model object and instantiate its RNG automatically."""
- cls._seed = kwargs.get("seed", None)
- cls.random = random.Random(cls._seed)
- return object.__new__(cls)
+ obj = object.__new__(cls)
+ obj._seed = kwargs.get("seed", None)
+ obj.random = random.Random(obj._seed)
+ return obj
def __init__(self, *args: Any, **kwargs: Any) -> None:
"""Create a new model. Overload this method with the actual code to
| {"golden_diff": "diff --git a/mesa/model.py b/mesa/model.py\n--- a/mesa/model.py\n+++ b/mesa/model.py\n@@ -21,9 +21,10 @@\n \n def __new__(cls, *args: Any, **kwargs: Any) -> Any:\n \"\"\"Create a new model object and instantiate its RNG automatically.\"\"\"\n- cls._seed = kwargs.get(\"seed\", None)\n- cls.random = random.Random(cls._seed)\n- return object.__new__(cls)\n+ obj = object.__new__(cls)\n+ obj._seed = kwargs.get(\"seed\", None)\n+ obj.random = random.Random(obj._seed)\n+ return obj\n \n def __init__(self, *args: Any, **kwargs: Any) -> None:\n \"\"\"Create a new model. Overload this method with the actual code to\n", "issue": "Seed and random should not be class attributes\n**Describe the bug**\r\n<!-- A clear and concise description the bug -->\r\n\r\nEvery time a model instance is created, it updates the seed and random attribute of every other instance of the same class.\r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen -->\r\n\r\nI would expect each instance to have their own attribute. This way, the behavior of each instance is independent and predictable.\r\n\r\n**To Reproduce**\r\n<!-- Steps to reproduce the bug, or a link to a project where the bug is visible -->\r\n\r\n```pycon\r\n>>> from mesa import Model\r\n>>> class Ex(Model):\r\n... def __init__(self, seed=2):\r\n... pass\r\n...\r\n>>> a = Ex(seed=1)\r\n>>> print(a._seed, a.random)\r\n1 <random.Random object at 0x282a8d0>\r\n>>> b = Ex(seed=2)\r\n>>> print(a._seed, a.random)\r\n2 <random.Random object at 0x282b2d0>\r\n```\n", "before_files": [{"content": "\"\"\"\nThe model class for Mesa framework.\n\nCore Objects: Model\n\n\"\"\"\n# Mypy; for the `|` operator purpose\n# Remove this __future__ import once the oldest supported Python is 3.10\nfrom __future__ import annotations\n\nimport random\n\nfrom mesa.datacollection import DataCollector\n\n# mypy\nfrom typing import Any\n\n\nclass Model:\n \"\"\"Base class for models.\"\"\"\n\n def __new__(cls, *args: Any, **kwargs: Any) -> Any:\n \"\"\"Create a new model object and instantiate its RNG automatically.\"\"\"\n cls._seed = kwargs.get(\"seed\", None)\n cls.random = random.Random(cls._seed)\n return object.__new__(cls)\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n \"\"\"Create a new model. Overload this method with the actual code to\n start the model.\n\n Attributes:\n schedule: schedule object\n running: a bool indicating if the model should continue running\n\n \"\"\"\n\n self.running = True\n self.schedule = None\n self.current_id = 0\n\n def run_model(self) -> None:\n \"\"\"Run the model until the end condition is reached. Overload as\n needed.\n\n \"\"\"\n while self.running:\n self.step()\n\n def step(self) -> None:\n \"\"\"A single step. 
Fill in here.\"\"\"\n pass\n\n def next_id(self) -> int:\n \"\"\"Return the next unique ID for agents, increment current_id\"\"\"\n self.current_id += 1\n return self.current_id\n\n def reset_randomizer(self, seed: int | None = None) -> None:\n \"\"\"Reset the model random number generator.\n\n Args:\n seed: A new seed for the RNG; if None, reset using the current seed\n \"\"\"\n\n if seed is None:\n seed = self._seed\n self.random.seed(seed)\n self._seed = seed\n\n def initialize_data_collector(\n self, model_reporters=None, agent_reporters=None, tables=None\n ) -> None:\n if not hasattr(self, \"schedule\") or self.schedule is None:\n raise RuntimeError(\n \"You must initialize the scheduler (self.schedule) before initializing the data collector.\"\n )\n if self.schedule.get_agent_count() == 0:\n raise RuntimeError(\n \"You must add agents to the scheduler before initializing the data collector.\"\n )\n self.datacollector = DataCollector(\n model_reporters=model_reporters,\n agent_reporters=agent_reporters,\n tables=tables,\n )\n # Collect data for the first time during initialization.\n self.datacollector.collect(self)\n", "path": "mesa/model.py"}], "after_files": [{"content": "\"\"\"\nThe model class for Mesa framework.\n\nCore Objects: Model\n\n\"\"\"\n# Mypy; for the `|` operator purpose\n# Remove this __future__ import once the oldest supported Python is 3.10\nfrom __future__ import annotations\n\nimport random\n\nfrom mesa.datacollection import DataCollector\n\n# mypy\nfrom typing import Any\n\n\nclass Model:\n \"\"\"Base class for models.\"\"\"\n\n def __new__(cls, *args: Any, **kwargs: Any) -> Any:\n \"\"\"Create a new model object and instantiate its RNG automatically.\"\"\"\n obj = object.__new__(cls)\n obj._seed = kwargs.get(\"seed\", None)\n obj.random = random.Random(obj._seed)\n return obj\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n \"\"\"Create a new model. Overload this method with the actual code to\n start the model.\n\n Attributes:\n schedule: schedule object\n running: a bool indicating if the model should continue running\n\n \"\"\"\n\n self.running = True\n self.schedule = None\n self.current_id = 0\n\n def run_model(self) -> None:\n \"\"\"Run the model until the end condition is reached. Overload as\n needed.\n\n \"\"\"\n while self.running:\n self.step()\n\n def step(self) -> None:\n \"\"\"A single step. 
Fill in here.\"\"\"\n pass\n\n def next_id(self) -> int:\n \"\"\"Return the next unique ID for agents, increment current_id\"\"\"\n self.current_id += 1\n return self.current_id\n\n def reset_randomizer(self, seed: int | None = None) -> None:\n \"\"\"Reset the model random number generator.\n\n Args:\n seed: A new seed for the RNG; if None, reset using the current seed\n \"\"\"\n\n if seed is None:\n seed = self._seed\n self.random.seed(seed)\n self._seed = seed\n\n def initialize_data_collector(\n self, model_reporters=None, agent_reporters=None, tables=None\n ) -> None:\n if not hasattr(self, \"schedule\") or self.schedule is None:\n raise RuntimeError(\n \"You must initialize the scheduler (self.schedule) before initializing the data collector.\"\n )\n if self.schedule.get_agent_count() == 0:\n raise RuntimeError(\n \"You must add agents to the scheduler before initializing the data collector.\"\n )\n self.datacollector = DataCollector(\n model_reporters=model_reporters,\n agent_reporters=agent_reporters,\n tables=tables,\n )\n # Collect data for the first time during initialization.\n self.datacollector.collect(self)\n", "path": "mesa/model.py"}]} | 1,215 | 184 |
gh_patches_debug_25285 | rasdani/github-patches | git_diff | DataDog__dd-agent-2139 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot find Docker cgroup directory. Be sure your system is supported.
Running docker 1.8.3 on Ubuntu 14.04. Running dd-agent under Kubernetes. Kept getting the error message in the subject until I made a patch to 5.6.2 https://gist.github.com/bakins/00cc12f26e2947af6679
I am not running the packaged version of docker (ie, the deb) - just the binary (issues with Kubernetes for me using packaged version).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `utils/dockerutil.py`
Content:
```
1 # stdlib
2 import os
3
4 # 3rd party
5 from docker import Client
6 from docker import tls
7
8
9 class MountException(Exception):
10 pass
11
12 # Default docker client settings
13 DEFAULT_TIMEOUT = 5
14 DEFAULT_VERSION = 'auto'
15
16 _docker_client_settings = {"version": DEFAULT_VERSION}
17
18
19 def get_docker_settings():
20 global _docker_client_settings
21 return _docker_client_settings
22
23
24 def reset_docker_settings():
25 global _docker_client_settings
26 _docker_client_settings = {"version": DEFAULT_VERSION}
27
28
29 def set_docker_settings(init_config, instance):
30 global _docker_client_settings
31 _docker_client_settings = {
32 "version": init_config.get('api_version', DEFAULT_VERSION),
33 "base_url": instance.get("url"),
34 "timeout": int(init_config.get('timeout', DEFAULT_TIMEOUT)),
35 }
36
37 if init_config.get('tls', False):
38 client_cert_path = init_config.get('tls_client_cert')
39 client_key_path = init_config.get('tls_client_key')
40 cacert = init_config.get('tls_cacert')
41 verify = init_config.get('tls_verify')
42
43 client_cert = None
44 if client_cert_path is not None and client_key_path is not None:
45 client_cert = (client_cert_path, client_key_path)
46
47 verify = verify if verify is not None else cacert
48 tls_config = tls.TLSConfig(client_cert=client_cert, verify=verify)
49 _docker_client_settings["tls"] = tls_config
50
51
52 def get_client():
53 return Client(**_docker_client_settings)
54
55
56 def find_cgroup(hierarchy, docker_root):
57 """Find the mount point for a specified cgroup hierarchy.
58
59 Works with old style and new style mounts.
60 """
61 with open(os.path.join(docker_root, "/proc/mounts"), 'r') as fp:
62 mounts = map(lambda x: x.split(), fp.read().splitlines())
63 cgroup_mounts = filter(lambda x: x[2] == "cgroup", mounts)
64 if len(cgroup_mounts) == 0:
65 raise Exception(
66 "Can't find mounted cgroups. If you run the Agent inside a container,"
67 " please refer to the documentation.")
68 # Old cgroup style
69 if len(cgroup_mounts) == 1:
70 return os.path.join(docker_root, cgroup_mounts[0][1])
71
72 candidate = None
73 for _, mountpoint, _, opts, _, _ in cgroup_mounts:
74 if hierarchy in opts:
75 if mountpoint.startswith("/host/"):
76 return os.path.join(docker_root, mountpoint)
77 candidate = mountpoint
78
79 if candidate is not None:
80 return os.path.join(docker_root, candidate)
81 raise Exception("Can't find mounted %s cgroups." % hierarchy)
82
83
84 def find_cgroup_filename_pattern(mountpoints, container_id):
85 # We try with different cgroups so that it works even if only one is properly working
86 for mountpoint in mountpoints.itervalues():
87 stat_file_path_lxc = os.path.join(mountpoint, "lxc")
88 stat_file_path_docker = os.path.join(mountpoint, "docker")
89 stat_file_path_coreos = os.path.join(mountpoint, "system.slice")
90 stat_file_path_kubernetes = os.path.join(mountpoint, container_id)
91 stat_file_path_kubernetes_docker = os.path.join(mountpoint, "system", "docker", container_id)
92
93 if os.path.exists(stat_file_path_lxc):
94 return os.path.join('%(mountpoint)s/lxc/%(id)s/%(file)s')
95 elif os.path.exists(stat_file_path_docker):
96 return os.path.join('%(mountpoint)s/docker/%(id)s/%(file)s')
97 elif os.path.exists(stat_file_path_coreos):
98 return os.path.join('%(mountpoint)s/system.slice/docker-%(id)s.scope/%(file)s')
99 elif os.path.exists(stat_file_path_kubernetes):
100 return os.path.join('%(mountpoint)s/%(id)s/%(file)s')
101 elif os.path.exists(stat_file_path_kubernetes_docker):
102 return os.path.join('%(mountpoint)s/system/docker/%(id)s/%(file)s')
103
104 raise MountException("Cannot find Docker cgroup directory. Be sure your system is supported.")
105
106
107 def image_tag_extractor(entity, key):
108 if "Image" in entity:
109 split = entity["Image"].split(":")
110 if len(split) <= key:
111 return None
112 elif len(split) > 2:
113 # if the repo is in the image name and has the form 'docker.clearbit:5000'
114 # the split will be like [repo_url, repo_port/image_name, image_tag]. Let's avoid that
115 split = [':'.join(split[:-1]), split[-1]]
116 return [split[key]]
117 if "RepoTags" in entity:
118 splits = [el.split(":") for el in entity["RepoTags"]]
119 tags = set()
120 for split in splits:
121 if len(split) > 2:
122 split = [':'.join(split[:-1]), split[-1]]
123 if len(split) > key:
124 tags.add(split[key])
125 if len(tags) > 0:
126 return list(tags)
127 return None
128
129
130 def container_name_extractor(co):
131 names = co.get('Names', [])
132 if names is not None:
133 # we sort the list to make sure that a docker API update introducing
134 # new names with a single "/" won't make us report dups.
135 names = sorted(names)
136 for name in names:
137 # the leading "/" is legit, if there's another one it means the name is actually an alias
138 if name.count('/') <= 1:
139 return [str(name).lstrip('/')]
140 return co.get('Id')[:11]
141
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/utils/dockerutil.py b/utils/dockerutil.py
--- a/utils/dockerutil.py
+++ b/utils/dockerutil.py
@@ -89,6 +89,7 @@
stat_file_path_coreos = os.path.join(mountpoint, "system.slice")
stat_file_path_kubernetes = os.path.join(mountpoint, container_id)
stat_file_path_kubernetes_docker = os.path.join(mountpoint, "system", "docker", container_id)
+ stat_file_path_docker_daemon = os.path.join(mountpoint, "docker-daemon", "docker", container_id)
if os.path.exists(stat_file_path_lxc):
return os.path.join('%(mountpoint)s/lxc/%(id)s/%(file)s')
@@ -100,6 +101,9 @@
return os.path.join('%(mountpoint)s/%(id)s/%(file)s')
elif os.path.exists(stat_file_path_kubernetes_docker):
return os.path.join('%(mountpoint)s/system/docker/%(id)s/%(file)s')
+ elif os.path.exists(stat_file_path_docker_daemon):
+ return os.path.join('%(mountpoint)s/docker-daemon/docker/%(id)s/%(file)s')
+
raise MountException("Cannot find Docker cgroup directory. Be sure your system is supported.")
| {"golden_diff": "diff --git a/utils/dockerutil.py b/utils/dockerutil.py\n--- a/utils/dockerutil.py\n+++ b/utils/dockerutil.py\n@@ -89,6 +89,7 @@\n stat_file_path_coreos = os.path.join(mountpoint, \"system.slice\")\n stat_file_path_kubernetes = os.path.join(mountpoint, container_id)\n stat_file_path_kubernetes_docker = os.path.join(mountpoint, \"system\", \"docker\", container_id)\n+ stat_file_path_docker_daemon = os.path.join(mountpoint, \"docker-daemon\", \"docker\", container_id)\n \n if os.path.exists(stat_file_path_lxc):\n return os.path.join('%(mountpoint)s/lxc/%(id)s/%(file)s')\n@@ -100,6 +101,9 @@\n return os.path.join('%(mountpoint)s/%(id)s/%(file)s')\n elif os.path.exists(stat_file_path_kubernetes_docker):\n return os.path.join('%(mountpoint)s/system/docker/%(id)s/%(file)s')\n+ elif os.path.exists(stat_file_path_docker_daemon):\n+ return os.path.join('%(mountpoint)s/docker-daemon/docker/%(id)s/%(file)s')\n+\n \n raise MountException(\"Cannot find Docker cgroup directory. Be sure your system is supported.\")\n", "issue": "Cannot find Docker cgroup directory. Be sure your system is supported.\nRunning docker 1.8.3 on Ubuntu 14.04. Running dd-agent under Kubernetes. Kept getting the error message in the subject until I made a patch to 5.6.2 https://gist.github.com/bakins/00cc12f26e2947af6679\n\nI am not running the packaged version of docker (ie, the deb) - just the binary (issues with Kubernetes for me using packaged version).\n\n", "before_files": [{"content": "# stdlib\nimport os\n\n# 3rd party\nfrom docker import Client\nfrom docker import tls\n\n\nclass MountException(Exception):\n pass\n\n# Default docker client settings\nDEFAULT_TIMEOUT = 5\nDEFAULT_VERSION = 'auto'\n\n_docker_client_settings = {\"version\": DEFAULT_VERSION}\n\n\ndef get_docker_settings():\n global _docker_client_settings\n return _docker_client_settings\n\n\ndef reset_docker_settings():\n global _docker_client_settings\n _docker_client_settings = {\"version\": DEFAULT_VERSION}\n\n\ndef set_docker_settings(init_config, instance):\n global _docker_client_settings\n _docker_client_settings = {\n \"version\": init_config.get('api_version', DEFAULT_VERSION),\n \"base_url\": instance.get(\"url\"),\n \"timeout\": int(init_config.get('timeout', DEFAULT_TIMEOUT)),\n }\n\n if init_config.get('tls', False):\n client_cert_path = init_config.get('tls_client_cert')\n client_key_path = init_config.get('tls_client_key')\n cacert = init_config.get('tls_cacert')\n verify = init_config.get('tls_verify')\n\n client_cert = None\n if client_cert_path is not None and client_key_path is not None:\n client_cert = (client_cert_path, client_key_path)\n\n verify = verify if verify is not None else cacert\n tls_config = tls.TLSConfig(client_cert=client_cert, verify=verify)\n _docker_client_settings[\"tls\"] = tls_config\n\n\ndef get_client():\n return Client(**_docker_client_settings)\n\n\ndef find_cgroup(hierarchy, docker_root):\n \"\"\"Find the mount point for a specified cgroup hierarchy.\n\n Works with old style and new style mounts.\n \"\"\"\n with open(os.path.join(docker_root, \"/proc/mounts\"), 'r') as fp:\n mounts = map(lambda x: x.split(), fp.read().splitlines())\n cgroup_mounts = filter(lambda x: x[2] == \"cgroup\", mounts)\n if len(cgroup_mounts) == 0:\n raise Exception(\n \"Can't find mounted cgroups. 
If you run the Agent inside a container,\"\n \" please refer to the documentation.\")\n # Old cgroup style\n if len(cgroup_mounts) == 1:\n return os.path.join(docker_root, cgroup_mounts[0][1])\n\n candidate = None\n for _, mountpoint, _, opts, _, _ in cgroup_mounts:\n if hierarchy in opts:\n if mountpoint.startswith(\"/host/\"):\n return os.path.join(docker_root, mountpoint)\n candidate = mountpoint\n\n if candidate is not None:\n return os.path.join(docker_root, candidate)\n raise Exception(\"Can't find mounted %s cgroups.\" % hierarchy)\n\n\ndef find_cgroup_filename_pattern(mountpoints, container_id):\n # We try with different cgroups so that it works even if only one is properly working\n for mountpoint in mountpoints.itervalues():\n stat_file_path_lxc = os.path.join(mountpoint, \"lxc\")\n stat_file_path_docker = os.path.join(mountpoint, \"docker\")\n stat_file_path_coreos = os.path.join(mountpoint, \"system.slice\")\n stat_file_path_kubernetes = os.path.join(mountpoint, container_id)\n stat_file_path_kubernetes_docker = os.path.join(mountpoint, \"system\", \"docker\", container_id)\n\n if os.path.exists(stat_file_path_lxc):\n return os.path.join('%(mountpoint)s/lxc/%(id)s/%(file)s')\n elif os.path.exists(stat_file_path_docker):\n return os.path.join('%(mountpoint)s/docker/%(id)s/%(file)s')\n elif os.path.exists(stat_file_path_coreos):\n return os.path.join('%(mountpoint)s/system.slice/docker-%(id)s.scope/%(file)s')\n elif os.path.exists(stat_file_path_kubernetes):\n return os.path.join('%(mountpoint)s/%(id)s/%(file)s')\n elif os.path.exists(stat_file_path_kubernetes_docker):\n return os.path.join('%(mountpoint)s/system/docker/%(id)s/%(file)s')\n\n raise MountException(\"Cannot find Docker cgroup directory. Be sure your system is supported.\")\n\n\ndef image_tag_extractor(entity, key):\n if \"Image\" in entity:\n split = entity[\"Image\"].split(\":\")\n if len(split) <= key:\n return None\n elif len(split) > 2:\n # if the repo is in the image name and has the form 'docker.clearbit:5000'\n # the split will be like [repo_url, repo_port/image_name, image_tag]. 
Let's avoid that\n split = [':'.join(split[:-1]), split[-1]]\n return [split[key]]\n if \"RepoTags\" in entity:\n splits = [el.split(\":\") for el in entity[\"RepoTags\"]]\n tags = set()\n for split in splits:\n if len(split) > 2:\n split = [':'.join(split[:-1]), split[-1]]\n if len(split) > key:\n tags.add(split[key])\n if len(tags) > 0:\n return list(tags)\n return None\n\n\ndef container_name_extractor(co):\n names = co.get('Names', [])\n if names is not None:\n # we sort the list to make sure that a docker API update introducing\n # new names with a single \"/\" won't make us report dups.\n names = sorted(names)\n for name in names:\n # the leading \"/\" is legit, if there's another one it means the name is actually an alias\n if name.count('/') <= 1:\n return [str(name).lstrip('/')]\n return co.get('Id')[:11]\n", "path": "utils/dockerutil.py"}], "after_files": [{"content": "# stdlib\nimport os\n\n# 3rd party\nfrom docker import Client\nfrom docker import tls\n\n\nclass MountException(Exception):\n pass\n\n# Default docker client settings\nDEFAULT_TIMEOUT = 5\nDEFAULT_VERSION = 'auto'\n\n_docker_client_settings = {\"version\": DEFAULT_VERSION}\n\n\ndef get_docker_settings():\n global _docker_client_settings\n return _docker_client_settings\n\n\ndef reset_docker_settings():\n global _docker_client_settings\n _docker_client_settings = {\"version\": DEFAULT_VERSION}\n\n\ndef set_docker_settings(init_config, instance):\n global _docker_client_settings\n _docker_client_settings = {\n \"version\": init_config.get('api_version', DEFAULT_VERSION),\n \"base_url\": instance.get(\"url\"),\n \"timeout\": int(init_config.get('timeout', DEFAULT_TIMEOUT)),\n }\n\n if init_config.get('tls', False):\n client_cert_path = init_config.get('tls_client_cert')\n client_key_path = init_config.get('tls_client_key')\n cacert = init_config.get('tls_cacert')\n verify = init_config.get('tls_verify')\n\n client_cert = None\n if client_cert_path is not None and client_key_path is not None:\n client_cert = (client_cert_path, client_key_path)\n\n verify = verify if verify is not None else cacert\n tls_config = tls.TLSConfig(client_cert=client_cert, verify=verify)\n _docker_client_settings[\"tls\"] = tls_config\n\n\ndef get_client():\n return Client(**_docker_client_settings)\n\n\ndef find_cgroup(hierarchy, docker_root):\n \"\"\"Find the mount point for a specified cgroup hierarchy.\n\n Works with old style and new style mounts.\n \"\"\"\n with open(os.path.join(docker_root, \"/proc/mounts\"), 'r') as fp:\n mounts = map(lambda x: x.split(), fp.read().splitlines())\n cgroup_mounts = filter(lambda x: x[2] == \"cgroup\", mounts)\n if len(cgroup_mounts) == 0:\n raise Exception(\n \"Can't find mounted cgroups. 
If you run the Agent inside a container,\"\n \" please refer to the documentation.\")\n # Old cgroup style\n if len(cgroup_mounts) == 1:\n return os.path.join(docker_root, cgroup_mounts[0][1])\n\n candidate = None\n for _, mountpoint, _, opts, _, _ in cgroup_mounts:\n if hierarchy in opts:\n if mountpoint.startswith(\"/host/\"):\n return os.path.join(docker_root, mountpoint)\n candidate = mountpoint\n\n if candidate is not None:\n return os.path.join(docker_root, candidate)\n raise Exception(\"Can't find mounted %s cgroups.\" % hierarchy)\n\n\ndef find_cgroup_filename_pattern(mountpoints, container_id):\n # We try with different cgroups so that it works even if only one is properly working\n for mountpoint in mountpoints.itervalues():\n stat_file_path_lxc = os.path.join(mountpoint, \"lxc\")\n stat_file_path_docker = os.path.join(mountpoint, \"docker\")\n stat_file_path_coreos = os.path.join(mountpoint, \"system.slice\")\n stat_file_path_kubernetes = os.path.join(mountpoint, container_id)\n stat_file_path_kubernetes_docker = os.path.join(mountpoint, \"system\", \"docker\", container_id)\n stat_file_path_docker_daemon = os.path.join(mountpoint, \"docker-daemon\", \"docker\", container_id)\n\n if os.path.exists(stat_file_path_lxc):\n return os.path.join('%(mountpoint)s/lxc/%(id)s/%(file)s')\n elif os.path.exists(stat_file_path_docker):\n return os.path.join('%(mountpoint)s/docker/%(id)s/%(file)s')\n elif os.path.exists(stat_file_path_coreos):\n return os.path.join('%(mountpoint)s/system.slice/docker-%(id)s.scope/%(file)s')\n elif os.path.exists(stat_file_path_kubernetes):\n return os.path.join('%(mountpoint)s/%(id)s/%(file)s')\n elif os.path.exists(stat_file_path_kubernetes_docker):\n return os.path.join('%(mountpoint)s/system/docker/%(id)s/%(file)s')\n elif os.path.exists(stat_file_path_docker_daemon):\n return os.path.join('%(mountpoint)s/docker-daemon/docker/%(id)s/%(file)s')\n\n\n raise MountException(\"Cannot find Docker cgroup directory. Be sure your system is supported.\")\n\n\ndef image_tag_extractor(entity, key):\n if \"Image\" in entity:\n split = entity[\"Image\"].split(\":\")\n if len(split) <= key:\n return None\n elif len(split) > 2:\n # if the repo is in the image name and has the form 'docker.clearbit:5000'\n # the split will be like [repo_url, repo_port/image_name, image_tag]. Let's avoid that\n split = [':'.join(split[:-1]), split[-1]]\n return [split[key]]\n if \"RepoTags\" in entity:\n splits = [el.split(\":\") for el in entity[\"RepoTags\"]]\n tags = set()\n for split in splits:\n if len(split) > 2:\n split = [':'.join(split[:-1]), split[-1]]\n if len(split) > key:\n tags.add(split[key])\n if len(tags) > 0:\n return list(tags)\n return None\n\n\ndef container_name_extractor(co):\n names = co.get('Names', [])\n if names is not None:\n # we sort the list to make sure that a docker API update introducing\n # new names with a single \"/\" won't make us report dups.\n names = sorted(names)\n for name in names:\n # the leading \"/\" is legit, if there's another one it means the name is actually an alias\n if name.count('/') <= 1:\n return [str(name).lstrip('/')]\n return co.get('Id')[:11]\n", "path": "utils/dockerutil.py"}]} | 1,944 | 287 |
gh_patches_debug_11715 | rasdani/github-patches | git_diff | safe-global__safe-config-service-361 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Gas price editable from the Chain model
Please make the Gas Price editable directly via Chain configs.
Like we did for Features and Wallets.
Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/chains/admin.py`
Content:
```
1 from django.contrib import admin
2 from django.db.models import Model
3
4 from .models import Chain, Feature, GasPrice, Wallet
5
6
7 class FeatureInline(admin.TabularInline[Model]):
8 model = Feature.chains.through
9 extra = 0
10 verbose_name_plural = "Features enabled for this chain"
11
12
13 class WalletInline(admin.TabularInline[Model]):
14 model = Wallet.chains.through
15 extra = 0
16 verbose_name_plural = "Wallets enabled for this chain"
17
18
19 @admin.register(Chain)
20 class ChainAdmin(admin.ModelAdmin[Chain]):
21 list_display = (
22 "id",
23 "name",
24 "rpc_uri",
25 "safe_apps_rpc_uri",
26 "relevance",
27 )
28 search_fields = ("name", "id")
29 ordering = (
30 "relevance",
31 "name",
32 )
33 inlines = [FeatureInline, WalletInline]
34
35
36 @admin.register(GasPrice)
37 class GasPriceAdmin(admin.ModelAdmin[GasPrice]):
38 list_display = (
39 "chain_id",
40 "oracle_uri",
41 "fixed_wei_value",
42 "rank",
43 )
44 search_fields = ("chain_id", "oracle_uri")
45 ordering = ("rank",)
46
47
48 @admin.register(Wallet)
49 class WalletAdmin(admin.ModelAdmin[Wallet]):
50 list_display = ("key",)
51
52
53 @admin.register(Feature)
54 class FeatureAdmin(admin.ModelAdmin[Feature]):
55 list_display = ("key",)
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/chains/admin.py b/src/chains/admin.py
--- a/src/chains/admin.py
+++ b/src/chains/admin.py
@@ -4,6 +4,12 @@
from .models import Chain, Feature, GasPrice, Wallet
+class GasPriceInline(admin.TabularInline[Model]):
+ model = GasPrice
+ extra = 0
+ verbose_name_plural = "Gas prices set for this chain"
+
+
class FeatureInline(admin.TabularInline[Model]):
model = Feature.chains.through
extra = 0
@@ -30,7 +36,7 @@
"relevance",
"name",
)
- inlines = [FeatureInline, WalletInline]
+ inlines = [FeatureInline, GasPriceInline, WalletInline]
@admin.register(GasPrice)
| {"golden_diff": "diff --git a/src/chains/admin.py b/src/chains/admin.py\n--- a/src/chains/admin.py\n+++ b/src/chains/admin.py\n@@ -4,6 +4,12 @@\n from .models import Chain, Feature, GasPrice, Wallet\n \n \n+class GasPriceInline(admin.TabularInline[Model]):\n+ model = GasPrice\n+ extra = 0\n+ verbose_name_plural = \"Gas prices set for this chain\"\n+\n+\n class FeatureInline(admin.TabularInline[Model]):\n model = Feature.chains.through\n extra = 0\n@@ -30,7 +36,7 @@\n \"relevance\",\n \"name\",\n )\n- inlines = [FeatureInline, WalletInline]\n+ inlines = [FeatureInline, GasPriceInline, WalletInline]\n \n \n @admin.register(GasPrice)\n", "issue": "Gas price editable from the Chain model\nPlease make the Gas Price editable directly via Chain configs.\r\nLike we did for Features and Wallets.\r\nThanks!\n", "before_files": [{"content": "from django.contrib import admin\nfrom django.db.models import Model\n\nfrom .models import Chain, Feature, GasPrice, Wallet\n\n\nclass FeatureInline(admin.TabularInline[Model]):\n model = Feature.chains.through\n extra = 0\n verbose_name_plural = \"Features enabled for this chain\"\n\n\nclass WalletInline(admin.TabularInline[Model]):\n model = Wallet.chains.through\n extra = 0\n verbose_name_plural = \"Wallets enabled for this chain\"\n\n\[email protected](Chain)\nclass ChainAdmin(admin.ModelAdmin[Chain]):\n list_display = (\n \"id\",\n \"name\",\n \"rpc_uri\",\n \"safe_apps_rpc_uri\",\n \"relevance\",\n )\n search_fields = (\"name\", \"id\")\n ordering = (\n \"relevance\",\n \"name\",\n )\n inlines = [FeatureInline, WalletInline]\n\n\[email protected](GasPrice)\nclass GasPriceAdmin(admin.ModelAdmin[GasPrice]):\n list_display = (\n \"chain_id\",\n \"oracle_uri\",\n \"fixed_wei_value\",\n \"rank\",\n )\n search_fields = (\"chain_id\", \"oracle_uri\")\n ordering = (\"rank\",)\n\n\[email protected](Wallet)\nclass WalletAdmin(admin.ModelAdmin[Wallet]):\n list_display = (\"key\",)\n\n\[email protected](Feature)\nclass FeatureAdmin(admin.ModelAdmin[Feature]):\n list_display = (\"key\",)\n", "path": "src/chains/admin.py"}], "after_files": [{"content": "from django.contrib import admin\nfrom django.db.models import Model\n\nfrom .models import Chain, Feature, GasPrice, Wallet\n\n\nclass GasPriceInline(admin.TabularInline[Model]):\n model = GasPrice\n extra = 0\n verbose_name_plural = \"Gas prices set for this chain\"\n\n\nclass FeatureInline(admin.TabularInline[Model]):\n model = Feature.chains.through\n extra = 0\n verbose_name_plural = \"Features enabled for this chain\"\n\n\nclass WalletInline(admin.TabularInline[Model]):\n model = Wallet.chains.through\n extra = 0\n verbose_name_plural = \"Wallets enabled for this chain\"\n\n\[email protected](Chain)\nclass ChainAdmin(admin.ModelAdmin[Chain]):\n list_display = (\n \"id\",\n \"name\",\n \"rpc_uri\",\n \"safe_apps_rpc_uri\",\n \"relevance\",\n )\n search_fields = (\"name\", \"id\")\n ordering = (\n \"relevance\",\n \"name\",\n )\n inlines = [FeatureInline, GasPriceInline, WalletInline]\n\n\[email protected](GasPrice)\nclass GasPriceAdmin(admin.ModelAdmin[GasPrice]):\n list_display = (\n \"chain_id\",\n \"oracle_uri\",\n \"fixed_wei_value\",\n \"rank\",\n )\n search_fields = (\"chain_id\", \"oracle_uri\")\n ordering = (\"rank\",)\n\n\[email protected](Wallet)\nclass WalletAdmin(admin.ModelAdmin[Wallet]):\n list_display = (\"key\",)\n\n\[email protected](Feature)\nclass FeatureAdmin(admin.ModelAdmin[Feature]):\n list_display = (\"key\",)\n", "path": "src/chains/admin.py"}]} | 696 | 183 |
gh_patches_debug_18892 | rasdani/github-patches | git_diff | python__peps-634 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pep2rss can raise UnicodeEncodeError on certain platforms
This is due to the default encoding of [`open()`](https://docs.python.org/3/library/functions.html#open) being platform dependent.
For example, on Windows (CP-1252), it can raise
`UnicodeEncodeError: 'charmap' codec can't encode character '\u0141' in position 1545: character maps to <undefined>`
when handling the character, 'Ł', in the name of the author of [PEP 569](https://www.python.org/dev/peps/pep-0569/).
https://github.com/python/peps/blob/433faca4cf1bb8a0550d01b86b2b4581b3eda460/pep-0569.rst#L5
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pep2rss.py`
Content:
```
1 #!/usr/bin/env python
2
3 # usage: pep-hook.py $REPOS $REV
4 # (standard post-commit args)
5
6 import os, glob, time, datetime, stat, re, sys
7 import codecs
8 import PyRSS2Gen as rssgen
9
10 RSS_PATH = os.path.join(sys.argv[1], 'peps.rss')
11
12 def firstline_startingwith(full_path, text):
13 for line in codecs.open(full_path, encoding="utf-8"):
14 if line.startswith(text):
15 return line[len(text):].strip()
16 return None
17
18 # get list of peps with creation time (from "Created:" string in pep .txt)
19 peps = glob.glob('pep-*.txt')
20 def pep_creation_dt(full_path):
21 created_str = firstline_startingwith(full_path, 'Created:')
22 # bleh, I was hoping to avoid re but some PEPs editorialize
23 # on the Created line
24 m = re.search(r'''(\d+-\w+-\d{4})''', created_str)
25 if not m:
26 # some older ones have an empty line, that's okay, if it's old
27 # we ipso facto don't care about it.
28 # "return None" would make the most sense but datetime objects
29 # refuse to compare with that. :-|
30 return datetime.datetime(*time.localtime(0)[:6])
31 created_str = m.group(1)
32 try:
33 t = time.strptime(created_str, '%d-%b-%Y')
34 except ValueError:
35 t = time.strptime(created_str, '%d-%B-%Y')
36 return datetime.datetime(*t[:6])
37 peps_with_dt = [(pep_creation_dt(full_path), full_path) for full_path in peps]
38 # sort peps by date, newest first
39 peps_with_dt.sort(reverse=True)
40
41 # generate rss items for 10 most recent peps
42 items = []
43 for dt, full_path in peps_with_dt[:10]:
44 try:
45 n = int(full_path.split('-')[-1].split('.')[0])
46 except ValueError:
47 pass
48 title = firstline_startingwith(full_path, 'Title:')
49 author = firstline_startingwith(full_path, 'Author:')
50 url = 'http://www.python.org/dev/peps/pep-%0.4d' % n
51 item = rssgen.RSSItem(
52 title = 'PEP %d: %s' % (n, title),
53 link = url,
54 description = 'Author: %s' % author,
55 guid = rssgen.Guid(url),
56 pubDate = dt)
57 items.append(item)
58
59 # the rss envelope
60 desc = """
61 Newest Python Enhancement Proposals (PEPs) - Information on new
62 language features, and some meta-information like release
63 procedure and schedules
64 """.strip()
65 rss = rssgen.RSS2(
66 title = 'Newest Python PEPs',
67 link = 'http://www.python.org/dev/peps',
68 description = desc,
69 lastBuildDate = datetime.datetime.now(),
70 items = items)
71
72 with open(RSS_PATH, 'w') as fp:
73 fp.write(rss.to_xml())
74
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pep2rss.py b/pep2rss.py
--- a/pep2rss.py
+++ b/pep2rss.py
@@ -1,16 +1,15 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
# usage: pep-hook.py $REPOS $REV
# (standard post-commit args)
import os, glob, time, datetime, stat, re, sys
-import codecs
import PyRSS2Gen as rssgen
RSS_PATH = os.path.join(sys.argv[1], 'peps.rss')
def firstline_startingwith(full_path, text):
- for line in codecs.open(full_path, encoding="utf-8"):
+ for line in open(full_path, encoding="utf-8"):
if line.startswith(text):
return line[len(text):].strip()
return None
@@ -69,5 +68,5 @@
lastBuildDate = datetime.datetime.now(),
items = items)
-with open(RSS_PATH, 'w') as fp:
- fp.write(rss.to_xml())
+with open(RSS_PATH, 'w', encoding="utf-8") as fp:
+ fp.write(rss.to_xml(encoding="utf-8"))
| {"golden_diff": "diff --git a/pep2rss.py b/pep2rss.py\n--- a/pep2rss.py\n+++ b/pep2rss.py\n@@ -1,16 +1,15 @@\n-#!/usr/bin/env python\n+#!/usr/bin/env python3\n \n # usage: pep-hook.py $REPOS $REV\n # (standard post-commit args)\n \n import os, glob, time, datetime, stat, re, sys\n-import codecs\n import PyRSS2Gen as rssgen\n \n RSS_PATH = os.path.join(sys.argv[1], 'peps.rss')\n \n def firstline_startingwith(full_path, text):\n- for line in codecs.open(full_path, encoding=\"utf-8\"):\n+ for line in open(full_path, encoding=\"utf-8\"):\n if line.startswith(text):\n return line[len(text):].strip()\n return None\n@@ -69,5 +68,5 @@\n lastBuildDate = datetime.datetime.now(),\n items = items)\n \n-with open(RSS_PATH, 'w') as fp:\n- fp.write(rss.to_xml())\n+with open(RSS_PATH, 'w', encoding=\"utf-8\") as fp:\n+ fp.write(rss.to_xml(encoding=\"utf-8\"))\n", "issue": "pep2rss can raise UnicodeEncodeError on certain platforms\nThis is due to the default encoding of [`open()`](https://docs.python.org/3/library/functions.html#open) being platform dependent.\r\n\r\nFor example, on Windows (CP-1252), it can raise\r\n`UnicodeEncodeError: 'charmap' codec can't encode character '\\u0141' in position 1545: character maps to <undefined>`\r\nwhen handling the character, '\u0141', in the name of the author of [PEP 569](https://www.python.org/dev/peps/pep-0569/).\r\nhttps://github.com/python/peps/blob/433faca4cf1bb8a0550d01b86b2b4581b3eda460/pep-0569.rst#L5\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# usage: pep-hook.py $REPOS $REV\n# (standard post-commit args)\n\nimport os, glob, time, datetime, stat, re, sys\nimport codecs\nimport PyRSS2Gen as rssgen\n\nRSS_PATH = os.path.join(sys.argv[1], 'peps.rss')\n\ndef firstline_startingwith(full_path, text):\n for line in codecs.open(full_path, encoding=\"utf-8\"):\n if line.startswith(text):\n return line[len(text):].strip()\n return None\n\n# get list of peps with creation time (from \"Created:\" string in pep .txt)\npeps = glob.glob('pep-*.txt')\ndef pep_creation_dt(full_path):\n created_str = firstline_startingwith(full_path, 'Created:')\n # bleh, I was hoping to avoid re but some PEPs editorialize\n # on the Created line\n m = re.search(r'''(\\d+-\\w+-\\d{4})''', created_str)\n if not m:\n # some older ones have an empty line, that's okay, if it's old\n # we ipso facto don't care about it.\n # \"return None\" would make the most sense but datetime objects\n # refuse to compare with that. 
:-|\n return datetime.datetime(*time.localtime(0)[:6])\n created_str = m.group(1)\n try:\n t = time.strptime(created_str, '%d-%b-%Y')\n except ValueError:\n t = time.strptime(created_str, '%d-%B-%Y')\n return datetime.datetime(*t[:6])\npeps_with_dt = [(pep_creation_dt(full_path), full_path) for full_path in peps]\n# sort peps by date, newest first\npeps_with_dt.sort(reverse=True)\n\n# generate rss items for 10 most recent peps\nitems = []\nfor dt, full_path in peps_with_dt[:10]:\n try:\n n = int(full_path.split('-')[-1].split('.')[0])\n except ValueError:\n pass\n title = firstline_startingwith(full_path, 'Title:')\n author = firstline_startingwith(full_path, 'Author:')\n url = 'http://www.python.org/dev/peps/pep-%0.4d' % n\n item = rssgen.RSSItem(\n title = 'PEP %d: %s' % (n, title),\n link = url,\n description = 'Author: %s' % author,\n guid = rssgen.Guid(url),\n pubDate = dt)\n items.append(item)\n\n# the rss envelope\ndesc = \"\"\"\nNewest Python Enhancement Proposals (PEPs) - Information on new\nlanguage features, and some meta-information like release\nprocedure and schedules\n\"\"\".strip()\nrss = rssgen.RSS2(\n title = 'Newest Python PEPs',\n link = 'http://www.python.org/dev/peps',\n description = desc,\n lastBuildDate = datetime.datetime.now(),\n items = items)\n\nwith open(RSS_PATH, 'w') as fp:\n fp.write(rss.to_xml())\n", "path": "pep2rss.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n# usage: pep-hook.py $REPOS $REV\n# (standard post-commit args)\n\nimport os, glob, time, datetime, stat, re, sys\nimport PyRSS2Gen as rssgen\n\nRSS_PATH = os.path.join(sys.argv[1], 'peps.rss')\n\ndef firstline_startingwith(full_path, text):\n for line in open(full_path, encoding=\"utf-8\"):\n if line.startswith(text):\n return line[len(text):].strip()\n return None\n\n# get list of peps with creation time (from \"Created:\" string in pep .txt)\npeps = glob.glob('pep-*.txt')\ndef pep_creation_dt(full_path):\n created_str = firstline_startingwith(full_path, 'Created:')\n # bleh, I was hoping to avoid re but some PEPs editorialize\n # on the Created line\n m = re.search(r'''(\\d+-\\w+-\\d{4})''', created_str)\n if not m:\n # some older ones have an empty line, that's okay, if it's old\n # we ipso facto don't care about it.\n # \"return None\" would make the most sense but datetime objects\n # refuse to compare with that. 
:-|\n return datetime.datetime(*time.localtime(0)[:6])\n created_str = m.group(1)\n try:\n t = time.strptime(created_str, '%d-%b-%Y')\n except ValueError:\n t = time.strptime(created_str, '%d-%B-%Y')\n return datetime.datetime(*t[:6])\npeps_with_dt = [(pep_creation_dt(full_path), full_path) for full_path in peps]\n# sort peps by date, newest first\npeps_with_dt.sort(reverse=True)\n\n# generate rss items for 10 most recent peps\nitems = []\nfor dt, full_path in peps_with_dt[:10]:\n try:\n n = int(full_path.split('-')[-1].split('.')[0])\n except ValueError:\n pass\n title = firstline_startingwith(full_path, 'Title:')\n author = firstline_startingwith(full_path, 'Author:')\n url = 'http://www.python.org/dev/peps/pep-%0.4d' % n\n item = rssgen.RSSItem(\n title = 'PEP %d: %s' % (n, title),\n link = url,\n description = 'Author: %s' % author,\n guid = rssgen.Guid(url),\n pubDate = dt)\n items.append(item)\n\n# the rss envelope\ndesc = \"\"\"\nNewest Python Enhancement Proposals (PEPs) - Information on new\nlanguage features, and some meta-information like release\nprocedure and schedules\n\"\"\".strip()\nrss = rssgen.RSS2(\n title = 'Newest Python PEPs',\n link = 'http://www.python.org/dev/peps',\n description = desc,\n lastBuildDate = datetime.datetime.now(),\n items = items)\n\nwith open(RSS_PATH, 'w', encoding=\"utf-8\") as fp:\n fp.write(rss.to_xml(encoding=\"utf-8\"))\n", "path": "pep2rss.py"}]} | 1,271 | 270 |
gh_patches_debug_4567 | rasdani/github-patches | git_diff | mozmeao__snippets-service-813 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
UnicodeEncodeError when Slack request.post
UnicodeEncodeError: 'latin-1' codec can't encode characters in position 258-264: Body ('γιώργος') is not valid Latin-1. Use body.encode('utf-8') if you want to send it encoded in UTF-8.
Sentry Issue: https://sentry.prod.mozaws.net/operations/snippets/issues/4891896/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `snippets/base/slack.py`
Content:
```
1 import logging
2
3 from django.conf import settings
4 from django.template.loader import render_to_string
5
6 import requests
7 from raven.contrib.django.models import client as sentry_client
8
9 logger = logging.getLogger(__name__)
10
11
12 def send_slack(template_name, snippet):
13 data = render_to_string('slack/{}.jinja.json'.format(template_name),
14 context={'snippet': snippet})
15 _send_slack(data)
16
17
18 def _send_slack(data):
19 if not (settings.SLACK_ENABLE and settings.SLACK_WEBHOOK):
20 logger.info('Slack is not enabled.')
21 return
22
23 try:
24 response = requests.post(settings.SLACK_WEBHOOK, data=data,
25 headers={'Content-Type': 'application/json'},
26 timeout=4)
27 response.raise_for_status()
28 except requests.exceptions.RequestException:
29 sentry_client.captureException()
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/snippets/base/slack.py b/snippets/base/slack.py
--- a/snippets/base/slack.py
+++ b/snippets/base/slack.py
@@ -21,7 +21,7 @@
return
try:
- response = requests.post(settings.SLACK_WEBHOOK, data=data,
+ response = requests.post(settings.SLACK_WEBHOOK, data=data.encode('utf-8'),
headers={'Content-Type': 'application/json'},
timeout=4)
response.raise_for_status()
| {"golden_diff": "diff --git a/snippets/base/slack.py b/snippets/base/slack.py\n--- a/snippets/base/slack.py\n+++ b/snippets/base/slack.py\n@@ -21,7 +21,7 @@\n return\n \n try:\n- response = requests.post(settings.SLACK_WEBHOOK, data=data,\n+ response = requests.post(settings.SLACK_WEBHOOK, data=data.encode('utf-8'),\n headers={'Content-Type': 'application/json'},\n timeout=4)\n response.raise_for_status()\n", "issue": "UnicodeEncodeError when Slack request.post\nUnicodeEncodeError: 'latin-1' codec can't encode characters in position 258-264: Body ('\u03b3\u03b9\u03ce\u03c1\u03b3\u03bf\u03c2') is not valid Latin-1. Use body.encode('utf-8') if you want to send it encoded in UTF-8.\r\n\r\nSentry Issue: https://sentry.prod.mozaws.net/operations/snippets/issues/4891896/\n", "before_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.template.loader import render_to_string\n\nimport requests\nfrom raven.contrib.django.models import client as sentry_client\n\nlogger = logging.getLogger(__name__)\n\n\ndef send_slack(template_name, snippet):\n data = render_to_string('slack/{}.jinja.json'.format(template_name),\n context={'snippet': snippet})\n _send_slack(data)\n\n\ndef _send_slack(data):\n if not (settings.SLACK_ENABLE and settings.SLACK_WEBHOOK):\n logger.info('Slack is not enabled.')\n return\n\n try:\n response = requests.post(settings.SLACK_WEBHOOK, data=data,\n headers={'Content-Type': 'application/json'},\n timeout=4)\n response.raise_for_status()\n except requests.exceptions.RequestException:\n sentry_client.captureException()\n", "path": "snippets/base/slack.py"}], "after_files": [{"content": "import logging\n\nfrom django.conf import settings\nfrom django.template.loader import render_to_string\n\nimport requests\nfrom raven.contrib.django.models import client as sentry_client\n\nlogger = logging.getLogger(__name__)\n\n\ndef send_slack(template_name, snippet):\n data = render_to_string('slack/{}.jinja.json'.format(template_name),\n context={'snippet': snippet})\n _send_slack(data)\n\n\ndef _send_slack(data):\n if not (settings.SLACK_ENABLE and settings.SLACK_WEBHOOK):\n logger.info('Slack is not enabled.')\n return\n\n try:\n response = requests.post(settings.SLACK_WEBHOOK, data=data.encode('utf-8'),\n headers={'Content-Type': 'application/json'},\n timeout=4)\n response.raise_for_status()\n except requests.exceptions.RequestException:\n sentry_client.captureException()\n", "path": "snippets/base/slack.py"}]} | 589 | 112 |
gh_patches_debug_11352 | rasdani/github-patches | git_diff | netket__netket-1487 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error in fermi-hubbard example
Hello, there.
It seems that the creation and annihilation operators are defined in reverse in the fermi-hubbard model example, and the results are wrong. [link](https://github.com/netket/netket/blob/26372e3527ae12861cc7953c24654614e8679d87/Examples/Fermions/fermi_hubbard.py#L25C1-L26)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Examples/Fermions/fermi_hubbard.py`
Content:
```
1 import netket as nk
2 import numpy as np
3 import matplotlib.pyplot as plt
4 import json
5
6 from netket import experimental as nkx
7
8 L = 2 # take a 2x2 lattice
9 D = 2
10 t = 1 # tunneling/hopping
11 U = 0.01 # coulomb
12
13 # create the graph our fermions can hop on
14 g = nk.graph.Hypercube(length=L, n_dim=D, pbc=True)
15 n_sites = g.n_nodes
16
17 # create a hilbert space with 2 up and 2 down spins
18 hi = nkx.hilbert.SpinOrbitalFermions(n_sites, s=1 / 2, n_fermions=(2, 2))
19
20
21 # create an operator representing fermi hubbard interactions
22 # -t (i^ j + h.c.) + U (i^ i j^ j)
23 # we will create a helper function to abbreviate the creation, destruction and number operators
24 # each operator has a site and spin projection (sz) in order to find the right position in the hilbert space samples
25 def c(site, sz):
26 return nkx.operator.fermion.create(hi, site, sz=sz)
27
28
29 def cdag(site, sz):
30 return nkx.operator.fermion.destroy(hi, site, sz=sz)
31
32
33 def nc(site, sz):
34 return nkx.operator.fermion.number(hi, site, sz=sz)
35
36
37 up = +1 / 2
38 down = -1 / 2
39 ham = 0.0
40 for sz in (up, down):
41 for u, v in g.edges():
42 ham += -t * cdag(u, sz) * c(v, sz) - t * cdag(v, sz) * c(u, sz)
43 for u in g.nodes():
44 ham += U * nc(u, up) * nc(u, down)
45
46 print("Hamiltonian =", ham.operator_string())
47
48 # metropolis exchange moves fermions around according to a graph
49 # the physical graph has LxL vertices, but the computational basis defined by the
50 # hilbert space contains (2s+1)*L*L occupation numbers
51 # by taking a disjoint copy of the lattice, we can
52 # move the fermions around independently for both spins
53 # and therefore conserve the number of fermions with up and down spin
54
55 # g.n_nodes == L*L --> disj_graph == 2*L*L
56 disj_graph = nk.graph.disjoint_union(g, g)
57 sa = nk.sampler.MetropolisExchange(hi, graph=disj_graph, n_chains=16)
58
59 # since the hilbert basis is a set of occupation numbers, we can take a general RBM
60 # we take complex parameters, since it learns sign structures more easily, and for even fermion number, the wave function might be complex
61 ma = nk.models.RBM(alpha=1, param_dtype=complex, use_visible_bias=False)
62 vs = nk.vqs.MCState(sa, ma, n_discard_per_chain=100, n_samples=512)
63
64 # we will use sgd with Stochastic Reconfiguration
65 opt = nk.optimizer.Sgd(learning_rate=0.01)
66 sr = nk.optimizer.SR(diag_shift=0.1)
67
68 gs = nk.driver.VMC(ham, opt, variational_state=vs, preconditioner=sr)
69
70 # now run the optimization
71 # first step will take longer in order to compile
72 exp_name = "fermions_test"
73 gs.run(500, out=exp_name)
74
75 ############## plot #################
76
77 ed_energies = np.linalg.eigvalsh(ham.to_dense())
78
79 with open(f"{exp_name}.log", "r") as f:
80 data = json.load(f)
81
82 x = data["Energy"]["iters"]
83 y = data["Energy"]["Mean"]["real"]
84
85 # plot the energy levels
86 plt.axhline(ed_energies[0], color="red", label="E0")
87 for e in ed_energies[1:]:
88 plt.axhline(e, color="black")
89 plt.plot(x, y, color="red", label="VMC")
90 plt.xlabel("step")
91 plt.ylabel("E")
92 plt.show()
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/Examples/Fermions/fermi_hubbard.py b/Examples/Fermions/fermi_hubbard.py
--- a/Examples/Fermions/fermi_hubbard.py
+++ b/Examples/Fermions/fermi_hubbard.py
@@ -23,11 +23,11 @@
# we will create a helper function to abbreviate the creation, destruction and number operators
# each operator has a site and spin projection (sz) in order to find the right position in the hilbert space samples
def c(site, sz):
- return nkx.operator.fermion.create(hi, site, sz=sz)
+ return nkx.operator.fermion.destroy(hi, site, sz=sz)
def cdag(site, sz):
- return nkx.operator.fermion.destroy(hi, site, sz=sz)
+ return nkx.operator.fermion.create(hi, site, sz=sz)
def nc(site, sz):
| {"golden_diff": "diff --git a/Examples/Fermions/fermi_hubbard.py b/Examples/Fermions/fermi_hubbard.py\n--- a/Examples/Fermions/fermi_hubbard.py\n+++ b/Examples/Fermions/fermi_hubbard.py\n@@ -23,11 +23,11 @@\n # we will create a helper function to abbreviate the creation, destruction and number operators\n # each operator has a site and spin projection (sz) in order to find the right position in the hilbert space samples\n def c(site, sz):\n- return nkx.operator.fermion.create(hi, site, sz=sz)\n+ return nkx.operator.fermion.destroy(hi, site, sz=sz)\n \n \n def cdag(site, sz):\n- return nkx.operator.fermion.destroy(hi, site, sz=sz)\n+ return nkx.operator.fermion.create(hi, site, sz=sz)\n \n \n def nc(site, sz):\n", "issue": "Error in fermi-hubbard example\nHello, there.\r\n\r\nIt seems that the definition of create and annihilation operators are defined reversely in fermi-hubbard model, and results are wrong. [link](https://github.com/netket/netket/blob/26372e3527ae12861cc7953c24654614e8679d87/Examples/Fermions/fermi_hubbard.py#L25C1-L26)\n", "before_files": [{"content": "import netket as nk\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport json\n\nfrom netket import experimental as nkx\n\nL = 2 # take a 2x2 lattice\nD = 2\nt = 1 # tunneling/hopping\nU = 0.01 # coulomb\n\n# create the graph our fermions can hop on\ng = nk.graph.Hypercube(length=L, n_dim=D, pbc=True)\nn_sites = g.n_nodes\n\n# create a hilbert space with 2 up and 2 down spins\nhi = nkx.hilbert.SpinOrbitalFermions(n_sites, s=1 / 2, n_fermions=(2, 2))\n\n\n# create an operator representing fermi hubbard interactions\n# -t (i^ j + h.c.) + U (i^ i j^ j)\n# we will create a helper function to abbreviate the creation, destruction and number operators\n# each operator has a site and spin projection (sz) in order to find the right position in the hilbert space samples\ndef c(site, sz):\n return nkx.operator.fermion.create(hi, site, sz=sz)\n\n\ndef cdag(site, sz):\n return nkx.operator.fermion.destroy(hi, site, sz=sz)\n\n\ndef nc(site, sz):\n return nkx.operator.fermion.number(hi, site, sz=sz)\n\n\nup = +1 / 2\ndown = -1 / 2\nham = 0.0\nfor sz in (up, down):\n for u, v in g.edges():\n ham += -t * cdag(u, sz) * c(v, sz) - t * cdag(v, sz) * c(u, sz)\nfor u in g.nodes():\n ham += U * nc(u, up) * nc(u, down)\n\nprint(\"Hamiltonian =\", ham.operator_string())\n\n# metropolis exchange moves fermions around according to a graph\n# the physical graph has LxL vertices, but the computational basis defined by the\n# hilbert space contains (2s+1)*L*L occupation numbers\n# by taking a disjoint copy of the lattice, we can\n# move the fermions around independently for both spins\n# and therefore conserve the number of fermions with up and down spin\n\n# g.n_nodes == L*L --> disj_graph == 2*L*L\ndisj_graph = nk.graph.disjoint_union(g, g)\nsa = nk.sampler.MetropolisExchange(hi, graph=disj_graph, n_chains=16)\n\n# since the hilbert basis is a set of occupation numbers, we can take a general RBM\n# we take complex parameters, since it learns sign structures more easily, and for even fermion number, the wave function might be complex\nma = nk.models.RBM(alpha=1, param_dtype=complex, use_visible_bias=False)\nvs = nk.vqs.MCState(sa, ma, n_discard_per_chain=100, n_samples=512)\n\n# we will use sgd with Stochastic Reconfiguration\nopt = nk.optimizer.Sgd(learning_rate=0.01)\nsr = nk.optimizer.SR(diag_shift=0.1)\n\ngs = nk.driver.VMC(ham, opt, variational_state=vs, preconditioner=sr)\n\n# now run the optimization\n# first step will take longer in 
order to compile\nexp_name = \"fermions_test\"\ngs.run(500, out=exp_name)\n\n############## plot #################\n\ned_energies = np.linalg.eigvalsh(ham.to_dense())\n\nwith open(f\"{exp_name}.log\", \"r\") as f:\n data = json.load(f)\n\nx = data[\"Energy\"][\"iters\"]\ny = data[\"Energy\"][\"Mean\"][\"real\"]\n\n# plot the energy levels\nplt.axhline(ed_energies[0], color=\"red\", label=\"E0\")\nfor e in ed_energies[1:]:\n plt.axhline(e, color=\"black\")\nplt.plot(x, y, color=\"red\", label=\"VMC\")\nplt.xlabel(\"step\")\nplt.ylabel(\"E\")\nplt.show()\n", "path": "Examples/Fermions/fermi_hubbard.py"}], "after_files": [{"content": "import netket as nk\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport json\n\nfrom netket import experimental as nkx\n\nL = 2 # take a 2x2 lattice\nD = 2\nt = 1 # tunneling/hopping\nU = 0.01 # coulomb\n\n# create the graph our fermions can hop on\ng = nk.graph.Hypercube(length=L, n_dim=D, pbc=True)\nn_sites = g.n_nodes\n\n# create a hilbert space with 2 up and 2 down spins\nhi = nkx.hilbert.SpinOrbitalFermions(n_sites, s=1 / 2, n_fermions=(2, 2))\n\n\n# create an operator representing fermi hubbard interactions\n# -t (i^ j + h.c.) + U (i^ i j^ j)\n# we will create a helper function to abbreviate the creation, destruction and number operators\n# each operator has a site and spin projection (sz) in order to find the right position in the hilbert space samples\ndef c(site, sz):\n return nkx.operator.fermion.destroy(hi, site, sz=sz)\n\n\ndef cdag(site, sz):\n return nkx.operator.fermion.create(hi, site, sz=sz)\n\n\ndef nc(site, sz):\n return nkx.operator.fermion.number(hi, site, sz=sz)\n\n\nup = +1 / 2\ndown = -1 / 2\nham = 0.0\nfor sz in (up, down):\n for u, v in g.edges():\n ham += -t * cdag(u, sz) * c(v, sz) - t * cdag(v, sz) * c(u, sz)\nfor u in g.nodes():\n ham += U * nc(u, up) * nc(u, down)\n\nprint(\"Hamiltonian =\", ham.operator_string())\n\n# metropolis exchange moves fermions around according to a graph\n# the physical graph has LxL vertices, but the computational basis defined by the\n# hilbert space contains (2s+1)*L*L occupation numbers\n# by taking a disjoint copy of the lattice, we can\n# move the fermions around independently for both spins\n# and therefore conserve the number of fermions with up and down spin\n\n# g.n_nodes == L*L --> disj_graph == 2*L*L\ndisj_graph = nk.graph.disjoint_union(g, g)\nsa = nk.sampler.MetropolisExchange(hi, graph=disj_graph, n_chains=16)\n\n# since the hilbert basis is a set of occupation numbers, we can take a general RBM\n# we take complex parameters, since it learns sign structures more easily, and for even fermion number, the wave function might be complex\nma = nk.models.RBM(alpha=1, param_dtype=complex, use_visible_bias=False)\nvs = nk.vqs.MCState(sa, ma, n_discard_per_chain=100, n_samples=512)\n\n# we will use sgd with Stochastic Reconfiguration\nopt = nk.optimizer.Sgd(learning_rate=0.01)\nsr = nk.optimizer.SR(diag_shift=0.1)\n\ngs = nk.driver.VMC(ham, opt, variational_state=vs, preconditioner=sr)\n\n# now run the optimization\n# first step will take longer in order to compile\nexp_name = \"fermions_test\"\ngs.run(500, out=exp_name)\n\n############## plot #################\n\ned_energies = np.linalg.eigvalsh(ham.to_dense())\n\nwith open(f\"{exp_name}.log\", \"r\") as f:\n data = json.load(f)\n\nx = data[\"Energy\"][\"iters\"]\ny = data[\"Energy\"][\"Mean\"][\"real\"]\n\n# plot the energy levels\nplt.axhline(ed_energies[0], color=\"red\", label=\"E0\")\nfor e in ed_energies[1:]:\n plt.axhline(e, 
color=\"black\")\nplt.plot(x, y, color=\"red\", label=\"VMC\")\nplt.xlabel(\"step\")\nplt.ylabel(\"E\")\nplt.show()\n", "path": "Examples/Fermions/fermi_hubbard.py"}]} | 1,472 | 219 |
gh_patches_debug_25100 | rasdani/github-patches | git_diff | ros__ros_comm-433 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
message_filters: Incorporate python approximate time synchronizer
https://github.com/ros-perception/image_pipeline/issues/76
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `utilities/message_filters/src/message_filters/__init__.py`
Content:
```
1 # Copyright (c) 2009, Willow Garage, Inc.
2 # All rights reserved.
3 #
4 # Redistribution and use in source and binary forms, with or without
5 # modification, are permitted provided that the following conditions are met:
6 #
7 # * Redistributions of source code must retain the above copyright
8 # notice, this list of conditions and the following disclaimer.
9 # * Redistributions in binary form must reproduce the above copyright
10 # notice, this list of conditions and the following disclaimer in the
11 # documentation and/or other materials provided with the distribution.
12 # * Neither the name of the Willow Garage, Inc. nor the names of its
13 # contributors may be used to endorse or promote products derived from
14 # this software without specific prior written permission.
15 #
16 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
17 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
18 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
19 # ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
20 # LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
21 # CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
22 # SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
23 # INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
24 # CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
25 # ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
26 # POSSIBILITY OF SUCH DAMAGE.
27
28 """
29 Message Filter Objects
30 ======================
31 """
32
33 import threading
34 import rospy
35
36 class SimpleFilter:
37
38 def __init__(self):
39 self.callbacks = {}
40
41 def registerCallback(self, cb, *args):
42 """
43 Register a callback function `cb` to be called when this filter
44 has output.
45 The filter calls the function ``cb`` with a filter-dependent list of arguments,
46 followed by the call-supplied arguments ``args``.
47 """
48
49 conn = len(self.callbacks)
50 self.callbacks[conn] = (cb, args)
51 return conn
52
53 def signalMessage(self, *msg):
54 for (cb, args) in self.callbacks.values():
55 cb(*(msg + args))
56
57 class Subscriber(SimpleFilter):
58
59 """
60 ROS subscription filter. Identical arguments as :class:`rospy.Subscriber`.
61
62 This class acts as a highest-level filter, simply passing messages
63 from a ROS subscription through to the filters which have connected
64 to it.
65 """
66 def __init__(self, *args, **kwargs):
67 SimpleFilter.__init__(self)
68 self.topic = args[0]
69 kwargs['callback'] = self.callback
70 self.sub = rospy.Subscriber(*args, **kwargs)
71
72 def callback(self, msg):
73 self.signalMessage(msg)
74
75 def getTopic(self):
76 return self.topic
77
78 class Cache(SimpleFilter):
79
80 """
81 Stores a time history of messages.
82
83 Given a stream of messages, the most recent ``cache_size`` messages
84 are cached in a ring buffer, from which time intervals of the cache
85 can then be retrieved by the client.
86 """
87
88 def __init__(self, f, cache_size = 1):
89 SimpleFilter.__init__(self)
90 self.connectInput(f)
91 self.cache_size = cache_size
92
93 def connectInput(self, f):
94 self.incoming_connection = f.registerCallback(self.add)
95
96 def add(self, msg):
97 # Add msg to cache... XXX TODO
98
99 self.signalMessage(msg)
100
101 class TimeSynchronizer(SimpleFilter):
102
103 """
104 Synchronizes messages by their timestamps.
105
106 :class:`TimeSynchronizer` synchronizes incoming message filters by the
107 timestamps contained in their messages' headers. TimeSynchronizer
108 listens on multiple input message filters ``fs``, and invokes the callback
109 when it has a collection of messages with matching timestamps.
110
111 The signature of the callback function is::
112
113 def callback(msg1, ... msgN):
114
115 where N is the number of input message filters, and each message is
116 the output of the corresponding filter in ``fs``.
117 The required ``queue size`` parameter specifies how many sets of
118 messages it should store from each input filter (by timestamp)
119 while waiting for messages to arrive and complete their "set".
120 """
121
122 def __init__(self, fs, queue_size):
123 SimpleFilter.__init__(self)
124 self.connectInput(fs)
125 self.queue_size = queue_size
126 self.lock = threading.Lock()
127
128 def connectInput(self, fs):
129 self.queues = [{} for f in fs]
130 self.input_connections = [f.registerCallback(self.add, q) for (f, q) in zip(fs, self.queues)]
131
132 def add(self, msg, my_queue):
133 self.lock.acquire()
134 my_queue[msg.header.stamp] = msg
135 while len(my_queue) > self.queue_size:
136 del my_queue[min(my_queue)]
137 # common is the set of timestamps that occur in all queues
138 common = reduce(set.intersection, [set(q) for q in self.queues])
139 for t in sorted(common):
140 # msgs is list of msgs (one from each queue) with stamp t
141 msgs = [q[t] for q in self.queues]
142 self.signalMessage(*msgs)
143 for q in self.queues:
144 del q[t]
145 self.lock.release()
146
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/utilities/message_filters/src/message_filters/__init__.py b/utilities/message_filters/src/message_filters/__init__.py
--- a/utilities/message_filters/src/message_filters/__init__.py
+++ b/utilities/message_filters/src/message_filters/__init__.py
@@ -30,6 +30,7 @@
======================
"""
+import itertools
import threading
import rospy
@@ -143,3 +144,33 @@
for q in self.queues:
del q[t]
self.lock.release()
+
+class ApproximateTimeSynchronizer(TimeSynchronizer):
+
+ """
+ Approximately synchronizes messages by their timestamps.
+
+ :class:`ApproximateTimeSynchronizer` synchronizes incoming message filters by the
+ timestamps contained in their messages' headers. The API is the same as TimeSynchronizer
+ except for an extra `slop` parameter in the constructor that defines the delay (in seconds)
+ with which messages can be synchronized
+ """
+
+ def __init__(self, fs, queue_size, slop):
+ TimeSynchronizer.__init__(self, fs, queue_size)
+ self.slop = rospy.Duration.from_sec(slop)
+
+ def add(self, msg, my_queue):
+ self.lock.acquire()
+ my_queue[msg.header.stamp] = msg
+ while len(my_queue) > self.queue_size:
+ del my_queue[min(my_queue)]
+ for vv in itertools.product(*[list(q.keys()) for q in self.queues]):
+ qt = list(zip(self.queues, vv))
+ if ( ((max(vv) - min(vv)) < self.slop) and
+ (len([1 for q,t in qt if t not in q]) == 0) ):
+ msgs = [q[t] for q,t in qt]
+ self.signalMessage(*msgs)
+ for q,t in qt:
+ del q[t]
+ self.lock.release()
| {"golden_diff": "diff --git a/utilities/message_filters/src/message_filters/__init__.py b/utilities/message_filters/src/message_filters/__init__.py\n--- a/utilities/message_filters/src/message_filters/__init__.py\n+++ b/utilities/message_filters/src/message_filters/__init__.py\n@@ -30,6 +30,7 @@\n ======================\n \"\"\"\n \n+import itertools\n import threading\n import rospy\n \n@@ -143,3 +144,33 @@\n for q in self.queues:\n del q[t]\n self.lock.release()\n+\n+class ApproximateTimeSynchronizer(TimeSynchronizer):\n+\n+ \"\"\"\n+ Approximately synchronizes messages by their timestamps.\n+\n+ :class:`ApproximateTimeSynchronizer` synchronizes incoming message filters by the\n+ timestamps contained in their messages' headers. The API is the same as TimeSynchronizer\n+ except for an extra `slop` parameter in the constructor that defines the delay (in seconds)\n+ with which messages can be synchronized\n+ \"\"\"\n+\n+ def __init__(self, fs, queue_size, slop):\n+ TimeSynchronizer.__init__(self, fs, queue_size)\n+ self.slop = rospy.Duration.from_sec(slop)\n+\n+ def add(self, msg, my_queue):\n+ self.lock.acquire()\n+ my_queue[msg.header.stamp] = msg\n+ while len(my_queue) > self.queue_size:\n+ del my_queue[min(my_queue)]\n+ for vv in itertools.product(*[list(q.keys()) for q in self.queues]):\n+ qt = list(zip(self.queues, vv))\n+ if ( ((max(vv) - min(vv)) < self.slop) and\n+ (len([1 for q,t in qt if t not in q]) == 0) ):\n+ msgs = [q[t] for q,t in qt]\n+ self.signalMessage(*msgs)\n+ for q,t in qt:\n+ del q[t]\n+ self.lock.release()\n", "issue": "message_filters: Incorporate python approximate time synchronizer\nhttps://github.com/ros-perception/image_pipeline/issues/76\n\n", "before_files": [{"content": "# Copyright (c) 2009, Willow Garage, Inc.\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# * Neither the name of the Willow Garage, Inc. nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n# ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n# POSSIBILITY OF SUCH DAMAGE.\n\n\"\"\"\nMessage Filter Objects\n======================\n\"\"\"\n\nimport threading\nimport rospy\n\nclass SimpleFilter:\n\n def __init__(self):\n self.callbacks = {}\n\n def registerCallback(self, cb, *args):\n \"\"\"\n Register a callback function `cb` to be called when this filter\n has output.\n The filter calls the function ``cb`` with a filter-dependent list of arguments,\n followed by the call-supplied arguments ``args``.\n \"\"\"\n\n conn = len(self.callbacks)\n self.callbacks[conn] = (cb, args)\n return conn\n\n def signalMessage(self, *msg):\n for (cb, args) in self.callbacks.values():\n cb(*(msg + args))\n\nclass Subscriber(SimpleFilter):\n \n \"\"\"\n ROS subscription filter. Identical arguments as :class:`rospy.Subscriber`.\n\n This class acts as a highest-level filter, simply passing messages\n from a ROS subscription through to the filters which have connected\n to it.\n \"\"\"\n def __init__(self, *args, **kwargs):\n SimpleFilter.__init__(self)\n self.topic = args[0]\n kwargs['callback'] = self.callback\n self.sub = rospy.Subscriber(*args, **kwargs)\n\n def callback(self, msg):\n self.signalMessage(msg)\n\n def getTopic(self):\n return self.topic\n\nclass Cache(SimpleFilter):\n\n \"\"\"\n Stores a time history of messages.\n\n Given a stream of messages, the most recent ``cache_size`` messages\n are cached in a ring buffer, from which time intervals of the cache\n can then be retrieved by the client.\n \"\"\"\n\n def __init__(self, f, cache_size = 1):\n SimpleFilter.__init__(self)\n self.connectInput(f)\n self.cache_size = cache_size\n\n def connectInput(self, f):\n self.incoming_connection = f.registerCallback(self.add)\n\n def add(self, msg):\n # Add msg to cache... XXX TODO\n\n self.signalMessage(msg)\n\nclass TimeSynchronizer(SimpleFilter):\n\n \"\"\"\n Synchronizes messages by their timestamps.\n\n :class:`TimeSynchronizer` synchronizes incoming message filters by the\n timestamps contained in their messages' headers. TimeSynchronizer\n listens on multiple input message filters ``fs``, and invokes the callback\n when it has a collection of messages with matching timestamps.\n\n The signature of the callback function is::\n\n def callback(msg1, ... 
msgN):\n\n where N is the number of input message filters, and each message is\n the output of the corresponding filter in ``fs``.\n The required ``queue size`` parameter specifies how many sets of\n messages it should store from each input filter (by timestamp)\n while waiting for messages to arrive and complete their \"set\".\n \"\"\"\n\n def __init__(self, fs, queue_size):\n SimpleFilter.__init__(self)\n self.connectInput(fs)\n self.queue_size = queue_size\n self.lock = threading.Lock()\n\n def connectInput(self, fs):\n self.queues = [{} for f in fs]\n self.input_connections = [f.registerCallback(self.add, q) for (f, q) in zip(fs, self.queues)]\n\n def add(self, msg, my_queue):\n self.lock.acquire()\n my_queue[msg.header.stamp] = msg\n while len(my_queue) > self.queue_size:\n del my_queue[min(my_queue)]\n # common is the set of timestamps that occur in all queues\n common = reduce(set.intersection, [set(q) for q in self.queues])\n for t in sorted(common):\n # msgs is list of msgs (one from each queue) with stamp t\n msgs = [q[t] for q in self.queues]\n self.signalMessage(*msgs)\n for q in self.queues:\n del q[t]\n self.lock.release()\n", "path": "utilities/message_filters/src/message_filters/__init__.py"}], "after_files": [{"content": "# Copyright (c) 2009, Willow Garage, Inc.\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# * Neither the name of the Willow Garage, Inc. nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n# POSSIBILITY OF SUCH DAMAGE.\n\n\"\"\"\nMessage Filter Objects\n======================\n\"\"\"\n\nimport itertools\nimport threading\nimport rospy\n\nclass SimpleFilter:\n\n def __init__(self):\n self.callbacks = {}\n\n def registerCallback(self, cb, *args):\n \"\"\"\n Register a callback function `cb` to be called when this filter\n has output.\n The filter calls the function ``cb`` with a filter-dependent list of arguments,\n followed by the call-supplied arguments ``args``.\n \"\"\"\n\n conn = len(self.callbacks)\n self.callbacks[conn] = (cb, args)\n return conn\n\n def signalMessage(self, *msg):\n for (cb, args) in self.callbacks.values():\n cb(*(msg + args))\n\nclass Subscriber(SimpleFilter):\n \n \"\"\"\n ROS subscription filter. 
Identical arguments as :class:`rospy.Subscriber`.\n\n This class acts as a highest-level filter, simply passing messages\n from a ROS subscription through to the filters which have connected\n to it.\n \"\"\"\n def __init__(self, *args, **kwargs):\n SimpleFilter.__init__(self)\n self.topic = args[0]\n kwargs['callback'] = self.callback\n self.sub = rospy.Subscriber(*args, **kwargs)\n\n def callback(self, msg):\n self.signalMessage(msg)\n\n def getTopic(self):\n return self.topic\n\nclass Cache(SimpleFilter):\n\n \"\"\"\n Stores a time history of messages.\n\n Given a stream of messages, the most recent ``cache_size`` messages\n are cached in a ring buffer, from which time intervals of the cache\n can then be retrieved by the client.\n \"\"\"\n\n def __init__(self, f, cache_size = 1):\n SimpleFilter.__init__(self)\n self.connectInput(f)\n self.cache_size = cache_size\n\n def connectInput(self, f):\n self.incoming_connection = f.registerCallback(self.add)\n\n def add(self, msg):\n # Add msg to cache... XXX TODO\n\n self.signalMessage(msg)\n\nclass TimeSynchronizer(SimpleFilter):\n\n \"\"\"\n Synchronizes messages by their timestamps.\n\n :class:`TimeSynchronizer` synchronizes incoming message filters by the\n timestamps contained in their messages' headers. TimeSynchronizer\n listens on multiple input message filters ``fs``, and invokes the callback\n when it has a collection of messages with matching timestamps.\n\n The signature of the callback function is::\n\n def callback(msg1, ... msgN):\n\n where N is the number of input message filters, and each message is\n the output of the corresponding filter in ``fs``.\n The required ``queue size`` parameter specifies how many sets of\n messages it should store from each input filter (by timestamp)\n while waiting for messages to arrive and complete their \"set\".\n \"\"\"\n\n def __init__(self, fs, queue_size):\n SimpleFilter.__init__(self)\n self.connectInput(fs)\n self.queue_size = queue_size\n self.lock = threading.Lock()\n\n def connectInput(self, fs):\n self.queues = [{} for f in fs]\n self.input_connections = [f.registerCallback(self.add, q) for (f, q) in zip(fs, self.queues)]\n\n def add(self, msg, my_queue):\n self.lock.acquire()\n my_queue[msg.header.stamp] = msg\n while len(my_queue) > self.queue_size:\n del my_queue[min(my_queue)]\n # common is the set of timestamps that occur in all queues\n common = reduce(set.intersection, [set(q) for q in self.queues])\n for t in sorted(common):\n # msgs is list of msgs (one from each queue) with stamp t\n msgs = [q[t] for q in self.queues]\n self.signalMessage(*msgs)\n for q in self.queues:\n del q[t]\n self.lock.release()\n\nclass ApproximateTimeSynchronizer(TimeSynchronizer):\n\n \"\"\"\n Approximately synchronizes messages by their timestamps.\n\n :class:`ApproximateTimeSynchronizer` synchronizes incoming message filters by the\n timestamps contained in their messages' headers. 
The API is the same as TimeSynchronizer\n except for an extra `slop` parameter in the constructor that defines the delay (in seconds)\n with which messages can be synchronized\n \"\"\"\n\n def __init__(self, fs, queue_size, slop):\n TimeSynchronizer.__init__(self, fs, queue_size)\n self.slop = rospy.Duration.from_sec(slop)\n\n def add(self, msg, my_queue):\n self.lock.acquire()\n my_queue[msg.header.stamp] = msg\n while len(my_queue) > self.queue_size:\n del my_queue[min(my_queue)]\n for vv in itertools.product(*[list(q.keys()) for q in self.queues]):\n qt = list(zip(self.queues, vv))\n if ( ((max(vv) - min(vv)) < self.slop) and\n (len([1 for q,t in qt if t not in q]) == 0) ):\n msgs = [q[t] for q,t in qt]\n self.signalMessage(*msgs)\n for q,t in qt:\n del q[t]\n self.lock.release()\n", "path": "utilities/message_filters/src/message_filters/__init__.py"}]} | 1,780 | 432 |
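
A hedged usage sketch for the `ApproximateTimeSynchronizer` added in the patch above (ROS1 Python API). The topic names, message types and node name are illustrative assumptions; the constructor signature `(fs, queue_size, slop)` is the one introduced by the patch.

```python
import rospy
import message_filters
from sensor_msgs.msg import CameraInfo, Image  # illustrative message types


def callback(image, camera_info):
    # Called with one message per filter; their stamps differ by < slop seconds.
    rospy.loginfo("got pair at %s", image.header.stamp)


rospy.init_node("approx_sync_example")
image_sub = message_filters.Subscriber("image", Image)
info_sub = message_filters.Subscriber("camera_info", CameraInfo)

# Buffer up to 10 messages per input; stamps within 0.1 s count as a match.
ats = message_filters.ApproximateTimeSynchronizer([image_sub, info_sub], 10, 0.1)
ats.registerCallback(callback)
rospy.spin()
```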
gh_patches_debug_5242 | rasdani/github-patches | git_diff | wemake-services__wemake-python-styleguide-884 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wrong and correct examples of nested annotations mixed up in docs for Union type
# Bug report
<!--
Hi, thanks for submitting a bug. We appreciate that.
But, we will need some information about what's wrong to help you.
-->
## What's wrong
From ``NestedAnnotationsViolation`` class docstring:
```
Example::
# Correct:
Literal[1, 2, 3, "foo", 5, None]
Union[Union[int, str], float]
# Wrong:
Literal[Literal[Literal[1, 2, 3], "foo"], 5, None]
Union[int, str, float]
```
There is a mistake: correct and wrong examples for ``Union`` type are mixed up. If I run flake8 passing it a file which contains ``Union[int, str, float]`` annotation, no error will be raised. And for ``Union[Union[int, str], float]`` WPS702 is raised. Both are expected behaviour I guess, just a mismatch in documentation.
## How is that should be
```
Example::
# Correct:
Literal[1, 2, 3, "foo", 5, None]
Union[int, str, float]
# Wrong:
Literal[Literal[Literal[1, 2, 3], "foo"], 5, None]
Union[Union[int, str], float]
```
<!--
Love wemake-python-styleguide? ❤️
Do you want your bugs to be solved faster?
Please consider supporting our collective:
👉 https://opencollective.com/wemake-python-styleguide/donate
-->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wemake_python_styleguide/violations/annotations.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """
4 These checks help to prevent incorrect usage of Python 3 typing annotations.
5
6 While they maybe of a great help in writing clear
7 and concise code, they still can be abused.
8
9 Once again, these rules are highly subjective. But, we love them.
10
11 .. currentmodule:: wemake_python_styleguide.violations.annotations
12
13 Summary
14 -------
15
16 .. autosummary::
17 :nosignatures:
18
19 LiteralNoneViolation
20 NestedAnnotationsViolation
21
22 Annotation checks
23 ------------------
24
25 .. autoclass:: LiteralNoneViolation
26 .. autoclass:: NestedAnnotationsViolation
27
28 """
29
30 from typing_extensions import final
31
32 from wemake_python_styleguide.violations.base import ASTViolation
33
34
35 @final
36 class LiteralNoneViolation(ASTViolation):
37 """
38 Forbids to use ``Literal[None]`` typing annotation.
39
40 Reasoning:
41 Literal[None] is just the same as None.
42 There's no need to use the first version.
43 It is not type related, it is a consistency rule.
44
45 Solution:
46 Replace ``Literal[None]`` with ``None``.
47
48 Example::
49
50 # Correct:
51 def func(empty: None):
52 '''Empty function.'''
53
54 # Wrong:
55 def func(empty: Literal[None]):
56 '''Empty function.'''
57
58 .. versionadded:: 0.13.0
59
60 """
61
62 code = 701
63 error_template = 'Found useless `Literal[None]` typing annotation'
64
65
66 @final
67 class NestedAnnotationsViolation(ASTViolation):
68 """
69 Forbids use of nested Literal and Union Annotation.
70
71 Reasoning:
72 There is no need to nest certain annotations of the same type.
73 They are exactly equivalent to the flattened version.
74 Use the flattened version for consistency.
75
76 Solution:
77 Flatten consecutively nested ``typing.Literal`` and ``typing.Union``.
78
79 Example::
80 # Correct:
81 Literal[1, 2, 3, "foo", 5, None]
82 Union[Union[int, str], float]
83
84 # Wrong:
85 Literal[Literal[Literal[1, 2, 3], "foo"], 5, None]
86 Union[int, str, float]
87
88 .. versionadded:: 0.13.0
89
90 """
91
92 error_template = 'Found redundant nested typing annotation'
93 code = 702
94
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wemake_python_styleguide/violations/annotations.py b/wemake_python_styleguide/violations/annotations.py
--- a/wemake_python_styleguide/violations/annotations.py
+++ b/wemake_python_styleguide/violations/annotations.py
@@ -79,11 +79,11 @@
Example::
# Correct:
Literal[1, 2, 3, "foo", 5, None]
- Union[Union[int, str], float]
+ Union[int, str, float]
# Wrong:
Literal[Literal[Literal[1, 2, 3], "foo"], 5, None]
- Union[int, str, float]
+ Union[Union[int, str], float]
.. versionadded:: 0.13.0
| {"golden_diff": "diff --git a/wemake_python_styleguide/violations/annotations.py b/wemake_python_styleguide/violations/annotations.py\n--- a/wemake_python_styleguide/violations/annotations.py\n+++ b/wemake_python_styleguide/violations/annotations.py\n@@ -79,11 +79,11 @@\n Example::\n # Correct:\n Literal[1, 2, 3, \"foo\", 5, None]\n- Union[Union[int, str], float]\n+ Union[int, str, float]\n \n # Wrong:\n Literal[Literal[Literal[1, 2, 3], \"foo\"], 5, None]\n- Union[int, str, float]\n+ Union[Union[int, str], float]\n \n .. versionadded:: 0.13.0\n", "issue": "Wrong and correct examples of nested annotations mixed up in docs for Union type\n# Bug report\r\n\r\n<!--\r\nHi, thanks for submitting a bug. We appreciate that.\r\n\r\nBut, we will need some information about what's wrong to help you.\r\n-->\r\n\r\n## What's wrong\r\n\r\nFrom ``NestedAnnotationsViolation`` class docstring:\r\n```\r\nExample::\r\n # Correct:\r\n Literal[1, 2, 3, \"foo\", 5, None]\r\n Union[Union[int, str], float]\r\n\r\n # Wrong:\r\n Literal[Literal[Literal[1, 2, 3], \"foo\"], 5, None]\r\n Union[int, str, float]\r\n```\r\nThere is a mistake: correct and wrong examples for ``Union`` type are mixed up. If I run flake8 passing it a file which contains ``Union[int, str, float]`` annotation, no error will be raised. And for ``Union[Union[int, str], float]`` WPS702 is raised. Both are expected behaviour I guess, just a mismatch in documentation.\r\n## How is that should be\r\n\r\n```\r\nExample::\r\n # Correct:\r\n Literal[1, 2, 3, \"foo\", 5, None]\r\n Union[int, str, float]\r\n\r\n # Wrong:\r\n Literal[Literal[Literal[1, 2, 3], \"foo\"], 5, None]\r\n Union[Union[int, str], float]\r\n```\r\n<!--\r\n\r\n Love wemake-python-styleguide? \u2764\ufe0f\r\n Do you want your bugs to be solved faster?\r\n\r\n Please consider supporting our collective:\r\n\ud83d\udc49 https://opencollective.com/wemake-python-styleguide/donate\r\n\r\n-->\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nThese checks help to prevent incorrect usage of Python 3 typing annotations.\n\nWhile they maybe of a great help in writing clear\nand concise code, they still can be abused.\n\nOnce again, these rules are highly subjective. But, we love them.\n\n.. currentmodule:: wemake_python_styleguide.violations.annotations\n\nSummary\n-------\n\n.. autosummary::\n :nosignatures:\n\n LiteralNoneViolation\n NestedAnnotationsViolation\n\nAnnotation checks\n------------------\n\n.. autoclass:: LiteralNoneViolation\n.. autoclass:: NestedAnnotationsViolation\n\n\"\"\"\n\nfrom typing_extensions import final\n\nfrom wemake_python_styleguide.violations.base import ASTViolation\n\n\n@final\nclass LiteralNoneViolation(ASTViolation):\n \"\"\"\n Forbids to use ``Literal[None]`` typing annotation.\n\n Reasoning:\n Literal[None] is just the same as None.\n There's no need to use the first version.\n It is not type related, it is a consistency rule.\n\n Solution:\n Replace ``Literal[None]`` with ``None``.\n\n Example::\n\n # Correct:\n def func(empty: None):\n '''Empty function.'''\n\n # Wrong:\n def func(empty: Literal[None]):\n '''Empty function.'''\n\n .. 
versionadded:: 0.13.0\n\n \"\"\"\n\n code = 701\n error_template = 'Found useless `Literal[None]` typing annotation'\n\n\n@final\nclass NestedAnnotationsViolation(ASTViolation):\n \"\"\"\n Forbids use of nested Literal and Union Annotation.\n\n Reasoning:\n There is no need to nest certain annotations of the same type.\n They are exactly equivalent to the flattened version.\n Use the flattened version for consistency.\n\n Solution:\n Flatten consecutively nested ``typing.Literal`` and ``typing.Union``.\n\n Example::\n # Correct:\n Literal[1, 2, 3, \"foo\", 5, None]\n Union[Union[int, str], float]\n\n # Wrong:\n Literal[Literal[Literal[1, 2, 3], \"foo\"], 5, None]\n Union[int, str, float]\n\n .. versionadded:: 0.13.0\n\n \"\"\"\n\n error_template = 'Found redundant nested typing annotation'\n code = 702\n", "path": "wemake_python_styleguide/violations/annotations.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nThese checks help to prevent incorrect usage of Python 3 typing annotations.\n\nWhile they maybe of a great help in writing clear\nand concise code, they still can be abused.\n\nOnce again, these rules are highly subjective. But, we love them.\n\n.. currentmodule:: wemake_python_styleguide.violations.annotations\n\nSummary\n-------\n\n.. autosummary::\n :nosignatures:\n\n LiteralNoneViolation\n NestedAnnotationsViolation\n\nAnnotation checks\n------------------\n\n.. autoclass:: LiteralNoneViolation\n.. autoclass:: NestedAnnotationsViolation\n\n\"\"\"\n\nfrom typing_extensions import final\n\nfrom wemake_python_styleguide.violations.base import ASTViolation\n\n\n@final\nclass LiteralNoneViolation(ASTViolation):\n \"\"\"\n Forbids to use ``Literal[None]`` typing annotation.\n\n Reasoning:\n Literal[None] is just the same as None.\n There's no need to use the first version.\n It is not type related, it is a consistency rule.\n\n Solution:\n Replace ``Literal[None]`` with ``None``.\n\n Example::\n\n # Correct:\n def func(empty: None):\n '''Empty function.'''\n\n # Wrong:\n def func(empty: Literal[None]):\n '''Empty function.'''\n\n .. versionadded:: 0.13.0\n\n \"\"\"\n\n code = 701\n error_template = 'Found useless `Literal[None]` typing annotation'\n\n\n@final\nclass NestedAnnotationsViolation(ASTViolation):\n \"\"\"\n Forbids use of nested Literal and Union Annotation.\n\n Reasoning:\n There is no need to nest certain annotations of the same type.\n They are exactly equivalent to the flattened version.\n Use the flattened version for consistency.\n\n Solution:\n Flatten consecutively nested ``typing.Literal`` and ``typing.Union``.\n\n Example::\n # Correct:\n Literal[1, 2, 3, \"foo\", 5, None]\n Union[int, str, float]\n\n # Wrong:\n Literal[Literal[Literal[1, 2, 3], \"foo\"], 5, None]\n Union[Union[int, str], float]\n\n .. versionadded:: 0.13.0\n\n \"\"\"\n\n error_template = 'Found redundant nested typing annotation'\n code = 702\n", "path": "wemake_python_styleguide/violations/annotations.py"}]} | 1,322 | 182 |
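
For reference, a short snippet of what WPS701/WPS702 accept and flag once the docstring examples are swapped back (assuming Python 3.8+, or `typing_extensions` for `Literal` on older versions):

```python
from typing import Literal, Union

flat_literal: Literal[1, 2, 3, "foo", 5, None]   # ok
flat_union: Union[int, str, float]               # ok

nested_literal: Literal[Literal[Literal[1, 2, 3], "foo"], 5, None]  # WPS702
nested_union: Union[Union[int, str], float]                         # WPS702
none_literal: Literal[None]                      # WPS701: just use None
```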
gh_patches_debug_1372 | rasdani/github-patches | git_diff | wright-group__WrightTools-361 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
collection.keys returns data objects
It should return names, but data objects are returned instead.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `WrightTools/collection/_collection.py`
Content:
```
1 """Collection."""
2
3
4 # --- import --------------------------------------------------------------------------------------
5
6
7 import os
8 import shutil
9
10 import numpy as np
11
12 import h5py
13
14 from .. import data as wt_data
15 from .._base import Group
16
17
18 # --- define --------------------------------------------------------------------------------------
19
20
21 __all__ = ['Collection']
22
23
24 # --- classes -------------------------------------------------------------------------------------
25
26
27 class Collection(Group):
28 """Nestable Collection of Data objects."""
29 class_name = 'Collection'
30
31 def __iter__(self):
32 self.__n = 0
33 return self
34
35 def __len__(self):
36 return len(self.item_names)
37
38 def __next__(self):
39 if self.__n < len(self):
40 out = self[self.__n]
41 self.__n += 1
42 else:
43 raise StopIteration
44 return out
45
46 def __repr__(self):
47 return '<WrightTools.Collection \'{0}\' {1} at {2}>'.format(self.natural_name,
48 self.item_names,
49 '::'.join([self.filepath,
50 self.name]))
51
52 def __getitem__(self, key):
53 if isinstance(key, int):
54 key = self.item_names[key]
55 out = h5py.Group.__getitem__(self, key)
56 if 'class' in out.attrs.keys():
57 if out.attrs['class'] == 'Data':
58 return wt_data.Data(filepath=self.filepath, parent=self.name, name=key,
59 edit_local=True)
60 elif out.attrs['class'] == 'Collection':
61 return Collection(filepath=self.filepath, parent=self.name, name=key,
62 edit_local=True)
63 else:
64 return Group(filepath=self.filepath, parent=self.name, name=key,
65 edit_local=True)
66 else:
67 return out
68
69 def __setitem__(self, key, value):
70 raise NotImplementedError
71
72 @property
73 def item_names(self):
74 if 'item_names' not in self.attrs.keys():
75 self.attrs['item_names'] = np.array([], dtype='S')
76 return [s.decode() for s in self.attrs['item_names']]
77
78 def create_collection(self, name='collection', position=None, **kwargs):
79 collection = Collection(filepath=self.filepath, parent=self.name, name=name,
80 edit_local=True, **kwargs)
81 if position is None:
82 self._items.append(collection)
83 self.attrs['item_names'] = np.append(self.attrs['item_names'],
84 collection.natural_name.encode())
85 else:
86 self._items.insert(position, collection)
87 self.attrs['item_names'] = np.insert(self.attrs['item_names'], position,
88 collection.natural_name.encode())
89 setattr(self, name, collection)
90 return collection
91
92 def create_data(self, name='data', position=None, **kwargs):
93 data = wt_data.Data(filepath=self.filepath, parent=self.name, name=name, edit_local=True,
94 **kwargs)
95 if position is None:
96 self._items.append(data)
97 self.attrs['item_names'] = np.append(self.attrs['item_names'],
98 data.natural_name.encode())
99 else:
100 self._items.insert(position, data)
101 self.attrs['item_names'] = np.insert(self.attrs['item_names'], position,
102 data.natural_name.encode())
103 setattr(self, name, data)
104 return data
105
106 def index(self):
107 raise NotImplementedError
108
109 def flush(self):
110 for item in self._items:
111 item.flush()
112 self.file.flush()
113
114 def save(self, filepath=None, verbose=True):
115 # TODO: documentation
116 self.flush() # ensure all changes are written to file
117 if filepath is None:
118 filepath = os.path.join(os.getcwd(), self.natural_name + '.wt5')
119 elif len(os.path.basename(filepath).split('.')) == 1:
120 filepath += '.wt5'
121 filepath = os.path.expanduser(filepath)
122 shutil.copyfile(src=self.filepath, dst=filepath)
123 if verbose:
124 print('file saved at', filepath)
125 return filepath
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/WrightTools/collection/_collection.py b/WrightTools/collection/_collection.py
--- a/WrightTools/collection/_collection.py
+++ b/WrightTools/collection/_collection.py
@@ -37,7 +37,7 @@
def __next__(self):
if self.__n < len(self):
- out = self[self.__n]
+ out = self.item_names[self.__n]
self.__n += 1
else:
raise StopIteration
| {"golden_diff": "diff --git a/WrightTools/collection/_collection.py b/WrightTools/collection/_collection.py\n--- a/WrightTools/collection/_collection.py\n+++ b/WrightTools/collection/_collection.py\n@@ -37,7 +37,7 @@\n \n def __next__(self):\n if self.__n < len(self):\n- out = self[self.__n]\n+ out = self.item_names[self.__n]\n self.__n += 1\n else:\n raise StopIteration\n", "issue": "collection.keys returns data objects\nshould return names, but get objects\n", "before_files": [{"content": "\"\"\"Collection.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport os\nimport shutil\n\nimport numpy as np\n\nimport h5py\n\nfrom .. import data as wt_data\nfrom .._base import Group\n\n\n# --- define --------------------------------------------------------------------------------------\n\n\n__all__ = ['Collection']\n\n\n# --- classes -------------------------------------------------------------------------------------\n\n\nclass Collection(Group):\n \"\"\"Nestable Collection of Data objects.\"\"\"\n class_name = 'Collection'\n\n def __iter__(self):\n self.__n = 0\n return self\n\n def __len__(self):\n return len(self.item_names)\n\n def __next__(self):\n if self.__n < len(self):\n out = self[self.__n]\n self.__n += 1\n else:\n raise StopIteration\n return out\n\n def __repr__(self):\n return '<WrightTools.Collection \\'{0}\\' {1} at {2}>'.format(self.natural_name,\n self.item_names,\n '::'.join([self.filepath,\n self.name]))\n\n def __getitem__(self, key):\n if isinstance(key, int):\n key = self.item_names[key]\n out = h5py.Group.__getitem__(self, key)\n if 'class' in out.attrs.keys():\n if out.attrs['class'] == 'Data':\n return wt_data.Data(filepath=self.filepath, parent=self.name, name=key,\n edit_local=True)\n elif out.attrs['class'] == 'Collection':\n return Collection(filepath=self.filepath, parent=self.name, name=key,\n edit_local=True)\n else:\n return Group(filepath=self.filepath, parent=self.name, name=key,\n edit_local=True)\n else:\n return out\n\n def __setitem__(self, key, value):\n raise NotImplementedError\n\n @property\n def item_names(self):\n if 'item_names' not in self.attrs.keys():\n self.attrs['item_names'] = np.array([], dtype='S')\n return [s.decode() for s in self.attrs['item_names']]\n\n def create_collection(self, name='collection', position=None, **kwargs):\n collection = Collection(filepath=self.filepath, parent=self.name, name=name,\n edit_local=True, **kwargs)\n if position is None:\n self._items.append(collection)\n self.attrs['item_names'] = np.append(self.attrs['item_names'],\n collection.natural_name.encode())\n else:\n self._items.insert(position, collection)\n self.attrs['item_names'] = np.insert(self.attrs['item_names'], position,\n collection.natural_name.encode())\n setattr(self, name, collection)\n return collection\n\n def create_data(self, name='data', position=None, **kwargs):\n data = wt_data.Data(filepath=self.filepath, parent=self.name, name=name, edit_local=True,\n **kwargs)\n if position is None:\n self._items.append(data)\n self.attrs['item_names'] = np.append(self.attrs['item_names'],\n data.natural_name.encode())\n else:\n self._items.insert(position, data)\n self.attrs['item_names'] = np.insert(self.attrs['item_names'], position,\n data.natural_name.encode())\n setattr(self, name, data)\n return data\n\n def index(self):\n raise NotImplementedError\n\n def flush(self):\n for item in self._items:\n item.flush()\n self.file.flush()\n\n def save(self, filepath=None, 
verbose=True):\n # TODO: documentation\n self.flush() # ensure all changes are written to file\n if filepath is None:\n filepath = os.path.join(os.getcwd(), self.natural_name + '.wt5')\n elif len(os.path.basename(filepath).split('.')) == 1:\n filepath += '.wt5'\n filepath = os.path.expanduser(filepath)\n shutil.copyfile(src=self.filepath, dst=filepath)\n if verbose:\n print('file saved at', filepath)\n return filepath\n", "path": "WrightTools/collection/_collection.py"}], "after_files": [{"content": "\"\"\"Collection.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport os\nimport shutil\n\nimport numpy as np\n\nimport h5py\n\nfrom .. import data as wt_data\nfrom .._base import Group\n\n\n# --- define --------------------------------------------------------------------------------------\n\n\n__all__ = ['Collection']\n\n\n# --- classes -------------------------------------------------------------------------------------\n\n\nclass Collection(Group):\n \"\"\"Nestable Collection of Data objects.\"\"\"\n class_name = 'Collection'\n\n def __iter__(self):\n self.__n = 0\n return self\n\n def __len__(self):\n return len(self.item_names)\n\n def __next__(self):\n if self.__n < len(self):\n out = self.item_names[self.__n]\n self.__n += 1\n else:\n raise StopIteration\n return out\n\n def __repr__(self):\n return '<WrightTools.Collection \\'{0}\\' {1} at {2}>'.format(self.natural_name,\n self.item_names,\n '::'.join([self.filepath,\n self.name]))\n\n def __getitem__(self, key):\n if isinstance(key, int):\n key = self.item_names[key]\n out = h5py.Group.__getitem__(self, key)\n if 'class' in out.attrs.keys():\n if out.attrs['class'] == 'Data':\n return wt_data.Data(filepath=self.filepath, parent=self.name, name=key,\n edit_local=True)\n elif out.attrs['class'] == 'Collection':\n return Collection(filepath=self.filepath, parent=self.name, name=key,\n edit_local=True)\n else:\n return Group(filepath=self.filepath, parent=self.name, name=key,\n edit_local=True)\n else:\n return out\n\n def __setitem__(self, key, value):\n raise NotImplementedError\n\n @property\n def item_names(self):\n if 'item_names' not in self.attrs.keys():\n self.attrs['item_names'] = np.array([], dtype='S')\n return [s.decode() for s in self.attrs['item_names']]\n\n def create_collection(self, name='collection', position=None, **kwargs):\n collection = Collection(filepath=self.filepath, parent=self.name, name=name,\n edit_local=True, **kwargs)\n if position is None:\n self._items.append(collection)\n self.attrs['item_names'] = np.append(self.attrs['item_names'],\n collection.natural_name.encode())\n else:\n self._items.insert(position, collection)\n self.attrs['item_names'] = np.insert(self.attrs['item_names'], position,\n collection.natural_name.encode())\n setattr(self, name, collection)\n return collection\n\n def create_data(self, name='data', position=None, **kwargs):\n data = wt_data.Data(filepath=self.filepath, parent=self.name, name=name, edit_local=True,\n **kwargs)\n if position is None:\n self._items.append(data)\n self.attrs['item_names'] = np.append(self.attrs['item_names'],\n data.natural_name.encode())\n else:\n self._items.insert(position, data)\n self.attrs['item_names'] = np.insert(self.attrs['item_names'], position,\n data.natural_name.encode())\n setattr(self, name, data)\n return data\n\n def index(self):\n raise NotImplementedError\n\n def flush(self):\n for item in self._items:\n item.flush()\n self.file.flush()\n\n def save(self, 
filepath=None, verbose=True):\n # TODO: documentation\n self.flush() # ensure all changes are written to file\n if filepath is None:\n filepath = os.path.join(os.getcwd(), self.natural_name + '.wt5')\n elif len(os.path.basename(filepath).split('.')) == 1:\n filepath += '.wt5'\n filepath = os.path.expanduser(filepath)\n shutil.copyfile(src=self.filepath, dst=filepath)\n if verbose:\n print('file saved at', filepath)\n return filepath\n", "path": "WrightTools/collection/_collection.py"}]} | 1,387 | 109 |
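
A sketch of the post-patch iteration behaviour. `root` is assumed to be an already-open WrightTools `Collection` (hypothetical name); `create_data` and `__getitem__` are the methods shown in the file above.

```python
scan = root.create_data(name="scan")
calib = root.create_data(name="calibration")

for name in root:          # after the patch: 'scan', 'calibration' (strings)
    child = root[name]     # indexing still returns the Data object itself
    print(name, child.natural_name)
```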
gh_patches_debug_27078 | rasdani/github-patches | git_diff | dotkom__onlineweb4-524 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
(User creation) Some fields are required that should not be
Address, zip and phone should not be required. People will complain.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/authentication/forms.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 import datetime
4 import re
5
6 from django import forms
7 from django.contrib import auth
8 from django.utils.translation import ugettext as _
9
10 from apps.authentication.models import OnlineUser as User, Email
11
12 class LoginForm(forms.Form):
13 username = forms.CharField(widget=forms.TextInput(), label=_("Brukernavn"), max_length=50)
14 password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u"Passord"))
15 user = None
16
17 def clean(self):
18 if self._errors:
19 return
20
21 user = auth.authenticate(username=self.cleaned_data['username'], password=self.cleaned_data['password'])
22
23 if user:
24 if user.is_active:
25 self.user = user
26 else:
27 self._errors['username'] = self.error_class([_(u"Din konto er ikke aktiv. Forsøk gjenoppretning av passord.")])
28 else:
29 self._errors['username'] = self.error_class([_(u"Kontoen eksisterer ikke, eller kombinasjonen av brukernavn og passord er feil.")])
30 return self.cleaned_data
31
32 def login(self, request):
33 try:
34 User.objects.get(username=request.POST['username'])
35 except:
36 return False
37 if self.is_valid():
38 auth.login(request, self.user)
39 request.session.set_expiry(0)
40 return True
41 return False
42
43 class RegisterForm(forms.Form):
44 username = forms.CharField(label=_("Brukernavn"), max_length=20)
45 first_name = forms.CharField(label=_("Fornavn"), max_length=50)
46 last_name = forms.CharField(label=_("Etternavn"), max_length=50)
47 email = forms.EmailField(label=_("Epost"), max_length=50)
48 password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_("Passord"))
49 repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_("Gjenta passord"))
50 address = forms.CharField(label=_("Adresse"), max_length=50)
51 zip_code = forms.CharField(label=_("Postnummer"), max_length=4)
52 phone = forms.CharField(label=_("Telefon"), max_length=20)
53
54 def clean(self):
55 super(RegisterForm, self).clean()
56 if self.is_valid():
57 cleaned_data = self.cleaned_data
58
59 # Check passwords
60 if cleaned_data['password'] != cleaned_data['repeat_password']:
61 self._errors['repeat_password'] = self.error_class([_(u"Passordene er ikke like.")])
62
63 # Check username
64 username = cleaned_data['username']
65 if User.objects.filter(username=username).count() > 0:
66 self._errors['username'] = self.error_class([_(u"Brukernavnet er allerede registrert.")])
67 if not re.match("^[a-zA-Z0-9_-]+$", username):
68 self._errors['username'] = self.error_class([_(u"Ditt brukernavn inneholdt ulovlige tegn. Lovlige tegn: a-Z 0-9 - _")])
69
70 # Check email
71 email = cleaned_data['email']
72 if Email.objects.filter(email=email).count() > 0:
73 self._errors['email'] = self.error_class([_(u"Det fins allerede en bruker med denne epostadressen.")])
74
75 # ZIP code digits only
76 zip_code = cleaned_data['zip_code']
77 if len(zip_code) != 4 or not zip_code.isdigit():
78 self._errors['zip_code'] = self.error_class([_(u"Postnummer må bestå av fire siffer.")])
79
80 return cleaned_data
81
82 class RecoveryForm(forms.Form):
83 email = forms.EmailField(label="Email", max_length=50)
84
85 class ChangePasswordForm(forms.Form):
86 new_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u"Nytt passord"))
87 repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u"Gjenta passord"))
88
89 def clean(self):
90 super(ChangePasswordForm, self).clean()
91 if self.is_valid():
92 cleaned_data = self.cleaned_data
93
94 # Check passwords
95 if cleaned_data['new_password'] != cleaned_data['repeat_password']:
96 self._errors['repeat_password'] = self.error_class([_(u"Passordene er ikke like.")])
97
98 return cleaned_data
99
100
101 class NewEmailForm(forms.Form):
102 new_email = forms.EmailField(_(u"ny epostadresse"))
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/authentication/forms.py b/apps/authentication/forms.py
--- a/apps/authentication/forms.py
+++ b/apps/authentication/forms.py
@@ -47,9 +47,9 @@
email = forms.EmailField(label=_("Epost"), max_length=50)
password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_("Passord"))
repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_("Gjenta passord"))
- address = forms.CharField(label=_("Adresse"), max_length=50)
- zip_code = forms.CharField(label=_("Postnummer"), max_length=4)
- phone = forms.CharField(label=_("Telefon"), max_length=20)
+ address = forms.CharField(label=_("Adresse"), max_length=50, required=False)
+ zip_code = forms.CharField(label=_("Postnummer"), max_length=4, required=False)
+ phone = forms.CharField(label=_("Telefon"), max_length=20, required=False)
def clean(self):
super(RegisterForm, self).clean()
@@ -74,8 +74,9 @@
# ZIP code digits only
zip_code = cleaned_data['zip_code']
- if len(zip_code) != 4 or not zip_code.isdigit():
- self._errors['zip_code'] = self.error_class([_(u"Postnummer må bestå av fire siffer.")])
+ if len(zip_code) != 0:
+ if len(zip_code) != 4 or len(zip_code) != 0 or not zip_code.isdigit():
+ self._errors['zip_code'] = self.error_class([_(u"Postnummer må bestå av fire siffer.")])
return cleaned_data
| {"golden_diff": "diff --git a/apps/authentication/forms.py b/apps/authentication/forms.py\n--- a/apps/authentication/forms.py\n+++ b/apps/authentication/forms.py\n@@ -47,9 +47,9 @@\n email = forms.EmailField(label=_(\"Epost\"), max_length=50)\n password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(\"Passord\"))\n repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(\"Gjenta passord\"))\n- address = forms.CharField(label=_(\"Adresse\"), max_length=50)\n- zip_code = forms.CharField(label=_(\"Postnummer\"), max_length=4)\n- phone = forms.CharField(label=_(\"Telefon\"), max_length=20)\n+ address = forms.CharField(label=_(\"Adresse\"), max_length=50, required=False)\n+ zip_code = forms.CharField(label=_(\"Postnummer\"), max_length=4, required=False)\n+ phone = forms.CharField(label=_(\"Telefon\"), max_length=20, required=False)\n \n def clean(self):\n super(RegisterForm, self).clean()\n@@ -74,8 +74,9 @@\n \n # ZIP code digits only\n zip_code = cleaned_data['zip_code']\n- if len(zip_code) != 4 or not zip_code.isdigit():\n- self._errors['zip_code'] = self.error_class([_(u\"Postnummer m\u00e5 best\u00e5 av fire siffer.\")])\n+ if len(zip_code) != 0:\n+ if len(zip_code) != 4 or len(zip_code) != 0 or not zip_code.isdigit():\n+ self._errors['zip_code'] = self.error_class([_(u\"Postnummer m\u00e5 best\u00e5 av fire siffer.\")])\n \n return cleaned_data\n", "issue": "(User creation) Some fields are required that should not be\nAddress, zip and phone should not be required. People will complain.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport datetime\nimport re\n\nfrom django import forms\nfrom django.contrib import auth\nfrom django.utils.translation import ugettext as _\n\nfrom apps.authentication.models import OnlineUser as User, Email\n\nclass LoginForm(forms.Form):\n username = forms.CharField(widget=forms.TextInput(), label=_(\"Brukernavn\"), max_length=50)\n password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u\"Passord\"))\n user = None\n\n def clean(self):\n if self._errors:\n return\n \n user = auth.authenticate(username=self.cleaned_data['username'], password=self.cleaned_data['password'])\n\n if user:\n if user.is_active:\n self.user = user\n else:\n self._errors['username'] = self.error_class([_(u\"Din konto er ikke aktiv. 
Fors\u00f8k gjenoppretning av passord.\")])\n else:\n self._errors['username'] = self.error_class([_(u\"Kontoen eksisterer ikke, eller kombinasjonen av brukernavn og passord er feil.\")])\n return self.cleaned_data\n\n def login(self, request):\n try:\n User.objects.get(username=request.POST['username'])\n except:\n return False\n if self.is_valid():\n auth.login(request, self.user)\n request.session.set_expiry(0)\n return True\n return False\n\nclass RegisterForm(forms.Form):\n username = forms.CharField(label=_(\"Brukernavn\"), max_length=20)\n first_name = forms.CharField(label=_(\"Fornavn\"), max_length=50)\n last_name = forms.CharField(label=_(\"Etternavn\"), max_length=50)\n email = forms.EmailField(label=_(\"Epost\"), max_length=50)\n password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(\"Passord\"))\n repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(\"Gjenta passord\"))\n address = forms.CharField(label=_(\"Adresse\"), max_length=50)\n zip_code = forms.CharField(label=_(\"Postnummer\"), max_length=4)\n phone = forms.CharField(label=_(\"Telefon\"), max_length=20)\n \n def clean(self):\n super(RegisterForm, self).clean()\n if self.is_valid():\n cleaned_data = self.cleaned_data\n\n # Check passwords\n if cleaned_data['password'] != cleaned_data['repeat_password']:\n self._errors['repeat_password'] = self.error_class([_(u\"Passordene er ikke like.\")])\n\n # Check username\n username = cleaned_data['username']\n if User.objects.filter(username=username).count() > 0:\n self._errors['username'] = self.error_class([_(u\"Brukernavnet er allerede registrert.\")])\n if not re.match(\"^[a-zA-Z0-9_-]+$\", username):\n self._errors['username'] = self.error_class([_(u\"Ditt brukernavn inneholdt ulovlige tegn. 
Lovlige tegn: a-Z 0-9 - _\")])\n\n # Check email\n email = cleaned_data['email']\n if Email.objects.filter(email=email).count() > 0:\n self._errors['email'] = self.error_class([_(u\"Det fins allerede en bruker med denne epostadressen.\")])\n\n # ZIP code digits only\n zip_code = cleaned_data['zip_code']\n if len(zip_code) != 4 or not zip_code.isdigit():\n self._errors['zip_code'] = self.error_class([_(u\"Postnummer m\u00e5 best\u00e5 av fire siffer.\")])\n\n return cleaned_data \n\nclass RecoveryForm(forms.Form):\n email = forms.EmailField(label=\"Email\", max_length=50)\n\nclass ChangePasswordForm(forms.Form):\n new_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u\"Nytt passord\"))\n repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u\"Gjenta passord\"))\n\n def clean(self):\n super(ChangePasswordForm, self).clean()\n if self.is_valid():\n cleaned_data = self.cleaned_data\n\n # Check passwords\n if cleaned_data['new_password'] != cleaned_data['repeat_password']:\n self._errors['repeat_password'] = self.error_class([_(u\"Passordene er ikke like.\")])\n\n return cleaned_data\n\n\nclass NewEmailForm(forms.Form):\n new_email = forms.EmailField(_(u\"ny epostadresse\"))\n", "path": "apps/authentication/forms.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport datetime\nimport re\n\nfrom django import forms\nfrom django.contrib import auth\nfrom django.utils.translation import ugettext as _\n\nfrom apps.authentication.models import OnlineUser as User, Email\n\nclass LoginForm(forms.Form):\n username = forms.CharField(widget=forms.TextInput(), label=_(\"Brukernavn\"), max_length=50)\n password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u\"Passord\"))\n user = None\n\n def clean(self):\n if self._errors:\n return\n \n user = auth.authenticate(username=self.cleaned_data['username'], password=self.cleaned_data['password'])\n\n if user:\n if user.is_active:\n self.user = user\n else:\n self._errors['username'] = self.error_class([_(u\"Din konto er ikke aktiv. 
Fors\u00f8k gjenoppretning av passord.\")])\n else:\n self._errors['username'] = self.error_class([_(u\"Kontoen eksisterer ikke, eller kombinasjonen av brukernavn og passord er feil.\")])\n return self.cleaned_data\n\n def login(self, request):\n try:\n User.objects.get(username=request.POST['username'])\n except:\n return False\n if self.is_valid():\n auth.login(request, self.user)\n request.session.set_expiry(0)\n return True\n return False\n\nclass RegisterForm(forms.Form):\n username = forms.CharField(label=_(\"Brukernavn\"), max_length=20)\n first_name = forms.CharField(label=_(\"Fornavn\"), max_length=50)\n last_name = forms.CharField(label=_(\"Etternavn\"), max_length=50)\n email = forms.EmailField(label=_(\"Epost\"), max_length=50)\n password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(\"Passord\"))\n repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(\"Gjenta passord\"))\n address = forms.CharField(label=_(\"Adresse\"), max_length=50, required=False)\n zip_code = forms.CharField(label=_(\"Postnummer\"), max_length=4, required=False)\n phone = forms.CharField(label=_(\"Telefon\"), max_length=20, required=False)\n \n def clean(self):\n super(RegisterForm, self).clean()\n if self.is_valid():\n cleaned_data = self.cleaned_data\n\n # Check passwords\n if cleaned_data['password'] != cleaned_data['repeat_password']:\n self._errors['repeat_password'] = self.error_class([_(u\"Passordene er ikke like.\")])\n\n # Check username\n username = cleaned_data['username']\n if User.objects.filter(username=username).count() > 0:\n self._errors['username'] = self.error_class([_(u\"Brukernavnet er allerede registrert.\")])\n if not re.match(\"^[a-zA-Z0-9_-]+$\", username):\n self._errors['username'] = self.error_class([_(u\"Ditt brukernavn inneholdt ulovlige tegn. Lovlige tegn: a-Z 0-9 - _\")])\n\n # Check email\n email = cleaned_data['email']\n if Email.objects.filter(email=email).count() > 0:\n self._errors['email'] = self.error_class([_(u\"Det fins allerede en bruker med denne epostadressen.\")])\n\n # ZIP code digits only\n zip_code = cleaned_data['zip_code']\n if len(zip_code) != 0:\n if len(zip_code) != 4 or len(zip_code) != 0 or not zip_code.isdigit():\n self._errors['zip_code'] = self.error_class([_(u\"Postnummer m\u00e5 best\u00e5 av fire siffer.\")])\n\n return cleaned_data \n\nclass RecoveryForm(forms.Form):\n email = forms.EmailField(label=\"Email\", max_length=50)\n\nclass ChangePasswordForm(forms.Form):\n new_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u\"Nytt passord\"))\n repeat_password = forms.CharField(widget=forms.PasswordInput(render_value=False), label=_(u\"Gjenta passord\"))\n\n def clean(self):\n super(ChangePasswordForm, self).clean()\n if self.is_valid():\n cleaned_data = self.cleaned_data\n\n # Check passwords\n if cleaned_data['new_password'] != cleaned_data['repeat_password']:\n self._errors['repeat_password'] = self.error_class([_(u\"Passordene er ikke like.\")])\n\n return cleaned_data\n\n\nclass NewEmailForm(forms.Form):\n new_email = forms.EmailField(_(u\"ny epostadresse\"))\n", "path": "apps/authentication/forms.py"}]} | 1,450 | 364 |
gh_patches_debug_27889 | rasdani/github-patches | git_diff | bridgecrewio__checkov-5753 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Azure Function App Slots - Ensure Azure Function App Slots use at least TLS 1.2
**Describe the issue**
It seems that there are no checks that ensure that the following resources to use at least TLS 1.2 :
- azurerm_function_app_slot
- azurerm_linux_function_app_slot
- azurerm_windows_function_app_slot
**Examples**
````hcl
````hcl
resource "azurerm_function_app_slot" "example" {
name = "test-azure-functions_slot"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
app_service_plan_id = azurerm_app_service_plan.example.id
function_app_name = azurerm_function_app.example.name
storage_account_name = azurerm_storage_account.example.name
storage_account_access_key = azurerm_storage_account.example.primary_access_key
site_config {
min_tls_version = 1.2
}
}
resource "azurerm_linux_function_app_slot" "example" {
name = "example-linux-function-app-slot"
function_app_id = azurerm_linux_function_app.example.id
storage_account_name = azurerm_storage_account.example.name
site_config {
min_tls_version = 1.2
}
}
resource "azurerm_windows_function_app" "example" {
name = "example-windows-function-app"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
storage_account_name = azurerm_storage_account.example.name
service_plan_id = azurerm_service_plan.example.id
site_config {
min_tls_version = 1.2
}
}
````
**Version (please complete the following information):**
- N/A
**Additional context**
N/A
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py`
Content:
```
1 from checkov.common.models.enums import CheckResult, CheckCategories
2 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
3
4
5 class FunctionAppMinTLSVersion(BaseResourceValueCheck):
6 def __init__(self):
7 name = "Ensure Function app is using the latest version of TLS encryption"
8 id = "CKV_AZURE_145"
9 supported_resources = ['azurerm_function_app']
10 categories = [CheckCategories.NETWORKING]
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,
12 missing_block_result=CheckResult.PASSED)
13
14 def get_inspected_key(self):
15 return "site_config/[0]/min_tls_version"
16
17 def get_expected_value(self):
18 return 1.2
19
20 def get_expected_values(self):
21 return ["1.2", 1.2]
22
23
24 check = FunctionAppMinTLSVersion()
25
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py b/checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py
--- a/checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py
+++ b/checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py
@@ -4,15 +4,27 @@
class FunctionAppMinTLSVersion(BaseResourceValueCheck):
def __init__(self):
+ """
+ The minimum supported TLS version for the function app.
+ Defaults to 1.2 for new function apps.
+ field name is:
+ - min_tls_version in azurerm_function_app, azurerm_function_app_slot.
+ - minimum_tls_version in newer resources (with linux/windows).
+ """
name = "Ensure Function app is using the latest version of TLS encryption"
id = "CKV_AZURE_145"
- supported_resources = ['azurerm_function_app']
+ supported_resources = ['azurerm_function_app', 'azurerm_linux_function_app', 'azurerm_windows_function_app',
+ 'azurerm_function_app_slot', 'azurerm_linux_function_app_slot',
+ 'azurerm_windows_function_app_slot']
categories = [CheckCategories.NETWORKING]
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,
missing_block_result=CheckResult.PASSED)
def get_inspected_key(self):
- return "site_config/[0]/min_tls_version"
+ if self.entity_type in ("azurerm_function_app", "azurerm_function_app_slot"):
+ return "site_config/[0]/min_tls_version"
+ else:
+ return "site_config/[0]/minimum_tls_version"
def get_expected_value(self):
return 1.2
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py b/checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py\n--- a/checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py\n+++ b/checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py\n@@ -4,15 +4,27 @@\n \n class FunctionAppMinTLSVersion(BaseResourceValueCheck):\n def __init__(self):\n+ \"\"\"\n+ The minimum supported TLS version for the function app.\n+ Defaults to 1.2 for new function apps.\n+ field name is:\n+ - min_tls_version in azurerm_function_app, azurerm_function_app_slot.\n+ - minimum_tls_version in newer resources (with linux/windows).\n+ \"\"\"\n name = \"Ensure Function app is using the latest version of TLS encryption\"\n id = \"CKV_AZURE_145\"\n- supported_resources = ['azurerm_function_app']\n+ supported_resources = ['azurerm_function_app', 'azurerm_linux_function_app', 'azurerm_windows_function_app',\n+ 'azurerm_function_app_slot', 'azurerm_linux_function_app_slot',\n+ 'azurerm_windows_function_app_slot']\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,\n missing_block_result=CheckResult.PASSED)\n \n def get_inspected_key(self):\n- return \"site_config/[0]/min_tls_version\"\n+ if self.entity_type in (\"azurerm_function_app\", \"azurerm_function_app_slot\"):\n+ return \"site_config/[0]/min_tls_version\"\n+ else:\n+ return \"site_config/[0]/minimum_tls_version\"\n \n def get_expected_value(self):\n return 1.2\n", "issue": "Azure Function App Slots - Ensure Azure Function App Slots use at least TLS 1.2 \n**Describe the issue**\r\nIt seems that there are no checks that ensure that the following resources to use at least TLS 1.2 :\r\n\r\n- azurerm_function_app_slot\r\n- azurerm_linux_function_app_slot\r\n- azurerm_windows_function_app_slot\r\n\r\n**Examples**\r\n\r\n````hcl\r\n````hcl\r\nresource \"azurerm_function_app_slot\" \"example\" {\r\n name = \"test-azure-functions_slot\"\r\n location = azurerm_resource_group.example.location\r\n resource_group_name = azurerm_resource_group.example.name\r\n app_service_plan_id = azurerm_app_service_plan.example.id\r\n function_app_name = azurerm_function_app.example.name\r\n storage_account_name = azurerm_storage_account.example.name\r\n storage_account_access_key = azurerm_storage_account.example.primary_access_key\r\n\r\n site_config {\r\n min_tls_version = 1.2\r\n }\r\n}\r\n\r\nresource \"azurerm_linux_function_app_slot\" \"example\" {\r\n name = \"example-linux-function-app-slot\"\r\n function_app_id = azurerm_linux_function_app.example.id\r\n storage_account_name = azurerm_storage_account.example.name\r\n\r\n site_config {\r\n min_tls_version = 1.2\r\n }\r\n}\r\n\r\nresource \"azurerm_windows_function_app\" \"example\" {\r\n name = \"example-windows-function-app\"\r\n resource_group_name = azurerm_resource_group.example.name\r\n location = azurerm_resource_group.example.location\r\n storage_account_name = azurerm_storage_account.example.name\r\n service_plan_id = azurerm_service_plan.example.id\r\n\r\n site_config {\r\n min_tls_version = 1.2\r\n }\r\n}\r\n````\r\n\r\n**Version (please complete the following information):**\r\n - N/A\r\n \r\n**Additional context**\r\n\r\nN/A\n", "before_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass 
FunctionAppMinTLSVersion(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure Function app is using the latest version of TLS encryption\"\n id = \"CKV_AZURE_145\"\n supported_resources = ['azurerm_function_app']\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,\n missing_block_result=CheckResult.PASSED)\n\n def get_inspected_key(self):\n return \"site_config/[0]/min_tls_version\"\n\n def get_expected_value(self):\n return 1.2\n\n def get_expected_values(self):\n return [\"1.2\", 1.2]\n\n\ncheck = FunctionAppMinTLSVersion()\n", "path": "checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py"}], "after_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass FunctionAppMinTLSVersion(BaseResourceValueCheck):\n def __init__(self):\n \"\"\"\n The minimum supported TLS version for the function app.\n Defaults to 1.2 for new function apps.\n field name is:\n - min_tls_version in azurerm_function_app, azurerm_function_app_slot.\n - minimum_tls_version in newer resources (with linux/windows).\n \"\"\"\n name = \"Ensure Function app is using the latest version of TLS encryption\"\n id = \"CKV_AZURE_145\"\n supported_resources = ['azurerm_function_app', 'azurerm_linux_function_app', 'azurerm_windows_function_app',\n 'azurerm_function_app_slot', 'azurerm_linux_function_app_slot',\n 'azurerm_windows_function_app_slot']\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,\n missing_block_result=CheckResult.PASSED)\n\n def get_inspected_key(self):\n if self.entity_type in (\"azurerm_function_app\", \"azurerm_function_app_slot\"):\n return \"site_config/[0]/min_tls_version\"\n else:\n return \"site_config/[0]/minimum_tls_version\"\n\n def get_expected_value(self):\n return 1.2\n\n def get_expected_values(self):\n return [\"1.2\", 1.2]\n\n\ncheck = FunctionAppMinTLSVersion()\n", "path": "checkov/terraform/checks/resource/azure/FunctionAppMinTLSVersion.py"}]} | 915 | 409 |
gh_patches_debug_9248 | rasdani/github-patches | git_diff | hylang__hy-1813 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
REPL crashes when ~/.hy-history is inaccessible
I've been working on updating the Hy package in Debian and in running the tests, I've noticed that the `test_bin` tests appear to expect to be able to do something with `~/.hy-history` (not sure if it's trying to read or write) but I had to set `HOME` to a real directory after which they passed. Looking at the tests, I imagine it's happening because they're interacting with the REPL via stdin so it's trying to load and/or use the history file?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hy/completer.py`
Content:
```
1 # Copyright 2019 the authors.
2 # This file is part of Hy, which is free software licensed under the Expat
3 # license. See the LICENSE.
4
5 import contextlib
6 import os
7 import re
8 import sys
9 import builtins
10
11 import hy.macros
12 import hy.compiler
13
14
15 docomplete = True
16
17 try:
18 import readline
19 except ImportError:
20 try:
21 import pyreadline.rlmain
22 import pyreadline.unicode_helper # NOQA
23 import readline
24 except ImportError:
25 docomplete = False
26
27 if docomplete:
28 if sys.platform == 'darwin' and 'libedit' in readline.__doc__:
29 readline_bind = "bind ^I rl_complete"
30 else:
31 readline_bind = "tab: complete"
32
33
34 class Completer(object):
35
36 def __init__(self, namespace={}):
37 if not isinstance(namespace, dict):
38 raise TypeError('namespace must be a dictionary')
39 self.namespace = namespace
40 self.path = [hy.compiler._special_form_compilers,
41 builtins.__dict__,
42 namespace]
43
44 self.tag_path = []
45
46 namespace.setdefault('__macros__', {})
47 namespace.setdefault('__tags__', {})
48
49 self.path.append(namespace['__macros__'])
50 self.tag_path.append(namespace['__tags__'])
51
52 def attr_matches(self, text):
53 # Borrowed from IPython's completer
54 m = re.match(r"(\S+(\.[\w-]+)*)\.([\w-]*)$", text)
55
56 if m:
57 expr, attr = m.group(1, 3)
58 attr = attr.replace("-", "_")
59 expr = expr.replace("-", "_")
60 else:
61 return []
62
63 try:
64 obj = eval(expr, self.namespace)
65 words = dir(obj)
66 except Exception:
67 return []
68
69 n = len(attr)
70 matches = []
71 for w in words:
72 if w[:n] == attr:
73 matches.append("{}.{}".format(
74 expr.replace("_", "-"), w.replace("_", "-")))
75 return matches
76
77 def global_matches(self, text):
78 matches = []
79 for p in self.path:
80 for k in p.keys():
81 if isinstance(k, str):
82 k = k.replace("_", "-")
83 if k.startswith(text):
84 matches.append(k)
85 return matches
86
87 def tag_matches(self, text):
88 text = text[1:]
89 matches = []
90 for p in self.tag_path:
91 for k in p.keys():
92 if isinstance(k, str):
93 if k.startswith(text):
94 matches.append("#{}".format(k))
95 return matches
96
97 def complete(self, text, state):
98 if text.startswith("#"):
99 matches = self.tag_matches(text)
100 elif "." in text:
101 matches = self.attr_matches(text)
102 else:
103 matches = self.global_matches(text)
104 try:
105 return matches[state]
106 except IndexError:
107 return None
108
109
110 @contextlib.contextmanager
111 def completion(completer=None):
112 delims = "()[]{} "
113 if not completer:
114 completer = Completer()
115
116 if docomplete:
117 readline.set_completer(completer.complete)
118 readline.set_completer_delims(delims)
119
120 history = os.path.expanduser("~/.hy-history")
121 readline.parse_and_bind("set blink-matching-paren on")
122
123 try:
124 readline.read_history_file(history)
125 except IOError:
126 open(history, 'a').close()
127
128 readline.parse_and_bind(readline_bind)
129
130 try:
131 yield
132 finally:
133 if docomplete:
134 readline.write_history_file(history)
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hy/completer.py b/hy/completer.py
--- a/hy/completer.py
+++ b/hy/completer.py
@@ -123,7 +123,7 @@
try:
readline.read_history_file(history)
except IOError:
- open(history, 'a').close()
+ pass
readline.parse_and_bind(readline_bind)
@@ -131,4 +131,7 @@
yield
finally:
if docomplete:
- readline.write_history_file(history)
+ try:
+ readline.write_history_file(history)
+ except IOError:
+ pass
| {"golden_diff": "diff --git a/hy/completer.py b/hy/completer.py\n--- a/hy/completer.py\n+++ b/hy/completer.py\n@@ -123,7 +123,7 @@\n try:\n readline.read_history_file(history)\n except IOError:\n- open(history, 'a').close()\n+ pass\n \n readline.parse_and_bind(readline_bind)\n \n@@ -131,4 +131,7 @@\n yield\n finally:\n if docomplete:\n- readline.write_history_file(history)\n+ try:\n+ readline.write_history_file(history)\n+ except IOError:\n+ pass\n", "issue": "REPL crashes when ~/.hy-history is inaccessible\nI've been working on updating the Hy package in Debian and in running the tests, I've noticed that the `test_bin` tests appear to expect to be able to do something with `~/.hy-history` (not sure if it's trying to read or write) but I had to set `HOME` to a real directory after which they passed. Looking at the tests, I imagine it's happening because they're interacting with the REPL via stdin so it's trying to load and/or use the history file?\n", "before_files": [{"content": "# Copyright 2019 the authors.\n# This file is part of Hy, which is free software licensed under the Expat\n# license. See the LICENSE.\n\nimport contextlib\nimport os\nimport re\nimport sys\nimport builtins\n\nimport hy.macros\nimport hy.compiler\n\n\ndocomplete = True\n\ntry:\n import readline\nexcept ImportError:\n try:\n import pyreadline.rlmain\n import pyreadline.unicode_helper # NOQA\n import readline\n except ImportError:\n docomplete = False\n\nif docomplete:\n if sys.platform == 'darwin' and 'libedit' in readline.__doc__:\n readline_bind = \"bind ^I rl_complete\"\n else:\n readline_bind = \"tab: complete\"\n\n\nclass Completer(object):\n\n def __init__(self, namespace={}):\n if not isinstance(namespace, dict):\n raise TypeError('namespace must be a dictionary')\n self.namespace = namespace\n self.path = [hy.compiler._special_form_compilers,\n builtins.__dict__,\n namespace]\n\n self.tag_path = []\n\n namespace.setdefault('__macros__', {})\n namespace.setdefault('__tags__', {})\n\n self.path.append(namespace['__macros__'])\n self.tag_path.append(namespace['__tags__'])\n\n def attr_matches(self, text):\n # Borrowed from IPython's completer\n m = re.match(r\"(\\S+(\\.[\\w-]+)*)\\.([\\w-]*)$\", text)\n\n if m:\n expr, attr = m.group(1, 3)\n attr = attr.replace(\"-\", \"_\")\n expr = expr.replace(\"-\", \"_\")\n else:\n return []\n\n try:\n obj = eval(expr, self.namespace)\n words = dir(obj)\n except Exception:\n return []\n\n n = len(attr)\n matches = []\n for w in words:\n if w[:n] == attr:\n matches.append(\"{}.{}\".format(\n expr.replace(\"_\", \"-\"), w.replace(\"_\", \"-\")))\n return matches\n\n def global_matches(self, text):\n matches = []\n for p in self.path:\n for k in p.keys():\n if isinstance(k, str):\n k = k.replace(\"_\", \"-\")\n if k.startswith(text):\n matches.append(k)\n return matches\n\n def tag_matches(self, text):\n text = text[1:]\n matches = []\n for p in self.tag_path:\n for k in p.keys():\n if isinstance(k, str):\n if k.startswith(text):\n matches.append(\"#{}\".format(k))\n return matches\n\n def complete(self, text, state):\n if text.startswith(\"#\"):\n matches = self.tag_matches(text)\n elif \".\" in text:\n matches = self.attr_matches(text)\n else:\n matches = self.global_matches(text)\n try:\n return matches[state]\n except IndexError:\n return None\n\n\[email protected]\ndef completion(completer=None):\n delims = \"()[]{} \"\n if not completer:\n completer = Completer()\n\n if docomplete:\n readline.set_completer(completer.complete)\n 
readline.set_completer_delims(delims)\n\n history = os.path.expanduser(\"~/.hy-history\")\n readline.parse_and_bind(\"set blink-matching-paren on\")\n\n try:\n readline.read_history_file(history)\n except IOError:\n open(history, 'a').close()\n\n readline.parse_and_bind(readline_bind)\n\n try:\n yield\n finally:\n if docomplete:\n readline.write_history_file(history)\n", "path": "hy/completer.py"}], "after_files": [{"content": "# Copyright 2019 the authors.\n# This file is part of Hy, which is free software licensed under the Expat\n# license. See the LICENSE.\n\nimport contextlib\nimport os\nimport re\nimport sys\nimport builtins\n\nimport hy.macros\nimport hy.compiler\n\n\ndocomplete = True\n\ntry:\n import readline\nexcept ImportError:\n try:\n import pyreadline.rlmain\n import pyreadline.unicode_helper # NOQA\n import readline\n except ImportError:\n docomplete = False\n\nif docomplete:\n if sys.platform == 'darwin' and 'libedit' in readline.__doc__:\n readline_bind = \"bind ^I rl_complete\"\n else:\n readline_bind = \"tab: complete\"\n\n\nclass Completer(object):\n\n def __init__(self, namespace={}):\n if not isinstance(namespace, dict):\n raise TypeError('namespace must be a dictionary')\n self.namespace = namespace\n self.path = [hy.compiler._special_form_compilers,\n builtins.__dict__,\n namespace]\n\n self.tag_path = []\n\n namespace.setdefault('__macros__', {})\n namespace.setdefault('__tags__', {})\n\n self.path.append(namespace['__macros__'])\n self.tag_path.append(namespace['__tags__'])\n\n def attr_matches(self, text):\n # Borrowed from IPython's completer\n m = re.match(r\"(\\S+(\\.[\\w-]+)*)\\.([\\w-]*)$\", text)\n\n if m:\n expr, attr = m.group(1, 3)\n attr = attr.replace(\"-\", \"_\")\n expr = expr.replace(\"-\", \"_\")\n else:\n return []\n\n try:\n obj = eval(expr, self.namespace)\n words = dir(obj)\n except Exception:\n return []\n\n n = len(attr)\n matches = []\n for w in words:\n if w[:n] == attr:\n matches.append(\"{}.{}\".format(\n expr.replace(\"_\", \"-\"), w.replace(\"_\", \"-\")))\n return matches\n\n def global_matches(self, text):\n matches = []\n for p in self.path:\n for k in p.keys():\n if isinstance(k, str):\n k = k.replace(\"_\", \"-\")\n if k.startswith(text):\n matches.append(k)\n return matches\n\n def tag_matches(self, text):\n text = text[1:]\n matches = []\n for p in self.tag_path:\n for k in p.keys():\n if isinstance(k, str):\n if k.startswith(text):\n matches.append(\"#{}\".format(k))\n return matches\n\n def complete(self, text, state):\n if text.startswith(\"#\"):\n matches = self.tag_matches(text)\n elif \".\" in text:\n matches = self.attr_matches(text)\n else:\n matches = self.global_matches(text)\n try:\n return matches[state]\n except IndexError:\n return None\n\n\[email protected]\ndef completion(completer=None):\n delims = \"()[]{} \"\n if not completer:\n completer = Completer()\n\n if docomplete:\n readline.set_completer(completer.complete)\n readline.set_completer_delims(delims)\n\n history = os.path.expanduser(\"~/.hy-history\")\n readline.parse_and_bind(\"set blink-matching-paren on\")\n\n try:\n readline.read_history_file(history)\n except IOError:\n pass\n\n readline.parse_and_bind(readline_bind)\n\n try:\n yield\n finally:\n if docomplete:\n try:\n readline.write_history_file(history)\n except IOError:\n pass\n", "path": "hy/completer.py"}]} | 1,439 | 143 |
gh_patches_debug_25027 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-2810 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider planned_parenthood is broken
During the global build at 2021-05-26-14-42-23, spider **planned_parenthood** failed with **598 features** and **598 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/planned_parenthood.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/planned_parenthood.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/planned_parenthood.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/planned_parenthood.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import re
4
5 from locations.items import GeojsonPointItem
6
7
8 class PlannedParenthoodSpider(scrapy.Spider):
9 name = "planned_parenthood"
10 item_attributes = { 'brand': "Planned Parenthood" }
11 allowed_domains = ["www.plannedparenthood.org"]
12 start_urls = (
13 'https://www.plannedparenthood.org/health-center',
14 )
15
16 def parse(self, response):
17 state_urls = response.xpath('//ul[@class="quicklist-list"]/li/a/@href').extract()
18 for path in state_urls:
19 yield scrapy.Request(
20 response.urljoin(path),
21 callback=self.parse_state,
22 )
23
24 def parse_state(self, response):
25 venue_urls = response.xpath('//ul[@class="quicklist-list"]/li/p/a/@href').extract()
26 for path in venue_urls:
27 yield scrapy.Request(
28 response.urljoin(path),
29 callback=self.parse_venue,
30 )
31
32 def parse_venue(self, response):
33 properties = {
34 'addr_full': response.xpath('//*[@itemprop="streetAddress"]/text()').extract_first(),
35 'city': response.xpath('//*[@itemprop="addressLocality"]/text()').extract_first(),
36 'state': response.xpath('//*[@itemprop="addressRegion"]/text()').extract_first(),
37 'postcode': response.xpath('//*[@itemprop="postalCode"]/text()').extract_first(),
38 'phone': response.xpath('//a[@itemprop="telephone"][@data-link]/text()').extract_first(),
39 'ref': response.url,
40 'website': response.url,
41 }
42
43 map_image_url = response.xpath('//img[@class="address-map"]/@src').extract_first()
44 match = re.search(r"center=(.*?),(.*?)&zoom", map_image_url)
45 properties['lat'] = float(match.group(1)),
46 properties['lon'] = float(match.group(2)),
47
48 yield GeojsonPointItem(**properties)
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/locations/spiders/planned_parenthood.py b/locations/spiders/planned_parenthood.py
--- a/locations/spiders/planned_parenthood.py
+++ b/locations/spiders/planned_parenthood.py
@@ -27,9 +27,14 @@
yield scrapy.Request(
response.urljoin(path),
callback=self.parse_venue,
+ meta={'dont_redirect':True}
)
def parse_venue(self, response):
+ if response is None:
+ # Ignoring redirects
+ return
+
properties = {
'addr_full': response.xpath('//*[@itemprop="streetAddress"]/text()').extract_first(),
'city': response.xpath('//*[@itemprop="addressLocality"]/text()').extract_first(),
@@ -42,7 +47,7 @@
map_image_url = response.xpath('//img[@class="address-map"]/@src').extract_first()
match = re.search(r"center=(.*?),(.*?)&zoom", map_image_url)
- properties['lat'] = float(match.group(1)),
- properties['lon'] = float(match.group(2)),
+ properties['lat'] = float(match.group(1))
+ properties['lon'] = float(match.group(2))
yield GeojsonPointItem(**properties)
| {"golden_diff": "diff --git a/locations/spiders/planned_parenthood.py b/locations/spiders/planned_parenthood.py\n--- a/locations/spiders/planned_parenthood.py\n+++ b/locations/spiders/planned_parenthood.py\n@@ -27,9 +27,14 @@\n yield scrapy.Request(\n response.urljoin(path),\n callback=self.parse_venue,\n+ meta={'dont_redirect':True}\n )\n \n def parse_venue(self, response):\n+ if response is None:\n+ # Ignoring redirects\n+ return\n+\n properties = {\n 'addr_full': response.xpath('//*[@itemprop=\"streetAddress\"]/text()').extract_first(),\n 'city': response.xpath('//*[@itemprop=\"addressLocality\"]/text()').extract_first(),\n@@ -42,7 +47,7 @@\n \n map_image_url = response.xpath('//img[@class=\"address-map\"]/@src').extract_first()\n match = re.search(r\"center=(.*?),(.*?)&zoom\", map_image_url)\n- properties['lat'] = float(match.group(1)),\n- properties['lon'] = float(match.group(2)),\n+ properties['lat'] = float(match.group(1))\n+ properties['lon'] = float(match.group(2))\n \n yield GeojsonPointItem(**properties)\n", "issue": "Spider planned_parenthood is broken\nDuring the global build at 2021-05-26-14-42-23, spider **planned_parenthood** failed with **598 features** and **598 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/planned_parenthood.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/planned_parenthood.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/planned_parenthood.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport re\n\nfrom locations.items import GeojsonPointItem\n\n\nclass PlannedParenthoodSpider(scrapy.Spider):\n name = \"planned_parenthood\"\n item_attributes = { 'brand': \"Planned Parenthood\" }\n allowed_domains = [\"www.plannedparenthood.org\"]\n start_urls = (\n 'https://www.plannedparenthood.org/health-center',\n )\n\n def parse(self, response):\n state_urls = response.xpath('//ul[@class=\"quicklist-list\"]/li/a/@href').extract()\n for path in state_urls:\n yield scrapy.Request(\n response.urljoin(path),\n callback=self.parse_state,\n )\n\n def parse_state(self, response):\n venue_urls = response.xpath('//ul[@class=\"quicklist-list\"]/li/p/a/@href').extract()\n for path in venue_urls:\n yield scrapy.Request(\n response.urljoin(path),\n callback=self.parse_venue,\n )\n\n def parse_venue(self, response):\n properties = {\n 'addr_full': response.xpath('//*[@itemprop=\"streetAddress\"]/text()').extract_first(),\n 'city': response.xpath('//*[@itemprop=\"addressLocality\"]/text()').extract_first(),\n 'state': response.xpath('//*[@itemprop=\"addressRegion\"]/text()').extract_first(),\n 'postcode': response.xpath('//*[@itemprop=\"postalCode\"]/text()').extract_first(),\n 'phone': response.xpath('//a[@itemprop=\"telephone\"][@data-link]/text()').extract_first(),\n 'ref': response.url,\n 'website': response.url,\n }\n\n map_image_url = response.xpath('//img[@class=\"address-map\"]/@src').extract_first()\n match = re.search(r\"center=(.*?),(.*?)&zoom\", map_image_url)\n properties['lat'] = float(match.group(1)),\n properties['lon'] = float(match.group(2)),\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/planned_parenthood.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport re\n\nfrom locations.items import GeojsonPointItem\n\n\nclass PlannedParenthoodSpider(scrapy.Spider):\n name = \"planned_parenthood\"\n item_attributes = { 
'brand': \"Planned Parenthood\" }\n allowed_domains = [\"www.plannedparenthood.org\"]\n start_urls = (\n 'https://www.plannedparenthood.org/health-center',\n )\n\n def parse(self, response):\n state_urls = response.xpath('//ul[@class=\"quicklist-list\"]/li/a/@href').extract()\n for path in state_urls:\n yield scrapy.Request(\n response.urljoin(path),\n callback=self.parse_state,\n )\n\n def parse_state(self, response):\n venue_urls = response.xpath('//ul[@class=\"quicklist-list\"]/li/p/a/@href').extract()\n for path in venue_urls:\n yield scrapy.Request(\n response.urljoin(path),\n callback=self.parse_venue,\n meta={'dont_redirect':True}\n )\n\n def parse_venue(self, response):\n if response is None:\n # Ignoring redirects\n return\n\n properties = {\n 'addr_full': response.xpath('//*[@itemprop=\"streetAddress\"]/text()').extract_first(),\n 'city': response.xpath('//*[@itemprop=\"addressLocality\"]/text()').extract_first(),\n 'state': response.xpath('//*[@itemprop=\"addressRegion\"]/text()').extract_first(),\n 'postcode': response.xpath('//*[@itemprop=\"postalCode\"]/text()').extract_first(),\n 'phone': response.xpath('//a[@itemprop=\"telephone\"][@data-link]/text()').extract_first(),\n 'ref': response.url,\n 'website': response.url,\n }\n\n map_image_url = response.xpath('//img[@class=\"address-map\"]/@src').extract_first()\n match = re.search(r\"center=(.*?),(.*?)&zoom\", map_image_url)\n properties['lat'] = float(match.group(1))\n properties['lon'] = float(match.group(2))\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/planned_parenthood.py"}]} | 966 | 283 |
gh_patches_debug_23474 | rasdani/github-patches | git_diff | aws-powertools__powertools-lambda-python-186 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Case insensitive lookup of header values in http proxy events
**Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Http headers are meant to be case insensitive, but lambda events use case sensitive dicts for the header keys.
For example getting the `Authorization` header you will need to do a case insensitive look up to support if the header is `authorization`. So you end up doing something like this.
```python3
authorization = None
for key, value in self.headers.items():
if name.lower() == 'authorization'
authorization = value
```
**Describe the solution you'd like**
<!-- A clear and concise description of what you want to happen. -->
> Linked pull request : #185
Update `BaseProxyEvent.get_header_value` method to support case insensitive lookups by default
```python3
authorization = event.get_header_value("authorization")
```
```python3
def get_header_value(
self, name: str, default_value: Optional[str] = None, case_sensitive: Optional[bool] = False
) -> Optional[str]:
"""Get header value by name
Parameters
----------
name: str
Header name
default_value: str, optional
Default value if no value was found by name
case_sensitive: bool
Whether to use a case sensitive look up
Returns
-------
str, optional
Header value
"""
if case_sensitive:
return self.headers.get(name, default_value)
return next((value for key, value in self.headers.items() if name.lower() == key.lower()), default_value)
```
**Questions**
* Should we default to case insensitive look ups? This is what most people would want?
* Just return the first match? Or support returning all matches?
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
Use the requests CaseInsensitiveDict or port it :
* https://github.com/psf/requests/blob/333fa8d3f201f68b82d9e8f26be4987e869fd3bf/requests/structures.py#L15
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
From the Http 1.1 spec, https headers are not supposed to be case sensitive: https://www.w3.org/Protocols/rfc2616/rfc2616.html
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `aws_lambda_powertools/utilities/data_classes/common.py`
Content:
```
1 from typing import Any, Dict, Optional
2
3
4 class DictWrapper:
5 """Provides a single read only access to a wrapper dict"""
6
7 def __init__(self, data: Dict[str, Any]):
8 self._data = data
9
10 def __getitem__(self, key: str) -> Any:
11 return self._data[key]
12
13 def get(self, key: str) -> Optional[Any]:
14 return self._data.get(key)
15
16
17 class BaseProxyEvent(DictWrapper):
18 @property
19 def headers(self) -> Dict[str, str]:
20 return self["headers"]
21
22 @property
23 def query_string_parameters(self) -> Optional[Dict[str, str]]:
24 return self.get("queryStringParameters")
25
26 @property
27 def is_base64_encoded(self) -> bool:
28 return self.get("isBase64Encoded")
29
30 @property
31 def body(self) -> Optional[str]:
32 return self.get("body")
33
34 def get_query_string_value(self, name: str, default_value: Optional[str] = None) -> Optional[str]:
35 """Get query string value by name
36
37 Parameters
38 ----------
39 name: str
40 Query string parameter name
41 default_value: str, optional
42 Default value if no value was found by name
43 Returns
44 -------
45 str, optional
46 Query string parameter value
47 """
48 params = self.query_string_parameters
49 return default_value if params is None else params.get(name, default_value)
50
51 def get_header_value(self, name: str, default_value: Optional[str] = None) -> Optional[str]:
52 """Get header value by name
53
54 Parameters
55 ----------
56 name: str
57 Header name
58 default_value: str, optional
59 Default value if no value was found by name
60 Returns
61 -------
62 str, optional
63 Header value
64 """
65 return self.headers.get(name, default_value)
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/aws_lambda_powertools/utilities/data_classes/common.py b/aws_lambda_powertools/utilities/data_classes/common.py
--- a/aws_lambda_powertools/utilities/data_classes/common.py
+++ b/aws_lambda_powertools/utilities/data_classes/common.py
@@ -48,7 +48,9 @@
params = self.query_string_parameters
return default_value if params is None else params.get(name, default_value)
- def get_header_value(self, name: str, default_value: Optional[str] = None) -> Optional[str]:
+ def get_header_value(
+ self, name: str, default_value: Optional[str] = None, case_sensitive: Optional[bool] = False
+ ) -> Optional[str]:
"""Get header value by name
Parameters
@@ -57,9 +59,14 @@
Header name
default_value: str, optional
Default value if no value was found by name
+ case_sensitive: bool
+ Whether to use a case sensitive look up
Returns
-------
str, optional
Header value
"""
- return self.headers.get(name, default_value)
+ if case_sensitive:
+ return self.headers.get(name, default_value)
+
+ return next((value for key, value in self.headers.items() if name.lower() == key.lower()), default_value)
| {"golden_diff": "diff --git a/aws_lambda_powertools/utilities/data_classes/common.py b/aws_lambda_powertools/utilities/data_classes/common.py\n--- a/aws_lambda_powertools/utilities/data_classes/common.py\n+++ b/aws_lambda_powertools/utilities/data_classes/common.py\n@@ -48,7 +48,9 @@\n params = self.query_string_parameters\n return default_value if params is None else params.get(name, default_value)\n \n- def get_header_value(self, name: str, default_value: Optional[str] = None) -> Optional[str]:\n+ def get_header_value(\n+ self, name: str, default_value: Optional[str] = None, case_sensitive: Optional[bool] = False\n+ ) -> Optional[str]:\n \"\"\"Get header value by name\n \n Parameters\n@@ -57,9 +59,14 @@\n Header name\n default_value: str, optional\n Default value if no value was found by name\n+ case_sensitive: bool\n+ Whether to use a case sensitive look up\n Returns\n -------\n str, optional\n Header value\n \"\"\"\n- return self.headers.get(name, default_value)\n+ if case_sensitive:\n+ return self.headers.get(name, default_value)\n+\n+ return next((value for key, value in self.headers.items() if name.lower() == key.lower()), default_value)\n", "issue": "Case insensitive lookup of header values in http proxy events\n**Is your feature request related to a problem? Please describe.**\r\n<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->\r\n\r\nHttp headers are meant to be case insensitive, but lambda events use case sensitive dicts for the header keys.\r\n\r\nFor example getting the `Authorization` header you will need to do a case insensitive look up to support if the header is `authorization`. So you end up doing something like this.\r\n\r\n```python3\r\nauthorization = None\r\nfor key, value in self.headers.items():\r\n if name.lower() == 'authorization'\r\n authorization = value\r\n```\r\n\r\n**Describe the solution you'd like**\r\n<!-- A clear and concise description of what you want to happen. -->\r\n\r\n> Linked pull request : #185\r\n\r\nUpdate `BaseProxyEvent.get_header_value` method to support case insensitive lookups by default\r\n\r\n```python3\r\nauthorization = event.get_header_value(\"authorization\")\r\n```\r\n\r\n```python3\r\n def get_header_value(\r\n self, name: str, default_value: Optional[str] = None, case_sensitive: Optional[bool] = False\r\n ) -> Optional[str]:\r\n \"\"\"Get header value by name\r\n\r\n Parameters\r\n ----------\r\n name: str\r\n Header name\r\n default_value: str, optional\r\n Default value if no value was found by name\r\n case_sensitive: bool\r\n Whether to use a case sensitive look up\r\n Returns\r\n -------\r\n str, optional\r\n Header value\r\n \"\"\"\r\n if case_sensitive:\r\n return self.headers.get(name, default_value)\r\n \r\n return next((value for key, value in self.headers.items() if name.lower() == key.lower()), default_value)\r\n```\r\n\r\n**Questions**\r\n\r\n* Should we default to case insensitive look ups? This is what most people would want?\r\n* Just return the first match? Or support returning all matches?\r\n\r\n**Describe alternatives you've considered**\r\n<!-- A clear and concise description of any alternative solutions or features you've considered. -->\r\nUse the requests CaseInsensitiveDict or port it :\r\n* https://github.com/psf/requests/blob/333fa8d3f201f68b82d9e8f26be4987e869fd3bf/requests/structures.py#L15\r\n\r\n**Additional context**\r\n<!-- Add any other context or screenshots about the feature request here. 
-->\r\nFrom the Http 1.1 spec, https headers are not supposed to be case sensitive: https://www.w3.org/Protocols/rfc2616/rfc2616.html\n", "before_files": [{"content": "from typing import Any, Dict, Optional\n\n\nclass DictWrapper:\n \"\"\"Provides a single read only access to a wrapper dict\"\"\"\n\n def __init__(self, data: Dict[str, Any]):\n self._data = data\n\n def __getitem__(self, key: str) -> Any:\n return self._data[key]\n\n def get(self, key: str) -> Optional[Any]:\n return self._data.get(key)\n\n\nclass BaseProxyEvent(DictWrapper):\n @property\n def headers(self) -> Dict[str, str]:\n return self[\"headers\"]\n\n @property\n def query_string_parameters(self) -> Optional[Dict[str, str]]:\n return self.get(\"queryStringParameters\")\n\n @property\n def is_base64_encoded(self) -> bool:\n return self.get(\"isBase64Encoded\")\n\n @property\n def body(self) -> Optional[str]:\n return self.get(\"body\")\n\n def get_query_string_value(self, name: str, default_value: Optional[str] = None) -> Optional[str]:\n \"\"\"Get query string value by name\n\n Parameters\n ----------\n name: str\n Query string parameter name\n default_value: str, optional\n Default value if no value was found by name\n Returns\n -------\n str, optional\n Query string parameter value\n \"\"\"\n params = self.query_string_parameters\n return default_value if params is None else params.get(name, default_value)\n\n def get_header_value(self, name: str, default_value: Optional[str] = None) -> Optional[str]:\n \"\"\"Get header value by name\n\n Parameters\n ----------\n name: str\n Header name\n default_value: str, optional\n Default value if no value was found by name\n Returns\n -------\n str, optional\n Header value\n \"\"\"\n return self.headers.get(name, default_value)\n", "path": "aws_lambda_powertools/utilities/data_classes/common.py"}], "after_files": [{"content": "from typing import Any, Dict, Optional\n\n\nclass DictWrapper:\n \"\"\"Provides a single read only access to a wrapper dict\"\"\"\n\n def __init__(self, data: Dict[str, Any]):\n self._data = data\n\n def __getitem__(self, key: str) -> Any:\n return self._data[key]\n\n def get(self, key: str) -> Optional[Any]:\n return self._data.get(key)\n\n\nclass BaseProxyEvent(DictWrapper):\n @property\n def headers(self) -> Dict[str, str]:\n return self[\"headers\"]\n\n @property\n def query_string_parameters(self) -> Optional[Dict[str, str]]:\n return self.get(\"queryStringParameters\")\n\n @property\n def is_base64_encoded(self) -> bool:\n return self.get(\"isBase64Encoded\")\n\n @property\n def body(self) -> Optional[str]:\n return self.get(\"body\")\n\n def get_query_string_value(self, name: str, default_value: Optional[str] = None) -> Optional[str]:\n \"\"\"Get query string value by name\n\n Parameters\n ----------\n name: str\n Query string parameter name\n default_value: str, optional\n Default value if no value was found by name\n Returns\n -------\n str, optional\n Query string parameter value\n \"\"\"\n params = self.query_string_parameters\n return default_value if params is None else params.get(name, default_value)\n\n def get_header_value(\n self, name: str, default_value: Optional[str] = None, case_sensitive: Optional[bool] = False\n ) -> Optional[str]:\n \"\"\"Get header value by name\n\n Parameters\n ----------\n name: str\n Header name\n default_value: str, optional\n Default value if no value was found by name\n case_sensitive: bool\n Whether to use a case sensitive look up\n Returns\n -------\n str, optional\n Header value\n \"\"\"\n if 
case_sensitive:\n return self.headers.get(name, default_value)\n\n return next((value for key, value in self.headers.items() if name.lower() == key.lower()), default_value)\n", "path": "aws_lambda_powertools/utilities/data_classes/common.py"}]} | 1,344 | 291 |
gh_patches_debug_66082 | rasdani/github-patches | git_diff | freedomofpress__securedrop-5595 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
dev server hot reload has stopped working
## Description
In #5532 the `env` attribute was dropped from `SDConfig` in `sdconfig.py`. That value is checked in [`source.py`](https://github.com/freedomofpress/securedrop/blob/6246482157e31d0655a91c5e7284cc8550f2c289/securedrop/source.py#L11) and [`journalist.py`](https://github.com/freedomofpress/securedrop/blob/6246482157e31d0655a91c5e7284cc8550f2c289/securedrop/journalist.py#L26) to determine whether the Flask app will run in [debug](https://flask.palletsprojects.com/en/1.1.x/config/#DEBUG) mode. By default it will not, so the dev server has stopped responding to code changes.
Given the Flask documentation warnings about setting debug mode via code and not the `FLASK_DEBUG` environment variable, we may want to reevaluate all of this, but right now let's just get back to a properly functioning dev server.
## Steps to Reproduce
- Check out `develop` at a commit before the `sdconfig.py` change (eff931fa8a0e74d5c3be87e46c5d0f004f02e289).
- Run `make dev`.
- Change `securedrop/journalist_app/main.py` to trigger a reload, and confirm that the change is detected.
- Stop the dev server.
- Check out `develop` and run `make dev`.
- Change `securedrop/journalist_app/main.py` again, and observe that the change is not detected.
## Expected Behavior
That the dev server would notice code changes and reload to pick them up.
## Actual Behavior
It does not care one whit about your useless flailings. We are all `prod` now.
## Comments
Just need to restore `SDConfig.env`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/sdconfig.py`
Content:
```
1 from typing import Dict
2 from typing import Optional
3
4 from typing import Type
5
6 import config as _config
7 from typing import List
8
9
10 class SDConfig:
11 def __init__(self) -> None:
12 self.JOURNALIST_APP_FLASK_CONFIG_CLS = \
13 _config.JournalistInterfaceFlaskConfig # type: Type
14
15 self.SOURCE_APP_FLASK_CONFIG_CLS = \
16 _config.SourceInterfaceFlaskConfig # type: Type
17
18 self.DATABASE_ENGINE = _config.DATABASE_ENGINE # type: str
19 self.DATABASE_FILE = _config.DATABASE_FILE # type: str
20
21 self.DATABASE_USERNAME = getattr(_config, "DATABASE_USERNAME", None) # type: Optional[str]
22 self.DATABASE_PASSWORD = getattr(_config, "DATABASE_PASSWORD", None) # type: Optional[str]
23 self.DATABASE_HOST = getattr(_config, "DATABASE_HOST", None) # type: Optional[str]
24 self.DATABASE_NAME = getattr(_config, "DATABASE_NAME", None) # type: Optional[str]
25
26 self.ADJECTIVES = _config.ADJECTIVES # type: str
27 self.NOUNS = _config.NOUNS # type: str
28 self.WORD_LIST = _config.WORD_LIST # type: str
29
30 self.DEFAULT_LOCALE = _config.DEFAULT_LOCALE # type: str
31 self.SUPPORTED_LOCALES = getattr(
32 _config, "SUPPORTED_LOCALES", [self.DEFAULT_LOCALE]
33 ) # type: List[str]
34
35 self.GPG_KEY_DIR = _config.GPG_KEY_DIR # type: str
36
37 self.JOURNALIST_KEY = _config.JOURNALIST_KEY # type: str
38 self.JOURNALIST_TEMPLATES_DIR = _config.JOURNALIST_TEMPLATES_DIR # type: str
39
40 self.SCRYPT_GPG_PEPPER = _config.SCRYPT_GPG_PEPPER # type: str
41 self.SCRYPT_ID_PEPPER = _config.SCRYPT_ID_PEPPER # type: str
42 self.SCRYPT_PARAMS = _config.SCRYPT_PARAMS # type: Dict[str, int]
43
44 self.SECUREDROP_DATA_ROOT = _config.SECUREDROP_DATA_ROOT # type: str
45 self.SECUREDROP_ROOT = _config.SECUREDROP_ROOT # type: str
46
47 self.SESSION_EXPIRATION_MINUTES = _config.SESSION_EXPIRATION_MINUTES # type: int
48
49 self.SOURCE_TEMPLATES_DIR = _config.SOURCE_TEMPLATES_DIR # type: str
50 self.TEMP_DIR = _config.TEMP_DIR # type: str
51 self.STORE_DIR = _config.STORE_DIR # type: str
52 self.TRANSLATION_DIRS = getattr(_config, "TRANSLATION_DIRS", None) # type: Optional[str]
53
54 self.WORKER_PIDFILE = _config.WORKER_PIDFILE # type: str
55
56 if _config.env == 'test':
57 self.RQ_WORKER_NAME = 'test' # type: str
58 else:
59 self.RQ_WORKER_NAME = 'default'
60
61 @property
62 def DATABASE_URI(self) -> str:
63 if self.DATABASE_ENGINE == "sqlite":
64 db_uri = (self.DATABASE_ENGINE + ":///" +
65 self.DATABASE_FILE)
66 else:
67 if self.DATABASE_USERNAME is None:
68 raise RuntimeError("Missing DATABASE_USERNAME entry from config.py")
69 if self.DATABASE_PASSWORD is None:
70 raise RuntimeError("Missing DATABASE_PASSWORD entry from config.py")
71 if self.DATABASE_HOST is None:
72 raise RuntimeError("Missing DATABASE_HOST entry from config.py")
73 if self.DATABASE_NAME is None:
74 raise RuntimeError("Missing DATABASE_NAME entry from config.py")
75
76 db_uri = (
77 self.DATABASE_ENGINE + '://' +
78 self.DATABASE_USERNAME + ':' +
79 self.DATABASE_PASSWORD + '@' +
80 self.DATABASE_HOST + '/' +
81 self.DATABASE_NAME
82 )
83 return db_uri
84
85
86 config = SDConfig() # type: SDConfig
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/securedrop/sdconfig.py b/securedrop/sdconfig.py
--- a/securedrop/sdconfig.py
+++ b/securedrop/sdconfig.py
@@ -53,7 +53,8 @@
self.WORKER_PIDFILE = _config.WORKER_PIDFILE # type: str
- if _config.env == 'test':
+ self.env = getattr(_config, 'env', 'prod') # type: str
+ if self.env == 'test':
self.RQ_WORKER_NAME = 'test' # type: str
else:
self.RQ_WORKER_NAME = 'default'
| {"golden_diff": "diff --git a/securedrop/sdconfig.py b/securedrop/sdconfig.py\n--- a/securedrop/sdconfig.py\n+++ b/securedrop/sdconfig.py\n@@ -53,7 +53,8 @@\n \n self.WORKER_PIDFILE = _config.WORKER_PIDFILE # type: str\n \n- if _config.env == 'test':\n+ self.env = getattr(_config, 'env', 'prod') # type: str\n+ if self.env == 'test':\n self.RQ_WORKER_NAME = 'test' # type: str\n else:\n self.RQ_WORKER_NAME = 'default'\n", "issue": "dev server hot reload has stopped working\n## Description\r\n\r\nIn #5532 the `env` attribute was dropped from `SDConfig` in `sdconfig.py`. That value is checked in [`source.py`](https://github.com/freedomofpress/securedrop/blob/6246482157e31d0655a91c5e7284cc8550f2c289/securedrop/source.py#L11) and [`journalist.py`](https://github.com/freedomofpress/securedrop/blob/6246482157e31d0655a91c5e7284cc8550f2c289/securedrop/journalist.py#L26) to determine whether the Flask app will run in [debug](https://flask.palletsprojects.com/en/1.1.x/config/#DEBUG) mode. By default it will not, so the dev server has stopped responding to code changes.\r\n\r\nGiven the Flask documentation warnings about setting debug mode via code and not the `FLASK_DEBUG` environment variable, we may want to reevaluate all of this, but right now let's just get back to a properly functioning dev server.\r\n\r\n## Steps to Reproduce\r\n\r\n- Check out `develop` at a commit before the `sdconfig.py` change (eff931fa8a0e74d5c3be87e46c5d0f004f02e289).\r\n- Run `make dev`.\r\n- Change `securedrop/journalist_app/main.py` to trigger a reload, and confirm that the change is detected.\r\n- Stop the dev server.\r\n- Check out `develop` and run `make dev`.\r\n- Change `securedrop/journalist_app/main.py` again, and observe that the change is not detected.\r\n\r\n## Expected Behavior\r\n\r\nThat the dev server would notice code changes and reload to pick them up.\r\n\r\n## Actual Behavior\r\n\r\nIt does not care one whit about your useless flailings. 
We are all `prod` now.\r\n\r\n## Comments\r\n\r\nJust need to restore `SDConfig.env`.\r\n\n", "before_files": [{"content": "from typing import Dict\nfrom typing import Optional\n\nfrom typing import Type\n\nimport config as _config\nfrom typing import List\n\n\nclass SDConfig:\n def __init__(self) -> None:\n self.JOURNALIST_APP_FLASK_CONFIG_CLS = \\\n _config.JournalistInterfaceFlaskConfig # type: Type\n\n self.SOURCE_APP_FLASK_CONFIG_CLS = \\\n _config.SourceInterfaceFlaskConfig # type: Type\n\n self.DATABASE_ENGINE = _config.DATABASE_ENGINE # type: str\n self.DATABASE_FILE = _config.DATABASE_FILE # type: str\n\n self.DATABASE_USERNAME = getattr(_config, \"DATABASE_USERNAME\", None) # type: Optional[str]\n self.DATABASE_PASSWORD = getattr(_config, \"DATABASE_PASSWORD\", None) # type: Optional[str]\n self.DATABASE_HOST = getattr(_config, \"DATABASE_HOST\", None) # type: Optional[str]\n self.DATABASE_NAME = getattr(_config, \"DATABASE_NAME\", None) # type: Optional[str]\n\n self.ADJECTIVES = _config.ADJECTIVES # type: str\n self.NOUNS = _config.NOUNS # type: str\n self.WORD_LIST = _config.WORD_LIST # type: str\n\n self.DEFAULT_LOCALE = _config.DEFAULT_LOCALE # type: str\n self.SUPPORTED_LOCALES = getattr(\n _config, \"SUPPORTED_LOCALES\", [self.DEFAULT_LOCALE]\n ) # type: List[str]\n\n self.GPG_KEY_DIR = _config.GPG_KEY_DIR # type: str\n\n self.JOURNALIST_KEY = _config.JOURNALIST_KEY # type: str\n self.JOURNALIST_TEMPLATES_DIR = _config.JOURNALIST_TEMPLATES_DIR # type: str\n\n self.SCRYPT_GPG_PEPPER = _config.SCRYPT_GPG_PEPPER # type: str\n self.SCRYPT_ID_PEPPER = _config.SCRYPT_ID_PEPPER # type: str\n self.SCRYPT_PARAMS = _config.SCRYPT_PARAMS # type: Dict[str, int]\n\n self.SECUREDROP_DATA_ROOT = _config.SECUREDROP_DATA_ROOT # type: str\n self.SECUREDROP_ROOT = _config.SECUREDROP_ROOT # type: str\n\n self.SESSION_EXPIRATION_MINUTES = _config.SESSION_EXPIRATION_MINUTES # type: int\n\n self.SOURCE_TEMPLATES_DIR = _config.SOURCE_TEMPLATES_DIR # type: str\n self.TEMP_DIR = _config.TEMP_DIR # type: str\n self.STORE_DIR = _config.STORE_DIR # type: str\n self.TRANSLATION_DIRS = getattr(_config, \"TRANSLATION_DIRS\", None) # type: Optional[str]\n\n self.WORKER_PIDFILE = _config.WORKER_PIDFILE # type: str\n\n if _config.env == 'test':\n self.RQ_WORKER_NAME = 'test' # type: str\n else:\n self.RQ_WORKER_NAME = 'default'\n\n @property\n def DATABASE_URI(self) -> str:\n if self.DATABASE_ENGINE == \"sqlite\":\n db_uri = (self.DATABASE_ENGINE + \":///\" +\n self.DATABASE_FILE)\n else:\n if self.DATABASE_USERNAME is None:\n raise RuntimeError(\"Missing DATABASE_USERNAME entry from config.py\")\n if self.DATABASE_PASSWORD is None:\n raise RuntimeError(\"Missing DATABASE_PASSWORD entry from config.py\")\n if self.DATABASE_HOST is None:\n raise RuntimeError(\"Missing DATABASE_HOST entry from config.py\")\n if self.DATABASE_NAME is None:\n raise RuntimeError(\"Missing DATABASE_NAME entry from config.py\")\n\n db_uri = (\n self.DATABASE_ENGINE + '://' +\n self.DATABASE_USERNAME + ':' +\n self.DATABASE_PASSWORD + '@' +\n self.DATABASE_HOST + '/' +\n self.DATABASE_NAME\n )\n return db_uri\n\n\nconfig = SDConfig() # type: SDConfig\n", "path": "securedrop/sdconfig.py"}], "after_files": [{"content": "from typing import Dict\nfrom typing import Optional\n\nfrom typing import Type\n\nimport config as _config\nfrom typing import List\n\n\nclass SDConfig:\n def __init__(self) -> None:\n self.JOURNALIST_APP_FLASK_CONFIG_CLS = \\\n _config.JournalistInterfaceFlaskConfig # type: Type\n\n self.SOURCE_APP_FLASK_CONFIG_CLS 
= \\\n _config.SourceInterfaceFlaskConfig # type: Type\n\n self.DATABASE_ENGINE = _config.DATABASE_ENGINE # type: str\n self.DATABASE_FILE = _config.DATABASE_FILE # type: str\n\n self.DATABASE_USERNAME = getattr(_config, \"DATABASE_USERNAME\", None) # type: Optional[str]\n self.DATABASE_PASSWORD = getattr(_config, \"DATABASE_PASSWORD\", None) # type: Optional[str]\n self.DATABASE_HOST = getattr(_config, \"DATABASE_HOST\", None) # type: Optional[str]\n self.DATABASE_NAME = getattr(_config, \"DATABASE_NAME\", None) # type: Optional[str]\n\n self.ADJECTIVES = _config.ADJECTIVES # type: str\n self.NOUNS = _config.NOUNS # type: str\n self.WORD_LIST = _config.WORD_LIST # type: str\n\n self.DEFAULT_LOCALE = _config.DEFAULT_LOCALE # type: str\n self.SUPPORTED_LOCALES = getattr(\n _config, \"SUPPORTED_LOCALES\", [self.DEFAULT_LOCALE]\n ) # type: List[str]\n\n self.GPG_KEY_DIR = _config.GPG_KEY_DIR # type: str\n\n self.JOURNALIST_KEY = _config.JOURNALIST_KEY # type: str\n self.JOURNALIST_TEMPLATES_DIR = _config.JOURNALIST_TEMPLATES_DIR # type: str\n\n self.SCRYPT_GPG_PEPPER = _config.SCRYPT_GPG_PEPPER # type: str\n self.SCRYPT_ID_PEPPER = _config.SCRYPT_ID_PEPPER # type: str\n self.SCRYPT_PARAMS = _config.SCRYPT_PARAMS # type: Dict[str, int]\n\n self.SECUREDROP_DATA_ROOT = _config.SECUREDROP_DATA_ROOT # type: str\n self.SECUREDROP_ROOT = _config.SECUREDROP_ROOT # type: str\n\n self.SESSION_EXPIRATION_MINUTES = _config.SESSION_EXPIRATION_MINUTES # type: int\n\n self.SOURCE_TEMPLATES_DIR = _config.SOURCE_TEMPLATES_DIR # type: str\n self.TEMP_DIR = _config.TEMP_DIR # type: str\n self.STORE_DIR = _config.STORE_DIR # type: str\n self.TRANSLATION_DIRS = getattr(_config, \"TRANSLATION_DIRS\", None) # type: Optional[str]\n\n self.WORKER_PIDFILE = _config.WORKER_PIDFILE # type: str\n\n self.env = getattr(_config, 'env', 'prod') # type: str\n if self.env == 'test':\n self.RQ_WORKER_NAME = 'test' # type: str\n else:\n self.RQ_WORKER_NAME = 'default'\n\n @property\n def DATABASE_URI(self) -> str:\n if self.DATABASE_ENGINE == \"sqlite\":\n db_uri = (self.DATABASE_ENGINE + \":///\" +\n self.DATABASE_FILE)\n else:\n if self.DATABASE_USERNAME is None:\n raise RuntimeError(\"Missing DATABASE_USERNAME entry from config.py\")\n if self.DATABASE_PASSWORD is None:\n raise RuntimeError(\"Missing DATABASE_PASSWORD entry from config.py\")\n if self.DATABASE_HOST is None:\n raise RuntimeError(\"Missing DATABASE_HOST entry from config.py\")\n if self.DATABASE_NAME is None:\n raise RuntimeError(\"Missing DATABASE_NAME entry from config.py\")\n\n db_uri = (\n self.DATABASE_ENGINE + '://' +\n self.DATABASE_USERNAME + ':' +\n self.DATABASE_PASSWORD + '@' +\n self.DATABASE_HOST + '/' +\n self.DATABASE_NAME\n )\n return db_uri\n\n\nconfig = SDConfig() # type: SDConfig\n", "path": "securedrop/sdconfig.py"}]} | 1,771 | 145 |
gh_patches_debug_38040 | rasdani/github-patches | git_diff | getsentry__sentry-16440 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
User-creation should not fail, when the account already exists
I'm creating an Ansible role for installing sentry on our network. The software is installed via `pip` -- version 8.22.0, which seems to be the latest as of today. Not using Docker -- just a dedicated VM.
One of the tasks is to create a Sentry-admin:
```
sentry createuser --superuser --no-input --password=FOO [email protected]
```
Unfortunately, the above command only works _the first time_. Trying to run it again fails:
```
django.db.utils.IntegrityError: IntegrityError('duplicate key value violates unique constraint \"auth_user_username_key\"
DETAIL: Key (username)=([email protected]) already exists
```
I understand, why the same account can not be created again. But it should not be an _error_ -- simply update the existing record if necessary. Or, if it must be an error, there should be a specific exit-code for this case, which would allow systems like Ansible to move on. Currently, `sentry` exits with the same code 1 for all errors...
At the very least, there needs to be a way to check, whether the account exists already... Ansible would then be able to perform the query before attempting to add the user.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/runner/commands/createuser.py`
Content:
```
1 from __future__ import absolute_import, print_function
2
3 import click
4 from sentry.runner.decorators import configuration
5
6
7 def _get_field(field_name):
8 from sentry.models import User
9
10 return User._meta.get_field(field_name)
11
12
13 def _get_email():
14 from django.core.exceptions import ValidationError
15
16 rv = click.prompt("Email")
17 field = _get_field("email")
18 try:
19 return field.clean(rv, None)
20 except ValidationError as e:
21 raise click.ClickException("; ".join(e.messages))
22
23
24 def _get_password():
25 from django.core.exceptions import ValidationError
26
27 rv = click.prompt("Password", hide_input=True, confirmation_prompt=True)
28 field = _get_field("password")
29 try:
30 return field.clean(rv, None)
31 except ValidationError as e:
32 raise click.ClickException("; ".join(e.messages))
33
34
35 def _get_superuser():
36 return click.confirm("Should this user be a superuser?", default=False)
37
38
39 @click.command()
40 @click.option("--email")
41 @click.option("--password")
42 @click.option("--superuser/--no-superuser", default=None, is_flag=True)
43 @click.option("--no-password", default=False, is_flag=True)
44 @click.option("--no-input", default=False, is_flag=True)
45 @configuration
46 def createuser(email, password, superuser, no_password, no_input):
47 "Create a new user."
48 if not no_input:
49 if not email:
50 email = _get_email()
51
52 if not (password or no_password):
53 password = _get_password()
54
55 if superuser is None:
56 superuser = _get_superuser()
57
58 if superuser is None:
59 superuser = False
60
61 if not email:
62 raise click.ClickException("Invalid or missing email address.")
63
64 # TODO(mattrobenolt): Accept password over stdin?
65 if not no_password and not password:
66 raise click.ClickException("No password set and --no-password not passed.")
67
68 from sentry import roles
69 from sentry.models import User
70 from django.conf import settings
71
72 user = User(
73 email=email, username=email, is_superuser=superuser, is_staff=superuser, is_active=True
74 )
75
76 if password:
77 user.set_password(password)
78
79 user.save()
80
81 click.echo("User created: %s" % (email,))
82
83 # TODO(dcramer): kill this when we improve flows
84 if settings.SENTRY_SINGLE_ORGANIZATION:
85 from sentry.models import Organization, OrganizationMember, OrganizationMemberTeam, Team
86
87 org = Organization.get_default()
88 if superuser:
89 role = roles.get_top_dog().id
90 else:
91 role = org.default_role
92 member = OrganizationMember.objects.create(organization=org, user=user, role=role)
93
94 # if we've only got a single team let's go ahead and give
95 # access to that team as its likely the desired outcome
96 teams = list(Team.objects.filter(organization=org)[0:2])
97 if len(teams) == 1:
98 OrganizationMemberTeam.objects.create(team=teams[0], organizationmember=member)
99 click.echo("Added to organization: %s" % (org.slug,))
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/sentry/runner/commands/createuser.py b/src/sentry/runner/commands/createuser.py
--- a/src/sentry/runner/commands/createuser.py
+++ b/src/sentry/runner/commands/createuser.py
@@ -1,6 +1,7 @@
from __future__ import absolute_import, print_function
import click
+import sys
from sentry.runner.decorators import configuration
@@ -42,8 +43,9 @@
@click.option("--superuser/--no-superuser", default=None, is_flag=True)
@click.option("--no-password", default=False, is_flag=True)
@click.option("--no-input", default=False, is_flag=True)
[email protected]("--force-update", default=False, is_flag=True)
@configuration
-def createuser(email, password, superuser, no_password, no_input):
+def createuser(email, password, superuser, no_password, no_input, force_update):
"Create a new user."
if not no_input:
if not email:
@@ -76,24 +78,31 @@
if password:
user.set_password(password)
- user.save()
-
- click.echo("User created: %s" % (email,))
-
- # TODO(dcramer): kill this when we improve flows
- if settings.SENTRY_SINGLE_ORGANIZATION:
- from sentry.models import Organization, OrganizationMember, OrganizationMemberTeam, Team
-
- org = Organization.get_default()
- if superuser:
- role = roles.get_top_dog().id
+ if User.objects.filter(username=email).exists():
+ if force_update:
+ user.save(force_update=force_update)
+ click.echo("User updated: %s" % (email,))
else:
- role = org.default_role
- member = OrganizationMember.objects.create(organization=org, user=user, role=role)
-
- # if we've only got a single team let's go ahead and give
- # access to that team as its likely the desired outcome
- teams = list(Team.objects.filter(organization=org)[0:2])
- if len(teams) == 1:
- OrganizationMemberTeam.objects.create(team=teams[0], organizationmember=member)
- click.echo("Added to organization: %s" % (org.slug,))
+ click.echo("User: %s exists, use --force-update to force" % (email,))
+ sys.exit(3)
+ else:
+ user.save()
+ click.echo("User created: %s" % (email,))
+
+ # TODO(dcramer): kill this when we improve flows
+ if settings.SENTRY_SINGLE_ORGANIZATION:
+ from sentry.models import Organization, OrganizationMember, OrganizationMemberTeam, Team
+
+ org = Organization.get_default()
+ if superuser:
+ role = roles.get_top_dog().id
+ else:
+ role = org.default_role
+ member = OrganizationMember.objects.create(organization=org, user=user, role=role)
+
+ # if we've only got a single team let's go ahead and give
+ # access to that team as its likely the desired outcome
+ teams = list(Team.objects.filter(organization=org)[0:2])
+ if len(teams) == 1:
+ OrganizationMemberTeam.objects.create(team=teams[0], organizationmember=member)
+ click.echo("Added to organization: %s" % (org.slug,))
| {"golden_diff": "diff --git a/src/sentry/runner/commands/createuser.py b/src/sentry/runner/commands/createuser.py\n--- a/src/sentry/runner/commands/createuser.py\n+++ b/src/sentry/runner/commands/createuser.py\n@@ -1,6 +1,7 @@\n from __future__ import absolute_import, print_function\n \n import click\n+import sys\n from sentry.runner.decorators import configuration\n \n \n@@ -42,8 +43,9 @@\n @click.option(\"--superuser/--no-superuser\", default=None, is_flag=True)\n @click.option(\"--no-password\", default=False, is_flag=True)\n @click.option(\"--no-input\", default=False, is_flag=True)\[email protected](\"--force-update\", default=False, is_flag=True)\n @configuration\n-def createuser(email, password, superuser, no_password, no_input):\n+def createuser(email, password, superuser, no_password, no_input, force_update):\n \"Create a new user.\"\n if not no_input:\n if not email:\n@@ -76,24 +78,31 @@\n if password:\n user.set_password(password)\n \n- user.save()\n-\n- click.echo(\"User created: %s\" % (email,))\n-\n- # TODO(dcramer): kill this when we improve flows\n- if settings.SENTRY_SINGLE_ORGANIZATION:\n- from sentry.models import Organization, OrganizationMember, OrganizationMemberTeam, Team\n-\n- org = Organization.get_default()\n- if superuser:\n- role = roles.get_top_dog().id\n+ if User.objects.filter(username=email).exists():\n+ if force_update:\n+ user.save(force_update=force_update)\n+ click.echo(\"User updated: %s\" % (email,))\n else:\n- role = org.default_role\n- member = OrganizationMember.objects.create(organization=org, user=user, role=role)\n-\n- # if we've only got a single team let's go ahead and give\n- # access to that team as its likely the desired outcome\n- teams = list(Team.objects.filter(organization=org)[0:2])\n- if len(teams) == 1:\n- OrganizationMemberTeam.objects.create(team=teams[0], organizationmember=member)\n- click.echo(\"Added to organization: %s\" % (org.slug,))\n+ click.echo(\"User: %s exists, use --force-update to force\" % (email,))\n+ sys.exit(3)\n+ else:\n+ user.save()\n+ click.echo(\"User created: %s\" % (email,))\n+\n+ # TODO(dcramer): kill this when we improve flows\n+ if settings.SENTRY_SINGLE_ORGANIZATION:\n+ from sentry.models import Organization, OrganizationMember, OrganizationMemberTeam, Team\n+\n+ org = Organization.get_default()\n+ if superuser:\n+ role = roles.get_top_dog().id\n+ else:\n+ role = org.default_role\n+ member = OrganizationMember.objects.create(organization=org, user=user, role=role)\n+\n+ # if we've only got a single team let's go ahead and give\n+ # access to that team as its likely the desired outcome\n+ teams = list(Team.objects.filter(organization=org)[0:2])\n+ if len(teams) == 1:\n+ OrganizationMemberTeam.objects.create(team=teams[0], organizationmember=member)\n+ click.echo(\"Added to organization: %s\" % (org.slug,))\n", "issue": "User-creation should not fail, when the account already exists\nI'm creating an Ansible role for installing sentry on our network. The software is installed via `pip` -- version 8.22.0, which seems to be the latest as of today. Not using Docker -- just a dedicated VM.\r\n\r\nOne of the tasks is to create a Sentry-admin:\r\n```\r\nsentry createuser --superuser --no-input --password=FOO [email protected]\r\n```\r\nUnfortunately, the above command only works _the first time_. 
Trying to run it again fails:\r\n```\r\ndjango.db.utils.IntegrityError: IntegrityError('duplicate key value violates unique constraint \\\"auth_user_username_key\\\"\r\nDETAIL: Key (username)=([email protected]) already exists\r\n```\r\nI understand, why the same account can not be created again. But it should not be an _error_ -- simply update the existing record if necessary. Or, if it must be an error, there should be a specific exit-code for this case, which would allow systems like Ansible to move on. Currently, `sentry` exits with the same code 1 for all errors...\r\n\r\nAt the very least, there needs to be a way to check, whether the account exists already... Ansible would then be able to perform the query before attempting to add the user.\r\n\n", "before_files": [{"content": "from __future__ import absolute_import, print_function\n\nimport click\nfrom sentry.runner.decorators import configuration\n\n\ndef _get_field(field_name):\n from sentry.models import User\n\n return User._meta.get_field(field_name)\n\n\ndef _get_email():\n from django.core.exceptions import ValidationError\n\n rv = click.prompt(\"Email\")\n field = _get_field(\"email\")\n try:\n return field.clean(rv, None)\n except ValidationError as e:\n raise click.ClickException(\"; \".join(e.messages))\n\n\ndef _get_password():\n from django.core.exceptions import ValidationError\n\n rv = click.prompt(\"Password\", hide_input=True, confirmation_prompt=True)\n field = _get_field(\"password\")\n try:\n return field.clean(rv, None)\n except ValidationError as e:\n raise click.ClickException(\"; \".join(e.messages))\n\n\ndef _get_superuser():\n return click.confirm(\"Should this user be a superuser?\", default=False)\n\n\[email protected]()\[email protected](\"--email\")\[email protected](\"--password\")\[email protected](\"--superuser/--no-superuser\", default=None, is_flag=True)\[email protected](\"--no-password\", default=False, is_flag=True)\[email protected](\"--no-input\", default=False, is_flag=True)\n@configuration\ndef createuser(email, password, superuser, no_password, no_input):\n \"Create a new user.\"\n if not no_input:\n if not email:\n email = _get_email()\n\n if not (password or no_password):\n password = _get_password()\n\n if superuser is None:\n superuser = _get_superuser()\n\n if superuser is None:\n superuser = False\n\n if not email:\n raise click.ClickException(\"Invalid or missing email address.\")\n\n # TODO(mattrobenolt): Accept password over stdin?\n if not no_password and not password:\n raise click.ClickException(\"No password set and --no-password not passed.\")\n\n from sentry import roles\n from sentry.models import User\n from django.conf import settings\n\n user = User(\n email=email, username=email, is_superuser=superuser, is_staff=superuser, is_active=True\n )\n\n if password:\n user.set_password(password)\n\n user.save()\n\n click.echo(\"User created: %s\" % (email,))\n\n # TODO(dcramer): kill this when we improve flows\n if settings.SENTRY_SINGLE_ORGANIZATION:\n from sentry.models import Organization, OrganizationMember, OrganizationMemberTeam, Team\n\n org = Organization.get_default()\n if superuser:\n role = roles.get_top_dog().id\n else:\n role = org.default_role\n member = OrganizationMember.objects.create(organization=org, user=user, role=role)\n\n # if we've only got a single team let's go ahead and give\n # access to that team as its likely the desired outcome\n teams = list(Team.objects.filter(organization=org)[0:2])\n if len(teams) == 1:\n 
OrganizationMemberTeam.objects.create(team=teams[0], organizationmember=member)\n click.echo(\"Added to organization: %s\" % (org.slug,))\n", "path": "src/sentry/runner/commands/createuser.py"}], "after_files": [{"content": "from __future__ import absolute_import, print_function\n\nimport click\nimport sys\nfrom sentry.runner.decorators import configuration\n\n\ndef _get_field(field_name):\n from sentry.models import User\n\n return User._meta.get_field(field_name)\n\n\ndef _get_email():\n from django.core.exceptions import ValidationError\n\n rv = click.prompt(\"Email\")\n field = _get_field(\"email\")\n try:\n return field.clean(rv, None)\n except ValidationError as e:\n raise click.ClickException(\"; \".join(e.messages))\n\n\ndef _get_password():\n from django.core.exceptions import ValidationError\n\n rv = click.prompt(\"Password\", hide_input=True, confirmation_prompt=True)\n field = _get_field(\"password\")\n try:\n return field.clean(rv, None)\n except ValidationError as e:\n raise click.ClickException(\"; \".join(e.messages))\n\n\ndef _get_superuser():\n return click.confirm(\"Should this user be a superuser?\", default=False)\n\n\[email protected]()\[email protected](\"--email\")\[email protected](\"--password\")\[email protected](\"--superuser/--no-superuser\", default=None, is_flag=True)\[email protected](\"--no-password\", default=False, is_flag=True)\[email protected](\"--no-input\", default=False, is_flag=True)\[email protected](\"--force-update\", default=False, is_flag=True)\n@configuration\ndef createuser(email, password, superuser, no_password, no_input, force_update):\n \"Create a new user.\"\n if not no_input:\n if not email:\n email = _get_email()\n\n if not (password or no_password):\n password = _get_password()\n\n if superuser is None:\n superuser = _get_superuser()\n\n if superuser is None:\n superuser = False\n\n if not email:\n raise click.ClickException(\"Invalid or missing email address.\")\n\n # TODO(mattrobenolt): Accept password over stdin?\n if not no_password and not password:\n raise click.ClickException(\"No password set and --no-password not passed.\")\n\n from sentry import roles\n from sentry.models import User\n from django.conf import settings\n\n user = User(\n email=email, username=email, is_superuser=superuser, is_staff=superuser, is_active=True\n )\n\n if password:\n user.set_password(password)\n\n if User.objects.filter(username=email).exists():\n if force_update:\n user.save(force_update=force_update)\n click.echo(\"User updated: %s\" % (email,))\n else:\n click.echo(\"User: %s exists, use --force-update to force\" % (email,))\n sys.exit(3)\n else:\n user.save()\n click.echo(\"User created: %s\" % (email,))\n\n # TODO(dcramer): kill this when we improve flows\n if settings.SENTRY_SINGLE_ORGANIZATION:\n from sentry.models import Organization, OrganizationMember, OrganizationMemberTeam, Team\n\n org = Organization.get_default()\n if superuser:\n role = roles.get_top_dog().id\n else:\n role = org.default_role\n member = OrganizationMember.objects.create(organization=org, user=user, role=role)\n\n # if we've only got a single team let's go ahead and give\n # access to that team as its likely the desired outcome\n teams = list(Team.objects.filter(organization=org)[0:2])\n if len(teams) == 1:\n OrganizationMemberTeam.objects.create(team=teams[0], organizationmember=member)\n click.echo(\"Added to organization: %s\" % (org.slug,))\n", "path": "src/sentry/runner/commands/createuser.py"}]} | 1,425 | 760 |
gh_patches_debug_6595 | rasdani/github-patches | git_diff | vispy__vispy-712 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bump VisPy version?
Should we bump the version to 0.4.0-dev or something? Currently it is 0.3.0, which doesn't seem right.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vispy/__init__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2014, Vispy Development Team.
3 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
4
5 """
6
7 =====
8 Vispy
9 =====
10
11 Vispy is a **high-performance interactive 2D/3D data visualization
12 library**. Vispy leverages the computational power of modern **Graphics
13 Processing Units (GPUs)** through the **OpenGL** library to display very
14 large datasets.
15
16 For more information, see http://vispy.org.
17
18 """
19
20 from __future__ import division
21
22 __all__ = ['use', 'sys_info', 'set_log_level', 'test']
23
24 # Definition of the version number
25 version_info = 0, 3, 0, '' # major, minor, patch, extra
26
27 # Nice string for the version (mimic how IPython composes its version str)
28 __version__ = '-'.join(map(str, version_info)).replace('-', '.', 2).strip('-')
29
30 from .util import config, set_log_level, keys, sys_info # noqa
31 from .util.wrappers import use # noqa
32 from .testing import test # noqa
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vispy/__init__.py b/vispy/__init__.py
--- a/vispy/__init__.py
+++ b/vispy/__init__.py
@@ -22,7 +22,7 @@
__all__ = ['use', 'sys_info', 'set_log_level', 'test']
# Definition of the version number
-version_info = 0, 3, 0, '' # major, minor, patch, extra
+version_info = 0, 4, 0, 'dev' # major, minor, patch, extra
# Nice string for the version (mimic how IPython composes its version str)
__version__ = '-'.join(map(str, version_info)).replace('-', '.', 2).strip('-')
| {"golden_diff": "diff --git a/vispy/__init__.py b/vispy/__init__.py\n--- a/vispy/__init__.py\n+++ b/vispy/__init__.py\n@@ -22,7 +22,7 @@\n __all__ = ['use', 'sys_info', 'set_log_level', 'test']\n \n # Definition of the version number\n-version_info = 0, 3, 0, '' # major, minor, patch, extra\n+version_info = 0, 4, 0, 'dev' # major, minor, patch, extra\n \n # Nice string for the version (mimic how IPython composes its version str)\n __version__ = '-'.join(map(str, version_info)).replace('-', '.', 2).strip('-')\n", "issue": "Bump VisPy version?\nShould we bump the version to 0.4.0-dev or something? Currently it is 0.3.0, which doesn't seem right.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2014, Vispy Development Team.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\n\"\"\"\n\n=====\nVispy\n=====\n\nVispy is a **high-performance interactive 2D/3D data visualization\nlibrary**. Vispy leverages the computational power of modern **Graphics\nProcessing Units (GPUs)** through the **OpenGL** library to display very\nlarge datasets.\n\nFor more information, see http://vispy.org.\n\n\"\"\"\n\nfrom __future__ import division\n\n__all__ = ['use', 'sys_info', 'set_log_level', 'test']\n\n# Definition of the version number\nversion_info = 0, 3, 0, '' # major, minor, patch, extra\n\n# Nice string for the version (mimic how IPython composes its version str)\n__version__ = '-'.join(map(str, version_info)).replace('-', '.', 2).strip('-')\n\nfrom .util import config, set_log_level, keys, sys_info # noqa\nfrom .util.wrappers import use # noqa\nfrom .testing import test # noqa\n", "path": "vispy/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2014, Vispy Development Team.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\n\"\"\"\n\n=====\nVispy\n=====\n\nVispy is a **high-performance interactive 2D/3D data visualization\nlibrary**. Vispy leverages the computational power of modern **Graphics\nProcessing Units (GPUs)** through the **OpenGL** library to display very\nlarge datasets.\n\nFor more information, see http://vispy.org.\n\n\"\"\"\n\nfrom __future__ import division\n\n__all__ = ['use', 'sys_info', 'set_log_level', 'test']\n\n# Definition of the version number\nversion_info = 0, 4, 0, 'dev' # major, minor, patch, extra\n\n# Nice string for the version (mimic how IPython composes its version str)\n__version__ = '-'.join(map(str, version_info)).replace('-', '.', 2).strip('-')\n\nfrom .util import config, set_log_level, keys, sys_info # noqa\nfrom .util.wrappers import use # noqa\nfrom .testing import test # noqa\n", "path": "vispy/__init__.py"}]} | 612 | 169 |
gh_patches_debug_23769 | rasdani/github-patches | git_diff | pyca__cryptography-1201 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make it harder to allow truncation on a GCM tag
`GCM()` should take a `min_tag_length` parameter, which defaults to `GCM.MAX_TAG_LENGTH`, then we can make it error out on truncated tags by default, while still allowing truncation for use cases that need it.
This is technically a backwards compatibility break, but I argue that it should be allowed as a security hardening measure. Thoughts?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cryptography/__about__.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13 from __future__ import absolute_import, division, print_function
14
15 __all__ = [
16 "__title__", "__summary__", "__uri__", "__version__", "__author__",
17 "__email__", "__license__", "__copyright__",
18 ]
19
20 __title__ = "cryptography"
21 __summary__ = ("cryptography is a package which provides cryptographic recipes"
22 " and primitives to Python developers.")
23 __uri__ = "https://github.com/pyca/cryptography"
24
25 __version__ = "0.5.dev1"
26
27 __author__ = "The cryptography developers"
28 __email__ = "[email protected]"
29
30 __license__ = "Apache License, Version 2.0"
31 __copyright__ = "Copyright 2013-2014 %s" % __author__
32
```
Path: `cryptography/hazmat/primitives/ciphers/modes.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16 from cryptography import utils
17 from cryptography.hazmat.primitives import interfaces
18
19
20 def _check_iv_length(mode, algorithm):
21 if len(mode.initialization_vector) * 8 != algorithm.block_size:
22 raise ValueError("Invalid IV size ({0}) for {1}.".format(
23 len(mode.initialization_vector), mode.name
24 ))
25
26
27 @utils.register_interface(interfaces.Mode)
28 @utils.register_interface(interfaces.ModeWithInitializationVector)
29 class CBC(object):
30 name = "CBC"
31
32 def __init__(self, initialization_vector):
33 self.initialization_vector = initialization_vector
34
35 validate_for_algorithm = _check_iv_length
36
37
38 @utils.register_interface(interfaces.Mode)
39 class ECB(object):
40 name = "ECB"
41
42 def validate_for_algorithm(self, algorithm):
43 pass
44
45
46 @utils.register_interface(interfaces.Mode)
47 @utils.register_interface(interfaces.ModeWithInitializationVector)
48 class OFB(object):
49 name = "OFB"
50
51 def __init__(self, initialization_vector):
52 self.initialization_vector = initialization_vector
53
54 validate_for_algorithm = _check_iv_length
55
56
57 @utils.register_interface(interfaces.Mode)
58 @utils.register_interface(interfaces.ModeWithInitializationVector)
59 class CFB(object):
60 name = "CFB"
61
62 def __init__(self, initialization_vector):
63 self.initialization_vector = initialization_vector
64
65 validate_for_algorithm = _check_iv_length
66
67
68 @utils.register_interface(interfaces.Mode)
69 @utils.register_interface(interfaces.ModeWithInitializationVector)
70 class CFB8(object):
71 name = "CFB8"
72
73 def __init__(self, initialization_vector):
74 self.initialization_vector = initialization_vector
75
76 validate_for_algorithm = _check_iv_length
77
78
79 @utils.register_interface(interfaces.Mode)
80 @utils.register_interface(interfaces.ModeWithNonce)
81 class CTR(object):
82 name = "CTR"
83
84 def __init__(self, nonce):
85 self.nonce = nonce
86
87 def validate_for_algorithm(self, algorithm):
88 if len(self.nonce) * 8 != algorithm.block_size:
89 raise ValueError("Invalid nonce size ({0}) for {1}.".format(
90 len(self.nonce), self.name
91 ))
92
93
94 @utils.register_interface(interfaces.Mode)
95 @utils.register_interface(interfaces.ModeWithInitializationVector)
96 @utils.register_interface(interfaces.ModeWithAuthenticationTag)
97 class GCM(object):
98 name = "GCM"
99
100 def __init__(self, initialization_vector, tag=None):
101 # len(initialization_vector) must in [1, 2 ** 64), but it's impossible
102 # to actually construct a bytes object that large, so we don't check
103 # for it
104 if tag is not None and len(tag) < 4:
105 raise ValueError(
106 "Authentication tag must be 4 bytes or longer."
107 )
108
109 self.initialization_vector = initialization_vector
110 self.tag = tag
111
112 def validate_for_algorithm(self, algorithm):
113 pass
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cryptography/__about__.py b/cryptography/__about__.py
--- a/cryptography/__about__.py
+++ b/cryptography/__about__.py
@@ -28,4 +28,4 @@
__email__ = "[email protected]"
__license__ = "Apache License, Version 2.0"
-__copyright__ = "Copyright 2013-2014 %s" % __author__
+__copyright__ = "Copyright 2013-2014 {0}".format(__author__)
diff --git a/cryptography/hazmat/primitives/ciphers/modes.py b/cryptography/hazmat/primitives/ciphers/modes.py
--- a/cryptography/hazmat/primitives/ciphers/modes.py
+++ b/cryptography/hazmat/primitives/ciphers/modes.py
@@ -97,13 +97,16 @@
class GCM(object):
name = "GCM"
- def __init__(self, initialization_vector, tag=None):
+ def __init__(self, initialization_vector, tag=None, min_tag_length=16):
# len(initialization_vector) must in [1, 2 ** 64), but it's impossible
# to actually construct a bytes object that large, so we don't check
# for it
- if tag is not None and len(tag) < 4:
+ if min_tag_length < 4:
+ raise ValueError("min_tag_length must be >= 4")
+ if tag is not None and len(tag) < min_tag_length:
raise ValueError(
- "Authentication tag must be 4 bytes or longer."
+ "Authentication tag must be {0} bytes or longer.".format(
+ min_tag_length)
)
self.initialization_vector = initialization_vector
| {"golden_diff": "diff --git a/cryptography/__about__.py b/cryptography/__about__.py\n--- a/cryptography/__about__.py\n+++ b/cryptography/__about__.py\n@@ -28,4 +28,4 @@\n __email__ = \"[email protected]\"\n \n __license__ = \"Apache License, Version 2.0\"\n-__copyright__ = \"Copyright 2013-2014 %s\" % __author__\n+__copyright__ = \"Copyright 2013-2014 {0}\".format(__author__)\ndiff --git a/cryptography/hazmat/primitives/ciphers/modes.py b/cryptography/hazmat/primitives/ciphers/modes.py\n--- a/cryptography/hazmat/primitives/ciphers/modes.py\n+++ b/cryptography/hazmat/primitives/ciphers/modes.py\n@@ -97,13 +97,16 @@\n class GCM(object):\n name = \"GCM\"\n \n- def __init__(self, initialization_vector, tag=None):\n+ def __init__(self, initialization_vector, tag=None, min_tag_length=16):\n # len(initialization_vector) must in [1, 2 ** 64), but it's impossible\n # to actually construct a bytes object that large, so we don't check\n # for it\n- if tag is not None and len(tag) < 4:\n+ if min_tag_length < 4:\n+ raise ValueError(\"min_tag_length must be >= 4\")\n+ if tag is not None and len(tag) < min_tag_length:\n raise ValueError(\n- \"Authentication tag must be 4 bytes or longer.\"\n+ \"Authentication tag must be {0} bytes or longer.\".format(\n+ min_tag_length)\n )\n \n self.initialization_vector = initialization_vector\n", "issue": "Make it harder to allow truncation on a GCM tag\n`GCM()` should take a `min_tag_length` parameter, which defaults to `GCM.MAX_TAG_LENGTH`, then we can make it error out on truncated tags by default, while still allowing truncation for use cases that need it.\n\nThis is technically a backwards compatibility break, but I argue that it should be allowed as a security hardening measure. Thoughts?\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, print_function\n\n__all__ = [\n \"__title__\", \"__summary__\", \"__uri__\", \"__version__\", \"__author__\",\n \"__email__\", \"__license__\", \"__copyright__\",\n]\n\n__title__ = \"cryptography\"\n__summary__ = (\"cryptography is a package which provides cryptographic recipes\"\n \" and primitives to Python developers.\")\n__uri__ = \"https://github.com/pyca/cryptography\"\n\n__version__ = \"0.5.dev1\"\n\n__author__ = \"The cryptography developers\"\n__email__ = \"[email protected]\"\n\n__license__ = \"Apache License, Version 2.0\"\n__copyright__ = \"Copyright 2013-2014 %s\" % __author__\n", "path": "cryptography/__about__.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing 
permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom cryptography import utils\nfrom cryptography.hazmat.primitives import interfaces\n\n\ndef _check_iv_length(mode, algorithm):\n if len(mode.initialization_vector) * 8 != algorithm.block_size:\n raise ValueError(\"Invalid IV size ({0}) for {1}.\".format(\n len(mode.initialization_vector), mode.name\n ))\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\nclass CBC(object):\n name = \"CBC\"\n\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n\n validate_for_algorithm = _check_iv_length\n\n\[email protected]_interface(interfaces.Mode)\nclass ECB(object):\n name = \"ECB\"\n\n def validate_for_algorithm(self, algorithm):\n pass\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\nclass OFB(object):\n name = \"OFB\"\n\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n\n validate_for_algorithm = _check_iv_length\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\nclass CFB(object):\n name = \"CFB\"\n\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n\n validate_for_algorithm = _check_iv_length\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\nclass CFB8(object):\n name = \"CFB8\"\n\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n\n validate_for_algorithm = _check_iv_length\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithNonce)\nclass CTR(object):\n name = \"CTR\"\n\n def __init__(self, nonce):\n self.nonce = nonce\n\n def validate_for_algorithm(self, algorithm):\n if len(self.nonce) * 8 != algorithm.block_size:\n raise ValueError(\"Invalid nonce size ({0}) for {1}.\".format(\n len(self.nonce), self.name\n ))\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\[email protected]_interface(interfaces.ModeWithAuthenticationTag)\nclass GCM(object):\n name = \"GCM\"\n\n def __init__(self, initialization_vector, tag=None):\n # len(initialization_vector) must in [1, 2 ** 64), but it's impossible\n # to actually construct a bytes object that large, so we don't check\n # for it\n if tag is not None and len(tag) < 4:\n raise ValueError(\n \"Authentication tag must be 4 bytes or longer.\"\n )\n\n self.initialization_vector = initialization_vector\n self.tag = tag\n\n def validate_for_algorithm(self, algorithm):\n pass\n", "path": "cryptography/hazmat/primitives/ciphers/modes.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, 
print_function\n\n__all__ = [\n \"__title__\", \"__summary__\", \"__uri__\", \"__version__\", \"__author__\",\n \"__email__\", \"__license__\", \"__copyright__\",\n]\n\n__title__ = \"cryptography\"\n__summary__ = (\"cryptography is a package which provides cryptographic recipes\"\n \" and primitives to Python developers.\")\n__uri__ = \"https://github.com/pyca/cryptography\"\n\n__version__ = \"0.5.dev1\"\n\n__author__ = \"The cryptography developers\"\n__email__ = \"[email protected]\"\n\n__license__ = \"Apache License, Version 2.0\"\n__copyright__ = \"Copyright 2013-2014 {0}\".format(__author__)\n", "path": "cryptography/__about__.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nfrom cryptography import utils\nfrom cryptography.hazmat.primitives import interfaces\n\n\ndef _check_iv_length(mode, algorithm):\n if len(mode.initialization_vector) * 8 != algorithm.block_size:\n raise ValueError(\"Invalid IV size ({0}) for {1}.\".format(\n len(mode.initialization_vector), mode.name\n ))\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\nclass CBC(object):\n name = \"CBC\"\n\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n\n validate_for_algorithm = _check_iv_length\n\n\[email protected]_interface(interfaces.Mode)\nclass ECB(object):\n name = \"ECB\"\n\n def validate_for_algorithm(self, algorithm):\n pass\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\nclass OFB(object):\n name = \"OFB\"\n\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n\n validate_for_algorithm = _check_iv_length\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\nclass CFB(object):\n name = \"CFB\"\n\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n\n validate_for_algorithm = _check_iv_length\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\nclass CFB8(object):\n name = \"CFB8\"\n\n def __init__(self, initialization_vector):\n self.initialization_vector = initialization_vector\n\n validate_for_algorithm = _check_iv_length\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithNonce)\nclass CTR(object):\n name = \"CTR\"\n\n def __init__(self, nonce):\n self.nonce = nonce\n\n def validate_for_algorithm(self, algorithm):\n if len(self.nonce) * 8 != algorithm.block_size:\n raise ValueError(\"Invalid nonce size ({0}) for {1}.\".format(\n len(self.nonce), self.name\n ))\n\n\[email protected]_interface(interfaces.Mode)\[email protected]_interface(interfaces.ModeWithInitializationVector)\[email protected]_interface(interfaces.ModeWithAuthenticationTag)\nclass GCM(object):\n name = 
\"GCM\"\n\n def __init__(self, initialization_vector, tag=None, min_tag_length=16):\n # len(initialization_vector) must in [1, 2 ** 64), but it's impossible\n # to actually construct a bytes object that large, so we don't check\n # for it\n if min_tag_length < 4:\n raise ValueError(\"min_tag_length must be >= 4\")\n if tag is not None and len(tag) < min_tag_length:\n raise ValueError(\n \"Authentication tag must be {0} bytes or longer.\".format(\n min_tag_length)\n )\n\n self.initialization_vector = initialization_vector\n self.tag = tag\n\n def validate_for_algorithm(self, algorithm):\n pass\n", "path": "cryptography/hazmat/primitives/ciphers/modes.py"}]} | 1,715 | 397 |
gh_patches_debug_29942 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-1818 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Exception calling application: dictionary changed size during iteration
The exception occasionally happens when the worker reports gradients to ps.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticdl/python/ps/embedding_table.py`
Content:
```
1 import numpy as np
2 import tensorflow as tf
3
4 from elasticdl.proto.elasticdl_pb2 import EmbeddingTableInfo
5 from elasticdl.python.common.dtypes import dtype_numpy_to_tensor
6
7
8 class EmbeddingTable(object):
9 """
10 EmbeddingTable is used to store embedding parameters of an embedding
11 layer. The name of an embedding table is actually the embedding layer
12 name. It uses a dictionary to store embedding vectors, the key is the
13 item id, the value is a 1-D numpy.ndarray.
14
15 Embedding vectors are lazily initialized in parameter server.
16 EmbeddingTable also has dim and initializer fields. Inside the get
17 interface of EmbeddingTable, if the id is not in the embedding_vectors
18 dictionary, the corresponding value will be initialized.
19 """
20
21 def __init__(self, name, dim=None, initializer=None, is_slot=False):
22 """
23 Args:
24 name: The embedding table name.
25 dim: The dimension of embeddings in this embedding table.
26 initializer: The initializer to initialize new embeddings. If this
27 embedding table is for slots, `initializer` is a float and this
28 table will initialize with constant initializer. Otherwise
29 `initializer` is the name of Keras initializer.
30 is_slot: A bool. True for storing slot variable, otherwise false.
31 """
32 self.name = name
33 self.dim = dim
34 self.initializer_value = initializer
35 # set dtype to float32
36 self.dtype = np.dtype("float32")
37 if is_slot:
38 self.initializer = tf.keras.initializers.Constant(
39 float(self.initializer_value)
40 )
41 else:
42 self.initializer = tf.keras.initializers.get(
43 self.initializer_value
44 )
45 self.is_slot = is_slot
46 self.embedding_vectors = {}
47
48 def get(self, indices):
49 if len(indices) == 0:
50 return None
51 values = []
52 for i in indices:
53 value = self.embedding_vectors.get(i, None)
54 if value is None:
55 value = self.initializer(shape=(self.dim,)).numpy()
56 self.embedding_vectors[i] = value
57 values.append(value)
58 return np.stack(values)
59
60 def set(self, indices, values):
61 # TODO(qijun) need to add a RWLock in Sync-SGD
62 for index, i in enumerate(indices):
63 embedding_vector = values[index]
64 self.embedding_vectors[i] = embedding_vector
65
66 def clear(self):
67 self.embedding_vectors.clear()
68
69 def to_indexed_slices(self):
70 indices = []
71 embedding_vectors = []
72 for id, embedding_vector in self.embedding_vectors.items():
73 indices.append(id)
74 embedding_vectors.append(embedding_vector)
75 return tf.IndexedSlices(
76 values=np.array(embedding_vectors), indices=np.array(indices)
77 )
78
79 def to_embedding_table_info_pb(self):
80 """Convert the embedding table information to a protobuf"""
81 embedding_pb = EmbeddingTableInfo()
82 embedding_pb.name = self.name
83 embedding_pb.dim = self.dim
84 embedding_pb.initializer = str(self.initializer_value)
85 embedding_pb.dtype = dtype_numpy_to_tensor(self.dtype)
86 return embedding_pb
87
88 def get_table_size(self):
89 """Get the element count of an embedding table"""
90 if len(self.embedding_vectors) > 0:
91 element_size = list(self.embedding_vectors.values())[0].itemsize
92 size = self.dim * len(self.embedding_vectors) * element_size
93 return size
94 return 0
95
96 def debug_info(self):
97 return (
98 "Embedding param name: %s\n shape: [%d, %d]\n size: %d bytes\n"
99 % (
100 self.name,
101 len(self.embedding_vectors),
102 self.dim,
103 self.get_table_size(),
104 )
105 )
106
107
108 # TODO(bug): create_embedding_table does not create EmbeddingTable correctly
109 # if it is a slot table.
110 def create_embedding_table(embedding_table_info_pb):
111 name = embedding_table_info_pb.name
112 dim = embedding_table_info_pb.dim
113 initializer = embedding_table_info_pb.initializer
114 return EmbeddingTable(name, dim, initializer)
115
116
117 def get_slot_table_name(embedding_name, slot_name):
118 return embedding_name + "-" + slot_name
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticdl/python/ps/embedding_table.py b/elasticdl/python/ps/embedding_table.py
--- a/elasticdl/python/ps/embedding_table.py
+++ b/elasticdl/python/ps/embedding_table.py
@@ -1,3 +1,5 @@
+import threading
+
import numpy as np
import tensorflow as tf
@@ -44,6 +46,7 @@
)
self.is_slot = is_slot
self.embedding_vectors = {}
+ self._lock = threading.Lock()
def get(self, indices):
if len(indices) == 0:
@@ -52,8 +55,9 @@
for i in indices:
value = self.embedding_vectors.get(i, None)
if value is None:
- value = self.initializer(shape=(self.dim,)).numpy()
- self.embedding_vectors[i] = value
+ with self._lock:
+ value = self.initializer(shape=(self.dim,)).numpy()
+ self.embedding_vectors[i] = value
values.append(value)
return np.stack(values)
@@ -69,9 +73,10 @@
def to_indexed_slices(self):
indices = []
embedding_vectors = []
- for id, embedding_vector in self.embedding_vectors.items():
- indices.append(id)
- embedding_vectors.append(embedding_vector)
+ with self._lock:
+ for id, embedding_vector in self.embedding_vectors.items():
+ indices.append(id)
+ embedding_vectors.append(embedding_vector)
return tf.IndexedSlices(
values=np.array(embedding_vectors), indices=np.array(indices)
)
| {"golden_diff": "diff --git a/elasticdl/python/ps/embedding_table.py b/elasticdl/python/ps/embedding_table.py\n--- a/elasticdl/python/ps/embedding_table.py\n+++ b/elasticdl/python/ps/embedding_table.py\n@@ -1,3 +1,5 @@\n+import threading\n+\n import numpy as np\n import tensorflow as tf\n \n@@ -44,6 +46,7 @@\n )\n self.is_slot = is_slot\n self.embedding_vectors = {}\n+ self._lock = threading.Lock()\n \n def get(self, indices):\n if len(indices) == 0:\n@@ -52,8 +55,9 @@\n for i in indices:\n value = self.embedding_vectors.get(i, None)\n if value is None:\n- value = self.initializer(shape=(self.dim,)).numpy()\n- self.embedding_vectors[i] = value\n+ with self._lock:\n+ value = self.initializer(shape=(self.dim,)).numpy()\n+ self.embedding_vectors[i] = value\n values.append(value)\n return np.stack(values)\n \n@@ -69,9 +73,10 @@\n def to_indexed_slices(self):\n indices = []\n embedding_vectors = []\n- for id, embedding_vector in self.embedding_vectors.items():\n- indices.append(id)\n- embedding_vectors.append(embedding_vector)\n+ with self._lock:\n+ for id, embedding_vector in self.embedding_vectors.items():\n+ indices.append(id)\n+ embedding_vectors.append(embedding_vector)\n return tf.IndexedSlices(\n values=np.array(embedding_vectors), indices=np.array(indices)\n )\n", "issue": "Exception calling application: dictionary changed size during iteration\nThe exception occasionally happens when the worker reports gradients to ps.\r\n\r\n\n", "before_files": [{"content": "import numpy as np\nimport tensorflow as tf\n\nfrom elasticdl.proto.elasticdl_pb2 import EmbeddingTableInfo\nfrom elasticdl.python.common.dtypes import dtype_numpy_to_tensor\n\n\nclass EmbeddingTable(object):\n \"\"\"\n EmbeddingTable is used to store embedding parameters of an embedding\n layer. The name of an embedding table is actually the embedding layer\n name. It uses a dictionary to store embedding vectors, the key is the\n item id, the value is a 1-D numpy.ndarray.\n\n Embedding vectors are lazily initialized in parameter server.\n EmbeddingTable also has dim and initializer fields. Inside the get\n interface of EmbeddingTable, if the id is not in the embedding_vectors\n dictionary, the corresponding value will be initialized.\n \"\"\"\n\n def __init__(self, name, dim=None, initializer=None, is_slot=False):\n \"\"\"\n Args:\n name: The embedding table name.\n dim: The dimension of embeddings in this embedding table.\n initializer: The initializer to initialize new embeddings. If this\n embedding table is for slots, `initializer` is a float and this\n table will initialize with constant initializer. Otherwise\n `initializer` is the name of Keras initializer.\n is_slot: A bool. 
True for storing slot variable, otherwise false.\n \"\"\"\n self.name = name\n self.dim = dim\n self.initializer_value = initializer\n # set dtype to float32\n self.dtype = np.dtype(\"float32\")\n if is_slot:\n self.initializer = tf.keras.initializers.Constant(\n float(self.initializer_value)\n )\n else:\n self.initializer = tf.keras.initializers.get(\n self.initializer_value\n )\n self.is_slot = is_slot\n self.embedding_vectors = {}\n\n def get(self, indices):\n if len(indices) == 0:\n return None\n values = []\n for i in indices:\n value = self.embedding_vectors.get(i, None)\n if value is None:\n value = self.initializer(shape=(self.dim,)).numpy()\n self.embedding_vectors[i] = value\n values.append(value)\n return np.stack(values)\n\n def set(self, indices, values):\n # TODO(qijun) need to add a RWLock in Sync-SGD\n for index, i in enumerate(indices):\n embedding_vector = values[index]\n self.embedding_vectors[i] = embedding_vector\n\n def clear(self):\n self.embedding_vectors.clear()\n\n def to_indexed_slices(self):\n indices = []\n embedding_vectors = []\n for id, embedding_vector in self.embedding_vectors.items():\n indices.append(id)\n embedding_vectors.append(embedding_vector)\n return tf.IndexedSlices(\n values=np.array(embedding_vectors), indices=np.array(indices)\n )\n\n def to_embedding_table_info_pb(self):\n \"\"\"Convert the embedding table information to a protobuf\"\"\"\n embedding_pb = EmbeddingTableInfo()\n embedding_pb.name = self.name\n embedding_pb.dim = self.dim\n embedding_pb.initializer = str(self.initializer_value)\n embedding_pb.dtype = dtype_numpy_to_tensor(self.dtype)\n return embedding_pb\n\n def get_table_size(self):\n \"\"\"Get the element count of an embedding table\"\"\"\n if len(self.embedding_vectors) > 0:\n element_size = list(self.embedding_vectors.values())[0].itemsize\n size = self.dim * len(self.embedding_vectors) * element_size\n return size\n return 0\n\n def debug_info(self):\n return (\n \"Embedding param name: %s\\n shape: [%d, %d]\\n size: %d bytes\\n\"\n % (\n self.name,\n len(self.embedding_vectors),\n self.dim,\n self.get_table_size(),\n )\n )\n\n\n# TODO(bug): create_embedding_table does not create EmbeddingTable correctly\n# if it is a slot table.\ndef create_embedding_table(embedding_table_info_pb):\n name = embedding_table_info_pb.name\n dim = embedding_table_info_pb.dim\n initializer = embedding_table_info_pb.initializer\n return EmbeddingTable(name, dim, initializer)\n\n\ndef get_slot_table_name(embedding_name, slot_name):\n return embedding_name + \"-\" + slot_name\n", "path": "elasticdl/python/ps/embedding_table.py"}], "after_files": [{"content": "import threading\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom elasticdl.proto.elasticdl_pb2 import EmbeddingTableInfo\nfrom elasticdl.python.common.dtypes import dtype_numpy_to_tensor\n\n\nclass EmbeddingTable(object):\n \"\"\"\n EmbeddingTable is used to store embedding parameters of an embedding\n layer. The name of an embedding table is actually the embedding layer\n name. It uses a dictionary to store embedding vectors, the key is the\n item id, the value is a 1-D numpy.ndarray.\n\n Embedding vectors are lazily initialized in parameter server.\n EmbeddingTable also has dim and initializer fields. 
Inside the get\n interface of EmbeddingTable, if the id is not in the embedding_vectors\n dictionary, the corresponding value will be initialized.\n \"\"\"\n\n def __init__(self, name, dim=None, initializer=None, is_slot=False):\n \"\"\"\n Args:\n name: The embedding table name.\n dim: The dimension of embeddings in this embedding table.\n initializer: The initializer to initialize new embeddings. If this\n embedding table is for slots, `initializer` is a float and this\n table will initialize with constant initializer. Otherwise\n `initializer` is the name of Keras initializer.\n is_slot: A bool. True for storing slot variable, otherwise false.\n \"\"\"\n self.name = name\n self.dim = dim\n self.initializer_value = initializer\n # set dtype to float32\n self.dtype = np.dtype(\"float32\")\n if is_slot:\n self.initializer = tf.keras.initializers.Constant(\n float(self.initializer_value)\n )\n else:\n self.initializer = tf.keras.initializers.get(\n self.initializer_value\n )\n self.is_slot = is_slot\n self.embedding_vectors = {}\n self._lock = threading.Lock()\n\n def get(self, indices):\n if len(indices) == 0:\n return None\n values = []\n for i in indices:\n value = self.embedding_vectors.get(i, None)\n if value is None:\n with self._lock:\n value = self.initializer(shape=(self.dim,)).numpy()\n self.embedding_vectors[i] = value\n values.append(value)\n return np.stack(values)\n\n def set(self, indices, values):\n # TODO(qijun) need to add a RWLock in Sync-SGD\n for index, i in enumerate(indices):\n embedding_vector = values[index]\n self.embedding_vectors[i] = embedding_vector\n\n def clear(self):\n self.embedding_vectors.clear()\n\n def to_indexed_slices(self):\n indices = []\n embedding_vectors = []\n with self._lock:\n for id, embedding_vector in self.embedding_vectors.items():\n indices.append(id)\n embedding_vectors.append(embedding_vector)\n return tf.IndexedSlices(\n values=np.array(embedding_vectors), indices=np.array(indices)\n )\n\n def to_embedding_table_info_pb(self):\n \"\"\"Convert the embedding table information to a protobuf\"\"\"\n embedding_pb = EmbeddingTableInfo()\n embedding_pb.name = self.name\n embedding_pb.dim = self.dim\n embedding_pb.initializer = str(self.initializer_value)\n embedding_pb.dtype = dtype_numpy_to_tensor(self.dtype)\n return embedding_pb\n\n def get_table_size(self):\n \"\"\"Get the element count of an embedding table\"\"\"\n if len(self.embedding_vectors) > 0:\n element_size = list(self.embedding_vectors.values())[0].itemsize\n size = self.dim * len(self.embedding_vectors) * element_size\n return size\n return 0\n\n def debug_info(self):\n return (\n \"Embedding param name: %s\\n shape: [%d, %d]\\n size: %d bytes\\n\"\n % (\n self.name,\n len(self.embedding_vectors),\n self.dim,\n self.get_table_size(),\n )\n )\n\n\n# TODO(bug): create_embedding_table does not create EmbeddingTable correctly\n# if it is a slot table.\ndef create_embedding_table(embedding_table_info_pb):\n name = embedding_table_info_pb.name\n dim = embedding_table_info_pb.dim\n initializer = embedding_table_info_pb.initializer\n return EmbeddingTable(name, dim, initializer)\n\n\ndef get_slot_table_name(embedding_name, slot_name):\n return embedding_name + \"-\" + slot_name\n", "path": "elasticdl/python/ps/embedding_table.py"}]} | 1,502 | 351 |
gh_patches_debug_11089 | rasdani/github-patches | git_diff | encode__starlette-1218 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Consider making `starlette.templating.Jinja2Templates.get_env` "protected"
Hi,
Thank you very much for your project! We are using it intensively through FastAPI.
While I was exploring the code base and looking how to use `Jinja2Templates`, I noticed the method `get_env`. At first I thought it was a getter for the environment, but eventually realized it is only called in the constructor of the class.
It would make much more sense to me if the method was called `_get_env`. This would signal that it should not be used outside of the class (or at least that the caller needs to know what she is doing).
For example, if I want to add filters or globals to the environment, I should write:
```
templates = Jinja2Templates(...)
templates.env.globals[...] = ...
```
instead of:
```
templates = Jinja2Templates(...)
templates.get_env().globals[...] = ...
```
As of current implementation, the second snippet has no effect.
Thank you very much for considering this request!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlette/templating.py`
Content:
```
1 import typing
2
3 from starlette.background import BackgroundTask
4 from starlette.responses import Response
5 from starlette.types import Receive, Scope, Send
6
7 try:
8 import jinja2
9
10 # @contextfunction renamed to @pass_context in Jinja 3.0, to be removed in 3.1
11 if hasattr(jinja2, "pass_context"):
12 pass_context = jinja2.pass_context
13 else: # pragma: nocover
14 pass_context = jinja2.contextfunction
15 except ImportError: # pragma: nocover
16 jinja2 = None # type: ignore
17
18
19 class _TemplateResponse(Response):
20 media_type = "text/html"
21
22 def __init__(
23 self,
24 template: typing.Any,
25 context: dict,
26 status_code: int = 200,
27 headers: dict = None,
28 media_type: str = None,
29 background: BackgroundTask = None,
30 ):
31 self.template = template
32 self.context = context
33 content = template.render(context)
34 super().__init__(content, status_code, headers, media_type, background)
35
36 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
37 request = self.context.get("request", {})
38 extensions = request.get("extensions", {})
39 if "http.response.template" in extensions:
40 await send(
41 {
42 "type": "http.response.template",
43 "template": self.template,
44 "context": self.context,
45 }
46 )
47 await super().__call__(scope, receive, send)
48
49
50 class Jinja2Templates:
51 """
52 templates = Jinja2Templates("templates")
53
54 return templates.TemplateResponse("index.html", {"request": request})
55 """
56
57 def __init__(self, directory: str) -> None:
58 assert jinja2 is not None, "jinja2 must be installed to use Jinja2Templates"
59 self.env = self.get_env(directory)
60
61 def get_env(self, directory: str) -> "jinja2.Environment":
62 @pass_context
63 def url_for(context: dict, name: str, **path_params: typing.Any) -> str:
64 request = context["request"]
65 return request.url_for(name, **path_params)
66
67 loader = jinja2.FileSystemLoader(directory)
68 env = jinja2.Environment(loader=loader, autoescape=True)
69 env.globals["url_for"] = url_for
70 return env
71
72 def get_template(self, name: str) -> "jinja2.Template":
73 return self.env.get_template(name)
74
75 def TemplateResponse(
76 self,
77 name: str,
78 context: dict,
79 status_code: int = 200,
80 headers: dict = None,
81 media_type: str = None,
82 background: BackgroundTask = None,
83 ) -> _TemplateResponse:
84 if "request" not in context:
85 raise ValueError('context must include a "request" key')
86 template = self.get_template(name)
87 return _TemplateResponse(
88 template,
89 context,
90 status_code=status_code,
91 headers=headers,
92 media_type=media_type,
93 background=background,
94 )
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlette/templating.py b/starlette/templating.py
--- a/starlette/templating.py
+++ b/starlette/templating.py
@@ -56,9 +56,9 @@
def __init__(self, directory: str) -> None:
assert jinja2 is not None, "jinja2 must be installed to use Jinja2Templates"
- self.env = self.get_env(directory)
+ self.env = self._create_env(directory)
- def get_env(self, directory: str) -> "jinja2.Environment":
+ def _create_env(self, directory: str) -> "jinja2.Environment":
@pass_context
def url_for(context: dict, name: str, **path_params: typing.Any) -> str:
request = context["request"]
| {"golden_diff": "diff --git a/starlette/templating.py b/starlette/templating.py\n--- a/starlette/templating.py\n+++ b/starlette/templating.py\n@@ -56,9 +56,9 @@\n \n def __init__(self, directory: str) -> None:\n assert jinja2 is not None, \"jinja2 must be installed to use Jinja2Templates\"\n- self.env = self.get_env(directory)\n+ self.env = self._create_env(directory)\n \n- def get_env(self, directory: str) -> \"jinja2.Environment\":\n+ def _create_env(self, directory: str) -> \"jinja2.Environment\":\n @pass_context\n def url_for(context: dict, name: str, **path_params: typing.Any) -> str:\n request = context[\"request\"]\n", "issue": "Consider making `starlette.templating.Jinja2Templates.get_env` \"protected\"\nHi,\r\nThank you very much for your project! We are using it intensively through FastAPI.\r\n\r\nWhile I was exploring the code base and looking how to use `Jinja2Templates`, I noticed the method `get_env`. At first I thought it was a getter for the environment, but eventually realized it is only called in the constructor of the class.\r\n\r\nIt would make much more sense to me if the method was called `_get_env`. This would signal that it should not be used outside of the class (or at least that the caller needs to know what she is doing).\r\n\r\nFor example, if I want to add filters or globals to the environment, I should write:\r\n\r\n```\r\ntemplates = Jinja2Templates(...)\r\ntemplates.env.globals[...] = ...\r\n```\r\n\r\ninstead of:\r\n\r\n```\r\ntemplates = Jinja2Templates(...)\r\ntemplates.get_env().globals[...] = ...\r\n```\r\n\r\nAs of current implementation, the second snippet has no effect.\r\n\r\nThank you very much for considering this request!\n", "before_files": [{"content": "import typing\n\nfrom starlette.background import BackgroundTask\nfrom starlette.responses import Response\nfrom starlette.types import Receive, Scope, Send\n\ntry:\n import jinja2\n\n # @contextfunction renamed to @pass_context in Jinja 3.0, to be removed in 3.1\n if hasattr(jinja2, \"pass_context\"):\n pass_context = jinja2.pass_context\n else: # pragma: nocover\n pass_context = jinja2.contextfunction\nexcept ImportError: # pragma: nocover\n jinja2 = None # type: ignore\n\n\nclass _TemplateResponse(Response):\n media_type = \"text/html\"\n\n def __init__(\n self,\n template: typing.Any,\n context: dict,\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n ):\n self.template = template\n self.context = context\n content = template.render(context)\n super().__init__(content, status_code, headers, media_type, background)\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n request = self.context.get(\"request\", {})\n extensions = request.get(\"extensions\", {})\n if \"http.response.template\" in extensions:\n await send(\n {\n \"type\": \"http.response.template\",\n \"template\": self.template,\n \"context\": self.context,\n }\n )\n await super().__call__(scope, receive, send)\n\n\nclass Jinja2Templates:\n \"\"\"\n templates = Jinja2Templates(\"templates\")\n\n return templates.TemplateResponse(\"index.html\", {\"request\": request})\n \"\"\"\n\n def __init__(self, directory: str) -> None:\n assert jinja2 is not None, \"jinja2 must be installed to use Jinja2Templates\"\n self.env = self.get_env(directory)\n\n def get_env(self, directory: str) -> \"jinja2.Environment\":\n @pass_context\n def url_for(context: dict, name: str, **path_params: typing.Any) -> str:\n request = context[\"request\"]\n return 
request.url_for(name, **path_params)\n\n loader = jinja2.FileSystemLoader(directory)\n env = jinja2.Environment(loader=loader, autoescape=True)\n env.globals[\"url_for\"] = url_for\n return env\n\n def get_template(self, name: str) -> \"jinja2.Template\":\n return self.env.get_template(name)\n\n def TemplateResponse(\n self,\n name: str,\n context: dict,\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n ) -> _TemplateResponse:\n if \"request\" not in context:\n raise ValueError('context must include a \"request\" key')\n template = self.get_template(name)\n return _TemplateResponse(\n template,\n context,\n status_code=status_code,\n headers=headers,\n media_type=media_type,\n background=background,\n )\n", "path": "starlette/templating.py"}], "after_files": [{"content": "import typing\n\nfrom starlette.background import BackgroundTask\nfrom starlette.responses import Response\nfrom starlette.types import Receive, Scope, Send\n\ntry:\n import jinja2\n\n # @contextfunction renamed to @pass_context in Jinja 3.0, to be removed in 3.1\n if hasattr(jinja2, \"pass_context\"):\n pass_context = jinja2.pass_context\n else: # pragma: nocover\n pass_context = jinja2.contextfunction\nexcept ImportError: # pragma: nocover\n jinja2 = None # type: ignore\n\n\nclass _TemplateResponse(Response):\n media_type = \"text/html\"\n\n def __init__(\n self,\n template: typing.Any,\n context: dict,\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n ):\n self.template = template\n self.context = context\n content = template.render(context)\n super().__init__(content, status_code, headers, media_type, background)\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n request = self.context.get(\"request\", {})\n extensions = request.get(\"extensions\", {})\n if \"http.response.template\" in extensions:\n await send(\n {\n \"type\": \"http.response.template\",\n \"template\": self.template,\n \"context\": self.context,\n }\n )\n await super().__call__(scope, receive, send)\n\n\nclass Jinja2Templates:\n \"\"\"\n templates = Jinja2Templates(\"templates\")\n\n return templates.TemplateResponse(\"index.html\", {\"request\": request})\n \"\"\"\n\n def __init__(self, directory: str) -> None:\n assert jinja2 is not None, \"jinja2 must be installed to use Jinja2Templates\"\n self.env = self._create_env(directory)\n\n def _create_env(self, directory: str) -> \"jinja2.Environment\":\n @pass_context\n def url_for(context: dict, name: str, **path_params: typing.Any) -> str:\n request = context[\"request\"]\n return request.url_for(name, **path_params)\n\n loader = jinja2.FileSystemLoader(directory)\n env = jinja2.Environment(loader=loader, autoescape=True)\n env.globals[\"url_for\"] = url_for\n return env\n\n def get_template(self, name: str) -> \"jinja2.Template\":\n return self.env.get_template(name)\n\n def TemplateResponse(\n self,\n name: str,\n context: dict,\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n ) -> _TemplateResponse:\n if \"request\" not in context:\n raise ValueError('context must include a \"request\" key')\n template = self.get_template(name)\n return _TemplateResponse(\n template,\n context,\n status_code=status_code,\n headers=headers,\n media_type=media_type,\n background=background,\n )\n", "path": "starlette/templating.py"}]} | 1,338 | 179 |
gh_patches_debug_15302 | rasdani/github-patches | git_diff | dj-stripe__dj-stripe-975 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Document our plan to standardise on storing currency in cents
- [x] Add note on field classes that StripeQuantumCurrencyAmountField should be used for new fields
- [x] Update changelog about 3.0 migration to cents
_Originally posted by @therefromhere in https://github.com/dj-stripe/dj-stripe/issues/955#issuecomment-527251976_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `djstripe/fields.py`
Content:
```
1 """
2 dj-stripe Custom Field Definitions
3 """
4 import decimal
5
6 from django.core.validators import MaxValueValidator, MinValueValidator
7 from django.db import models
8
9 from .settings import USE_NATIVE_JSONFIELD
10 from .utils import convert_tstamp
11
12 if USE_NATIVE_JSONFIELD:
13 from django.contrib.postgres.fields import JSONField as BaseJSONField
14 else:
15 from jsonfield import JSONField as BaseJSONField
16
17
18 class PaymentMethodForeignKey(models.ForeignKey):
19 def __init__(self, **kwargs):
20 kwargs.setdefault("to", "DjstripePaymentMethod")
21 super().__init__(**kwargs)
22
23
24 class StripePercentField(models.DecimalField):
25 """A field used to define a percent according to djstripe logic."""
26
27 def __init__(self, *args, **kwargs):
28 """Assign default args to this field."""
29 defaults = {
30 "decimal_places": 2,
31 "max_digits": 5,
32 "validators": [MinValueValidator(1), MaxValueValidator(100)],
33 }
34 defaults.update(kwargs)
35 super().__init__(*args, **defaults)
36
37
38 class StripeCurrencyCodeField(models.CharField):
39 """
40 A field used to store a three-letter currency code (eg. usd, eur, ...)
41 """
42
43 def __init__(self, *args, **kwargs):
44 defaults = {"max_length": 3, "help_text": "Three-letter ISO currency code"}
45 defaults.update(kwargs)
46 super().__init__(*args, **defaults)
47
48
49 class StripeQuantumCurrencyAmountField(models.IntegerField):
50 pass
51
52
53 class StripeDecimalCurrencyAmountField(models.DecimalField):
54 """
55 A field used to define currency according to djstripe logic.
56
57 Stripe is always in cents. djstripe stores everything in dollars.
58 """
59
60 def __init__(self, *args, **kwargs):
61 """Assign default args to this field."""
62 defaults = {"decimal_places": 2, "max_digits": 8}
63 defaults.update(kwargs)
64 super().__init__(*args, **defaults)
65
66 def stripe_to_db(self, data):
67 """Convert the raw value to decimal representation."""
68 val = data.get(self.name)
69
70 # Note: 0 is a possible return value, which is 'falseish'
71 if val is not None:
72 return val / decimal.Decimal("100")
73
74
75 class StripeEnumField(models.CharField):
76 def __init__(self, enum, *args, **kwargs):
77 self.enum = enum
78 choices = enum.choices
79 defaults = {"choices": choices, "max_length": max(len(k) for k, v in choices)}
80 defaults.update(kwargs)
81 super().__init__(*args, **defaults)
82
83 def deconstruct(self):
84 name, path, args, kwargs = super().deconstruct()
85 kwargs["enum"] = self.enum
86 del kwargs["choices"]
87 return name, path, args, kwargs
88
89
90 class StripeIdField(models.CharField):
91 """A field with enough space to hold any stripe ID."""
92
93 def __init__(self, *args, **kwargs):
94 """
95 Assign default args to this field.
96
97 As per: https://stripe.com/docs/upgrades
98 You can safely assume object IDs we generate will never exceed 255
99 characters, but you should be able to handle IDs of up to that
100 length.
101 """
102 defaults = {"max_length": 255, "blank": False, "null": False}
103 defaults.update(kwargs)
104 super().__init__(*args, **defaults)
105
106
107 class StripeDateTimeField(models.DateTimeField):
108 """A field used to define a DateTimeField value according to djstripe logic."""
109
110 def stripe_to_db(self, data):
111 """Convert the raw timestamp value to a DateTime representation."""
112 val = data.get(self.name)
113
114 # Note: 0 is a possible return value, which is 'falseish'
115 if val is not None:
116 return convert_tstamp(val)
117
118
119 class JSONField(BaseJSONField):
120 """A field used to define a JSONField value according to djstripe logic."""
121
122 pass
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/djstripe/fields.py b/djstripe/fields.py
--- a/djstripe/fields.py
+++ b/djstripe/fields.py
@@ -47,14 +47,22 @@
class StripeQuantumCurrencyAmountField(models.IntegerField):
+ """
+ A field used to store currency amounts in cents (etc) as per stripe.
+ """
+
pass
class StripeDecimalCurrencyAmountField(models.DecimalField):
"""
- A field used to define currency according to djstripe logic.
+ A legacy field to store currency amounts in dollars (etc).
+
+ Stripe is always in cents. Historically djstripe stored everything in dollars.
- Stripe is always in cents. djstripe stores everything in dollars.
+ Note: Don't use this for new fields, use StripeQuantumCurrencyAmountField instead.
+ We're planning on migrating existing fields in dj-stripe 3.0,
+ see https://github.com/dj-stripe/dj-stripe/issues/955
"""
def __init__(self, *args, **kwargs):
| {"golden_diff": "diff --git a/djstripe/fields.py b/djstripe/fields.py\n--- a/djstripe/fields.py\n+++ b/djstripe/fields.py\n@@ -47,14 +47,22 @@\n \n \n class StripeQuantumCurrencyAmountField(models.IntegerField):\n+ \"\"\"\n+ A field used to store currency amounts in cents (etc) as per stripe.\n+ \"\"\"\n+\n pass\n \n \n class StripeDecimalCurrencyAmountField(models.DecimalField):\n \"\"\"\n- A field used to define currency according to djstripe logic.\n+ A legacy field to store currency amounts in dollars (etc).\n+\n+ Stripe is always in cents. Historically djstripe stored everything in dollars.\n \n- Stripe is always in cents. djstripe stores everything in dollars.\n+ Note: Don't use this for new fields, use StripeQuantumCurrencyAmountField instead.\n+ We're planning on migrating existing fields in dj-stripe 3.0,\n+ see https://github.com/dj-stripe/dj-stripe/issues/955\n \"\"\"\n \n def __init__(self, *args, **kwargs):\n", "issue": "Document our plan to standardise on storing currency in cents\n- [x] Add note on field classes that StripeQuantumCurrencyAmountField should be used for new fields\r\n- [x] Update changelog about 3.0 migration to cents\r\n\r\n_Originally posted by @therefromhere in https://github.com/dj-stripe/dj-stripe/issues/955#issuecomment-527251976_\n", "before_files": [{"content": "\"\"\"\ndj-stripe Custom Field Definitions\n\"\"\"\nimport decimal\n\nfrom django.core.validators import MaxValueValidator, MinValueValidator\nfrom django.db import models\n\nfrom .settings import USE_NATIVE_JSONFIELD\nfrom .utils import convert_tstamp\n\nif USE_NATIVE_JSONFIELD:\n from django.contrib.postgres.fields import JSONField as BaseJSONField\nelse:\n from jsonfield import JSONField as BaseJSONField\n\n\nclass PaymentMethodForeignKey(models.ForeignKey):\n def __init__(self, **kwargs):\n kwargs.setdefault(\"to\", \"DjstripePaymentMethod\")\n super().__init__(**kwargs)\n\n\nclass StripePercentField(models.DecimalField):\n \"\"\"A field used to define a percent according to djstripe logic.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Assign default args to this field.\"\"\"\n defaults = {\n \"decimal_places\": 2,\n \"max_digits\": 5,\n \"validators\": [MinValueValidator(1), MaxValueValidator(100)],\n }\n defaults.update(kwargs)\n super().__init__(*args, **defaults)\n\n\nclass StripeCurrencyCodeField(models.CharField):\n \"\"\"\n A field used to store a three-letter currency code (eg. usd, eur, ...)\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n defaults = {\"max_length\": 3, \"help_text\": \"Three-letter ISO currency code\"}\n defaults.update(kwargs)\n super().__init__(*args, **defaults)\n\n\nclass StripeQuantumCurrencyAmountField(models.IntegerField):\n pass\n\n\nclass StripeDecimalCurrencyAmountField(models.DecimalField):\n \"\"\"\n A field used to define currency according to djstripe logic.\n\n Stripe is always in cents. 
djstripe stores everything in dollars.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Assign default args to this field.\"\"\"\n defaults = {\"decimal_places\": 2, \"max_digits\": 8}\n defaults.update(kwargs)\n super().__init__(*args, **defaults)\n\n def stripe_to_db(self, data):\n \"\"\"Convert the raw value to decimal representation.\"\"\"\n val = data.get(self.name)\n\n # Note: 0 is a possible return value, which is 'falseish'\n if val is not None:\n return val / decimal.Decimal(\"100\")\n\n\nclass StripeEnumField(models.CharField):\n def __init__(self, enum, *args, **kwargs):\n self.enum = enum\n choices = enum.choices\n defaults = {\"choices\": choices, \"max_length\": max(len(k) for k, v in choices)}\n defaults.update(kwargs)\n super().__init__(*args, **defaults)\n\n def deconstruct(self):\n name, path, args, kwargs = super().deconstruct()\n kwargs[\"enum\"] = self.enum\n del kwargs[\"choices\"]\n return name, path, args, kwargs\n\n\nclass StripeIdField(models.CharField):\n \"\"\"A field with enough space to hold any stripe ID.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Assign default args to this field.\n\n As per: https://stripe.com/docs/upgrades\n You can safely assume object IDs we generate will never exceed 255\n characters, but you should be able to handle IDs of up to that\n length.\n \"\"\"\n defaults = {\"max_length\": 255, \"blank\": False, \"null\": False}\n defaults.update(kwargs)\n super().__init__(*args, **defaults)\n\n\nclass StripeDateTimeField(models.DateTimeField):\n \"\"\"A field used to define a DateTimeField value according to djstripe logic.\"\"\"\n\n def stripe_to_db(self, data):\n \"\"\"Convert the raw timestamp value to a DateTime representation.\"\"\"\n val = data.get(self.name)\n\n # Note: 0 is a possible return value, which is 'falseish'\n if val is not None:\n return convert_tstamp(val)\n\n\nclass JSONField(BaseJSONField):\n \"\"\"A field used to define a JSONField value according to djstripe logic.\"\"\"\n\n pass\n", "path": "djstripe/fields.py"}], "after_files": [{"content": "\"\"\"\ndj-stripe Custom Field Definitions\n\"\"\"\nimport decimal\n\nfrom django.core.validators import MaxValueValidator, MinValueValidator\nfrom django.db import models\n\nfrom .settings import USE_NATIVE_JSONFIELD\nfrom .utils import convert_tstamp\n\nif USE_NATIVE_JSONFIELD:\n from django.contrib.postgres.fields import JSONField as BaseJSONField\nelse:\n from jsonfield import JSONField as BaseJSONField\n\n\nclass PaymentMethodForeignKey(models.ForeignKey):\n def __init__(self, **kwargs):\n kwargs.setdefault(\"to\", \"DjstripePaymentMethod\")\n super().__init__(**kwargs)\n\n\nclass StripePercentField(models.DecimalField):\n \"\"\"A field used to define a percent according to djstripe logic.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Assign default args to this field.\"\"\"\n defaults = {\n \"decimal_places\": 2,\n \"max_digits\": 5,\n \"validators\": [MinValueValidator(1), MaxValueValidator(100)],\n }\n defaults.update(kwargs)\n super().__init__(*args, **defaults)\n\n\nclass StripeCurrencyCodeField(models.CharField):\n \"\"\"\n A field used to store a three-letter currency code (eg. 
usd, eur, ...)\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n defaults = {\"max_length\": 3, \"help_text\": \"Three-letter ISO currency code\"}\n defaults.update(kwargs)\n super().__init__(*args, **defaults)\n\n\nclass StripeQuantumCurrencyAmountField(models.IntegerField):\n \"\"\"\n A field used to store currency amounts in cents (etc) as per stripe.\n \"\"\"\n\n pass\n\n\nclass StripeDecimalCurrencyAmountField(models.DecimalField):\n \"\"\"\n A legacy field to store currency amounts in dollars (etc).\n\n Stripe is always in cents. Historically djstripe stored everything in dollars.\n\n Note: Don't use this for new fields, use StripeQuantumCurrencyAmountField instead.\n We're planning on migrating existing fields in dj-stripe 3.0,\n see https://github.com/dj-stripe/dj-stripe/issues/955\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Assign default args to this field.\"\"\"\n defaults = {\"decimal_places\": 2, \"max_digits\": 8}\n defaults.update(kwargs)\n super().__init__(*args, **defaults)\n\n def stripe_to_db(self, data):\n \"\"\"Convert the raw value to decimal representation.\"\"\"\n val = data.get(self.name)\n\n # Note: 0 is a possible return value, which is 'falseish'\n if val is not None:\n return val / decimal.Decimal(\"100\")\n\n\nclass StripeEnumField(models.CharField):\n def __init__(self, enum, *args, **kwargs):\n self.enum = enum\n choices = enum.choices\n defaults = {\"choices\": choices, \"max_length\": max(len(k) for k, v in choices)}\n defaults.update(kwargs)\n super().__init__(*args, **defaults)\n\n def deconstruct(self):\n name, path, args, kwargs = super().deconstruct()\n kwargs[\"enum\"] = self.enum\n del kwargs[\"choices\"]\n return name, path, args, kwargs\n\n\nclass StripeIdField(models.CharField):\n \"\"\"A field with enough space to hold any stripe ID.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Assign default args to this field.\n\n As per: https://stripe.com/docs/upgrades\n You can safely assume object IDs we generate will never exceed 255\n characters, but you should be able to handle IDs of up to that\n length.\n \"\"\"\n defaults = {\"max_length\": 255, \"blank\": False, \"null\": False}\n defaults.update(kwargs)\n super().__init__(*args, **defaults)\n\n\nclass StripeDateTimeField(models.DateTimeField):\n \"\"\"A field used to define a DateTimeField value according to djstripe logic.\"\"\"\n\n def stripe_to_db(self, data):\n \"\"\"Convert the raw timestamp value to a DateTime representation.\"\"\"\n val = data.get(self.name)\n\n # Note: 0 is a possible return value, which is 'falseish'\n if val is not None:\n return convert_tstamp(val)\n\n\nclass JSONField(BaseJSONField):\n \"\"\"A field used to define a JSONField value according to djstripe logic.\"\"\"\n\n pass\n", "path": "djstripe/fields.py"}]} | 1,476 | 239 |
gh_patches_debug_23652 | rasdani/github-patches | git_diff | Mailu__Mailu-2150 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
fix key error in dictionary
## What type of PR?
(Feature, enhancement, bug-fix, documentation)
## What does this PR do?
### Related issue(s)
- Auto close an issue like: closes #2145
## Prerequisites
Before we can consider review and merge, please make sure the following list is done and checked.
If an entry in not applicable, you can check it or remove it from the list.
- [ ] In case of feature or enhancement: documentation updated accordingly
- [ ] Unless it's docs or a minor change: add [changelog](https://mailu.io/master/contributors/workflow.html#changelog) entry file.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/admin/mailu/internal/views/auth.py`
Content:
```
1 from mailu import models, utils
2 from mailu.internal import internal, nginx
3 from flask import current_app as app
4
5 import flask
6 import flask_login
7 import base64
8
9 @internal.route("/auth/email")
10 def nginx_authentication():
11 """ Main authentication endpoint for Nginx email server
12 """
13 client_ip = flask.request.headers["Client-Ip"]
14 headers = flask.request.headers
15 if headers["Auth-Port"] == '25' and headers['Auth-Method'] == 'plain':
16 response = flask.Response()
17 response.headers['Auth-Status'] = 'AUTH not supported'
18 response.headers['Auth-Error-Code'] = '502 5.5.1'
19 utils.limiter.rate_limit_ip(client_ip)
20 return response
21 if utils.limiter.should_rate_limit_ip(client_ip):
22 status, code = nginx.get_status(flask.request.headers['Auth-Protocol'], 'ratelimit')
23 response = flask.Response()
24 response.headers['Auth-Status'] = status
25 response.headers['Auth-Error-Code'] = code
26 if int(flask.request.headers['Auth-Login-Attempt']) < 10:
27 response.headers['Auth-Wait'] = '3'
28 return response
29 headers = nginx.handle_authentication(flask.request.headers)
30 response = flask.Response()
31 for key, value in headers.items():
32 response.headers[key] = str(value)
33 is_valid_user = False
34 is_from_webmail = headers['Auth-Port'] in ['10143', '10025']
35 if response.headers.get("Auth-User-Exists"):
36 username = response.headers["Auth-User"]
37 if utils.limiter.should_rate_limit_user(username, client_ip):
38 # FIXME could be done before handle_authentication()
39 status, code = nginx.get_status(flask.request.headers['Auth-Protocol'], 'ratelimit')
40 response = flask.Response()
41 response.headers['Auth-Status'] = status
42 response.headers['Auth-Error-Code'] = code
43 if int(flask.request.headers['Auth-Login-Attempt']) < 10:
44 response.headers['Auth-Wait'] = '3'
45 return response
46 is_valid_user = True
47 if headers.get("Auth-Status") == "OK":
48 utils.limiter.exempt_ip_from_ratelimits(client_ip)
49 elif is_valid_user:
50 utils.limiter.rate_limit_user(username, client_ip)
51 elif not is_from_webmail:
52 utils.limiter.rate_limit_ip(client_ip)
53 return response
54
55 @internal.route("/auth/admin")
56 def admin_authentication():
57 """ Fails if the user is not an authenticated admin.
58 """
59 if (not flask_login.current_user.is_anonymous
60 and flask_login.current_user.global_admin
61 and flask_login.current_user.enabled):
62 return ""
63 return flask.abort(403)
64
65 @internal.route("/auth/user")
66 def user_authentication():
67 """ Fails if the user is not authenticated.
68 """
69 if (not flask_login.current_user.is_anonymous
70 and flask_login.current_user.enabled):
71 response = flask.Response()
72 email = flask_login.current_user.get_id()
73 response.headers["X-User"] = models.IdnaEmail.process_bind_param(flask_login, email, "")
74 response.headers["X-User-Token"] = utils.gen_temp_token(email, flask.session)
75 return response
76 return flask.abort(403)
77
78
79 @internal.route("/auth/basic")
80 def basic_authentication():
81 """ Tries to authenticate using the Authorization header.
82 """
83 client_ip = flask.request.headers.get('X-Real-IP', flask.request.remote_addr)
84 if utils.limiter.should_rate_limit_ip(client_ip):
85 response = flask.Response(status=401)
86 response.headers["WWW-Authenticate"] = 'Basic realm="Authentication rate limit from one source exceeded"'
87 response.headers['Retry-After'] = '60'
88 return response
89 authorization = flask.request.headers.get("Authorization")
90 if authorization and authorization.startswith("Basic "):
91 encoded = authorization.replace("Basic ", "")
92 user_email, password = base64.b64decode(encoded).split(b":", 1)
93 user_email = user_email.decode("utf8")
94 if utils.limiter.should_rate_limit_user(user_email, client_ip):
95 response = flask.Response(status=401)
96 response.headers["WWW-Authenticate"] = 'Basic realm="Authentication rate limit for this username exceeded"'
97 response.headers['Retry-After'] = '60'
98 return response
99 user = models.User.query.get(user_email)
100 if user and nginx.check_credentials(user, password.decode('utf-8'), client_ip, "web"):
101 response = flask.Response()
102 response.headers["X-User"] = models.IdnaEmail.process_bind_param(flask_login, user.email, "")
103 utils.limiter.exempt_ip_from_ratelimits(client_ip)
104 return response
105 utils.limiter.rate_limit_user(user_email, client_ip) if user else utils.limiter.rate_limit_ip(client_ip)
106 response = flask.Response(status=401)
107 response.headers["WWW-Authenticate"] = 'Basic realm="Login Required"'
108 return response
109
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/admin/mailu/internal/views/auth.py b/core/admin/mailu/internal/views/auth.py
--- a/core/admin/mailu/internal/views/auth.py
+++ b/core/admin/mailu/internal/views/auth.py
@@ -18,7 +18,8 @@
response.headers['Auth-Error-Code'] = '502 5.5.1'
utils.limiter.rate_limit_ip(client_ip)
return response
- if utils.limiter.should_rate_limit_ip(client_ip):
+ is_from_webmail = headers['Auth-Port'] in ['10143', '10025']
+ if not is_from_webmail and utils.limiter.should_rate_limit_ip(client_ip):
status, code = nginx.get_status(flask.request.headers['Auth-Protocol'], 'ratelimit')
response = flask.Response()
response.headers['Auth-Status'] = status
@@ -31,7 +32,6 @@
for key, value in headers.items():
response.headers[key] = str(value)
is_valid_user = False
- is_from_webmail = headers['Auth-Port'] in ['10143', '10025']
if response.headers.get("Auth-User-Exists"):
username = response.headers["Auth-User"]
if utils.limiter.should_rate_limit_user(username, client_ip):
| {"golden_diff": "diff --git a/core/admin/mailu/internal/views/auth.py b/core/admin/mailu/internal/views/auth.py\n--- a/core/admin/mailu/internal/views/auth.py\n+++ b/core/admin/mailu/internal/views/auth.py\n@@ -18,7 +18,8 @@\n response.headers['Auth-Error-Code'] = '502 5.5.1'\n utils.limiter.rate_limit_ip(client_ip)\n return response\n- if utils.limiter.should_rate_limit_ip(client_ip):\n+ is_from_webmail = headers['Auth-Port'] in ['10143', '10025']\n+ if not is_from_webmail and utils.limiter.should_rate_limit_ip(client_ip):\n status, code = nginx.get_status(flask.request.headers['Auth-Protocol'], 'ratelimit')\n response = flask.Response()\n response.headers['Auth-Status'] = status\n@@ -31,7 +32,6 @@\n for key, value in headers.items():\n response.headers[key] = str(value)\n is_valid_user = False\n- is_from_webmail = headers['Auth-Port'] in ['10143', '10025']\n if response.headers.get(\"Auth-User-Exists\"):\n username = response.headers[\"Auth-User\"]\n if utils.limiter.should_rate_limit_user(username, client_ip):\n", "issue": "fix key error in dictionary\n## What type of PR?\r\n\r\n(Feature, enhancement, bug-fix, documentation)\r\n\r\n## What does this PR do?\r\n\r\n### Related issue(s)\r\n- Auto close an issue like: closes #2145\r\n\r\n## Prerequisites\r\nBefore we can consider review and merge, please make sure the following list is done and checked.\r\nIf an entry in not applicable, you can check it or remove it from the list.\r\n\r\n- [ ] In case of feature or enhancement: documentation updated accordingly\r\n- [ ] Unless it's docs or a minor change: add [changelog](https://mailu.io/master/contributors/workflow.html#changelog) entry file.\r\n\n", "before_files": [{"content": "from mailu import models, utils\nfrom mailu.internal import internal, nginx\nfrom flask import current_app as app\n\nimport flask\nimport flask_login\nimport base64\n\[email protected](\"/auth/email\")\ndef nginx_authentication():\n \"\"\" Main authentication endpoint for Nginx email server\n \"\"\"\n client_ip = flask.request.headers[\"Client-Ip\"]\n headers = flask.request.headers\n if headers[\"Auth-Port\"] == '25' and headers['Auth-Method'] == 'plain':\n response = flask.Response()\n response.headers['Auth-Status'] = 'AUTH not supported'\n response.headers['Auth-Error-Code'] = '502 5.5.1'\n utils.limiter.rate_limit_ip(client_ip)\n return response\n if utils.limiter.should_rate_limit_ip(client_ip):\n status, code = nginx.get_status(flask.request.headers['Auth-Protocol'], 'ratelimit')\n response = flask.Response()\n response.headers['Auth-Status'] = status\n response.headers['Auth-Error-Code'] = code\n if int(flask.request.headers['Auth-Login-Attempt']) < 10:\n response.headers['Auth-Wait'] = '3'\n return response\n headers = nginx.handle_authentication(flask.request.headers)\n response = flask.Response()\n for key, value in headers.items():\n response.headers[key] = str(value)\n is_valid_user = False\n is_from_webmail = headers['Auth-Port'] in ['10143', '10025']\n if response.headers.get(\"Auth-User-Exists\"):\n username = response.headers[\"Auth-User\"]\n if utils.limiter.should_rate_limit_user(username, client_ip):\n # FIXME could be done before handle_authentication()\n status, code = nginx.get_status(flask.request.headers['Auth-Protocol'], 'ratelimit')\n response = flask.Response()\n response.headers['Auth-Status'] = status\n response.headers['Auth-Error-Code'] = code\n if int(flask.request.headers['Auth-Login-Attempt']) < 10:\n response.headers['Auth-Wait'] = '3'\n return response\n is_valid_user = 
True\n if headers.get(\"Auth-Status\") == \"OK\":\n utils.limiter.exempt_ip_from_ratelimits(client_ip)\n elif is_valid_user:\n utils.limiter.rate_limit_user(username, client_ip)\n elif not is_from_webmail:\n utils.limiter.rate_limit_ip(client_ip)\n return response\n\[email protected](\"/auth/admin\")\ndef admin_authentication():\n \"\"\" Fails if the user is not an authenticated admin.\n \"\"\"\n if (not flask_login.current_user.is_anonymous\n and flask_login.current_user.global_admin\n and flask_login.current_user.enabled):\n return \"\"\n return flask.abort(403)\n\[email protected](\"/auth/user\")\ndef user_authentication():\n \"\"\" Fails if the user is not authenticated.\n \"\"\"\n if (not flask_login.current_user.is_anonymous\n and flask_login.current_user.enabled):\n response = flask.Response()\n email = flask_login.current_user.get_id()\n response.headers[\"X-User\"] = models.IdnaEmail.process_bind_param(flask_login, email, \"\")\n response.headers[\"X-User-Token\"] = utils.gen_temp_token(email, flask.session)\n return response\n return flask.abort(403)\n\n\[email protected](\"/auth/basic\")\ndef basic_authentication():\n \"\"\" Tries to authenticate using the Authorization header.\n \"\"\"\n client_ip = flask.request.headers.get('X-Real-IP', flask.request.remote_addr)\n if utils.limiter.should_rate_limit_ip(client_ip):\n response = flask.Response(status=401)\n response.headers[\"WWW-Authenticate\"] = 'Basic realm=\"Authentication rate limit from one source exceeded\"'\n response.headers['Retry-After'] = '60'\n return response\n authorization = flask.request.headers.get(\"Authorization\")\n if authorization and authorization.startswith(\"Basic \"):\n encoded = authorization.replace(\"Basic \", \"\")\n user_email, password = base64.b64decode(encoded).split(b\":\", 1)\n user_email = user_email.decode(\"utf8\")\n if utils.limiter.should_rate_limit_user(user_email, client_ip):\n response = flask.Response(status=401)\n response.headers[\"WWW-Authenticate\"] = 'Basic realm=\"Authentication rate limit for this username exceeded\"'\n response.headers['Retry-After'] = '60'\n return response\n user = models.User.query.get(user_email)\n if user and nginx.check_credentials(user, password.decode('utf-8'), client_ip, \"web\"):\n response = flask.Response()\n response.headers[\"X-User\"] = models.IdnaEmail.process_bind_param(flask_login, user.email, \"\")\n utils.limiter.exempt_ip_from_ratelimits(client_ip)\n return response\n utils.limiter.rate_limit_user(user_email, client_ip) if user else utils.limiter.rate_limit_ip(client_ip)\n response = flask.Response(status=401)\n response.headers[\"WWW-Authenticate\"] = 'Basic realm=\"Login Required\"'\n return response\n", "path": "core/admin/mailu/internal/views/auth.py"}], "after_files": [{"content": "from mailu import models, utils\nfrom mailu.internal import internal, nginx\nfrom flask import current_app as app\n\nimport flask\nimport flask_login\nimport base64\n\[email protected](\"/auth/email\")\ndef nginx_authentication():\n \"\"\" Main authentication endpoint for Nginx email server\n \"\"\"\n client_ip = flask.request.headers[\"Client-Ip\"]\n headers = flask.request.headers\n if headers[\"Auth-Port\"] == '25' and headers['Auth-Method'] == 'plain':\n response = flask.Response()\n response.headers['Auth-Status'] = 'AUTH not supported'\n response.headers['Auth-Error-Code'] = '502 5.5.1'\n utils.limiter.rate_limit_ip(client_ip)\n return response\n is_from_webmail = headers['Auth-Port'] in ['10143', '10025']\n if not is_from_webmail and 
utils.limiter.should_rate_limit_ip(client_ip):\n status, code = nginx.get_status(flask.request.headers['Auth-Protocol'], 'ratelimit')\n response = flask.Response()\n response.headers['Auth-Status'] = status\n response.headers['Auth-Error-Code'] = code\n if int(flask.request.headers['Auth-Login-Attempt']) < 10:\n response.headers['Auth-Wait'] = '3'\n return response\n headers = nginx.handle_authentication(flask.request.headers)\n response = flask.Response()\n for key, value in headers.items():\n response.headers[key] = str(value)\n is_valid_user = False\n if response.headers.get(\"Auth-User-Exists\"):\n username = response.headers[\"Auth-User\"]\n if utils.limiter.should_rate_limit_user(username, client_ip):\n # FIXME could be done before handle_authentication()\n status, code = nginx.get_status(flask.request.headers['Auth-Protocol'], 'ratelimit')\n response = flask.Response()\n response.headers['Auth-Status'] = status\n response.headers['Auth-Error-Code'] = code\n if int(flask.request.headers['Auth-Login-Attempt']) < 10:\n response.headers['Auth-Wait'] = '3'\n return response\n is_valid_user = True\n if headers.get(\"Auth-Status\") == \"OK\":\n utils.limiter.exempt_ip_from_ratelimits(client_ip)\n elif is_valid_user:\n utils.limiter.rate_limit_user(username, client_ip)\n elif not is_from_webmail:\n utils.limiter.rate_limit_ip(client_ip)\n return response\n\[email protected](\"/auth/admin\")\ndef admin_authentication():\n \"\"\" Fails if the user is not an authenticated admin.\n \"\"\"\n if (not flask_login.current_user.is_anonymous\n and flask_login.current_user.global_admin\n and flask_login.current_user.enabled):\n return \"\"\n return flask.abort(403)\n\[email protected](\"/auth/user\")\ndef user_authentication():\n \"\"\" Fails if the user is not authenticated.\n \"\"\"\n if (not flask_login.current_user.is_anonymous\n and flask_login.current_user.enabled):\n response = flask.Response()\n email = flask_login.current_user.get_id()\n response.headers[\"X-User\"] = models.IdnaEmail.process_bind_param(flask_login, email, \"\")\n response.headers[\"X-User-Token\"] = utils.gen_temp_token(email, flask.session)\n return response\n return flask.abort(403)\n\n\[email protected](\"/auth/basic\")\ndef basic_authentication():\n \"\"\" Tries to authenticate using the Authorization header.\n \"\"\"\n client_ip = flask.request.headers.get('X-Real-IP', flask.request.remote_addr)\n if utils.limiter.should_rate_limit_ip(client_ip):\n response = flask.Response(status=401)\n response.headers[\"WWW-Authenticate\"] = 'Basic realm=\"Authentication rate limit from one source exceeded\"'\n response.headers['Retry-After'] = '60'\n return response\n authorization = flask.request.headers.get(\"Authorization\")\n if authorization and authorization.startswith(\"Basic \"):\n encoded = authorization.replace(\"Basic \", \"\")\n user_email, password = base64.b64decode(encoded).split(b\":\", 1)\n user_email = user_email.decode(\"utf8\")\n if utils.limiter.should_rate_limit_user(user_email, client_ip):\n response = flask.Response(status=401)\n response.headers[\"WWW-Authenticate\"] = 'Basic realm=\"Authentication rate limit for this username exceeded\"'\n response.headers['Retry-After'] = '60'\n return response\n user = models.User.query.get(user_email)\n if user and nginx.check_credentials(user, password.decode('utf-8'), client_ip, \"web\"):\n response = flask.Response()\n response.headers[\"X-User\"] = models.IdnaEmail.process_bind_param(flask_login, user.email, \"\")\n 
utils.limiter.exempt_ip_from_ratelimits(client_ip)\n return response\n utils.limiter.rate_limit_user(user_email, client_ip) if user else utils.limiter.rate_limit_ip(client_ip)\n response = flask.Response(status=401)\n response.headers[\"WWW-Authenticate\"] = 'Basic realm=\"Login Required\"'\n return response\n", "path": "core/admin/mailu/internal/views/auth.py"}]} | 1,715 | 292 |
gh_patches_debug_6805 | rasdani/github-patches | git_diff | Pylons__pyramid-2672 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
reify docstring doesn't render properly
The testsetup section is omitted from the rendered docs at http://pyramid.readthedocs.io/en/latest/api/decorator.html#pyramid.decorator.reify
Unfortunately this section is essential for understanding the example.
--- END ISSUE ---
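For illustration, one way to make the example visible in the rendered API docs while keeping the doctest working is to duplicate the setup code in a directive Sphinx actually renders. This sketch assumes the usual Sphinx behaviour that `.. testsetup::` blocks are hidden in HTML output while `.. code-block:: python` blocks are shown; it mirrors the shape of the reify docstring listed in the files below and is an outline, not the exact fix.
```python
class reify(object):
    """Caches the decorated method's result in the instance dict.

    .. code-block:: python

        from pyramid.decorator import reify

        class Foo(object):
            @reify
            def jammy(self):
                print('jammy called')
                return 1

    .. testsetup::

        from pyramid.decorator import reify

        class Foo(object):
            @reify
            def jammy(self):
                print('jammy called')
                return 1
    """
```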
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyramid/decorator.py`
Content:
```
1 from functools import update_wrapper
2
3
4 class reify(object):
5 """ Use as a class method decorator. It operates almost exactly like the
6 Python ``@property`` decorator, but it puts the result of the method it
7 decorates into the instance dict after the first call, effectively
8 replacing the function it decorates with an instance variable. It is, in
9 Python parlance, a non-data descriptor. An example:
10
11 .. testsetup::
12
13 from pyramid.decorator import reify
14
15 class Foo(object):
16 @reify
17 def jammy(self):
18 print('jammy called')
19 return 1
20
21 And usage of Foo:
22
23 .. doctest::
24
25 >>> f = Foo()
26 >>> v = f.jammy
27 jammy called
28 >>> print(v)
29 1
30 >>> f.jammy
31 1
32 >>> # jammy func not called the second time; it replaced itself with 1
33 >>> # Note: reassignment is possible
34 >>> f.jammy = 2
35 >>> f.jammy
36 2
37 """
38 def __init__(self, wrapped):
39 self.wrapped = wrapped
40 update_wrapper(self, wrapped)
41
42 def __get__(self, inst, objtype=None):
43 if inst is None:
44 return self
45 val = self.wrapped(inst)
46 setattr(inst, self.wrapped.__name__, val)
47 return val
48
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pyramid/decorator.py b/pyramid/decorator.py
--- a/pyramid/decorator.py
+++ b/pyramid/decorator.py
@@ -8,6 +8,16 @@
replacing the function it decorates with an instance variable. It is, in
Python parlance, a non-data descriptor. An example:
+ .. code-block:: python
+
+ from pyramid.decorator import reify
+
+ class Foo(object):
+ @reify
+ def jammy(self):
+ print('jammy called')
+ return 1
+
.. testsetup::
from pyramid.decorator import reify
| {"golden_diff": "diff --git a/pyramid/decorator.py b/pyramid/decorator.py\n--- a/pyramid/decorator.py\n+++ b/pyramid/decorator.py\n@@ -8,6 +8,16 @@\n replacing the function it decorates with an instance variable. It is, in\n Python parlance, a non-data descriptor. An example:\n \n+ .. code-block:: python\n+\n+ from pyramid.decorator import reify\n+\n+ class Foo(object):\n+ @reify\n+ def jammy(self):\n+ print('jammy called')\n+ return 1\n+\n .. testsetup::\n \n from pyramid.decorator import reify\n", "issue": "reify docstring doesn't render properly\nThe testsetup section is omitted from the rendered docs at http://pyramid.readthedocs.io/en/latest/api/decorator.html#pyramid.decorator.reify\n\nUnfortunately this section is essential for understanding the example.\n\n", "before_files": [{"content": "from functools import update_wrapper\n\n\nclass reify(object):\n \"\"\" Use as a class method decorator. It operates almost exactly like the\n Python ``@property`` decorator, but it puts the result of the method it\n decorates into the instance dict after the first call, effectively\n replacing the function it decorates with an instance variable. It is, in\n Python parlance, a non-data descriptor. An example:\n\n .. testsetup::\n\n from pyramid.decorator import reify\n\n class Foo(object):\n @reify\n def jammy(self):\n print('jammy called')\n return 1\n\n And usage of Foo:\n\n .. doctest::\n\n >>> f = Foo()\n >>> v = f.jammy\n jammy called\n >>> print(v)\n 1\n >>> f.jammy\n 1\n >>> # jammy func not called the second time; it replaced itself with 1\n >>> # Note: reassignment is possible\n >>> f.jammy = 2\n >>> f.jammy\n 2\n \"\"\"\n def __init__(self, wrapped):\n self.wrapped = wrapped\n update_wrapper(self, wrapped)\n\n def __get__(self, inst, objtype=None):\n if inst is None:\n return self\n val = self.wrapped(inst)\n setattr(inst, self.wrapped.__name__, val)\n return val\n\n", "path": "pyramid/decorator.py"}], "after_files": [{"content": "from functools import update_wrapper\n\n\nclass reify(object):\n \"\"\" Use as a class method decorator. It operates almost exactly like the\n Python ``@property`` decorator, but it puts the result of the method it\n decorates into the instance dict after the first call, effectively\n replacing the function it decorates with an instance variable. It is, in\n Python parlance, a non-data descriptor. An example:\n\n .. code-block:: python\n\n from pyramid.decorator import reify\n\n class Foo(object):\n @reify\n def jammy(self):\n print('jammy called')\n return 1\n\n .. testsetup::\n\n from pyramid.decorator import reify\n\n class Foo(object):\n @reify\n def jammy(self):\n print('jammy called')\n return 1\n\n And usage of Foo:\n\n .. doctest::\n\n >>> f = Foo()\n >>> v = f.jammy\n jammy called\n >>> print(v)\n 1\n >>> f.jammy\n 1\n >>> # jammy func not called the second time; it replaced itself with 1\n >>> # Note: reassignment is possible\n >>> f.jammy = 2\n >>> f.jammy\n 2\n \"\"\"\n def __init__(self, wrapped):\n self.wrapped = wrapped\n update_wrapper(self, wrapped)\n\n def __get__(self, inst, objtype=None):\n if inst is None:\n return self\n val = self.wrapped(inst)\n setattr(inst, self.wrapped.__name__, val)\n return val\n\n", "path": "pyramid/decorator.py"}]} | 719 | 149 |
gh_patches_debug_17650 | rasdani/github-patches | git_diff | conan-io__conan-4834 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
conan export-pkg with --package-folder does not set the metadata info
When an export-pkg command is used with the ``--package-folder``, the *metadata.json* is not updated with the package ID and package revision.
Then, when the user tries to upload the package, it errors with:
```
ERROR: Binary package hello/1.0@user/testing:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 not found
```
Here is a test reproducing the issue:
```
def upload_export_pkg_test(self):
"""
Package metadata is created when doing an export-pkg and then uploading the package
"""
server1 = TestServer([("*/*@*/*", "*")], [("*/*@*/*", "*")], users={"lasote": "mypass"})
servers = OrderedDict()
servers["server1"] = server1
client = TestClient(servers=servers)
client.save({"release/kk.lib": ""})
client.run("user lasote -r server1 -p mypass")
client.run("new hello/1.0 --header")
client.run("export-pkg . user/testing -pf release")
client.run("upload hello/1.0@user/testing --all -r server1")
```
The original issue comes from #4768
--- END ISSUE ---
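As a rough sketch of where the metadata update needs to happen, the key point is that recording the package revision should not depend on which packaging branch ran. The names below are taken from the `export_pkg.py` listing further down; treat this as an outline under that assumption rather than the definitive fix.
```python
if package_folder:
    packager.export_pkg(conanfile, package_id, package_folder, dest_package_folder,
                        hook_manager, conan_file_path, ref)
else:
    packager.create_package(conanfile, package_id, source_folder, build_folder,
                            dest_package_folder, install_folder, hook_manager,
                            conan_file_path, ref, local=True)

# Run for both branches, so --package-folder exports also get their
# revision recorded and can later be uploaded.
with cache.package_layout(ref).update_metadata() as metadata:
    readed_manifest = FileTreeManifest.load(dest_package_folder)
    metadata.packages[package_id].revision = readed_manifest.summary_hash
    metadata.packages[package_id].recipe_revision = metadata.recipe.revision
```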
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conans/client/cmd/export_pkg.py`
Content:
```
1 import os
2
3 from conans.client import packager
4 from conans.client.graph.graph_manager import load_deps_info
5 from conans.errors import ConanException
6 from conans.model.manifest import FileTreeManifest
7 from conans.model.ref import PackageReference
8 from conans.util.files import rmdir
9
10
11 def export_pkg(cache, graph_manager, hook_manager, recorder, output,
12 ref, source_folder, build_folder, package_folder, install_folder,
13 graph_info, force):
14
15 conan_file_path = cache.conanfile(ref)
16 if not os.path.exists(conan_file_path):
17 raise ConanException("Package recipe '%s' does not exist" % str(ref))
18
19 # The graph has to be loaded with build_mode=[ref.name], so that node is not tried
20 # to be downloaded from remotes
21 deps_graph, _ = graph_manager.load_graph(ref, None, graph_info=graph_info, build_mode=[ref.name],
22 check_updates=False, update=False,
23 remote_name=None, recorder=recorder)
24 # this is a bit tricky, but works. The root (virtual), has only 1 neighbor,
25 # which is the exported pkg
26 nodes = deps_graph.root.neighbors()
27 conanfile = nodes[0].conanfile
28 from conans.client.conan_api import existing_info_files
29 if install_folder and existing_info_files(install_folder):
30 load_deps_info(install_folder, conanfile, required=True)
31 package_id = nodes[0].package_id
32 output.info("Packaging to %s" % package_id)
33 pref = PackageReference(ref, package_id)
34 dest_package_folder = cache.package(pref, short_paths=conanfile.short_paths)
35
36 if os.path.exists(dest_package_folder):
37 if force:
38 rmdir(dest_package_folder)
39 else:
40 raise ConanException("Package already exists. Please use --force, -f to "
41 "overwrite it")
42
43 recipe_hash = cache.package_layout(ref).recipe_manifest().summary_hash
44 conanfile.info.recipe_hash = recipe_hash
45 conanfile.develop = True
46 if package_folder:
47 packager.export_pkg(conanfile, package_id, package_folder, dest_package_folder,
48 hook_manager, conan_file_path, ref)
49 else:
50 packager.create_package(conanfile, package_id, source_folder, build_folder,
51 dest_package_folder, install_folder, hook_manager, conan_file_path,
52 ref, local=True)
53 with cache.package_layout(ref).update_metadata() as metadata:
54 readed_manifest = FileTreeManifest.load(dest_package_folder)
55 metadata.packages[package_id].revision = readed_manifest.summary_hash
56 metadata.packages[package_id].recipe_revision = metadata.recipe.revision
57
58 recorder.package_exported(pref)
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conans/client/cmd/export_pkg.py b/conans/client/cmd/export_pkg.py
--- a/conans/client/cmd/export_pkg.py
+++ b/conans/client/cmd/export_pkg.py
@@ -50,9 +50,9 @@
packager.create_package(conanfile, package_id, source_folder, build_folder,
dest_package_folder, install_folder, hook_manager, conan_file_path,
ref, local=True)
- with cache.package_layout(ref).update_metadata() as metadata:
- readed_manifest = FileTreeManifest.load(dest_package_folder)
- metadata.packages[package_id].revision = readed_manifest.summary_hash
- metadata.packages[package_id].recipe_revision = metadata.recipe.revision
+ with cache.package_layout(ref).update_metadata() as metadata:
+ readed_manifest = FileTreeManifest.load(dest_package_folder)
+ metadata.packages[package_id].revision = readed_manifest.summary_hash
+ metadata.packages[package_id].recipe_revision = metadata.recipe.revision
recorder.package_exported(pref)
| {"golden_diff": "diff --git a/conans/client/cmd/export_pkg.py b/conans/client/cmd/export_pkg.py\n--- a/conans/client/cmd/export_pkg.py\n+++ b/conans/client/cmd/export_pkg.py\n@@ -50,9 +50,9 @@\n packager.create_package(conanfile, package_id, source_folder, build_folder,\n dest_package_folder, install_folder, hook_manager, conan_file_path,\n ref, local=True)\n- with cache.package_layout(ref).update_metadata() as metadata:\n- readed_manifest = FileTreeManifest.load(dest_package_folder)\n- metadata.packages[package_id].revision = readed_manifest.summary_hash\n- metadata.packages[package_id].recipe_revision = metadata.recipe.revision\n+ with cache.package_layout(ref).update_metadata() as metadata:\n+ readed_manifest = FileTreeManifest.load(dest_package_folder)\n+ metadata.packages[package_id].revision = readed_manifest.summary_hash\n+ metadata.packages[package_id].recipe_revision = metadata.recipe.revision\n \n recorder.package_exported(pref)\n", "issue": "conan export-pkg with --package-folder does not set the metadata info\nWhen an export-pkg command is used with the ``--package-folder``, the *metadata.json* is not updated with the package ID and package revision.\r\n\r\nThen, when the user tries to upload the package ir errors with:\r\n```\r\nERROR: Binary package hello/1.0@user/testing:5ab84d6acfe1f23c4fae0ab88f26e3a396351ac9 not found\r\n```\r\n\r\nHere is a test reproducing the issue:\r\n\r\n```\r\n def upload_export_pkg_test(self):\r\n \"\"\"\r\n Package metadata is created when doing an export-pkg and then uploading the package\r\n \"\"\"\r\n server1 = TestServer([(\"*/*@*/*\", \"*\")], [(\"*/*@*/*\", \"*\")], users={\"lasote\": \"mypass\"})\r\n servers = OrderedDict()\r\n servers[\"server1\"] = server1\r\n client = TestClient(servers=servers)\r\n client.save({\"release/kk.lib\": \"\"})\r\n client.run(\"user lasote -r server1 -p mypass\")\r\n client.run(\"new hello/1.0 --header\")\r\n client.run(\"export-pkg . user/testing -pf release\")\r\n client.run(\"upload hello/1.0@user/testing --all -r server1\")\r\n```\r\n\r\nThe original issue comes from #4768\n", "before_files": [{"content": "import os\n\nfrom conans.client import packager\nfrom conans.client.graph.graph_manager import load_deps_info\nfrom conans.errors import ConanException\nfrom conans.model.manifest import FileTreeManifest\nfrom conans.model.ref import PackageReference\nfrom conans.util.files import rmdir\n\n\ndef export_pkg(cache, graph_manager, hook_manager, recorder, output,\n ref, source_folder, build_folder, package_folder, install_folder,\n graph_info, force):\n\n conan_file_path = cache.conanfile(ref)\n if not os.path.exists(conan_file_path):\n raise ConanException(\"Package recipe '%s' does not exist\" % str(ref))\n\n # The graph has to be loaded with build_mode=[ref.name], so that node is not tried\n # to be downloaded from remotes\n deps_graph, _ = graph_manager.load_graph(ref, None, graph_info=graph_info, build_mode=[ref.name],\n check_updates=False, update=False,\n remote_name=None, recorder=recorder)\n # this is a bit tricky, but works. 
The root (virtual), has only 1 neighbor,\n # which is the exported pkg\n nodes = deps_graph.root.neighbors()\n conanfile = nodes[0].conanfile\n from conans.client.conan_api import existing_info_files\n if install_folder and existing_info_files(install_folder):\n load_deps_info(install_folder, conanfile, required=True)\n package_id = nodes[0].package_id\n output.info(\"Packaging to %s\" % package_id)\n pref = PackageReference(ref, package_id)\n dest_package_folder = cache.package(pref, short_paths=conanfile.short_paths)\n\n if os.path.exists(dest_package_folder):\n if force:\n rmdir(dest_package_folder)\n else:\n raise ConanException(\"Package already exists. Please use --force, -f to \"\n \"overwrite it\")\n\n recipe_hash = cache.package_layout(ref).recipe_manifest().summary_hash\n conanfile.info.recipe_hash = recipe_hash\n conanfile.develop = True\n if package_folder:\n packager.export_pkg(conanfile, package_id, package_folder, dest_package_folder,\n hook_manager, conan_file_path, ref)\n else:\n packager.create_package(conanfile, package_id, source_folder, build_folder,\n dest_package_folder, install_folder, hook_manager, conan_file_path,\n ref, local=True)\n with cache.package_layout(ref).update_metadata() as metadata:\n readed_manifest = FileTreeManifest.load(dest_package_folder)\n metadata.packages[package_id].revision = readed_manifest.summary_hash\n metadata.packages[package_id].recipe_revision = metadata.recipe.revision\n\n recorder.package_exported(pref)\n", "path": "conans/client/cmd/export_pkg.py"}], "after_files": [{"content": "import os\n\nfrom conans.client import packager\nfrom conans.client.graph.graph_manager import load_deps_info\nfrom conans.errors import ConanException\nfrom conans.model.manifest import FileTreeManifest\nfrom conans.model.ref import PackageReference\nfrom conans.util.files import rmdir\n\n\ndef export_pkg(cache, graph_manager, hook_manager, recorder, output,\n ref, source_folder, build_folder, package_folder, install_folder,\n graph_info, force):\n\n conan_file_path = cache.conanfile(ref)\n if not os.path.exists(conan_file_path):\n raise ConanException(\"Package recipe '%s' does not exist\" % str(ref))\n\n # The graph has to be loaded with build_mode=[ref.name], so that node is not tried\n # to be downloaded from remotes\n deps_graph, _ = graph_manager.load_graph(ref, None, graph_info=graph_info, build_mode=[ref.name],\n check_updates=False, update=False,\n remote_name=None, recorder=recorder)\n # this is a bit tricky, but works. The root (virtual), has only 1 neighbor,\n # which is the exported pkg\n nodes = deps_graph.root.neighbors()\n conanfile = nodes[0].conanfile\n from conans.client.conan_api import existing_info_files\n if install_folder and existing_info_files(install_folder):\n load_deps_info(install_folder, conanfile, required=True)\n package_id = nodes[0].package_id\n output.info(\"Packaging to %s\" % package_id)\n pref = PackageReference(ref, package_id)\n dest_package_folder = cache.package(pref, short_paths=conanfile.short_paths)\n\n if os.path.exists(dest_package_folder):\n if force:\n rmdir(dest_package_folder)\n else:\n raise ConanException(\"Package already exists. 
Please use --force, -f to \"\n \"overwrite it\")\n\n recipe_hash = cache.package_layout(ref).recipe_manifest().summary_hash\n conanfile.info.recipe_hash = recipe_hash\n conanfile.develop = True\n if package_folder:\n packager.export_pkg(conanfile, package_id, package_folder, dest_package_folder,\n hook_manager, conan_file_path, ref)\n else:\n packager.create_package(conanfile, package_id, source_folder, build_folder,\n dest_package_folder, install_folder, hook_manager, conan_file_path,\n ref, local=True)\n with cache.package_layout(ref).update_metadata() as metadata:\n readed_manifest = FileTreeManifest.load(dest_package_folder)\n metadata.packages[package_id].revision = readed_manifest.summary_hash\n metadata.packages[package_id].recipe_revision = metadata.recipe.revision\n\n recorder.package_exported(pref)\n", "path": "conans/client/cmd/export_pkg.py"}]} | 1,259 | 223 |
gh_patches_debug_17967 | rasdani/github-patches | git_diff | WeblateOrg__weblate-11641 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No way to set default notification setting for watched projects
### Describe the issue
Thanks for this great project. On the notifications side there is some room for improvement: when you come from Transifex, you get a lot of useful notifications which encourage users to finish the translation of a language when things change. Would it be possible to set some defaults for the "New translation component was created" and "New string is available for translation" settings? IMO an instant notification should be sent out for these settings. Probably some more would be nice too.
### I already tried
- [X] I've read and searched [the documentation](https://docs.weblate.org/).
- [X] I've searched for similar issues in this repository.
### Steps to reproduce the behavior
1. Go to /accounts/profile/#notifications__1
2. Check the component and translation notifications
### Expected behavior
Some default settings to adjust, either in the GUI or by configuration variable.
### Screenshots
_No response_
### Exception traceback
_No response_
### How do you run Weblate?
Docker container
### Weblate versions
_No response_
### Weblate deploy checks
_No response_
### Additional context
Kind of related to #5155.
--- END ISSUE ---
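For context, a minimal sketch of what such a default could look like: new-account defaults live in the `DEFAULT_NOTIFICATIONS` list in `weblate/accounts/data.py` (shown below), so adding an entry there subscribes every new user for their watched projects. The `FREQ_WEEKLY` frequency is an assumed sensible default; a component-creation default would additionally need the matching notification class name from `weblate.accounts.notifications`, which is not named here.
```python
from weblate.accounts.notifications import FREQ_WEEKLY, SCOPE_WATCHED

DEFAULT_NOTIFICATIONS = [
    # ... existing entries ...
    # Weekly digest of new strings in watched projects:
    (SCOPE_WATCHED, FREQ_WEEKLY, "NewStringNotificaton"),
]
```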
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `weblate/accounts/data.py`
Content:
```
1 # Copyright © Michal Čihař <[email protected]>
2 #
3 # SPDX-License-Identifier: GPL-3.0-or-later
4
5 from weblate.accounts.notifications import (
6 FREQ_INSTANT,
7 SCOPE_ADMIN,
8 SCOPE_ALL,
9 SCOPE_WATCHED,
10 )
11
12 DEFAULT_NOTIFICATIONS = [
13 (SCOPE_ALL, FREQ_INSTANT, "MentionCommentNotificaton"),
14 (SCOPE_WATCHED, FREQ_INSTANT, "LastAuthorCommentNotificaton"),
15 (SCOPE_WATCHED, FREQ_INSTANT, "MentionCommentNotificaton"),
16 (SCOPE_WATCHED, FREQ_INSTANT, "NewAnnouncementNotificaton"),
17 (SCOPE_ADMIN, FREQ_INSTANT, "MergeFailureNotification"),
18 (SCOPE_ADMIN, FREQ_INSTANT, "ParseErrorNotification"),
19 (SCOPE_ADMIN, FREQ_INSTANT, "NewTranslationNotificaton"),
20 (SCOPE_ADMIN, FREQ_INSTANT, "NewAlertNotificaton"),
21 (SCOPE_ADMIN, FREQ_INSTANT, "NewAnnouncementNotificaton"),
22 ]
23
24
25 def create_default_notifications(user) -> None:
26 for scope, frequency, notification in DEFAULT_NOTIFICATIONS:
27 user.subscription_set.get_or_create(
28 scope=scope, notification=notification, defaults={"frequency": frequency}
29 )
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/weblate/accounts/data.py b/weblate/accounts/data.py
--- a/weblate/accounts/data.py
+++ b/weblate/accounts/data.py
@@ -4,6 +4,7 @@
from weblate.accounts.notifications import (
FREQ_INSTANT,
+ FREQ_WEEKLY,
SCOPE_ADMIN,
SCOPE_ALL,
SCOPE_WATCHED,
@@ -14,6 +15,7 @@
(SCOPE_WATCHED, FREQ_INSTANT, "LastAuthorCommentNotificaton"),
(SCOPE_WATCHED, FREQ_INSTANT, "MentionCommentNotificaton"),
(SCOPE_WATCHED, FREQ_INSTANT, "NewAnnouncementNotificaton"),
+ (SCOPE_WATCHED, FREQ_WEEKLY, "NewStringNotificaton"),
(SCOPE_ADMIN, FREQ_INSTANT, "MergeFailureNotification"),
(SCOPE_ADMIN, FREQ_INSTANT, "ParseErrorNotification"),
(SCOPE_ADMIN, FREQ_INSTANT, "NewTranslationNotificaton"),
| {"golden_diff": "diff --git a/weblate/accounts/data.py b/weblate/accounts/data.py\n--- a/weblate/accounts/data.py\n+++ b/weblate/accounts/data.py\n@@ -4,6 +4,7 @@\n \n from weblate.accounts.notifications import (\n FREQ_INSTANT,\n+ FREQ_WEEKLY,\n SCOPE_ADMIN,\n SCOPE_ALL,\n SCOPE_WATCHED,\n@@ -14,6 +15,7 @@\n (SCOPE_WATCHED, FREQ_INSTANT, \"LastAuthorCommentNotificaton\"),\n (SCOPE_WATCHED, FREQ_INSTANT, \"MentionCommentNotificaton\"),\n (SCOPE_WATCHED, FREQ_INSTANT, \"NewAnnouncementNotificaton\"),\n+ (SCOPE_WATCHED, FREQ_WEEKLY, \"NewStringNotificaton\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"MergeFailureNotification\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"ParseErrorNotification\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"NewTranslationNotificaton\"),\n", "issue": "No way to set default notification setting for watched projects\n### Describe the issue\n\nThanks for this great project. On the notifications part there is some room for improvements as when you come from transifex, you get a lot of useful notifications which encourage user to finish the translation of a language when stuff changes. Would it be possible to set some defaults for the \"New translation component was created\" and \"New string is available for translation\" settings. As IMO for these settings an instant notification should be sent out. Probably some more would be nice too.\n\n### I already tried\n\n- [X] I've read and searched [the documentation](https://docs.weblate.org/).\n- [X] I've searched for similar issues in this repository.\n\n### Steps to reproduce the behavior\n\n1. Got to /accounts/profile/#notifications__1\r\n2. Check the component and translation notifications\n\n### Expected behavior\n\nSome default settings to adjust, either in the GUI or by configuration variable.\n\n### Screenshots\n\n_No response_\n\n### Exception traceback\n\n_No response_\n\n### How do you run Weblate?\n\nDocker container\n\n### Weblate versions\n\n_No response_\n\n### Weblate deploy checks\n\n_No response_\n\n### Additional context\n\nKind of related to #5155.\n", "before_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom weblate.accounts.notifications import (\n FREQ_INSTANT,\n SCOPE_ADMIN,\n SCOPE_ALL,\n SCOPE_WATCHED,\n)\n\nDEFAULT_NOTIFICATIONS = [\n (SCOPE_ALL, FREQ_INSTANT, \"MentionCommentNotificaton\"),\n (SCOPE_WATCHED, FREQ_INSTANT, \"LastAuthorCommentNotificaton\"),\n (SCOPE_WATCHED, FREQ_INSTANT, \"MentionCommentNotificaton\"),\n (SCOPE_WATCHED, FREQ_INSTANT, \"NewAnnouncementNotificaton\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"MergeFailureNotification\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"ParseErrorNotification\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"NewTranslationNotificaton\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"NewAlertNotificaton\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"NewAnnouncementNotificaton\"),\n]\n\n\ndef create_default_notifications(user) -> None:\n for scope, frequency, notification in DEFAULT_NOTIFICATIONS:\n user.subscription_set.get_or_create(\n scope=scope, notification=notification, defaults={\"frequency\": frequency}\n )\n", "path": "weblate/accounts/data.py"}], "after_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom weblate.accounts.notifications import (\n FREQ_INSTANT,\n FREQ_WEEKLY,\n SCOPE_ADMIN,\n SCOPE_ALL,\n SCOPE_WATCHED,\n)\n\nDEFAULT_NOTIFICATIONS = [\n (SCOPE_ALL, FREQ_INSTANT, \"MentionCommentNotificaton\"),\n (SCOPE_WATCHED, 
FREQ_INSTANT, \"LastAuthorCommentNotificaton\"),\n (SCOPE_WATCHED, FREQ_INSTANT, \"MentionCommentNotificaton\"),\n (SCOPE_WATCHED, FREQ_INSTANT, \"NewAnnouncementNotificaton\"),\n (SCOPE_WATCHED, FREQ_WEEKLY, \"NewStringNotificaton\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"MergeFailureNotification\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"ParseErrorNotification\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"NewTranslationNotificaton\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"NewAlertNotificaton\"),\n (SCOPE_ADMIN, FREQ_INSTANT, \"NewAnnouncementNotificaton\"),\n]\n\n\ndef create_default_notifications(user) -> None:\n for scope, frequency, notification in DEFAULT_NOTIFICATIONS:\n user.subscription_set.get_or_create(\n scope=scope, notification=notification, defaults={\"frequency\": frequency}\n )\n", "path": "weblate/accounts/data.py"}]} | 858 | 226 |
gh_patches_debug_28982 | rasdani/github-patches | git_diff | bokeh__bokeh-5327 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Examples with deprecation warnings
As of 0.12.3, the following examples have deprecation warnings
- plotting/file/hover.py
- plotting/file/tap.py
--- END ISSUE ---
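The warnings appear to come from passing literal data arrays alongside a `ColumnDataSource`; the sketch below shows the column-name style that avoids them, matching the direction of the hover.py change further down. It is an illustration rather than the only possible fix.
```python
# Refer to source columns by name instead of repeating the raw arrays:
p.circle(x='x', y='y', radius='radius', source=source,
         fill_color='colors', fill_alpha=0.6, line_color=None)
```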
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/plotting/file/tap.py`
Content:
```
1 import itertools
2 import numpy as np
3
4 from bokeh.plotting import ColumnDataSource, figure, show, output_file
5 from bokeh.models import TapTool
6
7 xx, yy = np.meshgrid(range(0,101,4), range(0,101,4))
8 x = xx.flatten()
9 y = yy.flatten()
10 N = len(x)
11 inds = [str(i) for i in np.arange(N)]
12 radii = np.random.random(size=N)*0.4 + 1.7
13 colors = [
14 "#%02x%02x%02x" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)
15 ]
16
17 source = ColumnDataSource(data=dict(
18 x=x,
19 y=y,
20 radius=radii,
21 colors=colors,
22 foo=list(itertools.permutations("abcdef"))[:N],
23 bar=np.random.normal(size=N),
24 ))
25
26 TOOLS="crosshair,pan,wheel_zoom,box_zoom,reset,tap,save"
27
28 p = figure(title="Tappy Scatter", tools=TOOLS)
29
30 cr = p.circle(x, y, radius=radii, source=source,
31 fill_color=colors, fill_alpha=0.6, line_color=None)
32
33 tr = p.text(x, y, text=inds, alpha=0.5, text_font_size="5pt",
34 text_baseline="middle", text_align="center")
35
36 # in the browser console, you will see messages when circles are clicked
37 tool = p.select_one(TapTool).renderers = [cr]
38
39 output_file("tap.html", title="tap.py example")
40
41 show(p) # open a browser
42
```
Path: `examples/plotting/file/hover.py`
Content:
```
1 import itertools
2
3 import numpy as np
4
5 from bokeh.plotting import ColumnDataSource, figure, show, output_file
6 from bokeh.models import HoverTool
7
8 TOOLS="crosshair,pan,wheel_zoom,box_zoom,reset,hover,save"
9
10 xx, yy = np.meshgrid(range(0,101,4), range(0,101,4))
11 x = xx.flatten()
12 y = yy.flatten()
13 N = len(x)
14 inds = [str(i) for i in np.arange(N)]
15 radii = np.random.random(size=N)*0.4 + 1.7
16 colors = [
17 "#%02x%02x%02x" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)
18 ]
19
20 source = ColumnDataSource(data=dict(
21 x=x,
22 y=y,
23 radius=radii,
24 colors=colors,
25 foo=list(itertools.permutations("abcdef"))[:N],
26 bar=np.random.normal(size=N),
27 ))
28
29 p = figure(title="Hoverful Scatter", tools=TOOLS)
30
31 p.circle(x, y, radius=radii, source=source,
32 fill_color=colors, fill_alpha=0.6, line_color=None)
33
34 p.text(x, y, text=inds, alpha=0.5, text_font_size="5pt",
35 text_baseline="middle", text_align="center")
36
37 hover = p.select_one(HoverTool).tooltips = [
38 ("index", "$index"),
39 ("(x,y)", "($x, $y)"),
40 ("radius", "@radius"),
41 ("fill color", "$color[hex, swatch]:fill_color"),
42 ("foo", "@foo"),
43 ("bar", "@bar"),
44 ]
45
46 output_file("hover.html", title="hover.py example")
47
48 show(p) # open a browser
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/plotting/file/hover.py b/examples/plotting/file/hover.py
--- a/examples/plotting/file/hover.py
+++ b/examples/plotting/file/hover.py
@@ -28,8 +28,8 @@
p = figure(title="Hoverful Scatter", tools=TOOLS)
-p.circle(x, y, radius=radii, source=source,
- fill_color=colors, fill_alpha=0.6, line_color=None)
+p.circle(x='x', y='y', radius='radius', source=source,
+ fill_color='colors', fill_alpha=0.6, line_color=None)
p.text(x, y, text=inds, alpha=0.5, text_font_size="5pt",
text_baseline="middle", text_align="center")
diff --git a/examples/plotting/file/tap.py b/examples/plotting/file/tap.py
--- a/examples/plotting/file/tap.py
+++ b/examples/plotting/file/tap.py
@@ -1,7 +1,6 @@
-import itertools
import numpy as np
-from bokeh.plotting import ColumnDataSource, figure, show, output_file
+from bokeh.plotting import figure, show, output_file
from bokeh.models import TapTool
xx, yy = np.meshgrid(range(0,101,4), range(0,101,4))
@@ -14,20 +13,11 @@
"#%02x%02x%02x" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)
]
-source = ColumnDataSource(data=dict(
- x=x,
- y=y,
- radius=radii,
- colors=colors,
- foo=list(itertools.permutations("abcdef"))[:N],
- bar=np.random.normal(size=N),
-))
-
TOOLS="crosshair,pan,wheel_zoom,box_zoom,reset,tap,save"
p = figure(title="Tappy Scatter", tools=TOOLS)
-cr = p.circle(x, y, radius=radii, source=source,
+cr = p.circle(x, y, radius=radii,
fill_color=colors, fill_alpha=0.6, line_color=None)
tr = p.text(x, y, text=inds, alpha=0.5, text_font_size="5pt",
| {"golden_diff": "diff --git a/examples/plotting/file/hover.py b/examples/plotting/file/hover.py\n--- a/examples/plotting/file/hover.py\n+++ b/examples/plotting/file/hover.py\n@@ -28,8 +28,8 @@\n \n p = figure(title=\"Hoverful Scatter\", tools=TOOLS)\n \n-p.circle(x, y, radius=radii, source=source,\n- fill_color=colors, fill_alpha=0.6, line_color=None)\n+p.circle(x='x', y='y', radius='radius', source=source,\n+ fill_color='colors', fill_alpha=0.6, line_color=None)\n \n p.text(x, y, text=inds, alpha=0.5, text_font_size=\"5pt\",\n text_baseline=\"middle\", text_align=\"center\")\ndiff --git a/examples/plotting/file/tap.py b/examples/plotting/file/tap.py\n--- a/examples/plotting/file/tap.py\n+++ b/examples/plotting/file/tap.py\n@@ -1,7 +1,6 @@\n-import itertools\n import numpy as np\n \n-from bokeh.plotting import ColumnDataSource, figure, show, output_file\n+from bokeh.plotting import figure, show, output_file\n from bokeh.models import TapTool\n \n xx, yy = np.meshgrid(range(0,101,4), range(0,101,4))\n@@ -14,20 +13,11 @@\n \"#%02x%02x%02x\" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)\n ]\n \n-source = ColumnDataSource(data=dict(\n- x=x,\n- y=y,\n- radius=radii,\n- colors=colors,\n- foo=list(itertools.permutations(\"abcdef\"))[:N],\n- bar=np.random.normal(size=N),\n-))\n-\n TOOLS=\"crosshair,pan,wheel_zoom,box_zoom,reset,tap,save\"\n \n p = figure(title=\"Tappy Scatter\", tools=TOOLS)\n \n-cr = p.circle(x, y, radius=radii, source=source,\n+cr = p.circle(x, y, radius=radii,\n fill_color=colors, fill_alpha=0.6, line_color=None)\n \n tr = p.text(x, y, text=inds, alpha=0.5, text_font_size=\"5pt\",\n", "issue": "Examples with deprecation warnings\nAs of 0.12.3, the following examples have deprecation warnings\n- plotting/file/hover.py\n- plotting/file/tap.py\n\n", "before_files": [{"content": "import itertools\nimport numpy as np\n\nfrom bokeh.plotting import ColumnDataSource, figure, show, output_file\nfrom bokeh.models import TapTool\n\nxx, yy = np.meshgrid(range(0,101,4), range(0,101,4))\nx = xx.flatten()\ny = yy.flatten()\nN = len(x)\ninds = [str(i) for i in np.arange(N)]\nradii = np.random.random(size=N)*0.4 + 1.7\ncolors = [\n \"#%02x%02x%02x\" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)\n]\n\nsource = ColumnDataSource(data=dict(\n x=x,\n y=y,\n radius=radii,\n colors=colors,\n foo=list(itertools.permutations(\"abcdef\"))[:N],\n bar=np.random.normal(size=N),\n))\n\nTOOLS=\"crosshair,pan,wheel_zoom,box_zoom,reset,tap,save\"\n\np = figure(title=\"Tappy Scatter\", tools=TOOLS)\n\ncr = p.circle(x, y, radius=radii, source=source,\n fill_color=colors, fill_alpha=0.6, line_color=None)\n\ntr = p.text(x, y, text=inds, alpha=0.5, text_font_size=\"5pt\",\n text_baseline=\"middle\", text_align=\"center\")\n\n# in the browser console, you will see messages when circles are clicked\ntool = p.select_one(TapTool).renderers = [cr]\n\noutput_file(\"tap.html\", title=\"tap.py example\")\n\nshow(p) # open a browser\n", "path": "examples/plotting/file/tap.py"}, {"content": "import itertools\n\nimport numpy as np\n\nfrom bokeh.plotting import ColumnDataSource, figure, show, output_file\nfrom bokeh.models import HoverTool\n\nTOOLS=\"crosshair,pan,wheel_zoom,box_zoom,reset,hover,save\"\n\nxx, yy = np.meshgrid(range(0,101,4), range(0,101,4))\nx = xx.flatten()\ny = yy.flatten()\nN = len(x)\ninds = [str(i) for i in np.arange(N)]\nradii = np.random.random(size=N)*0.4 + 1.7\ncolors = [\n \"#%02x%02x%02x\" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)\n]\n\nsource = 
ColumnDataSource(data=dict(\n x=x,\n y=y,\n radius=radii,\n colors=colors,\n foo=list(itertools.permutations(\"abcdef\"))[:N],\n bar=np.random.normal(size=N),\n))\n\np = figure(title=\"Hoverful Scatter\", tools=TOOLS)\n\np.circle(x, y, radius=radii, source=source,\n fill_color=colors, fill_alpha=0.6, line_color=None)\n\np.text(x, y, text=inds, alpha=0.5, text_font_size=\"5pt\",\n text_baseline=\"middle\", text_align=\"center\")\n\nhover = p.select_one(HoverTool).tooltips = [\n (\"index\", \"$index\"),\n (\"(x,y)\", \"($x, $y)\"),\n (\"radius\", \"@radius\"),\n (\"fill color\", \"$color[hex, swatch]:fill_color\"),\n (\"foo\", \"@foo\"),\n (\"bar\", \"@bar\"),\n]\n\noutput_file(\"hover.html\", title=\"hover.py example\")\n\nshow(p) # open a browser\n", "path": "examples/plotting/file/hover.py"}], "after_files": [{"content": "import numpy as np\n\nfrom bokeh.plotting import figure, show, output_file\nfrom bokeh.models import TapTool\n\nxx, yy = np.meshgrid(range(0,101,4), range(0,101,4))\nx = xx.flatten()\ny = yy.flatten()\nN = len(x)\ninds = [str(i) for i in np.arange(N)]\nradii = np.random.random(size=N)*0.4 + 1.7\ncolors = [\n \"#%02x%02x%02x\" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)\n]\n\nTOOLS=\"crosshair,pan,wheel_zoom,box_zoom,reset,tap,save\"\n\np = figure(title=\"Tappy Scatter\", tools=TOOLS)\n\ncr = p.circle(x, y, radius=radii,\n fill_color=colors, fill_alpha=0.6, line_color=None)\n\ntr = p.text(x, y, text=inds, alpha=0.5, text_font_size=\"5pt\",\n text_baseline=\"middle\", text_align=\"center\")\n\n# in the browser console, you will see messages when circles are clicked\ntool = p.select_one(TapTool).renderers = [cr]\n\noutput_file(\"tap.html\", title=\"tap.py example\")\n\nshow(p) # open a browser\n", "path": "examples/plotting/file/tap.py"}, {"content": "import itertools\n\nimport numpy as np\n\nfrom bokeh.plotting import ColumnDataSource, figure, show, output_file\nfrom bokeh.models import HoverTool\n\nTOOLS=\"crosshair,pan,wheel_zoom,box_zoom,reset,hover,save\"\n\nxx, yy = np.meshgrid(range(0,101,4), range(0,101,4))\nx = xx.flatten()\ny = yy.flatten()\nN = len(x)\ninds = [str(i) for i in np.arange(N)]\nradii = np.random.random(size=N)*0.4 + 1.7\ncolors = [\n \"#%02x%02x%02x\" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)\n]\n\nsource = ColumnDataSource(data=dict(\n x=x,\n y=y,\n radius=radii,\n colors=colors,\n foo=list(itertools.permutations(\"abcdef\"))[:N],\n bar=np.random.normal(size=N),\n))\n\np = figure(title=\"Hoverful Scatter\", tools=TOOLS)\n\np.circle(x='x', y='y', radius='radius', source=source,\n fill_color='colors', fill_alpha=0.6, line_color=None)\n\np.text(x, y, text=inds, alpha=0.5, text_font_size=\"5pt\",\n text_baseline=\"middle\", text_align=\"center\")\n\nhover = p.select_one(HoverTool).tooltips = [\n (\"index\", \"$index\"),\n (\"(x,y)\", \"($x, $y)\"),\n (\"radius\", \"@radius\"),\n (\"fill color\", \"$color[hex, swatch]:fill_color\"),\n (\"foo\", \"@foo\"),\n (\"bar\", \"@bar\"),\n]\n\noutput_file(\"hover.html\", title=\"hover.py example\")\n\nshow(p) # open a browser\n", "path": "examples/plotting/file/hover.py"}]} | 1,261 | 531 |
gh_patches_debug_96 | rasdani/github-patches | git_diff | pymodbus-dev__pymodbus-1422 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
py.typed missing in pip install
### Versions
- Python: 3.11
- OS: macOS
- Pymodbus: 3.2.0 (pip install)
- Modbus Hardware (if used):
### Description
running mypy on my project complains about missing library stubs:
```
artisanlib/modbusport.py:68: error: Skipping analyzing "pymodbus.constants": module is installed, but missing library stubs or py.typed marker [import]
artisanlib/modbusport.py:69: error: Skipping analyzing "pymodbus.payload": module is installed, but missing library stubs or py.typed marker [import]
artisanlib/modbusport.py:241: error: Skipping analyzing "pymodbus.client": module is installed, but missing library stubs or py.typed marker [import]
artisanlib/modbusport.py:385: error: Skipping analyzing "pymodbus.pdu": module is installed, but missing library stubs or py.typed marker [import]
```
despite all your nice work on adding type annotations. The file py.typed did not get installed along with the package via
```
# sudo -H python3 -m pip install pymodbus --upgrade
```
```
# cd /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pymodbus
# ls ./py.typed
ls: ./py.typed: No such file or directory
```
I added it as follows and the mypy errors went away.
```
# cd /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pymodbus
# sudo touch py.typed
```
--- END ISSUE ---
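A minimal sketch of the packaging-side remedy, assuming the project keeps the setuptools-based `setup.py` shown below: declare the PEP 561 marker as package data so it ships in wheels and sdists. The `py.typed` file itself must also exist in the `pymodbus` package directory.
```python
setup(
    install_requires=install_req,
    extras_require=dependencies,
    package_data={"pymodbus": ["py.typed"]},
)
```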
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python3
2 """Installs pymodbus using setuptools."""
3
4
5 # --------------------------------------------------------------------------- #
6 # initialization
7 # --------------------------------------------------------------------------- #
8 from setuptools import setup
9
10
11 dependencies = {}
12 with open("requirements.txt") as reqs:
13 option = None
14 for line in reqs.read().split("\n"):
15 if line == "":
16 option = None
17 elif line.startswith("# install:"):
18 option = line.split(":")[1]
19 dependencies[option] = []
20 elif not line.startswith("#") and option:
21 dependencies[option].append(line)
22
23 install_req = dependencies["required"]
24 del dependencies["required"]
25
26
27 # --------------------------------------------------------------------------- #
28 # configuration
29 # --------------------------------------------------------------------------- #
30 setup(
31 install_requires=install_req,
32 extras_require=dependencies,
33 )
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -30,4 +30,5 @@
setup(
install_requires=install_req,
extras_require=dependencies,
+ package_data={"pymodbus": ["py.typed"]},
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -30,4 +30,5 @@\n setup(\n install_requires=install_req,\n extras_require=dependencies,\n+ package_data={\"pymodbus\": [\"py.typed\"]},\n )\n", "issue": "py.typed missing in pip install\n\r\n### Versions\r\n\r\n- Python: 3.11\r\n- OS: macOS\r\n- Pymodbus: 3.2.0 (pip install)\r\n- Modbus Hardware (if used):\r\n\r\n### Description\r\n\r\nrunning mypy on my project complains about missing \r\n\r\n```\r\nartisanlib/modbusport.py:68: error: Skipping analyzing \"pymodbus.constants\": module is installed, but missing library stubs or py.typed marker [import]\r\nartisanlib/modbusport.py:69: error: Skipping analyzing \"pymodbus.payload\": module is installed, but missing library stubs or py.typed marker [import]\r\nartisanlib/modbusport.py:241: error: Skipping analyzing \"pymodbus.client\": module is installed, but missing library stubs or py.typed marker [import]\r\nartisanlib/modbusport.py:385: error: Skipping analyzing \"pymodbus.pdu\": module is installed, but missing library stubs or py.typed marker [import]\r\n```\r\n\r\ndespite all your nice work on adding type annotations. The file py.typed id not get installed along via\r\n\r\n```\r\n# sudo -H python3 -m pip install pymodbus --upgrade\r\n```\r\n\r\n```\r\n# cd /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pymodbus\r\n# ls ./py.typed\r\nls: ./py.typed: No such file or directory\r\n```\r\n\r\nI added it as follows and the mypy errors went away.\r\n\r\n```\r\n# cd /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pymodbus\r\n# sudo touch py.typed\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python3\n\"\"\"Installs pymodbus using setuptools.\"\"\"\n\n\n# --------------------------------------------------------------------------- #\n# initialization\n# --------------------------------------------------------------------------- #\nfrom setuptools import setup\n\n\ndependencies = {}\nwith open(\"requirements.txt\") as reqs:\n option = None\n for line in reqs.read().split(\"\\n\"):\n if line == \"\":\n option = None\n elif line.startswith(\"# install:\"):\n option = line.split(\":\")[1]\n dependencies[option] = []\n elif not line.startswith(\"#\") and option:\n dependencies[option].append(line)\n\ninstall_req = dependencies[\"required\"]\ndel dependencies[\"required\"]\n\n\n# --------------------------------------------------------------------------- #\n# configuration\n# --------------------------------------------------------------------------- #\nsetup(\n install_requires=install_req,\n extras_require=dependencies,\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\"\"\"Installs pymodbus using setuptools.\"\"\"\n\n\n# --------------------------------------------------------------------------- #\n# initialization\n# --------------------------------------------------------------------------- #\nfrom setuptools import setup\n\n\ndependencies = {}\nwith open(\"requirements.txt\") as reqs:\n option = None\n for line in reqs.read().split(\"\\n\"):\n if line == \"\":\n option = None\n elif line.startswith(\"# install:\"):\n option = line.split(\":\")[1]\n dependencies[option] = []\n elif not line.startswith(\"#\") and option:\n dependencies[option].append(line)\n\ninstall_req = dependencies[\"required\"]\ndel dependencies[\"required\"]\n\n\n# --------------------------------------------------------------------------- #\n# configuration\n# 
--------------------------------------------------------------------------- #\nsetup(\n install_requires=install_req,\n extras_require=dependencies,\n package_data={\"pymodbus\": [\"py.typed\"]},\n)\n", "path": "setup.py"}]} | 847 | 64 |
gh_patches_debug_23502 | rasdani/github-patches | git_diff | uccser__cs-unplugged-255 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Create example test for templatetags
Create example test for templatetags
--- END ISSUE ---
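As a starting point, here is a hypothetical example test for the `render_html_field` tag defined in the file below; the class and method names are illustrative, and it assumes the `general` app is available in the test settings.
```python
from django.template import Context, Template
from django.test import SimpleTestCase


class RenderHTMLFieldTagTest(SimpleTestCase):
    def test_renders_html_stored_in_a_variable(self):
        # The tag prepends {% load static %} and renders the variable as a template.
        template = Template(
            "{% load render_html_field %}{% render_html_field content %}"
        )
        rendered = template.render(Context({"content": "<p>Hello</p>"}))
        self.assertEqual(rendered.strip(), "<p>Hello</p>")
```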
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `csunplugged/general/templatetags/render_html_field.py`
Content:
```
1 """Module for the custom render_html_field template tag."""
2
3 from django import template
4 from django.template import Template, Variable, TemplateSyntaxError
5
6
7 class RenderHTMLFieldNode(template.Node):
8 """Class used for the custom render_html_field template tag."""
9
10 def __init__(self, item_to_be_rendered):
11 """Create the RenderHTMLFieldNode object."""
12 self.item_to_be_rendered = Variable(item_to_be_rendered)
13
14 def render(self, context):
15 """Render the text with the static template tag.
16
17 Returns:
18 Rendered string of text, or an empty string if the render
19 fails to convert.
20 """
21 try:
22 actual_item = '{% load static %}\n' + self.item_to_be_rendered.resolve(context)
23 return Template(actual_item).render(context)
24 except template.VariableDoesNotExist:
25 return ''
26
27
28 def render_html_field(parser, token):
29 """Run when the render_html_field template tag is used.
30
31 Returns:
32 Rendered string of text, or an empty string if the render
33 fails to convert.
34 """
35 bits = token.split_contents()
36 if len(bits) != 2:
37 raise TemplateSyntaxError("'%s' takes only one argument"
38 " (a variable representing a template to render)" % bits[0])
39 return RenderHTMLFieldNode(bits[1])
40
41
42 register = template.Library()
43 render_html_field = register.tag(render_html_field)
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/csunplugged/general/templatetags/render_html_field.py b/csunplugged/general/templatetags/render_html_field.py
--- a/csunplugged/general/templatetags/render_html_field.py
+++ b/csunplugged/general/templatetags/render_html_field.py
@@ -3,6 +3,12 @@
from django import template
from django.template import Template, Variable, TemplateSyntaxError
+INVALID_ATTRIBUTE_MESSAGE = "The 'render_html_field' tag was given an " \
+ "attribute that could not be converted to a string."
+
+MISSING_ATTRIBUTE_MESSAGE = "The 'render_html_field' tag was given an " \
+ "attribute that does not exist."
+
class RenderHTMLFieldNode(template.Node):
"""Class used for the custom render_html_field template tag."""
@@ -21,8 +27,10 @@
try:
actual_item = '{% load static %}\n' + self.item_to_be_rendered.resolve(context)
return Template(actual_item).render(context)
+ except TypeError:
+ raise TemplateSyntaxError(INVALID_ATTRIBUTE_MESSAGE)
except template.VariableDoesNotExist:
- return ''
+ raise TemplateSyntaxError(MISSING_ATTRIBUTE_MESSAGE)
def render_html_field(parser, token):
| {"golden_diff": "diff --git a/csunplugged/general/templatetags/render_html_field.py b/csunplugged/general/templatetags/render_html_field.py\n--- a/csunplugged/general/templatetags/render_html_field.py\n+++ b/csunplugged/general/templatetags/render_html_field.py\n@@ -3,6 +3,12 @@\n from django import template\n from django.template import Template, Variable, TemplateSyntaxError\n \n+INVALID_ATTRIBUTE_MESSAGE = \"The 'render_html_field' tag was given an \" \\\n+ \"attribute that could not be converted to a string.\"\n+\n+MISSING_ATTRIBUTE_MESSAGE = \"The 'render_html_field' tag was given an \" \\\n+ \"attribute that does not exist.\"\n+\n \n class RenderHTMLFieldNode(template.Node):\n \"\"\"Class used for the custom render_html_field template tag.\"\"\"\n@@ -21,8 +27,10 @@\n try:\n actual_item = '{% load static %}\\n' + self.item_to_be_rendered.resolve(context)\n return Template(actual_item).render(context)\n+ except TypeError:\n+ raise TemplateSyntaxError(INVALID_ATTRIBUTE_MESSAGE)\n except template.VariableDoesNotExist:\n- return ''\n+ raise TemplateSyntaxError(MISSING_ATTRIBUTE_MESSAGE)\n \n \n def render_html_field(parser, token):\n", "issue": "Create example test for templatetags\n\nCreate example test for templatetags\n\n", "before_files": [{"content": "\"\"\"Module for the custom render_html_field template tag.\"\"\"\n\nfrom django import template\nfrom django.template import Template, Variable, TemplateSyntaxError\n\n\nclass RenderHTMLFieldNode(template.Node):\n \"\"\"Class used for the custom render_html_field template tag.\"\"\"\n\n def __init__(self, item_to_be_rendered):\n \"\"\"Create the RenderHTMLFieldNode object.\"\"\"\n self.item_to_be_rendered = Variable(item_to_be_rendered)\n\n def render(self, context):\n \"\"\"Render the text with the static template tag.\n\n Returns:\n Rendered string of text, or an empty string if the render\n fails to convert.\n \"\"\"\n try:\n actual_item = '{% load static %}\\n' + self.item_to_be_rendered.resolve(context)\n return Template(actual_item).render(context)\n except template.VariableDoesNotExist:\n return ''\n\n\ndef render_html_field(parser, token):\n \"\"\"Run when the render_html_field template tag is used.\n\n Returns:\n Rendered string of text, or an empty string if the render\n fails to convert.\n \"\"\"\n bits = token.split_contents()\n if len(bits) != 2:\n raise TemplateSyntaxError(\"'%s' takes only one argument\"\n \" (a variable representing a template to render)\" % bits[0])\n return RenderHTMLFieldNode(bits[1])\n\n\nregister = template.Library()\nrender_html_field = register.tag(render_html_field)\n", "path": "csunplugged/general/templatetags/render_html_field.py"}], "after_files": [{"content": "\"\"\"Module for the custom render_html_field template tag.\"\"\"\n\nfrom django import template\nfrom django.template import Template, Variable, TemplateSyntaxError\n\nINVALID_ATTRIBUTE_MESSAGE = \"The 'render_html_field' tag was given an \" \\\n \"attribute that could not be converted to a string.\"\n\nMISSING_ATTRIBUTE_MESSAGE = \"The 'render_html_field' tag was given an \" \\\n \"attribute that does not exist.\"\n\n\nclass RenderHTMLFieldNode(template.Node):\n \"\"\"Class used for the custom render_html_field template tag.\"\"\"\n\n def __init__(self, item_to_be_rendered):\n \"\"\"Create the RenderHTMLFieldNode object.\"\"\"\n self.item_to_be_rendered = Variable(item_to_be_rendered)\n\n def render(self, context):\n \"\"\"Render the text with the static template tag.\n\n Returns:\n Rendered string of text, or an empty string if 
the render\n fails to convert.\n \"\"\"\n try:\n actual_item = '{% load static %}\\n' + self.item_to_be_rendered.resolve(context)\n return Template(actual_item).render(context)\n except TypeError:\n raise TemplateSyntaxError(INVALID_ATTRIBUTE_MESSAGE)\n except template.VariableDoesNotExist:\n raise TemplateSyntaxError(MISSING_ATTRIBUTE_MESSAGE)\n\n\ndef render_html_field(parser, token):\n \"\"\"Run when the render_html_field template tag is used.\n\n Returns:\n Rendered string of text, or an empty string if the render\n fails to convert.\n \"\"\"\n bits = token.split_contents()\n if len(bits) != 2:\n raise TemplateSyntaxError(\"'%s' takes only one argument\"\n \" (a variable representing a template to render)\" % bits[0])\n return RenderHTMLFieldNode(bits[1])\n\n\nregister = template.Library()\nrender_html_field = register.tag(render_html_field)\n", "path": "csunplugged/general/templatetags/render_html_field.py"}]} | 667 | 274 |
gh_patches_debug_10305 | rasdani/github-patches | git_diff | scikit-hep__pyhf-1242 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add pyhf.writexml and pyhf.readxml to public API docs
# Description
As first brought up in PR #1125, `pyhf.writexml` and `pyhf.readxml` are not currently documented in the public Python API docs. This should get fixed.
--- END ISSUE ---
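Documenting `pyhf.readxml` and `pyhf.writexml` with Sphinx autodoc presumably requires the `xmlio` dependencies to be importable in the docs build environment. A minimal sketch of one way to arrange that in `setup.py`, folding the existing `xmlio` extra into the `docs` extra (illustrative only; it simply mirrors the shape of the extras already defined in the file below):
```python
# Sketch: make the xmlio dependencies available to the docs build so that
# Sphinx can import pyhf.readxml / pyhf.writexml when rendering the API docs.
extras_require['docs'] = sorted(
    set(
        extras_require['xmlio']
        + [
            'sphinx>=3.1.2',
            'sphinxcontrib-bibtex~=1.0',
            'sphinx-click',
            'sphinx_rtd_theme',
            'nbsphinx',
            'ipywidgets',
            'sphinx-issues',
            'sphinx-copybutton>0.2.9',
        ]
    )
)
```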
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup
2
3 extras_require = {
4 'shellcomplete': ['click_completion'],
5 'tensorflow': [
6 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major
7 'tensorflow-probability~=0.10.0',
8 ],
9 'torch': ['torch~=1.2'],
10 'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'],
11 'xmlio': [
12 'uproot3~=3.14',
13 'uproot~=4.0',
14 ], # uproot3 required until writing to ROOT supported in uproot4
15 'minuit': ['iminuit~=2.1'],
16 }
17 extras_require['backends'] = sorted(
18 set(
19 extras_require['tensorflow']
20 + extras_require['torch']
21 + extras_require['jax']
22 + extras_require['minuit']
23 )
24 )
25 extras_require['contrib'] = sorted({'matplotlib', 'requests'})
26 extras_require['lint'] = sorted({'flake8', 'black'})
27
28 extras_require['test'] = sorted(
29 set(
30 extras_require['backends']
31 + extras_require['xmlio']
32 + extras_require['contrib']
33 + extras_require['shellcomplete']
34 + [
35 'pytest~=6.0',
36 'pytest-cov>=2.5.1',
37 'pytest-mock',
38 'pytest-benchmark[histogram]',
39 'pytest-console-scripts',
40 'pytest-mpl',
41 'pydocstyle',
42 'coverage>=4.0', # coveralls
43 'papermill~=2.0',
44 'nteract-scrapbook~=0.2',
45 'jupyter',
46 'graphviz',
47 'jsonpatch',
48 ]
49 )
50 )
51 extras_require['docs'] = sorted(
52 {
53 'sphinx>=3.1.2',
54 'sphinxcontrib-bibtex~=1.0',
55 'sphinx-click',
56 'sphinx_rtd_theme',
57 'nbsphinx',
58 'ipywidgets',
59 'sphinx-issues',
60 'sphinx-copybutton>0.2.9',
61 }
62 )
63 extras_require['develop'] = sorted(
64 set(
65 extras_require['docs']
66 + extras_require['lint']
67 + extras_require['test']
68 + [
69 'nbdime',
70 'bump2version',
71 'ipython',
72 'pre-commit',
73 'check-manifest',
74 'codemetapy>=0.3.4',
75 'twine',
76 ]
77 )
78 )
79 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
80
81
82 setup(
83 extras_require=extras_require,
84 use_scm_version=lambda: {'local_scheme': lambda version: ''},
85 )
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -49,16 +49,19 @@
)
)
extras_require['docs'] = sorted(
- {
- 'sphinx>=3.1.2',
- 'sphinxcontrib-bibtex~=1.0',
- 'sphinx-click',
- 'sphinx_rtd_theme',
- 'nbsphinx',
- 'ipywidgets',
- 'sphinx-issues',
- 'sphinx-copybutton>0.2.9',
- }
+ set(
+ extras_require['xmlio']
+ + [
+ 'sphinx>=3.1.2',
+ 'sphinxcontrib-bibtex~=1.0',
+ 'sphinx-click',
+ 'sphinx_rtd_theme',
+ 'nbsphinx',
+ 'ipywidgets',
+ 'sphinx-issues',
+ 'sphinx-copybutton>0.2.9',
+ ]
+ )
)
extras_require['develop'] = sorted(
set(
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -49,16 +49,19 @@\n )\n )\n extras_require['docs'] = sorted(\n- {\n- 'sphinx>=3.1.2',\n- 'sphinxcontrib-bibtex~=1.0',\n- 'sphinx-click',\n- 'sphinx_rtd_theme',\n- 'nbsphinx',\n- 'ipywidgets',\n- 'sphinx-issues',\n- 'sphinx-copybutton>0.2.9',\n- }\n+ set(\n+ extras_require['xmlio']\n+ + [\n+ 'sphinx>=3.1.2',\n+ 'sphinxcontrib-bibtex~=1.0',\n+ 'sphinx-click',\n+ 'sphinx_rtd_theme',\n+ 'nbsphinx',\n+ 'ipywidgets',\n+ 'sphinx-issues',\n+ 'sphinx-copybutton>0.2.9',\n+ ]\n+ )\n )\n extras_require['develop'] = sorted(\n set(\n", "issue": "Add pyhf.writexml and pyhf.readxml to public API docs\n# Description\r\n\r\nAs first brought up in PR #1125, `pyhf.writexml` and `pyhf.readxml` are not currently documented in the public Python API docs. This should get fixed.\n", "before_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major\n 'tensorflow-probability~=0.10.0',\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'],\n 'xmlio': [\n 'uproot3~=3.14',\n 'uproot~=4.0',\n ], # uproot3 required until writing to ROOT supported in uproot4\n 'minuit': ['iminuit~=2.1'],\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted({'matplotlib', 'requests'})\nextras_require['lint'] = sorted({'flake8', 'black'})\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'pytest~=6.0',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n {\n 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex~=1.0',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n }\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + [\n 'nbdime',\n 'bump2version',\n 'ipython',\n 'pre-commit',\n 'check-manifest',\n 'codemetapy>=0.3.4',\n 'twine',\n ]\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow~=2.2.0', # TensorFlow minor releases are as volatile as major\n 'tensorflow-probability~=0.10.0',\n ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.2.4', 'jaxlib~=0.1.56'],\n 'xmlio': [\n 'uproot3~=3.14',\n 'uproot~=4.0',\n ], # uproot3 required until writing to ROOT supported in uproot4\n 'minuit': ['iminuit~=2.1'],\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted({'matplotlib', 'requests'})\nextras_require['lint'] = sorted({'flake8', 'black'})\n\nextras_require['test'] = sorted(\n set(\n 
extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'pytest~=6.0',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n extras_require['xmlio']\n + [\n 'sphinx>=3.1.2',\n 'sphinxcontrib-bibtex~=1.0',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + [\n 'nbdime',\n 'bump2version',\n 'ipython',\n 'pre-commit',\n 'check-manifest',\n 'codemetapy>=0.3.4',\n 'twine',\n ]\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}]} | 1,068 | 239 |
gh_patches_debug_38716 | rasdani/github-patches | git_diff | psychopy__psychopy-1542 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
no documentation for trialHandler.data.addDataType
The TrialHandler.py demo uses the function addDataType:
```
trials.data.addDataType('choice') # this will help store things with the stimuli
trials.data.addDataType('RT') # add as many types as you like
```
But I haven't been able to find any documentation for addDataType in the psychopy.data page (http://www.psychopy.org/api/data.html). (I'd like to know what it does)
--- END ISSUE ---
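Judging from how the demo uses it, `addDataType` pre-registers a named column on the handler's `data` store so that `trials.data.add(name, value)` can append one value per trial under that name, which the summary and save methods then aggregate (e.g. as `RT_mean`). A small self-contained sketch of that pattern, with made-up values purely for illustration:
```python
from psychopy import data

# One condition is enough to show the pattern.
stimList = [{'sf': 0.5, 'ori': 90}]
trials = data.TrialHandler(stimList, 10)      # 10 repeats of each condition

trials.data.addDataType('RT')                 # declare the data column up front
for thisTrial in trials:
    trials.data.add('RT', 0.5)                # store one value for this trial

# Aggregated output, e.g. the per-condition mean of the stored values.
trials.printAsText(stimOut=['sf', 'ori'], dataOut=['RT_mean'])
```
For what it's worth, the patch shown further down this row simply drops the `addDataType` calls, which suggests `data.add` can register new names on the fly.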
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `psychopy/demos/coder/experiment control/TrialHandler.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 Demo of TrialHandler
6 """
7
8 from __future__ import division
9 from __future__ import print_function
10
11 from builtins import range
12 from random import random
13 from psychopy import data
14
15 # create your list of stimuli
16 # NB as of version 1.62 you could simply import an excel spreadsheet with this
17 # using data.importConditions('someFile.xlsx')
18 stimList = []
19 for ori in range(90, 180, 30):
20 for sf in [0.5, 1.0, 2.0]:
21 # append a python 'dictionary' to the list
22 stimList.append({'sf':sf, 'ori':ori})
23
24 # organize them with the trial handler
25 trials = data.TrialHandler(stimList, 10, extraInfo= {'participant':"Nobody", 'session':1})
26 trials.data.addDataType('choice') # this will help store things with the stimuli
27 trials.data.addDataType('RT') # add as many types as you like
28
29 # run the experiment
30 nDone = 0
31 for thisTrial in trials: # handler can act like a for loop
32 # simulate some data
33 thisReactionTime = random() + float(thisTrial['sf']) / 2.0
34 thisChoice = round(random())
35 trials.data.add('RT', thisReactionTime) # add the data to our set
36 trials.data.add('choice', thisChoice)
37 nDone += 1 # just for a quick reference
38
39 msg = 'trial %i had position %s in the list (sf=%.1f)'
40 print(msg % (nDone, trials.thisIndex, thisTrial['sf']))
41
42 # after the experiment
43 print('\n')
44 trials.printAsText(stimOut=['sf', 'ori'], # write summary data to screen
45 dataOut=['RT_mean', 'RT_std', 'choice_raw'])
46 trials.saveAsText(fileName='testData', # also write summary data to a text file
47 stimOut=['sf', 'ori'],
48 dataOut=['RT_mean', 'RT_std', 'choice_raw'])
49 trials.saveAsExcel(fileName='testData', # ...or an xlsx file (which supports sheets)
50 sheetName = 'rawData',
51 stimOut=['sf', 'ori'],
52 dataOut=['RT_mean', 'RT_std', 'choice_raw'])
53 trials.saveAsPickle(fileName = 'testData') # this saves a copy of the whole object
54 df = trials.saveAsWideText("testDataWide.txt") # wide is useful for analysis with R or SPSS. Also returns dataframe df
55
56 # The contents of this file are in the public domain.
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/psychopy/demos/coder/experiment control/TrialHandler.py b/psychopy/demos/coder/experiment control/TrialHandler.py
--- a/psychopy/demos/coder/experiment control/TrialHandler.py
+++ b/psychopy/demos/coder/experiment control/TrialHandler.py
@@ -3,6 +3,9 @@
"""
Demo of TrialHandler
+
+The contents of this file are in the public domain.
+
"""
from __future__ import division
@@ -10,6 +13,7 @@
from builtins import range
from random import random
+
from psychopy import data
# create your list of stimuli
@@ -19,12 +23,11 @@
for ori in range(90, 180, 30):
for sf in [0.5, 1.0, 2.0]:
# append a python 'dictionary' to the list
- stimList.append({'sf':sf, 'ori':ori})
+ stimList.append({'sf': sf, 'ori': ori})
# organize them with the trial handler
-trials = data.TrialHandler(stimList, 10, extraInfo= {'participant':"Nobody", 'session':1})
-trials.data.addDataType('choice') # this will help store things with the stimuli
-trials.data.addDataType('RT') # add as many types as you like
+trials = data.TrialHandler(stimList, 10,
+ extraInfo={'participant': "Nobody", 'session': 1})
# run the experiment
nDone = 0
@@ -39,18 +42,27 @@
msg = 'trial %i had position %s in the list (sf=%.1f)'
print(msg % (nDone, trials.thisIndex, thisTrial['sf']))
-# after the experiment
+# After the experiment, print a new line
print('\n')
-trials.printAsText(stimOut=['sf', 'ori'], # write summary data to screen
- dataOut=['RT_mean', 'RT_std', 'choice_raw'])
-trials.saveAsText(fileName='testData', # also write summary data to a text file
- stimOut=['sf', 'ori'],
- dataOut=['RT_mean', 'RT_std', 'choice_raw'])
-trials.saveAsExcel(fileName='testData', # ...or an xlsx file (which supports sheets)
- sheetName = 'rawData',
+
+# Write summary data to screen
+trials.printAsText(stimOut=['sf', 'ori'],
+ dataOut=['RT_mean', 'RT_std', 'choice_raw'])
+
+# Write summary data to a text file ...
+trials.saveAsText(fileName='testData',
stimOut=['sf', 'ori'],
dataOut=['RT_mean', 'RT_std', 'choice_raw'])
-trials.saveAsPickle(fileName = 'testData') # this saves a copy of the whole object
-df = trials.saveAsWideText("testDataWide.txt") # wide is useful for analysis with R or SPSS. Also returns dataframe df
-# The contents of this file are in the public domain.
+# ... or an xlsx file (which supports sheets)
+trials.saveAsExcel(fileName='testData',
+ sheetName='rawData',
+ stimOut=['sf', 'ori'],
+ dataOut=['RT_mean', 'RT_std', 'choice_raw'])
+
+# Save a copy of the whole TrialHandler object, which can be reloaded later to
+# re-create the experiment.
+trials.saveAsPickle(fileName='testData')
+
+# Wide format is useful for analysis with R or SPSS.
+df = trials.saveAsWideText('testDataWide.txt')
| {"golden_diff": "diff --git a/psychopy/demos/coder/experiment control/TrialHandler.py b/psychopy/demos/coder/experiment control/TrialHandler.py\n--- a/psychopy/demos/coder/experiment control/TrialHandler.py\t\n+++ b/psychopy/demos/coder/experiment control/TrialHandler.py\t\n@@ -3,6 +3,9 @@\n \n \"\"\"\n Demo of TrialHandler\n+\n+The contents of this file are in the public domain.\n+\n \"\"\"\n \n from __future__ import division\n@@ -10,6 +13,7 @@\n \n from builtins import range\n from random import random\n+\n from psychopy import data\n \n # create your list of stimuli\n@@ -19,12 +23,11 @@\n for ori in range(90, 180, 30):\n for sf in [0.5, 1.0, 2.0]:\n # append a python 'dictionary' to the list\n- stimList.append({'sf':sf, 'ori':ori})\n+ stimList.append({'sf': sf, 'ori': ori})\n \n # organize them with the trial handler\n-trials = data.TrialHandler(stimList, 10, extraInfo= {'participant':\"Nobody\", 'session':1})\n-trials.data.addDataType('choice') # this will help store things with the stimuli\n-trials.data.addDataType('RT') # add as many types as you like\n+trials = data.TrialHandler(stimList, 10,\n+ extraInfo={'participant': \"Nobody\", 'session': 1})\n \n # run the experiment\n nDone = 0\n@@ -39,18 +42,27 @@\n msg = 'trial %i had position %s in the list (sf=%.1f)'\n print(msg % (nDone, trials.thisIndex, thisTrial['sf']))\n \n-# after the experiment\n+# After the experiment, print a new line\n print('\\n')\n-trials.printAsText(stimOut=['sf', 'ori'], # write summary data to screen\n- dataOut=['RT_mean', 'RT_std', 'choice_raw'])\n-trials.saveAsText(fileName='testData', # also write summary data to a text file\n- stimOut=['sf', 'ori'],\n- dataOut=['RT_mean', 'RT_std', 'choice_raw'])\n-trials.saveAsExcel(fileName='testData', # ...or an xlsx file (which supports sheets)\n- sheetName = 'rawData',\n+\n+# Write summary data to screen\n+trials.printAsText(stimOut=['sf', 'ori'],\n+ dataOut=['RT_mean', 'RT_std', 'choice_raw'])\n+\n+# Write summary data to a text file ...\n+trials.saveAsText(fileName='testData',\n stimOut=['sf', 'ori'],\n dataOut=['RT_mean', 'RT_std', 'choice_raw'])\n-trials.saveAsPickle(fileName = 'testData') # this saves a copy of the whole object\n-df = trials.saveAsWideText(\"testDataWide.txt\") # wide is useful for analysis with R or SPSS. Also returns dataframe df\n \n-# The contents of this file are in the public domain.\n+# ... or an xlsx file (which supports sheets)\n+trials.saveAsExcel(fileName='testData',\n+ sheetName='rawData',\n+ stimOut=['sf', 'ori'],\n+ dataOut=['RT_mean', 'RT_std', 'choice_raw'])\n+\n+# Save a copy of the whole TrialHandler object, which can be reloaded later to\n+# re-create the experiment.\n+trials.saveAsPickle(fileName='testData')\n+\n+# Wide format is useful for analysis with R or SPSS.\n+df = trials.saveAsWideText('testDataWide.txt')\n", "issue": "no documentation for trialHandler.data.addDataType\nThe TrialHandler.py demo uses the function addDataType:\r\n\r\n```\r\ntrials.data.addDataType('choice') # this will help store things with the stimuli\r\ntrials.data.addDataType('RT') # add as many types as you like\r\n```\r\n\r\nBut I haven't been able to find any documentation for addDataType in the psychopy.data page (http://www.psychopy.org/api/data.html). 
(I'd like to know what it does)\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nDemo of TrialHandler\n\"\"\"\n\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom builtins import range\nfrom random import random\nfrom psychopy import data\n\n# create your list of stimuli\n# NB as of version 1.62 you could simply import an excel spreadsheet with this\n# using data.importConditions('someFile.xlsx')\nstimList = []\nfor ori in range(90, 180, 30):\n for sf in [0.5, 1.0, 2.0]:\n # append a python 'dictionary' to the list\n stimList.append({'sf':sf, 'ori':ori})\n\n# organize them with the trial handler\ntrials = data.TrialHandler(stimList, 10, extraInfo= {'participant':\"Nobody\", 'session':1})\ntrials.data.addDataType('choice') # this will help store things with the stimuli\ntrials.data.addDataType('RT') # add as many types as you like\n\n# run the experiment\nnDone = 0\nfor thisTrial in trials: # handler can act like a for loop\n # simulate some data\n thisReactionTime = random() + float(thisTrial['sf']) / 2.0\n thisChoice = round(random())\n trials.data.add('RT', thisReactionTime) # add the data to our set\n trials.data.add('choice', thisChoice)\n nDone += 1 # just for a quick reference\n\n msg = 'trial %i had position %s in the list (sf=%.1f)'\n print(msg % (nDone, trials.thisIndex, thisTrial['sf']))\n\n# after the experiment\nprint('\\n')\ntrials.printAsText(stimOut=['sf', 'ori'], # write summary data to screen\n dataOut=['RT_mean', 'RT_std', 'choice_raw'])\ntrials.saveAsText(fileName='testData', # also write summary data to a text file\n stimOut=['sf', 'ori'],\n dataOut=['RT_mean', 'RT_std', 'choice_raw'])\ntrials.saveAsExcel(fileName='testData', # ...or an xlsx file (which supports sheets)\n sheetName = 'rawData',\n stimOut=['sf', 'ori'],\n dataOut=['RT_mean', 'RT_std', 'choice_raw'])\ntrials.saveAsPickle(fileName = 'testData') # this saves a copy of the whole object\ndf = trials.saveAsWideText(\"testDataWide.txt\") # wide is useful for analysis with R or SPSS. 
Also returns dataframe df\n\n# The contents of this file are in the public domain.\n", "path": "psychopy/demos/coder/experiment control/TrialHandler.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nDemo of TrialHandler\n\nThe contents of this file are in the public domain.\n\n\"\"\"\n\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom builtins import range\nfrom random import random\n\nfrom psychopy import data\n\n# create your list of stimuli\n# NB as of version 1.62 you could simply import an excel spreadsheet with this\n# using data.importConditions('someFile.xlsx')\nstimList = []\nfor ori in range(90, 180, 30):\n for sf in [0.5, 1.0, 2.0]:\n # append a python 'dictionary' to the list\n stimList.append({'sf': sf, 'ori': ori})\n\n# organize them with the trial handler\ntrials = data.TrialHandler(stimList, 10,\n extraInfo={'participant': \"Nobody\", 'session': 1})\n\n# run the experiment\nnDone = 0\nfor thisTrial in trials: # handler can act like a for loop\n # simulate some data\n thisReactionTime = random() + float(thisTrial['sf']) / 2.0\n thisChoice = round(random())\n trials.data.add('RT', thisReactionTime) # add the data to our set\n trials.data.add('choice', thisChoice)\n nDone += 1 # just for a quick reference\n\n msg = 'trial %i had position %s in the list (sf=%.1f)'\n print(msg % (nDone, trials.thisIndex, thisTrial['sf']))\n\n# After the experiment, print a new line\nprint('\\n')\n\n# Write summary data to screen\ntrials.printAsText(stimOut=['sf', 'ori'],\n dataOut=['RT_mean', 'RT_std', 'choice_raw'])\n\n# Write summary data to a text file ...\ntrials.saveAsText(fileName='testData',\n stimOut=['sf', 'ori'],\n dataOut=['RT_mean', 'RT_std', 'choice_raw'])\n\n# ... or an xlsx file (which supports sheets)\ntrials.saveAsExcel(fileName='testData',\n sheetName='rawData',\n stimOut=['sf', 'ori'],\n dataOut=['RT_mean', 'RT_std', 'choice_raw'])\n\n# Save a copy of the whole TrialHandler object, which can be reloaded later to\n# re-create the experiment.\ntrials.saveAsPickle(fileName='testData')\n\n# Wide format is useful for analysis with R or SPSS.\ndf = trials.saveAsWideText('testDataWide.txt')\n", "path": "psychopy/demos/coder/experiment control/TrialHandler.py"}]} | 1,060 | 820 |
gh_patches_debug_2575 | rasdani/github-patches | git_diff | coala__coala-3608 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove call_without_output from Shell.py L7
This line was used by the requirement classes; it isn't used anymore, as they now use sarge, so it should be removed.
difficulty/newcomer
--- END ISSUE ---
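Since `call_without_output` is dead code, the change amounts to a deletion; a sketch of how the top of `coalib/misc/Shell.py` collapses once the helper and its now-unused imports (`functools`, `call`, `DEVNULL`) are gone:
```python
# coalib/misc/Shell.py after removing the unused helper: only the imports
# actually needed by run_interactive_shell_command / run_shell_command remain.
from contextlib import contextmanager
import shlex
from subprocess import PIPE, Popen
```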
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `coalib/misc/Shell.py`
Content:
```
1 from contextlib import contextmanager
2 import functools
3 import shlex
4 from subprocess import PIPE, Popen, call, DEVNULL
5
6
7 call_without_output = functools.partial(call, stdout=DEVNULL, stderr=DEVNULL)
8 """
9 Uses subprocess.call to execute a command, but suppresses the output and
10 the errors.
11 """
12
13
14 @contextmanager
15 def run_interactive_shell_command(command, **kwargs):
16 """
17 Runs a single command in shell and provides stdout, stderr and stdin
18 streams.
19
20 This function creates a context manager that sets up the process (using
21 ``subprocess.Popen()``), returns to caller and waits for process to exit on
22 leaving.
23
24 By default the process is opened in ``universal_newlines`` mode and creates
25 pipes for all streams (stdout, stderr and stdin) using ``subprocess.PIPE``
26 special value. These pipes are closed automatically, so if you want to get
27 the contents of the streams you should retrieve them before the context
28 manager exits.
29
30 >>> with run_interactive_shell_command(["echo", "TEXT"]) as p:
31 ... stdout = p.stdout
32 ... stdout_text = stdout.read()
33 >>> stdout_text
34 'TEXT\\n'
35 >>> stdout.closed
36 True
37
38 Custom streams provided are not closed except of ``subprocess.PIPE``.
39
40 >>> from tempfile import TemporaryFile
41 >>> stream = TemporaryFile()
42 >>> with run_interactive_shell_command(["echo", "TEXT"],
43 ... stdout=stream) as p:
44 ... stderr = p.stderr
45 >>> stderr.closed
46 True
47 >>> stream.closed
48 False
49
50 :param command: The command to run on shell. This parameter can either
51 be a sequence of arguments that are directly passed to
52 the process or a string. A string gets splitted beforehand
53 using ``shlex.split()``. If providing ``shell=True`` as a
54 keyword-argument, no ``shlex.split()`` is performed and the
55 command string goes directly to ``subprocess.Popen()``.
56 :param kwargs: Additional keyword arguments to pass to
57 ``subprocess.Popen`` that are used to spawn the process.
58 :return: A context manager yielding the process started from the
59 command.
60 """
61 if not kwargs.get('shell', False) and isinstance(command, str):
62 command = shlex.split(command)
63
64 args = {'stdout': PIPE,
65 'stderr': PIPE,
66 'stdin': PIPE,
67 'universal_newlines': True}
68 args.update(kwargs)
69
70 process = Popen(command, **args)
71 try:
72 yield process
73 finally:
74 if args['stdout'] is PIPE:
75 process.stdout.close()
76 if args['stderr'] is PIPE:
77 process.stderr.close()
78 if args['stdin'] is PIPE:
79 process.stdin.close()
80
81 process.wait()
82
83
84 def run_shell_command(command, stdin=None, **kwargs):
85 """
86 Runs a single command in shell and returns the read stdout and stderr data.
87
88 This function waits for the process (created using ``subprocess.Popen()``)
89 to exit. Effectively it wraps ``run_interactive_shell_command()`` and uses
90 ``communicate()`` on the process.
91
92 See also ``run_interactive_shell_command()``.
93
94 :param command: The command to run on shell. This parameter can either
95 be a sequence of arguments that are directly passed to
96 the process or a string. A string gets splitted beforehand
97 using ``shlex.split()``.
98 :param stdin: Initial input to send to the process.
99 :param kwargs: Additional keyword arguments to pass to
100 ``subprocess.Popen`` that is used to spawn the process.
101 :return: A tuple with ``(stdoutstring, stderrstring)``.
102 """
103 with run_interactive_shell_command(command, **kwargs) as p:
104 ret = p.communicate(stdin)
105 return ret
106
107
108 def get_shell_type(): # pragma: no cover
109 """
110 Finds the current shell type based on the outputs of common pre-defined
111 variables in them. This is useful to identify which sort of escaping
112 is required for strings.
113
114 :return: The shell type. This can be either "powershell" if Windows
115 Powershell is detected, "cmd" if command prompt is been
116 detected or "sh" if it's neither of these.
117 """
118 out = run_shell_command('echo $host.name', shell=True)[0]
119 if out.strip() == 'ConsoleHost':
120 return 'powershell'
121 out = run_shell_command('echo $0', shell=True)[0]
122 if out.strip() == '$0':
123 return 'cmd'
124 return 'sh'
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/coalib/misc/Shell.py b/coalib/misc/Shell.py
--- a/coalib/misc/Shell.py
+++ b/coalib/misc/Shell.py
@@ -1,14 +1,6 @@
from contextlib import contextmanager
-import functools
import shlex
-from subprocess import PIPE, Popen, call, DEVNULL
-
-
-call_without_output = functools.partial(call, stdout=DEVNULL, stderr=DEVNULL)
-"""
-Uses subprocess.call to execute a command, but suppresses the output and
-the errors.
-"""
+from subprocess import PIPE, Popen
@contextmanager
| {"golden_diff": "diff --git a/coalib/misc/Shell.py b/coalib/misc/Shell.py\n--- a/coalib/misc/Shell.py\n+++ b/coalib/misc/Shell.py\n@@ -1,14 +1,6 @@\n from contextlib import contextmanager\n-import functools\n import shlex\n-from subprocess import PIPE, Popen, call, DEVNULL\n-\n-\n-call_without_output = functools.partial(call, stdout=DEVNULL, stderr=DEVNULL)\n-\"\"\"\n-Uses subprocess.call to execute a command, but suppresses the output and\n-the errors.\n-\"\"\"\n+from subprocess import PIPE, Popen\n \n \n @contextmanager\n", "issue": "Remove call_without_output from Shell.py L7\nThis line was used by the requirement classes, it isnt used anymore as they use sarge, so it should be removed.\r\ndifficulty/newcomer\n", "before_files": [{"content": "from contextlib import contextmanager\nimport functools\nimport shlex\nfrom subprocess import PIPE, Popen, call, DEVNULL\n\n\ncall_without_output = functools.partial(call, stdout=DEVNULL, stderr=DEVNULL)\n\"\"\"\nUses subprocess.call to execute a command, but suppresses the output and\nthe errors.\n\"\"\"\n\n\n@contextmanager\ndef run_interactive_shell_command(command, **kwargs):\n \"\"\"\n Runs a single command in shell and provides stdout, stderr and stdin\n streams.\n\n This function creates a context manager that sets up the process (using\n ``subprocess.Popen()``), returns to caller and waits for process to exit on\n leaving.\n\n By default the process is opened in ``universal_newlines`` mode and creates\n pipes for all streams (stdout, stderr and stdin) using ``subprocess.PIPE``\n special value. These pipes are closed automatically, so if you want to get\n the contents of the streams you should retrieve them before the context\n manager exits.\n\n >>> with run_interactive_shell_command([\"echo\", \"TEXT\"]) as p:\n ... stdout = p.stdout\n ... stdout_text = stdout.read()\n >>> stdout_text\n 'TEXT\\\\n'\n >>> stdout.closed\n True\n\n Custom streams provided are not closed except of ``subprocess.PIPE``.\n\n >>> from tempfile import TemporaryFile\n >>> stream = TemporaryFile()\n >>> with run_interactive_shell_command([\"echo\", \"TEXT\"],\n ... stdout=stream) as p:\n ... stderr = p.stderr\n >>> stderr.closed\n True\n >>> stream.closed\n False\n\n :param command: The command to run on shell. This parameter can either\n be a sequence of arguments that are directly passed to\n the process or a string. A string gets splitted beforehand\n using ``shlex.split()``. If providing ``shell=True`` as a\n keyword-argument, no ``shlex.split()`` is performed and the\n command string goes directly to ``subprocess.Popen()``.\n :param kwargs: Additional keyword arguments to pass to\n ``subprocess.Popen`` that are used to spawn the process.\n :return: A context manager yielding the process started from the\n command.\n \"\"\"\n if not kwargs.get('shell', False) and isinstance(command, str):\n command = shlex.split(command)\n\n args = {'stdout': PIPE,\n 'stderr': PIPE,\n 'stdin': PIPE,\n 'universal_newlines': True}\n args.update(kwargs)\n\n process = Popen(command, **args)\n try:\n yield process\n finally:\n if args['stdout'] is PIPE:\n process.stdout.close()\n if args['stderr'] is PIPE:\n process.stderr.close()\n if args['stdin'] is PIPE:\n process.stdin.close()\n\n process.wait()\n\n\ndef run_shell_command(command, stdin=None, **kwargs):\n \"\"\"\n Runs a single command in shell and returns the read stdout and stderr data.\n\n This function waits for the process (created using ``subprocess.Popen()``)\n to exit. 
Effectively it wraps ``run_interactive_shell_command()`` and uses\n ``communicate()`` on the process.\n\n See also ``run_interactive_shell_command()``.\n\n :param command: The command to run on shell. This parameter can either\n be a sequence of arguments that are directly passed to\n the process or a string. A string gets splitted beforehand\n using ``shlex.split()``.\n :param stdin: Initial input to send to the process.\n :param kwargs: Additional keyword arguments to pass to\n ``subprocess.Popen`` that is used to spawn the process.\n :return: A tuple with ``(stdoutstring, stderrstring)``.\n \"\"\"\n with run_interactive_shell_command(command, **kwargs) as p:\n ret = p.communicate(stdin)\n return ret\n\n\ndef get_shell_type(): # pragma: no cover\n \"\"\"\n Finds the current shell type based on the outputs of common pre-defined\n variables in them. This is useful to identify which sort of escaping\n is required for strings.\n\n :return: The shell type. This can be either \"powershell\" if Windows\n Powershell is detected, \"cmd\" if command prompt is been\n detected or \"sh\" if it's neither of these.\n \"\"\"\n out = run_shell_command('echo $host.name', shell=True)[0]\n if out.strip() == 'ConsoleHost':\n return 'powershell'\n out = run_shell_command('echo $0', shell=True)[0]\n if out.strip() == '$0':\n return 'cmd'\n return 'sh'\n", "path": "coalib/misc/Shell.py"}], "after_files": [{"content": "from contextlib import contextmanager\nimport shlex\nfrom subprocess import PIPE, Popen\n\n\n@contextmanager\ndef run_interactive_shell_command(command, **kwargs):\n \"\"\"\n Runs a single command in shell and provides stdout, stderr and stdin\n streams.\n\n This function creates a context manager that sets up the process (using\n ``subprocess.Popen()``), returns to caller and waits for process to exit on\n leaving.\n\n By default the process is opened in ``universal_newlines`` mode and creates\n pipes for all streams (stdout, stderr and stdin) using ``subprocess.PIPE``\n special value. These pipes are closed automatically, so if you want to get\n the contents of the streams you should retrieve them before the context\n manager exits.\n\n >>> with run_interactive_shell_command([\"echo\", \"TEXT\"]) as p:\n ... stdout = p.stdout\n ... stdout_text = stdout.read()\n >>> stdout_text\n 'TEXT\\\\n'\n >>> stdout.closed\n True\n\n Custom streams provided are not closed except of ``subprocess.PIPE``.\n\n >>> from tempfile import TemporaryFile\n >>> stream = TemporaryFile()\n >>> with run_interactive_shell_command([\"echo\", \"TEXT\"],\n ... stdout=stream) as p:\n ... stderr = p.stderr\n >>> stderr.closed\n True\n >>> stream.closed\n False\n\n :param command: The command to run on shell. This parameter can either\n be a sequence of arguments that are directly passed to\n the process or a string. A string gets splitted beforehand\n using ``shlex.split()``. 
If providing ``shell=True`` as a\n keyword-argument, no ``shlex.split()`` is performed and the\n command string goes directly to ``subprocess.Popen()``.\n :param kwargs: Additional keyword arguments to pass to\n ``subprocess.Popen`` that are used to spawn the process.\n :return: A context manager yielding the process started from the\n command.\n \"\"\"\n if not kwargs.get('shell', False) and isinstance(command, str):\n command = shlex.split(command)\n\n args = {'stdout': PIPE,\n 'stderr': PIPE,\n 'stdin': PIPE,\n 'universal_newlines': True}\n args.update(kwargs)\n\n process = Popen(command, **args)\n try:\n yield process\n finally:\n if args['stdout'] is PIPE:\n process.stdout.close()\n if args['stderr'] is PIPE:\n process.stderr.close()\n if args['stdin'] is PIPE:\n process.stdin.close()\n\n process.wait()\n\n\ndef run_shell_command(command, stdin=None, **kwargs):\n \"\"\"\n Runs a single command in shell and returns the read stdout and stderr data.\n\n This function waits for the process (created using ``subprocess.Popen()``)\n to exit. Effectively it wraps ``run_interactive_shell_command()`` and uses\n ``communicate()`` on the process.\n\n See also ``run_interactive_shell_command()``.\n\n :param command: The command to run on shell. This parameter can either\n be a sequence of arguments that are directly passed to\n the process or a string. A string gets splitted beforehand\n using ``shlex.split()``.\n :param stdin: Initial input to send to the process.\n :param kwargs: Additional keyword arguments to pass to\n ``subprocess.Popen`` that is used to spawn the process.\n :return: A tuple with ``(stdoutstring, stderrstring)``.\n \"\"\"\n with run_interactive_shell_command(command, **kwargs) as p:\n ret = p.communicate(stdin)\n return ret\n\n\ndef get_shell_type(): # pragma: no cover\n \"\"\"\n Finds the current shell type based on the outputs of common pre-defined\n variables in them. This is useful to identify which sort of escaping\n is required for strings.\n\n :return: The shell type. This can be either \"powershell\" if Windows\n Powershell is detected, \"cmd\" if command prompt is been\n detected or \"sh\" if it's neither of these.\n \"\"\"\n out = run_shell_command('echo $host.name', shell=True)[0]\n if out.strip() == 'ConsoleHost':\n return 'powershell'\n out = run_shell_command('echo $0', shell=True)[0]\n if out.strip() == '$0':\n return 'cmd'\n return 'sh'\n", "path": "coalib/misc/Shell.py"}]} | 1,579 | 135 |
gh_patches_debug_52259 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-6127 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`view.flows.add` command does not exist but the examples reference it
#### Problem Description
The `view.flows.add` command does not exist but the example `duplicate-modify-replay.py` shows this command being used.
`replay.client` seems to perform both the "add to view" and "replay" functions.
--- END ISSUE ---
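For illustration, a sketch of the example addon with the call swapped to `view.flows.duplicate`, which does exist and registers the duplicated flow with the interactive view (everything else is unchanged from the published example):
```python
"""Take incoming HTTP requests and replay them with modified parameters."""
from mitmproxy import ctx


def request(flow):
    # Avoid an infinite loop by not replaying already replayed requests
    if flow.is_replay == "request":
        return
    flow = flow.copy()
    # Only interactive tools have a view. If we have one, add a duplicate entry
    # for our flow. 'view.flows.add' is not a real command; 'view.flows.duplicate' is.
    if "view" in ctx.master.addons:
        ctx.master.commands.call("view.flows.duplicate", [flow])
    flow.request.path = "/changed"
    ctx.master.commands.call("replay.client", [flow])
```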
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/addons/duplicate-modify-replay.py`
Content:
```
1 """Take incoming HTTP requests and replay them with modified parameters."""
2 from mitmproxy import ctx
3
4
5 def request(flow):
6 # Avoid an infinite loop by not replaying already replayed requests
7 if flow.is_replay == "request":
8 return
9 flow = flow.copy()
10 # Only interactive tools have a view. If we have one, add a duplicate entry
11 # for our flow.
12 if "view" in ctx.master.addons:
13 ctx.master.commands.call("view.flows.add", [flow])
14 flow.request.path = "/changed"
15 ctx.master.commands.call("replay.client", [flow])
16
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/addons/duplicate-modify-replay.py b/examples/addons/duplicate-modify-replay.py
--- a/examples/addons/duplicate-modify-replay.py
+++ b/examples/addons/duplicate-modify-replay.py
@@ -10,6 +10,6 @@
# Only interactive tools have a view. If we have one, add a duplicate entry
# for our flow.
if "view" in ctx.master.addons:
- ctx.master.commands.call("view.flows.add", [flow])
+ ctx.master.commands.call("view.flows.duplicate", [flow])
flow.request.path = "/changed"
ctx.master.commands.call("replay.client", [flow])
| {"golden_diff": "diff --git a/examples/addons/duplicate-modify-replay.py b/examples/addons/duplicate-modify-replay.py\n--- a/examples/addons/duplicate-modify-replay.py\n+++ b/examples/addons/duplicate-modify-replay.py\n@@ -10,6 +10,6 @@\n # Only interactive tools have a view. If we have one, add a duplicate entry\n # for our flow.\n if \"view\" in ctx.master.addons:\n- ctx.master.commands.call(\"view.flows.add\", [flow])\n+ ctx.master.commands.call(\"view.flows.duplicate\", [flow])\n flow.request.path = \"/changed\"\n ctx.master.commands.call(\"replay.client\", [flow])\n", "issue": "`view.flows.add` command does not exist but the examples reference it\n#### Problem Description\r\n\r\nThe `view.flows.add` command does not exist but the example `duplicate-modify-replay.py` shows this command being used.\r\n\r\n`replay.client` seems to perform both the \"add to view\" and \"replay\" function.\n", "before_files": [{"content": "\"\"\"Take incoming HTTP requests and replay them with modified parameters.\"\"\"\nfrom mitmproxy import ctx\n\n\ndef request(flow):\n # Avoid an infinite loop by not replaying already replayed requests\n if flow.is_replay == \"request\":\n return\n flow = flow.copy()\n # Only interactive tools have a view. If we have one, add a duplicate entry\n # for our flow.\n if \"view\" in ctx.master.addons:\n ctx.master.commands.call(\"view.flows.add\", [flow])\n flow.request.path = \"/changed\"\n ctx.master.commands.call(\"replay.client\", [flow])\n", "path": "examples/addons/duplicate-modify-replay.py"}], "after_files": [{"content": "\"\"\"Take incoming HTTP requests and replay them with modified parameters.\"\"\"\nfrom mitmproxy import ctx\n\n\ndef request(flow):\n # Avoid an infinite loop by not replaying already replayed requests\n if flow.is_replay == \"request\":\n return\n flow = flow.copy()\n # Only interactive tools have a view. If we have one, add a duplicate entry\n # for our flow.\n if \"view\" in ctx.master.addons:\n ctx.master.commands.call(\"view.flows.duplicate\", [flow])\n flow.request.path = \"/changed\"\n ctx.master.commands.call(\"replay.client\", [flow])\n", "path": "examples/addons/duplicate-modify-replay.py"}]} | 487 | 152 |
gh_patches_debug_17692 | rasdani/github-patches | git_diff | sopel-irc__sopel-1262 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[wikipedia] Queries with e.g. & do not work properly
````
<dgw> ;w Miraculous Ladybug
<Kaede> KeyError: u'extract' (file "/usr/local/lib/python2.7/dist-packages/sopel/modules/wikipedia.py",
line 87, in mw_snippet)
[…]
<dgw> https://en.wikipedia.org/wiki/Miraculous:_Tales_of_Ladybug_%26_Cat_Noir
<Kaede> KeyError: u'extract' (file "/usr/local/lib/python2.7/dist-packages/sopel/modules/wikipedia.py",
line 87, in mw_snippet)
````
Basically, if a search returns a page title that contains `&`, e.g. "Miraculous: Tales of Ladybug & Cat Noir", the `wikipedia` module winds up trying to retrieve a snippet for, e.g., "Miraculous: Tales of Ladybug " and failing to do so because the page doesn't exist.
I have a fix for this in testing, but wanted to open the issue so anyone searching for the problem would find out that it is already on the radar.
Small fix for URLs with "+" in them
--- END ISSUE ---
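The root cause is that the page title returned by the search is interpolated into the MediaWiki API URL without percent-encoding, so an `&` in a title terminates the `titles=` parameter early. A standalone sketch (not the module's actual code) of the failure mode and the obvious remedy:
```python
# Illustration only: percent-encode the page title before appending it to the
# MediaWiki API URL, so '&' (and similar characters) stay part of the title.
from urllib.parse import quote  # the real module also keeps a Python 2 import path

title = 'Miraculous: Tales of Ladybug & Cat Noir'
base = ('https://en.wikipedia.org/w/api.php?format=json'
        '&action=query&prop=extracts&exintro&explaintext'
        '&exchars=300&redirects&titles=')

print(base + title.replace(' ', '_'))          # '&' starts a bogus parameter
print(base + quote(title.replace(' ', '_')))   # title survives intact
```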
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sopel/modules/wikipedia.py`
Content:
```
1 # coding=utf-8
2 # Copyright 2013 Elsie Powell - embolalia.com
3 # Licensed under the Eiffel Forum License 2.
4 from __future__ import unicode_literals, absolute_import, print_function, division
5 from sopel import web, tools
6 from sopel.config.types import StaticSection, ValidatedAttribute
7 from sopel.module import NOLIMIT, commands, example, rule
8 import json
9 import re
10
11 import sys
12 if sys.version_info.major < 3:
13 from urlparse import unquote as _unquote
14 unquote = lambda s: _unquote(s.encode('utf-8')).decode('utf-8')
15 else:
16 from urllib.parse import unquote
17
18 REDIRECT = re.compile(r'^REDIRECT (.*)')
19
20
21 class WikipediaSection(StaticSection):
22 default_lang = ValidatedAttribute('default_lang', default='en')
23 """The default language to find articles from."""
24 lang_per_channel = ValidatedAttribute('lang_per_channel')
25
26
27 def setup(bot):
28 bot.config.define_section('wikipedia', WikipediaSection)
29
30 regex = re.compile('([a-z]+).(wikipedia.org/wiki/)([^ ]+)')
31 if not bot.memory.contains('url_callbacks'):
32 bot.memory['url_callbacks'] = tools.SopelMemory()
33 bot.memory['url_callbacks'][regex] = mw_info
34
35
36 def configure(config):
37 config.define_section('wikipedia', WikipediaSection)
38 config.wikipedia.configure_setting(
39 'default_lang',
40 "Enter the default language to find articles from."
41 )
42
43
44 def mw_search(server, query, num):
45 """
46 Searches the specified MediaWiki server for the given query, and returns
47 the specified number of results.
48 """
49 search_url = ('http://%s/w/api.php?format=json&action=query'
50 '&list=search&srlimit=%d&srprop=timestamp&srwhat=text'
51 '&srsearch=') % (server, num)
52 search_url += query
53 query = json.loads(web.get(search_url))
54 if 'query' in query:
55 query = query['query']['search']
56 return [r['title'] for r in query]
57 else:
58 return None
59
60
61 def say_snippet(bot, server, query, show_url=True):
62 page_name = query.replace('_', ' ')
63 query = query.replace(' ', '_')
64 try:
65 snippet = mw_snippet(server, query)
66 except KeyError:
67 if show_url:
68 bot.say("[WIKIPEDIA] Error fetching snippet for \"{}\".".format(page_name))
69 return
70 msg = '[WIKIPEDIA] {} | "{}"'.format(page_name, snippet)
71 if show_url:
72 msg = msg + ' | https://{}/wiki/{}'.format(server, query)
73 bot.say(msg)
74
75
76 def mw_snippet(server, query):
77 """
78 Retrives a snippet of the specified length from the given page on the given
79 server.
80 """
81 snippet_url = ('https://' + server + '/w/api.php?format=json'
82 '&action=query&prop=extracts&exintro&explaintext'
83 '&exchars=300&redirects&titles=')
84 snippet_url += query
85 snippet = json.loads(web.get(snippet_url))
86 snippet = snippet['query']['pages']
87
88 # For some reason, the API gives the page *number* as the key, so we just
89 # grab the first page number in the results.
90 snippet = snippet[list(snippet.keys())[0]]
91
92 return snippet['extract']
93
94
95 @rule('.*\/([a-z]+\.wikipedia.org)\/wiki\/((?!File\:)[^ ]+).*')
96 def mw_info(bot, trigger, found_match=None):
97 """
98 Retrives a snippet of the specified length from the given page on the given
99 server.
100 """
101 match = found_match or trigger
102 say_snippet(bot, match.group(1), unquote(match.group(2)), show_url=False)
103
104
105 @commands('w', 'wiki', 'wik')
106 @example('.w San Francisco')
107 def wikipedia(bot, trigger):
108 lang = bot.config.wikipedia.default_lang
109
110 # change lang if channel has custom language set
111 if (trigger.sender and not trigger.sender.is_nick() and
112 bot.config.wikipedia.lang_per_channel):
113 customlang = re.search('(' + trigger.sender + '):(\w+)',
114 bot.config.wikipedia.lang_per_channel)
115 if customlang is not None:
116 lang = customlang.group(2)
117
118 if trigger.group(2) is None:
119 bot.reply("What do you want me to look up?")
120 return NOLIMIT
121
122 query = trigger.group(2)
123 args = re.search(r'^-([a-z]{2,12})\s(.*)', query)
124 if args is not None:
125 lang = args.group(1)
126 query = args.group(2)
127
128 if not query:
129 bot.reply('What do you want me to look up?')
130 return NOLIMIT
131 server = lang + '.wikipedia.org'
132 query = mw_search(server, query, 1)
133 if not query:
134 bot.reply("I can't find any results for that.")
135 return NOLIMIT
136 else:
137 query = query[0]
138 say_snippet(bot, server, query)
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sopel/modules/wikipedia.py b/sopel/modules/wikipedia.py
--- a/sopel/modules/wikipedia.py
+++ b/sopel/modules/wikipedia.py
@@ -10,10 +10,12 @@
import sys
if sys.version_info.major < 3:
+ from urllib import quote as _quote
from urlparse import unquote as _unquote
+ quote = lambda s: _quote(s.encode('utf-8')).decode('utf-8')
unquote = lambda s: _unquote(s.encode('utf-8')).decode('utf-8')
else:
- from urllib.parse import unquote
+ from urllib.parse import quote, unquote
REDIRECT = re.compile(r'^REDIRECT (.*)')
@@ -60,7 +62,7 @@
def say_snippet(bot, server, query, show_url=True):
page_name = query.replace('_', ' ')
- query = query.replace(' ', '_')
+ query = quote(query.replace(' ', '_'))
try:
snippet = mw_snippet(server, query)
except KeyError:
| {"golden_diff": "diff --git a/sopel/modules/wikipedia.py b/sopel/modules/wikipedia.py\n--- a/sopel/modules/wikipedia.py\n+++ b/sopel/modules/wikipedia.py\n@@ -10,10 +10,12 @@\n \n import sys\n if sys.version_info.major < 3:\n+ from urllib import quote as _quote\n from urlparse import unquote as _unquote\n+ quote = lambda s: _quote(s.encode('utf-8')).decode('utf-8')\n unquote = lambda s: _unquote(s.encode('utf-8')).decode('utf-8')\n else:\n- from urllib.parse import unquote\n+ from urllib.parse import quote, unquote\n \n REDIRECT = re.compile(r'^REDIRECT (.*)')\n \n@@ -60,7 +62,7 @@\n \n def say_snippet(bot, server, query, show_url=True):\n page_name = query.replace('_', ' ')\n- query = query.replace(' ', '_')\n+ query = quote(query.replace(' ', '_'))\n try:\n snippet = mw_snippet(server, query)\n except KeyError:\n", "issue": "[wikipedia] Queries with e.g. & do not work properly\n````\r\n<dgw> ;w Miraculous Ladybug\r\n<Kaede> KeyError: u'extract' (file \"/usr/local/lib/python2.7/dist-packages/sopel/modules/wikipedia.py\",\r\n line 87, in mw_snippet)\r\n[\u2026]\r\n<dgw> https://en.wikipedia.org/wiki/Miraculous:_Tales_of_Ladybug_%26_Cat_Noir\r\n<Kaede> KeyError: u'extract' (file \"/usr/local/lib/python2.7/dist-packages/sopel/modules/wikipedia.py\",\r\n line 87, in mw_snippet)\r\n````\r\n\r\nBasically, if a search returns a page title that contains `&`, e.g. \"Miraculous: Tales of Ladybug & Cat Noir\", the `wikipedia` module winds up trying to retrieve a snippet for, e.g., \"Miraculous: Tales of Ladybug \" and failing to do so because the page doesn't exist.\r\n\r\nI have a fix for this in testing, but wanted to open the issue so anyone searching for the problem would find out that it is already on the radar.\nSmall fix for URLs with \"+\" in them\n\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2013 Elsie Powell - embolalia.com\n# Licensed under the Eiffel Forum License 2.\nfrom __future__ import unicode_literals, absolute_import, print_function, division\nfrom sopel import web, tools\nfrom sopel.config.types import StaticSection, ValidatedAttribute\nfrom sopel.module import NOLIMIT, commands, example, rule\nimport json\nimport re\n\nimport sys\nif sys.version_info.major < 3:\n from urlparse import unquote as _unquote\n unquote = lambda s: _unquote(s.encode('utf-8')).decode('utf-8')\nelse:\n from urllib.parse import unquote\n\nREDIRECT = re.compile(r'^REDIRECT (.*)')\n\n\nclass WikipediaSection(StaticSection):\n default_lang = ValidatedAttribute('default_lang', default='en')\n \"\"\"The default language to find articles from.\"\"\"\n lang_per_channel = ValidatedAttribute('lang_per_channel')\n\n\ndef setup(bot):\n bot.config.define_section('wikipedia', WikipediaSection)\n\n regex = re.compile('([a-z]+).(wikipedia.org/wiki/)([^ ]+)')\n if not bot.memory.contains('url_callbacks'):\n bot.memory['url_callbacks'] = tools.SopelMemory()\n bot.memory['url_callbacks'][regex] = mw_info\n\n\ndef configure(config):\n config.define_section('wikipedia', WikipediaSection)\n config.wikipedia.configure_setting(\n 'default_lang',\n \"Enter the default language to find articles from.\"\n )\n\n\ndef mw_search(server, query, num):\n \"\"\"\n Searches the specified MediaWiki server for the given query, and returns\n the specified number of results.\n \"\"\"\n search_url = ('http://%s/w/api.php?format=json&action=query'\n '&list=search&srlimit=%d&srprop=timestamp&srwhat=text'\n '&srsearch=') % (server, num)\n search_url += query\n query = json.loads(web.get(search_url))\n if 'query' in query:\n 
query = query['query']['search']\n return [r['title'] for r in query]\n else:\n return None\n\n\ndef say_snippet(bot, server, query, show_url=True):\n page_name = query.replace('_', ' ')\n query = query.replace(' ', '_')\n try:\n snippet = mw_snippet(server, query)\n except KeyError:\n if show_url:\n bot.say(\"[WIKIPEDIA] Error fetching snippet for \\\"{}\\\".\".format(page_name))\n return\n msg = '[WIKIPEDIA] {} | \"{}\"'.format(page_name, snippet)\n if show_url:\n msg = msg + ' | https://{}/wiki/{}'.format(server, query)\n bot.say(msg)\n\n\ndef mw_snippet(server, query):\n \"\"\"\n Retrives a snippet of the specified length from the given page on the given\n server.\n \"\"\"\n snippet_url = ('https://' + server + '/w/api.php?format=json'\n '&action=query&prop=extracts&exintro&explaintext'\n '&exchars=300&redirects&titles=')\n snippet_url += query\n snippet = json.loads(web.get(snippet_url))\n snippet = snippet['query']['pages']\n\n # For some reason, the API gives the page *number* as the key, so we just\n # grab the first page number in the results.\n snippet = snippet[list(snippet.keys())[0]]\n\n return snippet['extract']\n\n\n@rule('.*\\/([a-z]+\\.wikipedia.org)\\/wiki\\/((?!File\\:)[^ ]+).*')\ndef mw_info(bot, trigger, found_match=None):\n \"\"\"\n Retrives a snippet of the specified length from the given page on the given\n server.\n \"\"\"\n match = found_match or trigger\n say_snippet(bot, match.group(1), unquote(match.group(2)), show_url=False)\n\n\n@commands('w', 'wiki', 'wik')\n@example('.w San Francisco')\ndef wikipedia(bot, trigger):\n lang = bot.config.wikipedia.default_lang\n\n # change lang if channel has custom language set\n if (trigger.sender and not trigger.sender.is_nick() and\n bot.config.wikipedia.lang_per_channel):\n customlang = re.search('(' + trigger.sender + '):(\\w+)',\n bot.config.wikipedia.lang_per_channel)\n if customlang is not None:\n lang = customlang.group(2)\n\n if trigger.group(2) is None:\n bot.reply(\"What do you want me to look up?\")\n return NOLIMIT\n\n query = trigger.group(2)\n args = re.search(r'^-([a-z]{2,12})\\s(.*)', query)\n if args is not None:\n lang = args.group(1)\n query = args.group(2)\n\n if not query:\n bot.reply('What do you want me to look up?')\n return NOLIMIT\n server = lang + '.wikipedia.org'\n query = mw_search(server, query, 1)\n if not query:\n bot.reply(\"I can't find any results for that.\")\n return NOLIMIT\n else:\n query = query[0]\n say_snippet(bot, server, query)\n", "path": "sopel/modules/wikipedia.py"}], "after_files": [{"content": "# coding=utf-8\n# Copyright 2013 Elsie Powell - embolalia.com\n# Licensed under the Eiffel Forum License 2.\nfrom __future__ import unicode_literals, absolute_import, print_function, division\nfrom sopel import web, tools\nfrom sopel.config.types import StaticSection, ValidatedAttribute\nfrom sopel.module import NOLIMIT, commands, example, rule\nimport json\nimport re\n\nimport sys\nif sys.version_info.major < 3:\n from urllib import quote as _quote\n from urlparse import unquote as _unquote\n quote = lambda s: _quote(s.encode('utf-8')).decode('utf-8')\n unquote = lambda s: _unquote(s.encode('utf-8')).decode('utf-8')\nelse:\n from urllib.parse import quote, unquote\n\nREDIRECT = re.compile(r'^REDIRECT (.*)')\n\n\nclass WikipediaSection(StaticSection):\n default_lang = ValidatedAttribute('default_lang', default='en')\n \"\"\"The default language to find articles from.\"\"\"\n lang_per_channel = ValidatedAttribute('lang_per_channel')\n\n\ndef setup(bot):\n 
bot.config.define_section('wikipedia', WikipediaSection)\n\n regex = re.compile('([a-z]+).(wikipedia.org/wiki/)([^ ]+)')\n if not bot.memory.contains('url_callbacks'):\n bot.memory['url_callbacks'] = tools.SopelMemory()\n bot.memory['url_callbacks'][regex] = mw_info\n\n\ndef configure(config):\n config.define_section('wikipedia', WikipediaSection)\n config.wikipedia.configure_setting(\n 'default_lang',\n \"Enter the default language to find articles from.\"\n )\n\n\ndef mw_search(server, query, num):\n \"\"\"\n Searches the specified MediaWiki server for the given query, and returns\n the specified number of results.\n \"\"\"\n search_url = ('http://%s/w/api.php?format=json&action=query'\n '&list=search&srlimit=%d&srprop=timestamp&srwhat=text'\n '&srsearch=') % (server, num)\n search_url += query\n query = json.loads(web.get(search_url))\n if 'query' in query:\n query = query['query']['search']\n return [r['title'] for r in query]\n else:\n return None\n\n\ndef say_snippet(bot, server, query, show_url=True):\n page_name = query.replace('_', ' ')\n query = quote(query.replace(' ', '_'))\n try:\n snippet = mw_snippet(server, query)\n except KeyError:\n if show_url:\n bot.say(\"[WIKIPEDIA] Error fetching snippet for \\\"{}\\\".\".format(page_name))\n return\n msg = '[WIKIPEDIA] {} | \"{}\"'.format(page_name, snippet)\n if show_url:\n msg = msg + ' | https://{}/wiki/{}'.format(server, query)\n bot.say(msg)\n\n\ndef mw_snippet(server, query):\n \"\"\"\n Retrives a snippet of the specified length from the given page on the given\n server.\n \"\"\"\n snippet_url = ('https://' + server + '/w/api.php?format=json'\n '&action=query&prop=extracts&exintro&explaintext'\n '&exchars=300&redirects&titles=')\n snippet_url += query\n snippet = json.loads(web.get(snippet_url))\n snippet = snippet['query']['pages']\n\n # For some reason, the API gives the page *number* as the key, so we just\n # grab the first page number in the results.\n snippet = snippet[list(snippet.keys())[0]]\n\n return snippet['extract']\n\n\n@rule('.*\\/([a-z]+\\.wikipedia.org)\\/wiki\\/((?!File\\:)[^ ]+).*')\ndef mw_info(bot, trigger, found_match=None):\n \"\"\"\n Retrives a snippet of the specified length from the given page on the given\n server.\n \"\"\"\n match = found_match or trigger\n say_snippet(bot, match.group(1), unquote(match.group(2)), show_url=False)\n\n\n@commands('w', 'wiki', 'wik')\n@example('.w San Francisco')\ndef wikipedia(bot, trigger):\n lang = bot.config.wikipedia.default_lang\n\n # change lang if channel has custom language set\n if (trigger.sender and not trigger.sender.is_nick() and\n bot.config.wikipedia.lang_per_channel):\n customlang = re.search('(' + trigger.sender + '):(\\w+)',\n bot.config.wikipedia.lang_per_channel)\n if customlang is not None:\n lang = customlang.group(2)\n\n if trigger.group(2) is None:\n bot.reply(\"What do you want me to look up?\")\n return NOLIMIT\n\n query = trigger.group(2)\n args = re.search(r'^-([a-z]{2,12})\\s(.*)', query)\n if args is not None:\n lang = args.group(1)\n query = args.group(2)\n\n if not query:\n bot.reply('What do you want me to look up?')\n return NOLIMIT\n server = lang + '.wikipedia.org'\n query = mw_search(server, query, 1)\n if not query:\n bot.reply(\"I can't find any results for that.\")\n return NOLIMIT\n else:\n query = query[0]\n say_snippet(bot, server, query)\n", "path": "sopel/modules/wikipedia.py"}]} | 1,965 | 242 |
gh_patches_debug_33382 | rasdani/github-patches | git_diff | conan-io__conan-center-index-438 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[package] sqlite3/all: Discussion of default options
Looking at the options of the sqlite3 recipe, you can see that most of them default to false. However, some recipes, such as qt from bincrafters, already require this package with specific options enabled:
https://github.com/bincrafters/conan-qt/blob/594e40f036adcbf22d23dc2da228f389d45a76dc/conanfile.py#L256
Moreover, sqlite3 is shipped with a custom *CMakeLists.txt* that defines the CMake options, so we could say there are *no defaults* for these options recommended by the library authors/maintainers.
Additionally, these options are enabled by default in the vcpkg repo:
```
-DSQLITE_ENABLE_RTREE
-DSQLITE_ENABLE_UNLOCK_NOTIFY
-DSQLITE_ENABLE_COLUMN_METADATA
```
This issue is open for discussion regarding the options of this package and its default values.
cc/ @ericLemanissier @SSE4 @uilianries
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/sqlite3/all/conanfile.py`
Content:
```
1 import os
2 from conans import ConanFile, CMake, tools
3
4
5 class ConanSqlite3(ConanFile):
6 name = "sqlite3"
7 description = "Self-contained, serverless, in-process SQL database engine."
8 url = "https://github.com/conan-io/conan-center-index"
9 homepage = "https://www.sqlite.org"
10 topics = ("conan", "sqlite", "database", "sql", "serverless")
11 license = "Public Domain"
12 generators = "cmake"
13 settings = "os", "compiler", "arch", "build_type"
14 exports_sources = ["CMakeLists.txt"]
15 options = {"shared": [True, False],
16 "fPIC": [True, False],
17 "threadsafe": [0, 1, 2],
18 "enable_column_metadata": [True, False],
19 "enable_explain_comments": [True, False],
20 "enable_fts3": [True, False],
21 "enable_fts4": [True, False],
22 "enable_fts5": [True, False],
23 "enable_json1": [True, False],
24 "enable_rtree": [True, False],
25 "omit_load_extension": [True, False]
26 }
27 default_options = {"shared": False,
28 "fPIC": True,
29 "threadsafe": 1,
30 "enable_column_metadata": False,
31 "enable_explain_comments": False,
32 "enable_fts3": False,
33 "enable_fts4": False,
34 "enable_fts5": False,
35 "enable_json1": False,
36 "enable_rtree": False,
37 "omit_load_extension": False
38 }
39 _source_subfolder = "source_subfolder"
40
41 def source(self):
42 tools.get(**self.conan_data["sources"][self.version])
43 url = self.conan_data["sources"][self.version]["url"]
44 archive_name = os.path.basename(url)
45 archive_name = os.path.splitext(archive_name)[0]
46 os.rename(archive_name, self._source_subfolder)
47
48 def config_options(self):
49 if self.settings.os == "Windows":
50 del self.options.fPIC
51
52 def configure(self):
53 del self.settings.compiler.libcxx
54 del self.settings.compiler.cppstd
55
56 def _configure_cmake(self):
57 cmake = CMake(self)
58 cmake.definitions["THREADSAFE"] = self.options.threadsafe
59 cmake.definitions["ENABLE_COLUMN_METADATA"] = self.options.enable_column_metadata
60 cmake.definitions["ENABLE_EXPLAIN_COMMENTS"] = self.options.enable_explain_comments
61 cmake.definitions["ENABLE_FTS3"] = self.options.enable_fts3
62 cmake.definitions["ENABLE_FTS4"] = self.options.enable_fts4
63 cmake.definitions["ENABLE_FTS5"] = self.options.enable_fts5
64 cmake.definitions["ENABLE_JSON1"] = self.options.enable_json1
65 cmake.definitions["ENABLE_RTREE"] = self.options.enable_rtree
66 cmake.definitions["OMIT_LOAD_EXTENSION"] = self.options.omit_load_extension
67 cmake.definitions["HAVE_FDATASYNC"] = True
68 cmake.definitions["HAVE_GMTIME_R"] = True
69 cmake.definitions["HAVE_LOCALTIME_R"] = True
70 cmake.definitions["HAVE_POSIX_FALLOCATE"] = True
71 cmake.definitions["HAVE_STRERROR_R"] = True
72 cmake.definitions["HAVE_USLEEP"] = True
73 if self.settings.os == "Windows":
74 cmake.definitions["HAVE_LOCALTIME_R"] = False
75 cmake.definitions["HAVE_POSIX_FALLOCATE"] = False
76 if tools.is_apple_os(self.settings.os):
77 cmake.definitions["HAVE_POSIX_FALLOCATE"] = False
78 if self.settings.os == "Android":
79 cmake.definitions["HAVE_POSIX_FALLOCATE"] = False
80 cmake.configure()
81 return cmake
82
83 def build(self):
84 cmake = self._configure_cmake()
85 cmake.build()
86
87 def package(self):
88 header = tools.load(os.path.join(self._source_subfolder, "sqlite3.h"))
89 license_content = header[3:header.find("***", 1)]
90 tools.save("LICENSE", license_content)
91
92 self.copy("LICENSE", dst="licenses")
93
94 cmake = self._configure_cmake()
95 cmake.install()
96
97 def package_info(self):
98 self.cpp_info.libs = tools.collect_libs(self)
99 if self.settings.os == "Linux":
100 if self.options.threadsafe:
101 self.cpp_info.libs.append("pthread")
102 if self.options.omit_load_extension == "False":
103 self.cpp_info.libs.append("dl")
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/recipes/sqlite3/all/conanfile.py b/recipes/sqlite3/all/conanfile.py
--- a/recipes/sqlite3/all/conanfile.py
+++ b/recipes/sqlite3/all/conanfile.py
@@ -22,19 +22,21 @@
"enable_fts5": [True, False],
"enable_json1": [True, False],
"enable_rtree": [True, False],
- "omit_load_extension": [True, False]
+ "omit_load_extension": [True, False],
+ "enable_unlock_notify": [True, False]
}
default_options = {"shared": False,
"fPIC": True,
"threadsafe": 1,
- "enable_column_metadata": False,
+ "enable_column_metadata": True,
"enable_explain_comments": False,
"enable_fts3": False,
"enable_fts4": False,
"enable_fts5": False,
"enable_json1": False,
- "enable_rtree": False,
- "omit_load_extension": False
+ "enable_rtree": True,
+ "omit_load_extension": False,
+ "enable_unlock_notify": True
}
_source_subfolder = "source_subfolder"
@@ -64,6 +66,7 @@
cmake.definitions["ENABLE_JSON1"] = self.options.enable_json1
cmake.definitions["ENABLE_RTREE"] = self.options.enable_rtree
cmake.definitions["OMIT_LOAD_EXTENSION"] = self.options.omit_load_extension
+ cmake.definitions["SQLITE_ENABLE_UNLOCK_NOTIFY"] = self.options.enable_unlock_notify
cmake.definitions["HAVE_FDATASYNC"] = True
cmake.definitions["HAVE_GMTIME_R"] = True
cmake.definitions["HAVE_LOCALTIME_R"] = True
| {"golden_diff": "diff --git a/recipes/sqlite3/all/conanfile.py b/recipes/sqlite3/all/conanfile.py\n--- a/recipes/sqlite3/all/conanfile.py\n+++ b/recipes/sqlite3/all/conanfile.py\n@@ -22,19 +22,21 @@\n \"enable_fts5\": [True, False],\n \"enable_json1\": [True, False],\n \"enable_rtree\": [True, False],\n- \"omit_load_extension\": [True, False]\n+ \"omit_load_extension\": [True, False],\n+ \"enable_unlock_notify\": [True, False]\n }\n default_options = {\"shared\": False,\n \"fPIC\": True,\n \"threadsafe\": 1,\n- \"enable_column_metadata\": False,\n+ \"enable_column_metadata\": True,\n \"enable_explain_comments\": False,\n \"enable_fts3\": False,\n \"enable_fts4\": False,\n \"enable_fts5\": False,\n \"enable_json1\": False,\n- \"enable_rtree\": False,\n- \"omit_load_extension\": False\n+ \"enable_rtree\": True,\n+ \"omit_load_extension\": False,\n+ \"enable_unlock_notify\": True\n }\n _source_subfolder = \"source_subfolder\"\n \n@@ -64,6 +66,7 @@\n cmake.definitions[\"ENABLE_JSON1\"] = self.options.enable_json1\n cmake.definitions[\"ENABLE_RTREE\"] = self.options.enable_rtree\n cmake.definitions[\"OMIT_LOAD_EXTENSION\"] = self.options.omit_load_extension\n+ cmake.definitions[\"SQLITE_ENABLE_UNLOCK_NOTIFY\"] = self.options.enable_unlock_notify \n cmake.definitions[\"HAVE_FDATASYNC\"] = True\n cmake.definitions[\"HAVE_GMTIME_R\"] = True\n cmake.definitions[\"HAVE_LOCALTIME_R\"] = True\n", "issue": "[package] sqlite3/all: Discussion of default options\nHaving a look at the options of the sqlite3 recipe you see most of them defaulted to false. However, there are some recipes like qt from bincrafters that already require this package with specific option enabled:\r\n\r\nhttps://github.com/bincrafters/conan-qt/blob/594e40f036adcbf22d23dc2da228f389d45a76dc/conanfile.py#L256\r\n\r\nMoreover, the sqlite3 is provided with a custom *CMakeLists.txt* with the CMake options, so we could say there are *no defaults* for these options recommended by the library authors/maintainers.\r\n\r\nAdditionally, these options are enabled by default in vcpkg repo:\r\n\r\n```\r\n-DSQLITE_ENABLE_RTREE\r\n-DSQLITE_ENABLE_UNLOCK_NOTIFY\r\n-DSQLITE_ENABLE_COLUMN_METADATA\r\n```\r\n\r\nThis issue is open for discussion regarding the options of this package and its default values.\r\n\r\ncc/ @ericLemanissier @SSE4 @uilianries \n", "before_files": [{"content": "import os\nfrom conans import ConanFile, CMake, tools\n\n\nclass ConanSqlite3(ConanFile):\n name = \"sqlite3\"\n description = \"Self-contained, serverless, in-process SQL database engine.\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://www.sqlite.org\"\n topics = (\"conan\", \"sqlite\", \"database\", \"sql\", \"serverless\")\n license = \"Public Domain\"\n generators = \"cmake\"\n settings = \"os\", \"compiler\", \"arch\", \"build_type\"\n exports_sources = [\"CMakeLists.txt\"]\n options = {\"shared\": [True, False],\n \"fPIC\": [True, False],\n \"threadsafe\": [0, 1, 2],\n \"enable_column_metadata\": [True, False],\n \"enable_explain_comments\": [True, False],\n \"enable_fts3\": [True, False],\n \"enable_fts4\": [True, False],\n \"enable_fts5\": [True, False],\n \"enable_json1\": [True, False],\n \"enable_rtree\": [True, False],\n \"omit_load_extension\": [True, False]\n }\n default_options = {\"shared\": False,\n \"fPIC\": True,\n \"threadsafe\": 1,\n \"enable_column_metadata\": False,\n \"enable_explain_comments\": False,\n \"enable_fts3\": False,\n \"enable_fts4\": False,\n \"enable_fts5\": False,\n \"enable_json1\": False,\n 
\"enable_rtree\": False,\n \"omit_load_extension\": False\n }\n _source_subfolder = \"source_subfolder\"\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n url = self.conan_data[\"sources\"][self.version][\"url\"]\n archive_name = os.path.basename(url)\n archive_name = os.path.splitext(archive_name)[0]\n os.rename(archive_name, self._source_subfolder)\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def _configure_cmake(self):\n cmake = CMake(self)\n cmake.definitions[\"THREADSAFE\"] = self.options.threadsafe\n cmake.definitions[\"ENABLE_COLUMN_METADATA\"] = self.options.enable_column_metadata\n cmake.definitions[\"ENABLE_EXPLAIN_COMMENTS\"] = self.options.enable_explain_comments\n cmake.definitions[\"ENABLE_FTS3\"] = self.options.enable_fts3\n cmake.definitions[\"ENABLE_FTS4\"] = self.options.enable_fts4\n cmake.definitions[\"ENABLE_FTS5\"] = self.options.enable_fts5\n cmake.definitions[\"ENABLE_JSON1\"] = self.options.enable_json1\n cmake.definitions[\"ENABLE_RTREE\"] = self.options.enable_rtree\n cmake.definitions[\"OMIT_LOAD_EXTENSION\"] = self.options.omit_load_extension\n cmake.definitions[\"HAVE_FDATASYNC\"] = True\n cmake.definitions[\"HAVE_GMTIME_R\"] = True\n cmake.definitions[\"HAVE_LOCALTIME_R\"] = True\n cmake.definitions[\"HAVE_POSIX_FALLOCATE\"] = True\n cmake.definitions[\"HAVE_STRERROR_R\"] = True\n cmake.definitions[\"HAVE_USLEEP\"] = True\n if self.settings.os == \"Windows\":\n cmake.definitions[\"HAVE_LOCALTIME_R\"] = False\n cmake.definitions[\"HAVE_POSIX_FALLOCATE\"] = False\n if tools.is_apple_os(self.settings.os):\n cmake.definitions[\"HAVE_POSIX_FALLOCATE\"] = False\n if self.settings.os == \"Android\":\n cmake.definitions[\"HAVE_POSIX_FALLOCATE\"] = False\n cmake.configure()\n return cmake\n\n def build(self):\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n header = tools.load(os.path.join(self._source_subfolder, \"sqlite3.h\"))\n license_content = header[3:header.find(\"***\", 1)]\n tools.save(\"LICENSE\", license_content)\n\n self.copy(\"LICENSE\", dst=\"licenses\")\n\n cmake = self._configure_cmake()\n cmake.install()\n\n def package_info(self):\n self.cpp_info.libs = tools.collect_libs(self)\n if self.settings.os == \"Linux\":\n if self.options.threadsafe:\n self.cpp_info.libs.append(\"pthread\")\n if self.options.omit_load_extension == \"False\":\n self.cpp_info.libs.append(\"dl\")\n", "path": "recipes/sqlite3/all/conanfile.py"}], "after_files": [{"content": "import os\nfrom conans import ConanFile, CMake, tools\n\n\nclass ConanSqlite3(ConanFile):\n name = \"sqlite3\"\n description = \"Self-contained, serverless, in-process SQL database engine.\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://www.sqlite.org\"\n topics = (\"conan\", \"sqlite\", \"database\", \"sql\", \"serverless\")\n license = \"Public Domain\"\n generators = \"cmake\"\n settings = \"os\", \"compiler\", \"arch\", \"build_type\"\n exports_sources = [\"CMakeLists.txt\"]\n options = {\"shared\": [True, False],\n \"fPIC\": [True, False],\n \"threadsafe\": [0, 1, 2],\n \"enable_column_metadata\": [True, False],\n \"enable_explain_comments\": [True, False],\n \"enable_fts3\": [True, False],\n \"enable_fts4\": [True, False],\n \"enable_fts5\": [True, False],\n \"enable_json1\": [True, False],\n \"enable_rtree\": [True, False],\n \"omit_load_extension\": [True, 
False],\n \"enable_unlock_notify\": [True, False]\n }\n default_options = {\"shared\": False,\n \"fPIC\": True,\n \"threadsafe\": 1,\n \"enable_column_metadata\": True,\n \"enable_explain_comments\": False,\n \"enable_fts3\": False,\n \"enable_fts4\": False,\n \"enable_fts5\": False,\n \"enable_json1\": False,\n \"enable_rtree\": True,\n \"omit_load_extension\": False,\n \"enable_unlock_notify\": True\n }\n _source_subfolder = \"source_subfolder\"\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n url = self.conan_data[\"sources\"][self.version][\"url\"]\n archive_name = os.path.basename(url)\n archive_name = os.path.splitext(archive_name)[0]\n os.rename(archive_name, self._source_subfolder)\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def _configure_cmake(self):\n cmake = CMake(self)\n cmake.definitions[\"THREADSAFE\"] = self.options.threadsafe\n cmake.definitions[\"ENABLE_COLUMN_METADATA\"] = self.options.enable_column_metadata\n cmake.definitions[\"ENABLE_EXPLAIN_COMMENTS\"] = self.options.enable_explain_comments\n cmake.definitions[\"ENABLE_FTS3\"] = self.options.enable_fts3\n cmake.definitions[\"ENABLE_FTS4\"] = self.options.enable_fts4\n cmake.definitions[\"ENABLE_FTS5\"] = self.options.enable_fts5\n cmake.definitions[\"ENABLE_JSON1\"] = self.options.enable_json1\n cmake.definitions[\"ENABLE_RTREE\"] = self.options.enable_rtree\n cmake.definitions[\"OMIT_LOAD_EXTENSION\"] = self.options.omit_load_extension\n cmake.definitions[\"SQLITE_ENABLE_UNLOCK_NOTIFY\"] = self.options.enable_unlock_notify \n cmake.definitions[\"HAVE_FDATASYNC\"] = True\n cmake.definitions[\"HAVE_GMTIME_R\"] = True\n cmake.definitions[\"HAVE_LOCALTIME_R\"] = True\n cmake.definitions[\"HAVE_POSIX_FALLOCATE\"] = True\n cmake.definitions[\"HAVE_STRERROR_R\"] = True\n cmake.definitions[\"HAVE_USLEEP\"] = True\n if self.settings.os == \"Windows\":\n cmake.definitions[\"HAVE_LOCALTIME_R\"] = False\n cmake.definitions[\"HAVE_POSIX_FALLOCATE\"] = False\n if tools.is_apple_os(self.settings.os):\n cmake.definitions[\"HAVE_POSIX_FALLOCATE\"] = False\n if self.settings.os == \"Android\":\n cmake.definitions[\"HAVE_POSIX_FALLOCATE\"] = False\n cmake.configure()\n return cmake\n\n def build(self):\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n header = tools.load(os.path.join(self._source_subfolder, \"sqlite3.h\"))\n license_content = header[3:header.find(\"***\", 1)]\n tools.save(\"LICENSE\", license_content)\n\n self.copy(\"LICENSE\", dst=\"licenses\")\n\n cmake = self._configure_cmake()\n cmake.install()\n\n def package_info(self):\n self.cpp_info.libs = tools.collect_libs(self)\n if self.settings.os == \"Linux\":\n if self.options.threadsafe:\n self.cpp_info.libs.append(\"pthread\")\n if self.options.omit_load_extension == \"False\":\n self.cpp_info.libs.append(\"dl\")\n", "path": "recipes/sqlite3/all/conanfile.py"}]} | 1,689 | 405 |
gh_patches_debug_31335 | rasdani/github-patches | git_diff | pfnet__pytorch-pfn-extras-367 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ProgressBar extension doesn't show correct `iters/sec` in ppe==0.5.0
The `ProgressBar` extension shows `iters/sec` as well as the progress bar, but I noticed that in ppe==0.5.0 it always reports `inf iters/sec`.
### ppe==0.5.0
Minimal reproduction code:
```python
# progress.py (based on https://git.io/JzHP3)
import pytorch_pfn_extras as ppe
import time
print("PPE =", ppe.__version__)
max_epochs = 3
iters_per_epoch = 20
manager = ppe.training.ExtensionsManager({}, {}, max_epochs, iters_per_epoch=iters_per_epoch)
manager.extend(ppe.training.extensions.ProgressBar(training_length=None, update_interval=1, bar_length=40))
for epoch in range(max_epochs):
for _ in range(iters_per_epoch):
with manager.run_iteration():
pass
time.sleep(0.1)
```
Result:
```
$ python progress.py
PPE = 0.5.0
total [####################....................] 51.67%
this epoch [######################..................] 55.00%
31 iter, 1 epoch / 3 epochs
inf iters/sec. Estimated time to finish: 0:00:00.
```
### ppe==0.4.5 (expected behavior)
It shows the correct iters/sec.
```
$ python progress.py
PPE = 0.4.5
total [#################.......................] 43.33%
this epoch [############............................] 30.00%
26 iter, 1 epoch / 3 epochs
10.825 iters/sec. Estimated time to finish: 0:00:03.140780.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pytorch_pfn_extras/training/extensions/util.py`
Content:
```
1 import os
2 import sys
3 import queue
4 import time
5 from typing import NamedTuple, Optional, Sequence, TextIO, Tuple, TYPE_CHECKING
6
7 if TYPE_CHECKING:
8 from pytorch_pfn_extras.training.manager import _BaseExtensionsManager
9
10
11 if os.name == 'nt':
12 import ctypes
13 from ctypes import windll # type: ignore [attr-defined]
14
15 _STD_OUTPUT_HANDLE = -11
16
17 _COORD = ctypes.wintypes._COORD
18
19 class _CONSOLE_SCREEN_BUFFER_INFO(ctypes.Structure):
20 _fields_ = [('dwSize', _COORD), ('dwCursorPosition', _COORD),
21 ('wAttributes', ctypes.c_ushort),
22 ('srWindow', ctypes.wintypes.SMALL_RECT),
23 ('dwMaximumWindowSize', _COORD)]
24
25 def set_console_cursor_position(x: int, y: int) -> None:
26 """Set relative cursor position from current position to (x,y)"""
27
28 whnd = windll.kernel32.GetStdHandle(_STD_OUTPUT_HANDLE)
29 csbi = _CONSOLE_SCREEN_BUFFER_INFO()
30 windll.kernel32.GetConsoleScreenBufferInfo(whnd, ctypes.byref(csbi))
31 cur_pos = csbi.dwCursorPosition
32 pos = _COORD(cur_pos.X + x, cur_pos.Y + y)
33
34 # Workaround the issue that pyreadline overwrites the argtype
35 setpos = windll.kernel32.SetConsoleCursorPosition
36 argtypes = setpos.argtypes
37 setpos.argtypes = None
38 setpos(whnd, pos)
39 setpos.argtypes = argtypes
40
41 def erase_console(x: int, y: int, mode: int = 0) -> None:
42 """Erase screen.
43
44 Mode=0: From (x,y) position down to the bottom of the screen.
45 Mode=1: From (x,y) position down to the beginning of line.
46 Mode=2: Hole screen
47 """
48
49 whnd = windll.kernel32.GetStdHandle(_STD_OUTPUT_HANDLE)
50 csbi = _CONSOLE_SCREEN_BUFFER_INFO()
51 windll.kernel32.GetConsoleScreenBufferInfo(whnd, ctypes.byref(csbi))
52 cur_pos = csbi.dwCursorPosition
53 wr = ctypes.c_ulong()
54 if mode == 0:
55 num = csbi.srWindow.Right * (
56 csbi.srWindow.Bottom - cur_pos.Y) - cur_pos.X
57 windll.kernel32.FillConsoleOutputCharacterA(
58 whnd, ord(' '), num, cur_pos, ctypes.byref(wr))
59 elif mode == 1:
60 num = cur_pos.X
61 windll.kernel32.FillConsoleOutputCharacterA(
62 whnd, ord(' '), num, _COORD(0, cur_pos.Y), ctypes.byref(wr))
63 elif mode == 2:
64 os.system('cls')
65
66
67 class _QueUnit(NamedTuple):
68 iteration: int
69 epoch_detail: float
70 time: float
71
72
73 class ProgressBar:
74
75 def __init__(self, out: Optional[TextIO] = None) -> None:
76 self._out = sys.stdout if out is None else out
77 self._recent_timing: 'queue.Queue[_QueUnit]' = queue.Queue(maxsize=100)
78
79 def update_speed(
80 self,
81 iteration: int,
82 epoch_detail: float
83 ) -> Tuple[float, float]:
84 now = time.time()
85 self._recent_timing.put(_QueUnit(iteration, epoch_detail, now))
86 old = self._recent_timing.get()
87 span = now - old.time
88 if span != 0:
89 speed_t = (iteration - old.iteration) / span
90 speed_e = (epoch_detail - old.epoch_detail) / span
91 else:
92 speed_t = float('inf')
93 speed_e = float('inf')
94 return speed_t, speed_e
95
96 def get_lines(self) -> Sequence[str]:
97 raise NotImplementedError
98
99 def update(
100 self,
101 manager: Optional['_BaseExtensionsManager'] = None
102 ) -> None:
103 self.erase_console()
104
105 lines = self.get_lines()
106 for line in lines:
107 self._out.write(line)
108
109 self.move_cursor_up(len(lines))
110 self.flush()
111
112 def close(self) -> None:
113 self.erase_console()
114 self.flush()
115
116 def erase_console(self) -> None:
117 if os.name == 'nt':
118 erase_console(0, 0)
119 else:
120 self._out.write('\033[J')
121
122 def move_cursor_up(self, n: int) -> None:
123 # move the cursor to the head of the progress bar
124 if os.name == 'nt':
125 set_console_cursor_position(0, - n)
126 else:
127 self._out.write('\033[{:d}A'.format(n))
128
129 def flush(self) -> None:
130 if hasattr(self._out, 'flush'):
131 self._out.flush()
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pytorch_pfn_extras/training/extensions/util.py b/pytorch_pfn_extras/training/extensions/util.py
--- a/pytorch_pfn_extras/training/extensions/util.py
+++ b/pytorch_pfn_extras/training/extensions/util.py
@@ -1,8 +1,8 @@
+import collections
import os
import sys
-import queue
import time
-from typing import NamedTuple, Optional, Sequence, TextIO, Tuple, TYPE_CHECKING
+from typing import Deque, Optional, Sequence, TextIO, Tuple, TYPE_CHECKING
if TYPE_CHECKING:
from pytorch_pfn_extras.training.manager import _BaseExtensionsManager
@@ -64,17 +64,12 @@
os.system('cls')
-class _QueUnit(NamedTuple):
- iteration: int
- epoch_detail: float
- time: float
-
-
class ProgressBar:
def __init__(self, out: Optional[TextIO] = None) -> None:
self._out = sys.stdout if out is None else out
- self._recent_timing: 'queue.Queue[_QueUnit]' = queue.Queue(maxsize=100)
+ self._recent_timing: Deque[Tuple[int, float, float]] = collections.deque(
+ [], maxlen=100)
def update_speed(
self,
@@ -82,12 +77,12 @@
epoch_detail: float
) -> Tuple[float, float]:
now = time.time()
- self._recent_timing.put(_QueUnit(iteration, epoch_detail, now))
- old = self._recent_timing.get()
- span = now - old.time
+ self._recent_timing.append((iteration, epoch_detail, now))
+ old_t, old_e, old_sec = self._recent_timing[0]
+ span = now - old_sec
if span != 0:
- speed_t = (iteration - old.iteration) / span
- speed_e = (epoch_detail - old.epoch_detail) / span
+ speed_t = (iteration - old_t) / span
+ speed_e = (epoch_detail - old_e) / span
else:
speed_t = float('inf')
speed_e = float('inf')
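
To make the root cause of the `inf iters/sec` symptom easier to see: in the old code each call did a `put()` immediately followed by a `get()`, so the "oldest" sample retrieved was always the one just inserted and the measured time span was always zero. The patch keeps a real sliding window instead. The snippet below is a minimal, self-contained sketch of that deque-based approach — it is not the actual pytorch-pfn-extras implementation, just an illustration of the idea.

```python
# Minimal sketch of the sliding-window speed estimate used after the patch.
# Not the real ppe code; names and structure are simplified for illustration.
import collections
import time

_timings = collections.deque([], maxlen=100)  # (iteration, epoch_detail, timestamp)


def update_speed(iteration, epoch_detail):
    now = time.time()
    _timings.append((iteration, epoch_detail, now))
    old_iter, old_epoch, old_time = _timings[0]  # oldest sample still in the window
    span = now - old_time
    if span == 0:  # only on the very first call, when the window holds a single sample
        return float("inf"), float("inf")
    return (iteration - old_iter) / span, (epoch_detail - old_epoch) / span
```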
| {"golden_diff": "diff --git a/pytorch_pfn_extras/training/extensions/util.py b/pytorch_pfn_extras/training/extensions/util.py\n--- a/pytorch_pfn_extras/training/extensions/util.py\n+++ b/pytorch_pfn_extras/training/extensions/util.py\n@@ -1,8 +1,8 @@\n+import collections\n import os\n import sys\n-import queue\n import time\n-from typing import NamedTuple, Optional, Sequence, TextIO, Tuple, TYPE_CHECKING\n+from typing import Deque, Optional, Sequence, TextIO, Tuple, TYPE_CHECKING\n \n if TYPE_CHECKING:\n from pytorch_pfn_extras.training.manager import _BaseExtensionsManager\n@@ -64,17 +64,12 @@\n os.system('cls')\n \n \n-class _QueUnit(NamedTuple):\n- iteration: int\n- epoch_detail: float\n- time: float\n-\n-\n class ProgressBar:\n \n def __init__(self, out: Optional[TextIO] = None) -> None:\n self._out = sys.stdout if out is None else out\n- self._recent_timing: 'queue.Queue[_QueUnit]' = queue.Queue(maxsize=100)\n+ self._recent_timing: Deque[Tuple[int, float, float]] = collections.deque(\n+ [], maxlen=100)\n \n def update_speed(\n self,\n@@ -82,12 +77,12 @@\n epoch_detail: float\n ) -> Tuple[float, float]:\n now = time.time()\n- self._recent_timing.put(_QueUnit(iteration, epoch_detail, now))\n- old = self._recent_timing.get()\n- span = now - old.time\n+ self._recent_timing.append((iteration, epoch_detail, now))\n+ old_t, old_e, old_sec = self._recent_timing[0]\n+ span = now - old_sec\n if span != 0:\n- speed_t = (iteration - old.iteration) / span\n- speed_e = (epoch_detail - old.epoch_detail) / span\n+ speed_t = (iteration - old_t) / span\n+ speed_e = (epoch_detail - old_e) / span\n else:\n speed_t = float('inf')\n speed_e = float('inf')\n", "issue": "ProgressBar extension doesn't show correct `iters/sec` in ppe==0.5.0\nThe `ProgressBar` extension shows `iters/sec` as well as the progress bar, but I noticed that in ppe==0.5.0 it becomes always `inf iters/sec`.\r\n\r\n### ppe==0.5.0\r\n\r\nMinimum reproduction code:\r\n```python\r\n# progress.py (based on https://git.io/JzHP3)\r\nimport pytorch_pfn_extras as ppe\r\nimport time\r\n\r\nprint(\"PPE =\", ppe.__version__)\r\n\r\nmax_epochs = 3\r\niters_per_epoch = 20\r\nmanager = ppe.training.ExtensionsManager({}, {}, max_epochs, iters_per_epoch=iters_per_epoch)\r\nmanager.extend(ppe.training.extensions.ProgressBar(training_length=None, update_interval=1, bar_length=40))\r\n\r\nfor epoch in range(max_epochs):\r\n for _ in range(iters_per_epoch):\r\n with manager.run_iteration():\r\n pass\r\n time.sleep(0.1)\r\n```\r\n\r\nResult:\r\n```\r\n$ python progress.py\r\nPPE = 0.5.0\r\n total [####################....................] 51.67%\r\nthis epoch [######################..................] 55.00%\r\n 31 iter, 1 epoch / 3 epochs\r\n inf iters/sec. Estimated time to finish: 0:00:00.\r\n```\r\n\r\n### ppe==0.4.5 (expected behavior)\r\nIt shows a correct iters/sec.\r\n```\r\n$ python progress.py\r\nPPE = 0.4.5\r\n total [#################.......................] 43.33%\r\nthis epoch [############............................] 30.00%\r\n 26 iter, 1 epoch / 3 epochs\r\n 10.825 iters/sec. 
Estimated time to finish: 0:00:03.140780.\r\n```\n", "before_files": [{"content": "import os\nimport sys\nimport queue\nimport time\nfrom typing import NamedTuple, Optional, Sequence, TextIO, Tuple, TYPE_CHECKING\n\nif TYPE_CHECKING:\n from pytorch_pfn_extras.training.manager import _BaseExtensionsManager\n\n\nif os.name == 'nt':\n import ctypes\n from ctypes import windll # type: ignore [attr-defined]\n\n _STD_OUTPUT_HANDLE = -11\n\n _COORD = ctypes.wintypes._COORD\n\n class _CONSOLE_SCREEN_BUFFER_INFO(ctypes.Structure):\n _fields_ = [('dwSize', _COORD), ('dwCursorPosition', _COORD),\n ('wAttributes', ctypes.c_ushort),\n ('srWindow', ctypes.wintypes.SMALL_RECT),\n ('dwMaximumWindowSize', _COORD)]\n\n def set_console_cursor_position(x: int, y: int) -> None:\n \"\"\"Set relative cursor position from current position to (x,y)\"\"\"\n\n whnd = windll.kernel32.GetStdHandle(_STD_OUTPUT_HANDLE)\n csbi = _CONSOLE_SCREEN_BUFFER_INFO()\n windll.kernel32.GetConsoleScreenBufferInfo(whnd, ctypes.byref(csbi))\n cur_pos = csbi.dwCursorPosition\n pos = _COORD(cur_pos.X + x, cur_pos.Y + y)\n\n # Workaround the issue that pyreadline overwrites the argtype\n setpos = windll.kernel32.SetConsoleCursorPosition\n argtypes = setpos.argtypes\n setpos.argtypes = None\n setpos(whnd, pos)\n setpos.argtypes = argtypes\n\n def erase_console(x: int, y: int, mode: int = 0) -> None:\n \"\"\"Erase screen.\n\n Mode=0: From (x,y) position down to the bottom of the screen.\n Mode=1: From (x,y) position down to the beginning of line.\n Mode=2: Hole screen\n \"\"\"\n\n whnd = windll.kernel32.GetStdHandle(_STD_OUTPUT_HANDLE)\n csbi = _CONSOLE_SCREEN_BUFFER_INFO()\n windll.kernel32.GetConsoleScreenBufferInfo(whnd, ctypes.byref(csbi))\n cur_pos = csbi.dwCursorPosition\n wr = ctypes.c_ulong()\n if mode == 0:\n num = csbi.srWindow.Right * (\n csbi.srWindow.Bottom - cur_pos.Y) - cur_pos.X\n windll.kernel32.FillConsoleOutputCharacterA(\n whnd, ord(' '), num, cur_pos, ctypes.byref(wr))\n elif mode == 1:\n num = cur_pos.X\n windll.kernel32.FillConsoleOutputCharacterA(\n whnd, ord(' '), num, _COORD(0, cur_pos.Y), ctypes.byref(wr))\n elif mode == 2:\n os.system('cls')\n\n\nclass _QueUnit(NamedTuple):\n iteration: int\n epoch_detail: float\n time: float\n\n\nclass ProgressBar:\n\n def __init__(self, out: Optional[TextIO] = None) -> None:\n self._out = sys.stdout if out is None else out\n self._recent_timing: 'queue.Queue[_QueUnit]' = queue.Queue(maxsize=100)\n\n def update_speed(\n self,\n iteration: int,\n epoch_detail: float\n ) -> Tuple[float, float]:\n now = time.time()\n self._recent_timing.put(_QueUnit(iteration, epoch_detail, now))\n old = self._recent_timing.get()\n span = now - old.time\n if span != 0:\n speed_t = (iteration - old.iteration) / span\n speed_e = (epoch_detail - old.epoch_detail) / span\n else:\n speed_t = float('inf')\n speed_e = float('inf')\n return speed_t, speed_e\n\n def get_lines(self) -> Sequence[str]:\n raise NotImplementedError\n\n def update(\n self,\n manager: Optional['_BaseExtensionsManager'] = None\n ) -> None:\n self.erase_console()\n\n lines = self.get_lines()\n for line in lines:\n self._out.write(line)\n\n self.move_cursor_up(len(lines))\n self.flush()\n\n def close(self) -> None:\n self.erase_console()\n self.flush()\n\n def erase_console(self) -> None:\n if os.name == 'nt':\n erase_console(0, 0)\n else:\n self._out.write('\\033[J')\n\n def move_cursor_up(self, n: int) -> None:\n # move the cursor to the head of the progress bar\n if os.name == 'nt':\n set_console_cursor_position(0, - n)\n 
else:\n self._out.write('\\033[{:d}A'.format(n))\n\n def flush(self) -> None:\n if hasattr(self._out, 'flush'):\n self._out.flush()\n", "path": "pytorch_pfn_extras/training/extensions/util.py"}], "after_files": [{"content": "import collections\nimport os\nimport sys\nimport time\nfrom typing import Deque, Optional, Sequence, TextIO, Tuple, TYPE_CHECKING\n\nif TYPE_CHECKING:\n from pytorch_pfn_extras.training.manager import _BaseExtensionsManager\n\n\nif os.name == 'nt':\n import ctypes\n from ctypes import windll # type: ignore [attr-defined]\n\n _STD_OUTPUT_HANDLE = -11\n\n _COORD = ctypes.wintypes._COORD\n\n class _CONSOLE_SCREEN_BUFFER_INFO(ctypes.Structure):\n _fields_ = [('dwSize', _COORD), ('dwCursorPosition', _COORD),\n ('wAttributes', ctypes.c_ushort),\n ('srWindow', ctypes.wintypes.SMALL_RECT),\n ('dwMaximumWindowSize', _COORD)]\n\n def set_console_cursor_position(x: int, y: int) -> None:\n \"\"\"Set relative cursor position from current position to (x,y)\"\"\"\n\n whnd = windll.kernel32.GetStdHandle(_STD_OUTPUT_HANDLE)\n csbi = _CONSOLE_SCREEN_BUFFER_INFO()\n windll.kernel32.GetConsoleScreenBufferInfo(whnd, ctypes.byref(csbi))\n cur_pos = csbi.dwCursorPosition\n pos = _COORD(cur_pos.X + x, cur_pos.Y + y)\n\n # Workaround the issue that pyreadline overwrites the argtype\n setpos = windll.kernel32.SetConsoleCursorPosition\n argtypes = setpos.argtypes\n setpos.argtypes = None\n setpos(whnd, pos)\n setpos.argtypes = argtypes\n\n def erase_console(x: int, y: int, mode: int = 0) -> None:\n \"\"\"Erase screen.\n\n Mode=0: From (x,y) position down to the bottom of the screen.\n Mode=1: From (x,y) position down to the beginning of line.\n Mode=2: Hole screen\n \"\"\"\n\n whnd = windll.kernel32.GetStdHandle(_STD_OUTPUT_HANDLE)\n csbi = _CONSOLE_SCREEN_BUFFER_INFO()\n windll.kernel32.GetConsoleScreenBufferInfo(whnd, ctypes.byref(csbi))\n cur_pos = csbi.dwCursorPosition\n wr = ctypes.c_ulong()\n if mode == 0:\n num = csbi.srWindow.Right * (\n csbi.srWindow.Bottom - cur_pos.Y) - cur_pos.X\n windll.kernel32.FillConsoleOutputCharacterA(\n whnd, ord(' '), num, cur_pos, ctypes.byref(wr))\n elif mode == 1:\n num = cur_pos.X\n windll.kernel32.FillConsoleOutputCharacterA(\n whnd, ord(' '), num, _COORD(0, cur_pos.Y), ctypes.byref(wr))\n elif mode == 2:\n os.system('cls')\n\n\nclass ProgressBar:\n\n def __init__(self, out: Optional[TextIO] = None) -> None:\n self._out = sys.stdout if out is None else out\n self._recent_timing: Deque[Tuple[int, float, float]] = collections.deque(\n [], maxlen=100)\n\n def update_speed(\n self,\n iteration: int,\n epoch_detail: float\n ) -> Tuple[float, float]:\n now = time.time()\n self._recent_timing.append((iteration, epoch_detail, now))\n old_t, old_e, old_sec = self._recent_timing[0]\n span = now - old_sec\n if span != 0:\n speed_t = (iteration - old_t) / span\n speed_e = (epoch_detail - old_e) / span\n else:\n speed_t = float('inf')\n speed_e = float('inf')\n return speed_t, speed_e\n\n def get_lines(self) -> Sequence[str]:\n raise NotImplementedError\n\n def update(\n self,\n manager: Optional['_BaseExtensionsManager'] = None\n ) -> None:\n self.erase_console()\n\n lines = self.get_lines()\n for line in lines:\n self._out.write(line)\n\n self.move_cursor_up(len(lines))\n self.flush()\n\n def close(self) -> None:\n self.erase_console()\n self.flush()\n\n def erase_console(self) -> None:\n if os.name == 'nt':\n erase_console(0, 0)\n else:\n self._out.write('\\033[J')\n\n def move_cursor_up(self, n: int) -> None:\n # move the cursor to the head of the progress bar\n 
if os.name == 'nt':\n set_console_cursor_position(0, - n)\n else:\n self._out.write('\\033[{:d}A'.format(n))\n\n def flush(self) -> None:\n if hasattr(self._out, 'flush'):\n self._out.flush()\n", "path": "pytorch_pfn_extras/training/extensions/util.py"}]} | 2,033 | 493 |
gh_patches_debug_26402 | rasdani/github-patches | git_diff | statsmodels__statsmodels-6654 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Handle pathlib paths with `get_file_obj`
#### Is your feature request related to a problem? Please describe
I'm trying to read a statsmodels.tsa.statespace.sarimax.SARIMAXResults pickle file I've saved. I'd like to pass a pathlib.Path object as I find it easier to handle (I work with different machines, Mac and Windows).
#### Describe the solution you'd like
The function `open` already supports pathlib objects. Why not using it?
You could replace `statsmodels.iolib.openfile.get_file_obj` with:
```python
if _is_string_like(fname):
return _open(fname, mode, encoding)
try: # try opening it with 'open'
return open(fname, mode, encoding)
except TypeError: # else, maybe return context manager (please check if it's always TypeError)
try:
# Make sure the object has the write methods
if 'r' in mode:
fname.read
if 'w' in mode or 'a' in mode:
fname.write
except AttributeError:
raise ValueError('fname must be a string or a file-like object')
return EmptyContextManager(fname)
```
This is just a suggestion though :)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `statsmodels/iolib/openfile.py`
Content:
```
1 """
2 Handle file opening for read/write
3 """
4 from numpy.lib._iotools import _is_string_like
5
6
7 class EmptyContextManager(object):
8 """
9 This class is needed to allow file-like object to be used as
10 context manager, but without getting closed.
11 """
12 def __init__(self, obj):
13 self._obj = obj
14
15 def __enter__(self):
16 '''When entering, return the embedded object'''
17 return self._obj
18
19 def __exit__(self, *args):
20 '''Do not hide anything'''
21 return False
22
23 def __getattr__(self, name):
24 return getattr(self._obj, name)
25
26
27 def _open(fname, mode, encoding):
28 if fname.endswith('.gz'):
29 import gzip
30 return gzip.open(fname, mode, encoding=encoding)
31 else:
32 return open(fname, mode, encoding=encoding)
33
34
35 def get_file_obj(fname, mode='r', encoding=None):
36 """
37 Light wrapper to handle strings and let files (anything else) pass through.
38
39 It also handle '.gz' files.
40
41 Parameters
42 ----------
43 fname : str or file-like object
44 File to open / forward
45 mode : str
46 Argument passed to the 'open' or 'gzip.open' function
47 encoding : str
48 For Python 3 only, specify the encoding of the file
49
50 Returns
51 -------
52 A file-like object that is always a context-manager. If the `fname` was
53 already a file-like object, the returned context manager *will not
54 close the file*.
55 """
56 if _is_string_like(fname):
57 return _open(fname, mode, encoding)
58 try:
59 # Make sure the object has the write methods
60 if 'r' in mode:
61 fname.read
62 if 'w' in mode or 'a' in mode:
63 fname.write
64 except AttributeError:
65 raise ValueError('fname must be a string or a file-like object')
66 return EmptyContextManager(fname)
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/statsmodels/iolib/openfile.py b/statsmodels/iolib/openfile.py
--- a/statsmodels/iolib/openfile.py
+++ b/statsmodels/iolib/openfile.py
@@ -34,13 +34,14 @@
def get_file_obj(fname, mode='r', encoding=None):
"""
- Light wrapper to handle strings and let files (anything else) pass through.
+ Light wrapper to handle strings, path objects and let files (anything else)
+ pass through.
It also handle '.gz' files.
Parameters
----------
- fname : str or file-like object
+ fname : str, path object or file-like object
File to open / forward
mode : str
Argument passed to the 'open' or 'gzip.open' function
@@ -56,11 +57,14 @@
if _is_string_like(fname):
return _open(fname, mode, encoding)
try:
- # Make sure the object has the write methods
- if 'r' in mode:
- fname.read
- if 'w' in mode or 'a' in mode:
- fname.write
- except AttributeError:
- raise ValueError('fname must be a string or a file-like object')
- return EmptyContextManager(fname)
+ return open(fname, mode, encoding=encoding) # handle pathlib-like objs
+ except TypeError:
+ try:
+ # Make sure the object has the write methods
+ if "r" in mode:
+ fname.read
+ if "w" in mode or "a" in mode:
+ fname.write
+ except AttributeError:
+ raise ValueError("fname must be a string or a file-like object")
+ return EmptyContextManager(fname)
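
A short, hedged usage sketch of the patched helper: with the `open()` call wrapped in `try/except TypeError`, `get_file_obj` now accepts a string, a `pathlib.Path`, or an already-open file-like object. The pickle file name below is purely illustrative.

```python
# Usage sketch for the patched get_file_obj; the file name is a made-up example.
from pathlib import Path

from statsmodels.iolib.openfile import get_file_obj

results_path = Path("results") / "sarimax_results.pickle"  # hypothetical path

# Path objects are now handled by the plain open() branch:
with get_file_obj(results_path, "rb") as f:
    payload = f.read()

# File-like objects still pass through and are NOT closed by the returned context manager:
with open(results_path, "rb") as already_open:
    with get_file_obj(already_open, "rb") as f:
        payload = f.read()
```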
| {"golden_diff": "diff --git a/statsmodels/iolib/openfile.py b/statsmodels/iolib/openfile.py\n--- a/statsmodels/iolib/openfile.py\n+++ b/statsmodels/iolib/openfile.py\n@@ -34,13 +34,14 @@\n \n def get_file_obj(fname, mode='r', encoding=None):\n \"\"\"\n- Light wrapper to handle strings and let files (anything else) pass through.\n+ Light wrapper to handle strings, path objects and let files (anything else)\n+ pass through.\n \n It also handle '.gz' files.\n \n Parameters\n ----------\n- fname : str or file-like object\n+ fname : str, path object or file-like object\n File to open / forward\n mode : str\n Argument passed to the 'open' or 'gzip.open' function\n@@ -56,11 +57,14 @@\n if _is_string_like(fname):\n return _open(fname, mode, encoding)\n try:\n- # Make sure the object has the write methods\n- if 'r' in mode:\n- fname.read\n- if 'w' in mode or 'a' in mode:\n- fname.write\n- except AttributeError:\n- raise ValueError('fname must be a string or a file-like object')\n- return EmptyContextManager(fname)\n+ return open(fname, mode, encoding=encoding) # handle pathlib-like objs\n+ except TypeError:\n+ try:\n+ # Make sure the object has the write methods\n+ if \"r\" in mode:\n+ fname.read\n+ if \"w\" in mode or \"a\" in mode:\n+ fname.write\n+ except AttributeError:\n+ raise ValueError(\"fname must be a string or a file-like object\")\n+ return EmptyContextManager(fname)\n", "issue": "Handle pathlib paths with `get_file_obj`\n#### Is your feature request related to a problem? Please describe\r\nI'm trying to read a statsmodels.tsa.statespace.sarimax.SARIMAXResults pickle file I've saved. I'd like to pass a pathlib.Path object as I find it easier to handle (I work with different machines, Mac and Windows).\r\n\r\n#### Describe the solution you'd like\r\nThe function `open` already supports pathlib objects. 
Why not using it?\r\n\r\nYou could replace `statsmodels.iolib.openfile.get_file_obj` with:\r\n\r\n```python\r\nif _is_string_like(fname):\r\n return _open(fname, mode, encoding)\r\ntry: # try opening it with 'open'\r\n return open(fname, mode, encoding)\r\nexcept TypeError: # else, maybe return context manager (please check if it's always TypeError)\r\n try:\r\n # Make sure the object has the write methods\r\n if 'r' in mode:\r\n fname.read\r\n if 'w' in mode or 'a' in mode:\r\n fname.write\r\n except AttributeError:\r\n raise ValueError('fname must be a string or a file-like object')\r\n return EmptyContextManager(fname)\r\n```\r\n\r\nThis is just a suggestion though :)\n", "before_files": [{"content": "\"\"\"\nHandle file opening for read/write\n\"\"\"\nfrom numpy.lib._iotools import _is_string_like\n\n\nclass EmptyContextManager(object):\n \"\"\"\n This class is needed to allow file-like object to be used as\n context manager, but without getting closed.\n \"\"\"\n def __init__(self, obj):\n self._obj = obj\n\n def __enter__(self):\n '''When entering, return the embedded object'''\n return self._obj\n\n def __exit__(self, *args):\n '''Do not hide anything'''\n return False\n\n def __getattr__(self, name):\n return getattr(self._obj, name)\n\n\ndef _open(fname, mode, encoding):\n if fname.endswith('.gz'):\n import gzip\n return gzip.open(fname, mode, encoding=encoding)\n else:\n return open(fname, mode, encoding=encoding)\n\n\ndef get_file_obj(fname, mode='r', encoding=None):\n \"\"\"\n Light wrapper to handle strings and let files (anything else) pass through.\n\n It also handle '.gz' files.\n\n Parameters\n ----------\n fname : str or file-like object\n File to open / forward\n mode : str\n Argument passed to the 'open' or 'gzip.open' function\n encoding : str\n For Python 3 only, specify the encoding of the file\n\n Returns\n -------\n A file-like object that is always a context-manager. 
If the `fname` was\n already a file-like object, the returned context manager *will not\n close the file*.\n \"\"\"\n if _is_string_like(fname):\n return _open(fname, mode, encoding)\n try:\n # Make sure the object has the write methods\n if 'r' in mode:\n fname.read\n if 'w' in mode or 'a' in mode:\n fname.write\n except AttributeError:\n raise ValueError('fname must be a string or a file-like object')\n return EmptyContextManager(fname)\n", "path": "statsmodels/iolib/openfile.py"}], "after_files": [{"content": "\"\"\"\nHandle file opening for read/write\n\"\"\"\nfrom numpy.lib._iotools import _is_string_like\n\n\nclass EmptyContextManager(object):\n \"\"\"\n This class is needed to allow file-like object to be used as\n context manager, but without getting closed.\n \"\"\"\n def __init__(self, obj):\n self._obj = obj\n\n def __enter__(self):\n '''When entering, return the embedded object'''\n return self._obj\n\n def __exit__(self, *args):\n '''Do not hide anything'''\n return False\n\n def __getattr__(self, name):\n return getattr(self._obj, name)\n\n\ndef _open(fname, mode, encoding):\n if fname.endswith('.gz'):\n import gzip\n return gzip.open(fname, mode, encoding=encoding)\n else:\n return open(fname, mode, encoding=encoding)\n\n\ndef get_file_obj(fname, mode='r', encoding=None):\n \"\"\"\n Light wrapper to handle strings, path objects and let files (anything else)\n pass through.\n\n It also handle '.gz' files.\n\n Parameters\n ----------\n fname : str, path object or file-like object\n File to open / forward\n mode : str\n Argument passed to the 'open' or 'gzip.open' function\n encoding : str\n For Python 3 only, specify the encoding of the file\n\n Returns\n -------\n A file-like object that is always a context-manager. If the `fname` was\n already a file-like object, the returned context manager *will not\n close the file*.\n \"\"\"\n if _is_string_like(fname):\n return _open(fname, mode, encoding)\n try:\n return open(fname, mode, encoding=encoding) # handle pathlib-like objs\n except TypeError:\n try:\n # Make sure the object has the write methods\n if \"r\" in mode:\n fname.read\n if \"w\" in mode or \"a\" in mode:\n fname.write\n except AttributeError:\n raise ValueError(\"fname must be a string or a file-like object\")\n return EmptyContextManager(fname)\n", "path": "statsmodels/iolib/openfile.py"}]} | 1,067 | 388 |
gh_patches_debug_29404 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-2151 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ID sequence isn't being created for tables
## Description
<!-- A clear and concise description of what the bug is. -->
I cannot add a new record to a new table because the ID does not have a default value. I set up this table by:
1. Importing [library_books.tsv.zip](https://github.com/centerofci/mathesar/files/10273401/library_books.tsv.zip) as "Items"
2. Pulling "Author First Name" and "Author First Name" into an "Authors" table
3. Moving "Author Website" to "Authors" table
4. Pulling "Publishers" into a "Publishers" table
5. Pulling "Title", "Publication Year", "ISBN", "Authors_id", and "Publishers_id" into a "Books" table
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
I should be able to add a new record!
## To Reproduce
<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. -->
~Try adding a new record on https://library.mathesar.org/mathesar_tables/4/tables/34/~
I needed to nuke this table since it was blocking my work. To reproduce, follow the steps under Description.
## Additional context
<!-- Add any other context about the problem or screenshots here. -->
This seems related to
- #2116
- #1356
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `db/tables/operations/split.py`
Content:
```
1 from sqlalchemy import exists, func, literal, select
2
3 from db import constants
4 from db.columns.base import MathesarColumn
5 from db.columns.operations.alter import batch_alter_table_drop_columns
6 from db.columns.operations.select import get_column_names_from_attnums
7 from db.links.operations.create import create_foreign_key_link
8 from db.tables.operations.create import create_mathesar_table
9 from db.tables.operations.select import get_oid_from_table, reflect_table, reflect_table_from_oid
10 from db.metadata import get_empty_metadata
11
12
13 def _create_split_tables(extracted_table_name, extracted_columns, remainder_table_name, schema, engine, fk_column_name=None):
14 extracted_table = create_mathesar_table(
15 extracted_table_name,
16 schema,
17 extracted_columns,
18 engine,
19 )
20 fk_column_name = fk_column_name if fk_column_name else f"{extracted_table.name}_{constants.ID}"
21 remainder_table_oid = get_oid_from_table(remainder_table_name, schema, engine)
22 extracted_table_oid = get_oid_from_table(extracted_table_name, schema, engine)
23 create_foreign_key_link(engine, schema, fk_column_name, remainder_table_oid, extracted_table_oid)
24 # TODO reuse metadata
25 remainder_table_with_fk_key = reflect_table(remainder_table_name, schema, engine, metadata=get_empty_metadata())
26 return extracted_table, remainder_table_with_fk_key, fk_column_name
27
28
29 def _create_split_insert_stmt(old_table, extracted_table, extracted_columns, remainder_fk_name):
30 SPLIT_ID = f"{constants.MATHESAR_PREFIX}_split_column_alias"
31 extracted_column_names = [col.name for col in extracted_columns]
32 split_cte = select(
33 [
34 old_table,
35 func.dense_rank().over(order_by=extracted_columns).label(SPLIT_ID)
36 ]
37 ).cte()
38 cte_extraction_columns = (
39 [split_cte.columns[SPLIT_ID]]
40 + [split_cte.columns[n] for n in extracted_column_names]
41 )
42 extract_sel = select(
43 cte_extraction_columns,
44 distinct=True
45 )
46 extract_ins_cte = (
47 extracted_table
48 .insert()
49 .from_select([constants.ID] + extracted_column_names, extract_sel)
50 .returning(literal(1))
51 .cte()
52 )
53 fk_update_dict = {remainder_fk_name: split_cte.c[SPLIT_ID]}
54 split_ins = (
55 old_table
56 .update().values(**fk_update_dict).
57 where(old_table.c[constants.ID] == split_cte.c[constants.ID],
58 exists(extract_ins_cte.select()))
59 )
60 return split_ins
61
62
63 def extract_columns_from_table(old_table_oid, extracted_column_attnums, extracted_table_name, schema, engine, relationship_fk_column_name=None):
64 # TODO reuse metadata
65 old_table = reflect_table_from_oid(old_table_oid, engine, metadata=get_empty_metadata())
66 old_table_name = old_table.name
67 old_columns = (MathesarColumn.from_column(col) for col in old_table.columns)
68 old_non_default_columns = [
69 col for col in old_columns if not col.is_default
70 ]
71 # TODO reuse metadata
72 extracted_column_names = get_column_names_from_attnums(old_table_oid, extracted_column_attnums, engine, metadata=get_empty_metadata())
73 extracted_columns = [
74 col for col in old_non_default_columns if col.name in extracted_column_names
75 ]
76 with engine.begin() as conn:
77 extracted_table, remainder_table_with_fk_column, fk_column_name = _create_split_tables(
78 extracted_table_name,
79 extracted_columns,
80 old_table_name,
81 schema,
82 engine,
83 relationship_fk_column_name
84 )
85 split_ins = _create_split_insert_stmt(
86 remainder_table_with_fk_column,
87 extracted_table,
88 extracted_columns,
89 fk_column_name,
90 )
91 conn.execute(split_ins)
92 remainder_table_oid = get_oid_from_table(remainder_table_with_fk_column.name, schema, engine)
93 deletion_column_data = [
94 {'attnum': column_attnum, 'delete': True}
95 for column_attnum in extracted_column_attnums
96 ]
97 batch_alter_table_drop_columns(remainder_table_oid, deletion_column_data, conn, engine)
98 return extracted_table, remainder_table_with_fk_column, fk_column_name
99
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/db/tables/operations/split.py b/db/tables/operations/split.py
--- a/db/tables/operations/split.py
+++ b/db/tables/operations/split.py
@@ -89,6 +89,8 @@
fk_column_name,
)
conn.execute(split_ins)
+ update_pk_sequence_to_latest(conn, engine, extracted_table)
+
remainder_table_oid = get_oid_from_table(remainder_table_with_fk_column.name, schema, engine)
deletion_column_data = [
{'attnum': column_attnum, 'delete': True}
@@ -96,3 +98,28 @@
]
batch_alter_table_drop_columns(remainder_table_oid, deletion_column_data, conn, engine)
return extracted_table, remainder_table_with_fk_column, fk_column_name
+
+
+def update_pk_sequence_to_latest(conn, engine, extracted_table):
+ _preparer = engine.dialect.identifier_preparer
+ quoted_table_name = _preparer.quote(extracted_table.schema) + "." + _preparer.quote(extracted_table.name)
+ update_pk_sequence_stmt = func.setval(
+ # `pg_get_serial_sequence needs a string of the Table name
+ func.pg_get_serial_sequence(
+ quoted_table_name,
+ extracted_table.c[constants.ID].name
+ ),
+ # If the table can be empty, start from 1 instead of using Null
+ func.coalesce(
+ func.max(extracted_table.c[constants.ID]) + 1,
+ 1
+ ),
+ # Set the sequence to use the last value of the sequence
+ # Setting is_called field to false, meaning that the next nextval will not advance the sequence before returning a value.
+ # We need to do it as our default coalesce value is 1 instead of 0
+ # Refer the postgres docs https://www.postgresql.org/docs/current/functions-sequence.html
+ False
+ )
+ conn.execute(
+ select(update_pk_sequence_stmt)
+ )
| {"golden_diff": "diff --git a/db/tables/operations/split.py b/db/tables/operations/split.py\n--- a/db/tables/operations/split.py\n+++ b/db/tables/operations/split.py\n@@ -89,6 +89,8 @@\n fk_column_name,\n )\n conn.execute(split_ins)\n+ update_pk_sequence_to_latest(conn, engine, extracted_table)\n+\n remainder_table_oid = get_oid_from_table(remainder_table_with_fk_column.name, schema, engine)\n deletion_column_data = [\n {'attnum': column_attnum, 'delete': True}\n@@ -96,3 +98,28 @@\n ]\n batch_alter_table_drop_columns(remainder_table_oid, deletion_column_data, conn, engine)\n return extracted_table, remainder_table_with_fk_column, fk_column_name\n+\n+\n+def update_pk_sequence_to_latest(conn, engine, extracted_table):\n+ _preparer = engine.dialect.identifier_preparer\n+ quoted_table_name = _preparer.quote(extracted_table.schema) + \".\" + _preparer.quote(extracted_table.name)\n+ update_pk_sequence_stmt = func.setval(\n+ # `pg_get_serial_sequence needs a string of the Table name\n+ func.pg_get_serial_sequence(\n+ quoted_table_name,\n+ extracted_table.c[constants.ID].name\n+ ),\n+ # If the table can be empty, start from 1 instead of using Null\n+ func.coalesce(\n+ func.max(extracted_table.c[constants.ID]) + 1,\n+ 1\n+ ),\n+ # Set the sequence to use the last value of the sequence\n+ # Setting is_called field to false, meaning that the next nextval will not advance the sequence before returning a value.\n+ # We need to do it as our default coalesce value is 1 instead of 0\n+ # Refer the postgres docs https://www.postgresql.org/docs/current/functions-sequence.html\n+ False\n+ )\n+ conn.execute(\n+ select(update_pk_sequence_stmt)\n+ )\n", "issue": "ID sequence isn't being created for tables\n## Description\r\n<!-- A clear and concise description of what the bug is. -->\r\nI cannot add a new record to a new table because the ID does not have a default value. I set up this table by:\r\n1. Importing [library_books.tsv.zip](https://github.com/centerofci/mathesar/files/10273401/library_books.tsv.zip) as \"Items\"\r\n2. Pulling \"Author First Name\" and \"Author First Name\" into an \"Authors\" table\r\n3. Moving \"Author Website\" to \"Authors\" table\r\n4. Pulling \"Publishers\" into a \"Publishers\" table\r\n5. Pulling \"Title\", \"Publication Year\", \"ISBN\", \"Authors_id\", and \"Publishers_id\" into a \"Books\" table\r\n\r\n## Expected behavior\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nI should be able to add a new record!\r\n\r\n## To Reproduce\r\n<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. -->\r\n~Try adding a new record on https://library.mathesar.org/mathesar_tables/4/tables/34/~\r\n\r\nI needed to nuke this table since it was blocking my work. To reproduce, follow the steps under Description.\r\n\r\n## Additional context\r\n<!-- Add any other context about the problem or screenshots here. 
-->\r\nThis seems related to\r\n- #2116 \r\n- #1356 \r\n\n", "before_files": [{"content": "from sqlalchemy import exists, func, literal, select\n\nfrom db import constants\nfrom db.columns.base import MathesarColumn\nfrom db.columns.operations.alter import batch_alter_table_drop_columns\nfrom db.columns.operations.select import get_column_names_from_attnums\nfrom db.links.operations.create import create_foreign_key_link\nfrom db.tables.operations.create import create_mathesar_table\nfrom db.tables.operations.select import get_oid_from_table, reflect_table, reflect_table_from_oid\nfrom db.metadata import get_empty_metadata\n\n\ndef _create_split_tables(extracted_table_name, extracted_columns, remainder_table_name, schema, engine, fk_column_name=None):\n extracted_table = create_mathesar_table(\n extracted_table_name,\n schema,\n extracted_columns,\n engine,\n )\n fk_column_name = fk_column_name if fk_column_name else f\"{extracted_table.name}_{constants.ID}\"\n remainder_table_oid = get_oid_from_table(remainder_table_name, schema, engine)\n extracted_table_oid = get_oid_from_table(extracted_table_name, schema, engine)\n create_foreign_key_link(engine, schema, fk_column_name, remainder_table_oid, extracted_table_oid)\n # TODO reuse metadata\n remainder_table_with_fk_key = reflect_table(remainder_table_name, schema, engine, metadata=get_empty_metadata())\n return extracted_table, remainder_table_with_fk_key, fk_column_name\n\n\ndef _create_split_insert_stmt(old_table, extracted_table, extracted_columns, remainder_fk_name):\n SPLIT_ID = f\"{constants.MATHESAR_PREFIX}_split_column_alias\"\n extracted_column_names = [col.name for col in extracted_columns]\n split_cte = select(\n [\n old_table,\n func.dense_rank().over(order_by=extracted_columns).label(SPLIT_ID)\n ]\n ).cte()\n cte_extraction_columns = (\n [split_cte.columns[SPLIT_ID]]\n + [split_cte.columns[n] for n in extracted_column_names]\n )\n extract_sel = select(\n cte_extraction_columns,\n distinct=True\n )\n extract_ins_cte = (\n extracted_table\n .insert()\n .from_select([constants.ID] + extracted_column_names, extract_sel)\n .returning(literal(1))\n .cte()\n )\n fk_update_dict = {remainder_fk_name: split_cte.c[SPLIT_ID]}\n split_ins = (\n old_table\n .update().values(**fk_update_dict).\n where(old_table.c[constants.ID] == split_cte.c[constants.ID],\n exists(extract_ins_cte.select()))\n )\n return split_ins\n\n\ndef extract_columns_from_table(old_table_oid, extracted_column_attnums, extracted_table_name, schema, engine, relationship_fk_column_name=None):\n # TODO reuse metadata\n old_table = reflect_table_from_oid(old_table_oid, engine, metadata=get_empty_metadata())\n old_table_name = old_table.name\n old_columns = (MathesarColumn.from_column(col) for col in old_table.columns)\n old_non_default_columns = [\n col for col in old_columns if not col.is_default\n ]\n # TODO reuse metadata\n extracted_column_names = get_column_names_from_attnums(old_table_oid, extracted_column_attnums, engine, metadata=get_empty_metadata())\n extracted_columns = [\n col for col in old_non_default_columns if col.name in extracted_column_names\n ]\n with engine.begin() as conn:\n extracted_table, remainder_table_with_fk_column, fk_column_name = _create_split_tables(\n extracted_table_name,\n extracted_columns,\n old_table_name,\n schema,\n engine,\n relationship_fk_column_name\n )\n split_ins = _create_split_insert_stmt(\n remainder_table_with_fk_column,\n extracted_table,\n extracted_columns,\n fk_column_name,\n )\n conn.execute(split_ins)\n 
remainder_table_oid = get_oid_from_table(remainder_table_with_fk_column.name, schema, engine)\n deletion_column_data = [\n {'attnum': column_attnum, 'delete': True}\n for column_attnum in extracted_column_attnums\n ]\n batch_alter_table_drop_columns(remainder_table_oid, deletion_column_data, conn, engine)\n return extracted_table, remainder_table_with_fk_column, fk_column_name\n", "path": "db/tables/operations/split.py"}], "after_files": [{"content": "from sqlalchemy import exists, func, literal, select\n\nfrom db import constants\nfrom db.columns.base import MathesarColumn\nfrom db.columns.operations.alter import batch_alter_table_drop_columns\nfrom db.columns.operations.select import get_column_names_from_attnums\nfrom db.links.operations.create import create_foreign_key_link\nfrom db.tables.operations.create import create_mathesar_table\nfrom db.tables.operations.select import get_oid_from_table, reflect_table, reflect_table_from_oid\nfrom db.metadata import get_empty_metadata\n\n\ndef _create_split_tables(extracted_table_name, extracted_columns, remainder_table_name, schema, engine, fk_column_name=None):\n extracted_table = create_mathesar_table(\n extracted_table_name,\n schema,\n extracted_columns,\n engine,\n )\n fk_column_name = fk_column_name if fk_column_name else f\"{extracted_table.name}_{constants.ID}\"\n remainder_table_oid = get_oid_from_table(remainder_table_name, schema, engine)\n extracted_table_oid = get_oid_from_table(extracted_table_name, schema, engine)\n create_foreign_key_link(engine, schema, fk_column_name, remainder_table_oid, extracted_table_oid)\n # TODO reuse metadata\n remainder_table_with_fk_key = reflect_table(remainder_table_name, schema, engine, metadata=get_empty_metadata())\n return extracted_table, remainder_table_with_fk_key, fk_column_name\n\n\ndef _create_split_insert_stmt(old_table, extracted_table, extracted_columns, remainder_fk_name):\n SPLIT_ID = f\"{constants.MATHESAR_PREFIX}_split_column_alias\"\n extracted_column_names = [col.name for col in extracted_columns]\n split_cte = select(\n [\n old_table,\n func.dense_rank().over(order_by=extracted_columns).label(SPLIT_ID)\n ]\n ).cte()\n cte_extraction_columns = (\n [split_cte.columns[SPLIT_ID]]\n + [split_cte.columns[n] for n in extracted_column_names]\n )\n extract_sel = select(\n cte_extraction_columns,\n distinct=True\n )\n extract_ins_cte = (\n extracted_table\n .insert()\n .from_select([constants.ID] + extracted_column_names, extract_sel)\n .returning(literal(1))\n .cte()\n )\n fk_update_dict = {remainder_fk_name: split_cte.c[SPLIT_ID]}\n split_ins = (\n old_table\n .update().values(**fk_update_dict).\n where(old_table.c[constants.ID] == split_cte.c[constants.ID],\n exists(extract_ins_cte.select()))\n )\n return split_ins\n\n\ndef extract_columns_from_table(old_table_oid, extracted_column_attnums, extracted_table_name, schema, engine, relationship_fk_column_name=None):\n # TODO reuse metadata\n old_table = reflect_table_from_oid(old_table_oid, engine, metadata=get_empty_metadata())\n old_table_name = old_table.name\n old_columns = (MathesarColumn.from_column(col) for col in old_table.columns)\n old_non_default_columns = [\n col for col in old_columns if not col.is_default\n ]\n # TODO reuse metadata\n extracted_column_names = get_column_names_from_attnums(old_table_oid, extracted_column_attnums, engine, metadata=get_empty_metadata())\n extracted_columns = [\n col for col in old_non_default_columns if col.name in extracted_column_names\n ]\n with engine.begin() as conn:\n extracted_table, 
remainder_table_with_fk_column, fk_column_name = _create_split_tables(\n extracted_table_name,\n extracted_columns,\n old_table_name,\n schema,\n engine,\n relationship_fk_column_name\n )\n split_ins = _create_split_insert_stmt(\n remainder_table_with_fk_column,\n extracted_table,\n extracted_columns,\n fk_column_name,\n )\n conn.execute(split_ins)\n update_pk_sequence_to_latest(conn, engine, extracted_table)\n\n remainder_table_oid = get_oid_from_table(remainder_table_with_fk_column.name, schema, engine)\n deletion_column_data = [\n {'attnum': column_attnum, 'delete': True}\n for column_attnum in extracted_column_attnums\n ]\n batch_alter_table_drop_columns(remainder_table_oid, deletion_column_data, conn, engine)\n return extracted_table, remainder_table_with_fk_column, fk_column_name\n\n\ndef update_pk_sequence_to_latest(conn, engine, extracted_table):\n _preparer = engine.dialect.identifier_preparer\n quoted_table_name = _preparer.quote(extracted_table.schema) + \".\" + _preparer.quote(extracted_table.name)\n update_pk_sequence_stmt = func.setval(\n # `pg_get_serial_sequence needs a string of the Table name\n func.pg_get_serial_sequence(\n quoted_table_name,\n extracted_table.c[constants.ID].name\n ),\n # If the table can be empty, start from 1 instead of using Null\n func.coalesce(\n func.max(extracted_table.c[constants.ID]) + 1,\n 1\n ),\n # Set the sequence to use the last value of the sequence\n # Setting is_called field to false, meaning that the next nextval will not advance the sequence before returning a value.\n # We need to do it as our default coalesce value is 1 instead of 0\n # Refer the postgres docs https://www.postgresql.org/docs/current/functions-sequence.html\n False\n )\n conn.execute(\n select(update_pk_sequence_stmt)\n )\n", "path": "db/tables/operations/split.py"}]} | 1,674 | 443 |
gh_patches_debug_12548 | rasdani/github-patches | git_diff | mabel-dev__opteryx-1691 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
🪲 VIEWs load error should be in debug mode only
### Thank you for taking the time to report a problem with Opteryx.
_To help us to respond to your request we ask that you try to provide the below detail about the bug._
**Describe the bug** _A clear and specific description of what the bug is. What the error, incorrect or unexpected behaviour was._
**Expected behaviour** _A clear and concise description of what you expected to happen._
**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._
~~~sql
~~~
**Additional context** _Add any other context about the problem here, for example what you have done to try to diagnose or workaround the problem._
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opteryx/planner/views/__init__.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import orjson
14
15 from opteryx.planner.logical_planner import LogicalPlan
16
17
18 def _load_views():
19 try:
20 with open("views.json", "rb") as defs:
21 return orjson.loads(defs.read())
22 except Exception as err:
23 # DEBUG:: log (f"[OPTERYX] Unable to open views definition file. {err}")
24 return {}
25
26
27 VIEWS = _load_views()
28
29
30 def is_view(view_name: str) -> bool:
31 return view_name in VIEWS
32
33
34 def view_as_plan(view_name: str) -> LogicalPlan:
35 from opteryx.planner.logical_planner import do_logical_planning_phase
36 from opteryx.third_party import sqloxide
37 from opteryx.utils.sql import clean_statement
38 from opteryx.utils.sql import remove_comments
39
40 operation = VIEWS.get(view_name)["statement"]
41
42 clean_sql = clean_statement(remove_comments(operation))
43 parsed_statements = sqloxide.parse_sql(clean_sql, dialect="mysql")
44 logical_plan, _, _ = next(do_logical_planning_phase(parsed_statements))
45
46 return logical_plan
47
```
Path: `opteryx/__version__.py`
Content:
```
1 __build__ = 523
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Store the version here so:
17 1) we don't load dependencies by storing it in __init__.py
18 2) we can import it in setup.py for the same reason
19 """
20 from enum import Enum # isort: skip
21
22
23 class VersionStatus(Enum):
24 ALPHA = "alpha"
25 BETA = "beta"
26 RELEASE = "release"
27
28
29 _major = 0
30 _minor = 16
31 _revision = 0
32 _status = VersionStatus.ALPHA
33
34 __author__ = "@joocer"
35 __version__ = f"{_major}.{_minor}.{_revision}" + (
36 f"-{_status.value}.{__build__}" if _status != VersionStatus.RELEASE else ""
37 )
38
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opteryx/__version__.py b/opteryx/__version__.py
--- a/opteryx/__version__.py
+++ b/opteryx/__version__.py
@@ -1,4 +1,4 @@
-__build__ = 523
+__build__ = 526
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
diff --git a/opteryx/planner/views/__init__.py b/opteryx/planner/views/__init__.py
--- a/opteryx/planner/views/__init__.py
+++ b/opteryx/planner/views/__init__.py
@@ -20,7 +20,7 @@
with open("views.json", "rb") as defs:
return orjson.loads(defs.read())
except Exception as err:
- print(f"[OPTERYX] Unable to open views definition file. {err}")
+ # DEBUG:: log (f"[OPTERYX] Unable to open views definition file. {err}")
return {}
| {"golden_diff": "diff --git a/opteryx/__version__.py b/opteryx/__version__.py\n--- a/opteryx/__version__.py\n+++ b/opteryx/__version__.py\n@@ -1,4 +1,4 @@\n-__build__ = 523\n+__build__ = 526\n \n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\ndiff --git a/opteryx/planner/views/__init__.py b/opteryx/planner/views/__init__.py\n--- a/opteryx/planner/views/__init__.py\n+++ b/opteryx/planner/views/__init__.py\n@@ -20,7 +20,7 @@\n with open(\"views.json\", \"rb\") as defs:\n return orjson.loads(defs.read())\n except Exception as err:\n- print(f\"[OPTERYX] Unable to open views definition file. {err}\")\n+ # DEBUG:: log (f\"[OPTERYX] Unable to open views definition file. {err}\")\n return {}\n", "issue": "\ud83e\udeb2 VIEWs load error should be in debug mode only\n### Thank you for taking the time to report a problem with Opteryx.\r\n_To help us to respond to your request we ask that you try to provide the below detail about the bug._\r\n\r\n**Describe the bug** _A clear and specific description of what the bug is. What the error, incorrect or unexpected behaviour was._\r\n\r\n\r\n**Expected behaviour** _A clear and concise description of what you expected to happen._\r\n\r\n\r\n**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._\r\n\r\n~~~sql\r\n\r\n~~~\r\n\r\n**Additional context** _Add any other context about the problem here, for example what you have done to try to diagnose or workaround the problem._\r\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport orjson\n\nfrom opteryx.planner.logical_planner import LogicalPlan\n\n\ndef _load_views():\n try:\n with open(\"views.json\", \"rb\") as defs:\n return orjson.loads(defs.read())\n except Exception as err:\n # DEBUG:: log (f\"[OPTERYX] Unable to open views definition file. 
{err}\")\n return {}\n\n\nVIEWS = _load_views()\n\n\ndef is_view(view_name: str) -> bool:\n return view_name in VIEWS\n\n\ndef view_as_plan(view_name: str) -> LogicalPlan:\n from opteryx.planner.logical_planner import do_logical_planning_phase\n from opteryx.third_party import sqloxide\n from opteryx.utils.sql import clean_statement\n from opteryx.utils.sql import remove_comments\n\n operation = VIEWS.get(view_name)[\"statement\"]\n\n clean_sql = clean_statement(remove_comments(operation))\n parsed_statements = sqloxide.parse_sql(clean_sql, dialect=\"mysql\")\n logical_plan, _, _ = next(do_logical_planning_phase(parsed_statements))\n\n return logical_plan\n", "path": "opteryx/planner/views/__init__.py"}, {"content": "__build__ = 523\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nStore the version here so:\n1) we don't load dependencies by storing it in __init__.py\n2) we can import it in setup.py for the same reason\n\"\"\"\nfrom enum import Enum # isort: skip\n\n\nclass VersionStatus(Enum):\n ALPHA = \"alpha\"\n BETA = \"beta\"\n RELEASE = \"release\"\n\n\n_major = 0\n_minor = 16\n_revision = 0\n_status = VersionStatus.ALPHA\n\n__author__ = \"@joocer\"\n__version__ = f\"{_major}.{_minor}.{_revision}\" + (\n f\"-{_status.value}.{__build__}\" if _status != VersionStatus.RELEASE else \"\"\n)\n", "path": "opteryx/__version__.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport orjson\n\nfrom opteryx.planner.logical_planner import LogicalPlan\n\n\ndef _load_views():\n try:\n with open(\"views.json\", \"rb\") as defs:\n return orjson.loads(defs.read())\n except Exception as err:\n # DEBUG:: log (f\"[OPTERYX] Unable to open views definition file. 
{err}\")\n return {}\n\n\nVIEWS = _load_views()\n\n\ndef is_view(view_name: str) -> bool:\n return view_name in VIEWS\n\n\ndef view_as_plan(view_name: str) -> LogicalPlan:\n from opteryx.planner.logical_planner import do_logical_planning_phase\n from opteryx.third_party import sqloxide\n from opteryx.utils.sql import clean_statement\n from opteryx.utils.sql import remove_comments\n\n operation = VIEWS.get(view_name)[\"statement\"]\n\n clean_sql = clean_statement(remove_comments(operation))\n parsed_statements = sqloxide.parse_sql(clean_sql, dialect=\"mysql\")\n logical_plan, _, _ = next(do_logical_planning_phase(parsed_statements))\n\n return logical_plan\n", "path": "opteryx/planner/views/__init__.py"}, {"content": "__build__ = 526\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nStore the version here so:\n1) we don't load dependencies by storing it in __init__.py\n2) we can import it in setup.py for the same reason\n\"\"\"\nfrom enum import Enum # isort: skip\n\n\nclass VersionStatus(Enum):\n ALPHA = \"alpha\"\n BETA = \"beta\"\n RELEASE = \"release\"\n\n\n_major = 0\n_minor = 16\n_revision = 0\n_status = VersionStatus.ALPHA\n\n__author__ = \"@joocer\"\n__version__ = f\"{_major}.{_minor}.{_revision}\" + (\n f\"-{_status.value}.{__build__}\" if _status != VersionStatus.RELEASE else \"\"\n)\n", "path": "opteryx/__version__.py"}]} | 1,248 | 238 |
gh_patches_debug_20772 | rasdani/github-patches | git_diff | nvaccess__nvda-10144 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error in alpha with built-in audacity appmod
### Steps to reproduce:
1. open audacity.
2. select audio. ctrl+a works.
3. press alt+c, and press c until "compressor" is found.
4. press enter.
5. press tab.
### Actual behavior:
```
ERROR - eventHandler.executeEvent (23:07:33.553):
error executing event: gainFocus on <NVDAObjects.IAccessible.IAccessible object at 0x03318FF0> with extra args of {}
Traceback (most recent call last):
File "eventHandler.pyc", line 150, in executeEvent
File "eventHandler.pyc", line 163, in doPreGainFocus
File "api.pyc", line 109, in setFocusObject
File "baseObject.pyc", line 46, in __get__
File "baseObject.pyc", line 149, in _getPropertyViaCache
File "NVDAObjects\IAccessible\__init__.pyc", line 1695, in _get_container
File "IAccessibleHandler.pyc", line 950, in findGroupboxObject
File "NVDAObjects\IAccessible\__init__.pyc", line 48, in getNVDAObjectFromEvent
File "NVDAObjects\__init__.pyc", line 137, in __call__
File "appModules\audacity.pyc", line 15, in event_NVDAObject_init
AttributeError: 'NoneType' object has no attribute 'replace'
```
### Expected behavior:
Items in the UI read.
### System configuration
#### NVDA installed/portable/running from source:
installed
#### NVDA version:
Version: alpha-18399,26485a09
#### Windows version:
^10
#### Name and version of other software in use when reproducing the issue:
Audacity: 2.3.1
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `source/appModules/audacity.py`
Content:
```
1 # -*- coding: UTF-8 -*-
2 #appModules/audacity.py
3 #A part of NonVisual Desktop Access (NVDA)
4 #Copyright (C) 2006-2018 NV Access Limited, Robert Hänggi
5 #This file is covered by the GNU General Public License.
6 #See the file COPYING for more details.
7
8 import appModuleHandler
9 import controlTypes
10
11 class AppModule(appModuleHandler.AppModule):
12
13 def event_NVDAObject_init(self,obj):
14 if obj.windowClassName=="Button" and not obj.role in [controlTypes.ROLE_MENUBAR, controlTypes.ROLE_MENUITEM, controlTypes.ROLE_POPUPMENU]:
15 obj.name=obj.name.replace('&','')
16
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/source/appModules/audacity.py b/source/appModules/audacity.py
--- a/source/appModules/audacity.py
+++ b/source/appModules/audacity.py
@@ -1,9 +1,8 @@
# -*- coding: UTF-8 -*-
-#appModules/audacity.py
-#A part of NonVisual Desktop Access (NVDA)
-#Copyright (C) 2006-2018 NV Access Limited, Robert Hänggi
-#This file is covered by the GNU General Public License.
-#See the file COPYING for more details.
+# A part of NonVisual Desktop Access (NVDA)
+# Copyright (C) 2006-2019 NV Access Limited, Robert Hänggi, Łukasz Golonka
+# This file is covered by the GNU General Public License.
+# See the file COPYING for more details.
import appModuleHandler
import controlTypes
@@ -11,5 +10,9 @@
class AppModule(appModuleHandler.AppModule):
def event_NVDAObject_init(self,obj):
- if obj.windowClassName=="Button" and not obj.role in [controlTypes.ROLE_MENUBAR, controlTypes.ROLE_MENUITEM, controlTypes.ROLE_POPUPMENU]:
+ if (
+ obj.windowClassName == "Button"
+ and obj.role not in [controlTypes.ROLE_MENUBAR, controlTypes.ROLE_MENUITEM, controlTypes.ROLE_POPUPMENU]
+ and obj.name is not None
+ ):
obj.name=obj.name.replace('&','')
| {"golden_diff": "diff --git a/source/appModules/audacity.py b/source/appModules/audacity.py\n--- a/source/appModules/audacity.py\n+++ b/source/appModules/audacity.py\n@@ -1,9 +1,8 @@\n # -*- coding: UTF-8 -*-\r\n-#appModules/audacity.py\r\n-#A part of NonVisual Desktop Access (NVDA)\r\n-#Copyright (C) 2006-2018 NV Access Limited, Robert H\u00e4nggi\r\n-#This file is covered by the GNU General Public License.\r\n-#See the file COPYING for more details.\r\n+# A part of NonVisual Desktop Access (NVDA)\r\n+# Copyright (C) 2006-2019 NV Access Limited, Robert H\u00e4nggi, \u0141ukasz Golonka\r\n+# This file is covered by the GNU General Public License.\r\n+# See the file COPYING for more details.\r\n \r\n import appModuleHandler\r\n import controlTypes\r\n@@ -11,5 +10,9 @@\n class AppModule(appModuleHandler.AppModule):\r\n \r\n \tdef event_NVDAObject_init(self,obj):\r\n-\t\tif obj.windowClassName==\"Button\" and not obj.role in [controlTypes.ROLE_MENUBAR, controlTypes.ROLE_MENUITEM, controlTypes.ROLE_POPUPMENU]:\r\n+\t\tif (\r\n+\t\t\tobj.windowClassName == \"Button\"\r\n+\t\t\tand obj.role not in [controlTypes.ROLE_MENUBAR, controlTypes.ROLE_MENUITEM, controlTypes.ROLE_POPUPMENU]\r\n+\t\t\tand obj.name is not None\r\n+\t\t):\r\n \t\t\tobj.name=obj.name.replace('&','')\n", "issue": "Error in alpha with built-in audacity appmod\n### Steps to reproduce:\r\n\r\n1. open audacity.\r\n2. select audio. ctrl+a works.\r\n3. press alt+c, and press c until \"compresser\" is found.\r\n4. press enter.\r\n5. press tab.\r\n\r\n### Actual behavior:\r\n\r\n```\r\nERROR - eventHandler.executeEvent (23:07:33.553):\r\nerror executing event: gainFocus on <NVDAObjects.IAccessible.IAccessible object at 0x03318FF0> with extra args of {}\r\nTraceback (most recent call last):\r\n File \"eventHandler.pyc\", line 150, in executeEvent\r\n File \"eventHandler.pyc\", line 163, in doPreGainFocus\r\n File \"api.pyc\", line 109, in setFocusObject\r\n File \"baseObject.pyc\", line 46, in __get__\r\n File \"baseObject.pyc\", line 149, in _getPropertyViaCache\r\n File \"NVDAObjects\\IAccessible\\__init__.pyc\", line 1695, in _get_container\r\n File \"IAccessibleHandler.pyc\", line 950, in findGroupboxObject\r\n File \"NVDAObjects\\IAccessible\\__init__.pyc\", line 48, in getNVDAObjectFromEvent\r\n File \"NVDAObjects\\__init__.pyc\", line 137, in __call__\r\n File \"appModules\\audacity.pyc\", line 15, in event_NVDAObject_init\r\nAttributeError: 'NoneType' object has no attribute 'replace'\r\n```\r\n\r\n### Expected behavior:\r\n\r\nItems in the UI read.\r\n\r\n### System configuration\r\n#### NVDA installed/portable/running from source:\r\ninstalled\r\n\r\n#### NVDA version:\r\nVersion: alpha-18399,26485a09\r\n\r\n\r\n#### Windows version:\r\n^10\r\n\r\n#### Name and version of other software in use when reproducing the issue:\r\nAudacity: 2.3.1\r\n\r\n\n", "before_files": [{"content": "# -*- coding: UTF-8 -*-\r\n#appModules/audacity.py\r\n#A part of NonVisual Desktop Access (NVDA)\r\n#Copyright (C) 2006-2018 NV Access Limited, Robert H\u00e4nggi\r\n#This file is covered by the GNU General Public License.\r\n#See the file COPYING for more details.\r\n\r\nimport appModuleHandler\r\nimport controlTypes\r\n\r\nclass AppModule(appModuleHandler.AppModule):\r\n\r\n\tdef event_NVDAObject_init(self,obj):\r\n\t\tif obj.windowClassName==\"Button\" and not obj.role in [controlTypes.ROLE_MENUBAR, controlTypes.ROLE_MENUITEM, controlTypes.ROLE_POPUPMENU]:\r\n\t\t\tobj.name=obj.name.replace('&','')\r\n", "path": 
"source/appModules/audacity.py"}], "after_files": [{"content": "# -*- coding: UTF-8 -*-\r\n# A part of NonVisual Desktop Access (NVDA)\r\n# Copyright (C) 2006-2019 NV Access Limited, Robert H\u00e4nggi, \u0141ukasz Golonka\r\n# This file is covered by the GNU General Public License.\r\n# See the file COPYING for more details.\r\n\r\nimport appModuleHandler\r\nimport controlTypes\r\n\r\nclass AppModule(appModuleHandler.AppModule):\r\n\r\n\tdef event_NVDAObject_init(self,obj):\r\n\t\tif (\r\n\t\t\tobj.windowClassName == \"Button\"\r\n\t\t\tand obj.role not in [controlTypes.ROLE_MENUBAR, controlTypes.ROLE_MENUITEM, controlTypes.ROLE_POPUPMENU]\r\n\t\t\tand obj.name is not None\r\n\t\t):\r\n\t\t\tobj.name=obj.name.replace('&','')\r\n", "path": "source/appModules/audacity.py"}]} | 881 | 339 |
gh_patches_debug_20232 | rasdani/github-patches | git_diff | ansible__ansible-17394 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
network _template not checking arguments correctly
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/plugins/action/net_template.py
##### ANSIBLE VERSION
devel
##### CONFIGURATION
##### OS / ENVIRONMENT
##### SUMMARY
##### STEPS TO REPRODUCE
```
---
- name: Backup existing configuration
eos_template:
backup: yes
```
No other `vars` set
same for `ios_template`
##### EXPECTED RESULTS
`msg": "missing required arguments: host"` (or src)
##### ACTUAL RESULTS
```
File "/home/gundalow/git/ansible-inc/ansible/lib/ansible/plugins/action/net_template.py", line 43, in run
self._handle_template()
File "/home/gundalow/git/ansible-inc/ansible/lib/ansible/plugins/action/net_template.py", line 80, in _handle_template
if os.path.isabs(src) or urlparse.urlsplit('src').scheme:
File "/usr/lib/python2.7/posixpath.py", line 54, in isabs
return s.startswith('/')
AttributeError: 'NoneType' object has no attribute 'startswith'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/ansible/plugins/action/net_template.py`
Content:
```
1 #
2 # Copyright 2015 Peter Sprygada <[email protected]>
3 #
4 # This file is part of Ansible
5 #
6 # Ansible is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # Ansible is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
18 #
19 from __future__ import (absolute_import, division, print_function)
20 __metaclass__ = type
21
22 import sys
23 import os
24 import time
25 import glob
26 import urlparse
27
28 from ansible.plugins.action import ActionBase
29 from ansible.utils.boolean import boolean
30 from ansible.utils.unicode import to_unicode
31
32 BOOLEANS = ('true', 'false', 'yes', 'no')
33
34 class ActionModule(ActionBase):
35
36 TRANSFERS_FILES = False
37
38 def run(self, tmp=None, task_vars=None):
39 result = super(ActionModule, self).run(tmp, task_vars)
40 result['changed'] = False
41
42 try:
43 self._handle_template()
44 except ValueError as exc:
45 return dict(failed=True, msg=exc.message)
46
47 result.update(self._execute_module(module_name=self._task.action,
48 module_args=self._task.args, task_vars=task_vars))
49
50 if self._task.args.get('backup') and result.get('_backup'):
51 # User requested backup and no error occurred in module.
52 # NOTE: If there is a parameter error, _backup key may not be in results.
53 self._write_backup(task_vars['inventory_hostname'], result['_backup'])
54
55 if '_backup' in result:
56 del result['_backup']
57
58 return result
59
60 def _get_working_path(self):
61 cwd = self._loader.get_basedir()
62 if self._task._role is not None:
63 cwd = self._task._role._role_path
64 return cwd
65
66 def _write_backup(self, host, contents):
67 backup_path = self._get_working_path() + '/backup'
68 if not os.path.exists(backup_path):
69 os.mkdir(backup_path)
70 for fn in glob.glob('%s/%s*' % (backup_path, host)):
71 os.remove(fn)
72 tstamp = time.strftime("%Y-%m-%d@%H:%M:%S", time.localtime(time.time()))
73 filename = '%s/%s_config.%s' % (backup_path, host, tstamp)
74 open(filename, 'w').write(contents)
75
76 def _handle_template(self):
77 src = self._task.args.get('src')
78 working_path = self._get_working_path()
79
80 if os.path.isabs(src) or urlparse.urlsplit('src').scheme:
81 source = src
82 else:
83 source = self._loader.path_dwim_relative(working_path, 'templates', src)
84 if not source:
85 source = self._loader.path_dwim_relative(working_path, src)
86
87 if not os.path.exists(source):
88 return
89
90 try:
91 with open(source, 'r') as f:
92 template_data = to_unicode(f.read())
93 except IOError:
94 return dict(failed=True, msg='unable to load src file')
95
96 # Create a template search path in the following order:
97 # [working_path, self_role_path, dependent_role_paths, dirname(source)]
98 searchpath = [working_path]
99 if self._task._role is not None:
100 searchpath.append(self._task._role._role_path)
101 dep_chain = self._task._block.get_dep_chain()
102 if dep_chain is not None:
103 for role in dep_chain:
104 searchpath.append(role._role_path)
105 searchpath.append(os.path.dirname(source))
106 self._templar.environment.loader.searchpath = searchpath
107 self._task.args['src'] = self._templar.template(template_data)
108
109
110
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/ansible/plugins/action/net_template.py b/lib/ansible/plugins/action/net_template.py
--- a/lib/ansible/plugins/action/net_template.py
+++ b/lib/ansible/plugins/action/net_template.py
@@ -41,7 +41,7 @@
try:
self._handle_template()
- except ValueError as exc:
+ except (ValueError, AttributeError) as exc:
return dict(failed=True, msg=exc.message)
result.update(self._execute_module(module_name=self._task.action,
@@ -75,9 +75,12 @@
def _handle_template(self):
src = self._task.args.get('src')
+ if not src:
+ raise ValueError('missing required arguments: src')
+
working_path = self._get_working_path()
- if os.path.isabs(src) or urlparse.urlsplit('src').scheme:
+ if os.path.isabs(src) or urlparse.urlsplit(src).scheme:
source = src
else:
source = self._loader.path_dwim_relative(working_path, 'templates', src)
| {"golden_diff": "diff --git a/lib/ansible/plugins/action/net_template.py b/lib/ansible/plugins/action/net_template.py\n--- a/lib/ansible/plugins/action/net_template.py\n+++ b/lib/ansible/plugins/action/net_template.py\n@@ -41,7 +41,7 @@\n \n try:\n self._handle_template()\n- except ValueError as exc:\n+ except (ValueError, AttributeError) as exc:\n return dict(failed=True, msg=exc.message)\n \n result.update(self._execute_module(module_name=self._task.action,\n@@ -75,9 +75,12 @@\n \n def _handle_template(self):\n src = self._task.args.get('src')\n+ if not src:\n+ raise ValueError('missing required arguments: src')\n+\n working_path = self._get_working_path()\n \n- if os.path.isabs(src) or urlparse.urlsplit('src').scheme:\n+ if os.path.isabs(src) or urlparse.urlsplit(src).scheme:\n source = src\n else:\n source = self._loader.path_dwim_relative(working_path, 'templates', src)\n", "issue": "network _template not checking arguments correctly\n##### ISSUE TYPE\n- Bug Report\n##### COMPONENT NAME\n\nlib/ansible/plugins/action/net_template.py\n##### ANSIBLE VERSION\n\ndevel\n##### CONFIGURATION\n##### OS / ENVIRONMENT\n##### SUMMARY\n##### STEPS TO REPRODUCE\n\n```\n\n---\n- name: Backup existing configuration\n eos_template:\n backup: yes\n```\n\nNo other `vars` set\n\nsame for `ios_template`\n##### EXPECTED RESULTS\n\n`msg\": \"missing required arguments: host\"` (or src)\n##### ACTUAL RESULTS\n\n```\n File \"/home/gundalow/git/ansible-inc/ansible/lib/ansible/plugins/action/net_template.py\", line 43, in run\n self._handle_template()\n File \"/home/gundalow/git/ansible-inc/ansible/lib/ansible/plugins/action/net_template.py\", line 80, in _handle_template\n if os.path.isabs(src) or urlparse.urlsplit('src').scheme:\n File \"/usr/lib/python2.7/posixpath.py\", line 54, in isabs\n return s.startswith('/')\nAttributeError: 'NoneType' object has no attribute 'startswith'\n```\n\n", "before_files": [{"content": "#\n# Copyright 2015 Peter Sprygada <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. 
If not, see <http://www.gnu.org/licenses/>.\n#\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nimport sys\nimport os\nimport time\nimport glob\nimport urlparse\n\nfrom ansible.plugins.action import ActionBase\nfrom ansible.utils.boolean import boolean\nfrom ansible.utils.unicode import to_unicode\n\nBOOLEANS = ('true', 'false', 'yes', 'no')\n\nclass ActionModule(ActionBase):\n\n TRANSFERS_FILES = False\n\n def run(self, tmp=None, task_vars=None):\n result = super(ActionModule, self).run(tmp, task_vars)\n result['changed'] = False\n\n try:\n self._handle_template()\n except ValueError as exc:\n return dict(failed=True, msg=exc.message)\n\n result.update(self._execute_module(module_name=self._task.action,\n module_args=self._task.args, task_vars=task_vars))\n\n if self._task.args.get('backup') and result.get('_backup'):\n # User requested backup and no error occurred in module.\n # NOTE: If there is a parameter error, _backup key may not be in results.\n self._write_backup(task_vars['inventory_hostname'], result['_backup'])\n\n if '_backup' in result:\n del result['_backup']\n\n return result\n\n def _get_working_path(self):\n cwd = self._loader.get_basedir()\n if self._task._role is not None:\n cwd = self._task._role._role_path\n return cwd\n\n def _write_backup(self, host, contents):\n backup_path = self._get_working_path() + '/backup'\n if not os.path.exists(backup_path):\n os.mkdir(backup_path)\n for fn in glob.glob('%s/%s*' % (backup_path, host)):\n os.remove(fn)\n tstamp = time.strftime(\"%Y-%m-%d@%H:%M:%S\", time.localtime(time.time()))\n filename = '%s/%s_config.%s' % (backup_path, host, tstamp)\n open(filename, 'w').write(contents)\n\n def _handle_template(self):\n src = self._task.args.get('src')\n working_path = self._get_working_path()\n\n if os.path.isabs(src) or urlparse.urlsplit('src').scheme:\n source = src\n else:\n source = self._loader.path_dwim_relative(working_path, 'templates', src)\n if not source:\n source = self._loader.path_dwim_relative(working_path, src)\n\n if not os.path.exists(source):\n return\n\n try:\n with open(source, 'r') as f:\n template_data = to_unicode(f.read())\n except IOError:\n return dict(failed=True, msg='unable to load src file')\n\n # Create a template search path in the following order:\n # [working_path, self_role_path, dependent_role_paths, dirname(source)]\n searchpath = [working_path]\n if self._task._role is not None:\n searchpath.append(self._task._role._role_path)\n dep_chain = self._task._block.get_dep_chain()\n if dep_chain is not None:\n for role in dep_chain:\n searchpath.append(role._role_path)\n searchpath.append(os.path.dirname(source))\n self._templar.environment.loader.searchpath = searchpath\n self._task.args['src'] = self._templar.template(template_data)\n\n\n", "path": "lib/ansible/plugins/action/net_template.py"}], "after_files": [{"content": "#\n# Copyright 2015 Peter Sprygada <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n#\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nimport sys\nimport os\nimport time\nimport glob\nimport urlparse\n\nfrom ansible.plugins.action import ActionBase\nfrom ansible.utils.boolean import boolean\nfrom ansible.utils.unicode import to_unicode\n\nBOOLEANS = ('true', 'false', 'yes', 'no')\n\nclass ActionModule(ActionBase):\n\n TRANSFERS_FILES = False\n\n def run(self, tmp=None, task_vars=None):\n result = super(ActionModule, self).run(tmp, task_vars)\n result['changed'] = False\n\n try:\n self._handle_template()\n except (ValueError, AttributeError) as exc:\n return dict(failed=True, msg=exc.message)\n\n result.update(self._execute_module(module_name=self._task.action,\n module_args=self._task.args, task_vars=task_vars))\n\n if self._task.args.get('backup') and result.get('_backup'):\n # User requested backup and no error occurred in module.\n # NOTE: If there is a parameter error, _backup key may not be in results.\n self._write_backup(task_vars['inventory_hostname'], result['_backup'])\n\n if '_backup' in result:\n del result['_backup']\n\n return result\n\n def _get_working_path(self):\n cwd = self._loader.get_basedir()\n if self._task._role is not None:\n cwd = self._task._role._role_path\n return cwd\n\n def _write_backup(self, host, contents):\n backup_path = self._get_working_path() + '/backup'\n if not os.path.exists(backup_path):\n os.mkdir(backup_path)\n for fn in glob.glob('%s/%s*' % (backup_path, host)):\n os.remove(fn)\n tstamp = time.strftime(\"%Y-%m-%d@%H:%M:%S\", time.localtime(time.time()))\n filename = '%s/%s_config.%s' % (backup_path, host, tstamp)\n open(filename, 'w').write(contents)\n\n def _handle_template(self):\n src = self._task.args.get('src')\n if not src:\n raise ValueError('missing required arguments: src')\n\n working_path = self._get_working_path()\n\n if os.path.isabs(src) or urlparse.urlsplit(src).scheme:\n source = src\n else:\n source = self._loader.path_dwim_relative(working_path, 'templates', src)\n if not source:\n source = self._loader.path_dwim_relative(working_path, src)\n\n if not os.path.exists(source):\n return\n\n try:\n with open(source, 'r') as f:\n template_data = to_unicode(f.read())\n except IOError:\n return dict(failed=True, msg='unable to load src file')\n\n # Create a template search path in the following order:\n # [working_path, self_role_path, dependent_role_paths, dirname(source)]\n searchpath = [working_path]\n if self._task._role is not None:\n searchpath.append(self._task._role._role_path)\n dep_chain = self._task._block.get_dep_chain()\n if dep_chain is not None:\n for role in dep_chain:\n searchpath.append(role._role_path)\n searchpath.append(os.path.dirname(source))\n self._templar.environment.loader.searchpath = searchpath\n self._task.args['src'] = self._templar.template(template_data)\n\n\n", "path": "lib/ansible/plugins/action/net_template.py"}]} | 1,638 | 235 |
gh_patches_debug_28348 | rasdani/github-patches | git_diff | ibis-project__ibis-3263 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
docs: add blurb about the datafusion backend
After #2918 lands, we need to mention it in the docs.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ibis/backends/datafusion/__init__.py`
Content:
```
1 from __future__ import annotations
2
3 import re
4 from typing import Mapping
5
6 import datafusion as df
7 import pyarrow as pa
8
9 import ibis.common.exceptions as com
10 import ibis.expr.schema as sch
11 import ibis.expr.types as ir
12 from ibis.backends.base import BaseBackend
13
14 from .compiler import translate
15
16
17 def _to_pyarrow_table(frame):
18 batches = frame.collect()
19 if batches:
20 return pa.Table.from_batches(batches)
21 else:
22 # TODO(kszucs): file a bug to datafusion because the fields'
23 # nullability from frame.schema() is not always consistent
24 # with the first record batch's schema
25 return pa.Table.from_batches(batches, schema=frame.schema())
26
27
28 class Backend(BaseBackend):
29 name = 'datafusion'
30 builder = None
31
32 @property
33 def version(self):
34 try:
35 import importlib.metadata as importlib_metadata
36 except ImportError:
37 # TODO: remove this when Python 3.7 support is dropped
38 import importlib_metadata
39 return importlib_metadata.version("datafusion")
40
41 def do_connect(self, config):
42 """
43 Create a DataFusionClient for use with Ibis
44
45 Parameters
46 ----------
47 config : DataFusionContext or dict
48
49 Returns
50 -------
51 DataFusionClient
52 """
53 if isinstance(config, df.ExecutionContext):
54 self._context = config
55 else:
56 self._context = df.ExecutionContext()
57
58 for name, path in config.items():
59 strpath = str(path)
60 if strpath.endswith('.csv'):
61 self.register_csv(name, path)
62 elif strpath.endswith('.parquet'):
63 self.register_parquet(name, path)
64 else:
65 raise ValueError(
66 "Currently the DataFusion backend only supports CSV "
67 "files with the extension .csv and Parquet files with "
68 "the .parquet extension."
69 )
70
71 def current_database(self):
72 raise NotImplementedError()
73
74 def list_databases(self, like: str = None) -> list[str]:
75 raise NotImplementedError()
76
77 def list_tables(self, like: str = None, database: str = None) -> list[str]:
78 """List the available tables."""
79 tables = list(self._context.tables())
80 if like is not None:
81 pattern = re.compile(like)
82 return list(filter(lambda t: pattern.findall(t), tables))
83 return tables
84
85 def table(self, name, schema=None):
86 catalog = self._context.catalog()
87 database = catalog.database('public')
88 table = database.table(name)
89 schema = sch.infer(table.schema)
90 return self.table_class(name, schema, self).to_expr()
91
92 def register_csv(self, name, path, schema=None):
93 self._context.register_csv(name, path, schema=schema)
94
95 def register_parquet(self, name, path, schema=None):
96 self._context.register_parquet(name, path, schema=schema)
97
98 def execute(
99 self,
100 expr: ir.Expr,
101 params: Mapping[ir.Expr, object] = None,
102 limit: str = 'default',
103 **kwargs,
104 ):
105 if isinstance(expr, ir.TableExpr):
106 frame = self.compile(expr, params, **kwargs)
107 table = _to_pyarrow_table(frame)
108 return table.to_pandas()
109 elif isinstance(expr, ir.ColumnExpr):
110 # expression must be named for the projection
111 expr = expr.name('tmp').to_projection()
112 frame = self.compile(expr, params, **kwargs)
113 table = _to_pyarrow_table(frame)
114 return table['tmp'].to_pandas()
115 elif isinstance(expr, ir.ScalarExpr):
116 if expr.op().root_tables():
117 # there are associated datafusion tables so convert the expr
118 # to a selection which we can directly convert to a datafusion
119 # plan
120 expr = expr.name('tmp').to_projection()
121 frame = self.compile(expr, params, **kwargs)
122 else:
123 # doesn't have any tables associated so create a plan from a
124 # dummy datafusion table
125 compiled = self.compile(expr, params, **kwargs)
126 frame = self._context.empty_table().select(compiled)
127 table = _to_pyarrow_table(frame)
128 return table[0][0].as_py()
129 else:
130 raise com.IbisError(
131 f"Cannot execute expression of type: {type(expr)}"
132 )
133
134 def compile(
135 self, expr: ir.Expr, params: Mapping[ir.Expr, object] = None, **kwargs
136 ):
137 return translate(expr)
138
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ibis/backends/datafusion/__init__.py b/ibis/backends/datafusion/__init__.py
--- a/ibis/backends/datafusion/__init__.py
+++ b/ibis/backends/datafusion/__init__.py
@@ -83,6 +83,20 @@
return tables
def table(self, name, schema=None):
+ """Get an ibis expression representing a DataFusion table.
+
+ Parameters
+ ---------
+ name
+ The name of the table to retreive
+ schema
+ An optional schema
+
+ Returns
+ -------
+ ibis.expr.types.TableExpr
+ A table expression
+ """
catalog = self._context.catalog()
database = catalog.database('public')
table = database.table(name)
@@ -90,9 +104,31 @@
return self.table_class(name, schema, self).to_expr()
def register_csv(self, name, path, schema=None):
+ """Register a CSV file with with `name` located at `path`.
+
+ Parameters
+ ----------
+ name
+ The name of the table
+ path
+ The path to the CSV file
+ schema
+ An optional schema
+ """
self._context.register_csv(name, path, schema=schema)
def register_parquet(self, name, path, schema=None):
+ """Register a parquet file with with `name` located at `path`.
+
+ Parameters
+ ----------
+ name
+ The name of the table
+ path
+ The path to the parquet file
+ schema
+ An optional schema
+ """
self._context.register_parquet(name, path, schema=schema)
def execute(
| {"golden_diff": "diff --git a/ibis/backends/datafusion/__init__.py b/ibis/backends/datafusion/__init__.py\n--- a/ibis/backends/datafusion/__init__.py\n+++ b/ibis/backends/datafusion/__init__.py\n@@ -83,6 +83,20 @@\n return tables\n \n def table(self, name, schema=None):\n+ \"\"\"Get an ibis expression representing a DataFusion table.\n+\n+ Parameters\n+ ---------\n+ name\n+ The name of the table to retreive\n+ schema\n+ An optional schema\n+\n+ Returns\n+ -------\n+ ibis.expr.types.TableExpr\n+ A table expression\n+ \"\"\"\n catalog = self._context.catalog()\n database = catalog.database('public')\n table = database.table(name)\n@@ -90,9 +104,31 @@\n return self.table_class(name, schema, self).to_expr()\n \n def register_csv(self, name, path, schema=None):\n+ \"\"\"Register a CSV file with with `name` located at `path`.\n+\n+ Parameters\n+ ----------\n+ name\n+ The name of the table\n+ path\n+ The path to the CSV file\n+ schema\n+ An optional schema\n+ \"\"\"\n self._context.register_csv(name, path, schema=schema)\n \n def register_parquet(self, name, path, schema=None):\n+ \"\"\"Register a parquet file with with `name` located at `path`.\n+\n+ Parameters\n+ ----------\n+ name\n+ The name of the table\n+ path\n+ The path to the parquet file\n+ schema\n+ An optional schema\n+ \"\"\"\n self._context.register_parquet(name, path, schema=schema)\n \n def execute(\n", "issue": "docs: add blurb about the datafusion backend\nAfter #2918 lands, we need to mention it in the docs.\n", "before_files": [{"content": "from __future__ import annotations\n\nimport re\nfrom typing import Mapping\n\nimport datafusion as df\nimport pyarrow as pa\n\nimport ibis.common.exceptions as com\nimport ibis.expr.schema as sch\nimport ibis.expr.types as ir\nfrom ibis.backends.base import BaseBackend\n\nfrom .compiler import translate\n\n\ndef _to_pyarrow_table(frame):\n batches = frame.collect()\n if batches:\n return pa.Table.from_batches(batches)\n else:\n # TODO(kszucs): file a bug to datafusion because the fields'\n # nullability from frame.schema() is not always consistent\n # with the first record batch's schema\n return pa.Table.from_batches(batches, schema=frame.schema())\n\n\nclass Backend(BaseBackend):\n name = 'datafusion'\n builder = None\n\n @property\n def version(self):\n try:\n import importlib.metadata as importlib_metadata\n except ImportError:\n # TODO: remove this when Python 3.7 support is dropped\n import importlib_metadata\n return importlib_metadata.version(\"datafusion\")\n\n def do_connect(self, config):\n \"\"\"\n Create a DataFusionClient for use with Ibis\n\n Parameters\n ----------\n config : DataFusionContext or dict\n\n Returns\n -------\n DataFusionClient\n \"\"\"\n if isinstance(config, df.ExecutionContext):\n self._context = config\n else:\n self._context = df.ExecutionContext()\n\n for name, path in config.items():\n strpath = str(path)\n if strpath.endswith('.csv'):\n self.register_csv(name, path)\n elif strpath.endswith('.parquet'):\n self.register_parquet(name, path)\n else:\n raise ValueError(\n \"Currently the DataFusion backend only supports CSV \"\n \"files with the extension .csv and Parquet files with \"\n \"the .parquet extension.\"\n )\n\n def current_database(self):\n raise NotImplementedError()\n\n def list_databases(self, like: str = None) -> list[str]:\n raise NotImplementedError()\n\n def list_tables(self, like: str = None, database: str = None) -> list[str]:\n \"\"\"List the available tables.\"\"\"\n tables = list(self._context.tables())\n if like is not None:\n 
pattern = re.compile(like)\n return list(filter(lambda t: pattern.findall(t), tables))\n return tables\n\n def table(self, name, schema=None):\n catalog = self._context.catalog()\n database = catalog.database('public')\n table = database.table(name)\n schema = sch.infer(table.schema)\n return self.table_class(name, schema, self).to_expr()\n\n def register_csv(self, name, path, schema=None):\n self._context.register_csv(name, path, schema=schema)\n\n def register_parquet(self, name, path, schema=None):\n self._context.register_parquet(name, path, schema=schema)\n\n def execute(\n self,\n expr: ir.Expr,\n params: Mapping[ir.Expr, object] = None,\n limit: str = 'default',\n **kwargs,\n ):\n if isinstance(expr, ir.TableExpr):\n frame = self.compile(expr, params, **kwargs)\n table = _to_pyarrow_table(frame)\n return table.to_pandas()\n elif isinstance(expr, ir.ColumnExpr):\n # expression must be named for the projection\n expr = expr.name('tmp').to_projection()\n frame = self.compile(expr, params, **kwargs)\n table = _to_pyarrow_table(frame)\n return table['tmp'].to_pandas()\n elif isinstance(expr, ir.ScalarExpr):\n if expr.op().root_tables():\n # there are associated datafusion tables so convert the expr\n # to a selection which we can directly convert to a datafusion\n # plan\n expr = expr.name('tmp').to_projection()\n frame = self.compile(expr, params, **kwargs)\n else:\n # doesn't have any tables associated so create a plan from a\n # dummy datafusion table\n compiled = self.compile(expr, params, **kwargs)\n frame = self._context.empty_table().select(compiled)\n table = _to_pyarrow_table(frame)\n return table[0][0].as_py()\n else:\n raise com.IbisError(\n f\"Cannot execute expression of type: {type(expr)}\"\n )\n\n def compile(\n self, expr: ir.Expr, params: Mapping[ir.Expr, object] = None, **kwargs\n ):\n return translate(expr)\n", "path": "ibis/backends/datafusion/__init__.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport re\nfrom typing import Mapping\n\nimport datafusion as df\nimport pyarrow as pa\n\nimport ibis.common.exceptions as com\nimport ibis.expr.schema as sch\nimport ibis.expr.types as ir\nfrom ibis.backends.base import BaseBackend\n\nfrom .compiler import translate\n\n\ndef _to_pyarrow_table(frame):\n batches = frame.collect()\n if batches:\n return pa.Table.from_batches(batches)\n else:\n # TODO(kszucs): file a bug to datafusion because the fields'\n # nullability from frame.schema() is not always consistent\n # with the first record batch's schema\n return pa.Table.from_batches(batches, schema=frame.schema())\n\n\nclass Backend(BaseBackend):\n name = 'datafusion'\n builder = None\n\n @property\n def version(self):\n try:\n import importlib.metadata as importlib_metadata\n except ImportError:\n # TODO: remove this when Python 3.7 support is dropped\n import importlib_metadata\n return importlib_metadata.version(\"datafusion\")\n\n def do_connect(self, config):\n \"\"\"\n Create a DataFusionClient for use with Ibis\n\n Parameters\n ----------\n config : DataFusionContext or dict\n\n Returns\n -------\n DataFusionClient\n \"\"\"\n if isinstance(config, df.ExecutionContext):\n self._context = config\n else:\n self._context = df.ExecutionContext()\n\n for name, path in config.items():\n strpath = str(path)\n if strpath.endswith('.csv'):\n self.register_csv(name, path)\n elif strpath.endswith('.parquet'):\n self.register_parquet(name, path)\n else:\n raise ValueError(\n \"Currently the DataFusion backend only supports CSV \"\n \"files with the 
extension .csv and Parquet files with \"\n \"the .parquet extension.\"\n )\n\n def current_database(self):\n raise NotImplementedError()\n\n def list_databases(self, like: str = None) -> list[str]:\n raise NotImplementedError()\n\n def list_tables(self, like: str = None, database: str = None) -> list[str]:\n \"\"\"List the available tables.\"\"\"\n tables = list(self._context.tables())\n if like is not None:\n pattern = re.compile(like)\n return list(filter(lambda t: pattern.findall(t), tables))\n return tables\n\n def table(self, name, schema=None):\n \"\"\"Get an ibis expression representing a DataFusion table.\n\n Parameters\n ---------\n name\n The name of the table to retreive\n schema\n An optional schema\n\n Returns\n -------\n ibis.expr.types.TableExpr\n A table expression\n \"\"\"\n catalog = self._context.catalog()\n database = catalog.database('public')\n table = database.table(name)\n schema = sch.infer(table.schema)\n return self.table_class(name, schema, self).to_expr()\n\n def register_csv(self, name, path, schema=None):\n \"\"\"Register a CSV file with with `name` located at `path`.\n\n Parameters\n ----------\n name\n The name of the table\n path\n The path to the CSV file\n schema\n An optional schema\n \"\"\"\n self._context.register_csv(name, path, schema=schema)\n\n def register_parquet(self, name, path, schema=None):\n \"\"\"Register a parquet file with with `name` located at `path`.\n\n Parameters\n ----------\n name\n The name of the table\n path\n The path to the parquet file\n schema\n An optional schema\n \"\"\"\n self._context.register_parquet(name, path, schema=schema)\n\n def execute(\n self,\n expr: ir.Expr,\n params: Mapping[ir.Expr, object] = None,\n limit: str = 'default',\n **kwargs,\n ):\n if isinstance(expr, ir.TableExpr):\n frame = self.compile(expr, params, **kwargs)\n table = _to_pyarrow_table(frame)\n return table.to_pandas()\n elif isinstance(expr, ir.ColumnExpr):\n # expression must be named for the projection\n expr = expr.name('tmp').to_projection()\n frame = self.compile(expr, params, **kwargs)\n table = _to_pyarrow_table(frame)\n return table['tmp'].to_pandas()\n elif isinstance(expr, ir.ScalarExpr):\n if expr.op().root_tables():\n # there are associated datafusion tables so convert the expr\n # to a selection which we can directly convert to a datafusion\n # plan\n expr = expr.name('tmp').to_projection()\n frame = self.compile(expr, params, **kwargs)\n else:\n # doesn't have any tables associated so create a plan from a\n # dummy datafusion table\n compiled = self.compile(expr, params, **kwargs)\n frame = self._context.empty_table().select(compiled)\n table = _to_pyarrow_table(frame)\n return table[0][0].as_py()\n else:\n raise com.IbisError(\n f\"Cannot execute expression of type: {type(expr)}\"\n )\n\n def compile(\n self, expr: ir.Expr, params: Mapping[ir.Expr, object] = None, **kwargs\n ):\n return translate(expr)\n", "path": "ibis/backends/datafusion/__init__.py"}]} | 1,572 | 401 |
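
A minimal sketch of how the three methods documented by the patch above fit together (`register_csv`, `register_parquet`, `table`). Only those method names and signatures come from the record; the `ibis.datafusion.connect` entry point, the file paths, and the `age` column are assumptions made for illustration, so treat this as a usage sketch rather than the project's own example.

```python
import ibis

# Assumed entry point for the DataFusion backend; an empty config means
# nothing is registered at connect time.
con = ibis.datafusion.connect({})

# Methods documented by the patch: register files, then query them by name.
con.register_csv("people", "people.csv")           # hypothetical CSV file
con.register_parquet("events", "events.parquet")   # hypothetical Parquet file

people = con.table("people")                   # ibis TableExpr backed by DataFusion
expr = people.filter(people.age > 30).count()  # "age" column is assumed to exist
print(con.execute(expr))
```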
gh_patches_debug_2007 | rasdani/github-patches | git_diff | googleapis__google-cloud-python-5683 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 'datastore 1.7.0'
Major changes are:
- Add support for Python 3.7.
- Drop support for Python 3.4.
- Bugfix: query offsets (#4675).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `datastore/setup.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20
21 # Package metadata.
22
23 name = 'google-cloud-datastore'
24 description = 'Google Cloud Datastore API client library'
25 version = '1.6.0'
26 # Should be one of:
27 # 'Development Status :: 3 - Alpha'
28 # 'Development Status :: 4 - Beta'
29 # 'Development Status :: 5 - Production/Stable'
30 release_status = 'Development Status :: 5 - Production/Stable'
31 dependencies = [
32 'google-cloud-core<0.29dev,>=0.28.0',
33 'google-api-core[grpc]<2.0.0dev,>=1.0.0',
34 ]
35 extras = {
36 }
37
38
39 # Setup boilerplate below this line.
40
41 package_root = os.path.abspath(os.path.dirname(__file__))
42
43 readme_filename = os.path.join(package_root, 'README.rst')
44 with io.open(readme_filename, encoding='utf-8') as readme_file:
45 readme = readme_file.read()
46
47 # Only include packages under the 'google' namespace. Do not include tests,
48 # benchmarks, etc.
49 packages = [
50 package for package in setuptools.find_packages()
51 if package.startswith('google')]
52
53 # Determine which namespaces are needed.
54 namespaces = ['google']
55 if 'google.cloud' in packages:
56 namespaces.append('google.cloud')
57
58
59 setuptools.setup(
60 name=name,
61 version=version,
62 description=description,
63 long_description=readme,
64 author='Google LLC',
65 author_email='[email protected]',
66 license='Apache 2.0',
67 url='https://github.com/GoogleCloudPlatform/google-cloud-python',
68 classifiers=[
69 release_status,
70 'Intended Audience :: Developers',
71 'License :: OSI Approved :: Apache Software License',
72 'Programming Language :: Python',
73 'Programming Language :: Python :: 2',
74 'Programming Language :: Python :: 2.7',
75 'Programming Language :: Python :: 3',
76 'Programming Language :: Python :: 3.4',
77 'Programming Language :: Python :: 3.5',
78 'Programming Language :: Python :: 3.6',
79 'Operating System :: OS Independent',
80 'Topic :: Internet',
81 ],
82 platforms='Posix; MacOS X; Windows',
83 packages=packages,
84 namespace_packages=namespaces,
85 install_requires=dependencies,
86 extras_require=extras,
87 include_package_data=True,
88 zip_safe=False,
89 )
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/datastore/setup.py b/datastore/setup.py
--- a/datastore/setup.py
+++ b/datastore/setup.py
@@ -22,7 +22,7 @@
name = 'google-cloud-datastore'
description = 'Google Cloud Datastore API client library'
-version = '1.6.0'
+version = '1.7.0'
# Should be one of:
# 'Development Status :: 3 - Alpha'
# 'Development Status :: 4 - Beta'
| {"golden_diff": "diff --git a/datastore/setup.py b/datastore/setup.py\n--- a/datastore/setup.py\n+++ b/datastore/setup.py\n@@ -22,7 +22,7 @@\n \n name = 'google-cloud-datastore'\n description = 'Google Cloud Datastore API client library'\n-version = '1.6.0'\n+version = '1.7.0'\n # Should be one of:\n # 'Development Status :: 3 - Alpha'\n # 'Development Status :: 4 - Beta'\n", "issue": "Release 'datastore 1.7.0'\nMajor changes are:\r\n\r\n- Add support for Python 3.7.\r\n- Drop support for Python 3.4.\r\n- Bugfix: query offsets (#4675).\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = 'google-cloud-datastore'\ndescription = 'Google Cloud Datastore API client library'\nversion = '1.6.0'\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = 'Development Status :: 5 - Production/Stable'\ndependencies = [\n 'google-cloud-core<0.29dev,>=0.28.0',\n 'google-api-core[grpc]<2.0.0dev,>=1.0.0',\n]\nextras = {\n}\n\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, 'README.rst')\nwith io.open(readme_filename, encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. 
Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages()\n if package.startswith('google')]\n\n# Determine which namespaces are needed.\nnamespaces = ['google']\nif 'google.cloud' in packages:\n namespaces.append('google.cloud')\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author='Google LLC',\n author_email='[email protected]',\n license='Apache 2.0',\n url='https://github.com/GoogleCloudPlatform/google-cloud-python',\n classifiers=[\n release_status,\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Operating System :: OS Independent',\n 'Topic :: Internet',\n ],\n platforms='Posix; MacOS X; Windows',\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "datastore/setup.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = 'google-cloud-datastore'\ndescription = 'Google Cloud Datastore API client library'\nversion = '1.7.0'\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = 'Development Status :: 5 - Production/Stable'\ndependencies = [\n 'google-cloud-core<0.29dev,>=0.28.0',\n 'google-api-core[grpc]<2.0.0dev,>=1.0.0',\n]\nextras = {\n}\n\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, 'README.rst')\nwith io.open(readme_filename, encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. 
Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages()\n if package.startswith('google')]\n\n# Determine which namespaces are needed.\nnamespaces = ['google']\nif 'google.cloud' in packages:\n namespaces.append('google.cloud')\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author='Google LLC',\n author_email='[email protected]',\n license='Apache 2.0',\n url='https://github.com/GoogleCloudPlatform/google-cloud-python',\n classifiers=[\n release_status,\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Operating System :: OS Independent',\n 'Topic :: Internet',\n ],\n platforms='Posix; MacOS X; Windows',\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "datastore/setup.py"}]} | 1,115 | 105 |
gh_patches_debug_2550 | rasdani/github-patches | git_diff | semgrep__semgrep-rules-1051 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
python tempfile-without-flush: Don't require .write() for exception?
**Describe the bug**
The `python.lang.correctness.tempfile.flush.tempfile-without-flush` rule has [exceptions](https://github.com/returntocorp/semgrep-rules/blob/35e9f350a5d0190c936502fff179ba2c465c4438/python/lang/correctness/tempfile/flush.yaml#L5-L20) if `.write()` and then `.close()` or `.flush()` is used on a tempfile. However, this doesn't cover code which calls `.close()` immediately, without writing:
```
$ semgrep --config=p/r2c-ci --verbose test.py
[...]
severity:error rule:python.lang.correctness.tempfile.flush.tempfile-without-flush: possibly missing a .flush() or .close() call to temporary file handle; file may or may not exist when handle.name is used
3:handle = tempfile.NamedTemporaryFile(delete=False)
4:handle.close()
5:print(handle.name)
```
**To Reproduce**
```python3
import tempfile
handle = tempfile.NamedTemporaryFile(delete=False)
handle.close()
print(handle.name)
```
**Expected behavior**
It seems to me like the exception should trigger whenever `.close()` is invoked, no matter if `.write()` is called before that or not?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/lang/correctness/tempfile/flush.py`
Content:
```
1 import tempfile
2
3 import at
4 import tf
5
6
7 def main():
8 with tempfile.NamedTemporaryFile("w") as fout:
9 debug_print(astr)
10 fout.write(astr)
11 # ok:tempfile-without-flush
12 fout.flush()
13 cmd = [binary_name, fout.name, *[str(path) for path in targets]]
14
15
16 def main_b():
17 with tempfile.NamedTemporaryFile("w") as fout:
18 debug_print(astr)
19 fout.write(astr)
20 # ok:tempfile-without-flush
21 fout.close()
22 cmd = [binary_name, fout.name, *[str(path) for path in targets]]
23
24
25 def main_c():
26 with tempfile.NamedTemporaryFile("w") as fout:
27 debug_print(astr)
28 fout.write(astr)
29
30 # ok:tempfile-without-flush
31 cmd = [binary_name, fout.name, *[str(path) for path in targets]]
32
33
34 def main_c():
35 with tempfile.NamedTemporaryFile("w") as fout:
36 debug_print(astr)
37 fout.write(astr)
38 debug_print('wrote file')
39
40 # ruleid:tempfile-without-flush
41 cmd = [binary_name, fout.name, *[str(path) for path in targets]]
42
43
44 def main_d():
45 fout = tempfile.NamedTemporaryFile('w')
46 debug_print(astr)
47 fout.write(astr)
48
49 # ruleid:tempfile-without-flush
50 fout.name
51 # ruleid:tempfile-without-flush
52 cmd = [binary_name, fout.name, *[str(path) for path in targets]]
53
54
55 def main_e():
56 fout = tempfile.NamedTemporaryFile('w')
57 debug_print(astr)
58 fout.write(astr)
59
60 # ruleid:tempfile-without-flush
61 print(fout.name)
62 # ruleid:tempfile-without-flush
63 cmd = [binary_name, fout.name, *[str(path) for path in targets]]
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/lang/correctness/tempfile/flush.py b/python/lang/correctness/tempfile/flush.py
--- a/python/lang/correctness/tempfile/flush.py
+++ b/python/lang/correctness/tempfile/flush.py
@@ -61,3 +61,12 @@
print(fout.name)
# ruleid:tempfile-without-flush
cmd = [binary_name, fout.name, *[str(path) for path in targets]]
+
+
+def main_f():
+ fout = tempfile.NamedTemporaryFile('w', delete=False)
+ debug_print(astr)
+ fout.close()
+
+ # ok:tempfile-without-flush
+ print(fout.name)
| {"golden_diff": "diff --git a/python/lang/correctness/tempfile/flush.py b/python/lang/correctness/tempfile/flush.py\n--- a/python/lang/correctness/tempfile/flush.py\n+++ b/python/lang/correctness/tempfile/flush.py\n@@ -61,3 +61,12 @@\n print(fout.name)\n # ruleid:tempfile-without-flush\n cmd = [binary_name, fout.name, *[str(path) for path in targets]]\n+\n+\n+def main_f():\n+ fout = tempfile.NamedTemporaryFile('w', delete=False)\n+ debug_print(astr)\n+ fout.close()\n+\n+ # ok:tempfile-without-flush\n+ print(fout.name)\n", "issue": "python tempfile-without-flush: Don't require .write() for exception?\n**Describe the bug**\r\nThe `python.lang.correctness.tempfile.flush.tempfile-without-flush` rule has [exceptions](https://github.com/returntocorp/semgrep-rules/blob/35e9f350a5d0190c936502fff179ba2c465c4438/python/lang/correctness/tempfile/flush.yaml#L5-L20) if `.write()` and then `.close()` or `.flush()` is used on a tempfile. However, this doesn't cover code which calls `.close()` immediately, without writing:\r\n\r\n```\r\n$ semgrep --config=p/r2c-ci --verbose test.py\r\n[...]\r\nseverity:error rule:python.lang.correctness.tempfile.flush.tempfile-without-flush: possibly missing a .flush() or .close() call to temporary file handle; file may or may not exist when handle.name is used\r\n3:handle = tempfile.NamedTemporaryFile(delete=False)\r\n4:handle.close()\r\n5:print(handle.name)\r\n```\r\n\r\n**To Reproduce**\r\n```python3\r\nimport tempfile\r\n\r\nhandle = tempfile.NamedTemporaryFile(delete=False)\r\nhandle.close()\r\nprint(handle.name)\r\n```\r\n\r\n**Expected behavior**\r\nIt seems to me like the exception should trigger whenever `.close()` is invoked, no matter if `.write()` is called before that or not?\npython tempfile-without-flush: Don't require .write() for exception?\n**Describe the bug**\r\nThe `python.lang.correctness.tempfile.flush.tempfile-without-flush` rule has [exceptions](https://github.com/returntocorp/semgrep-rules/blob/35e9f350a5d0190c936502fff179ba2c465c4438/python/lang/correctness/tempfile/flush.yaml#L5-L20) if `.write()` and then `.close()` or `.flush()` is used on a tempfile. 
However, this doesn't cover code which calls `.close()` immediately, without writing:\r\n\r\n```\r\n$ semgrep --config=p/r2c-ci --verbose test.py\r\n[...]\r\nseverity:error rule:python.lang.correctness.tempfile.flush.tempfile-without-flush: possibly missing a .flush() or .close() call to temporary file handle; file may or may not exist when handle.name is used\r\n3:handle = tempfile.NamedTemporaryFile(delete=False)\r\n4:handle.close()\r\n5:print(handle.name)\r\n```\r\n\r\n**To Reproduce**\r\n```python3\r\nimport tempfile\r\n\r\nhandle = tempfile.NamedTemporaryFile(delete=False)\r\nhandle.close()\r\nprint(handle.name)\r\n```\r\n\r\n**Expected behavior**\r\nIt seems to me like the exception should trigger whenever `.close()` is invoked, no matter if `.write()` is called before that or not?\n", "before_files": [{"content": "import tempfile\n\nimport at\nimport tf\n\n\ndef main():\n with tempfile.NamedTemporaryFile(\"w\") as fout:\n debug_print(astr)\n fout.write(astr)\n # ok:tempfile-without-flush\n fout.flush()\n cmd = [binary_name, fout.name, *[str(path) for path in targets]]\n\n\ndef main_b():\n with tempfile.NamedTemporaryFile(\"w\") as fout:\n debug_print(astr)\n fout.write(astr)\n # ok:tempfile-without-flush\n fout.close()\n cmd = [binary_name, fout.name, *[str(path) for path in targets]]\n\n\ndef main_c():\n with tempfile.NamedTemporaryFile(\"w\") as fout:\n debug_print(astr)\n fout.write(astr)\n\n # ok:tempfile-without-flush\n cmd = [binary_name, fout.name, *[str(path) for path in targets]]\n\n\ndef main_c():\n with tempfile.NamedTemporaryFile(\"w\") as fout:\n debug_print(astr)\n fout.write(astr)\n debug_print('wrote file')\n\n # ruleid:tempfile-without-flush\n cmd = [binary_name, fout.name, *[str(path) for path in targets]]\n\n\ndef main_d():\n fout = tempfile.NamedTemporaryFile('w')\n debug_print(astr)\n fout.write(astr)\n\n # ruleid:tempfile-without-flush\n fout.name\n # ruleid:tempfile-without-flush\n cmd = [binary_name, fout.name, *[str(path) for path in targets]]\n\n\ndef main_e():\n fout = tempfile.NamedTemporaryFile('w')\n debug_print(astr)\n fout.write(astr)\n\n # ruleid:tempfile-without-flush\n print(fout.name)\n # ruleid:tempfile-without-flush\n cmd = [binary_name, fout.name, *[str(path) for path in targets]]\n", "path": "python/lang/correctness/tempfile/flush.py"}], "after_files": [{"content": "import tempfile\n\nimport at\nimport tf\n\n\ndef main():\n with tempfile.NamedTemporaryFile(\"w\") as fout:\n debug_print(astr)\n fout.write(astr)\n # ok:tempfile-without-flush\n fout.flush()\n cmd = [binary_name, fout.name, *[str(path) for path in targets]]\n\n\ndef main_b():\n with tempfile.NamedTemporaryFile(\"w\") as fout:\n debug_print(astr)\n fout.write(astr)\n # ok:tempfile-without-flush\n fout.close()\n cmd = [binary_name, fout.name, *[str(path) for path in targets]]\n\n\ndef main_c():\n with tempfile.NamedTemporaryFile(\"w\") as fout:\n debug_print(astr)\n fout.write(astr)\n\n # ok:tempfile-without-flush\n cmd = [binary_name, fout.name, *[str(path) for path in targets]]\n\n\ndef main_c():\n with tempfile.NamedTemporaryFile(\"w\") as fout:\n debug_print(astr)\n fout.write(astr)\n debug_print('wrote file')\n\n # ruleid:tempfile-without-flush\n cmd = [binary_name, fout.name, *[str(path) for path in targets]]\n\n\ndef main_d():\n fout = tempfile.NamedTemporaryFile('w')\n debug_print(astr)\n fout.write(astr)\n\n # ruleid:tempfile-without-flush\n fout.name\n # ruleid:tempfile-without-flush\n cmd = [binary_name, fout.name, *[str(path) for path in targets]]\n\n\ndef main_e():\n 
fout = tempfile.NamedTemporaryFile('w')\n debug_print(astr)\n fout.write(astr)\n\n # ruleid:tempfile-without-flush\n print(fout.name)\n # ruleid:tempfile-without-flush\n cmd = [binary_name, fout.name, *[str(path) for path in targets]]\n\n\ndef main_f():\n fout = tempfile.NamedTemporaryFile('w', delete=False)\n debug_print(astr)\n fout.close()\n\n # ok:tempfile-without-flush\n print(fout.name)\n", "path": "python/lang/correctness/tempfile/flush.py"}]} | 1,404 | 153 |
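
The new `main_f` case in the patch above encodes the reporter's scenario: a named temporary file created with `delete=False` and closed without any write still exists on disk, so reading `handle.name` afterwards is safe. A small standard-library check of that behavior, independent of semgrep itself:

```python
import os
import tempfile

# Create the file, then close it immediately without writing anything.
handle = tempfile.NamedTemporaryFile(delete=False)
handle.close()

# The (empty) file is still there, so using handle.name here is fine.
print(handle.name, os.path.exists(handle.name))  # e.g. /tmp/tmpabc123 True

os.remove(handle.name)  # delete=False means we clean up ourselves
```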
gh_patches_debug_14362 | rasdani/github-patches | git_diff | python-poetry__poetry-620 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Arbitrary equality in pip freeze causes crash
<!--
Hi there! Thank you for discovering and submitting an issue.
Before you submit this; let's make sure of a few things.
Please make sure the following boxes are ticked if they are correct.
If not, please try and fulfill these first.
-->
<!-- Checked checkbox should look like this: [x] -->
- [ x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.
- [x ] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.
- [x ] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).
<!--
Once those are done, if you're able to fill in the following list with your information,
it'd be very helpful to whoever handles the issue.
-->
- **MacOS High Sierra 10.13.6**: <!-- Replace with version + name -->
- **Poetry 0.11.5**: <!-- Replace with version -->
## Issue
<!-- Now feel free to write your issue, but please be descriptive! Thanks again 🙌 ❤️ -->
Running `poetry install` when a package is installed such that pip freeze lists it with an arbitrary equality operator (ex. `package===1.6.1-642-g1e03d5e853`) causes an exception
```
[ValueError]
Unable to parse "=1.6.1-642-g1e03d5e853".
```
Looks like the issue is the strict splitting on `==` in `poetry/repositories/installed_repository.py`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `poetry/repositories/installed_repository.py`
Content:
```
1 from poetry.packages import Package
2 from poetry.utils.env import Env
3
4 from .repository import Repository
5
6
7 class InstalledRepository(Repository):
8 @classmethod
9 def load(cls, env): # type: (Env) -> InstalledRepository
10 """
11 Load installed packages.
12
13 For now, it uses the pip "freeze" command.
14 """
15 repo = cls()
16
17 freeze_output = env.run("pip", "freeze")
18 for line in freeze_output.split("\n"):
19 if "==" in line:
20 name, version = line.split("==")
21 repo.add_package(Package(name, version, version))
22 elif line.startswith("-e "):
23 line = line[3:].strip()
24 if line.startswith("git+"):
25 url = line.lstrip("git+")
26 if "@" in url:
27 url, rev = url.rsplit("@", 1)
28 else:
29 rev = "master"
30
31 name = url.split("/")[-1].rstrip(".git")
32 if "#egg=" in rev:
33 rev, name = rev.split("#egg=")
34
35 package = Package(name, "0.0.0")
36 package.source_type = "git"
37 package.source_url = url
38 package.source_reference = rev
39
40 repo.add_package(package)
41
42 return repo
43
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/poetry/repositories/installed_repository.py b/poetry/repositories/installed_repository.py
--- a/poetry/repositories/installed_repository.py
+++ b/poetry/repositories/installed_repository.py
@@ -1,3 +1,5 @@
+import re
+
from poetry.packages import Package
from poetry.utils.env import Env
@@ -17,7 +19,7 @@
freeze_output = env.run("pip", "freeze")
for line in freeze_output.split("\n"):
if "==" in line:
- name, version = line.split("==")
+ name, version = re.split("={2,3}", line)
repo.add_package(Package(name, version, version))
elif line.startswith("-e "):
line = line[3:].strip()
| {"golden_diff": "diff --git a/poetry/repositories/installed_repository.py b/poetry/repositories/installed_repository.py\n--- a/poetry/repositories/installed_repository.py\n+++ b/poetry/repositories/installed_repository.py\n@@ -1,3 +1,5 @@\n+import re\n+\n from poetry.packages import Package\n from poetry.utils.env import Env\n \n@@ -17,7 +19,7 @@\n freeze_output = env.run(\"pip\", \"freeze\")\n for line in freeze_output.split(\"\\n\"):\n if \"==\" in line:\n- name, version = line.split(\"==\")\n+ name, version = re.split(\"={2,3}\", line)\n repo.add_package(Package(name, version, version))\n elif line.startswith(\"-e \"):\n line = line[3:].strip()\n", "issue": "Arbitrary equality in pip freeze causes crash\n<!--\r\n Hi there! Thank you for discovering and submitting an issue.\r\n\r\n Before you submit this; let's make sure of a few things.\r\n Please make sure the following boxes are ticked if they are correct.\r\n If not, please try and fulfill these first.\r\n-->\r\n\r\n<!-- Checked checkbox should look like this: [x] -->\r\n- [ x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.\r\n- [x ] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [x ] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).\r\n\r\n<!--\r\n Once those are done, if you're able to fill in the following list with your information,\r\n it'd be very helpful to whoever handles the issue.\r\n-->\r\n\r\n- **MacOS High Sierra 10.13.6**: <!-- Replace with version + name -->\r\n- **Poetry 0.11.5**: <!-- Replace with version -->\r\n\r\n## Issue\r\n<!-- Now feel free to write your issue, but please be descriptive! Thanks again \ud83d\ude4c \u2764\ufe0f -->\r\nRunning `poetry install` when a package is installed such that pip freeze lists it with an arbitrary equality operator (ex. 
`package===1.6.1-642-g1e03d5e853`) causes an exception\r\n\r\n```\r\n[ValueError]\r\nUnable to parse \"=1.6.1-642-g1e03d5e853\".\r\n```\r\n\r\nLooks like the issue is the strict splitting on `==` in `poetry/repositories/installed_repository.py`.\r\n\n", "before_files": [{"content": "from poetry.packages import Package\nfrom poetry.utils.env import Env\n\nfrom .repository import Repository\n\n\nclass InstalledRepository(Repository):\n @classmethod\n def load(cls, env): # type: (Env) -> InstalledRepository\n \"\"\"\n Load installed packages.\n\n For now, it uses the pip \"freeze\" command.\n \"\"\"\n repo = cls()\n\n freeze_output = env.run(\"pip\", \"freeze\")\n for line in freeze_output.split(\"\\n\"):\n if \"==\" in line:\n name, version = line.split(\"==\")\n repo.add_package(Package(name, version, version))\n elif line.startswith(\"-e \"):\n line = line[3:].strip()\n if line.startswith(\"git+\"):\n url = line.lstrip(\"git+\")\n if \"@\" in url:\n url, rev = url.rsplit(\"@\", 1)\n else:\n rev = \"master\"\n\n name = url.split(\"/\")[-1].rstrip(\".git\")\n if \"#egg=\" in rev:\n rev, name = rev.split(\"#egg=\")\n\n package = Package(name, \"0.0.0\")\n package.source_type = \"git\"\n package.source_url = url\n package.source_reference = rev\n\n repo.add_package(package)\n\n return repo\n", "path": "poetry/repositories/installed_repository.py"}], "after_files": [{"content": "import re\n\nfrom poetry.packages import Package\nfrom poetry.utils.env import Env\n\nfrom .repository import Repository\n\n\nclass InstalledRepository(Repository):\n @classmethod\n def load(cls, env): # type: (Env) -> InstalledRepository\n \"\"\"\n Load installed packages.\n\n For now, it uses the pip \"freeze\" command.\n \"\"\"\n repo = cls()\n\n freeze_output = env.run(\"pip\", \"freeze\")\n for line in freeze_output.split(\"\\n\"):\n if \"==\" in line:\n name, version = re.split(\"={2,3}\", line)\n repo.add_package(Package(name, version, version))\n elif line.startswith(\"-e \"):\n line = line[3:].strip()\n if line.startswith(\"git+\"):\n url = line.lstrip(\"git+\")\n if \"@\" in url:\n url, rev = url.rsplit(\"@\", 1)\n else:\n rev = \"master\"\n\n name = url.split(\"/\")[-1].rstrip(\".git\")\n if \"#egg=\" in rev:\n rev, name = rev.split(\"#egg=\")\n\n package = Package(name, \"0.0.0\")\n package.source_type = \"git\"\n package.source_url = url\n package.source_reference = rev\n\n repo.add_package(package)\n\n return repo\n", "path": "poetry/repositories/installed_repository.py"}]} | 995 | 174 |
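
The one-line fix in this record replaces the strict `line.split("==")` with `re.split("={2,3}", line)`, so a `pip freeze` pin that uses the arbitrary-equality operator `===` splits into a clean name and version. A quick comparison on the version string from the report (plain Python, no poetry install needed):

```python
import re

line = "package===1.6.1-642-g1e03d5e853"

# Old behaviour: splitting on "==" leaves a stray "=" on the version,
# which is what produced the 'Unable to parse "=1.6.1-..."' error.
print(line.split("=="))           # ['package', '=1.6.1-642-g1e03d5e853']

# Patched behaviour: two or three "=" signs count as one separator.
print(re.split("={2,3}", line))   # ['package', '1.6.1-642-g1e03d5e853']

# An ordinary pin still splits exactly as before.
print(re.split("={2,3}", "requests==2.22.0"))  # ['requests', '2.22.0']
```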
gh_patches_debug_94 | rasdani/github-patches | git_diff | open-mmlab__mmsegmentation-658 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
error in train.py
error in line 134 in mmsegmentation/tools/train.py
model.init_weights() is error
model.init_weight() is ok
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmseg/__init__.py`
Content:
```
1 import mmcv
2
3 from .version import __version__, version_info
4
5 MMCV_MIN = '1.3.1'
6 MMCV_MAX = '1.4.0'
7
8
9 def digit_version(version_str):
10 digit_version = []
11 for x in version_str.split('.'):
12 if x.isdigit():
13 digit_version.append(int(x))
14 elif x.find('rc') != -1:
15 patch_version = x.split('rc')
16 digit_version.append(int(patch_version[0]) - 1)
17 digit_version.append(int(patch_version[1]))
18 return digit_version
19
20
21 mmcv_min_version = digit_version(MMCV_MIN)
22 mmcv_max_version = digit_version(MMCV_MAX)
23 mmcv_version = digit_version(mmcv.__version__)
24
25
26 assert (mmcv_min_version <= mmcv_version <= mmcv_max_version), \
27 f'MMCV=={mmcv.__version__} is used but incompatible. ' \
28 f'Please install mmcv>={mmcv_min_version}, <={mmcv_max_version}.'
29
30 __all__ = ['__version__', 'version_info']
31
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mmseg/__init__.py b/mmseg/__init__.py
--- a/mmseg/__init__.py
+++ b/mmseg/__init__.py
@@ -2,7 +2,7 @@
from .version import __version__, version_info
-MMCV_MIN = '1.3.1'
+MMCV_MIN = '1.3.7'
MMCV_MAX = '1.4.0'
| {"golden_diff": "diff --git a/mmseg/__init__.py b/mmseg/__init__.py\n--- a/mmseg/__init__.py\n+++ b/mmseg/__init__.py\n@@ -2,7 +2,7 @@\n \n from .version import __version__, version_info\n \n-MMCV_MIN = '1.3.1'\n+MMCV_MIN = '1.3.7'\n MMCV_MAX = '1.4.0'\n", "issue": "error in train.py\nerror in line 134 in mmsegmentation/tools/train.py \r\nmodel.init_weights() is error \r\nmodel.init_weight() is ok\r\n\n", "before_files": [{"content": "import mmcv\n\nfrom .version import __version__, version_info\n\nMMCV_MIN = '1.3.1'\nMMCV_MAX = '1.4.0'\n\n\ndef digit_version(version_str):\n digit_version = []\n for x in version_str.split('.'):\n if x.isdigit():\n digit_version.append(int(x))\n elif x.find('rc') != -1:\n patch_version = x.split('rc')\n digit_version.append(int(patch_version[0]) - 1)\n digit_version.append(int(patch_version[1]))\n return digit_version\n\n\nmmcv_min_version = digit_version(MMCV_MIN)\nmmcv_max_version = digit_version(MMCV_MAX)\nmmcv_version = digit_version(mmcv.__version__)\n\n\nassert (mmcv_min_version <= mmcv_version <= mmcv_max_version), \\\n f'MMCV=={mmcv.__version__} is used but incompatible. ' \\\n f'Please install mmcv>={mmcv_min_version}, <={mmcv_max_version}.'\n\n__all__ = ['__version__', 'version_info']\n", "path": "mmseg/__init__.py"}], "after_files": [{"content": "import mmcv\n\nfrom .version import __version__, version_info\n\nMMCV_MIN = '1.3.7'\nMMCV_MAX = '1.4.0'\n\n\ndef digit_version(version_str):\n digit_version = []\n for x in version_str.split('.'):\n if x.isdigit():\n digit_version.append(int(x))\n elif x.find('rc') != -1:\n patch_version = x.split('rc')\n digit_version.append(int(patch_version[0]) - 1)\n digit_version.append(int(patch_version[1]))\n return digit_version\n\n\nmmcv_min_version = digit_version(MMCV_MIN)\nmmcv_max_version = digit_version(MMCV_MAX)\nmmcv_version = digit_version(mmcv.__version__)\n\n\nassert (mmcv_min_version <= mmcv_version <= mmcv_max_version), \\\n f'MMCV=={mmcv.__version__} is used but incompatible. ' \\\n f'Please install mmcv>={mmcv_min_version}, <={mmcv_max_version}.'\n\n__all__ = ['__version__', 'version_info']\n", "path": "mmseg/__init__.py"}]} | 581 | 92 |
gh_patches_debug_1570 | rasdani/github-patches | git_diff | statsmodels__statsmodels-578 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PatsyData detection buglet
If exog is None, then is_using_patsy data utility function returns False. It's a quick fix, but I'm starting to use patsy a bit more internally for convenience (e.g., passing in names to plotting functions rather than having to pull out the data) so I'll make a ticket.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `statsmodels/tools/data.py`
Content:
```
1 """
2 Compatibility tools for various data structure inputs
3 """
4
5 #TODO: question: interpret_data
6 # looks good and could/should be merged with other check convertion functions we also have
7 # similar also to what Nathaniel mentioned for Formula
8 # good: if ndarray check passes then loading pandas is not triggered,
9
10
11 import numpy as np
12
13 def have_pandas():
14 try:
15 import pandas
16 return True
17 except ImportError:
18 return False
19 except Exception:
20 return False
21
22 def have_patsy():
23 try:
24 import patsy
25 return True
26 except ImportError:
27 return False
28 except Exception:
29 return False
30
31 def is_data_frame(obj):
32 if not have_pandas():
33 return False
34
35 import pandas as pn
36
37 return isinstance(obj, pn.DataFrame)
38
39 def is_design_matrix(obj):
40 if not have_patsy():
41 return False
42
43 from patsy import DesignMatrix
44 return isinstance(obj, DesignMatrix)
45
46 def _is_structured_ndarray(obj):
47 return isinstance(obj, np.ndarray) and obj.dtype.names is not None
48
49 def interpret_data(data, colnames=None, rownames=None):
50 """
51 Convert passed data structure to form required by estimation classes
52
53 Parameters
54 ----------
55 data : ndarray-like
56 colnames : sequence or None
57 May be part of data structure
58 rownames : sequence or None
59
60 Returns
61 -------
62 (values, colnames, rownames) : (homogeneous ndarray, list)
63 """
64 if isinstance(data, np.ndarray):
65 if _is_structured_ndarray(data):
66 if colnames is None:
67 colnames = data.dtype.names
68 values = struct_to_ndarray(data)
69 else:
70 values = data
71
72 if colnames is None:
73 colnames = ['Y_%d' % i for i in range(values.shape[1])]
74 elif is_data_frame(data):
75 # XXX: hack
76 data = data.dropna()
77 values = data.values
78 colnames = data.columns
79 rownames = data.index
80 else: # pragma: no cover
81 raise Exception('cannot handle other input types at the moment')
82
83 if not isinstance(colnames, list):
84 colnames = list(colnames)
85
86 # sanity check
87 if len(colnames) != values.shape[1]:
88 raise ValueError('length of colnames does not match number '
89 'of columns in data')
90
91 if rownames is not None and len(rownames) != len(values):
92 raise ValueError('length of rownames does not match number '
93 'of rows in data')
94
95 return values, colnames, rownames
96
97 def struct_to_ndarray(arr):
98 return arr.view((float, len(arr.dtype.names)))
99
100 def _is_using_ndarray_type(endog, exog):
101 return (type(endog) is np.ndarray and
102 (type(exog) is np.ndarray or exog is None))
103
104 def _is_using_ndarray(endog, exog):
105 return (isinstance(endog, np.ndarray) and
106 (isinstance(exog, np.ndarray) or exog is None))
107
108 def _is_using_pandas(endog, exog):
109 if not have_pandas():
110 return False
111 from pandas import Series, DataFrame, WidePanel
112 klasses = (Series, DataFrame, WidePanel)
113 return (isinstance(endog, klasses) or isinstance(exog, klasses))
114
115 def _is_array_like(endog, exog):
116 try: # do it like this in case of mixed types, ie., ndarray and list
117 endog = np.asarray(endog)
118 exog = np.asarray(exog)
119 return True
120 except:
121 return False
122
123 def _is_using_patsy(endog, exog):
124 # we get this when a structured array is passed through a formula
125 return is_design_matrix(endog) and is_design_matrix(exog)
126
127
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/statsmodels/tools/data.py b/statsmodels/tools/data.py
--- a/statsmodels/tools/data.py
+++ b/statsmodels/tools/data.py
@@ -122,5 +122,6 @@
def _is_using_patsy(endog, exog):
# we get this when a structured array is passed through a formula
- return is_design_matrix(endog) and is_design_matrix(exog)
+ return (is_design_matrix(endog) and
+ (is_design_matrix(exog) or exog is None))
| {"golden_diff": "diff --git a/statsmodels/tools/data.py b/statsmodels/tools/data.py\n--- a/statsmodels/tools/data.py\n+++ b/statsmodels/tools/data.py\n@@ -122,5 +122,6 @@\n \n def _is_using_patsy(endog, exog):\n # we get this when a structured array is passed through a formula\n- return is_design_matrix(endog) and is_design_matrix(exog)\n+ return (is_design_matrix(endog) and\n+ (is_design_matrix(exog) or exog is None))\n", "issue": "PatsyData detection buglet\nIf exog is None, then is_using_patsy data utility function returns False. It's a quick fix, but I'm starting to use patsy a bit more internally for convenience (e.g., passing in names to plotting functions rather than having to pull out the data) so I'll make a ticket.\n\n", "before_files": [{"content": "\"\"\"\nCompatibility tools for various data structure inputs\n\"\"\"\n\n#TODO: question: interpret_data\n# looks good and could/should be merged with other check convertion functions we also have\n# similar also to what Nathaniel mentioned for Formula\n# good: if ndarray check passes then loading pandas is not triggered,\n\n\nimport numpy as np\n\ndef have_pandas():\n try:\n import pandas\n return True\n except ImportError:\n return False\n except Exception:\n return False\n\ndef have_patsy():\n try:\n import patsy\n return True\n except ImportError:\n return False\n except Exception:\n return False\n\ndef is_data_frame(obj):\n if not have_pandas():\n return False\n\n import pandas as pn\n\n return isinstance(obj, pn.DataFrame)\n\ndef is_design_matrix(obj):\n if not have_patsy():\n return False\n\n from patsy import DesignMatrix\n return isinstance(obj, DesignMatrix)\n\ndef _is_structured_ndarray(obj):\n return isinstance(obj, np.ndarray) and obj.dtype.names is not None\n\ndef interpret_data(data, colnames=None, rownames=None):\n \"\"\"\n Convert passed data structure to form required by estimation classes\n\n Parameters\n ----------\n data : ndarray-like\n colnames : sequence or None\n May be part of data structure\n rownames : sequence or None\n\n Returns\n -------\n (values, colnames, rownames) : (homogeneous ndarray, list)\n \"\"\"\n if isinstance(data, np.ndarray):\n if _is_structured_ndarray(data):\n if colnames is None:\n colnames = data.dtype.names\n values = struct_to_ndarray(data)\n else:\n values = data\n\n if colnames is None:\n colnames = ['Y_%d' % i for i in range(values.shape[1])]\n elif is_data_frame(data):\n # XXX: hack\n data = data.dropna()\n values = data.values\n colnames = data.columns\n rownames = data.index\n else: # pragma: no cover\n raise Exception('cannot handle other input types at the moment')\n\n if not isinstance(colnames, list):\n colnames = list(colnames)\n\n # sanity check\n if len(colnames) != values.shape[1]:\n raise ValueError('length of colnames does not match number '\n 'of columns in data')\n\n if rownames is not None and len(rownames) != len(values):\n raise ValueError('length of rownames does not match number '\n 'of rows in data')\n\n return values, colnames, rownames\n\ndef struct_to_ndarray(arr):\n return arr.view((float, len(arr.dtype.names)))\n\ndef _is_using_ndarray_type(endog, exog):\n return (type(endog) is np.ndarray and\n (type(exog) is np.ndarray or exog is None))\n\ndef _is_using_ndarray(endog, exog):\n return (isinstance(endog, np.ndarray) and\n (isinstance(exog, np.ndarray) or exog is None))\n\ndef _is_using_pandas(endog, exog):\n if not have_pandas():\n return False\n from pandas import Series, DataFrame, WidePanel\n klasses = (Series, DataFrame, WidePanel)\n return 
(isinstance(endog, klasses) or isinstance(exog, klasses))\n\ndef _is_array_like(endog, exog):\n try: # do it like this in case of mixed types, ie., ndarray and list\n endog = np.asarray(endog)\n exog = np.asarray(exog)\n return True\n except:\n return False\n\ndef _is_using_patsy(endog, exog):\n # we get this when a structured array is passed through a formula\n return is_design_matrix(endog) and is_design_matrix(exog)\n\n", "path": "statsmodels/tools/data.py"}], "after_files": [{"content": "\"\"\"\nCompatibility tools for various data structure inputs\n\"\"\"\n\n#TODO: question: interpret_data\n# looks good and could/should be merged with other check convertion functions we also have\n# similar also to what Nathaniel mentioned for Formula\n# good: if ndarray check passes then loading pandas is not triggered,\n\n\nimport numpy as np\n\ndef have_pandas():\n try:\n import pandas\n return True\n except ImportError:\n return False\n except Exception:\n return False\n\ndef have_patsy():\n try:\n import patsy\n return True\n except ImportError:\n return False\n except Exception:\n return False\n\ndef is_data_frame(obj):\n if not have_pandas():\n return False\n\n import pandas as pn\n\n return isinstance(obj, pn.DataFrame)\n\ndef is_design_matrix(obj):\n if not have_patsy():\n return False\n\n from patsy import DesignMatrix\n return isinstance(obj, DesignMatrix)\n\ndef _is_structured_ndarray(obj):\n return isinstance(obj, np.ndarray) and obj.dtype.names is not None\n\ndef interpret_data(data, colnames=None, rownames=None):\n \"\"\"\n Convert passed data structure to form required by estimation classes\n\n Parameters\n ----------\n data : ndarray-like\n colnames : sequence or None\n May be part of data structure\n rownames : sequence or None\n\n Returns\n -------\n (values, colnames, rownames) : (homogeneous ndarray, list)\n \"\"\"\n if isinstance(data, np.ndarray):\n if _is_structured_ndarray(data):\n if colnames is None:\n colnames = data.dtype.names\n values = struct_to_ndarray(data)\n else:\n values = data\n\n if colnames is None:\n colnames = ['Y_%d' % i for i in range(values.shape[1])]\n elif is_data_frame(data):\n # XXX: hack\n data = data.dropna()\n values = data.values\n colnames = data.columns\n rownames = data.index\n else: # pragma: no cover\n raise Exception('cannot handle other input types at the moment')\n\n if not isinstance(colnames, list):\n colnames = list(colnames)\n\n # sanity check\n if len(colnames) != values.shape[1]:\n raise ValueError('length of colnames does not match number '\n 'of columns in data')\n\n if rownames is not None and len(rownames) != len(values):\n raise ValueError('length of rownames does not match number '\n 'of rows in data')\n\n return values, colnames, rownames\n\ndef struct_to_ndarray(arr):\n return arr.view((float, len(arr.dtype.names)))\n\ndef _is_using_ndarray_type(endog, exog):\n return (type(endog) is np.ndarray and\n (type(exog) is np.ndarray or exog is None))\n\ndef _is_using_ndarray(endog, exog):\n return (isinstance(endog, np.ndarray) and\n (isinstance(exog, np.ndarray) or exog is None))\n\ndef _is_using_pandas(endog, exog):\n if not have_pandas():\n return False\n from pandas import Series, DataFrame, WidePanel\n klasses = (Series, DataFrame, WidePanel)\n return (isinstance(endog, klasses) or isinstance(exog, klasses))\n\ndef _is_array_like(endog, exog):\n try: # do it like this in case of mixed types, ie., ndarray and list\n endog = np.asarray(endog)\n exog = np.asarray(exog)\n return True\n except:\n return False\n\ndef 
_is_using_patsy(endog, exog):\n # we get this when a structured array is passed through a formula\n return (is_design_matrix(endog) and\n (is_design_matrix(exog) or exog is None))\n\n", "path": "statsmodels/tools/data.py"}]} | 1,451 | 118 |
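
The fix in this record lets `_is_using_patsy` accept a `None` exog alongside a patsy `DesignMatrix` endog, matching how the ndarray and pandas checks in the same module already treat a missing exog. A small before/after sketch; the formula and data are invented for illustration and patsy must be installed:

```python
import numpy as np
from patsy import DesignMatrix, dmatrix

endog = dmatrix("x", {"x": np.arange(5.0)})  # a patsy DesignMatrix
exog = None                                  # e.g. a model that takes no exog

def is_design_matrix(obj):
    return isinstance(obj, DesignMatrix)

# Old check: False as soon as exog is None, even though endog came from patsy.
print(is_design_matrix(endog) and is_design_matrix(exog))                    # False

# Patched check: exog may be a DesignMatrix or simply absent.
print(is_design_matrix(endog) and (is_design_matrix(exog) or exog is None))  # True
```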
gh_patches_debug_29826 | rasdani/github-patches | git_diff | getsentry__sentry-python-593 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Memory leak introduced in 0.12.3
Dear maintainers,
I recently noticed that my application containers started going out of memory fairly quickly. Upon investigation, I found that recent upgrade of sentry-sdk was causing this.
Tested with a few versions of sentry-sdk using Python 3.7.6:
sentry-sdk==0.12.2 -> memory usage stays at ~70 MiB, does not raise
sentry-sdk==0.12.3
sentry-sdk==0.13.5
sentry-sdk @ master
-> memory usage starts from ~70MiB and raises steadily until reaching memory limit and crashes container.
Please see if you can reproduce this behaviour. If not, I'll try to put up a minimal reproducible example, but this should be fairly simple to see. My application does not do anything special really, just receives some millions of messages over MQ and out of those, some few thousands end up getting logged to Sentry due to exceptions occurring. This is using pretty much the default Sentry config, no integrations.
Thanks for your help!
--- END ISSUE ---
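
The growth described above is the classic failure mode of an unbounded producer/consumer queue: when events are enqueued faster than the background sender drains them, the backlog simply accumulates in memory. The standard-library sketch below contrasts an unbounded queue with a small bounded one; the size 30 mirrors the cap introduced by the fix later in this record, and the 100,000 puts are just an arbitrary burst.

```python
import queue

# Unbounded queue (maxsize <= 0): put_nowait never fails, so a burst the
# consumer cannot keep up with keeps growing in memory.
unbounded = queue.Queue(-1)
for i in range(100_000):
    unbounded.put_nowait(i)
print(unbounded.qsize())          # 100000

# Bounded queue: once full, put_nowait raises queue.Full, letting the
# producer drop work instead of buffering it forever.
bounded = queue.Queue(30)
dropped = 0
for i in range(100_000):
    try:
        bounded.put_nowait(i)
    except queue.Full:
        dropped += 1
print(bounded.qsize(), dropped)   # 30 99970
```
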
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/worker.py`
Content:
```
1 import os
2
3 from threading import Thread, Lock
4 from time import sleep, time
5 from sentry_sdk._compat import queue, check_thread_support
6 from sentry_sdk.utils import logger
7
8
9 from sentry_sdk._types import MYPY
10
11 if MYPY:
12 from queue import Queue
13 from typing import Any
14 from typing import Optional
15 from typing import Callable
16
17
18 _TERMINATOR = object()
19
20
21 class BackgroundWorker(object):
22 def __init__(self):
23 # type: () -> None
24 check_thread_support()
25 self._queue = queue.Queue(-1) # type: Queue[Any]
26 self._lock = Lock()
27 self._thread = None # type: Optional[Thread]
28 self._thread_for_pid = None # type: Optional[int]
29
30 @property
31 def is_alive(self):
32 # type: () -> bool
33 if self._thread_for_pid != os.getpid():
34 return False
35 if not self._thread:
36 return False
37 return self._thread.is_alive()
38
39 def _ensure_thread(self):
40 # type: () -> None
41 if not self.is_alive:
42 self.start()
43
44 def _timed_queue_join(self, timeout):
45 # type: (float) -> bool
46 deadline = time() + timeout
47 queue = self._queue
48
49 real_all_tasks_done = getattr(
50 queue, "all_tasks_done", None
51 ) # type: Optional[Any]
52 if real_all_tasks_done is not None:
53 real_all_tasks_done.acquire()
54 all_tasks_done = real_all_tasks_done # type: Optional[Any]
55 elif queue.__module__.startswith("eventlet."):
56 all_tasks_done = getattr(queue, "_cond", None)
57 else:
58 all_tasks_done = None
59
60 try:
61 while queue.unfinished_tasks:
62 delay = deadline - time()
63 if delay <= 0:
64 return False
65 if all_tasks_done is not None:
66 all_tasks_done.wait(timeout=delay)
67 else:
68 # worst case, we just poll the number of remaining tasks
69 sleep(0.1)
70
71 return True
72 finally:
73 if real_all_tasks_done is not None:
74 real_all_tasks_done.release()
75
76 def start(self):
77 # type: () -> None
78 with self._lock:
79 if not self.is_alive:
80 self._thread = Thread(
81 target=self._target, name="raven-sentry.BackgroundWorker"
82 )
83 self._thread.setDaemon(True)
84 self._thread.start()
85 self._thread_for_pid = os.getpid()
86
87 def kill(self):
88 # type: () -> None
89 logger.debug("background worker got kill request")
90 with self._lock:
91 if self._thread:
92 self._queue.put_nowait(_TERMINATOR)
93 self._thread = None
94 self._thread_for_pid = None
95
96 def flush(self, timeout, callback=None):
97 # type: (float, Optional[Any]) -> None
98 logger.debug("background worker got flush request")
99 with self._lock:
100 if self.is_alive and timeout > 0.0:
101 self._wait_flush(timeout, callback)
102 logger.debug("background worker flushed")
103
104 def _wait_flush(self, timeout, callback):
105 # type: (float, Optional[Any]) -> None
106 initial_timeout = min(0.1, timeout)
107 if not self._timed_queue_join(initial_timeout):
108 pending = self._queue.qsize()
109 logger.debug("%d event(s) pending on flush", pending)
110 if callback is not None:
111 callback(pending, timeout)
112 self._timed_queue_join(timeout - initial_timeout)
113
114 def submit(self, callback):
115 # type: (Callable[[], None]) -> None
116 self._ensure_thread()
117 self._queue.put_nowait(callback)
118
119 def _target(self):
120 # type: () -> None
121 while True:
122 callback = self._queue.get()
123 try:
124 if callback is _TERMINATOR:
125 break
126 try:
127 callback()
128 except Exception:
129 logger.error("Failed processing job", exc_info=True)
130 finally:
131 self._queue.task_done()
132 sleep(0)
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sentry_sdk/worker.py b/sentry_sdk/worker.py
--- a/sentry_sdk/worker.py
+++ b/sentry_sdk/worker.py
@@ -22,7 +22,7 @@
def __init__(self):
# type: () -> None
check_thread_support()
- self._queue = queue.Queue(-1) # type: Queue[Any]
+ self._queue = queue.Queue(30) # type: Queue[Any]
self._lock = Lock()
self._thread = None # type: Optional[Thread]
self._thread_for_pid = None # type: Optional[int]
@@ -86,10 +86,18 @@
def kill(self):
# type: () -> None
+ """
+ Kill worker thread. Returns immediately. Not useful for
+ waiting on shutdown for events, use `flush` for that.
+ """
logger.debug("background worker got kill request")
with self._lock:
if self._thread:
- self._queue.put_nowait(_TERMINATOR)
+ try:
+ self._queue.put_nowait(_TERMINATOR)
+ except queue.Full:
+ logger.debug("background worker queue full, kill failed")
+
self._thread = None
self._thread_for_pid = None
@@ -114,7 +122,10 @@
def submit(self, callback):
# type: (Callable[[], None]) -> None
self._ensure_thread()
- self._queue.put_nowait(callback)
+ try:
+ self._queue.put_nowait(callback)
+ except queue.Full:
+ logger.debug("background worker queue full, dropping event")
def _target(self):
# type: () -> None
| {"golden_diff": "diff --git a/sentry_sdk/worker.py b/sentry_sdk/worker.py\n--- a/sentry_sdk/worker.py\n+++ b/sentry_sdk/worker.py\n@@ -22,7 +22,7 @@\n def __init__(self):\n # type: () -> None\n check_thread_support()\n- self._queue = queue.Queue(-1) # type: Queue[Any]\n+ self._queue = queue.Queue(30) # type: Queue[Any]\n self._lock = Lock()\n self._thread = None # type: Optional[Thread]\n self._thread_for_pid = None # type: Optional[int]\n@@ -86,10 +86,18 @@\n \n def kill(self):\n # type: () -> None\n+ \"\"\"\n+ Kill worker thread. Returns immediately. Not useful for\n+ waiting on shutdown for events, use `flush` for that.\n+ \"\"\"\n logger.debug(\"background worker got kill request\")\n with self._lock:\n if self._thread:\n- self._queue.put_nowait(_TERMINATOR)\n+ try:\n+ self._queue.put_nowait(_TERMINATOR)\n+ except queue.Full:\n+ logger.debug(\"background worker queue full, kill failed\")\n+\n self._thread = None\n self._thread_for_pid = None\n \n@@ -114,7 +122,10 @@\n def submit(self, callback):\n # type: (Callable[[], None]) -> None\n self._ensure_thread()\n- self._queue.put_nowait(callback)\n+ try:\n+ self._queue.put_nowait(callback)\n+ except queue.Full:\n+ logger.debug(\"background worker queue full, dropping event\")\n \n def _target(self):\n # type: () -> None\n", "issue": "Memory leak introduced in 0.12.3\nDear maintainers,\r\n\r\nI recently noticed that my application containers started going out of memory fairly quickly. Upon investigation, I found that recent upgrade of sentry-sdk was causing this.\r\n\r\nTested with a few versions of sentry-sdk using Python 3.7.6:\r\nsentry-sdk==0.12.2 -> memory usage stays at ~70 MiB, does not raise\r\n\r\nsentry-sdk==0.12.3\r\nsentry-sdk==0.13.5\r\nsentry-sdk @ master\r\n-> memory usage starts from ~70MiB and raises steadily until reaching memory limit and crashes container.\r\n\r\nPlease see if you can reproduce this behaviour. If not, I'll try to put up a minimal reproducible example, but this should be fairly simple to see. My application does not do anything special really, just receives some millions of messages over MQ and out of those, some few thousands end up getting logged to Sentry due to exceptions occurring. 
This is using pretty much the default Sentry config, no integrations.\r\n\r\nThanks for your help!\n", "before_files": [{"content": "import os\n\nfrom threading import Thread, Lock\nfrom time import sleep, time\nfrom sentry_sdk._compat import queue, check_thread_support\nfrom sentry_sdk.utils import logger\n\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from queue import Queue\n from typing import Any\n from typing import Optional\n from typing import Callable\n\n\n_TERMINATOR = object()\n\n\nclass BackgroundWorker(object):\n def __init__(self):\n # type: () -> None\n check_thread_support()\n self._queue = queue.Queue(-1) # type: Queue[Any]\n self._lock = Lock()\n self._thread = None # type: Optional[Thread]\n self._thread_for_pid = None # type: Optional[int]\n\n @property\n def is_alive(self):\n # type: () -> bool\n if self._thread_for_pid != os.getpid():\n return False\n if not self._thread:\n return False\n return self._thread.is_alive()\n\n def _ensure_thread(self):\n # type: () -> None\n if not self.is_alive:\n self.start()\n\n def _timed_queue_join(self, timeout):\n # type: (float) -> bool\n deadline = time() + timeout\n queue = self._queue\n\n real_all_tasks_done = getattr(\n queue, \"all_tasks_done\", None\n ) # type: Optional[Any]\n if real_all_tasks_done is not None:\n real_all_tasks_done.acquire()\n all_tasks_done = real_all_tasks_done # type: Optional[Any]\n elif queue.__module__.startswith(\"eventlet.\"):\n all_tasks_done = getattr(queue, \"_cond\", None)\n else:\n all_tasks_done = None\n\n try:\n while queue.unfinished_tasks:\n delay = deadline - time()\n if delay <= 0:\n return False\n if all_tasks_done is not None:\n all_tasks_done.wait(timeout=delay)\n else:\n # worst case, we just poll the number of remaining tasks\n sleep(0.1)\n\n return True\n finally:\n if real_all_tasks_done is not None:\n real_all_tasks_done.release()\n\n def start(self):\n # type: () -> None\n with self._lock:\n if not self.is_alive:\n self._thread = Thread(\n target=self._target, name=\"raven-sentry.BackgroundWorker\"\n )\n self._thread.setDaemon(True)\n self._thread.start()\n self._thread_for_pid = os.getpid()\n\n def kill(self):\n # type: () -> None\n logger.debug(\"background worker got kill request\")\n with self._lock:\n if self._thread:\n self._queue.put_nowait(_TERMINATOR)\n self._thread = None\n self._thread_for_pid = None\n\n def flush(self, timeout, callback=None):\n # type: (float, Optional[Any]) -> None\n logger.debug(\"background worker got flush request\")\n with self._lock:\n if self.is_alive and timeout > 0.0:\n self._wait_flush(timeout, callback)\n logger.debug(\"background worker flushed\")\n\n def _wait_flush(self, timeout, callback):\n # type: (float, Optional[Any]) -> None\n initial_timeout = min(0.1, timeout)\n if not self._timed_queue_join(initial_timeout):\n pending = self._queue.qsize()\n logger.debug(\"%d event(s) pending on flush\", pending)\n if callback is not None:\n callback(pending, timeout)\n self._timed_queue_join(timeout - initial_timeout)\n\n def submit(self, callback):\n # type: (Callable[[], None]) -> None\n self._ensure_thread()\n self._queue.put_nowait(callback)\n\n def _target(self):\n # type: () -> None\n while True:\n callback = self._queue.get()\n try:\n if callback is _TERMINATOR:\n break\n try:\n callback()\n except Exception:\n logger.error(\"Failed processing job\", exc_info=True)\n finally:\n self._queue.task_done()\n sleep(0)\n", "path": "sentry_sdk/worker.py"}], "after_files": [{"content": "import os\n\nfrom threading import Thread, 
Lock\nfrom time import sleep, time\nfrom sentry_sdk._compat import queue, check_thread_support\nfrom sentry_sdk.utils import logger\n\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from queue import Queue\n from typing import Any\n from typing import Optional\n from typing import Callable\n\n\n_TERMINATOR = object()\n\n\nclass BackgroundWorker(object):\n def __init__(self):\n # type: () -> None\n check_thread_support()\n self._queue = queue.Queue(30) # type: Queue[Any]\n self._lock = Lock()\n self._thread = None # type: Optional[Thread]\n self._thread_for_pid = None # type: Optional[int]\n\n @property\n def is_alive(self):\n # type: () -> bool\n if self._thread_for_pid != os.getpid():\n return False\n if not self._thread:\n return False\n return self._thread.is_alive()\n\n def _ensure_thread(self):\n # type: () -> None\n if not self.is_alive:\n self.start()\n\n def _timed_queue_join(self, timeout):\n # type: (float) -> bool\n deadline = time() + timeout\n queue = self._queue\n\n real_all_tasks_done = getattr(\n queue, \"all_tasks_done\", None\n ) # type: Optional[Any]\n if real_all_tasks_done is not None:\n real_all_tasks_done.acquire()\n all_tasks_done = real_all_tasks_done # type: Optional[Any]\n elif queue.__module__.startswith(\"eventlet.\"):\n all_tasks_done = getattr(queue, \"_cond\", None)\n else:\n all_tasks_done = None\n\n try:\n while queue.unfinished_tasks:\n delay = deadline - time()\n if delay <= 0:\n return False\n if all_tasks_done is not None:\n all_tasks_done.wait(timeout=delay)\n else:\n # worst case, we just poll the number of remaining tasks\n sleep(0.1)\n\n return True\n finally:\n if real_all_tasks_done is not None:\n real_all_tasks_done.release()\n\n def start(self):\n # type: () -> None\n with self._lock:\n if not self.is_alive:\n self._thread = Thread(\n target=self._target, name=\"raven-sentry.BackgroundWorker\"\n )\n self._thread.setDaemon(True)\n self._thread.start()\n self._thread_for_pid = os.getpid()\n\n def kill(self):\n # type: () -> None\n \"\"\"\n Kill worker thread. Returns immediately. 
Not useful for\n waiting on shutdown for events, use `flush` for that.\n \"\"\"\n logger.debug(\"background worker got kill request\")\n with self._lock:\n if self._thread:\n try:\n self._queue.put_nowait(_TERMINATOR)\n except queue.Full:\n logger.debug(\"background worker queue full, kill failed\")\n\n self._thread = None\n self._thread_for_pid = None\n\n def flush(self, timeout, callback=None):\n # type: (float, Optional[Any]) -> None\n logger.debug(\"background worker got flush request\")\n with self._lock:\n if self.is_alive and timeout > 0.0:\n self._wait_flush(timeout, callback)\n logger.debug(\"background worker flushed\")\n\n def _wait_flush(self, timeout, callback):\n # type: (float, Optional[Any]) -> None\n initial_timeout = min(0.1, timeout)\n if not self._timed_queue_join(initial_timeout):\n pending = self._queue.qsize()\n logger.debug(\"%d event(s) pending on flush\", pending)\n if callback is not None:\n callback(pending, timeout)\n self._timed_queue_join(timeout - initial_timeout)\n\n def submit(self, callback):\n # type: (Callable[[], None]) -> None\n self._ensure_thread()\n try:\n self._queue.put_nowait(callback)\n except queue.Full:\n logger.debug(\"background worker queue full, dropping event\")\n\n def _target(self):\n # type: () -> None\n while True:\n callback = self._queue.get()\n try:\n if callback is _TERMINATOR:\n break\n try:\n callback()\n except Exception:\n logger.error(\"Failed processing job\", exc_info=True)\n finally:\n self._queue.task_done()\n sleep(0)\n", "path": "sentry_sdk/worker.py"}]} | 1,683 | 390 |
gh_patches_debug_7727 | rasdani/github-patches | git_diff | bridgecrewio__checkov-5696 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Terraform check "CKV_AZURE_158" - Typo within the name
**Describe the issue**
It seems that there is a typo within the name of the check CKV_AZURE_158 for Terraform resources.
**Examples**
````python
from checkov.common.models.enums import CheckCategories, CheckResult
from checkov.terraform.checks.resource.base_resource_negative_value_check import BaseResourceNegativeValueCheck
from typing import List, Any
class DatabricksWorkspaceIsNotPublic(BaseResourceNegativeValueCheck):
def __init__(self):
name = "Ensure that databricks workspace has not public"
id = "CKV_AZURE_158"
supported_resources = ['azurerm_databricks_workspace']
categories = [CheckCategories.NETWORKING]
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,
missing_attribute_result=CheckResult.FAILED)
def get_inspected_key(self) -> str:
return 'public_network_access_enabled'
def get_forbidden_values(self) -> List[Any]:
return [True]
check = DatabricksWorkspaceIsNotPublic()
````
**Version (please complete the following information):**
Current
**Additional context**
N/A
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/azure/DatabricksWorkspaceIsNotPublic.py`
Content:
```
1 from checkov.common.models.enums import CheckCategories, CheckResult
2 from checkov.terraform.checks.resource.base_resource_negative_value_check import BaseResourceNegativeValueCheck
3 from typing import List, Any
4
5
6 class DatabricksWorkspaceIsNotPublic(BaseResourceNegativeValueCheck):
7 def __init__(self):
8 name = "Ensure that databricks workspace has not public"
9 id = "CKV_AZURE_158"
10 supported_resources = ['azurerm_databricks_workspace']
11 categories = [CheckCategories.NETWORKING]
12 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,
13 missing_attribute_result=CheckResult.FAILED)
14
15 def get_inspected_key(self) -> str:
16 return 'public_network_access_enabled'
17
18 def get_forbidden_values(self) -> List[Any]:
19 return [True]
20
21
22 check = DatabricksWorkspaceIsNotPublic()
23
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/checkov/terraform/checks/resource/azure/DatabricksWorkspaceIsNotPublic.py b/checkov/terraform/checks/resource/azure/DatabricksWorkspaceIsNotPublic.py
--- a/checkov/terraform/checks/resource/azure/DatabricksWorkspaceIsNotPublic.py
+++ b/checkov/terraform/checks/resource/azure/DatabricksWorkspaceIsNotPublic.py
@@ -5,7 +5,7 @@
class DatabricksWorkspaceIsNotPublic(BaseResourceNegativeValueCheck):
def __init__(self):
- name = "Ensure that databricks workspace has not public"
+ name = "Ensure that databricks workspace is not public"
id = "CKV_AZURE_158"
supported_resources = ['azurerm_databricks_workspace']
categories = [CheckCategories.NETWORKING]
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/DatabricksWorkspaceIsNotPublic.py b/checkov/terraform/checks/resource/azure/DatabricksWorkspaceIsNotPublic.py\n--- a/checkov/terraform/checks/resource/azure/DatabricksWorkspaceIsNotPublic.py\n+++ b/checkov/terraform/checks/resource/azure/DatabricksWorkspaceIsNotPublic.py\n@@ -5,7 +5,7 @@\n \n class DatabricksWorkspaceIsNotPublic(BaseResourceNegativeValueCheck):\n def __init__(self):\n- name = \"Ensure that databricks workspace has not public\"\n+ name = \"Ensure that databricks workspace is not public\"\n id = \"CKV_AZURE_158\"\n supported_resources = ['azurerm_databricks_workspace']\n categories = [CheckCategories.NETWORKING]\n", "issue": "Terraform check \"CKV_AZURE_158\" - Typo within the name\n**Describe the issue**\r\nIt seems that there is a typo within the name of the check CKV_AZURE_158 for Terraform resources.\r\n\r\n**Examples**\r\n````python\r\nfrom checkov.common.models.enums import CheckCategories, CheckResult\r\nfrom checkov.terraform.checks.resource.base_resource_negative_value_check import BaseResourceNegativeValueCheck\r\nfrom typing import List, Any\r\n\r\n\r\nclass DatabricksWorkspaceIsNotPublic(BaseResourceNegativeValueCheck):\r\n def __init__(self):\r\n name = \"Ensure that databricks workspace has not public\"\r\n id = \"CKV_AZURE_158\"\r\n supported_resources = ['azurerm_databricks_workspace']\r\n categories = [CheckCategories.NETWORKING]\r\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,\r\n missing_attribute_result=CheckResult.FAILED)\r\n\r\n def get_inspected_key(self) -> str:\r\n return 'public_network_access_enabled'\r\n\r\n def get_forbidden_values(self) -> List[Any]:\r\n return [True]\r\n\r\n\r\ncheck = DatabricksWorkspaceIsNotPublic()\r\n````\r\n\r\n\r\n**Version (please complete the following information):**\r\n Current\r\n \r\n \r\n**Additional context**\r\nN/A\n", "before_files": [{"content": "from checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.terraform.checks.resource.base_resource_negative_value_check import BaseResourceNegativeValueCheck\nfrom typing import List, Any\n\n\nclass DatabricksWorkspaceIsNotPublic(BaseResourceNegativeValueCheck):\n def __init__(self):\n name = \"Ensure that databricks workspace has not public\"\n id = \"CKV_AZURE_158\"\n supported_resources = ['azurerm_databricks_workspace']\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,\n missing_attribute_result=CheckResult.FAILED)\n\n def get_inspected_key(self) -> str:\n return 'public_network_access_enabled'\n\n def get_forbidden_values(self) -> List[Any]:\n return [True]\n\n\ncheck = DatabricksWorkspaceIsNotPublic()\n", "path": "checkov/terraform/checks/resource/azure/DatabricksWorkspaceIsNotPublic.py"}], "after_files": [{"content": "from checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.terraform.checks.resource.base_resource_negative_value_check import BaseResourceNegativeValueCheck\nfrom typing import List, Any\n\n\nclass DatabricksWorkspaceIsNotPublic(BaseResourceNegativeValueCheck):\n def __init__(self):\n name = \"Ensure that databricks workspace is not public\"\n id = \"CKV_AZURE_158\"\n supported_resources = ['azurerm_databricks_workspace']\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,\n 
missing_attribute_result=CheckResult.FAILED)\n\n def get_inspected_key(self) -> str:\n return 'public_network_access_enabled'\n\n def get_forbidden_values(self) -> List[Any]:\n return [True]\n\n\ncheck = DatabricksWorkspaceIsNotPublic()\n", "path": "checkov/terraform/checks/resource/azure/DatabricksWorkspaceIsNotPublic.py"}]} | 777 | 177 |
gh_patches_debug_63151 | rasdani/github-patches | git_diff | frappe__frappe-7705 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Hard coded localhost for pgsql setup
The postgresql setup is using `localhost` instead of the `db_host` from the site configuration:
https://github.com/frappe/frappe/blob/17c43b4e1d2506e5b1498e52f3aa6c81277d67be/frappe/database/postgres/setup_db.py#L20
This makes it impossible to use Postgres on a remote host :/
I can try to provide a PR for that.
I do not guarantee I will have the time to test it, but it's fairly simple.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `frappe/database/postgres/setup_db.py`
Content:
```
1 import frappe, subprocess, os
2 from six.moves import input
3
4 def setup_database(force, source_sql, verbose):
5 root_conn = get_root_connection()
6 root_conn.commit()
7 root_conn.sql("DROP DATABASE IF EXISTS `{0}`".format(frappe.conf.db_name))
8 root_conn.sql("DROP USER IF EXISTS {0}".format(frappe.conf.db_name))
9 root_conn.sql("CREATE DATABASE `{0}`".format(frappe.conf.db_name))
10 root_conn.sql("CREATE user {0} password '{1}'".format(frappe.conf.db_name,
11 frappe.conf.db_password))
12 root_conn.sql("GRANT ALL PRIVILEGES ON DATABASE `{0}` TO {0}".format(frappe.conf.db_name))
13
14 # we can't pass psql password in arguments in postgresql as mysql. So
15 # set password connection parameter in environment variable
16 subprocess_env = os.environ.copy()
17 subprocess_env['PGPASSWORD'] = str(frappe.conf.db_password)
18 # bootstrap db
19 subprocess.check_output([
20 'psql', frappe.conf.db_name, '-h', 'localhost', '-U',
21 frappe.conf.db_name, '-f',
22 os.path.join(os.path.dirname(__file__), 'framework_postgres.sql')
23 ], env=subprocess_env)
24
25 frappe.connect()
26
27 def setup_help_database(help_db_name):
28 root_conn = get_root_connection()
29 root_conn.sql("DROP DATABASE IF EXISTS `{0}`".format(help_db_name))
30 root_conn.sql("DROP USER IF EXISTS {0}".format(help_db_name))
31 root_conn.sql("CREATE DATABASE `{0}`".format(help_db_name))
32 root_conn.sql("CREATE user {0} password '{1}'".format(help_db_name, help_db_name))
33 root_conn.sql("GRANT ALL PRIVILEGES ON DATABASE `{0}` TO {0}".format(help_db_name))
34
35 def get_root_connection(root_login=None, root_password=None):
36 import getpass
37 if not frappe.local.flags.root_connection:
38 if not root_login:
39 root_login = frappe.conf.get("root_login") or None
40
41 if not root_login:
42 root_login = input("Enter postgres super user: ")
43
44 if not root_password:
45 root_password = frappe.conf.get("root_password") or None
46
47 if not root_password:
48 root_password = getpass.getpass("Postgres super user password: ")
49
50 frappe.local.flags.root_connection = frappe.database.get_db(user=root_login, password=root_password)
51
52 return frappe.local.flags.root_connection
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/frappe/database/postgres/setup_db.py b/frappe/database/postgres/setup_db.py
--- a/frappe/database/postgres/setup_db.py
+++ b/frappe/database/postgres/setup_db.py
@@ -17,7 +17,7 @@
subprocess_env['PGPASSWORD'] = str(frappe.conf.db_password)
# bootstrap db
subprocess.check_output([
- 'psql', frappe.conf.db_name, '-h', 'localhost', '-U',
+ 'psql', frappe.conf.db_name, '-h', frappe.conf.db_host, '-U',
frappe.conf.db_name, '-f',
os.path.join(os.path.dirname(__file__), 'framework_postgres.sql')
], env=subprocess_env)
| {"golden_diff": "diff --git a/frappe/database/postgres/setup_db.py b/frappe/database/postgres/setup_db.py\n--- a/frappe/database/postgres/setup_db.py\n+++ b/frappe/database/postgres/setup_db.py\n@@ -17,7 +17,7 @@\n \tsubprocess_env['PGPASSWORD'] = str(frappe.conf.db_password)\n \t# bootstrap db\n \tsubprocess.check_output([\n-\t\t'psql', frappe.conf.db_name, '-h', 'localhost', '-U',\n+\t\t'psql', frappe.conf.db_name, '-h', frappe.conf.db_host, '-U',\n \t\tfrappe.conf.db_name, '-f',\n \t\tos.path.join(os.path.dirname(__file__), 'framework_postgres.sql')\n \t], env=subprocess_env)\n", "issue": "Hard coded localhost for pgsql setup\nThe postgresql setup is using `localhost` instead of the `db_host` from the site configuration:\r\nhttps://github.com/frappe/frappe/blob/17c43b4e1d2506e5b1498e52f3aa6c81277d67be/frappe/database/postgres/setup_db.py#L20\r\n\r\nThis makes it impossible to use Postgres on a remote host :/\r\n\r\nI can try to provide a PR for that.\r\nI do not guaranty I will have the time to test it but it's fairly simple.\n", "before_files": [{"content": "import frappe, subprocess, os\nfrom six.moves import input\n\ndef setup_database(force, source_sql, verbose):\n\troot_conn = get_root_connection()\n\troot_conn.commit()\n\troot_conn.sql(\"DROP DATABASE IF EXISTS `{0}`\".format(frappe.conf.db_name))\n\troot_conn.sql(\"DROP USER IF EXISTS {0}\".format(frappe.conf.db_name))\n\troot_conn.sql(\"CREATE DATABASE `{0}`\".format(frappe.conf.db_name))\n\troot_conn.sql(\"CREATE user {0} password '{1}'\".format(frappe.conf.db_name,\n\t\tfrappe.conf.db_password))\n\troot_conn.sql(\"GRANT ALL PRIVILEGES ON DATABASE `{0}` TO {0}\".format(frappe.conf.db_name))\n\n\t# we can't pass psql password in arguments in postgresql as mysql. So\n\t# set password connection parameter in environment variable\n\tsubprocess_env = os.environ.copy()\n\tsubprocess_env['PGPASSWORD'] = str(frappe.conf.db_password)\n\t# bootstrap db\n\tsubprocess.check_output([\n\t\t'psql', frappe.conf.db_name, '-h', 'localhost', '-U',\n\t\tfrappe.conf.db_name, '-f',\n\t\tos.path.join(os.path.dirname(__file__), 'framework_postgres.sql')\n\t], env=subprocess_env)\n\n\tfrappe.connect()\n\ndef setup_help_database(help_db_name):\n\troot_conn = get_root_connection()\n\troot_conn.sql(\"DROP DATABASE IF EXISTS `{0}`\".format(help_db_name))\n\troot_conn.sql(\"DROP USER IF EXISTS {0}\".format(help_db_name))\n\troot_conn.sql(\"CREATE DATABASE `{0}`\".format(help_db_name))\n\troot_conn.sql(\"CREATE user {0} password '{1}'\".format(help_db_name, help_db_name))\n\troot_conn.sql(\"GRANT ALL PRIVILEGES ON DATABASE `{0}` TO {0}\".format(help_db_name))\n\ndef get_root_connection(root_login=None, root_password=None):\n\timport getpass\n\tif not frappe.local.flags.root_connection:\n\t\tif not root_login:\n\t\t\troot_login = frappe.conf.get(\"root_login\") or None\n\n\t\tif not root_login:\n\t\t\troot_login = input(\"Enter postgres super user: \")\n\n\t\tif not root_password:\n\t\t\troot_password = frappe.conf.get(\"root_password\") or None\n\n\t\tif not root_password:\n\t\t\troot_password = getpass.getpass(\"Postgres super user password: \")\n\n\t\tfrappe.local.flags.root_connection = frappe.database.get_db(user=root_login, password=root_password)\n\n\treturn frappe.local.flags.root_connection\n", "path": "frappe/database/postgres/setup_db.py"}], "after_files": [{"content": "import frappe, subprocess, os\nfrom six.moves import input\n\ndef setup_database(force, source_sql, verbose):\n\troot_conn = 
get_root_connection()\n\troot_conn.commit()\n\troot_conn.sql(\"DROP DATABASE IF EXISTS `{0}`\".format(frappe.conf.db_name))\n\troot_conn.sql(\"DROP USER IF EXISTS {0}\".format(frappe.conf.db_name))\n\troot_conn.sql(\"CREATE DATABASE `{0}`\".format(frappe.conf.db_name))\n\troot_conn.sql(\"CREATE user {0} password '{1}'\".format(frappe.conf.db_name,\n\t\tfrappe.conf.db_password))\n\troot_conn.sql(\"GRANT ALL PRIVILEGES ON DATABASE `{0}` TO {0}\".format(frappe.conf.db_name))\n\n\t# we can't pass psql password in arguments in postgresql as mysql. So\n\t# set password connection parameter in environment variable\n\tsubprocess_env = os.environ.copy()\n\tsubprocess_env['PGPASSWORD'] = str(frappe.conf.db_password)\n\t# bootstrap db\n\tsubprocess.check_output([\n\t\t'psql', frappe.conf.db_name, '-h', frappe.conf.db_host, '-U',\n\t\tfrappe.conf.db_name, '-f',\n\t\tos.path.join(os.path.dirname(__file__), 'framework_postgres.sql')\n\t], env=subprocess_env)\n\n\tfrappe.connect()\n\ndef setup_help_database(help_db_name):\n\troot_conn = get_root_connection()\n\troot_conn.sql(\"DROP DATABASE IF EXISTS `{0}`\".format(help_db_name))\n\troot_conn.sql(\"DROP USER IF EXISTS {0}\".format(help_db_name))\n\troot_conn.sql(\"CREATE DATABASE `{0}`\".format(help_db_name))\n\troot_conn.sql(\"CREATE user {0} password '{1}'\".format(help_db_name, help_db_name))\n\troot_conn.sql(\"GRANT ALL PRIVILEGES ON DATABASE `{0}` TO {0}\".format(help_db_name))\n\ndef get_root_connection(root_login=None, root_password=None):\n\timport getpass\n\tif not frappe.local.flags.root_connection:\n\t\tif not root_login:\n\t\t\troot_login = frappe.conf.get(\"root_login\") or None\n\n\t\tif not root_login:\n\t\t\troot_login = input(\"Enter postgres super user: \")\n\n\t\tif not root_password:\n\t\t\troot_password = frappe.conf.get(\"root_password\") or None\n\n\t\tif not root_password:\n\t\t\troot_password = getpass.getpass(\"Postgres super user password: \")\n\n\t\tfrappe.local.flags.root_connection = frappe.database.get_db(user=root_login, password=root_password)\n\n\treturn frappe.local.flags.root_connection\n", "path": "frappe/database/postgres/setup_db.py"}]} | 1,045 | 166 |
gh_patches_debug_1782 | rasdani/github-patches | git_diff | python__python-docs-es-1714 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Translate 'library/http.server.po'
This needs to reach 100% translated.
The rendered version of this file will be available at https://docs.python.org/es/3.10/library/http.server.html once translated.
Meanwhile, the English version is shown.
Current stats for `library/http.server.po`:
* Fuzzy: 4
* Percent translated: 97.8%
* Entries: 87 / 89
* Untranslated: 2
Please, comment here if you want this file to be assigned to you and a member will assign it to you as soon as possible, so you can start working on it.
Remember to follow the steps in our [Contributing Guide](https://python-docs-es.readthedocs.io/page/CONTRIBUTING.html).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/translate.py`
Content:
```
1 import os
2 import re
3 import sys
4 from typing import Dict, Tuple
5
6 import polib
7
8 VERBOSE = False
9 DEBUG = False
10 SKIP_TRANSLATED_ENTRIES = True
11
12 try:
13 from deep_translator import GoogleTranslator
14 except ImportError:
15 print("Error: This util script needs `deep_translator` to be installed")
16 sys.exit(1)
17
18 _patterns = [
19 ":c:func:`[^`]+`",
20 ":c:type:`[^`]+`",
21 ":c:macro:`[^`]+`",
22 ":c:member:`[^`]+`",
23 ":c:data:`[^`]+`",
24 ":py:data:`[^`]+`",
25 ":py:mod:`[^`]+`",
26 ":func:`[^`]+`",
27 ":mod:`[^`]+`",
28 ":ref:`[^`]+`",
29 ":class:`[^`]+`",
30 ":pep:`[^`]+`",
31 ":data:`[^`]+`",
32 ":exc:`[^`]+`",
33 ":term:`[^`]+`",
34 ":meth:`[^`]+`",
35 ":envvar:`[^`]+`",
36 ":file:`[^`]+`",
37 ":attr:`[^`]+`",
38 ":const:`[^`]+`",
39 ":issue:`[^`]+`",
40 ":opcode:`[^`]+`",
41 ":option:`[^`]+`",
42 ":program:`[^`]+`",
43 ":keyword:`[^`]+`",
44 ":RFC:`[^`]+`",
45 ":doc:`[^`]+`",
46 "``[^`]+``",
47 "`[^`]+`__",
48 "`[^`]+`_",
49 "\*\*.+\*\*", # bold text between **
50 "\*.+\*", # italic text between *
51 ]
52
53 _exps = [re.compile(e) for e in _patterns]
54
55 def protect_sphinx_directives(s: str) -> Tuple[dict, str]:
56 """
57 Parameters:
58 string containing the text to translate
59
60 Returns:
61 dictionary containing all the placeholder text as keys
62 and the correct value.
63 """
64
65 i = 0
66 d: Dict[str, str] = {}
67 for exp in _exps:
68 matches = exp.findall(s)
69 if DEBUG:
70 print(exp, matches)
71 for match in matches:
72 ph = f"XASDF{str(i).zfill(2)}"
73 s = s.replace(match, ph)
74 if ph in d and VERBOSE:
75 print(f"Error: {ph} is already in the dictionary")
76 print("new", match)
77 print("old", d[ph])
78 d[ph] = match
79 i += 1
80 return d, s
81
82
83 def undo_sphinx_directives_protection(placeholders: dict, translated_text: str) -> str:
84 for ph, value in placeholders.items():
85 translated_text = translated_text.replace(ph, value)
86 if DEBUG:
87 print(ph, value)
88 print(translated_text)
89 return translated_text
90
91
92 if __name__ == "__main__":
93 filename = sys.argv[1]
94 if not os.path.isfile(filename):
95 print(f"File not found: '{filename}'")
96 sys.exit(-1)
97
98 po = polib.pofile(filename)
99 translator = GoogleTranslator(source="en", target="es")
100
101 for entry in po:
102 # If the entry has already a translation, skip.
103 if SKIP_TRANSLATED_ENTRIES and entry.msgstr:
104 continue
105
106 print("\nEN|", entry.msgid)
107 placeholders, temp_text = protect_sphinx_directives(entry.msgid)
108 if VERBOSE:
109 print(temp_text)
110 print(placeholders)
111
112 # Translate the temporary text without sphinx statements
113 translated_text = translator.translate(temp_text)
114
115 # Recover sphinx statements
116 real_text = undo_sphinx_directives_protection(placeholders, translated_text)
117 print("ES|", real_text)
118
119 # Replace the po file translated entry
120 entry.msgstr = real_text
121
122 # Save the file after all the entries are translated
123 po.save()
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/translate.py b/scripts/translate.py
--- a/scripts/translate.py
+++ b/scripts/translate.py
@@ -46,8 +46,8 @@
"``[^`]+``",
"`[^`]+`__",
"`[^`]+`_",
- "\*\*.+\*\*", # bold text between **
- "\*.+\*", # italic text between *
+ "\*\*[^\*]+\*\*", # bold text between **
+ "\*[^\*]+\*", # italic text between *
]
_exps = [re.compile(e) for e in _patterns]
| {"golden_diff": "diff --git a/scripts/translate.py b/scripts/translate.py\n--- a/scripts/translate.py\n+++ b/scripts/translate.py\n@@ -46,8 +46,8 @@\n \"``[^`]+``\",\n \"`[^`]+`__\",\n \"`[^`]+`_\",\n- \"\\*\\*.+\\*\\*\", # bold text between **\n- \"\\*.+\\*\", # italic text between *\n+ \"\\*\\*[^\\*]+\\*\\*\", # bold text between **\n+ \"\\*[^\\*]+\\*\", # italic text between *\n ]\n \n _exps = [re.compile(e) for e in _patterns]\n", "issue": "Translate 'library/http.server.po'\nThis needs to reach 100% translated.\n\nThe rendered version of this file will be available at https://docs.python.org/es/3.10/library/http.server.html once translated.\nMeanwhile, the English version is shown.\n\nCurrent stats for `library/http.server.po`:\n\n* Fuzzy: 4\n* Percent translated: 97.8%\n* Entries: 87 / 89\n* Untranslated: 2\n\nPlease, comment here if you want this file to be assigned to you and an member will assign it to you as soon as possible, so you can start working on it.\n\nRemember to follow the steps in our [Contributing Guide](https://python-docs-es.readthedocs.io/page/CONTRIBUTING.html).\n", "before_files": [{"content": "import os\nimport re\nimport sys\nfrom typing import Dict, Tuple\n\nimport polib\n\nVERBOSE = False\nDEBUG = False\nSKIP_TRANSLATED_ENTRIES = True\n\ntry:\n from deep_translator import GoogleTranslator\nexcept ImportError:\n print(\"Error: This util script needs `deep_translator` to be installed\")\n sys.exit(1)\n\n_patterns = [\n \":c:func:`[^`]+`\",\n \":c:type:`[^`]+`\",\n \":c:macro:`[^`]+`\",\n \":c:member:`[^`]+`\",\n \":c:data:`[^`]+`\",\n \":py:data:`[^`]+`\",\n \":py:mod:`[^`]+`\",\n \":func:`[^`]+`\",\n \":mod:`[^`]+`\",\n \":ref:`[^`]+`\",\n \":class:`[^`]+`\",\n \":pep:`[^`]+`\",\n \":data:`[^`]+`\",\n \":exc:`[^`]+`\",\n \":term:`[^`]+`\",\n \":meth:`[^`]+`\",\n \":envvar:`[^`]+`\",\n \":file:`[^`]+`\",\n \":attr:`[^`]+`\",\n \":const:`[^`]+`\",\n \":issue:`[^`]+`\",\n \":opcode:`[^`]+`\",\n \":option:`[^`]+`\",\n \":program:`[^`]+`\",\n \":keyword:`[^`]+`\",\n \":RFC:`[^`]+`\",\n \":doc:`[^`]+`\",\n \"``[^`]+``\",\n \"`[^`]+`__\",\n \"`[^`]+`_\",\n \"\\*\\*.+\\*\\*\", # bold text between **\n \"\\*.+\\*\", # italic text between *\n]\n\n_exps = [re.compile(e) for e in _patterns]\n\ndef protect_sphinx_directives(s: str) -> Tuple[dict, str]:\n \"\"\"\n Parameters:\n string containing the text to translate\n\n Returns:\n dictionary containing all the placeholder text as keys\n and the correct value.\n \"\"\"\n\n i = 0\n d: Dict[str, str] = {}\n for exp in _exps:\n matches = exp.findall(s)\n if DEBUG:\n print(exp, matches)\n for match in matches:\n ph = f\"XASDF{str(i).zfill(2)}\"\n s = s.replace(match, ph)\n if ph in d and VERBOSE:\n print(f\"Error: {ph} is already in the dictionary\")\n print(\"new\", match)\n print(\"old\", d[ph])\n d[ph] = match\n i += 1\n return d, s\n\n\ndef undo_sphinx_directives_protection(placeholders: dict, translated_text: str) -> str:\n for ph, value in placeholders.items():\n translated_text = translated_text.replace(ph, value)\n if DEBUG:\n print(ph, value)\n print(translated_text)\n return translated_text\n\n\nif __name__ == \"__main__\":\n filename = sys.argv[1]\n if not os.path.isfile(filename):\n print(f\"File not found: '{filename}'\")\n sys.exit(-1)\n\n po = polib.pofile(filename)\n translator = GoogleTranslator(source=\"en\", target=\"es\")\n\n for entry in po:\n # If the entry has already a translation, skip.\n if SKIP_TRANSLATED_ENTRIES and entry.msgstr:\n continue\n\n print(\"\\nEN|\", entry.msgid)\n placeholders, 
temp_text = protect_sphinx_directives(entry.msgid)\n if VERBOSE:\n print(temp_text)\n print(placeholders)\n\n # Translate the temporary text without sphinx statements\n translated_text = translator.translate(temp_text)\n\n # Recover sphinx statements\n real_text = undo_sphinx_directives_protection(placeholders, translated_text)\n print(\"ES|\", real_text)\n\n # Replace the po file translated entry\n entry.msgstr = real_text\n\n # Save the file after all the entries are translated\n po.save()\n", "path": "scripts/translate.py"}], "after_files": [{"content": "import os\nimport re\nimport sys\nfrom typing import Dict, Tuple\n\nimport polib\n\nVERBOSE = False\nDEBUG = False\nSKIP_TRANSLATED_ENTRIES = True\n\ntry:\n from deep_translator import GoogleTranslator\nexcept ImportError:\n print(\"Error: This util script needs `deep_translator` to be installed\")\n sys.exit(1)\n\n_patterns = [\n \":c:func:`[^`]+`\",\n \":c:type:`[^`]+`\",\n \":c:macro:`[^`]+`\",\n \":c:member:`[^`]+`\",\n \":c:data:`[^`]+`\",\n \":py:data:`[^`]+`\",\n \":py:mod:`[^`]+`\",\n \":func:`[^`]+`\",\n \":mod:`[^`]+`\",\n \":ref:`[^`]+`\",\n \":class:`[^`]+`\",\n \":pep:`[^`]+`\",\n \":data:`[^`]+`\",\n \":exc:`[^`]+`\",\n \":term:`[^`]+`\",\n \":meth:`[^`]+`\",\n \":envvar:`[^`]+`\",\n \":file:`[^`]+`\",\n \":attr:`[^`]+`\",\n \":const:`[^`]+`\",\n \":issue:`[^`]+`\",\n \":opcode:`[^`]+`\",\n \":option:`[^`]+`\",\n \":program:`[^`]+`\",\n \":keyword:`[^`]+`\",\n \":RFC:`[^`]+`\",\n \":doc:`[^`]+`\",\n \"``[^`]+``\",\n \"`[^`]+`__\",\n \"`[^`]+`_\",\n \"\\*\\*[^\\*]+\\*\\*\", # bold text between **\n \"\\*[^\\*]+\\*\", # italic text between *\n]\n\n_exps = [re.compile(e) for e in _patterns]\n\ndef protect_sphinx_directives(s: str) -> Tuple[dict, str]:\n \"\"\"\n Parameters:\n string containing the text to translate\n\n Returns:\n dictionary containing all the placeholder text as keys\n and the correct value.\n \"\"\"\n\n i = 0\n d: Dict[str, str] = {}\n for exp in _exps:\n matches = exp.findall(s)\n if DEBUG:\n print(exp, matches)\n for match in matches:\n ph = f\"XASDF{str(i).zfill(2)}\"\n s = s.replace(match, ph)\n if ph in d and VERBOSE:\n print(f\"Error: {ph} is already in the dictionary\")\n print(\"new\", match)\n print(\"old\", d[ph])\n d[ph] = match\n i += 1\n return d, s\n\n\ndef undo_sphinx_directives_protection(placeholders: dict, translated_text: str) -> str:\n for ph, value in placeholders.items():\n translated_text = translated_text.replace(ph, value)\n if DEBUG:\n print(ph, value)\n print(translated_text)\n return translated_text\n\n\nif __name__ == \"__main__\":\n filename = sys.argv[1]\n if not os.path.isfile(filename):\n print(f\"File not found: '{filename}'\")\n sys.exit(-1)\n\n po = polib.pofile(filename)\n translator = GoogleTranslator(source=\"en\", target=\"es\")\n\n for entry in po:\n # If the entry has already a translation, skip.\n if SKIP_TRANSLATED_ENTRIES and entry.msgstr:\n continue\n\n print(\"\\nEN|\", entry.msgid)\n placeholders, temp_text = protect_sphinx_directives(entry.msgid)\n if VERBOSE:\n print(temp_text)\n print(placeholders)\n\n # Translate the temporary text without sphinx statements\n translated_text = translator.translate(temp_text)\n\n # Recover sphinx statements\n real_text = undo_sphinx_directives_protection(placeholders, translated_text)\n print(\"ES|\", real_text)\n\n # Replace the po file translated entry\n entry.msgstr = real_text\n\n # Save the file after all the entries are translated\n po.save()\n", "path": "scripts/translate.py"}]} | 1,569 | 137 |
gh_patches_debug_8128 | rasdani/github-patches | git_diff | ethereum__web3.py-1930 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Changed pypiwin32 to pywin32
### What was wrong?
pypiwin32 is just an empty package requiring pywin32, so this can just be required directly. As far as I know, pypiwin32 is only there for historical reasons.
Related to Issue #1721
### How was it fixed?
change it in setup.py
### Todo:
- [ ] Add entry to the [release notes](https://github.com/ethereum/web3.py/blob/master/newsfragments/README.md)
#### Cute Animal Picture

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 from setuptools import (
4 find_packages,
5 setup,
6 )
7
8 extras_require = {
9 'tester': [
10 "eth-tester[py-evm]==v0.5.0-beta.3",
11 "py-geth>=2.4.0,<3",
12 ],
13 'linter': [
14 "flake8==3.8.3",
15 "isort>=4.2.15,<4.3.5",
16 "mypy==0.730",
17 ],
18 'docs': [
19 "mock",
20 "sphinx-better-theme>=0.1.4",
21 "click>=5.1",
22 "configparser==3.5.0",
23 "contextlib2>=0.5.4",
24 "py-geth>=2.4.0,<3",
25 "py-solc>=0.4.0",
26 "pytest>=4.4.0,<5.0.0",
27 "sphinx>=2.4.4,<3",
28 "sphinx_rtd_theme>=0.1.9",
29 "toposort>=1.4",
30 "towncrier>=19.2.0,<20",
31 "urllib3",
32 "web3>=2.1.0",
33 "wheel"
34 ],
35 'dev': [
36 "bumpversion",
37 "flaky>=3.3.0",
38 "hypothesis>=3.31.2,<6",
39 "pytest>=4.4.0,<5.0.0",
40 "pytest-asyncio>=0.10.0,<0.11",
41 "pytest-mock>=1.10,<2",
42 "pytest-pythonpath>=0.3",
43 "pytest-watch>=4.2,<5",
44 "pytest-xdist>=1.29,<2",
45 "setuptools>=38.6.0",
46 "tox>=1.8.0",
47 "tqdm>4.32,<5",
48 "twine>=1.13,<2",
49 "when-changed>=0.3.0,<0.4"
50 ]
51 }
52
53 extras_require['dev'] = (
54 extras_require['tester']
55 + extras_require['linter']
56 + extras_require['docs']
57 + extras_require['dev']
58 )
59
60 with open('./README.md') as readme:
61 long_description = readme.read()
62
63 setup(
64 name='web3',
65 # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.
66 version='5.17.0',
67 description="""Web3.py""",
68 long_description_content_type='text/markdown',
69 long_description=long_description,
70 author='Piper Merriam',
71 author_email='[email protected]',
72 url='https://github.com/ethereum/web3.py',
73 include_package_data=True,
74 install_requires=[
75 "eth-abi>=2.0.0b6,<3.0.0",
76 "eth-account>=0.5.3,<0.6.0",
77 "eth-hash[pycryptodome]>=0.2.0,<1.0.0",
78 "eth-typing>=2.0.0,<3.0.0",
79 "eth-utils>=1.9.5,<2.0.0",
80 "hexbytes>=0.1.0,<1.0.0",
81 "ipfshttpclient==0.7.0a1",
82 "jsonschema>=3.2.0,<4.0.0",
83 "lru-dict>=1.1.6,<2.0.0",
84 "protobuf>=3.10.0,<4",
85 "pypiwin32>=223;platform_system=='Windows'",
86 "requests>=2.16.0,<3.0.0",
87 # remove typing_extensions after python_requires>=3.8, see web3._utils.compat
88 "typing-extensions>=3.7.4.1,<4;python_version<'3.8'",
89 "websockets>=8.1.0,<9.0.0",
90 ],
91 python_requires='>=3.6,<4',
92 extras_require=extras_require,
93 py_modules=['web3', 'ens', 'ethpm'],
94 entry_points={"pytest11": ["pytest_ethereum = web3.tools.pytest_ethereum.plugins"]},
95 license="MIT",
96 zip_safe=False,
97 keywords='ethereum',
98 packages=find_packages(exclude=["tests", "tests.*"]),
99 package_data={"web3": ["py.typed"]},
100 classifiers=[
101 'Development Status :: 5 - Production/Stable',
102 'Intended Audience :: Developers',
103 'License :: OSI Approved :: MIT License',
104 'Natural Language :: English',
105 'Programming Language :: Python :: 3',
106 'Programming Language :: Python :: 3.6',
107 'Programming Language :: Python :: 3.7',
108 'Programming Language :: Python :: 3.8',
109 ],
110 )
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -82,7 +82,7 @@
"jsonschema>=3.2.0,<4.0.0",
"lru-dict>=1.1.6,<2.0.0",
"protobuf>=3.10.0,<4",
- "pypiwin32>=223;platform_system=='Windows'",
+ "pywin32>=223;platform_system=='Windows'",
"requests>=2.16.0,<3.0.0",
# remove typing_extensions after python_requires>=3.8, see web3._utils.compat
"typing-extensions>=3.7.4.1,<4;python_version<'3.8'",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -82,7 +82,7 @@\n \"jsonschema>=3.2.0,<4.0.0\",\n \"lru-dict>=1.1.6,<2.0.0\",\n \"protobuf>=3.10.0,<4\",\n- \"pypiwin32>=223;platform_system=='Windows'\",\n+ \"pywin32>=223;platform_system=='Windows'\",\n \"requests>=2.16.0,<3.0.0\",\n # remove typing_extensions after python_requires>=3.8, see web3._utils.compat\n \"typing-extensions>=3.7.4.1,<4;python_version<'3.8'\",\n", "issue": "Changed pypiwin32 to pywin32\n### What was wrong?\r\npypiwin32 is just an empty package requiring pywin32, so this can just be required directly. As far as I know, pypiwin32 is only there for historical reasons.\r\n\r\nRelated to Issue #1721\r\n\r\n### How was it fixed?\r\nchange it in setup.py\r\n\r\n### Todo:\r\n- [ ] Add entry to the [release notes](https://github.com/ethereum/web3.py/blob/master/newsfragments/README.md)\r\n\r\n#### Cute Animal Picture\r\n\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n 'tester': [\n \"eth-tester[py-evm]==v0.5.0-beta.3\",\n \"py-geth>=2.4.0,<3\",\n ],\n 'linter': [\n \"flake8==3.8.3\",\n \"isort>=4.2.15,<4.3.5\",\n \"mypy==0.730\",\n ],\n 'docs': [\n \"mock\",\n \"sphinx-better-theme>=0.1.4\",\n \"click>=5.1\",\n \"configparser==3.5.0\",\n \"contextlib2>=0.5.4\",\n \"py-geth>=2.4.0,<3\",\n \"py-solc>=0.4.0\",\n \"pytest>=4.4.0,<5.0.0\",\n \"sphinx>=2.4.4,<3\",\n \"sphinx_rtd_theme>=0.1.9\",\n \"toposort>=1.4\",\n \"towncrier>=19.2.0,<20\",\n \"urllib3\",\n \"web3>=2.1.0\",\n \"wheel\"\n ],\n 'dev': [\n \"bumpversion\",\n \"flaky>=3.3.0\",\n \"hypothesis>=3.31.2,<6\",\n \"pytest>=4.4.0,<5.0.0\",\n \"pytest-asyncio>=0.10.0,<0.11\",\n \"pytest-mock>=1.10,<2\",\n \"pytest-pythonpath>=0.3\",\n \"pytest-watch>=4.2,<5\",\n \"pytest-xdist>=1.29,<2\",\n \"setuptools>=38.6.0\",\n \"tox>=1.8.0\",\n \"tqdm>4.32,<5\",\n \"twine>=1.13,<2\",\n \"when-changed>=0.3.0,<0.4\"\n ]\n}\n\nextras_require['dev'] = (\n extras_require['tester']\n + extras_require['linter']\n + extras_require['docs']\n + extras_require['dev']\n)\n\nwith open('./README.md') as readme:\n long_description = readme.read()\n\nsetup(\n name='web3',\n # *IMPORTANT*: Don't manually change the version here. 
Use the 'bumpversion' utility.\n version='5.17.0',\n description=\"\"\"Web3.py\"\"\",\n long_description_content_type='text/markdown',\n long_description=long_description,\n author='Piper Merriam',\n author_email='[email protected]',\n url='https://github.com/ethereum/web3.py',\n include_package_data=True,\n install_requires=[\n \"eth-abi>=2.0.0b6,<3.0.0\",\n \"eth-account>=0.5.3,<0.6.0\",\n \"eth-hash[pycryptodome]>=0.2.0,<1.0.0\",\n \"eth-typing>=2.0.0,<3.0.0\",\n \"eth-utils>=1.9.5,<2.0.0\",\n \"hexbytes>=0.1.0,<1.0.0\",\n \"ipfshttpclient==0.7.0a1\",\n \"jsonschema>=3.2.0,<4.0.0\",\n \"lru-dict>=1.1.6,<2.0.0\",\n \"protobuf>=3.10.0,<4\",\n \"pypiwin32>=223;platform_system=='Windows'\",\n \"requests>=2.16.0,<3.0.0\",\n # remove typing_extensions after python_requires>=3.8, see web3._utils.compat\n \"typing-extensions>=3.7.4.1,<4;python_version<'3.8'\",\n \"websockets>=8.1.0,<9.0.0\",\n ],\n python_requires='>=3.6,<4',\n extras_require=extras_require,\n py_modules=['web3', 'ens', 'ethpm'],\n entry_points={\"pytest11\": [\"pytest_ethereum = web3.tools.pytest_ethereum.plugins\"]},\n license=\"MIT\",\n zip_safe=False,\n keywords='ethereum',\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n package_data={\"web3\": [\"py.typed\"]},\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n 'tester': [\n \"eth-tester[py-evm]==v0.5.0-beta.3\",\n \"py-geth>=2.4.0,<3\",\n ],\n 'linter': [\n \"flake8==3.8.3\",\n \"isort>=4.2.15,<4.3.5\",\n \"mypy==0.730\",\n ],\n 'docs': [\n \"mock\",\n \"sphinx-better-theme>=0.1.4\",\n \"click>=5.1\",\n \"configparser==3.5.0\",\n \"contextlib2>=0.5.4\",\n \"py-geth>=2.4.0,<3\",\n \"py-solc>=0.4.0\",\n \"pytest>=4.4.0,<5.0.0\",\n \"sphinx>=2.4.4,<3\",\n \"sphinx_rtd_theme>=0.1.9\",\n \"toposort>=1.4\",\n \"towncrier>=19.2.0,<20\",\n \"urllib3\",\n \"web3>=2.1.0\",\n \"wheel\"\n ],\n 'dev': [\n \"bumpversion\",\n \"flaky>=3.3.0\",\n \"hypothesis>=3.31.2,<6\",\n \"pytest>=4.4.0,<5.0.0\",\n \"pytest-asyncio>=0.10.0,<0.11\",\n \"pytest-mock>=1.10,<2\",\n \"pytest-pythonpath>=0.3\",\n \"pytest-watch>=4.2,<5\",\n \"pytest-xdist>=1.29,<2\",\n \"setuptools>=38.6.0\",\n \"tox>=1.8.0\",\n \"tqdm>4.32,<5\",\n \"twine>=1.13,<2\",\n \"when-changed>=0.3.0,<0.4\"\n ]\n}\n\nextras_require['dev'] = (\n extras_require['tester']\n + extras_require['linter']\n + extras_require['docs']\n + extras_require['dev']\n)\n\nwith open('./README.md') as readme:\n long_description = readme.read()\n\nsetup(\n name='web3',\n # *IMPORTANT*: Don't manually change the version here. 
Use the 'bumpversion' utility.\n version='5.17.0',\n description=\"\"\"Web3.py\"\"\",\n long_description_content_type='text/markdown',\n long_description=long_description,\n author='Piper Merriam',\n author_email='[email protected]',\n url='https://github.com/ethereum/web3.py',\n include_package_data=True,\n install_requires=[\n \"eth-abi>=2.0.0b6,<3.0.0\",\n \"eth-account>=0.5.3,<0.6.0\",\n \"eth-hash[pycryptodome]>=0.2.0,<1.0.0\",\n \"eth-typing>=2.0.0,<3.0.0\",\n \"eth-utils>=1.9.5,<2.0.0\",\n \"hexbytes>=0.1.0,<1.0.0\",\n \"ipfshttpclient==0.7.0a1\",\n \"jsonschema>=3.2.0,<4.0.0\",\n \"lru-dict>=1.1.6,<2.0.0\",\n \"protobuf>=3.10.0,<4\",\n \"pywin32>=223;platform_system=='Windows'\",\n \"requests>=2.16.0,<3.0.0\",\n # remove typing_extensions after python_requires>=3.8, see web3._utils.compat\n \"typing-extensions>=3.7.4.1,<4;python_version<'3.8'\",\n \"websockets>=8.1.0,<9.0.0\",\n ],\n python_requires='>=3.6,<4',\n extras_require=extras_require,\n py_modules=['web3', 'ens', 'ethpm'],\n entry_points={\"pytest11\": [\"pytest_ethereum = web3.tools.pytest_ethereum.plugins\"]},\n license=\"MIT\",\n zip_safe=False,\n keywords='ethereum',\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n package_data={\"web3\": [\"py.typed\"]},\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n)\n", "path": "setup.py"}]} | 1,764 | 176 |
gh_patches_debug_9273 | rasdani/github-patches | git_diff | google__fuzzbench-309 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
master cannot pass "make presubmit"
Current master branch has one test failing at "``make presubmit``".
To reproduce: ``git clone https://github.com/google/fuzzbench ; cd fuzzbench ; git submodule update --init ; make install-dependencies ; source ./vene/bin/activate ; make presubmit``
https://github.com/google/fuzzbench/blob/41aa9b4267907185bd51514a6fb335ff70f4990d/experiment/test_measurer.py#L347
E assert 0 == 3798
Reference: https://google.github.io/fuzzbench/getting-started/prerequisites/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conftest.py`
Content:
```
1 # Copyright 2020 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """A pytest conftest.py file that defines fixtures and does other things many
15 tests might need (such as using an in-memory SQLite database)."""
16
17 import os
18 import sqlite3
19 from unittest import mock
20
21 import pytest
22 import sqlalchemy
23
24 from common import new_process
25
26 # Never wait for a timeout so that tests don't take any longer than they need
27 # to.
28 new_process.WAIT_SECONDS = 0
29
30 # Set this to an in-memory instance of SQLite so that db_utils can be imported
31 # without running a real Postgres database.
32 # pylint: disable=wrong-import-position
33 os.environ['SQL_DATABASE_URL'] = 'sqlite://'
34
35 from database import utils as db_utils
36 from database import models
37
38
39 # Give this a short name since it is a fixture.
40 @pytest.yield_fixture
41 def db(): # pylint: disable=invalid-name
42 """Connect to the SQLite database and create all the expected tables."""
43 db_utils.initialize()
44 models.Base.metadata.create_all(db_utils.engine)
45 with mock.patch('database.utils.cleanup'):
46 yield
47 db_utils.cleanup()
48
49
50 @sqlalchemy.event.listens_for(sqlalchemy.engine.Engine, 'connect')
51 def set_sqlite_pragma(connection, _):
52 """Force SQLite to enforce non-null foreign key relationships.
53 Based on
54 https://docs.sqlalchemy.org/en/13/dialects/sqlite.html#foreign-key-support
55 """
56 if not isinstance(connection, sqlite3.Connection):
57 return
58
59 cursor = connection.cursor()
60 cursor.execute('PRAGMA foreign_keys=ON')
61 cursor.close()
62
63
64 @pytest.yield_fixture
65 def environ():
66 """Patch environment."""
67 # TODO(metzman): Make sure this is used by all tests that modify the
68 # environment.
69 patcher = mock.patch.dict(os.environ, {})
70 patcher.start()
71 yield
72 patcher.stop()
73
74
75 @pytest.fixture
76 def experiment(tmp_path, environ): # pylint: disable=redefined-outer-name,unused-argument
77 """Mock an experiment."""
78 os.environ['WORK'] = str(tmp_path)
79 os.environ['EXPERIMENT'] = 'test-experiment'
80 os.environ['CLOUD_EXPERIMENT_BUCKET'] = 'gs://experiment-data'
81 os.environ['CLOUD_WEB_BUCKET'] = 'gs://web-bucket'
82 os.environ['CLOUD_PROJECT'] = 'fuzzbench'
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conftest.py b/conftest.py
--- a/conftest.py
+++ b/conftest.py
@@ -73,9 +73,9 @@
@pytest.fixture
-def experiment(tmp_path, environ): # pylint: disable=redefined-outer-name,unused-argument
+def experiment(environ): # pylint: disable=redefined-outer-name,unused-argument
"""Mock an experiment."""
- os.environ['WORK'] = str(tmp_path)
+ os.environ['WORK'] = '/work'
os.environ['EXPERIMENT'] = 'test-experiment'
os.environ['CLOUD_EXPERIMENT_BUCKET'] = 'gs://experiment-data'
os.environ['CLOUD_WEB_BUCKET'] = 'gs://web-bucket'
| {"golden_diff": "diff --git a/conftest.py b/conftest.py\n--- a/conftest.py\n+++ b/conftest.py\n@@ -73,9 +73,9 @@\n \n \n @pytest.fixture\n-def experiment(tmp_path, environ): # pylint: disable=redefined-outer-name,unused-argument\n+def experiment(environ): # pylint: disable=redefined-outer-name,unused-argument\n \"\"\"Mock an experiment.\"\"\"\n- os.environ['WORK'] = str(tmp_path)\n+ os.environ['WORK'] = '/work'\n os.environ['EXPERIMENT'] = 'test-experiment'\n os.environ['CLOUD_EXPERIMENT_BUCKET'] = 'gs://experiment-data'\n os.environ['CLOUD_WEB_BUCKET'] = 'gs://web-bucket'\n", "issue": "master cannot pass \"make presubmit\"\nCurrent master branch has one failed at \"``make presubmit``\".\r\n\r\nTo reproduce: ``git clone https://github.com/google/fuzzbench ; cd fuzzbench ; git submodule update --init ; make install-dependencies ; source ./vene/bin/activate ; make presubmit``\r\n\r\nhttps://github.com/google/fuzzbench/blob/41aa9b4267907185bd51514a6fb335ff70f4990d/experiment/test_measurer.py#L347\r\n\r\nE assert 0 == 3798\r\n\r\nReference: https://google.github.io/fuzzbench/getting-started/prerequisites/\n", "before_files": [{"content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"A pytest conftest.py file that defines fixtures and does other things many\ntests might need (such as using an in-memory SQLite database).\"\"\"\n\nimport os\nimport sqlite3\nfrom unittest import mock\n\nimport pytest\nimport sqlalchemy\n\nfrom common import new_process\n\n# Never wait for a timeout so that tests don't take any longer than they need\n# to.\nnew_process.WAIT_SECONDS = 0\n\n# Set this to an in-memory instance of SQLite so that db_utils can be imported\n# without running a real Postgres database.\n# pylint: disable=wrong-import-position\nos.environ['SQL_DATABASE_URL'] = 'sqlite://'\n\nfrom database import utils as db_utils\nfrom database import models\n\n\n# Give this a short name since it is a fixture.\[email protected]_fixture\ndef db(): # pylint: disable=invalid-name\n \"\"\"Connect to the SQLite database and create all the expected tables.\"\"\"\n db_utils.initialize()\n models.Base.metadata.create_all(db_utils.engine)\n with mock.patch('database.utils.cleanup'):\n yield\n db_utils.cleanup()\n\n\[email protected]_for(sqlalchemy.engine.Engine, 'connect')\ndef set_sqlite_pragma(connection, _):\n \"\"\"Force SQLite to enforce non-null foreign key relationships.\n Based on\n https://docs.sqlalchemy.org/en/13/dialects/sqlite.html#foreign-key-support\n \"\"\"\n if not isinstance(connection, sqlite3.Connection):\n return\n\n cursor = connection.cursor()\n cursor.execute('PRAGMA foreign_keys=ON')\n cursor.close()\n\n\[email protected]_fixture\ndef environ():\n \"\"\"Patch environment.\"\"\"\n # TODO(metzman): Make sure this is used by all tests that modify the\n # environment.\n patcher = mock.patch.dict(os.environ, {})\n patcher.start()\n yield\n patcher.stop()\n\n\[email protected]\ndef experiment(tmp_path, environ): # pylint: 
disable=redefined-outer-name,unused-argument\n \"\"\"Mock an experiment.\"\"\"\n os.environ['WORK'] = str(tmp_path)\n os.environ['EXPERIMENT'] = 'test-experiment'\n os.environ['CLOUD_EXPERIMENT_BUCKET'] = 'gs://experiment-data'\n os.environ['CLOUD_WEB_BUCKET'] = 'gs://web-bucket'\n os.environ['CLOUD_PROJECT'] = 'fuzzbench'\n", "path": "conftest.py"}], "after_files": [{"content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"A pytest conftest.py file that defines fixtures and does other things many\ntests might need (such as using an in-memory SQLite database).\"\"\"\n\nimport os\nimport sqlite3\nfrom unittest import mock\n\nimport pytest\nimport sqlalchemy\n\nfrom common import new_process\n\n# Never wait for a timeout so that tests don't take any longer than they need\n# to.\nnew_process.WAIT_SECONDS = 0\n\n# Set this to an in-memory instance of SQLite so that db_utils can be imported\n# without running a real Postgres database.\n# pylint: disable=wrong-import-position\nos.environ['SQL_DATABASE_URL'] = 'sqlite://'\n\nfrom database import utils as db_utils\nfrom database import models\n\n\n# Give this a short name since it is a fixture.\[email protected]_fixture\ndef db(): # pylint: disable=invalid-name\n \"\"\"Connect to the SQLite database and create all the expected tables.\"\"\"\n db_utils.initialize()\n models.Base.metadata.create_all(db_utils.engine)\n with mock.patch('database.utils.cleanup'):\n yield\n db_utils.cleanup()\n\n\[email protected]_for(sqlalchemy.engine.Engine, 'connect')\ndef set_sqlite_pragma(connection, _):\n \"\"\"Force SQLite to enforce non-null foreign key relationships.\n Based on\n https://docs.sqlalchemy.org/en/13/dialects/sqlite.html#foreign-key-support\n \"\"\"\n if not isinstance(connection, sqlite3.Connection):\n return\n\n cursor = connection.cursor()\n cursor.execute('PRAGMA foreign_keys=ON')\n cursor.close()\n\n\[email protected]_fixture\ndef environ():\n \"\"\"Patch environment.\"\"\"\n # TODO(metzman): Make sure this is used by all tests that modify the\n # environment.\n patcher = mock.patch.dict(os.environ, {})\n patcher.start()\n yield\n patcher.stop()\n\n\[email protected]\ndef experiment(environ): # pylint: disable=redefined-outer-name,unused-argument\n \"\"\"Mock an experiment.\"\"\"\n os.environ['WORK'] = '/work'\n os.environ['EXPERIMENT'] = 'test-experiment'\n os.environ['CLOUD_EXPERIMENT_BUCKET'] = 'gs://experiment-data'\n os.environ['CLOUD_WEB_BUCKET'] = 'gs://web-bucket'\n os.environ['CLOUD_PROJECT'] = 'fuzzbench'\n", "path": "conftest.py"}]} | 1,205 | 166 |
gh_patches_debug_43702 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-605 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Static/Assets directories in apps
Django says to include app specific static files in a `/static` directory in the root of the particular app (https://docs.djangoproject.com/en/1.11/howto/static-files/). On `runserver` Django collects static files from the project wide static directory (mentioned in base.py) and the `/static` directories within all apps, grouping them into one global `/static` directory.
Now, we don't need `.jsx` files in that global `/static` directory since they are required and put together into one file by webpack. Still, those files need a directory to be put into on app level – therefore sometimes there might be an asset directory instead of a static directory or the other way around.
It'd be great to have one directory and in the best case that would include not having `.jsx` files in that django-generated, global `/static` directory.
--- END ISSUE ---
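For context, a minimal sketch of the stock Django configuration that produces the behaviour described above (the setting names and finder classes are Django's defaults; the project's actual values may differ): `AppDirectoriesFinder` is what sweeps every app-level `static/` directory (`.jsx` files included) into the collected output, alongside the project-wide directory listed in `STATICFILES_DIRS`.

```python
# settings.py sketch, default static-files finder setup shown for illustration only
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

STATICFILES_FINDERS = [
    "django.contrib.staticfiles.finders.FileSystemFinder",      # project-wide static dir(s)
    "django.contrib.staticfiles.finders.AppDirectoriesFinder",  # every app's static/ dir
]
STATICFILES_DIRS = [os.path.join(BASE_DIR, "static")]
```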
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/dashboard/blueprints.py`
Content:
```
1 from collections import namedtuple
2
3 from django.utils.translation import ugettext_lazy as _
4
5 from apps.bplan import phases as bplan_phases
6 from apps.budgeting import phases as budgeting_phases
7 from apps.documents import phases as documents_phases
8 from apps.extprojects import phases as extprojects_phases
9 from apps.ideas import phases as ideas_phases
10 from apps.kiezkasse import phases as kiezkasse_phases
11 from apps.mapideas import phases as mapideas_phases
12 from apps.polls import phases as poll_phases
13 from apps.topicprio import phases as topicprio_phases
14
15 ProjectBlueprint = namedtuple(
16 'ProjectBlueprint', [
17 'title', 'description', 'content', 'image', 'settings_model'
18 ]
19 )
20
21 blueprints = [
22 ('brainstorming',
23 ProjectBlueprint(
24 title=_('Brainstorming'),
25 description=_(
26 'Collect first ideas for a specific topic and comment on them.'
27 ),
28 content=[
29 ideas_phases.CollectPhase(),
30 ],
31 image='images/blueprints/brainstorming.svg',
32 settings_model=None,
33 )),
34 ('map-brainstorming',
35 ProjectBlueprint(
36 title=_('Spatial Brainstorming'),
37 description=_(
38 'Collect location specific ideas for a topic and comment on them.'
39 ),
40 content=[
41 mapideas_phases.CollectPhase(),
42 ],
43 image='images/blueprints/map-brainstorming.svg',
44 settings_model=('a4maps', 'AreaSettings'),
45 )),
46 ('map-idea-collection',
47 ProjectBlueprint(
48 title=_('Spatial Idea Collection'),
49 description=_(
50 'Collect location specific ideas that can be rated and commented.'
51 ),
52 content=[
53 mapideas_phases.CollectPhase(),
54 mapideas_phases.RatingPhase()
55 ],
56 image='images/blueprints/map-idea-collection.svg',
57 settings_model=('a4maps', 'AreaSettings'),
58 )),
59 ('agenda-setting',
60 ProjectBlueprint(
61 title=_('Agenda Setting'),
62 description=_(
63 'With Agenda-Setting it’s possible to identify topics and to '
64 'define mission statements. Afterwards anyone can comment and '
65 'rate on different topics.'
66 ),
67 content=[
68 ideas_phases.CollectPhase(),
69 ideas_phases.RatingPhase(),
70 ],
71 image='images/blueprints/agenda-setting.svg',
72 settings_model=None,
73 )),
74 ('text-review',
75 ProjectBlueprint(
76 title=_('Text Review'),
77 description=_(
78 'In the text-review it’s possible to structure draft texts '
79 'that can be commented.'
80 ),
81 content=[
82 documents_phases.CommentPhase(),
83 ],
84 image='images/blueprints/text-review.svg',
85 settings_model=None,
86 )),
87 ('participatory-budgeting',
88 ProjectBlueprint(
89 title=_('Participatory budgeting'),
90 description=_(
91 'With participatory-budgeting it’s possible to make proposals '
92 'with budget specifications and locate them. Anyone can comment '
93 'and rate on different proposals.'
94 ),
95 content=[
96 budgeting_phases.RequestPhase()
97 ],
98 image='images/blueprints/participatory-budgeting.svg',
99 settings_model=('a4maps', 'AreaSettings'),
100 )),
101 ('external-project',
102 ProjectBlueprint(
103 title=_('External project'),
104 description=_(
105 'External projects are handled on a different platform.'
106 ),
107 content=[
108 extprojects_phases.ExternalPhase(),
109 ],
110 image='images/blueprints/external-project.svg',
111 settings_model=None,
112 )),
113 ('poll',
114 ProjectBlueprint(
115 title=_('Poll'),
116 description=_(
117 'Create a poll with multiple questions and possible answers. '
118 'Anyone can cast votes and comment on the poll.'
119 ),
120 content=[
121 poll_phases.VotingPhase(),
122 ],
123 image='images/blueprints/poll.svg',
124 settings_model=None,
125 )),
126 ('topic-prioritization',
127 ProjectBlueprint(
128 title=_('Topic Prioritization'),
129 description=_(
130 'Comment and prioritize topics.'
131 ),
132 content=[
133 topicprio_phases.PrioritizePhase(),
134 ],
135 image='images/blueprints/priorization.svg',
136 settings_model=None,
137 )),
138 ('bplan',
139 ProjectBlueprint(
140 title=_('Development Plan'),
141 description=_('Create a statement formular for development plans'
142 ' to be embedded on external sites.'),
143 content=[
144 bplan_phases.StatementPhase(),
145 ],
146 image='images/blueprints/bplan.svg',
147 settings_model=None,
148 )),
149 ('kiezkasse',
150 ProjectBlueprint(
151 title=_('Kiezkasse'),
152 description=_(
153 'With kiezkasse it’s possible to make proposals '
154 'with budget specifications and locate them. '
155 'The proposals can be commented and rated.'
156 ),
157 content=[
158 kiezkasse_phases.RequestFeedbackPhase(),
159 ],
160 image='images/blueprints/kiezkasse.svg',
161 settings_model=('a4maps', 'AreaSettings'),
162 )),
163 ]
164
165
166 class BlueprintMixin:
167 @property
168 def blueprint(self):
169 return dict(blueprints)[self.blueprint_key]
170
171 @property
172 def blueprint_key(self):
173 return self.kwargs['blueprint_slug']
174
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/dashboard/blueprints.py b/apps/dashboard/blueprints.py
--- a/apps/dashboard/blueprints.py
+++ b/apps/dashboard/blueprints.py
@@ -28,7 +28,7 @@
content=[
ideas_phases.CollectPhase(),
],
- image='images/blueprints/brainstorming.svg',
+ image='images/brainstorming.svg',
settings_model=None,
)),
('map-brainstorming',
@@ -40,7 +40,7 @@
content=[
mapideas_phases.CollectPhase(),
],
- image='images/blueprints/map-brainstorming.svg',
+ image='images/map-brainstorming.svg',
settings_model=('a4maps', 'AreaSettings'),
)),
('map-idea-collection',
@@ -53,7 +53,7 @@
mapideas_phases.CollectPhase(),
mapideas_phases.RatingPhase()
],
- image='images/blueprints/map-idea-collection.svg',
+ image='images/map-idea-collection.svg',
settings_model=('a4maps', 'AreaSettings'),
)),
('agenda-setting',
@@ -68,7 +68,7 @@
ideas_phases.CollectPhase(),
ideas_phases.RatingPhase(),
],
- image='images/blueprints/agenda-setting.svg',
+ image='images/agenda-setting.svg',
settings_model=None,
)),
('text-review',
@@ -81,7 +81,7 @@
content=[
documents_phases.CommentPhase(),
],
- image='images/blueprints/text-review.svg',
+ image='images/text-review.svg',
settings_model=None,
)),
('participatory-budgeting',
@@ -95,7 +95,7 @@
content=[
budgeting_phases.RequestPhase()
],
- image='images/blueprints/participatory-budgeting.svg',
+ image='images/participatory-budgeting.svg',
settings_model=('a4maps', 'AreaSettings'),
)),
('external-project',
@@ -107,7 +107,7 @@
content=[
extprojects_phases.ExternalPhase(),
],
- image='images/blueprints/external-project.svg',
+ image='images/external-project.svg',
settings_model=None,
)),
('poll',
@@ -120,7 +120,7 @@
content=[
poll_phases.VotingPhase(),
],
- image='images/blueprints/poll.svg',
+ image='images/poll.svg',
settings_model=None,
)),
('topic-prioritization',
@@ -132,7 +132,7 @@
content=[
topicprio_phases.PrioritizePhase(),
],
- image='images/blueprints/priorization.svg',
+ image='images/priorization.svg',
settings_model=None,
)),
('bplan',
@@ -143,7 +143,7 @@
content=[
bplan_phases.StatementPhase(),
],
- image='images/blueprints/bplan.svg',
+ image='images/bplan.svg',
settings_model=None,
)),
('kiezkasse',
@@ -157,7 +157,7 @@
content=[
kiezkasse_phases.RequestFeedbackPhase(),
],
- image='images/blueprints/kiezkasse.svg',
+ image='images/kiezkasse.svg',
settings_model=('a4maps', 'AreaSettings'),
)),
]
| {"golden_diff": "diff --git a/apps/dashboard/blueprints.py b/apps/dashboard/blueprints.py\n--- a/apps/dashboard/blueprints.py\n+++ b/apps/dashboard/blueprints.py\n@@ -28,7 +28,7 @@\n content=[\n ideas_phases.CollectPhase(),\n ],\n- image='images/blueprints/brainstorming.svg',\n+ image='images/brainstorming.svg',\n settings_model=None,\n )),\n ('map-brainstorming',\n@@ -40,7 +40,7 @@\n content=[\n mapideas_phases.CollectPhase(),\n ],\n- image='images/blueprints/map-brainstorming.svg',\n+ image='images/map-brainstorming.svg',\n settings_model=('a4maps', 'AreaSettings'),\n )),\n ('map-idea-collection',\n@@ -53,7 +53,7 @@\n mapideas_phases.CollectPhase(),\n mapideas_phases.RatingPhase()\n ],\n- image='images/blueprints/map-idea-collection.svg',\n+ image='images/map-idea-collection.svg',\n settings_model=('a4maps', 'AreaSettings'),\n )),\n ('agenda-setting',\n@@ -68,7 +68,7 @@\n ideas_phases.CollectPhase(),\n ideas_phases.RatingPhase(),\n ],\n- image='images/blueprints/agenda-setting.svg',\n+ image='images/agenda-setting.svg',\n settings_model=None,\n )),\n ('text-review',\n@@ -81,7 +81,7 @@\n content=[\n documents_phases.CommentPhase(),\n ],\n- image='images/blueprints/text-review.svg',\n+ image='images/text-review.svg',\n settings_model=None,\n )),\n ('participatory-budgeting',\n@@ -95,7 +95,7 @@\n content=[\n budgeting_phases.RequestPhase()\n ],\n- image='images/blueprints/participatory-budgeting.svg',\n+ image='images/participatory-budgeting.svg',\n settings_model=('a4maps', 'AreaSettings'),\n )),\n ('external-project',\n@@ -107,7 +107,7 @@\n content=[\n extprojects_phases.ExternalPhase(),\n ],\n- image='images/blueprints/external-project.svg',\n+ image='images/external-project.svg',\n settings_model=None,\n )),\n ('poll',\n@@ -120,7 +120,7 @@\n content=[\n poll_phases.VotingPhase(),\n ],\n- image='images/blueprints/poll.svg',\n+ image='images/poll.svg',\n settings_model=None,\n )),\n ('topic-prioritization',\n@@ -132,7 +132,7 @@\n content=[\n topicprio_phases.PrioritizePhase(),\n ],\n- image='images/blueprints/priorization.svg',\n+ image='images/priorization.svg',\n settings_model=None,\n )),\n ('bplan',\n@@ -143,7 +143,7 @@\n content=[\n bplan_phases.StatementPhase(),\n ],\n- image='images/blueprints/bplan.svg',\n+ image='images/bplan.svg',\n settings_model=None,\n )),\n ('kiezkasse',\n@@ -157,7 +157,7 @@\n content=[\n kiezkasse_phases.RequestFeedbackPhase(),\n ],\n- image='images/blueprints/kiezkasse.svg',\n+ image='images/kiezkasse.svg',\n settings_model=('a4maps', 'AreaSettings'),\n )),\n ]\n", "issue": "Static/Assets directories in apps\nDjango says to include app specific static files in a `/static` directory in the root of the particular app (https://docs.djangoproject.com/en/1.11/howto/static-files/). On `runserver` Django collects static files from the project wide static directory (mentioned in base.py) and the `/static` directories within all apps, grouping them into one global `/static` directory.\r\nNow, we don't need `.jsx` files in that global `/static` directory since they are required and put together into one file by webpack. 
Still, those files need a directory to be put into on app level \u2013 therefore sometimes there might be an asset directory instead of a static directory or the other way around.\r\nIt'd be great to have one directory and in the best case that would include not having `.jsx` files in that django-generated, global `/static` directory.\n", "before_files": [{"content": "from collections import namedtuple\n\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom apps.bplan import phases as bplan_phases\nfrom apps.budgeting import phases as budgeting_phases\nfrom apps.documents import phases as documents_phases\nfrom apps.extprojects import phases as extprojects_phases\nfrom apps.ideas import phases as ideas_phases\nfrom apps.kiezkasse import phases as kiezkasse_phases\nfrom apps.mapideas import phases as mapideas_phases\nfrom apps.polls import phases as poll_phases\nfrom apps.topicprio import phases as topicprio_phases\n\nProjectBlueprint = namedtuple(\n 'ProjectBlueprint', [\n 'title', 'description', 'content', 'image', 'settings_model'\n ]\n)\n\nblueprints = [\n ('brainstorming',\n ProjectBlueprint(\n title=_('Brainstorming'),\n description=_(\n 'Collect first ideas for a specific topic and comment on them.'\n ),\n content=[\n ideas_phases.CollectPhase(),\n ],\n image='images/blueprints/brainstorming.svg',\n settings_model=None,\n )),\n ('map-brainstorming',\n ProjectBlueprint(\n title=_('Spatial Brainstorming'),\n description=_(\n 'Collect location specific ideas for a topic and comment on them.'\n ),\n content=[\n mapideas_phases.CollectPhase(),\n ],\n image='images/blueprints/map-brainstorming.svg',\n settings_model=('a4maps', 'AreaSettings'),\n )),\n ('map-idea-collection',\n ProjectBlueprint(\n title=_('Spatial Idea Collection'),\n description=_(\n 'Collect location specific ideas that can be rated and commented.'\n ),\n content=[\n mapideas_phases.CollectPhase(),\n mapideas_phases.RatingPhase()\n ],\n image='images/blueprints/map-idea-collection.svg',\n settings_model=('a4maps', 'AreaSettings'),\n )),\n ('agenda-setting',\n ProjectBlueprint(\n title=_('Agenda Setting'),\n description=_(\n 'With Agenda-Setting it\u2019s possible to identify topics and to '\n 'define mission statements. Afterwards anyone can comment and '\n 'rate on different topics.'\n ),\n content=[\n ideas_phases.CollectPhase(),\n ideas_phases.RatingPhase(),\n ],\n image='images/blueprints/agenda-setting.svg',\n settings_model=None,\n )),\n ('text-review',\n ProjectBlueprint(\n title=_('Text Review'),\n description=_(\n 'In the text-review it\u2019s possible to structure draft texts '\n 'that can be commented.'\n ),\n content=[\n documents_phases.CommentPhase(),\n ],\n image='images/blueprints/text-review.svg',\n settings_model=None,\n )),\n ('participatory-budgeting',\n ProjectBlueprint(\n title=_('Participatory budgeting'),\n description=_(\n 'With participatory-budgeting it\u2019s possible to make proposals '\n 'with budget specifications and locate them. 
Anyone can comment '\n 'and rate on different proposals.'\n ),\n content=[\n budgeting_phases.RequestPhase()\n ],\n image='images/blueprints/participatory-budgeting.svg',\n settings_model=('a4maps', 'AreaSettings'),\n )),\n ('external-project',\n ProjectBlueprint(\n title=_('External project'),\n description=_(\n 'External projects are handled on a different platform.'\n ),\n content=[\n extprojects_phases.ExternalPhase(),\n ],\n image='images/blueprints/external-project.svg',\n settings_model=None,\n )),\n ('poll',\n ProjectBlueprint(\n title=_('Poll'),\n description=_(\n 'Create a poll with multiple questions and possible answers. '\n 'Anyone can cast votes and comment on the poll.'\n ),\n content=[\n poll_phases.VotingPhase(),\n ],\n image='images/blueprints/poll.svg',\n settings_model=None,\n )),\n ('topic-prioritization',\n ProjectBlueprint(\n title=_('Topic Prioritization'),\n description=_(\n 'Comment and prioritize topics.'\n ),\n content=[\n topicprio_phases.PrioritizePhase(),\n ],\n image='images/blueprints/priorization.svg',\n settings_model=None,\n )),\n ('bplan',\n ProjectBlueprint(\n title=_('Development Plan'),\n description=_('Create a statement formular for development plans'\n ' to be embedded on external sites.'),\n content=[\n bplan_phases.StatementPhase(),\n ],\n image='images/blueprints/bplan.svg',\n settings_model=None,\n )),\n ('kiezkasse',\n ProjectBlueprint(\n title=_('Kiezkasse'),\n description=_(\n 'With kiezkasse it\u2019s possible to make proposals '\n 'with budget specifications and locate them. '\n 'The proposals can be commented and rated.'\n ),\n content=[\n kiezkasse_phases.RequestFeedbackPhase(),\n ],\n image='images/blueprints/kiezkasse.svg',\n settings_model=('a4maps', 'AreaSettings'),\n )),\n]\n\n\nclass BlueprintMixin:\n @property\n def blueprint(self):\n return dict(blueprints)[self.blueprint_key]\n\n @property\n def blueprint_key(self):\n return self.kwargs['blueprint_slug']\n", "path": "apps/dashboard/blueprints.py"}], "after_files": [{"content": "from collections import namedtuple\n\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom apps.bplan import phases as bplan_phases\nfrom apps.budgeting import phases as budgeting_phases\nfrom apps.documents import phases as documents_phases\nfrom apps.extprojects import phases as extprojects_phases\nfrom apps.ideas import phases as ideas_phases\nfrom apps.kiezkasse import phases as kiezkasse_phases\nfrom apps.mapideas import phases as mapideas_phases\nfrom apps.polls import phases as poll_phases\nfrom apps.topicprio import phases as topicprio_phases\n\nProjectBlueprint = namedtuple(\n 'ProjectBlueprint', [\n 'title', 'description', 'content', 'image', 'settings_model'\n ]\n)\n\nblueprints = [\n ('brainstorming',\n ProjectBlueprint(\n title=_('Brainstorming'),\n description=_(\n 'Collect first ideas for a specific topic and comment on them.'\n ),\n content=[\n ideas_phases.CollectPhase(),\n ],\n image='images/brainstorming.svg',\n settings_model=None,\n )),\n ('map-brainstorming',\n ProjectBlueprint(\n title=_('Spatial Brainstorming'),\n description=_(\n 'Collect location specific ideas for a topic and comment on them.'\n ),\n content=[\n mapideas_phases.CollectPhase(),\n ],\n image='images/map-brainstorming.svg',\n settings_model=('a4maps', 'AreaSettings'),\n )),\n ('map-idea-collection',\n ProjectBlueprint(\n title=_('Spatial Idea Collection'),\n description=_(\n 'Collect location specific ideas that can be rated and commented.'\n ),\n content=[\n mapideas_phases.CollectPhase(),\n 
mapideas_phases.RatingPhase()\n ],\n image='images/map-idea-collection.svg',\n settings_model=('a4maps', 'AreaSettings'),\n )),\n ('agenda-setting',\n ProjectBlueprint(\n title=_('Agenda Setting'),\n description=_(\n 'With Agenda-Setting it\u2019s possible to identify topics and to '\n 'define mission statements. Afterwards anyone can comment and '\n 'rate on different topics.'\n ),\n content=[\n ideas_phases.CollectPhase(),\n ideas_phases.RatingPhase(),\n ],\n image='images/agenda-setting.svg',\n settings_model=None,\n )),\n ('text-review',\n ProjectBlueprint(\n title=_('Text Review'),\n description=_(\n 'In the text-review it\u2019s possible to structure draft texts '\n 'that can be commented.'\n ),\n content=[\n documents_phases.CommentPhase(),\n ],\n image='images/text-review.svg',\n settings_model=None,\n )),\n ('participatory-budgeting',\n ProjectBlueprint(\n title=_('Participatory budgeting'),\n description=_(\n 'With participatory-budgeting it\u2019s possible to make proposals '\n 'with budget specifications and locate them. Anyone can comment '\n 'and rate on different proposals.'\n ),\n content=[\n budgeting_phases.RequestPhase()\n ],\n image='images/participatory-budgeting.svg',\n settings_model=('a4maps', 'AreaSettings'),\n )),\n ('external-project',\n ProjectBlueprint(\n title=_('External project'),\n description=_(\n 'External projects are handled on a different platform.'\n ),\n content=[\n extprojects_phases.ExternalPhase(),\n ],\n image='images/external-project.svg',\n settings_model=None,\n )),\n ('poll',\n ProjectBlueprint(\n title=_('Poll'),\n description=_(\n 'Create a poll with multiple questions and possible answers. '\n 'Anyone can cast votes and comment on the poll.'\n ),\n content=[\n poll_phases.VotingPhase(),\n ],\n image='images/poll.svg',\n settings_model=None,\n )),\n ('topic-prioritization',\n ProjectBlueprint(\n title=_('Topic Prioritization'),\n description=_(\n 'Comment and prioritize topics.'\n ),\n content=[\n topicprio_phases.PrioritizePhase(),\n ],\n image='images/priorization.svg',\n settings_model=None,\n )),\n ('bplan',\n ProjectBlueprint(\n title=_('Development Plan'),\n description=_('Create a statement formular for development plans'\n ' to be embedded on external sites.'),\n content=[\n bplan_phases.StatementPhase(),\n ],\n image='images/bplan.svg',\n settings_model=None,\n )),\n ('kiezkasse',\n ProjectBlueprint(\n title=_('Kiezkasse'),\n description=_(\n 'With kiezkasse it\u2019s possible to make proposals '\n 'with budget specifications and locate them. '\n 'The proposals can be commented and rated.'\n ),\n content=[\n kiezkasse_phases.RequestFeedbackPhase(),\n ],\n image='images/kiezkasse.svg',\n settings_model=('a4maps', 'AreaSettings'),\n )),\n]\n\n\nclass BlueprintMixin:\n @property\n def blueprint(self):\n return dict(blueprints)[self.blueprint_key]\n\n @property\n def blueprint_key(self):\n return self.kwargs['blueprint_slug']\n", "path": "apps/dashboard/blueprints.py"}]} | 1,946 | 753 |
gh_patches_debug_9626 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-1545 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Jaeger Propagator not handling missing carrier
Python 3.8 / Flask injector 0.17b0
**Steps to reproduce**
If Jaeger is the text map propagator and you do not send a carrier header, the parser fails when it tries to parse the missing header.
```python
if context is None:
context = get_current()
fields = _extract_first_element(
getter.get(carrier, self.TRACE_ID_KEY)
).split(":")
```
**What is the expected behavior?**
Parse as invalid span. Code continues.
**What is the actual behavior?**
```
File "/usr/local/lib/python3.8/site-packages/opentelemetry/propagators/jaeger/__init__.py", line 48, in extract
fields = _extract_first_element(
AttributeError: 'NoneType' object has no attribute 'split'
```
**Additional context**
TraceContextTextMapPropagator does this:
```python
header = getter.get(carrier, self._TRACEPARENT_HEADER_NAME)
if not header:
return trace.set_span_in_context(trace.INVALID_SPAN, context)
```
--- END ISSUE ---
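A small self-contained sketch of the guard the issue asks for (the helper name and sample values are made up for illustration): an absent or empty header should short-circuit before `.split()` is reached, mirroring what the `TraceContextTextMapPropagator` snippet above does.

```python
from typing import List, Optional


def split_uber_trace_id(header: Optional[str]) -> Optional[List[str]]:
    """Return the colon-separated fields, or None when the header is missing."""
    if not header:
        return None  # the propagator would map this to trace.INVALID_SPAN
    return header.split(":")


assert split_uber_trace_id(None) is None
assert split_uber_trace_id("") is None
assert split_uber_trace_id("463ac35c9f6413ad:a2fb4a1d1a96d312:0:1") == [
    "463ac35c9f6413ad", "a2fb4a1d1a96d312", "0", "1"
]
```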
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `propagator/opentelemetry-propagator-jaeger/src/opentelemetry/propagators/jaeger/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import typing
16 import urllib.parse
17
18 import opentelemetry.trace as trace
19 from opentelemetry import baggage
20 from opentelemetry.context import Context, get_current
21 from opentelemetry.trace.propagation.textmap import (
22 Getter,
23 Setter,
24 TextMapPropagator,
25 TextMapPropagatorT,
26 )
27
28
29 class JaegerPropagator(TextMapPropagator):
30 """Propagator for the Jaeger format.
31
32 See: https://www.jaegertracing.io/docs/1.19/client-libraries/#propagation-format
33 """
34
35 TRACE_ID_KEY = "uber-trace-id"
36 BAGGAGE_PREFIX = "uberctx-"
37 DEBUG_FLAG = 0x02
38
39 def extract(
40 self,
41 getter: Getter[TextMapPropagatorT],
42 carrier: TextMapPropagatorT,
43 context: typing.Optional[Context] = None,
44 ) -> Context:
45
46 if context is None:
47 context = get_current()
48 fields = _extract_first_element(
49 getter.get(carrier, self.TRACE_ID_KEY)
50 ).split(":")
51
52 context = self._extract_baggage(getter, carrier, context)
53 if len(fields) != 4:
54 return trace.set_span_in_context(trace.INVALID_SPAN, context)
55
56 trace_id, span_id, _parent_id, flags = fields
57 if (
58 trace_id == trace.INVALID_TRACE_ID
59 or span_id == trace.INVALID_SPAN_ID
60 ):
61 return trace.set_span_in_context(trace.INVALID_SPAN, context)
62
63 span = trace.DefaultSpan(
64 trace.SpanContext(
65 trace_id=int(trace_id, 16),
66 span_id=int(span_id, 16),
67 is_remote=True,
68 trace_flags=trace.TraceFlags(
69 int(flags, 16) & trace.TraceFlags.SAMPLED
70 ),
71 )
72 )
73 return trace.set_span_in_context(span, context)
74
75 def inject(
76 self,
77 set_in_carrier: Setter[TextMapPropagatorT],
78 carrier: TextMapPropagatorT,
79 context: typing.Optional[Context] = None,
80 ) -> None:
81 span = trace.get_current_span(context=context)
82 span_context = span.get_span_context()
83 if span_context == trace.INVALID_SPAN_CONTEXT:
84 return
85
86 span_parent_id = span.parent.span_id if span.parent else 0
87 trace_flags = span_context.trace_flags
88 if trace_flags.sampled:
89 trace_flags |= self.DEBUG_FLAG
90
91 # set span identity
92 set_in_carrier(
93 carrier,
94 self.TRACE_ID_KEY,
95 _format_uber_trace_id(
96 span_context.trace_id,
97 span_context.span_id,
98 span_parent_id,
99 trace_flags,
100 ),
101 )
102
103 # set span baggage, if any
104 baggage_entries = baggage.get_all(context=context)
105 if not baggage_entries:
106 return
107 for key, value in baggage_entries.items():
108 baggage_key = self.BAGGAGE_PREFIX + key
109 set_in_carrier(
110 carrier, baggage_key, urllib.parse.quote(str(value))
111 )
112
113 @property
114 def fields(self) -> typing.Set[str]:
115 return {self.TRACE_ID_KEY}
116
117 def _extract_baggage(self, getter, carrier, context):
118 baggage_keys = [
119 key
120 for key in getter.keys(carrier)
121 if key.startswith(self.BAGGAGE_PREFIX)
122 ]
123 for key in baggage_keys:
124 value = _extract_first_element(getter.get(carrier, key))
125 context = baggage.set_baggage(
126 key.replace(self.BAGGAGE_PREFIX, ""),
127 urllib.parse.unquote(value).strip(),
128 context=context,
129 )
130 return context
131
132
133 def _format_uber_trace_id(trace_id, span_id, parent_span_id, flags):
134 return "{:032x}:{:016x}:{:016x}:{:02x}".format(
135 trace_id, span_id, parent_span_id, flags
136 )
137
138
139 def _extract_first_element(
140 items: typing.Iterable[TextMapPropagatorT],
141 ) -> typing.Optional[TextMapPropagatorT]:
142 if items is None:
143 return None
144 return next(iter(items), None)
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/propagator/opentelemetry-propagator-jaeger/src/opentelemetry/propagators/jaeger/__init__.py b/propagator/opentelemetry-propagator-jaeger/src/opentelemetry/propagators/jaeger/__init__.py
--- a/propagator/opentelemetry-propagator-jaeger/src/opentelemetry/propagators/jaeger/__init__.py
+++ b/propagator/opentelemetry-propagator-jaeger/src/opentelemetry/propagators/jaeger/__init__.py
@@ -45,9 +45,10 @@
if context is None:
context = get_current()
- fields = _extract_first_element(
- getter.get(carrier, self.TRACE_ID_KEY)
- ).split(":")
+ header = getter.get(carrier, self.TRACE_ID_KEY)
+ if not header:
+ return trace.set_span_in_context(trace.INVALID_SPAN, context)
+ fields = _extract_first_element(header).split(":")
context = self._extract_baggage(getter, carrier, context)
if len(fields) != 4:
| {"golden_diff": "diff --git a/propagator/opentelemetry-propagator-jaeger/src/opentelemetry/propagators/jaeger/__init__.py b/propagator/opentelemetry-propagator-jaeger/src/opentelemetry/propagators/jaeger/__init__.py\n--- a/propagator/opentelemetry-propagator-jaeger/src/opentelemetry/propagators/jaeger/__init__.py\n+++ b/propagator/opentelemetry-propagator-jaeger/src/opentelemetry/propagators/jaeger/__init__.py\n@@ -45,9 +45,10 @@\n \n if context is None:\n context = get_current()\n- fields = _extract_first_element(\n- getter.get(carrier, self.TRACE_ID_KEY)\n- ).split(\":\")\n+ header = getter.get(carrier, self.TRACE_ID_KEY)\n+ if not header:\n+ return trace.set_span_in_context(trace.INVALID_SPAN, context)\n+ fields = _extract_first_element(header).split(\":\")\n \n context = self._extract_baggage(getter, carrier, context)\n if len(fields) != 4:\n", "issue": "Jaeger Propagator not handling missing carrier\nPython 3.8 / Flask injector 0.17b0\r\n\r\n\r\n**Steps to reproduce**\r\nIf jaeger is the text map and you do not send a carrier header then the parser fails when it tries to parse the carrier header.\r\n\r\n```python\r\n if context is None:\r\n context = get_current()\r\n fields = _extract_first_element(\r\n getter.get(carrier, self.TRACE_ID_KEY)\r\n ).split(\":\")\r\n```\r\n\r\n**What is the expected **\r\nParse as invalid span. Code continues.\r\n\r\n**What is the actual behavior?**\r\n```\r\nFile \"/usr/local/lib/python3.8/site-packages/opentelemetry/propagators/jaeger/__init__.py\", line 48, in extract\r\n fields = _extract_first_element(\r\nAttributeError: 'NoneType' object has no attribute \u2018split\u2019\r\n```\r\n\r\n**Additional context**\r\nTraceContextTextMapPropagator does this:\r\n\r\n```python\r\n header = getter.get(carrier, self._TRACEPARENT_HEADER_NAME)\r\n\r\n if not header:\r\n return trace.set_span_in_context(trace.INVALID_SPAN, context)\r\n```\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport typing\nimport urllib.parse\n\nimport opentelemetry.trace as trace\nfrom opentelemetry import baggage\nfrom opentelemetry.context import Context, get_current\nfrom opentelemetry.trace.propagation.textmap import (\n Getter,\n Setter,\n TextMapPropagator,\n TextMapPropagatorT,\n)\n\n\nclass JaegerPropagator(TextMapPropagator):\n \"\"\"Propagator for the Jaeger format.\n\n See: https://www.jaegertracing.io/docs/1.19/client-libraries/#propagation-format\n \"\"\"\n\n TRACE_ID_KEY = \"uber-trace-id\"\n BAGGAGE_PREFIX = \"uberctx-\"\n DEBUG_FLAG = 0x02\n\n def extract(\n self,\n getter: Getter[TextMapPropagatorT],\n carrier: TextMapPropagatorT,\n context: typing.Optional[Context] = None,\n ) -> Context:\n\n if context is None:\n context = get_current()\n fields = _extract_first_element(\n getter.get(carrier, self.TRACE_ID_KEY)\n ).split(\":\")\n\n context = self._extract_baggage(getter, carrier, context)\n if len(fields) != 4:\n return trace.set_span_in_context(trace.INVALID_SPAN, context)\n\n trace_id, 
span_id, _parent_id, flags = fields\n if (\n trace_id == trace.INVALID_TRACE_ID\n or span_id == trace.INVALID_SPAN_ID\n ):\n return trace.set_span_in_context(trace.INVALID_SPAN, context)\n\n span = trace.DefaultSpan(\n trace.SpanContext(\n trace_id=int(trace_id, 16),\n span_id=int(span_id, 16),\n is_remote=True,\n trace_flags=trace.TraceFlags(\n int(flags, 16) & trace.TraceFlags.SAMPLED\n ),\n )\n )\n return trace.set_span_in_context(span, context)\n\n def inject(\n self,\n set_in_carrier: Setter[TextMapPropagatorT],\n carrier: TextMapPropagatorT,\n context: typing.Optional[Context] = None,\n ) -> None:\n span = trace.get_current_span(context=context)\n span_context = span.get_span_context()\n if span_context == trace.INVALID_SPAN_CONTEXT:\n return\n\n span_parent_id = span.parent.span_id if span.parent else 0\n trace_flags = span_context.trace_flags\n if trace_flags.sampled:\n trace_flags |= self.DEBUG_FLAG\n\n # set span identity\n set_in_carrier(\n carrier,\n self.TRACE_ID_KEY,\n _format_uber_trace_id(\n span_context.trace_id,\n span_context.span_id,\n span_parent_id,\n trace_flags,\n ),\n )\n\n # set span baggage, if any\n baggage_entries = baggage.get_all(context=context)\n if not baggage_entries:\n return\n for key, value in baggage_entries.items():\n baggage_key = self.BAGGAGE_PREFIX + key\n set_in_carrier(\n carrier, baggage_key, urllib.parse.quote(str(value))\n )\n\n @property\n def fields(self) -> typing.Set[str]:\n return {self.TRACE_ID_KEY}\n\n def _extract_baggage(self, getter, carrier, context):\n baggage_keys = [\n key\n for key in getter.keys(carrier)\n if key.startswith(self.BAGGAGE_PREFIX)\n ]\n for key in baggage_keys:\n value = _extract_first_element(getter.get(carrier, key))\n context = baggage.set_baggage(\n key.replace(self.BAGGAGE_PREFIX, \"\"),\n urllib.parse.unquote(value).strip(),\n context=context,\n )\n return context\n\n\ndef _format_uber_trace_id(trace_id, span_id, parent_span_id, flags):\n return \"{:032x}:{:016x}:{:016x}:{:02x}\".format(\n trace_id, span_id, parent_span_id, flags\n )\n\n\ndef _extract_first_element(\n items: typing.Iterable[TextMapPropagatorT],\n) -> typing.Optional[TextMapPropagatorT]:\n if items is None:\n return None\n return next(iter(items), None)\n", "path": "propagator/opentelemetry-propagator-jaeger/src/opentelemetry/propagators/jaeger/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport typing\nimport urllib.parse\n\nimport opentelemetry.trace as trace\nfrom opentelemetry import baggage\nfrom opentelemetry.context import Context, get_current\nfrom opentelemetry.trace.propagation.textmap import (\n Getter,\n Setter,\n TextMapPropagator,\n TextMapPropagatorT,\n)\n\n\nclass JaegerPropagator(TextMapPropagator):\n \"\"\"Propagator for the Jaeger format.\n\n See: https://www.jaegertracing.io/docs/1.19/client-libraries/#propagation-format\n \"\"\"\n\n TRACE_ID_KEY = \"uber-trace-id\"\n BAGGAGE_PREFIX = \"uberctx-\"\n DEBUG_FLAG = 0x02\n\n def 
extract(\n self,\n getter: Getter[TextMapPropagatorT],\n carrier: TextMapPropagatorT,\n context: typing.Optional[Context] = None,\n ) -> Context:\n\n if context is None:\n context = get_current()\n header = getter.get(carrier, self.TRACE_ID_KEY)\n if not header:\n return trace.set_span_in_context(trace.INVALID_SPAN, context)\n fields = _extract_first_element(header).split(\":\")\n\n context = self._extract_baggage(getter, carrier, context)\n if len(fields) != 4:\n return trace.set_span_in_context(trace.INVALID_SPAN, context)\n\n trace_id, span_id, _parent_id, flags = fields\n if (\n trace_id == trace.INVALID_TRACE_ID\n or span_id == trace.INVALID_SPAN_ID\n ):\n return trace.set_span_in_context(trace.INVALID_SPAN, context)\n\n span = trace.DefaultSpan(\n trace.SpanContext(\n trace_id=int(trace_id, 16),\n span_id=int(span_id, 16),\n is_remote=True,\n trace_flags=trace.TraceFlags(\n int(flags, 16) & trace.TraceFlags.SAMPLED\n ),\n )\n )\n return trace.set_span_in_context(span, context)\n\n def inject(\n self,\n set_in_carrier: Setter[TextMapPropagatorT],\n carrier: TextMapPropagatorT,\n context: typing.Optional[Context] = None,\n ) -> None:\n span = trace.get_current_span(context=context)\n span_context = span.get_span_context()\n if span_context == trace.INVALID_SPAN_CONTEXT:\n return\n\n span_parent_id = span.parent.span_id if span.parent else 0\n trace_flags = span_context.trace_flags\n if trace_flags.sampled:\n trace_flags |= self.DEBUG_FLAG\n\n # set span identity\n set_in_carrier(\n carrier,\n self.TRACE_ID_KEY,\n _format_uber_trace_id(\n span_context.trace_id,\n span_context.span_id,\n span_parent_id,\n trace_flags,\n ),\n )\n\n # set span baggage, if any\n baggage_entries = baggage.get_all(context=context)\n if not baggage_entries:\n return\n for key, value in baggage_entries.items():\n baggage_key = self.BAGGAGE_PREFIX + key\n set_in_carrier(\n carrier, baggage_key, urllib.parse.quote(str(value))\n )\n\n @property\n def fields(self) -> typing.Set[str]:\n return {self.TRACE_ID_KEY}\n\n def _extract_baggage(self, getter, carrier, context):\n baggage_keys = [\n key\n for key in getter.keys(carrier)\n if key.startswith(self.BAGGAGE_PREFIX)\n ]\n for key in baggage_keys:\n value = _extract_first_element(getter.get(carrier, key))\n context = baggage.set_baggage(\n key.replace(self.BAGGAGE_PREFIX, \"\"),\n urllib.parse.unquote(value).strip(),\n context=context,\n )\n return context\n\n\ndef _format_uber_trace_id(trace_id, span_id, parent_span_id, flags):\n return \"{:032x}:{:016x}:{:016x}:{:02x}\".format(\n trace_id, span_id, parent_span_id, flags\n )\n\n\ndef _extract_first_element(\n items: typing.Iterable[TextMapPropagatorT],\n) -> typing.Optional[TextMapPropagatorT]:\n if items is None:\n return None\n return next(iter(items), None)\n", "path": "propagator/opentelemetry-propagator-jaeger/src/opentelemetry/propagators/jaeger/__init__.py"}]} | 1,885 | 252 |
gh_patches_debug_9572 | rasdani/github-patches | git_diff | ESMCI__cime-1428 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Clean up python in unit_testing
Clean up code in unit_testing/python
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/lib/CIME/code_checker.py`
Content:
```
1 """
2 Libraries for checking python code with pylint
3 """
4
5 from CIME.XML.standard_module_setup import *
6
7 from CIME.utils import run_cmd, run_cmd_no_fail, expect, get_cime_root, is_python_executable
8
9 from multiprocessing.dummy import Pool as ThreadPool
10 from distutils.spawn import find_executable
11
12 logger = logging.getLogger(__name__)
13
14 ###############################################################################
15 def _run_pylint(on_file, interactive):
16 ###############################################################################
17 pylint = find_executable("pylint")
18
19 cmd_options = " --disable=I,C,R,logging-not-lazy,wildcard-import,unused-wildcard-import,fixme,broad-except,bare-except,eval-used,exec-used,global-statement"
20 cimeroot = get_cime_root()
21
22 if "scripts/Tools" in on_file:
23 cmd_options +=",relative-import"
24
25 # add init-hook option
26 cmd_options += " --init-hook='sys.path.extend((\"%s\",\"%s\",\"%s\"))'"%\
27 (os.path.join(cimeroot,"scripts","lib"),
28 os.path.join(cimeroot,"scripts","Tools"),
29 os.path.join(cimeroot,"tools","unit_testing","python"))
30
31 cmd = "%s %s %s" % (pylint, cmd_options, on_file)
32 logger.debug("pylint command is %s"%cmd)
33 stat, out, err = run_cmd(cmd, verbose=False, from_dir=cimeroot)
34 if stat != 0:
35 if interactive:
36 logger.info("File %s has pylint problems, please fix\n Use command: %s" % (on_file, cmd))
37 logger.info(out + "\n" + err)
38 return (on_file, out + "\n" + err)
39 else:
40 if interactive:
41 logger.info("File %s has no pylint problems" % on_file)
42 return (on_file, "")
43
44 ###############################################################################
45 def _matches(file_path, file_ends):
46 ###############################################################################
47 for file_end in file_ends:
48 if file_path.endswith(file_end):
49 return True
50
51 return False
52
53 ###############################################################################
54 def _should_pylint_skip(filepath):
55 ###############################################################################
56 # TODO - get rid of this
57 list_of_directories_to_ignore = ("xmlconvertors", "pointclm", "point_clm", "tools", "machines", "apidocs")
58 for dir_to_skip in list_of_directories_to_ignore:
59 if dir_to_skip in filepath:
60 return True
61
62 return False
63
64 ###############################################################################
65 def get_all_checkable_files():
66 ###############################################################################
67 cimeroot = get_cime_root()
68 all_git_files = run_cmd_no_fail("git ls-files", from_dir=cimeroot, verbose=False).splitlines()
69
70 files_to_test = [item for item in all_git_files
71 if ((item.endswith(".py") or is_python_executable(os.path.join(cimeroot, item))) and not _should_pylint_skip(item))]
72 return files_to_test
73
74 ###############################################################################
75 def check_code(files, num_procs=10, interactive=False):
76 ###############################################################################
77 """
78 Check all python files in the given directory
79
80 Returns True if all files had no problems
81 """
82 # Get list of files to check, we look to see if user-provided file argument
83 # is a valid file, if not, we search the repo for a file with similar name.
84 repo_files = run_cmd_no_fail('git ls-files --full-name %s' % get_cime_root(), verbose=False).splitlines()
85 files_to_check = []
86 if files:
87 for filearg in files:
88 if os.path.exists(filearg):
89 files_to_check.append(os.path.abspath(filearg))
90 else:
91 found = False
92 for repo_file in repo_files:
93 if repo_file.endswith(filearg):
94 found = True
95 files_to_check.append(repo_file) # could have multiple matches
96
97 if not found:
98 logger.warning("Could not find file matching argument '%s'" % filearg)
99 else:
100 # Check every python file
101 files_to_check = get_all_checkable_files()
102
103 expect(len(files_to_check) > 0, "No matching files found")
104
105 # No point in using more threads than files
106 if len(files_to_check) < num_procs:
107 num_procs = len(files_to_check)
108
109 pool = ThreadPool(num_procs)
110 results = pool.map(lambda x : _run_pylint(x, interactive), files_to_check)
111 pool.close()
112 pool.join()
113 return dict(results)
114
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/lib/CIME/code_checker.py b/scripts/lib/CIME/code_checker.py
--- a/scripts/lib/CIME/code_checker.py
+++ b/scripts/lib/CIME/code_checker.py
@@ -26,7 +26,7 @@
cmd_options += " --init-hook='sys.path.extend((\"%s\",\"%s\",\"%s\"))'"%\
(os.path.join(cimeroot,"scripts","lib"),
os.path.join(cimeroot,"scripts","Tools"),
- os.path.join(cimeroot,"tools","unit_testing","python"))
+ os.path.join(cimeroot,"scripts","fortran_unit_testing","python"))
cmd = "%s %s %s" % (pylint, cmd_options, on_file)
logger.debug("pylint command is %s"%cmd)
| {"golden_diff": "diff --git a/scripts/lib/CIME/code_checker.py b/scripts/lib/CIME/code_checker.py\n--- a/scripts/lib/CIME/code_checker.py\n+++ b/scripts/lib/CIME/code_checker.py\n@@ -26,7 +26,7 @@\n cmd_options += \" --init-hook='sys.path.extend((\\\"%s\\\",\\\"%s\\\",\\\"%s\\\"))'\"%\\\n (os.path.join(cimeroot,\"scripts\",\"lib\"),\n os.path.join(cimeroot,\"scripts\",\"Tools\"),\n- os.path.join(cimeroot,\"tools\",\"unit_testing\",\"python\"))\n+ os.path.join(cimeroot,\"scripts\",\"fortran_unit_testing\",\"python\"))\n \n cmd = \"%s %s %s\" % (pylint, cmd_options, on_file)\n logger.debug(\"pylint command is %s\"%cmd)\n", "issue": "Clean up python in unit_testing\nClean up code in unit_testing/python\n", "before_files": [{"content": "\"\"\"\nLibraries for checking python code with pylint\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\n\nfrom CIME.utils import run_cmd, run_cmd_no_fail, expect, get_cime_root, is_python_executable\n\nfrom multiprocessing.dummy import Pool as ThreadPool\nfrom distutils.spawn import find_executable\n\nlogger = logging.getLogger(__name__)\n\n###############################################################################\ndef _run_pylint(on_file, interactive):\n###############################################################################\n pylint = find_executable(\"pylint\")\n\n cmd_options = \" --disable=I,C,R,logging-not-lazy,wildcard-import,unused-wildcard-import,fixme,broad-except,bare-except,eval-used,exec-used,global-statement\"\n cimeroot = get_cime_root()\n\n if \"scripts/Tools\" in on_file:\n cmd_options +=\",relative-import\"\n\n # add init-hook option\n cmd_options += \" --init-hook='sys.path.extend((\\\"%s\\\",\\\"%s\\\",\\\"%s\\\"))'\"%\\\n (os.path.join(cimeroot,\"scripts\",\"lib\"),\n os.path.join(cimeroot,\"scripts\",\"Tools\"),\n os.path.join(cimeroot,\"tools\",\"unit_testing\",\"python\"))\n\n cmd = \"%s %s %s\" % (pylint, cmd_options, on_file)\n logger.debug(\"pylint command is %s\"%cmd)\n stat, out, err = run_cmd(cmd, verbose=False, from_dir=cimeroot)\n if stat != 0:\n if interactive:\n logger.info(\"File %s has pylint problems, please fix\\n Use command: %s\" % (on_file, cmd))\n logger.info(out + \"\\n\" + err)\n return (on_file, out + \"\\n\" + err)\n else:\n if interactive:\n logger.info(\"File %s has no pylint problems\" % on_file)\n return (on_file, \"\")\n\n###############################################################################\ndef _matches(file_path, file_ends):\n###############################################################################\n for file_end in file_ends:\n if file_path.endswith(file_end):\n return True\n\n return False\n\n###############################################################################\ndef _should_pylint_skip(filepath):\n###############################################################################\n # TODO - get rid of this\n list_of_directories_to_ignore = (\"xmlconvertors\", \"pointclm\", \"point_clm\", \"tools\", \"machines\", \"apidocs\")\n for dir_to_skip in list_of_directories_to_ignore:\n if dir_to_skip in filepath:\n return True\n\n return False\n\n###############################################################################\ndef get_all_checkable_files():\n###############################################################################\n cimeroot = get_cime_root()\n all_git_files = run_cmd_no_fail(\"git ls-files\", from_dir=cimeroot, verbose=False).splitlines()\n\n files_to_test = [item for item in all_git_files\n if ((item.endswith(\".py\") or 
is_python_executable(os.path.join(cimeroot, item))) and not _should_pylint_skip(item))]\n return files_to_test\n\n###############################################################################\ndef check_code(files, num_procs=10, interactive=False):\n###############################################################################\n \"\"\"\n Check all python files in the given directory\n\n Returns True if all files had no problems\n \"\"\"\n # Get list of files to check, we look to see if user-provided file argument\n # is a valid file, if not, we search the repo for a file with similar name.\n repo_files = run_cmd_no_fail('git ls-files --full-name %s' % get_cime_root(), verbose=False).splitlines()\n files_to_check = []\n if files:\n for filearg in files:\n if os.path.exists(filearg):\n files_to_check.append(os.path.abspath(filearg))\n else:\n found = False\n for repo_file in repo_files:\n if repo_file.endswith(filearg):\n found = True\n files_to_check.append(repo_file) # could have multiple matches\n\n if not found:\n logger.warning(\"Could not find file matching argument '%s'\" % filearg)\n else:\n # Check every python file\n files_to_check = get_all_checkable_files()\n\n expect(len(files_to_check) > 0, \"No matching files found\")\n\n # No point in using more threads than files\n if len(files_to_check) < num_procs:\n num_procs = len(files_to_check)\n\n pool = ThreadPool(num_procs)\n results = pool.map(lambda x : _run_pylint(x, interactive), files_to_check)\n pool.close()\n pool.join()\n return dict(results)\n", "path": "scripts/lib/CIME/code_checker.py"}], "after_files": [{"content": "\"\"\"\nLibraries for checking python code with pylint\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\n\nfrom CIME.utils import run_cmd, run_cmd_no_fail, expect, get_cime_root, is_python_executable\n\nfrom multiprocessing.dummy import Pool as ThreadPool\nfrom distutils.spawn import find_executable\n\nlogger = logging.getLogger(__name__)\n\n###############################################################################\ndef _run_pylint(on_file, interactive):\n###############################################################################\n pylint = find_executable(\"pylint\")\n\n cmd_options = \" --disable=I,C,R,logging-not-lazy,wildcard-import,unused-wildcard-import,fixme,broad-except,bare-except,eval-used,exec-used,global-statement\"\n cimeroot = get_cime_root()\n\n if \"scripts/Tools\" in on_file:\n cmd_options +=\",relative-import\"\n\n # add init-hook option\n cmd_options += \" --init-hook='sys.path.extend((\\\"%s\\\",\\\"%s\\\",\\\"%s\\\"))'\"%\\\n (os.path.join(cimeroot,\"scripts\",\"lib\"),\n os.path.join(cimeroot,\"scripts\",\"Tools\"),\n os.path.join(cimeroot,\"scripts\",\"fortran_unit_testing\",\"python\"))\n\n cmd = \"%s %s %s\" % (pylint, cmd_options, on_file)\n logger.debug(\"pylint command is %s\"%cmd)\n stat, out, err = run_cmd(cmd, verbose=False, from_dir=cimeroot)\n if stat != 0:\n if interactive:\n logger.info(\"File %s has pylint problems, please fix\\n Use command: %s\" % (on_file, cmd))\n logger.info(out + \"\\n\" + err)\n return (on_file, out + \"\\n\" + err)\n else:\n if interactive:\n logger.info(\"File %s has no pylint problems\" % on_file)\n return (on_file, \"\")\n\n###############################################################################\ndef _matches(file_path, file_ends):\n###############################################################################\n for file_end in file_ends:\n if file_path.endswith(file_end):\n return True\n\n return 
False\n\n###############################################################################\ndef _should_pylint_skip(filepath):\n###############################################################################\n # TODO - get rid of this\n list_of_directories_to_ignore = (\"xmlconvertors\", \"pointclm\", \"point_clm\", \"tools\", \"machines\", \"apidocs\")\n for dir_to_skip in list_of_directories_to_ignore:\n if dir_to_skip in filepath:\n return True\n\n return False\n\n###############################################################################\ndef get_all_checkable_files():\n###############################################################################\n cimeroot = get_cime_root()\n all_git_files = run_cmd_no_fail(\"git ls-files\", from_dir=cimeroot, verbose=False).splitlines()\n\n files_to_test = [item for item in all_git_files\n if ((item.endswith(\".py\") or is_python_executable(os.path.join(cimeroot, item))) and not _should_pylint_skip(item))]\n return files_to_test\n\n###############################################################################\ndef check_code(files, num_procs=10, interactive=False):\n###############################################################################\n \"\"\"\n Check all python files in the given directory\n\n Returns True if all files had no problems\n \"\"\"\n # Get list of files to check, we look to see if user-provided file argument\n # is a valid file, if not, we search the repo for a file with similar name.\n repo_files = run_cmd_no_fail('git ls-files --full-name %s' % get_cime_root(), verbose=False).splitlines()\n files_to_check = []\n if files:\n for filearg in files:\n if os.path.exists(filearg):\n files_to_check.append(os.path.abspath(filearg))\n else:\n found = False\n for repo_file in repo_files:\n if repo_file.endswith(filearg):\n found = True\n files_to_check.append(repo_file) # could have multiple matches\n\n if not found:\n logger.warning(\"Could not find file matching argument '%s'\" % filearg)\n else:\n # Check every python file\n files_to_check = get_all_checkable_files()\n\n expect(len(files_to_check) > 0, \"No matching files found\")\n\n # No point in using more threads than files\n if len(files_to_check) < num_procs:\n num_procs = len(files_to_check)\n\n pool = ThreadPool(num_procs)\n results = pool.map(lambda x : _run_pylint(x, interactive), files_to_check)\n pool.close()\n pool.join()\n return dict(results)\n", "path": "scripts/lib/CIME/code_checker.py"}]} | 1,469 | 173 |
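For the record that ends above: its golden diff only redirects the third `--init-hook` path passed to pylint. A minimal standalone sketch of how that corrected command string is assembled — `cimeroot`, `pylint`, and the target file below are placeholders and the `--disable` list is trimmed; the real code obtains them from `get_cime_root()` and `find_executable("pylint")`:

```python
import os

# Placeholder values, purely for illustration.
cimeroot = "/path/to/cime"
pylint = "pylint"
on_file = "scripts/lib/CIME/code_checker.py"

cmd_options = " --disable=I,C,R,logging-not-lazy,wildcard-import,unused-wildcard-import"
# The fix: the third init-hook entry now points at scripts/fortran_unit_testing/python
# instead of tools/unit_testing/python.
cmd_options += " --init-hook='sys.path.extend((\"%s\",\"%s\",\"%s\"))'" % (
    os.path.join(cimeroot, "scripts", "lib"),
    os.path.join(cimeroot, "scripts", "Tools"),
    os.path.join(cimeroot, "scripts", "fortran_unit_testing", "python"),
)

cmd = "%s %s %s" % (pylint, cmd_options, on_file)
print(cmd)  # the full pylint invocation run per file
```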
gh_patches_debug_452 | rasdani/github-patches | git_diff | Flexget__Flexget-2652 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dependencies on non pure Python modules
Hi,
Sorry for [x-posting](https://discuss.flexget.com/t/dependencies-on-non-pure-python-modules/5086). This is kind of a development issue so it might fit here better than in the forum. Feel free to just close this and reply on Discourse.
When upgrading Flexget I stumbled upon the fact that now it depends on Pillow, as per:
https://github.com/Flexget/Flexget/pull/2621
My Flexget runs on an embedded aarch64 device so every time you add a dep on a module that links against C libraries (Pillow depends on zlib and libjpeg) I have to cross compile the necessary native extensions, distribute them etc. Either that or manually installing a skimmed `requirements.txt` and finally installing Flexget passing `--no-dependencies` to `pip install`.
Does adding a tray icon justify this hassle for users? No saying that’s wrong of course, just trying to bring to your attention this use case and the potential annoyance for users.
Could perhaps these things be made optional at install/build time by relaxing the hard dependency?
Thanks.
Dependencies on non pure Python modules
Hi,
Sorry for [x-posting](https://discuss.flexget.com/t/dependencies-on-non-pure-python-modules/5086). This is kind of a development issue so it might fit here better than in the forum. Feel free to just close this and reply on Discourse.
When upgrading Flexget I stumbled upon the fact that now it depends on Pillow, as per:
https://github.com/Flexget/Flexget/pull/2621
My Flexget runs on an embedded aarch64 device so every time you add a dep on a module that links against C libraries (Pillow depends on zlib and libjpeg) I have to cross compile the necessary native extensions, distribute them etc. Either that or manually installing a skimmed `requirements.txt` and finally installing Flexget passing `--no-dependencies` to `pip install`.
Does adding a tray icon justify this hassle for users? No saying that’s wrong of course, just trying to bring to your attention this use case and the potential annoyance for users.
Could perhaps these things be made optional at install/build time by relaxing the hard dependency?
Thanks.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flexget/tray_icon.py`
Content:
```
1 import logging
2 import webbrowser
3 from functools import partial, wraps
4 from pathlib import Path
5 from typing import List, Optional
6
7 from loguru import logger
8
9 from flexget import __version__
10
11 logger = logger.bind(name='tray_icon')
12
13 try:
14 # If we are running outside of a graphical environment, these imports will fail
15 from PIL import Image
16 from pystray import Icon, Menu, MenuItem
17
18 _import_success = True
19 except Exception as e:
20 logger.debug('Could not import pystray: {}', e)
21 _import_success = False
22
23
24 def check_if_tray_is_active(f):
25 @wraps(f)
26 def wrapped(self, *args, **kwargs):
27 if not self.active:
28 return
29 return f(self, *args, **kwargs)
30
31 return wrapped
32
33
34 image_path = Path(__file__).parent / 'resources' / 'flexget.png'
35
36
37 class TrayIcon:
38 def __init__(self, path_to_image: Path = image_path):
39 # Silence PIL noisy logging
40 logging.getLogger('PIL.PngImagePlugin').setLevel(logging.INFO)
41 logging.getLogger('PIL.Image').setLevel(logging.INFO)
42
43 self.path_to_image: Path = path_to_image
44 self.icon: Optional['Icon'] = None
45 self._menu: Optional['Menu'] = None
46 self.menu_items: List['MenuItem'] = []
47
48 self.active: bool = _import_success
49 self.running: bool = False
50
51 self.add_core_menu_items()
52
53 @check_if_tray_is_active
54 def add_menu_item(
55 self,
56 text: str = None,
57 action: callable = None,
58 menu_item: 'MenuItem' = None,
59 index: int = None,
60 **kwargs,
61 ):
62 """
63 Add a menu item byt passing its text and function, or pass a created MenuItem. Force position by sending index
64 """
65 if not any(v for v in (menu_item, text)):
66 raise ValueError(f"Either 'text' or 'menu_item' are required")
67
68 menu_item = menu_item or MenuItem(text=text, action=action, **kwargs)
69 if index is not None:
70 self.menu_items.insert(index, menu_item)
71 else:
72 self.menu_items.append(menu_item)
73
74 @check_if_tray_is_active
75 def add_menu_separator(self, index: int = None):
76 self.add_menu_item(menu_item=Menu.SEPARATOR, index=index)
77
78 def add_core_menu_items(self):
79 open_web = partial(webbrowser.open)
80 self.add_menu_item(text=f'Flexget {__version__}', enabled=False)
81 self.add_menu_separator()
82 self.add_menu_item(text='Homepage', action=partial(open_web, 'https://flexget.com/'))
83 self.add_menu_item(text='Forum', action=partial(open_web, 'https://discuss.flexget.com/'))
84
85 @property
86 def menu(self) -> 'Menu':
87 # This is lazy loaded since we'd like to delay the menu build until the tray is requested to run
88 if not self._menu:
89 self._menu = Menu(*self.menu_items)
90 return self._menu
91
92 @check_if_tray_is_active
93 def run(self):
94 """Run the tray icon. Must be run from the main thread and is blocking"""
95 try:
96 logger.verbose('Starting tray icon')
97 self.icon = Icon('Flexget', Image.open(self.path_to_image), menu=self.menu)
98 self.running = True
99 self.icon.run()
100 except Exception as e:
101 logger.warning('Could not run tray icon: {}', e)
102 self.running = False
103
104 @check_if_tray_is_active
105 def stop(self):
106 if not self.running:
107 return
108
109 logger.verbose('Stopping tray icon')
110 self.icon.stop()
111 self.running = False
112
113
114 tray_icon = TrayIcon()
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/flexget/tray_icon.py b/flexget/tray_icon.py
--- a/flexget/tray_icon.py
+++ b/flexget/tray_icon.py
@@ -17,7 +17,7 @@
_import_success = True
except Exception as e:
- logger.debug('Could not import pystray: {}', e)
+ logger.debug('Could not load tray icon: {}', e)
_import_success = False
| {"golden_diff": "diff --git a/flexget/tray_icon.py b/flexget/tray_icon.py\n--- a/flexget/tray_icon.py\n+++ b/flexget/tray_icon.py\n@@ -17,7 +17,7 @@\n \n _import_success = True\n except Exception as e:\n- logger.debug('Could not import pystray: {}', e)\n+ logger.debug('Could not load tray icon: {}', e)\n _import_success = False\n", "issue": "Dependencies on non pure Python modules\nHi,\r\n\r\nSorry for [x-posting](https://discuss.flexget.com/t/dependencies-on-non-pure-python-modules/5086). This is kind of a development issue so it might fit here better than in the forum. Feel free to just close this and reply on Discourse.\r\n\r\nWhen upgrading Flexget I stumbled upon the fact that now it depends on Pillow, as per:\r\n\r\nhttps://github.com/Flexget/Flexget/pull/2621\r\n\r\nMy Flexget runs on an embedded aarch64 device so every time you add a dep on a module that links against C libraries (Pillow depends on zlib and libjpeg) I have to cross compile the necessary native extensions, distribute them etc. Either that or manually installing a skimmed `requirements.txt` and finally installing Flexget passing `--no-dependencies` to `pip install`.\r\n\r\nDoes adding a tray icon justify this hassle for users? No saying that\u2019s wrong of course, just trying to bring to your attention this use case and the potential annoyance for users.\r\n\r\nCould perhaps these things be made optional at install/build time by relaxing the hard dependency?\r\n\r\nThanks.\nDependencies on non pure Python modules\nHi,\r\n\r\nSorry for [x-posting](https://discuss.flexget.com/t/dependencies-on-non-pure-python-modules/5086). This is kind of a development issue so it might fit here better than in the forum. Feel free to just close this and reply on Discourse.\r\n\r\nWhen upgrading Flexget I stumbled upon the fact that now it depends on Pillow, as per:\r\n\r\nhttps://github.com/Flexget/Flexget/pull/2621\r\n\r\nMy Flexget runs on an embedded aarch64 device so every time you add a dep on a module that links against C libraries (Pillow depends on zlib and libjpeg) I have to cross compile the necessary native extensions, distribute them etc. Either that or manually installing a skimmed `requirements.txt` and finally installing Flexget passing `--no-dependencies` to `pip install`.\r\n\r\nDoes adding a tray icon justify this hassle for users? 
No saying that\u2019s wrong of course, just trying to bring to your attention this use case and the potential annoyance for users.\r\n\r\nCould perhaps these things be made optional at install/build time by relaxing the hard dependency?\r\n\r\nThanks.\n", "before_files": [{"content": "import logging\nimport webbrowser\nfrom functools import partial, wraps\nfrom pathlib import Path\nfrom typing import List, Optional\n\nfrom loguru import logger\n\nfrom flexget import __version__\n\nlogger = logger.bind(name='tray_icon')\n\ntry:\n # If we are running outside of a graphical environment, these imports will fail\n from PIL import Image\n from pystray import Icon, Menu, MenuItem\n\n _import_success = True\nexcept Exception as e:\n logger.debug('Could not import pystray: {}', e)\n _import_success = False\n\n\ndef check_if_tray_is_active(f):\n @wraps(f)\n def wrapped(self, *args, **kwargs):\n if not self.active:\n return\n return f(self, *args, **kwargs)\n\n return wrapped\n\n\nimage_path = Path(__file__).parent / 'resources' / 'flexget.png'\n\n\nclass TrayIcon:\n def __init__(self, path_to_image: Path = image_path):\n # Silence PIL noisy logging\n logging.getLogger('PIL.PngImagePlugin').setLevel(logging.INFO)\n logging.getLogger('PIL.Image').setLevel(logging.INFO)\n\n self.path_to_image: Path = path_to_image\n self.icon: Optional['Icon'] = None\n self._menu: Optional['Menu'] = None\n self.menu_items: List['MenuItem'] = []\n\n self.active: bool = _import_success\n self.running: bool = False\n\n self.add_core_menu_items()\n\n @check_if_tray_is_active\n def add_menu_item(\n self,\n text: str = None,\n action: callable = None,\n menu_item: 'MenuItem' = None,\n index: int = None,\n **kwargs,\n ):\n \"\"\"\n Add a menu item byt passing its text and function, or pass a created MenuItem. Force position by sending index\n \"\"\"\n if not any(v for v in (menu_item, text)):\n raise ValueError(f\"Either 'text' or 'menu_item' are required\")\n\n menu_item = menu_item or MenuItem(text=text, action=action, **kwargs)\n if index is not None:\n self.menu_items.insert(index, menu_item)\n else:\n self.menu_items.append(menu_item)\n\n @check_if_tray_is_active\n def add_menu_separator(self, index: int = None):\n self.add_menu_item(menu_item=Menu.SEPARATOR, index=index)\n\n def add_core_menu_items(self):\n open_web = partial(webbrowser.open)\n self.add_menu_item(text=f'Flexget {__version__}', enabled=False)\n self.add_menu_separator()\n self.add_menu_item(text='Homepage', action=partial(open_web, 'https://flexget.com/'))\n self.add_menu_item(text='Forum', action=partial(open_web, 'https://discuss.flexget.com/'))\n\n @property\n def menu(self) -> 'Menu':\n # This is lazy loaded since we'd like to delay the menu build until the tray is requested to run\n if not self._menu:\n self._menu = Menu(*self.menu_items)\n return self._menu\n\n @check_if_tray_is_active\n def run(self):\n \"\"\"Run the tray icon. 
Must be run from the main thread and is blocking\"\"\"\n try:\n logger.verbose('Starting tray icon')\n self.icon = Icon('Flexget', Image.open(self.path_to_image), menu=self.menu)\n self.running = True\n self.icon.run()\n except Exception as e:\n logger.warning('Could not run tray icon: {}', e)\n self.running = False\n\n @check_if_tray_is_active\n def stop(self):\n if not self.running:\n return\n\n logger.verbose('Stopping tray icon')\n self.icon.stop()\n self.running = False\n\n\ntray_icon = TrayIcon()\n", "path": "flexget/tray_icon.py"}], "after_files": [{"content": "import logging\nimport webbrowser\nfrom functools import partial, wraps\nfrom pathlib import Path\nfrom typing import List, Optional\n\nfrom loguru import logger\n\nfrom flexget import __version__\n\nlogger = logger.bind(name='tray_icon')\n\ntry:\n # If we are running outside of a graphical environment, these imports will fail\n from PIL import Image\n from pystray import Icon, Menu, MenuItem\n\n _import_success = True\nexcept Exception as e:\n logger.debug('Could not load tray icon: {}', e)\n _import_success = False\n\n\ndef check_if_tray_is_active(f):\n @wraps(f)\n def wrapped(self, *args, **kwargs):\n if not self.active:\n return\n return f(self, *args, **kwargs)\n\n return wrapped\n\n\nimage_path = Path(__file__).parent / 'resources' / 'flexget.png'\n\n\nclass TrayIcon:\n def __init__(self, path_to_image: Path = image_path):\n # Silence PIL noisy logging\n logging.getLogger('PIL.PngImagePlugin').setLevel(logging.INFO)\n logging.getLogger('PIL.Image').setLevel(logging.INFO)\n\n self.path_to_image: Path = path_to_image\n self.icon: Optional['Icon'] = None\n self._menu: Optional['Menu'] = None\n self.menu_items: List['MenuItem'] = []\n\n self.active: bool = _import_success\n self.running: bool = False\n\n self.add_core_menu_items()\n\n @check_if_tray_is_active\n def add_menu_item(\n self,\n text: str = None,\n action: callable = None,\n menu_item: 'MenuItem' = None,\n index: int = None,\n **kwargs,\n ):\n \"\"\"\n Add a menu item byt passing its text and function, or pass a created MenuItem. Force position by sending index\n \"\"\"\n if not any(v for v in (menu_item, text)):\n raise ValueError(f\"Either 'text' or 'menu_item' are required\")\n\n menu_item = menu_item or MenuItem(text=text, action=action, **kwargs)\n if index is not None:\n self.menu_items.insert(index, menu_item)\n else:\n self.menu_items.append(menu_item)\n\n @check_if_tray_is_active\n def add_menu_separator(self, index: int = None):\n self.add_menu_item(menu_item=Menu.SEPARATOR, index=index)\n\n def add_core_menu_items(self):\n open_web = partial(webbrowser.open)\n self.add_menu_item(text=f'Flexget {__version__}', enabled=False)\n self.add_menu_separator()\n self.add_menu_item(text='Homepage', action=partial(open_web, 'https://flexget.com/'))\n self.add_menu_item(text='Forum', action=partial(open_web, 'https://discuss.flexget.com/'))\n\n @property\n def menu(self) -> 'Menu':\n # This is lazy loaded since we'd like to delay the menu build until the tray is requested to run\n if not self._menu:\n self._menu = Menu(*self.menu_items)\n return self._menu\n\n @check_if_tray_is_active\n def run(self):\n \"\"\"Run the tray icon. 
Must be run from the main thread and is blocking\"\"\"\n try:\n logger.verbose('Starting tray icon')\n self.icon = Icon('Flexget', Image.open(self.path_to_image), menu=self.menu)\n self.running = True\n self.icon.run()\n except Exception as e:\n logger.warning('Could not run tray icon: {}', e)\n self.running = False\n\n @check_if_tray_is_active\n def stop(self):\n if not self.running:\n return\n\n logger.verbose('Stopping tray icon')\n self.icon.stop()\n self.running = False\n\n\ntray_icon = TrayIcon()\n", "path": "flexget/tray_icon.py"}]} | 1,818 | 103 |
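For the Flexget record above: the golden diff only rewords the import-failure log message, but the surrounding optional-import guard is what lets the module degrade gracefully when Pillow/pystray are unavailable at runtime. A compressed, self-contained sketch of that pattern, with the class body trimmed to the essentials (names follow `flexget/tray_icon.py`; `print` stands in for the loguru logger):

```python
from functools import wraps

try:
    # GUI-only dependencies; these fail on a headless or stripped-down install.
    from PIL import Image          # noqa: F401
    from pystray import Icon       # noqa: F401
    _import_success = True
except Exception as exc:
    # Any failure simply disables the tray feature instead of crashing.
    print(f"Could not load tray icon: {exc}")
    _import_success = False


def check_if_tray_is_active(f):
    """Turn tray methods into no-ops when the imports were unavailable."""
    @wraps(f)
    def wrapped(self, *args, **kwargs):
        if not self.active:
            return None
        return f(self, *args, **kwargs)
    return wrapped


class TrayIcon:
    def __init__(self):
        self.active = _import_success

    @check_if_tray_is_active
    def run(self):
        print("tray icon running")


TrayIcon().run()  # silently does nothing when pystray/Pillow are absent
```

Note that this guard only softens the dependency at runtime; the install-time requirement raised in the issue is a separate packaging question.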
gh_patches_debug_11938 | rasdani/github-patches | git_diff | inventree__InvenTree-1890 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Can't start inventree server
It seems that error at `https://api.exchangerate.host/latest` blocks server startup.
```
# curl https://api.exchangerate.host/latest
<html>
<head><title>521 Origin Down</title></head>
<body bgcolor="white">
<center><h1>521 Origin Down</h1></center>
<hr><center>cloudflare-nginx</center>
</body>
</html>
```
<details>
```
# inv server
InvenTree Database Configuration
================================
ENGINE: django.db.backends.mysql
NAME: inventree
HOST: localhost
Base currency changed from USD to RUB
Updating exchange rates from https://api.exchangerate.host/latest
Using base currency 'RUB'
InvenTree Database Configuration
================================
ENGINE: django.db.backends.mysql
NAME: inventree
HOST: localhost
Base currency changed from USD to RUB
Updating exchange rates from https://api.exchangerate.host/latest
Using base currency 'RUB'
Watching for file changes with StatReloader
Watching for file changes with StatReloader
Exception in thread django-main-thread:
Traceback (most recent call last):
File "/usr/lib64/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib64/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/var/projects/inventree/inventree-env/lib64/python3.6/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/var/projects/inventree/inventree-env/lib64/python3.6/site-packages/django/core/management/commands/runserver.py", line 110, in inner_run
autoreload.raise_last_exception()
File "/var/projects/inventree/inventree-env/lib64/python3.6/site-packages/django/utils/autoreload.py", line 87, in raise_last_exception
raise _exception[1]
File "/var/projects/inventree/inventree-env/lib64/python3.6/site-packages/django/core/management/__init__.py", line 375, in execute
autoreload.check_errors(django.setup)()
File "/var/projects/inventree/inventree-env/lib64/python3.6/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/var/projects/inventree/inventree-env/lib64/python3.6/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/var/projects/inventree/inventree-env/lib64/python3.6/site-packages/django/apps/registry.py", line 122, in populate
app_config.ready()
File "/var/projects/inventree/inventree/InvenTree/InvenTree/apps.py", line 24, in ready
self.update_exchange_rates()
File "/var/projects/inventree/inventree/InvenTree/InvenTree/apps.py", line 105, in update_exchange_rates
update_exchange_rates()
File "/var/projects/inventree/inventree/InvenTree/InvenTree/tasks.py", line 199, in update_exchange_rates
backend.update_rates(base_currency=base)
File "/var/projects/inventree/inventree/InvenTree/InvenTree/exchange.py", line 29, in update_rates
super().update_rates(base=base_currency, symbols=symbols)
File "/usr/lib64/python3.6/contextlib.py", line 52, in inner
return func(*args, **kwds)
File "/var/projects/inventree/inventree-env/lib64/python3.6/site-packages/djmoney/contrib/exchange/backends/base.py", line 68, in update_rates
for currency, value in self.get_rates(**params).items()
File "/var/projects/inventree/inventree-env/lib64/python3.6/site-packages/djmoney/contrib/exchange/backends/base.py", line 80, in get_rates
response = self.get_response(**params)
File "/var/projects/inventree/inventree-env/lib64/python3.6/site-packages/djmoney/contrib/exchange/backends/base.py", line 48, in get_response
response = urlopen(url, cafile=certifi.where())
File "/usr/lib64/python3.6/urllib/request.py", line 223, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib64/python3.6/urllib/request.py", line 532, in open
response = meth(req, response)
File "/usr/lib64/python3.6/urllib/request.py", line 642, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib64/python3.6/urllib/request.py", line 570, in error
return self._call_chain(*args)
File "/usr/lib64/python3.6/urllib/request.py", line 504, in _call_chain
result = func(*args)
File "/usr/lib64/python3.6/urllib/request.py", line 650, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 521: Origin Down
```
</details>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `InvenTree/InvenTree/exchange.py`
Content:
```
1 from common.settings import currency_code_default, currency_codes
2
3 from djmoney.contrib.exchange.backends.base import SimpleExchangeBackend
4
5
6 class InvenTreeExchange(SimpleExchangeBackend):
7 """
8 Backend for automatically updating currency exchange rates.
9
10 Uses the exchangerate.host service API
11 """
12
13 name = "InvenTreeExchange"
14
15 def __init__(self):
16 self.url = "https://api.exchangerate.host/latest"
17
18 super().__init__()
19
20 def get_params(self):
21 # No API key is required
22 return {
23 }
24
25 def update_rates(self, base_currency=currency_code_default()):
26
27 symbols = ','.join(currency_codes())
28
29 super().update_rates(base=base_currency, symbols=symbols)
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/InvenTree/InvenTree/exchange.py b/InvenTree/InvenTree/exchange.py
--- a/InvenTree/InvenTree/exchange.py
+++ b/InvenTree/InvenTree/exchange.py
@@ -1,4 +1,5 @@
from common.settings import currency_code_default, currency_codes
+from urllib.error import HTTPError, URLError
from djmoney.contrib.exchange.backends.base import SimpleExchangeBackend
@@ -26,4 +27,8 @@
symbols = ','.join(currency_codes())
- super().update_rates(base=base_currency, symbols=symbols)
+ try:
+ super().update_rates(base=base_currency, symbols=symbols)
+ # catch connection errors
+ except (HTTPError, URLError):
+ print('Encountered connection error while updating')
| {"golden_diff": "diff --git a/InvenTree/InvenTree/exchange.py b/InvenTree/InvenTree/exchange.py\n--- a/InvenTree/InvenTree/exchange.py\n+++ b/InvenTree/InvenTree/exchange.py\n@@ -1,4 +1,5 @@\n from common.settings import currency_code_default, currency_codes\n+from urllib.error import HTTPError, URLError\n \n from djmoney.contrib.exchange.backends.base import SimpleExchangeBackend\n \n@@ -26,4 +27,8 @@\n \n symbols = ','.join(currency_codes())\n \n- super().update_rates(base=base_currency, symbols=symbols)\n+ try:\n+ super().update_rates(base=base_currency, symbols=symbols)\n+ # catch connection errors\n+ except (HTTPError, URLError):\n+ print('Encountered connection error while updating')\n", "issue": "Can't start inventree server\nIt seems that error at `https://api.exchangerate.host/latest` blocks server startup.\r\n\r\n```\r\n# curl https://api.exchangerate.host/latest\r\n<html>\r\n<head><title>521 Origin Down</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>521 Origin Down</h1></center>\r\n<hr><center>cloudflare-nginx</center>\r\n</body>\r\n</html>\r\n```\r\n\r\n<details>\r\n\r\n```\r\n# inv server\r\nInvenTree Database Configuration\r\n================================\r\nENGINE: django.db.backends.mysql\r\nNAME: inventree\r\nHOST: localhost\r\nBase currency changed from USD to RUB\r\nUpdating exchange rates from https://api.exchangerate.host/latest\r\nUsing base currency 'RUB'\r\nInvenTree Database Configuration\r\n================================\r\nENGINE: django.db.backends.mysql\r\nNAME: inventree\r\nHOST: localhost\r\nBase currency changed from USD to RUB\r\nUpdating exchange rates from https://api.exchangerate.host/latest\r\nUsing base currency 'RUB'\r\nWatching for file changes with StatReloader\r\nWatching for file changes with StatReloader\r\nException in thread django-main-thread:\r\nTraceback (most recent call last):\r\n File \"/usr/lib64/python3.6/threading.py\", line 916, in _bootstrap_inner\r\n self.run()\r\n File \"/usr/lib64/python3.6/threading.py\", line 864, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/var/projects/inventree/inventree-env/lib64/python3.6/site-packages/django/utils/autoreload.py\", line 64, in wrapper\r\n fn(*args, **kwargs)\r\n File \"/var/projects/inventree/inventree-env/lib64/python3.6/site-packages/django/core/management/commands/runserver.py\", line 110, in inner_run\r\n autoreload.raise_last_exception()\r\n File \"/var/projects/inventree/inventree-env/lib64/python3.6/site-packages/django/utils/autoreload.py\", line 87, in raise_last_exception\r\n raise _exception[1]\r\n File \"/var/projects/inventree/inventree-env/lib64/python3.6/site-packages/django/core/management/__init__.py\", line 375, in execute\r\n autoreload.check_errors(django.setup)()\r\n File \"/var/projects/inventree/inventree-env/lib64/python3.6/site-packages/django/utils/autoreload.py\", line 64, in wrapper\r\n fn(*args, **kwargs)\r\n File \"/var/projects/inventree/inventree-env/lib64/python3.6/site-packages/django/__init__.py\", line 24, in setup\r\n apps.populate(settings.INSTALLED_APPS)\r\n File \"/var/projects/inventree/inventree-env/lib64/python3.6/site-packages/django/apps/registry.py\", line 122, in populate\r\n app_config.ready()\r\n File \"/var/projects/inventree/inventree/InvenTree/InvenTree/apps.py\", line 24, in ready\r\n self.update_exchange_rates()\r\n File \"/var/projects/inventree/inventree/InvenTree/InvenTree/apps.py\", line 105, in update_exchange_rates\r\n update_exchange_rates()\r\n File 
\"/var/projects/inventree/inventree/InvenTree/InvenTree/tasks.py\", line 199, in update_exchange_rates\r\n backend.update_rates(base_currency=base)\r\n File \"/var/projects/inventree/inventree/InvenTree/InvenTree/exchange.py\", line 29, in update_rates\r\n super().update_rates(base=base_currency, symbols=symbols)\r\n File \"/usr/lib64/python3.6/contextlib.py\", line 52, in inner\r\n return func(*args, **kwds)\r\n File \"/var/projects/inventree/inventree-env/lib64/python3.6/site-packages/djmoney/contrib/exchange/backends/base.py\", line 68, in update_rates\r\n for currency, value in self.get_rates(**params).items()\r\n File \"/var/projects/inventree/inventree-env/lib64/python3.6/site-packages/djmoney/contrib/exchange/backends/base.py\", line 80, in get_rates\r\n response = self.get_response(**params)\r\n File \"/var/projects/inventree/inventree-env/lib64/python3.6/site-packages/djmoney/contrib/exchange/backends/base.py\", line 48, in get_response\r\n response = urlopen(url, cafile=certifi.where())\r\n File \"/usr/lib64/python3.6/urllib/request.py\", line 223, in urlopen\r\n return opener.open(url, data, timeout)\r\n File \"/usr/lib64/python3.6/urllib/request.py\", line 532, in open\r\n response = meth(req, response)\r\n File \"/usr/lib64/python3.6/urllib/request.py\", line 642, in http_response\r\n 'http', request, response, code, msg, hdrs)\r\n File \"/usr/lib64/python3.6/urllib/request.py\", line 570, in error\r\n return self._call_chain(*args)\r\n File \"/usr/lib64/python3.6/urllib/request.py\", line 504, in _call_chain\r\n result = func(*args)\r\n File \"/usr/lib64/python3.6/urllib/request.py\", line 650, in http_error_default\r\n raise HTTPError(req.full_url, code, msg, hdrs, fp)\r\nurllib.error.HTTPError: HTTP Error 521: Origin Down\r\n```\r\n\r\n</details>\n", "before_files": [{"content": "from common.settings import currency_code_default, currency_codes\n\nfrom djmoney.contrib.exchange.backends.base import SimpleExchangeBackend\n\n\nclass InvenTreeExchange(SimpleExchangeBackend):\n \"\"\"\n Backend for automatically updating currency exchange rates.\n\n Uses the exchangerate.host service API\n \"\"\"\n\n name = \"InvenTreeExchange\"\n\n def __init__(self):\n self.url = \"https://api.exchangerate.host/latest\"\n\n super().__init__()\n\n def get_params(self):\n # No API key is required\n return {\n }\n\n def update_rates(self, base_currency=currency_code_default()):\n\n symbols = ','.join(currency_codes())\n\n super().update_rates(base=base_currency, symbols=symbols)\n", "path": "InvenTree/InvenTree/exchange.py"}], "after_files": [{"content": "from common.settings import currency_code_default, currency_codes\nfrom urllib.error import HTTPError, URLError\n\nfrom djmoney.contrib.exchange.backends.base import SimpleExchangeBackend\n\n\nclass InvenTreeExchange(SimpleExchangeBackend):\n \"\"\"\n Backend for automatically updating currency exchange rates.\n\n Uses the exchangerate.host service API\n \"\"\"\n\n name = \"InvenTreeExchange\"\n\n def __init__(self):\n self.url = \"https://api.exchangerate.host/latest\"\n\n super().__init__()\n\n def get_params(self):\n # No API key is required\n return {\n }\n\n def update_rates(self, base_currency=currency_code_default()):\n\n symbols = ','.join(currency_codes())\n\n try:\n super().update_rates(base=base_currency, symbols=symbols)\n # catch connection errors\n except (HTTPError, URLError):\n print('Encountered connection error while updating')\n", "path": "InvenTree/InvenTree/exchange.py"}]} | 1,716 | 185 |
gh_patches_debug_29185 | rasdani/github-patches | git_diff | fossasia__open-event-server-4267 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Implement User Emails
Parent Issue: #4196
Requires #4210 to be merged
- [x] Email verification email - To the user who registered
- [x] A welcome email - After registration. Before email verification. - To the user who registered
- [x] Forgot password reset email - To the user who registered
- [x] Email change notification (#4210) - To both the old email address and the new email address
- [ ] Password change notification - To the user
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/auth.py`
Content:
```
1 import base64
2
3 from flask import Blueprint, abort, make_response
4 from flask import request, jsonify
5
6 from app.api.helpers.db import save_to_db
7
8 from app.api.helpers.utilities import get_serializer
9 from app.models.user import User
10
11 auth_routes = Blueprint('auth', __name__, url_prefix='/v1/auth')
12
13
14 @auth_routes.route('/verify-email', methods=['POST'])
15 def verify_email():
16 token = base64.b64decode(request.json['data']['token'])
17 s = get_serializer()
18
19 try:
20 data = s.loads(token)
21 except Exception:
22 return abort(
23 make_response(jsonify(error="Invalid Token"), 400)
24 )
25
26 try:
27 user = User.query.filter_by(email=data[0]).one()
28 except Exception:
29 return abort(
30 make_response(jsonify(error="Invalid Token"), 400)
31 )
32 else:
33 user.is_verified = True
34 save_to_db(user)
35
36 return jsonify({
37 "id": user.id,
38 "email": user.email,
39 "name": user.name if user.get('name') else None
40 })
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/api/auth.py b/app/api/auth.py
--- a/app/api/auth.py
+++ b/app/api/auth.py
@@ -1,11 +1,14 @@
import base64
+from flask import request, jsonify, abort, make_response, Blueprint
+from sqlalchemy.orm.exc import NoResultFound
-from flask import Blueprint, abort, make_response
-from flask import request, jsonify
-
+from app import get_settings
from app.api.helpers.db import save_to_db
+from app.api.helpers.files import make_frontend_url
+from app.api.helpers.mail import send_email_with_action
from app.api.helpers.utilities import get_serializer
+from app.models.mail import PASSWORD_RESET
from app.models.user import User
auth_routes = Blueprint('auth', __name__, url_prefix='/v1/auth')
@@ -31,6 +34,38 @@
)
else:
user.is_verified = True
+
+
+@auth_routes.route('/reset-password', methods=['POST'])
+def reset_password_post():
+ email = request.json['data']['email']
+
+ try:
+ user = User.query.filter_by(email=email).one()
+ except NoResultFound:
+ return abort(
+ make_response(jsonify(error="User not found"), 404)
+ )
+ else:
+ link = make_frontend_url('/reset-password', {'token': user.reset_password})
+ send_email_with_action(user, PASSWORD_RESET, app_name=get_settings()['app_name'], link=link)
+
+ return make_response(jsonify(message="Email Sent"), 200)
+
+
+@auth_routes.route('/reset-password', methods=['PATCH'])
+def reset_password_patch():
+ token = request.json['data']['token']
+ password = request.json['data']['password']
+
+ try:
+ user = User.query.filter_by(reset_password=token).one()
+ except NoResultFound:
+ return abort(
+ make_response(jsonify(error="User not found"), 404)
+ )
+ else:
+ user.password = password
save_to_db(user)
return jsonify({
| {"golden_diff": "diff --git a/app/api/auth.py b/app/api/auth.py\n--- a/app/api/auth.py\n+++ b/app/api/auth.py\n@@ -1,11 +1,14 @@\n import base64\n+from flask import request, jsonify, abort, make_response, Blueprint\n+from sqlalchemy.orm.exc import NoResultFound\n \n-from flask import Blueprint, abort, make_response\n-from flask import request, jsonify\n-\n+from app import get_settings\n from app.api.helpers.db import save_to_db\n+from app.api.helpers.files import make_frontend_url\n+from app.api.helpers.mail import send_email_with_action\n \n from app.api.helpers.utilities import get_serializer\n+from app.models.mail import PASSWORD_RESET\n from app.models.user import User\n \n auth_routes = Blueprint('auth', __name__, url_prefix='/v1/auth')\n@@ -31,6 +34,38 @@\n )\n else:\n user.is_verified = True\n+\n+\n+@auth_routes.route('/reset-password', methods=['POST'])\n+def reset_password_post():\n+ email = request.json['data']['email']\n+\n+ try:\n+ user = User.query.filter_by(email=email).one()\n+ except NoResultFound:\n+ return abort(\n+ make_response(jsonify(error=\"User not found\"), 404)\n+ )\n+ else:\n+ link = make_frontend_url('/reset-password', {'token': user.reset_password})\n+ send_email_with_action(user, PASSWORD_RESET, app_name=get_settings()['app_name'], link=link)\n+\n+ return make_response(jsonify(message=\"Email Sent\"), 200)\n+\n+\n+@auth_routes.route('/reset-password', methods=['PATCH'])\n+def reset_password_patch():\n+ token = request.json['data']['token']\n+ password = request.json['data']['password']\n+\n+ try:\n+ user = User.query.filter_by(reset_password=token).one()\n+ except NoResultFound:\n+ return abort(\n+ make_response(jsonify(error=\"User not found\"), 404)\n+ )\n+ else:\n+ user.password = password\n save_to_db(user)\n \n return jsonify({\n", "issue": "Implement User Emails\nParent Issue: #4196 \r\n\r\nRequires #4210 to be merged\r\n\r\n- [x] Email verification email - To the user who registered\r\n- [x] A welcome email - After registration. Before email verification. 
- To the user who registered\r\n- [x] Forgot password reset email - To the user who registered\r\n- [x] Email change notification (#4210) - To both the old email address and the new email address\r\n- [ ] Password change notification - To the user\n", "before_files": [{"content": "import base64\n\nfrom flask import Blueprint, abort, make_response\nfrom flask import request, jsonify\n\nfrom app.api.helpers.db import save_to_db\n\nfrom app.api.helpers.utilities import get_serializer\nfrom app.models.user import User\n\nauth_routes = Blueprint('auth', __name__, url_prefix='/v1/auth')\n\n\n@auth_routes.route('/verify-email', methods=['POST'])\ndef verify_email():\n token = base64.b64decode(request.json['data']['token'])\n s = get_serializer()\n\n try:\n data = s.loads(token)\n except Exception:\n return abort(\n make_response(jsonify(error=\"Invalid Token\"), 400)\n )\n\n try:\n user = User.query.filter_by(email=data[0]).one()\n except Exception:\n return abort(\n make_response(jsonify(error=\"Invalid Token\"), 400)\n )\n else:\n user.is_verified = True\n save_to_db(user)\n\n return jsonify({\n \"id\": user.id,\n \"email\": user.email,\n \"name\": user.name if user.get('name') else None\n })\n", "path": "app/api/auth.py"}], "after_files": [{"content": "import base64\nfrom flask import request, jsonify, abort, make_response, Blueprint\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom app import get_settings\nfrom app.api.helpers.db import save_to_db\nfrom app.api.helpers.files import make_frontend_url\nfrom app.api.helpers.mail import send_email_with_action\n\nfrom app.api.helpers.utilities import get_serializer\nfrom app.models.mail import PASSWORD_RESET\nfrom app.models.user import User\n\nauth_routes = Blueprint('auth', __name__, url_prefix='/v1/auth')\n\n\n@auth_routes.route('/verify-email', methods=['POST'])\ndef verify_email():\n token = base64.b64decode(request.json['data']['token'])\n s = get_serializer()\n\n try:\n data = s.loads(token)\n except Exception:\n return abort(\n make_response(jsonify(error=\"Invalid Token\"), 400)\n )\n\n try:\n user = User.query.filter_by(email=data[0]).one()\n except Exception:\n return abort(\n make_response(jsonify(error=\"Invalid Token\"), 400)\n )\n else:\n user.is_verified = True\n\n\n@auth_routes.route('/reset-password', methods=['POST'])\ndef reset_password_post():\n email = request.json['data']['email']\n\n try:\n user = User.query.filter_by(email=email).one()\n except NoResultFound:\n return abort(\n make_response(jsonify(error=\"User not found\"), 404)\n )\n else:\n link = make_frontend_url('/reset-password', {'token': user.reset_password})\n send_email_with_action(user, PASSWORD_RESET, app_name=get_settings()['app_name'], link=link)\n\n return make_response(jsonify(message=\"Email Sent\"), 200)\n\n\n@auth_routes.route('/reset-password', methods=['PATCH'])\ndef reset_password_patch():\n token = request.json['data']['token']\n password = request.json['data']['password']\n\n try:\n user = User.query.filter_by(reset_password=token).one()\n except NoResultFound:\n return abort(\n make_response(jsonify(error=\"User not found\"), 404)\n )\n else:\n user.password = password\n save_to_db(user)\n\n return jsonify({\n \"id\": user.id,\n \"email\": user.email,\n \"name\": user.name if user.get('name') else None\n })\n", "path": "app/api/auth.py"}]} | 683 | 458 |
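For the open-event record above: the patch adds two routes under `/v1/auth/reset-password`. A hypothetical client-side sketch of how they would be exercised — host, port, email address and token are placeholders, and a locally running server is assumed:

```python
import requests

BASE = "http://localhost:5000/v1/auth"  # placeholder host/port

try:
    # Step 1: request a reset email for a registered address.
    resp = requests.post(f"{BASE}/reset-password",
                         json={"data": {"email": "user@example.com"}})
    print(resp.status_code, resp.json())  # 200 {"message": "Email Sent"} or 404

    # Step 2: the emailed link carries a token; submit it with the new password.
    resp = requests.patch(f"{BASE}/reset-password",
                          json={"data": {"token": "<token-from-email>",
                                         "password": "a-new-password"}})
    print(resp.status_code)
except requests.exceptions.ConnectionError:
    print("No local server running -- this sketch only shows the request shapes.")
```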
gh_patches_debug_11828 | rasdani/github-patches | git_diff | pypi__warehouse-2524 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Validate username format before attempting to flush it to the db
Validate a valid username prior to trying to save it to the database, using the same logic as is already in the database.
---
https://sentry.io/python-software-foundation/warehouse-production/issues/258353802/
```
IntegrityError: new row for relation "accounts_user" violates check constraint "accounts_user_valid_username"
File "sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "sqlalchemy/engine/default.py", line 470, in do_execute
cursor.execute(statement, parameters)
IntegrityError: (psycopg2.IntegrityError) new row for relation "accounts_user" violates check constraint "accounts_user_valid_username"
(30 additional frame(s) were not displayed)
...
File "warehouse/raven.py", line 41, in raven_tween
return handler(request)
File "warehouse/cache/http.py", line 69, in conditional_http_tween
response = handler(request)
File "warehouse/cache/http.py", line 33, in wrapped
return view(context, request)
File "warehouse/accounts/views.py", line 226, in register
form.email.data
File "warehouse/accounts/services.py", line 152, in create_user
self.db.flush()
IntegrityError: (psycopg2.IntegrityError) new row for relation "accounts_user" violates check constraint "accounts_user_valid_username"
DETAIL: Failing row contains
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/accounts/forms.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import disposable_email_domains
14 import wtforms
15 import wtforms.fields.html5
16
17 from warehouse import forms, recaptcha
18 from warehouse.accounts.interfaces import TooManyFailedLogins
19
20
21 class CredentialsMixin:
22 username = wtforms.StringField(
23 validators=[
24 wtforms.validators.DataRequired(),
25 wtforms.validators.Length(
26 max=50,
27 message=(
28 "The username you have chosen is too long. Please choose "
29 "a username with under 50 characters."
30 )
31 ),
32 ],
33 )
34
35 password = wtforms.PasswordField(
36 validators=[
37 wtforms.validators.DataRequired(),
38 ],
39 )
40
41 def __init__(self, *args, user_service, **kwargs):
42 super().__init__(*args, **kwargs)
43 self.user_service = user_service
44
45
46 class RegistrationForm(CredentialsMixin, forms.Form):
47 password = wtforms.PasswordField(
48 validators=[
49 wtforms.validators.DataRequired(),
50 forms.PasswordStrengthValidator(
51 user_input_fields=["full_name", "username", "email"],
52 ),
53 ],
54 )
55 password_confirm = wtforms.PasswordField(
56 validators=[
57 wtforms.validators.DataRequired(),
58 wtforms.validators.EqualTo(
59 "password", "Your passwords do not match. Please try again."
60 ),
61 ],
62 )
63
64 full_name = wtforms.StringField()
65
66 email = wtforms.fields.html5.EmailField(
67 validators=[
68 wtforms.validators.DataRequired(),
69 wtforms.validators.Email(
70 message=(
71 "The email address you have chosen is not a valid "
72 "format. Please try again."
73 )
74 ),
75 ],
76 )
77
78 g_recaptcha_response = wtforms.StringField()
79
80 def __init__(self, *args, recaptcha_service, **kwargs):
81 super().__init__(*args, **kwargs)
82 self.recaptcha_service = recaptcha_service
83
84 def validate_username(self, field):
85 if self.user_service.find_userid(field.data) is not None:
86 raise wtforms.validators.ValidationError(
87 "This username is already being used by another "
88 "account. Please choose a different username."
89 )
90
91 def validate_email(self, field):
92 if self.user_service.find_userid_by_email(field.data) is not None:
93 raise wtforms.validators.ValidationError(
94 "This email address is already being used by another account. "
95 "Please use a different email."
96 )
97 domain = field.data.split('@')[-1]
98 if domain in disposable_email_domains.blacklist:
99 raise wtforms.validators.ValidationError(
100 "Sorry, you cannot create an account with an email address "
101 "from this domain. Please use a different email."
102 )
103
104 def validate_g_recaptcha_response(self, field):
105 # do required data validation here due to enabled flag being required
106 if self.recaptcha_service.enabled and not field.data:
107 raise wtforms.validators.ValidationError("Recaptcha error.")
108 try:
109 self.recaptcha_service.verify_response(field.data)
110 except recaptcha.RecaptchaError:
111 # TODO: log error
112 # don't want to provide the user with any detail
113 raise wtforms.validators.ValidationError("Recaptcha error.")
114
115
116 class LoginForm(CredentialsMixin, forms.Form):
117 def validate_username(self, field):
118 userid = self.user_service.find_userid(field.data)
119
120 if userid is None:
121 raise wtforms.validators.ValidationError(
122 "No user found with that username. Please try again."
123 )
124
125 def validate_password(self, field):
126 userid = self.user_service.find_userid(self.username.data)
127 if userid is not None:
128 try:
129 if not self.user_service.check_password(userid, field.data):
130 raise wtforms.validators.ValidationError(
131 "The username and password combination you have "
132 "provided is invalid. Please try again."
133 )
134 except TooManyFailedLogins:
135 raise wtforms.validators.ValidationError(
136 "There have been too many unsuccessful login attempts, "
137 "please try again later."
138 ) from None
139
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/warehouse/accounts/forms.py b/warehouse/accounts/forms.py
--- a/warehouse/accounts/forms.py
+++ b/warehouse/accounts/forms.py
@@ -29,6 +29,18 @@
"a username with under 50 characters."
)
),
+ # the regexp below must match the CheckConstraint
+ # for the username field in accounts.models.User
+ wtforms.validators.Regexp(
+ r'^[a-zA-Z0-9][a-zA-Z0-9._-]*[a-zA-Z0-9]$',
+ message=(
+ "The username is invalid. Usernames "
+ "must be composed of letters, numbers, "
+ "dots, hyphens and underscores. And must "
+ "also start and finish with a letter or number. "
+ "Please choose a different username."
+ )
+ )
],
)
| {"golden_diff": "diff --git a/warehouse/accounts/forms.py b/warehouse/accounts/forms.py\n--- a/warehouse/accounts/forms.py\n+++ b/warehouse/accounts/forms.py\n@@ -29,6 +29,18 @@\n \"a username with under 50 characters.\"\n )\n ),\n+ # the regexp below must match the CheckConstraint\n+ # for the username field in accounts.models.User\n+ wtforms.validators.Regexp(\n+ r'^[a-zA-Z0-9][a-zA-Z0-9._-]*[a-zA-Z0-9]$',\n+ message=(\n+ \"The username is invalid. Usernames \"\n+ \"must be composed of letters, numbers, \"\n+ \"dots, hyphens and underscores. And must \"\n+ \"also start and finish with a letter or number. \"\n+ \"Please choose a different username.\"\n+ )\n+ )\n ],\n )\n", "issue": "Validate username format before attempting to flush it to the db\nValidate a valid username prior to trying to save it to the database, using the same logic as is already in the database.\n\n---\n\nhttps://sentry.io/python-software-foundation/warehouse-production/issues/258353802/\n\n```\nIntegrityError: new row for relation \"accounts_user\" violates check constraint \"accounts_user_valid_username\"\n\n File \"sqlalchemy/engine/base.py\", line 1182, in _execute_context\n context)\n File \"sqlalchemy/engine/default.py\", line 470, in do_execute\n cursor.execute(statement, parameters)\n\nIntegrityError: (psycopg2.IntegrityError) new row for relation \"accounts_user\" violates check constraint \"accounts_user_valid_username\"\n(30 additional frame(s) were not displayed)\n...\n File \"warehouse/raven.py\", line 41, in raven_tween\n return handler(request)\n File \"warehouse/cache/http.py\", line 69, in conditional_http_tween\n response = handler(request)\n File \"warehouse/cache/http.py\", line 33, in wrapped\n return view(context, request)\n File \"warehouse/accounts/views.py\", line 226, in register\n form.email.data\n File \"warehouse/accounts/services.py\", line 152, in create_user\n self.db.flush()\n\nIntegrityError: (psycopg2.IntegrityError) new row for relation \"accounts_user\" violates check constraint \"accounts_user_valid_username\"\nDETAIL: Failing row contains \n```\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport disposable_email_domains\nimport wtforms\nimport wtforms.fields.html5\n\nfrom warehouse import forms, recaptcha\nfrom warehouse.accounts.interfaces import TooManyFailedLogins\n\n\nclass CredentialsMixin:\n username = wtforms.StringField(\n validators=[\n wtforms.validators.DataRequired(),\n wtforms.validators.Length(\n max=50,\n message=(\n \"The username you have chosen is too long. 
Please choose \"\n \"a username with under 50 characters.\"\n )\n ),\n ],\n )\n\n password = wtforms.PasswordField(\n validators=[\n wtforms.validators.DataRequired(),\n ],\n )\n\n def __init__(self, *args, user_service, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n\n\nclass RegistrationForm(CredentialsMixin, forms.Form):\n password = wtforms.PasswordField(\n validators=[\n wtforms.validators.DataRequired(),\n forms.PasswordStrengthValidator(\n user_input_fields=[\"full_name\", \"username\", \"email\"],\n ),\n ],\n )\n password_confirm = wtforms.PasswordField(\n validators=[\n wtforms.validators.DataRequired(),\n wtforms.validators.EqualTo(\n \"password\", \"Your passwords do not match. Please try again.\"\n ),\n ],\n )\n\n full_name = wtforms.StringField()\n\n email = wtforms.fields.html5.EmailField(\n validators=[\n wtforms.validators.DataRequired(),\n wtforms.validators.Email(\n message=(\n \"The email address you have chosen is not a valid \"\n \"format. Please try again.\"\n )\n ),\n ],\n )\n\n g_recaptcha_response = wtforms.StringField()\n\n def __init__(self, *args, recaptcha_service, **kwargs):\n super().__init__(*args, **kwargs)\n self.recaptcha_service = recaptcha_service\n\n def validate_username(self, field):\n if self.user_service.find_userid(field.data) is not None:\n raise wtforms.validators.ValidationError(\n \"This username is already being used by another \"\n \"account. Please choose a different username.\"\n )\n\n def validate_email(self, field):\n if self.user_service.find_userid_by_email(field.data) is not None:\n raise wtforms.validators.ValidationError(\n \"This email address is already being used by another account. \"\n \"Please use a different email.\"\n )\n domain = field.data.split('@')[-1]\n if domain in disposable_email_domains.blacklist:\n raise wtforms.validators.ValidationError(\n \"Sorry, you cannot create an account with an email address \"\n \"from this domain. Please use a different email.\"\n )\n\n def validate_g_recaptcha_response(self, field):\n # do required data validation here due to enabled flag being required\n if self.recaptcha_service.enabled and not field.data:\n raise wtforms.validators.ValidationError(\"Recaptcha error.\")\n try:\n self.recaptcha_service.verify_response(field.data)\n except recaptcha.RecaptchaError:\n # TODO: log error\n # don't want to provide the user with any detail\n raise wtforms.validators.ValidationError(\"Recaptcha error.\")\n\n\nclass LoginForm(CredentialsMixin, forms.Form):\n def validate_username(self, field):\n userid = self.user_service.find_userid(field.data)\n\n if userid is None:\n raise wtforms.validators.ValidationError(\n \"No user found with that username. Please try again.\"\n )\n\n def validate_password(self, field):\n userid = self.user_service.find_userid(self.username.data)\n if userid is not None:\n try:\n if not self.user_service.check_password(userid, field.data):\n raise wtforms.validators.ValidationError(\n \"The username and password combination you have \"\n \"provided is invalid. 
Please try again.\"\n )\n except TooManyFailedLogins:\n raise wtforms.validators.ValidationError(\n \"There have been too many unsuccessful login attempts, \"\n \"please try again later.\"\n ) from None\n", "path": "warehouse/accounts/forms.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport disposable_email_domains\nimport wtforms\nimport wtforms.fields.html5\n\nfrom warehouse import forms, recaptcha\nfrom warehouse.accounts.interfaces import TooManyFailedLogins\n\n\nclass CredentialsMixin:\n username = wtforms.StringField(\n validators=[\n wtforms.validators.DataRequired(),\n wtforms.validators.Length(\n max=50,\n message=(\n \"The username you have chosen is too long. Please choose \"\n \"a username with under 50 characters.\"\n )\n ),\n # the regexp below must match the CheckConstraint\n # for the username field in accounts.models.User\n wtforms.validators.Regexp(\n r'^[a-zA-Z0-9][a-zA-Z0-9._-]*[a-zA-Z0-9]$',\n message=(\n \"The username is invalid. Usernames \"\n \"must be composed of letters, numbers, \"\n \"dots, hyphens and underscores. And must \"\n \"also start and finish with a letter or number. \"\n \"Please choose a different username.\"\n )\n )\n ],\n )\n\n password = wtforms.PasswordField(\n validators=[\n wtforms.validators.DataRequired(),\n ],\n )\n\n def __init__(self, *args, user_service, **kwargs):\n super().__init__(*args, **kwargs)\n self.user_service = user_service\n\n\nclass RegistrationForm(CredentialsMixin, forms.Form):\n password = wtforms.PasswordField(\n validators=[\n wtforms.validators.DataRequired(),\n forms.PasswordStrengthValidator(\n user_input_fields=[\"full_name\", \"username\", \"email\"],\n ),\n ],\n )\n password_confirm = wtforms.PasswordField(\n validators=[\n wtforms.validators.DataRequired(),\n wtforms.validators.EqualTo(\n \"password\", \"Your passwords do not match. Please try again.\"\n ),\n ],\n )\n\n full_name = wtforms.StringField()\n\n email = wtforms.fields.html5.EmailField(\n validators=[\n wtforms.validators.DataRequired(),\n wtforms.validators.Email(\n message=(\n \"The email address you have chosen is not a valid \"\n \"format. Please try again.\"\n )\n ),\n ],\n )\n\n g_recaptcha_response = wtforms.StringField()\n\n def __init__(self, *args, recaptcha_service, **kwargs):\n super().__init__(*args, **kwargs)\n self.recaptcha_service = recaptcha_service\n\n def validate_username(self, field):\n if self.user_service.find_userid(field.data) is not None:\n raise wtforms.validators.ValidationError(\n \"This username is already being used by another \"\n \"account. Please choose a different username.\"\n )\n\n def validate_email(self, field):\n if self.user_service.find_userid_by_email(field.data) is not None:\n raise wtforms.validators.ValidationError(\n \"This email address is already being used by another account. 
\"\n \"Please use a different email.\"\n )\n domain = field.data.split('@')[-1]\n if domain in disposable_email_domains.blacklist:\n raise wtforms.validators.ValidationError(\n \"Sorry, you cannot create an account with an email address \"\n \"from this domain. Please use a different email.\"\n )\n\n def validate_g_recaptcha_response(self, field):\n # do required data validation here due to enabled flag being required\n if self.recaptcha_service.enabled and not field.data:\n raise wtforms.validators.ValidationError(\"Recaptcha error.\")\n try:\n self.recaptcha_service.verify_response(field.data)\n except recaptcha.RecaptchaError:\n # TODO: log error\n # don't want to provide the user with any detail\n raise wtforms.validators.ValidationError(\"Recaptcha error.\")\n\n\nclass LoginForm(CredentialsMixin, forms.Form):\n def validate_username(self, field):\n userid = self.user_service.find_userid(field.data)\n\n if userid is None:\n raise wtforms.validators.ValidationError(\n \"No user found with that username. Please try again.\"\n )\n\n def validate_password(self, field):\n userid = self.user_service.find_userid(self.username.data)\n if userid is not None:\n try:\n if not self.user_service.check_password(userid, field.data):\n raise wtforms.validators.ValidationError(\n \"The username and password combination you have \"\n \"provided is invalid. Please try again.\"\n )\n except TooManyFailedLogins:\n raise wtforms.validators.ValidationError(\n \"There have been too many unsuccessful login attempts, \"\n \"please try again later.\"\n ) from None\n", "path": "warehouse/accounts/forms.py"}]} | 1,844 | 196 |
gh_patches_debug_19 | rasdani/github-patches | git_diff | pwndbg__pwndbg-381 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Broken `entry` command
### Description
The `entry` command pass arguments differently then the `run` command.
### Steps to reproduce
```
[dc@dc:pwndbg|dev *$%]$ gdb python
Loaded 113 commands. Type pwndbg [filter] for a list.
Reading symbols from python...(no debugging symbols found)...done.
pwndbg> set exception-verbose on
Set whether to print a full stacktracefor exceptions raised in Pwndbg commands to True
pwndbg> run -c "print(1); print(2)"
Starting program: /usr/bin/python -c "print(1); print(2)"
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
1
2
[Inferior 1 (process 20590) exited normally]
pwndbg> entry -c "print(1); print(2)"
('-c', 'print(1); print(2)')
Running '%s' run -c print(1); print(2)
/bin/bash: -c: line 0: syntax error near unexpected token `('
/bin/bash: -c: line 0: `exec /usr/bin/python -c print(1); print(2)'
Traceback (most recent call last):
File "/home/dc/installed/pwndbg/pwndbg/commands/__init__.py", line 100, in __call__
return self.function(*args, **kwargs)
File "/home/dc/installed/pwndbg/pwndbg/commands/__init__.py", line 181, in _OnlyWithFile
return function(*a, **kw)
File "/home/dc/installed/pwndbg/pwndbg/commands/start.py", line 72, in entry
gdb.execute(run, from_tty=False)
gdb.error: During startup program exited with code 1.
If that is an issue, you can report it on https://github.com/pwndbg/pwndbg/issues
(Please don't forget to search if it hasn't been reported before)
PS: Pull requests are welcome
```
### My version
```
pwndbg> version
Gdb: GNU gdb (GDB) 8.0.1
Python: 3.6.3 (default, Oct 24 2017, 14:48:20) [GCC 7.2.0]
Pwndbg: 1.0.0 build: 5811010
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pwndbg/prompt.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 from __future__ import absolute_import
4 from __future__ import division
5 from __future__ import print_function
6 from __future__ import unicode_literals
7
8 import gdb
9
10 import pwndbg.color as C
11 import pwndbg.events
12 import pwndbg.gdbutils
13 import pwndbg.memoize
14
15 funcs_list_str = ', '.join(C.purple('$' + f.name) for f in pwndbg.gdbutils.functions.functions)
16
17 hint_lines = (
18 'loaded %i commands. Type %s for a list.' % (len(pwndbg.commands.commands), C.purple('pwndbg [filter]')),
19 'created %s gdb functions (can be used with print/break)' % funcs_list_str
20 )
21
22 for line in hint_lines:
23 print(C.light_red(pwndbg.color.bold('pwndbg: ') + line))
24
25 cur = (gdb.selected_inferior(), gdb.selected_thread())
26
27
28 def prompt_hook(*a):
29 global cur
30 new = (gdb.selected_inferior(), gdb.selected_thread())
31
32 if cur != new:
33 pwndbg.events.after_reload(start=False)
34 cur = new
35
36 if pwndbg.proc.alive and pwndbg.proc.thread_is_stopped:
37 prompt_hook_on_stop(*a)
38
39
40 @pwndbg.memoize.reset_on_stop
41 def prompt_hook_on_stop(*a):
42 pwndbg.commands.context.context()
43
44
45
46 gdb.prompt_hook = prompt_hook
47
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pwndbg/prompt.py b/pwndbg/prompt.py
--- a/pwndbg/prompt.py
+++ b/pwndbg/prompt.py
@@ -43,4 +43,5 @@
+
gdb.prompt_hook = prompt_hook
| {"golden_diff": "diff --git a/pwndbg/prompt.py b/pwndbg/prompt.py\n--- a/pwndbg/prompt.py\n+++ b/pwndbg/prompt.py\n@@ -43,4 +43,5 @@\n \n \n \n+\n gdb.prompt_hook = prompt_hook\n", "issue": "Broken `entry` command\n### Description\r\n\r\nThe `entry` command pass arguments differently then the `run` command.\r\n\r\n### Steps to reproduce\r\n\r\n```\r\n[dc@dc:pwndbg|dev *$%]$ gdb python\r\nLoaded 113 commands. Type pwndbg [filter] for a list.\r\nReading symbols from python...(no debugging symbols found)...done.\r\npwndbg> set exception-verbose on\r\nSet whether to print a full stacktracefor exceptions raised in Pwndbg commands to True\r\npwndbg> run -c \"print(1); print(2)\"\r\nStarting program: /usr/bin/python -c \"print(1); print(2)\"\r\n[Thread debugging using libthread_db enabled]\r\nUsing host libthread_db library \"/usr/lib/libthread_db.so.1\".\r\n1\r\n2\r\n[Inferior 1 (process 20590) exited normally]\r\npwndbg> entry -c \"print(1); print(2)\"\r\n('-c', 'print(1); print(2)')\r\nRunning '%s' run -c print(1); print(2)\r\n/bin/bash: -c: line 0: syntax error near unexpected token `('\r\n/bin/bash: -c: line 0: `exec /usr/bin/python -c print(1); print(2)'\r\nTraceback (most recent call last):\r\n File \"/home/dc/installed/pwndbg/pwndbg/commands/__init__.py\", line 100, in __call__\r\n return self.function(*args, **kwargs)\r\n File \"/home/dc/installed/pwndbg/pwndbg/commands/__init__.py\", line 181, in _OnlyWithFile\r\n return function(*a, **kw)\r\n File \"/home/dc/installed/pwndbg/pwndbg/commands/start.py\", line 72, in entry\r\n gdb.execute(run, from_tty=False)\r\ngdb.error: During startup program exited with code 1.\r\n\r\nIf that is an issue, you can report it on https://github.com/pwndbg/pwndbg/issues\r\n(Please don't forget to search if it hasn't been reported before)\r\nPS: Pull requests are welcome\r\n```\r\n\r\n### My version\r\n\r\n```\r\npwndbg> version\r\nGdb: GNU gdb (GDB) 8.0.1\r\nPython: 3.6.3 (default, Oct 24 2017, 14:48:20) [GCC 7.2.0]\r\nPwndbg: 1.0.0 build: 5811010\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport gdb\n\nimport pwndbg.color as C\nimport pwndbg.events\nimport pwndbg.gdbutils\nimport pwndbg.memoize\n\nfuncs_list_str = ', '.join(C.purple('$' + f.name) for f in pwndbg.gdbutils.functions.functions)\n\nhint_lines = (\n 'loaded %i commands. Type %s for a list.' 
% (len(pwndbg.commands.commands), C.purple('pwndbg [filter]')),\n 'created %s gdb functions (can be used with print/break)' % funcs_list_str\n)\n\nfor line in hint_lines:\n print(C.light_red(pwndbg.color.bold('pwndbg: ') + line))\n\ncur = (gdb.selected_inferior(), gdb.selected_thread())\n\n\ndef prompt_hook(*a):\n global cur\n new = (gdb.selected_inferior(), gdb.selected_thread())\n\n if cur != new:\n pwndbg.events.after_reload(start=False)\n cur = new\n\n if pwndbg.proc.alive and pwndbg.proc.thread_is_stopped:\n prompt_hook_on_stop(*a)\n\n\[email protected]_on_stop\ndef prompt_hook_on_stop(*a):\n pwndbg.commands.context.context()\n\n\n\ngdb.prompt_hook = prompt_hook\n", "path": "pwndbg/prompt.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport gdb\n\nimport pwndbg.color as C\nimport pwndbg.events\nimport pwndbg.gdbutils\nimport pwndbg.memoize\n\nfuncs_list_str = ', '.join(C.purple('$' + f.name) for f in pwndbg.gdbutils.functions.functions)\n\nhint_lines = (\n 'loaded %i commands. Type %s for a list.' % (len(pwndbg.commands.commands), C.purple('pwndbg [filter]')),\n 'created %s gdb functions (can be used with print/break)' % funcs_list_str\n)\n\nfor line in hint_lines:\n print(C.light_red(pwndbg.color.bold('pwndbg: ') + line))\n\ncur = (gdb.selected_inferior(), gdb.selected_thread())\n\n\ndef prompt_hook(*a):\n global cur\n new = (gdb.selected_inferior(), gdb.selected_thread())\n\n if cur != new:\n pwndbg.events.after_reload(start=False)\n cur = new\n\n if pwndbg.proc.alive and pwndbg.proc.thread_is_stopped:\n prompt_hook_on_stop(*a)\n\n\[email protected]_on_stop\ndef prompt_hook_on_stop(*a):\n pwndbg.commands.context.context()\n\n\n\n\ngdb.prompt_hook = prompt_hook\n", "path": "pwndbg/prompt.py"}]} | 1,227 | 58 |
gh_patches_debug_57501 | rasdani/github-patches | git_diff | dotkom__onlineweb4-1652 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Application for extending membership marks field of study as guest
When someone applies to get their membership prolonged their field of study is set to "guest" instead of what they previously were registered as.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `apps/approval/views.py`
Content:
```
1 # -*- encoding: utf-8 -*-
2
3 import datetime
4
5 from django.contrib import messages
6 from django.contrib.auth.decorators import login_required
7 from django.http import Http404
8 from django.shortcuts import get_object_or_404, redirect
9 from django.utils import timezone
10 from django.utils.translation import ugettext as _
11
12 from apps.approval.forms import FieldOfStudyApplicationForm
13 from apps.approval.models import MembershipApproval
14 from apps.authentication.models import AllowedUsername, get_length_of_field_of_study
15
16
17 @login_required
18 def create_fos_application(request):
19 if request.method == 'POST':
20 if not request.user.ntnu_username:
21 messages.error(request, _("Du må knytte et NTNU-brukernavn til kontoen din."))
22 return redirect('profiles_active', active_tab='membership')
23
24 form = FieldOfStudyApplicationForm(request.POST)
25 if form.is_valid():
26 cleaned = form.cleaned_data
27
28 field_of_study = int(cleaned['field_of_study'])
29
30 if field_of_study == 0:
31 messages.warning(request, _("Denne studieretningen (Gjest) er ikke et gyldig alternativ."))
32 return redirect('profiles_active', active_tab='membership')
33
34 started_day = 1
35 started_month = 0
36 started_year = int(cleaned['started_year'])
37
38 if cleaned['started_semester'] == "h":
39 started_month = 7
40 if cleaned['started_semester'] == "v":
41 started_month = 1
42
43 started_date = datetime.date(started_year, started_month, started_day)
44
45 # Does the user already have a field of study and started date?
46 if request.user.started_date and request.user.field_of_study:
47 # If there is no change from the current settings, ignore the request
48 if request.user.started_date == started_date and request.user.field_of_study == field_of_study:
49 messages.error(
50 request,
51 _("Du er allerede registrert med denne studieretningen og denne startdatoen.")
52 )
53 return redirect('profiles_active', active_tab='membership')
54
55 application = MembershipApproval(
56 applicant=request.user,
57 field_of_study=field_of_study,
58 started_date=started_date
59 )
60
61 length_of_fos = get_length_of_field_of_study(field_of_study)
62 if length_of_fos > 0:
63 application.new_expiry_date = get_expiry_date(started_year, length_of_fos)
64 application.save()
65
66 messages.success(request, _("Søknad om bytte av studieretning er sendt."))
67
68 return redirect('profiles_active', active_tab='membership')
69 raise Http404
70
71
72 def get_expiry_date(started_year, length_of_fos):
73 today = timezone.now().date()
74 # Expiry dates should be 15th September, so that we have time to get new lists from NTNU
75 new_expiry_date = datetime.date(
76 started_year, 9, 16) + datetime.timedelta(days=365*length_of_fos)
77 # Expiry dates in the past sets the expiry date to next september
78 if new_expiry_date < today:
79 if today < datetime.date(today.year, 9, 15):
80 new_expiry_date = datetime.date(today.year, 9, 15)
81 else:
82 new_expiry_date = datetime.date(
83 today.year, 9, 16) + datetime.timedelta(days=365)
84 return new_expiry_date
85
86
87 @login_required
88 def create_membership_application(request):
89 if request.method == 'POST':
90 if not request.user.has_expiring_membership:
91 messages.error(request, _("Din bruker har ikke et utløpende medlemskap."))
92 return redirect('profiles_active', active_tab='membership')
93
94 if not request.user.ntnu_username:
95 messages.error(request, _("Du må knytte et NTNU-brukernavn til kontoen din."))
96 return redirect('profiles_active', active_tab='membership')
97
98 # Extend length of membership by 1 year
99 membership = AllowedUsername.objects.get(username=request.user.ntnu_username)
100 new_expiration_date = datetime.date(membership.expiration_date.year + 1, 9, 16)
101
102 application = MembershipApproval(
103 applicant=request.user,
104 new_expiry_date=new_expiration_date,
105 )
106 application.save()
107
108 messages.success(request, _("Søknad om ett års forlenget medlemskap er sendt."))
109
110 return redirect('profiles_active', active_tab='membership')
111 raise Http404
112
113
114 @login_required
115 def cancel_application(request, application_id):
116 app = get_object_or_404(MembershipApproval, pk=application_id)
117
118 if app.applicant != request.user:
119 messages.error(request, _("Bare søkeren selv kan slette en søknad."))
120 return redirect('profiles_active', active_tab='membership')
121
122 if app.processed:
123 messages.error(request, _("Denne søknaden er behandlet og kan ikke slettes."))
124 return redirect('profiles_active', active_tab='membership')
125
126 app.delete()
127
128 return redirect('profiles_active', active_tab='membership')
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/apps/approval/views.py b/apps/approval/views.py
--- a/apps/approval/views.py
+++ b/apps/approval/views.py
@@ -101,6 +101,7 @@
application = MembershipApproval(
applicant=request.user,
+ field_of_study=request.user.field_of_study,
new_expiry_date=new_expiration_date,
)
application.save()
| {"golden_diff": "diff --git a/apps/approval/views.py b/apps/approval/views.py\n--- a/apps/approval/views.py\n+++ b/apps/approval/views.py\n@@ -101,6 +101,7 @@\n \n application = MembershipApproval(\n applicant=request.user,\n+ field_of_study=request.user.field_of_study,\n new_expiry_date=new_expiration_date,\n )\n application.save()\n", "issue": "Application for extending membership marks field of study as guest\nWhen someone applies to get their membership prolonged their field of study is set to \"guest\" instead of what they previously were registered as. \n\n", "before_files": [{"content": "# -*- encoding: utf-8 -*-\n\nimport datetime\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext as _\n\nfrom apps.approval.forms import FieldOfStudyApplicationForm\nfrom apps.approval.models import MembershipApproval\nfrom apps.authentication.models import AllowedUsername, get_length_of_field_of_study\n\n\n@login_required\ndef create_fos_application(request):\n if request.method == 'POST':\n if not request.user.ntnu_username:\n messages.error(request, _(\"Du m\u00e5 knytte et NTNU-brukernavn til kontoen din.\"))\n return redirect('profiles_active', active_tab='membership')\n\n form = FieldOfStudyApplicationForm(request.POST)\n if form.is_valid():\n cleaned = form.cleaned_data\n\n field_of_study = int(cleaned['field_of_study'])\n\n if field_of_study == 0:\n messages.warning(request, _(\"Denne studieretningen (Gjest) er ikke et gyldig alternativ.\"))\n return redirect('profiles_active', active_tab='membership')\n\n started_day = 1\n started_month = 0\n started_year = int(cleaned['started_year'])\n\n if cleaned['started_semester'] == \"h\":\n started_month = 7\n if cleaned['started_semester'] == \"v\":\n started_month = 1\n\n started_date = datetime.date(started_year, started_month, started_day)\n\n # Does the user already have a field of study and started date?\n if request.user.started_date and request.user.field_of_study:\n # If there is no change from the current settings, ignore the request\n if request.user.started_date == started_date and request.user.field_of_study == field_of_study:\n messages.error(\n request,\n _(\"Du er allerede registrert med denne studieretningen og denne startdatoen.\")\n )\n return redirect('profiles_active', active_tab='membership')\n\n application = MembershipApproval(\n applicant=request.user,\n field_of_study=field_of_study,\n started_date=started_date\n )\n\n length_of_fos = get_length_of_field_of_study(field_of_study)\n if length_of_fos > 0:\n application.new_expiry_date = get_expiry_date(started_year, length_of_fos)\n application.save()\n\n messages.success(request, _(\"S\u00f8knad om bytte av studieretning er sendt.\"))\n\n return redirect('profiles_active', active_tab='membership')\n raise Http404\n\n\ndef get_expiry_date(started_year, length_of_fos):\n today = timezone.now().date()\n # Expiry dates should be 15th September, so that we have time to get new lists from NTNU\n new_expiry_date = datetime.date(\n started_year, 9, 16) + datetime.timedelta(days=365*length_of_fos)\n # Expiry dates in the past sets the expiry date to next september\n if new_expiry_date < today:\n if today < datetime.date(today.year, 9, 15):\n new_expiry_date = datetime.date(today.year, 9, 15)\n else:\n new_expiry_date = datetime.date(\n today.year, 9, 16) + 
datetime.timedelta(days=365)\n return new_expiry_date\n\n\n@login_required\ndef create_membership_application(request):\n if request.method == 'POST':\n if not request.user.has_expiring_membership:\n messages.error(request, _(\"Din bruker har ikke et utl\u00f8pende medlemskap.\"))\n return redirect('profiles_active', active_tab='membership')\n\n if not request.user.ntnu_username:\n messages.error(request, _(\"Du m\u00e5 knytte et NTNU-brukernavn til kontoen din.\"))\n return redirect('profiles_active', active_tab='membership')\n\n # Extend length of membership by 1 year\n membership = AllowedUsername.objects.get(username=request.user.ntnu_username)\n new_expiration_date = datetime.date(membership.expiration_date.year + 1, 9, 16)\n\n application = MembershipApproval(\n applicant=request.user,\n new_expiry_date=new_expiration_date,\n )\n application.save()\n\n messages.success(request, _(\"S\u00f8knad om ett \u00e5rs forlenget medlemskap er sendt.\"))\n\n return redirect('profiles_active', active_tab='membership')\n raise Http404\n\n\n@login_required\ndef cancel_application(request, application_id):\n app = get_object_or_404(MembershipApproval, pk=application_id)\n\n if app.applicant != request.user:\n messages.error(request, _(\"Bare s\u00f8keren selv kan slette en s\u00f8knad.\"))\n return redirect('profiles_active', active_tab='membership')\n\n if app.processed:\n messages.error(request, _(\"Denne s\u00f8knaden er behandlet og kan ikke slettes.\"))\n return redirect('profiles_active', active_tab='membership')\n\n app.delete()\n\n return redirect('profiles_active', active_tab='membership')\n", "path": "apps/approval/views.py"}], "after_files": [{"content": "# -*- encoding: utf-8 -*-\n\nimport datetime\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext as _\n\nfrom apps.approval.forms import FieldOfStudyApplicationForm\nfrom apps.approval.models import MembershipApproval\nfrom apps.authentication.models import AllowedUsername, get_length_of_field_of_study\n\n\n@login_required\ndef create_fos_application(request):\n if request.method == 'POST':\n if not request.user.ntnu_username:\n messages.error(request, _(\"Du m\u00e5 knytte et NTNU-brukernavn til kontoen din.\"))\n return redirect('profiles_active', active_tab='membership')\n\n form = FieldOfStudyApplicationForm(request.POST)\n if form.is_valid():\n cleaned = form.cleaned_data\n\n field_of_study = int(cleaned['field_of_study'])\n\n if field_of_study == 0:\n messages.warning(request, _(\"Denne studieretningen (Gjest) er ikke et gyldig alternativ.\"))\n return redirect('profiles_active', active_tab='membership')\n\n started_day = 1\n started_month = 0\n started_year = int(cleaned['started_year'])\n\n if cleaned['started_semester'] == \"h\":\n started_month = 7\n if cleaned['started_semester'] == \"v\":\n started_month = 1\n\n started_date = datetime.date(started_year, started_month, started_day)\n\n # Does the user already have a field of study and started date?\n if request.user.started_date and request.user.field_of_study:\n # If there is no change from the current settings, ignore the request\n if request.user.started_date == started_date and request.user.field_of_study == field_of_study:\n messages.error(\n request,\n _(\"Du er allerede registrert med denne studieretningen og denne startdatoen.\")\n )\n return 
redirect('profiles_active', active_tab='membership')\n\n application = MembershipApproval(\n applicant=request.user,\n field_of_study=field_of_study,\n started_date=started_date\n )\n\n length_of_fos = get_length_of_field_of_study(field_of_study)\n if length_of_fos > 0:\n application.new_expiry_date = get_expiry_date(started_year, length_of_fos)\n application.save()\n\n messages.success(request, _(\"S\u00f8knad om bytte av studieretning er sendt.\"))\n\n return redirect('profiles_active', active_tab='membership')\n raise Http404\n\n\ndef get_expiry_date(started_year, length_of_fos):\n today = timezone.now().date()\n # Expiry dates should be 15th September, so that we have time to get new lists from NTNU\n new_expiry_date = datetime.date(\n started_year, 9, 16) + datetime.timedelta(days=365*length_of_fos)\n # Expiry dates in the past sets the expiry date to next september\n if new_expiry_date < today:\n if today < datetime.date(today.year, 9, 15):\n new_expiry_date = datetime.date(today.year, 9, 15)\n else:\n new_expiry_date = datetime.date(\n today.year, 9, 16) + datetime.timedelta(days=365)\n return new_expiry_date\n\n\n@login_required\ndef create_membership_application(request):\n if request.method == 'POST':\n if not request.user.has_expiring_membership:\n messages.error(request, _(\"Din bruker har ikke et utl\u00f8pende medlemskap.\"))\n return redirect('profiles_active', active_tab='membership')\n\n if not request.user.ntnu_username:\n messages.error(request, _(\"Du m\u00e5 knytte et NTNU-brukernavn til kontoen din.\"))\n return redirect('profiles_active', active_tab='membership')\n\n # Extend length of membership by 1 year\n membership = AllowedUsername.objects.get(username=request.user.ntnu_username)\n new_expiration_date = datetime.date(membership.expiration_date.year + 1, 9, 16)\n\n application = MembershipApproval(\n applicant=request.user,\n field_of_study=request.user.field_of_study,\n new_expiry_date=new_expiration_date,\n )\n application.save()\n\n messages.success(request, _(\"S\u00f8knad om ett \u00e5rs forlenget medlemskap er sendt.\"))\n\n return redirect('profiles_active', active_tab='membership')\n raise Http404\n\n\n@login_required\ndef cancel_application(request, application_id):\n app = get_object_or_404(MembershipApproval, pk=application_id)\n\n if app.applicant != request.user:\n messages.error(request, _(\"Bare s\u00f8keren selv kan slette en s\u00f8knad.\"))\n return redirect('profiles_active', active_tab='membership')\n\n if app.processed:\n messages.error(request, _(\"Denne s\u00f8knaden er behandlet og kan ikke slettes.\"))\n return redirect('profiles_active', active_tab='membership')\n\n app.delete()\n\n return redirect('profiles_active', active_tab='membership')\n", "path": "apps/approval/views.py"}]} | 1,696 | 88 |
gh_patches_debug_24042 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-4112 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
user front end: make username in idea list bold
**URL:** https://meinberlin-dev.liqd.net/projekte/brainstorming-2/
**user:** any
**expected behaviour:**
**behaviour:** better distinguish username and date of creation/editing. e.g. by making the username bolt as we do in idea detail view
**important screensize:**
**device & browser:**
**Comment/Question:**
Screenshot?
<img width="639" alt="Bildschirmfoto 2021-12-16 um 12 14 35" src="https://user-images.githubusercontent.com/35491681/146361740-cf6915cf-bef4-497d-83f5-6e8cf1154ddb.png">
testing 5024: filter bar above list on smaller screens
**URL:** https://meinberlin-dev.liqd.net/projekte/burgerhaushalt-spandau/?mode=list
**user:** any
**expected behaviour:**
**behaviour:** on screens smaller than 1200: list/map switch, sorting and filter don't look the same as old list, although old list also doesn't look good between around 850 and 1100
**important screensize:**
**device & browser:**
**Comment/Question:** or should we change that when touching the filter again? not sure though when that will be so maybe now make it look good?
Screenshot?
<img width="439" alt="Bildschirmfoto 2021-12-20 um 18 03 19" src="https://user-images.githubusercontent.com/35491681/146957441-4e6c4f98-609e-4d85-a21b-a7d2b6964909.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/budgeting/api.py`
Content:
```
1 from django.utils.translation import get_language
2 from django.utils.translation import gettext_lazy as _
3 from django_filters.rest_framework import DjangoFilterBackend
4 from rest_framework import mixins
5 from rest_framework import viewsets
6 from rest_framework.filters import OrderingFilter
7 from rest_framework.pagination import PageNumberPagination
8
9 from adhocracy4.api.mixins import ModuleMixin
10 from adhocracy4.api.permissions import ViewSetRulesPermission
11 from adhocracy4.categories import get_category_icon_url
12 from adhocracy4.categories import has_icons
13 from adhocracy4.categories.models import Category
14 from meinberlin.apps.contrib.filters import IdeaCategoryFilterBackend
15 from meinberlin.apps.votes.api import VotingTokenInfoMixin
16
17 from .models import Proposal
18 from .serializers import ProposalSerializer
19
20
21 # To be changed to a more general IdeaPagination, when using
22 # pagination via rest api for more idea lists
23 class ProposalPagination(PageNumberPagination):
24 page_size = 15
25
26 def get_paginated_response(self, data):
27 response = super(ProposalPagination, self).get_paginated_response(data)
28 response.data['page_size'] = self.page_size
29 response.data['page_count'] = self.page.paginator.num_pages
30 return response
31
32
33 class LocaleInfoMixin:
34 def list(self, request, *args, **kwargs):
35 response = super().list(request, args, kwargs)
36 response.data['locale'] = get_language()
37 return response
38
39
40 class ProposalFilterInfoMixin(ModuleMixin):
41 def list(self, request, *args, **kwargs):
42 """Add the filter information to the data of the Proposal API.
43
44 Needs to be used with rest_framework.mixins.ListModelMixin
45 """
46 filters = {}
47
48 ordering_choices = [('-created', _('Most recent')), ]
49 if self.module.has_feature('rate', Proposal):
50 ordering_choices += ('-positive_rating_count', _('Most popular')),
51 ordering_choices += ('-comment_count', _('Most commented')),
52
53 filters['ordering'] = {
54 'label': _('Ordering'),
55 'choices': ordering_choices,
56 'default': '-created',
57 }
58
59 categories = Category.objects.filter(
60 module=self.module
61 )
62 if categories:
63 category_choices = [('', _('All')), ]
64 if has_icons(self.module):
65 category_icons = []
66 for category in categories:
67 category_choices += (str(category.pk), category.name),
68 if has_icons(self.module):
69 icon_name = getattr(category, 'icon', None)
70 icon_url = get_category_icon_url(icon_name)
71 category_icons += (str(category.pk), icon_url),
72
73 filters['category'] = {
74 'label': _('Category'),
75 'choices': category_choices,
76 }
77 if has_icons(self.module):
78 filters['category']['icons'] = category_icons
79
80 filters['is_archived'] = {
81 'label': _('Archived'),
82 'choices': [
83 ('', _('All')),
84 ('false', _('No')),
85 ('true', _('Yes')),
86 ],
87 'default': 'false',
88 }
89
90 response = super().list(request, args, kwargs)
91 response.data['filters'] = filters
92 return response
93
94
95 class ProposalViewSet(ProposalFilterInfoMixin,
96 LocaleInfoMixin,
97 VotingTokenInfoMixin,
98 mixins.ListModelMixin,
99 viewsets.GenericViewSet,
100 ):
101
102 pagination_class = ProposalPagination
103 serializer_class = ProposalSerializer
104 permission_classes = (ViewSetRulesPermission,)
105 filter_backends = (DjangoFilterBackend,
106 OrderingFilter,
107 IdeaCategoryFilterBackend,)
108 filter_fields = ('is_archived', 'category',)
109 ordering_fields = ('created',
110 'comment_count',
111 'positive_rating_count',)
112
113 def get_permission_object(self):
114 return self.module
115
116 def get_queryset(self):
117 proposals = Proposal.objects\
118 .filter(module=self.module) \
119 .annotate_comment_count() \
120 .annotate_positive_rating_count() \
121 .annotate_negative_rating_count() \
122 .order_by('-created')
123 return proposals
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/meinberlin/apps/budgeting/api.py b/meinberlin/apps/budgeting/api.py
--- a/meinberlin/apps/budgeting/api.py
+++ b/meinberlin/apps/budgeting/api.py
@@ -45,17 +45,6 @@
"""
filters = {}
- ordering_choices = [('-created', _('Most recent')), ]
- if self.module.has_feature('rate', Proposal):
- ordering_choices += ('-positive_rating_count', _('Most popular')),
- ordering_choices += ('-comment_count', _('Most commented')),
-
- filters['ordering'] = {
- 'label': _('Ordering'),
- 'choices': ordering_choices,
- 'default': '-created',
- }
-
categories = Category.objects.filter(
module=self.module
)
@@ -87,6 +76,17 @@
'default': 'false',
}
+ ordering_choices = [('-created', _('Most recent')), ]
+ if self.module.has_feature('rate', Proposal):
+ ordering_choices += ('-positive_rating_count', _('Most popular')),
+ ordering_choices += ('-comment_count', _('Most commented')),
+
+ filters['ordering'] = {
+ 'label': _('Ordering'),
+ 'choices': ordering_choices,
+ 'default': '-created',
+ }
+
response = super().list(request, args, kwargs)
response.data['filters'] = filters
return response
| {"golden_diff": "diff --git a/meinberlin/apps/budgeting/api.py b/meinberlin/apps/budgeting/api.py\n--- a/meinberlin/apps/budgeting/api.py\n+++ b/meinberlin/apps/budgeting/api.py\n@@ -45,17 +45,6 @@\n \"\"\"\n filters = {}\n \n- ordering_choices = [('-created', _('Most recent')), ]\n- if self.module.has_feature('rate', Proposal):\n- ordering_choices += ('-positive_rating_count', _('Most popular')),\n- ordering_choices += ('-comment_count', _('Most commented')),\n-\n- filters['ordering'] = {\n- 'label': _('Ordering'),\n- 'choices': ordering_choices,\n- 'default': '-created',\n- }\n-\n categories = Category.objects.filter(\n module=self.module\n )\n@@ -87,6 +76,17 @@\n 'default': 'false',\n }\n \n+ ordering_choices = [('-created', _('Most recent')), ]\n+ if self.module.has_feature('rate', Proposal):\n+ ordering_choices += ('-positive_rating_count', _('Most popular')),\n+ ordering_choices += ('-comment_count', _('Most commented')),\n+\n+ filters['ordering'] = {\n+ 'label': _('Ordering'),\n+ 'choices': ordering_choices,\n+ 'default': '-created',\n+ }\n+\n response = super().list(request, args, kwargs)\n response.data['filters'] = filters\n return response\n", "issue": "user front end: make username in idea list bold\n**URL:** https://meinberlin-dev.liqd.net/projekte/brainstorming-2/\r\n**user:** any\r\n**expected behaviour:** \r\n**behaviour:** better distinguish username and date of creation/editing. e.g. by making the username bolt as we do in idea detail view\r\n**important screensize:**\r\n**device & browser:** \r\n**Comment/Question:** \r\n\r\n\r\n\r\nScreenshot?\r\n<img width=\"639\" alt=\"Bildschirmfoto 2021-12-16 um 12 14 35\" src=\"https://user-images.githubusercontent.com/35491681/146361740-cf6915cf-bef4-497d-83f5-6e8cf1154ddb.png\">\r\n\r\n\ntesting 5024: filter bar above list on smaller screens\n**URL:** https://meinberlin-dev.liqd.net/projekte/burgerhaushalt-spandau/?mode=list\r\n**user:** any\r\n**expected behaviour:**\r\n**behaviour:** on screens smaller than 1200: list/map switch, sorting and filter don't look the same as old list, although old list also doesn't look good between around 850 and 1100 \r\n**important screensize:**\r\n**device & browser:** \r\n**Comment/Question:** or should we change that when touching the filter again? 
not sure though when that will be so maybe now make it look good?\r\n\r\nScreenshot?\r\n\r\n<img width=\"439\" alt=\"Bildschirmfoto 2021-12-20 um 18 03 19\" src=\"https://user-images.githubusercontent.com/35491681/146957441-4e6c4f98-609e-4d85-a21b-a7d2b6964909.png\">\r\n\r\n\n", "before_files": [{"content": "from django.utils.translation import get_language\nfrom django.utils.translation import gettext_lazy as _\nfrom django_filters.rest_framework import DjangoFilterBackend\nfrom rest_framework import mixins\nfrom rest_framework import viewsets\nfrom rest_framework.filters import OrderingFilter\nfrom rest_framework.pagination import PageNumberPagination\n\nfrom adhocracy4.api.mixins import ModuleMixin\nfrom adhocracy4.api.permissions import ViewSetRulesPermission\nfrom adhocracy4.categories import get_category_icon_url\nfrom adhocracy4.categories import has_icons\nfrom adhocracy4.categories.models import Category\nfrom meinberlin.apps.contrib.filters import IdeaCategoryFilterBackend\nfrom meinberlin.apps.votes.api import VotingTokenInfoMixin\n\nfrom .models import Proposal\nfrom .serializers import ProposalSerializer\n\n\n# To be changed to a more general IdeaPagination, when using\n# pagination via rest api for more idea lists\nclass ProposalPagination(PageNumberPagination):\n page_size = 15\n\n def get_paginated_response(self, data):\n response = super(ProposalPagination, self).get_paginated_response(data)\n response.data['page_size'] = self.page_size\n response.data['page_count'] = self.page.paginator.num_pages\n return response\n\n\nclass LocaleInfoMixin:\n def list(self, request, *args, **kwargs):\n response = super().list(request, args, kwargs)\n response.data['locale'] = get_language()\n return response\n\n\nclass ProposalFilterInfoMixin(ModuleMixin):\n def list(self, request, *args, **kwargs):\n \"\"\"Add the filter information to the data of the Proposal API.\n\n Needs to be used with rest_framework.mixins.ListModelMixin\n \"\"\"\n filters = {}\n\n ordering_choices = [('-created', _('Most recent')), ]\n if self.module.has_feature('rate', Proposal):\n ordering_choices += ('-positive_rating_count', _('Most popular')),\n ordering_choices += ('-comment_count', _('Most commented')),\n\n filters['ordering'] = {\n 'label': _('Ordering'),\n 'choices': ordering_choices,\n 'default': '-created',\n }\n\n categories = Category.objects.filter(\n module=self.module\n )\n if categories:\n category_choices = [('', _('All')), ]\n if has_icons(self.module):\n category_icons = []\n for category in categories:\n category_choices += (str(category.pk), category.name),\n if has_icons(self.module):\n icon_name = getattr(category, 'icon', None)\n icon_url = get_category_icon_url(icon_name)\n category_icons += (str(category.pk), icon_url),\n\n filters['category'] = {\n 'label': _('Category'),\n 'choices': category_choices,\n }\n if has_icons(self.module):\n filters['category']['icons'] = category_icons\n\n filters['is_archived'] = {\n 'label': _('Archived'),\n 'choices': [\n ('', _('All')),\n ('false', _('No')),\n ('true', _('Yes')),\n ],\n 'default': 'false',\n }\n\n response = super().list(request, args, kwargs)\n response.data['filters'] = filters\n return response\n\n\nclass ProposalViewSet(ProposalFilterInfoMixin,\n LocaleInfoMixin,\n VotingTokenInfoMixin,\n mixins.ListModelMixin,\n viewsets.GenericViewSet,\n ):\n\n pagination_class = ProposalPagination\n serializer_class = ProposalSerializer\n permission_classes = (ViewSetRulesPermission,)\n filter_backends = (DjangoFilterBackend,\n OrderingFilter,\n 
IdeaCategoryFilterBackend,)\n filter_fields = ('is_archived', 'category',)\n ordering_fields = ('created',\n 'comment_count',\n 'positive_rating_count',)\n\n def get_permission_object(self):\n return self.module\n\n def get_queryset(self):\n proposals = Proposal.objects\\\n .filter(module=self.module) \\\n .annotate_comment_count() \\\n .annotate_positive_rating_count() \\\n .annotate_negative_rating_count() \\\n .order_by('-created')\n return proposals\n", "path": "meinberlin/apps/budgeting/api.py"}], "after_files": [{"content": "from django.utils.translation import get_language\nfrom django.utils.translation import gettext_lazy as _\nfrom django_filters.rest_framework import DjangoFilterBackend\nfrom rest_framework import mixins\nfrom rest_framework import viewsets\nfrom rest_framework.filters import OrderingFilter\nfrom rest_framework.pagination import PageNumberPagination\n\nfrom adhocracy4.api.mixins import ModuleMixin\nfrom adhocracy4.api.permissions import ViewSetRulesPermission\nfrom adhocracy4.categories import get_category_icon_url\nfrom adhocracy4.categories import has_icons\nfrom adhocracy4.categories.models import Category\nfrom meinberlin.apps.contrib.filters import IdeaCategoryFilterBackend\nfrom meinberlin.apps.votes.api import VotingTokenInfoMixin\n\nfrom .models import Proposal\nfrom .serializers import ProposalSerializer\n\n\n# To be changed to a more general IdeaPagination, when using\n# pagination via rest api for more idea lists\nclass ProposalPagination(PageNumberPagination):\n page_size = 15\n\n def get_paginated_response(self, data):\n response = super(ProposalPagination, self).get_paginated_response(data)\n response.data['page_size'] = self.page_size\n response.data['page_count'] = self.page.paginator.num_pages\n return response\n\n\nclass LocaleInfoMixin:\n def list(self, request, *args, **kwargs):\n response = super().list(request, args, kwargs)\n response.data['locale'] = get_language()\n return response\n\n\nclass ProposalFilterInfoMixin(ModuleMixin):\n def list(self, request, *args, **kwargs):\n \"\"\"Add the filter information to the data of the Proposal API.\n\n Needs to be used with rest_framework.mixins.ListModelMixin\n \"\"\"\n filters = {}\n\n categories = Category.objects.filter(\n module=self.module\n )\n if categories:\n category_choices = [('', _('All')), ]\n if has_icons(self.module):\n category_icons = []\n for category in categories:\n category_choices += (str(category.pk), category.name),\n if has_icons(self.module):\n icon_name = getattr(category, 'icon', None)\n icon_url = get_category_icon_url(icon_name)\n category_icons += (str(category.pk), icon_url),\n\n filters['category'] = {\n 'label': _('Category'),\n 'choices': category_choices,\n }\n if has_icons(self.module):\n filters['category']['icons'] = category_icons\n\n filters['is_archived'] = {\n 'label': _('Archived'),\n 'choices': [\n ('', _('All')),\n ('false', _('No')),\n ('true', _('Yes')),\n ],\n 'default': 'false',\n }\n\n ordering_choices = [('-created', _('Most recent')), ]\n if self.module.has_feature('rate', Proposal):\n ordering_choices += ('-positive_rating_count', _('Most popular')),\n ordering_choices += ('-comment_count', _('Most commented')),\n\n filters['ordering'] = {\n 'label': _('Ordering'),\n 'choices': ordering_choices,\n 'default': '-created',\n }\n\n response = super().list(request, args, kwargs)\n response.data['filters'] = filters\n return response\n\n\nclass ProposalViewSet(ProposalFilterInfoMixin,\n LocaleInfoMixin,\n VotingTokenInfoMixin,\n 
mixins.ListModelMixin,\n viewsets.GenericViewSet,\n ):\n\n pagination_class = ProposalPagination\n serializer_class = ProposalSerializer\n permission_classes = (ViewSetRulesPermission,)\n filter_backends = (DjangoFilterBackend,\n OrderingFilter,\n IdeaCategoryFilterBackend,)\n filter_fields = ('is_archived', 'category',)\n ordering_fields = ('created',\n 'comment_count',\n 'positive_rating_count',)\n\n def get_permission_object(self):\n return self.module\n\n def get_queryset(self):\n proposals = Proposal.objects\\\n .filter(module=self.module) \\\n .annotate_comment_count() \\\n .annotate_positive_rating_count() \\\n .annotate_negative_rating_count() \\\n .order_by('-created')\n return proposals\n", "path": "meinberlin/apps/budgeting/api.py"}]} | 1,817 | 320 |
gh_patches_debug_28459 | rasdani/github-patches | git_diff | saleor__saleor-336 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Removing category redirects to category update view
Confirmation modal contains "update" button that redirects to update view, instead of sending a POST request to remove the category.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/dashboard/category/views.py`
Content:
```
1 from django.contrib import messages
2 from django.shortcuts import get_object_or_404, redirect
3 from django.template.response import TemplateResponse
4 from django.utils.translation import ugettext_lazy as _
5
6 from ...product.models import Category
7 from ..views import staff_member_required
8 from .forms import CategoryForm
9
10
11 @staff_member_required
12 def category_list(request, root_pk=None):
13 root = None
14 path = None
15 categories = Category.tree.root_nodes()
16 if root_pk:
17 root = get_object_or_404(Category, pk=root_pk)
18 path = root.get_ancestors(include_self=True) if root else []
19 categories = root.get_children()
20 ctx = {'categories': categories, 'path': path, 'root': root}
21 return TemplateResponse(request, 'dashboard/category/list.html', ctx)
22
23
24 @staff_member_required
25 def category_create(request, root_pk=None):
26 category = Category()
27 form = CategoryForm(request.POST or None, parent_pk=root_pk)
28 if form.is_valid():
29 category = form.save()
30 messages.success(request, _('Added category %s') % category)
31 if root_pk:
32 return redirect('dashboard:category-list', root_pk=root_pk)
33 else:
34 return redirect('dashboard:category-list')
35 ctx = {'category': category, 'form': form}
36 return TemplateResponse(request, 'dashboard/category/detail.html', ctx)
37
38
39 @staff_member_required
40 def category_edit(request, root_pk=None):
41 category = get_object_or_404(Category, pk=root_pk)
42 form = CategoryForm(request.POST or None, instance=category,
43 parent_pk=category.parent_id)
44 status = 200
45 if form.is_valid():
46 category = form.save()
47 messages.success(request, _('Added category %s') % category)
48 if root_pk:
49 return redirect('dashboard:category-list', root_pk=root_pk)
50 else:
51 return redirect('dashboard:category-list')
52 elif form.errors:
53 status = 400
54 ctx = {'category': category, 'form': form, 'status': status}
55 template = 'dashboard/category/modal_edit.html'
56 return TemplateResponse(request, template, ctx, status=status)
57
58
59 @staff_member_required
60 def category_delete(request, pk):
61 category = get_object_or_404(Category, pk=pk)
62 if request.method == 'POST':
63 category.delete()
64 messages.success(request, _('Deleted category %s') % category)
65 root_pk = None
66 if category.parent:
67 root_pk = category.parent.pk
68 if root_pk:
69 return redirect('dashboard:category-list', root_pk=root_pk)
70 else:
71 return redirect('dashboard:category-list')
72 ctx = {'category': category,
73 'descendants': list(category.get_descendants()),
74 'products_count': len(category.products.all())}
75 return TemplateResponse(request,
76 'dashboard/category/modal_category_confirm_delete.html',
77 ctx)
78
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/dashboard/category/views.py b/saleor/dashboard/category/views.py
--- a/saleor/dashboard/category/views.py
+++ b/saleor/dashboard/category/views.py
@@ -1,4 +1,6 @@
from django.contrib import messages
+from django.core.urlresolvers import reverse
+from django.http import JsonResponse
from django.shortcuts import get_object_or_404, redirect
from django.template.response import TemplateResponse
from django.utils.translation import ugettext_lazy as _
@@ -66,12 +68,19 @@
if category.parent:
root_pk = category.parent.pk
if root_pk:
+ if request.is_ajax():
+ response = {'redirectUrl': reverse(
+ 'dashboard:category-list', kwargs={'root_pk': root_pk})}
+ return JsonResponse(response)
return redirect('dashboard:category-list', root_pk=root_pk)
else:
+ if request.is_ajax():
+ response = {'redirectUrl': reverse('dashboard:category-list')}
+ return JsonResponse(response)
return redirect('dashboard:category-list')
ctx = {'category': category,
'descendants': list(category.get_descendants()),
'products_count': len(category.products.all())}
return TemplateResponse(request,
- 'dashboard/category/modal_category_confirm_delete.html',
+ 'dashboard/category/modal_delete.html',
ctx)
| {"golden_diff": "diff --git a/saleor/dashboard/category/views.py b/saleor/dashboard/category/views.py\n--- a/saleor/dashboard/category/views.py\n+++ b/saleor/dashboard/category/views.py\n@@ -1,4 +1,6 @@\n from django.contrib import messages\n+from django.core.urlresolvers import reverse\n+from django.http import JsonResponse\n from django.shortcuts import get_object_or_404, redirect\n from django.template.response import TemplateResponse\n from django.utils.translation import ugettext_lazy as _\n@@ -66,12 +68,19 @@\n if category.parent:\n root_pk = category.parent.pk\n if root_pk:\n+ if request.is_ajax():\n+ response = {'redirectUrl': reverse(\n+ 'dashboard:category-list', kwargs={'root_pk': root_pk})}\n+ return JsonResponse(response)\n return redirect('dashboard:category-list', root_pk=root_pk)\n else:\n+ if request.is_ajax():\n+ response = {'redirectUrl': reverse('dashboard:category-list')}\n+ return JsonResponse(response)\n return redirect('dashboard:category-list')\n ctx = {'category': category,\n 'descendants': list(category.get_descendants()),\n 'products_count': len(category.products.all())}\n return TemplateResponse(request,\n- 'dashboard/category/modal_category_confirm_delete.html',\n+ 'dashboard/category/modal_delete.html',\n ctx)\n", "issue": "Removing category redirects to category update view \nConfirmation modal contains \"update\" button that redirects to update view, instead of sending a POST request to remove the category.\n\n", "before_files": [{"content": "from django.contrib import messages\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom ...product.models import Category\nfrom ..views import staff_member_required\nfrom .forms import CategoryForm\n\n\n@staff_member_required\ndef category_list(request, root_pk=None):\n root = None\n path = None\n categories = Category.tree.root_nodes()\n if root_pk:\n root = get_object_or_404(Category, pk=root_pk)\n path = root.get_ancestors(include_self=True) if root else []\n categories = root.get_children()\n ctx = {'categories': categories, 'path': path, 'root': root}\n return TemplateResponse(request, 'dashboard/category/list.html', ctx)\n\n\n@staff_member_required\ndef category_create(request, root_pk=None):\n category = Category()\n form = CategoryForm(request.POST or None, parent_pk=root_pk)\n if form.is_valid():\n category = form.save()\n messages.success(request, _('Added category %s') % category)\n if root_pk:\n return redirect('dashboard:category-list', root_pk=root_pk)\n else:\n return redirect('dashboard:category-list')\n ctx = {'category': category, 'form': form}\n return TemplateResponse(request, 'dashboard/category/detail.html', ctx)\n\n\n@staff_member_required\ndef category_edit(request, root_pk=None):\n category = get_object_or_404(Category, pk=root_pk)\n form = CategoryForm(request.POST or None, instance=category,\n parent_pk=category.parent_id)\n status = 200\n if form.is_valid():\n category = form.save()\n messages.success(request, _('Added category %s') % category)\n if root_pk:\n return redirect('dashboard:category-list', root_pk=root_pk)\n else:\n return redirect('dashboard:category-list')\n elif form.errors:\n status = 400\n ctx = {'category': category, 'form': form, 'status': status}\n template = 'dashboard/category/modal_edit.html'\n return TemplateResponse(request, template, ctx, status=status)\n\n\n@staff_member_required\ndef category_delete(request, pk):\n category = 
get_object_or_404(Category, pk=pk)\n if request.method == 'POST':\n category.delete()\n messages.success(request, _('Deleted category %s') % category)\n root_pk = None\n if category.parent:\n root_pk = category.parent.pk\n if root_pk:\n return redirect('dashboard:category-list', root_pk=root_pk)\n else:\n return redirect('dashboard:category-list')\n ctx = {'category': category,\n 'descendants': list(category.get_descendants()),\n 'products_count': len(category.products.all())}\n return TemplateResponse(request,\n 'dashboard/category/modal_category_confirm_delete.html',\n ctx)\n", "path": "saleor/dashboard/category/views.py"}], "after_files": [{"content": "from django.contrib import messages\nfrom django.core.urlresolvers import reverse\nfrom django.http import JsonResponse\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom ...product.models import Category\nfrom ..views import staff_member_required\nfrom .forms import CategoryForm\n\n\n@staff_member_required\ndef category_list(request, root_pk=None):\n root = None\n path = None\n categories = Category.tree.root_nodes()\n if root_pk:\n root = get_object_or_404(Category, pk=root_pk)\n path = root.get_ancestors(include_self=True) if root else []\n categories = root.get_children()\n ctx = {'categories': categories, 'path': path, 'root': root}\n return TemplateResponse(request, 'dashboard/category/list.html', ctx)\n\n\n@staff_member_required\ndef category_create(request, root_pk=None):\n category = Category()\n form = CategoryForm(request.POST or None, parent_pk=root_pk)\n if form.is_valid():\n category = form.save()\n messages.success(request, _('Added category %s') % category)\n if root_pk:\n return redirect('dashboard:category-list', root_pk=root_pk)\n else:\n return redirect('dashboard:category-list')\n ctx = {'category': category, 'form': form}\n return TemplateResponse(request, 'dashboard/category/detail.html', ctx)\n\n\n@staff_member_required\ndef category_edit(request, root_pk=None):\n category = get_object_or_404(Category, pk=root_pk)\n form = CategoryForm(request.POST or None, instance=category,\n parent_pk=category.parent_id)\n status = 200\n if form.is_valid():\n category = form.save()\n messages.success(request, _('Added category %s') % category)\n if root_pk:\n return redirect('dashboard:category-list', root_pk=root_pk)\n else:\n return redirect('dashboard:category-list')\n elif form.errors:\n status = 400\n ctx = {'category': category, 'form': form, 'status': status}\n template = 'dashboard/category/modal_edit.html'\n return TemplateResponse(request, template, ctx, status=status)\n\n\n@staff_member_required\ndef category_delete(request, pk):\n category = get_object_or_404(Category, pk=pk)\n if request.method == 'POST':\n category.delete()\n messages.success(request, _('Deleted category %s') % category)\n root_pk = None\n if category.parent:\n root_pk = category.parent.pk\n if root_pk:\n if request.is_ajax():\n response = {'redirectUrl': reverse(\n 'dashboard:category-list', kwargs={'root_pk': root_pk})}\n return JsonResponse(response)\n return redirect('dashboard:category-list', root_pk=root_pk)\n else:\n if request.is_ajax():\n response = {'redirectUrl': reverse('dashboard:category-list')}\n return JsonResponse(response)\n return redirect('dashboard:category-list')\n ctx = {'category': category,\n 'descendants': list(category.get_descendants()),\n 'products_count': len(category.products.all())}\n return 
TemplateResponse(request,\n 'dashboard/category/modal_delete.html',\n ctx)\n", "path": "saleor/dashboard/category/views.py"}]} | 1,058 | 289 |