| problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens_prompt | num_tokens_diff |
|---|---|---|---|---|---|---|---|---|
| stringlengths 18-22 | stringclasses 1 value | stringclasses 1 value | stringlengths 13-58 | stringlengths 1.71k-9.01k | stringlengths 151-4.94k | stringlengths 465-11.3k | int64 557-2.05k | int64 48-1.02k |
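A minimal way to pull the rows below with the Hugging Face `datasets` library — assuming the Hub dataset id matches the `source` value shown in each row and that a `train` split exists:

```python
from datasets import load_dataset

# Assumption: the Hub dataset id equals the "source" column ("rasdani/github-patches").
ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
print(row["problem_id"], row["in_source_id"], row["num_tokens_prompt"])
```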
gh_patches_debug_27942 | rasdani/github-patches | git_diff | pyjanitor-devs__pyjanitor-461 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[DOC] API Documentation for Biology functions is not standardized
# Brief Description of Fix
<!-- Please describe the fix in terms of a "before" and "after". In other words, what's not so good about the current docs
page, and what you would like to see it become.
Example starter wording is provided. -->
Currently, the docs do not have a standard docstring format for functions.
### Example of inconsistent docstring
> (no `Returns`, no `Functional usage example`, no `Method chaining example`)
<img width="690" alt="incorrect_func_doc" src="https://user-images.githubusercontent.com/24984410/61177779-6936e100-a5a2-11e9-8332-0c02bad7b5bf.png">
### Example of a correct/desired docstring
<img width="716" alt="correct_func_doc" src="https://user-images.githubusercontent.com/24984410/61177795-c5016a00-a5a2-11e9-8cd9-415f440d95c2.png">
I would like to propose a change, such that now the docs contain a **standardized** docstring suite. All functions should contain (at a minimum) the following:
- `Parameters`
- ` Returns`
- `Functional usage example`
- `Method chaining example`
**NOTE**: This can be done for all functions within the `janitor` directory. For ease of review, this will focus on the `biology.py` file and move to other files/functions as time permits.
# Relevant Context
<!-- Please put here, in bullet points, links to the relevant docs page. A few starting template points are available
to get you started. -->
- [Link to documentation page](https://pyjanitor.readthedocs.io/reference/biology.html)
- [Link to exact file to be edited](https://github.com/loganthomas/pyjanitor/blob/dev/janitor/biology.py)
</issue>
<code>
[start of janitor/biology.py]
1 """
2 Biology and bioinformatics-oriented data cleaning functions.
3 """
4
5 import pandas as pd
6 import pandas_flavor as pf
7
8 from .utils import deprecated_alias, import_message
9
10 try:
11 from Bio import SeqIO
12 except ImportError:
13 import_message(
14 "biology", "biopython", "conda install -c conda-forge biopython"
15 )
16
17
18 @pf.register_dataframe_method
19 @deprecated_alias(col_name="column_name")
20 def join_fasta(
21 df: pd.DataFrame, filename: str, id_col: str, column_name
22 ) -> pd.DataFrame:
23 """
24 Convenience method to join in a FASTA file as a column.
25
26 This allows us to add the string sequence of a FASTA file as a new column
27 of data in the dataframe.
28
29 This method only attaches the string representation of the SeqRecord.Seq
30 object from Biopython. Does not attach the full SeqRecord. Alphabet is
31 also not stored, under the assumption that the data scientist has domain
32 knowledge of what kind of sequence is being read in (nucleotide vs. amino
33 acid.)
34
35 This method mutates the original DataFrame.
36
37 For more advanced functions, please use phylopandas.
38
39 :param df: A pandas DataFrame.
40 :param filename: Path to the FASTA file.
41 :param id_col: The column in the DataFrame that houses sequence IDs.
42 :param column_name: The name of the new column.
43 """
44 seqrecords = {
45 x.id: x.seq.__str__() for x in SeqIO.parse(filename, "fasta")
46 }
47 seq_col = [seqrecords[i] for i in df[id_col]]
48 df[column_name] = seq_col
49 return df
50
[end of janitor/biology.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> |
diff --git a/janitor/biology.py b/janitor/biology.py
--- a/janitor/biology.py
+++ b/janitor/biology.py
@@ -18,7 +18,7 @@
@pf.register_dataframe_method
@deprecated_alias(col_name="column_name")
def join_fasta(
- df: pd.DataFrame, filename: str, id_col: str, column_name
+ df: pd.DataFrame, filename: str, id_col: str, column_name: str
) -> pd.DataFrame:
"""
Convenience method to join in a FASTA file as a column.
@@ -36,10 +36,37 @@
For more advanced functions, please use phylopandas.
+ Functional usage example:
+
+ .. code-block:: python
+
+ import janitor.biology
+
+ df = janitor.biology.join_fasta(
+ df=df,
+ filename='fasta_file.fasta',
+ id_col='sequence_accession',
+ column_name='sequence',
+ )
+
+ Method chaining example:
+
+ .. code-block:: python
+
+ import pandas as pd
+ import janitor.biology
+
+ df = pd.DataFrame(...).join_fasta(
+ filename='fasta_file.fasta',
+ id_col='sequence_accession',
+ column_name='sequence',
+ )
+
:param df: A pandas DataFrame.
:param filename: Path to the FASTA file.
:param id_col: The column in the DataFrame that houses sequence IDs.
:param column_name: The name of the new column.
+ :returns: A pandas DataFrame with new FASTA string sequence column.
"""
seqrecords = {
x.id: x.seq.__str__() for x in SeqIO.parse(filename, "fasta")
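The golden diff above fills in the sections the issue asks for on `join_fasta`. As a rough template distilled from it — `example_function` and its column name are hypothetical, not part of pyjanitor — the same layout generalizes to any function in the `janitor` directory:

```python
import pandas as pd


def example_function(df: pd.DataFrame, column_name: str) -> pd.DataFrame:
    """
    One-line summary of what the function does.

    Functional usage example:

    .. code-block:: python

        import janitor

        df = example_function(df, column_name="new_col")

    Method chaining example:

    .. code-block:: python

        df = pd.DataFrame(...).example_function(column_name="new_col")

    :param df: A pandas DataFrame.
    :param column_name: The name of the new column.
    :returns: A pandas DataFrame with the new column attached.
    """
    df[column_name] = 0
    return df
```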
| {"golden_diff": "diff --git a/janitor/biology.py b/janitor/biology.py\n--- a/janitor/biology.py\n+++ b/janitor/biology.py\n@@ -18,7 +18,7 @@\n @pf.register_dataframe_method\n @deprecated_alias(col_name=\"column_name\")\n def join_fasta(\n- df: pd.DataFrame, filename: str, id_col: str, column_name\n+ df: pd.DataFrame, filename: str, id_col: str, column_name: str\n ) -> pd.DataFrame:\n \"\"\"\n Convenience method to join in a FASTA file as a column.\n@@ -36,10 +36,37 @@\n \n For more advanced functions, please use phylopandas.\n \n+ Functional usage example:\n+\n+ .. code-block:: python\n+\n+ import janitor.biology\n+\n+ df = janitor.biology.join_fasta(\n+ df=df,\n+ filename='fasta_file.fasta',\n+ id_col='sequence_accession',\n+ column_name='sequence',\n+ )\n+\n+ Method chaining example:\n+\n+ .. code-block:: python\n+\n+ import pandas as pd\n+ import janitor.biology\n+\n+ df = pd.DataFrame(...).join_fasta(\n+ filename='fasta_file.fasta',\n+ id_col='sequence_accession',\n+ column_name='sequence',\n+ )\n+\n :param df: A pandas DataFrame.\n :param filename: Path to the FASTA file.\n :param id_col: The column in the DataFrame that houses sequence IDs.\n :param column_name: The name of the new column.\n+ :returns: A pandas DataFrame with new FASTA string sequence column.\n \"\"\"\n seqrecords = {\n x.id: x.seq.__str__() for x in SeqIO.parse(filename, \"fasta\")\n", "issue": "[DOC] API Documentation for Biology functions is not standardized\n# Brief Description of Fix\r\n\r\n<!-- Please describe the fix in terms of a \"before\" and \"after\". In other words, what's not so good about the current docs\r\npage, and what you would like to see it become. \r\n\r\nExample starter wording is provided. -->\r\n\r\nCurrently, the docs do not have a standard docstring format for functions.\r\n### Example of inconsistent docstring\r\n> (no `Returns`, no `Functional usage example`, no `Method chaining example`)\r\n<img width=\"690\" alt=\"incorrect_func_doc\" src=\"https://user-images.githubusercontent.com/24984410/61177779-6936e100-a5a2-11e9-8332-0c02bad7b5bf.png\">\r\n\r\n### Example of a correct/desired docstring\r\n<img width=\"716\" alt=\"correct_func_doc\" src=\"https://user-images.githubusercontent.com/24984410/61177795-c5016a00-a5a2-11e9-8cd9-415f440d95c2.png\">\r\n\r\nI would like to propose a change, such that now the docs contain a **standardized** docstring suite. All functions should contain (at a minimum) the following:\r\n - `Parameters`\r\n - ` Returns`\r\n - `Functional usage example`\r\n - `Method chaining example`\r\n\r\n**NOTE**: This can be done for all functions within the `janitor` directory. For ease of review, this will focus on the `biology.py` file and move to other files/functions as time permits.\r\n\r\n# Relevant Context\r\n\r\n<!-- Please put here, in bullet points, links to the relevant docs page. A few starting template points are available\r\nto get you started. -->\r\n\r\n- [Link to documentation page](https://pyjanitor.readthedocs.io/reference/biology.html)\r\n- [Link to exact file to be edited](https://github.com/loganthomas/pyjanitor/blob/dev/janitor/biology.py)\r\n\n[DOC] API Documentation for Biology functions is not standardized\n# Brief Description of Fix\r\n\r\n<!-- Please describe the fix in terms of a \"before\" and \"after\". In other words, what's not so good about the current docs\r\npage, and what you would like to see it become. \r\n\r\nExample starter wording is provided. 
-->\r\n\r\nCurrently, the docs do not have a standard docstring format for functions.\r\n### Example of inconsistent docstring\r\n> (no `Returns`, no `Functional usage example`, no `Method chaining example`)\r\n<img width=\"690\" alt=\"incorrect_func_doc\" src=\"https://user-images.githubusercontent.com/24984410/61177779-6936e100-a5a2-11e9-8332-0c02bad7b5bf.png\">\r\n\r\n### Example of a correct/desired docstring\r\n<img width=\"716\" alt=\"correct_func_doc\" src=\"https://user-images.githubusercontent.com/24984410/61177795-c5016a00-a5a2-11e9-8cd9-415f440d95c2.png\">\r\n\r\nI would like to propose a change, such that now the docs contain a **standardized** docstring suite. All functions should contain (at a minimum) the following:\r\n - `Parameters`\r\n - ` Returns`\r\n - `Functional usage example`\r\n - `Method chaining example`\r\n\r\n**NOTE**: This can be done for all functions within the `janitor` directory. For ease of review, this will focus on the `biology.py` file and move to other files/functions as time permits.\r\n\r\n# Relevant Context\r\n\r\n<!-- Please put here, in bullet points, links to the relevant docs page. A few starting template points are available\r\nto get you started. -->\r\n\r\n- [Link to documentation page](https://pyjanitor.readthedocs.io/reference/biology.html)\r\n- [Link to exact file to be edited](https://github.com/loganthomas/pyjanitor/blob/dev/janitor/biology.py)\r\n\n", "before_files": [{"content": "\"\"\"\nBiology and bioinformatics-oriented data cleaning functions.\n\"\"\"\n\nimport pandas as pd\nimport pandas_flavor as pf\n\nfrom .utils import deprecated_alias, import_message\n\ntry:\n from Bio import SeqIO\nexcept ImportError:\n import_message(\n \"biology\", \"biopython\", \"conda install -c conda-forge biopython\"\n )\n\n\[email protected]_dataframe_method\n@deprecated_alias(col_name=\"column_name\")\ndef join_fasta(\n df: pd.DataFrame, filename: str, id_col: str, column_name\n) -> pd.DataFrame:\n \"\"\"\n Convenience method to join in a FASTA file as a column.\n\n This allows us to add the string sequence of a FASTA file as a new column\n of data in the dataframe.\n\n This method only attaches the string representation of the SeqRecord.Seq\n object from Biopython. Does not attach the full SeqRecord. Alphabet is\n also not stored, under the assumption that the data scientist has domain\n knowledge of what kind of sequence is being read in (nucleotide vs. amino\n acid.)\n\n This method mutates the original DataFrame.\n\n For more advanced functions, please use phylopandas.\n\n :param df: A pandas DataFrame.\n :param filename: Path to the FASTA file.\n :param id_col: The column in the DataFrame that houses sequence IDs.\n :param column_name: The name of the new column.\n \"\"\"\n seqrecords = {\n x.id: x.seq.__str__() for x in SeqIO.parse(filename, \"fasta\")\n }\n seq_col = [seqrecords[i] for i in df[id_col]]\n df[column_name] = seq_col\n return df\n", "path": "janitor/biology.py"}]} | 1,909 | 393 |
gh_patches_debug_13931 | rasdani/github-patches | git_diff | sopel-irc__sopel-987 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
KeyError trying to reload a system module name
`KeyError: 'time' (file "/opt/rh/python33/root/usr/lib/python3.3/site-packages/sopel/modules/reload.py", line 62, in f_reload)` should be a sane error message
</issue>
<code>
[start of sopel/modules/reload.py]
1 # coding=utf-8
2 """
3 reload.py - Sopel Module Reloader Module
4 Copyright 2008, Sean B. Palmer, inamidst.com
5 Licensed under the Eiffel Forum License 2.
6
7 http://sopel.chat
8 """
9 from __future__ import unicode_literals, absolute_import, print_function, division
10
11 import collections
12 import sys
13 import time
14 from sopel.tools import iteritems
15 import sopel.loader
16 import sopel.module
17 import subprocess
18
19
20 @sopel.module.nickname_commands("reload")
21 @sopel.module.priority("low")
22 @sopel.module.thread(False)
23 def f_reload(bot, trigger):
24 """Reloads a module, for use by admins only."""
25 if not trigger.admin:
26 return
27
28 name = trigger.group(2)
29 if name == bot.config.core.owner:
30 return bot.reply('What?')
31
32 if not name or name == '*' or name.upper() == 'ALL THE THINGS':
33 bot._callables = {
34 'high': collections.defaultdict(list),
35 'medium': collections.defaultdict(list),
36 'low': collections.defaultdict(list)
37 }
38 bot.command_groups = collections.defaultdict(list)
39 bot.setup()
40 return bot.reply('done')
41
42 if name not in sys.modules:
43 return bot.reply('%s: not loaded, try the `load` command' % name)
44
45 old_module = sys.modules[name]
46
47 old_callables = {}
48 for obj_name, obj in iteritems(vars(old_module)):
49 bot.unregister(obj)
50
51 # Also remove all references to sopel callables from top level of the
52 # module, so that they will not get loaded again if reloading the
53 # module does not override them.
54 for obj_name in old_callables.keys():
55 delattr(old_module, obj_name)
56
57 # Also delete the setup function
58 if hasattr(old_module, "setup"):
59 delattr(old_module, "setup")
60
61 modules = sopel.loader.enumerate_modules(bot.config)
62 path, type_ = modules[name]
63 load_module(bot, name, path, type_)
64
65
66 def load_module(bot, name, path, type_):
67 module, mtime = sopel.loader.load_module(name, path, type_)
68 relevant_parts = sopel.loader.clean_module(module, bot.config)
69
70 bot.register(*relevant_parts)
71
72 # TODO sys.modules[name] = module
73 if hasattr(module, 'setup'):
74 module.setup(bot)
75
76 modified = time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(mtime))
77
78 bot.reply('%r (version: %s)' % (module, modified))
79
80
81 @sopel.module.nickname_commands('update')
82 def f_update(bot, trigger):
83 if not trigger.admin:
84 return
85
86 """Pulls the latest versions of all modules from Git"""
87 proc = subprocess.Popen('/usr/bin/git pull',
88 stdout=subprocess.PIPE,
89 stderr=subprocess.PIPE, shell=True)
90 bot.reply(proc.communicate()[0])
91
92 f_reload(bot, trigger)
93
94
95 @sopel.module.nickname_commands("load")
96 @sopel.module.priority("low")
97 @sopel.module.thread(False)
98 def f_load(bot, trigger):
99 """Loads a module, for use by admins only."""
100 if not trigger.admin:
101 return
102
103 name = trigger.group(2)
104 path = ''
105 if name == bot.config.core.owner:
106 return bot.reply('What?')
107
108 if name in sys.modules:
109 return bot.reply('Module already loaded, use reload')
110
111 mods = sopel.loader.enumerate_modules(bot.config)
112 if name not in mods:
113 return bot.reply('Module %s not found' % name)
114 path, type_ = mods[name]
115 load_module(bot, name, path, type_)
116
117
118 # Catch PM based messages
119 @sopel.module.commands("reload")
120 @sopel.module.priority("low")
121 @sopel.module.thread(False)
122 def pm_f_reload(bot, trigger):
123 """Wrapper for allowing delivery of .reload command via PM"""
124 if trigger.is_privmsg:
125 f_reload(bot, trigger)
126
127
128 @sopel.module.commands('update')
129 def pm_f_update(bot, trigger):
130 """Wrapper for allowing delivery of .update command via PM"""
131 if trigger.is_privmsg:
132 f_update(bot, trigger)
133
134
135 @sopel.module.commands("load")
136 @sopel.module.priority("low")
137 @sopel.module.thread(False)
138 def pm_f_load(bot, trigger):
139 """Wrapper for allowing delivery of .load command via PM"""
140 if trigger.is_privmsg:
141 f_load(bot, trigger)
142
[end of sopel/modules/reload.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> |
diff --git a/sopel/modules/reload.py b/sopel/modules/reload.py
--- a/sopel/modules/reload.py
+++ b/sopel/modules/reload.py
@@ -40,7 +40,7 @@
return bot.reply('done')
if name not in sys.modules:
- return bot.reply('%s: not loaded, try the `load` command' % name)
+ return bot.reply('"%s" not loaded, try the `load` command' % name)
old_module = sys.modules[name]
@@ -59,6 +59,8 @@
delattr(old_module, "setup")
modules = sopel.loader.enumerate_modules(bot.config)
+ if name not in modules:
+ return bot.reply('"%s" not loaded, try the `load` command' % name)
path, type_ = modules[name]
load_module(bot, name, path, type_)
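A tiny sketch of the failure mode and the guard the patch adds (hypothetical data, not sopel's real structures): `enumerate_modules()` only knows sopel modules, so indexing it with a stdlib name such as `time` raised a bare `KeyError` even though the name is present in `sys.modules`.

```python
# Hypothetical enumerate_modules() result: sopel modules only.
modules = {"reload": ("/path/to/reload.py", "python-module")}
name = "time"  # present in sys.modules, but not a sopel module

# Before the patch: modules[name] -> KeyError: 'time'
# After the patch: membership is checked first and a readable reply is sent.
if name not in modules:
    print('"%s" not loaded, try the `load` command' % name)
else:
    path, type_ = modules[name]
```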
| {"golden_diff": "diff --git a/sopel/modules/reload.py b/sopel/modules/reload.py\n--- a/sopel/modules/reload.py\n+++ b/sopel/modules/reload.py\n@@ -40,7 +40,7 @@\n return bot.reply('done')\n \n if name not in sys.modules:\n- return bot.reply('%s: not loaded, try the `load` command' % name)\n+ return bot.reply('\"%s\" not loaded, try the `load` command' % name)\n \n old_module = sys.modules[name]\n \n@@ -59,6 +59,8 @@\n delattr(old_module, \"setup\")\n \n modules = sopel.loader.enumerate_modules(bot.config)\n+ if name not in modules:\n+ return bot.reply('\"%s\" not loaded, try the `load` command' % name)\n path, type_ = modules[name]\n load_module(bot, name, path, type_)\n", "issue": "KeyError trying to reload a system module name\n`KeyError: 'time' (file \"/opt/rh/python33/root/usr/lib/python3.3/site-packages/sopel/modules/reload.py\", line 62, in f_reload)` should be a sane error message\n\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"\nreload.py - Sopel Module Reloader Module\nCopyright 2008, Sean B. Palmer, inamidst.com\nLicensed under the Eiffel Forum License 2.\n\nhttp://sopel.chat\n\"\"\"\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nimport collections\nimport sys\nimport time\nfrom sopel.tools import iteritems\nimport sopel.loader\nimport sopel.module\nimport subprocess\n\n\[email protected]_commands(\"reload\")\[email protected](\"low\")\[email protected](False)\ndef f_reload(bot, trigger):\n \"\"\"Reloads a module, for use by admins only.\"\"\"\n if not trigger.admin:\n return\n\n name = trigger.group(2)\n if name == bot.config.core.owner:\n return bot.reply('What?')\n\n if not name or name == '*' or name.upper() == 'ALL THE THINGS':\n bot._callables = {\n 'high': collections.defaultdict(list),\n 'medium': collections.defaultdict(list),\n 'low': collections.defaultdict(list)\n }\n bot.command_groups = collections.defaultdict(list)\n bot.setup()\n return bot.reply('done')\n\n if name not in sys.modules:\n return bot.reply('%s: not loaded, try the `load` command' % name)\n\n old_module = sys.modules[name]\n\n old_callables = {}\n for obj_name, obj in iteritems(vars(old_module)):\n bot.unregister(obj)\n\n # Also remove all references to sopel callables from top level of the\n # module, so that they will not get loaded again if reloading the\n # module does not override them.\n for obj_name in old_callables.keys():\n delattr(old_module, obj_name)\n\n # Also delete the setup function\n if hasattr(old_module, \"setup\"):\n delattr(old_module, \"setup\")\n\n modules = sopel.loader.enumerate_modules(bot.config)\n path, type_ = modules[name]\n load_module(bot, name, path, type_)\n\n\ndef load_module(bot, name, path, type_):\n module, mtime = sopel.loader.load_module(name, path, type_)\n relevant_parts = sopel.loader.clean_module(module, bot.config)\n\n bot.register(*relevant_parts)\n\n # TODO sys.modules[name] = module\n if hasattr(module, 'setup'):\n module.setup(bot)\n\n modified = time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(mtime))\n\n bot.reply('%r (version: %s)' % (module, modified))\n\n\[email protected]_commands('update')\ndef f_update(bot, trigger):\n if not trigger.admin:\n return\n\n \"\"\"Pulls the latest versions of all modules from Git\"\"\"\n proc = subprocess.Popen('/usr/bin/git pull',\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE, shell=True)\n bot.reply(proc.communicate()[0])\n\n f_reload(bot, trigger)\n\n\[email protected]_commands(\"load\")\[email protected](\"low\")\[email protected](False)\ndef f_load(bot, 
trigger):\n \"\"\"Loads a module, for use by admins only.\"\"\"\n if not trigger.admin:\n return\n\n name = trigger.group(2)\n path = ''\n if name == bot.config.core.owner:\n return bot.reply('What?')\n\n if name in sys.modules:\n return bot.reply('Module already loaded, use reload')\n\n mods = sopel.loader.enumerate_modules(bot.config)\n if name not in mods:\n return bot.reply('Module %s not found' % name)\n path, type_ = mods[name]\n load_module(bot, name, path, type_)\n\n\n# Catch PM based messages\[email protected](\"reload\")\[email protected](\"low\")\[email protected](False)\ndef pm_f_reload(bot, trigger):\n \"\"\"Wrapper for allowing delivery of .reload command via PM\"\"\"\n if trigger.is_privmsg:\n f_reload(bot, trigger)\n\n\[email protected]('update')\ndef pm_f_update(bot, trigger):\n \"\"\"Wrapper for allowing delivery of .update command via PM\"\"\"\n if trigger.is_privmsg:\n f_update(bot, trigger)\n\n\[email protected](\"load\")\[email protected](\"low\")\[email protected](False)\ndef pm_f_load(bot, trigger):\n \"\"\"Wrapper for allowing delivery of .load command via PM\"\"\"\n if trigger.is_privmsg:\n f_load(bot, trigger)\n", "path": "sopel/modules/reload.py"}]} | 1,905 | 206 |
gh_patches_debug_10721 | rasdani/github-patches | git_diff | pypi__warehouse-3979 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
use CSP: sandbox on /simple/ pages
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/sandbox + https://www.youtube.com/watch?v=fbhW37JZtSA&feature=youtu.be
I believe this is a great fit for /simple/, which don't need any ability to do anthing but have a simple HTML structure.
I _think_ we can replace the whole current header with `Content-Security-Policy: sandbox allow-top-navigations`.
</issue>
<code>
[start of warehouse/csp.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import collections
14 import copy
15
16
17 SELF = "'self'"
18 NONE = "'none'"
19
20
21 def _serialize(policy):
22 return "; ".join([
23 " ".join([k] + [v2 for v2 in v if v2 is not None])
24 for k, v in sorted(policy.items())
25 ])
26
27
28 def content_security_policy_tween_factory(handler, registry):
29 def content_security_policy_tween(request):
30 resp = handler(request)
31
32 try:
33 policy = request.find_service(name="csp")
34 except ValueError:
35 policy = collections.defaultdict(list)
36
37 # We don't want to apply our Content Security Policy to the debug
38 # toolbar, that's not part of our application and it doesn't work with
39 # our restrictive CSP.
40 policy = _serialize(policy).format(request=request)
41 if not request.path.startswith("/_debug_toolbar/") and policy:
42 resp.headers["Content-Security-Policy"] = policy
43
44 return resp
45
46 return content_security_policy_tween
47
48
49 class CSPPolicy(collections.defaultdict):
50 def __init__(self, policy=None):
51 super().__init__(list, policy or {})
52
53 def merge(self, policy):
54 for key, attrs in policy.items():
55 self[key].extend(attrs)
56
57
58 def csp_factory(_, request):
59 try:
60 return CSPPolicy(copy.deepcopy(request.registry.settings["csp"]))
61 except KeyError:
62 return CSPPolicy({})
63
64
65 def includeme(config):
66 config.register_service_factory(csp_factory, name="csp")
67 # Enable a Content Security Policy
68 config.add_settings({
69 "csp": {
70 "base-uri": [SELF],
71 "block-all-mixed-content": [],
72 "connect-src": [
73 item for item in [
74 SELF,
75 config.registry.settings.get("statuspage.url"),
76 "https://api.github.com/repos/",
77 ]
78 if item
79 ],
80 "default-src": [NONE],
81 "font-src": [SELF, "fonts.gstatic.com"],
82 "form-action": [SELF],
83 "frame-ancestors": [NONE],
84 "frame-src": [NONE],
85 "img-src": [
86 SELF,
87 config.registry.settings["camo.url"],
88 "www.google-analytics.com",
89 ],
90 "script-src": [
91 SELF,
92 "www.googletagmanager.com",
93 "www.google-analytics.com",
94 ],
95 "style-src": [SELF, "fonts.googleapis.com"],
96 },
97 })
98 config.add_tween("warehouse.csp.content_security_policy_tween_factory")
99
[end of warehouse/csp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> |
diff --git a/warehouse/csp.py b/warehouse/csp.py
--- a/warehouse/csp.py
+++ b/warehouse/csp.py
@@ -34,6 +34,12 @@
except ValueError:
policy = collections.defaultdict(list)
+ # Replace CSP headers on /simple/ pages.
+ if request.path.startswith("/simple/"):
+ policy = collections.defaultdict(list)
+ policy["sandbox"] = ["allow-top-navigation"]
+ policy["default-src"] = [NONE]
+
# We don't want to apply our Content Security Policy to the debug
# toolbar, that's not part of our application and it doesn't work with
# our restrictive CSP.
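With the added branch, requests under `/simple/` get a two-directive policy; feeding it through the `_serialize()` helper shown in the file gives roughly the following header value (a sketch using a plain dict in place of the csp service):

```python
# Sketch of what the tween now emits for /simple/ pages.
policy = {"sandbox": ["allow-top-navigation"], "default-src": ["'none'"]}

header = "; ".join(
    " ".join([key] + values) for key, values in sorted(policy.items())
)
print(header)  # default-src 'none'; sandbox allow-top-navigation
```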
| {"golden_diff": "diff --git a/warehouse/csp.py b/warehouse/csp.py\n--- a/warehouse/csp.py\n+++ b/warehouse/csp.py\n@@ -34,6 +34,12 @@\n except ValueError:\n policy = collections.defaultdict(list)\n \n+ # Replace CSP headers on /simple/ pages.\n+ if request.path.startswith(\"/simple/\"):\n+ policy = collections.defaultdict(list)\n+ policy[\"sandbox\"] = [\"allow-top-navigation\"]\n+ policy[\"default-src\"] = [NONE]\n+\n # We don't want to apply our Content Security Policy to the debug\n # toolbar, that's not part of our application and it doesn't work with\n # our restrictive CSP.\n", "issue": "use CSP: sandbox on /simple/ pages\nhttps://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/sandbox + https://www.youtube.com/watch?v=fbhW37JZtSA&feature=youtu.be\r\n\r\nI believe this is a great fit for /simple/, which don't need any ability to do anthing but have a simple HTML structure.\r\n\r\nI _think_ we can replace the whole current header with `Content-Security-Policy: sandbox allow-top-navigations`.\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport collections\nimport copy\n\n\nSELF = \"'self'\"\nNONE = \"'none'\"\n\n\ndef _serialize(policy):\n return \"; \".join([\n \" \".join([k] + [v2 for v2 in v if v2 is not None])\n for k, v in sorted(policy.items())\n ])\n\n\ndef content_security_policy_tween_factory(handler, registry):\n def content_security_policy_tween(request):\n resp = handler(request)\n\n try:\n policy = request.find_service(name=\"csp\")\n except ValueError:\n policy = collections.defaultdict(list)\n\n # We don't want to apply our Content Security Policy to the debug\n # toolbar, that's not part of our application and it doesn't work with\n # our restrictive CSP.\n policy = _serialize(policy).format(request=request)\n if not request.path.startswith(\"/_debug_toolbar/\") and policy:\n resp.headers[\"Content-Security-Policy\"] = policy\n\n return resp\n\n return content_security_policy_tween\n\n\nclass CSPPolicy(collections.defaultdict):\n def __init__(self, policy=None):\n super().__init__(list, policy or {})\n\n def merge(self, policy):\n for key, attrs in policy.items():\n self[key].extend(attrs)\n\n\ndef csp_factory(_, request):\n try:\n return CSPPolicy(copy.deepcopy(request.registry.settings[\"csp\"]))\n except KeyError:\n return CSPPolicy({})\n\n\ndef includeme(config):\n config.register_service_factory(csp_factory, name=\"csp\")\n # Enable a Content Security Policy\n config.add_settings({\n \"csp\": {\n \"base-uri\": [SELF],\n \"block-all-mixed-content\": [],\n \"connect-src\": [\n item for item in [\n SELF,\n config.registry.settings.get(\"statuspage.url\"),\n \"https://api.github.com/repos/\",\n ]\n if item\n ],\n \"default-src\": [NONE],\n \"font-src\": [SELF, \"fonts.gstatic.com\"],\n \"form-action\": [SELF],\n \"frame-ancestors\": [NONE],\n \"frame-src\": [NONE],\n \"img-src\": [\n SELF,\n config.registry.settings[\"camo.url\"],\n \"www.google-analytics.com\",\n ],\n \"script-src\": [\n SELF,\n 
\"www.googletagmanager.com\",\n \"www.google-analytics.com\",\n ],\n \"style-src\": [SELF, \"fonts.googleapis.com\"],\n },\n })\n config.add_tween(\"warehouse.csp.content_security_policy_tween_factory\")\n", "path": "warehouse/csp.py"}]} | 1,491 | 154 |
gh_patches_debug_16614 | rasdani/github-patches | git_diff | inventree__InvenTree-4492 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Email settings not configured
### Deployment Method
- [ ] Installer
- [ ] Docker Development
- [X] Docker Production
- [ ] Bare metal Development
- [ ] Bare metal Production
- [ ] Digital Ocean image
- [ ] Other (please provide a link `Steps to Reproduce`
### Describe the problem*
Hello,
I am having issues configuring the email settings to connect to our on-prem exchange server.
I have configured the .env file with the following variables (Note - exchange does not require authentication internally)
INVENTREE_EMAIL_HOST
INVENTREE_EMAIL_PORT
INVENTREE_EMAIL_SENDER
However, when running the docker container, the 'System Information' screen reports that 'Email settings not configured. Under 'Global Settings' > 'Login Settings' a prompt shows at the top saying 'Outgoing email has not been configured. Some login and sign-up features may not work correctly!'
Apologies if I have missed something obvious in the documentation but I seem to be going round in circles currently. Any help that you could offer would be greatly appreciated. Thank you
### Steps to Reproduce
1. Stop the contain with `docker compose down`
2. Edit .env file to include the email variables (as per https://docs.inventree.org/en/0.10.0/start/config/#email-settings)
3. Starting the container with `docker compose up -d`
Further steps (Apologies, these might be all guesses at this stage)
1. I have tried setting these within the configuration file (I believe saved here: `volume/inventree-data/config.yaml` to include host, port, sender, tls and ssl variables, however with no change.
2. Feeling like I am missing a step, I am running `docker compose run inventree-server invoke update`
3. I am now running 0.10.1 but with the same issue.
### Relevant log output
_No response_
</issue>
<code>
[start of InvenTree/InvenTree/status.py]
1 """Provides system status functionality checks."""
2 # -*- coding: utf-8 -*-
3
4 import logging
5 from datetime import timedelta
6
7 from django.conf import settings
8 from django.utils import timezone
9 from django.utils.translation import gettext_lazy as _
10
11 from django_q.models import Success
12 from django_q.monitor import Stat
13
14 import InvenTree.ready
15
16 logger = logging.getLogger("inventree")
17
18
19 def is_worker_running(**kwargs):
20 """Return True if the background worker process is oprational."""
21 clusters = Stat.get_all()
22
23 if len(clusters) > 0:
24 # TODO - Introspect on any cluster information
25 return True
26
27 """
28 Sometimes Stat.get_all() returns [].
29 In this case we have the 'heartbeat' task running every 5 minutes.
30 Check to see if we have any successful result within the last 10 minutes
31 """
32
33 now = timezone.now()
34 past = now - timedelta(minutes=10)
35
36 results = Success.objects.filter(
37 started__gte=past
38 )
39
40 # If any results are returned, then the background worker is running!
41 return results.exists()
42
43
44 def is_email_configured():
45 """Check if email backend is configured.
46
47 NOTE: This does not check if the configuration is valid!
48 """
49 configured = True
50
51 if InvenTree.ready.isInTestMode():
52 return False
53
54 if InvenTree.ready.isImportingData():
55 return False
56
57 if not settings.EMAIL_HOST:
58 configured = False
59
60 # Display warning unless in test mode
61 if not settings.TESTING: # pragma: no cover
62 logger.debug("EMAIL_HOST is not configured")
63
64 if not settings.EMAIL_HOST_USER:
65 configured = False
66
67 # Display warning unless in test mode
68 if not settings.TESTING: # pragma: no cover
69 logger.debug("EMAIL_HOST_USER is not configured")
70
71 if not settings.EMAIL_HOST_PASSWORD:
72 configured = False
73
74 # Display warning unless in test mode
75 if not settings.TESTING: # pragma: no cover
76 logger.debug("EMAIL_HOST_PASSWORD is not configured")
77
78 return configured
79
80
81 def check_system_health(**kwargs):
82 """Check that the InvenTree system is running OK.
83
84 Returns True if all system checks pass.
85 """
86 result = True
87
88 if InvenTree.ready.isInTestMode():
89 # Do not perform further checks if we are running unit tests
90 return False
91
92 if InvenTree.ready.isImportingData():
93 # Do not perform further checks if we are importing data
94 return False
95
96 if not is_worker_running(**kwargs): # pragma: no cover
97 result = False
98 logger.warning(_("Background worker check failed"))
99
100 if not is_email_configured(): # pragma: no cover
101 result = False
102 logger.warning(_("Email backend not configured"))
103
104 if not result: # pragma: no cover
105 logger.warning(_("InvenTree system health checks failed"))
106
107 return result
108
[end of InvenTree/InvenTree/status.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> |
diff --git a/InvenTree/InvenTree/status.py b/InvenTree/InvenTree/status.py
--- a/InvenTree/InvenTree/status.py
+++ b/InvenTree/InvenTree/status.py
@@ -61,19 +61,13 @@
if not settings.TESTING: # pragma: no cover
logger.debug("EMAIL_HOST is not configured")
- if not settings.EMAIL_HOST_USER:
- configured = False
-
- # Display warning unless in test mode
- if not settings.TESTING: # pragma: no cover
- logger.debug("EMAIL_HOST_USER is not configured")
+ # Display warning unless in test mode
+ if not settings.TESTING: # pragma: no cover
+ logger.debug("EMAIL_HOST_USER is not configured")
- if not settings.EMAIL_HOST_PASSWORD:
- configured = False
-
- # Display warning unless in test mode
- if not settings.TESTING: # pragma: no cover
- logger.debug("EMAIL_HOST_PASSWORD is not configured")
+ # Display warning unless in test mode
+ if not settings.TESTING: # pragma: no cover
+ logger.debug("EMAIL_HOST_PASSWORD is not configured")
return configured
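The patch stops treating a missing `EMAIL_HOST_USER` / `EMAIL_HOST_PASSWORD` as "not configured" and only logs them, which matches the unauthenticated-relay setup in the report. A self-contained sketch of the resulting logic (not the actual InvenTree code, which reads Django settings; the host name is just an example):

```python
def is_email_configured(host: str, user: str = "", password: str = "") -> bool:
    # After the patch, only the host determines the return value; a missing
    # user or password is merely noted, not treated as "not configured".
    if not host:
        return False
    if not user:
        print("debug: EMAIL_HOST_USER is not configured")
    if not password:
        print("debug: EMAIL_HOST_PASSWORD is not configured")
    return True


print(is_email_configured("exchange.internal"))  # True
```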
| {"golden_diff": "diff --git a/InvenTree/InvenTree/status.py b/InvenTree/InvenTree/status.py\n--- a/InvenTree/InvenTree/status.py\n+++ b/InvenTree/InvenTree/status.py\n@@ -61,19 +61,13 @@\n if not settings.TESTING: # pragma: no cover\n logger.debug(\"EMAIL_HOST is not configured\")\n \n- if not settings.EMAIL_HOST_USER:\n- configured = False\n-\n- # Display warning unless in test mode\n- if not settings.TESTING: # pragma: no cover\n- logger.debug(\"EMAIL_HOST_USER is not configured\")\n+ # Display warning unless in test mode\n+ if not settings.TESTING: # pragma: no cover\n+ logger.debug(\"EMAIL_HOST_USER is not configured\")\n \n- if not settings.EMAIL_HOST_PASSWORD:\n- configured = False\n-\n- # Display warning unless in test mode\n- if not settings.TESTING: # pragma: no cover\n- logger.debug(\"EMAIL_HOST_PASSWORD is not configured\")\n+ # Display warning unless in test mode\n+ if not settings.TESTING: # pragma: no cover\n+ logger.debug(\"EMAIL_HOST_PASSWORD is not configured\")\n \n return configured\n", "issue": "Email settings not configured\n### Deployment Method\n\n- [ ] Installer\n- [ ] Docker Development\n- [X] Docker Production\n- [ ] Bare metal Development\n- [ ] Bare metal Production\n- [ ] Digital Ocean image\n- [ ] Other (please provide a link `Steps to Reproduce`\n\n### Describe the problem*\n\nHello,\r\n\r\nI am having issues configuring the email settings to connect to our on-prem exchange server. \r\n\r\nI have configured the .env file with the following variables (Note - exchange does not require authentication internally)\r\n\r\nINVENTREE_EMAIL_HOST\r\nINVENTREE_EMAIL_PORT\r\nINVENTREE_EMAIL_SENDER\r\n\r\nHowever, when running the docker container, the 'System Information' screen reports that 'Email settings not configured. Under 'Global Settings' > 'Login Settings' a prompt shows at the top saying 'Outgoing email has not been configured. Some login and sign-up features may not work correctly!'\r\n\r\nApologies if I have missed something obvious in the documentation but I seem to be going round in circles currently. Any help that you could offer would be greatly appreciated. Thank you\n\n### Steps to Reproduce\n\n1. Stop the contain with `docker compose down` \r\n2. Edit .env file to include the email variables (as per https://docs.inventree.org/en/0.10.0/start/config/#email-settings)\r\n3. Starting the container with `docker compose up -d`\r\n\r\nFurther steps (Apologies, these might be all guesses at this stage)\r\n\r\n1. I have tried setting these within the configuration file (I believe saved here: `volume/inventree-data/config.yaml` to include host, port, sender, tls and ssl variables, however with no change.\r\n2. Feeling like I am missing a step, I am running `docker compose run inventree-server invoke update`\r\n3. 
I am now running 0.10.1 but with the same issue.\r\n\r\n\n\n### Relevant log output\n\n_No response_\n", "before_files": [{"content": "\"\"\"Provides system status functionality checks.\"\"\"\n# -*- coding: utf-8 -*-\n\nimport logging\nfrom datetime import timedelta\n\nfrom django.conf import settings\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\n\nfrom django_q.models import Success\nfrom django_q.monitor import Stat\n\nimport InvenTree.ready\n\nlogger = logging.getLogger(\"inventree\")\n\n\ndef is_worker_running(**kwargs):\n \"\"\"Return True if the background worker process is oprational.\"\"\"\n clusters = Stat.get_all()\n\n if len(clusters) > 0:\n # TODO - Introspect on any cluster information\n return True\n\n \"\"\"\n Sometimes Stat.get_all() returns [].\n In this case we have the 'heartbeat' task running every 5 minutes.\n Check to see if we have any successful result within the last 10 minutes\n \"\"\"\n\n now = timezone.now()\n past = now - timedelta(minutes=10)\n\n results = Success.objects.filter(\n started__gte=past\n )\n\n # If any results are returned, then the background worker is running!\n return results.exists()\n\n\ndef is_email_configured():\n \"\"\"Check if email backend is configured.\n\n NOTE: This does not check if the configuration is valid!\n \"\"\"\n configured = True\n\n if InvenTree.ready.isInTestMode():\n return False\n\n if InvenTree.ready.isImportingData():\n return False\n\n if not settings.EMAIL_HOST:\n configured = False\n\n # Display warning unless in test mode\n if not settings.TESTING: # pragma: no cover\n logger.debug(\"EMAIL_HOST is not configured\")\n\n if not settings.EMAIL_HOST_USER:\n configured = False\n\n # Display warning unless in test mode\n if not settings.TESTING: # pragma: no cover\n logger.debug(\"EMAIL_HOST_USER is not configured\")\n\n if not settings.EMAIL_HOST_PASSWORD:\n configured = False\n\n # Display warning unless in test mode\n if not settings.TESTING: # pragma: no cover\n logger.debug(\"EMAIL_HOST_PASSWORD is not configured\")\n\n return configured\n\n\ndef check_system_health(**kwargs):\n \"\"\"Check that the InvenTree system is running OK.\n\n Returns True if all system checks pass.\n \"\"\"\n result = True\n\n if InvenTree.ready.isInTestMode():\n # Do not perform further checks if we are running unit tests\n return False\n\n if InvenTree.ready.isImportingData():\n # Do not perform further checks if we are importing data\n return False\n\n if not is_worker_running(**kwargs): # pragma: no cover\n result = False\n logger.warning(_(\"Background worker check failed\"))\n\n if not is_email_configured(): # pragma: no cover\n result = False\n logger.warning(_(\"Email backend not configured\"))\n\n if not result: # pragma: no cover\n logger.warning(_(\"InvenTree system health checks failed\"))\n\n return result\n", "path": "InvenTree/InvenTree/status.py"}]} | 1,812 | 278 |
gh_patches_debug_4837 | rasdani/github-patches | git_diff | kivy__python-for-android-2800 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Python 3.10 cffi build fails
<!--
The issue tracker is a tool to address bugs NOT a support platform.
Please use the Discord community or Stack Overflow for support questions,
more information at https://github.com/kivy/python-for-android#support
-->
### Checklist
- [ x] the issue is indeed a bug and not a support request
- [ x] issue doesn't already exist: https://github.com/kivy/python-for-android/issues
- [ x] I have a short, runnable example that reproduces the issue
- [x ] I reproduced the problem with the latest development version (`p4a.branch = develop`)
- [ x] I used the grave accent (aka backticks) to format code or logs when appropriated
### Versions
- Python: 2.10.6 (OS)
- OS: Ubuntu 22.04.2
- Kivy: 2.2.0
- Cython: 0.29.33
- OpenJDK:
### Description
p4a v2023.5.21 build of cffi fails
### buildozer.spec
Spec file:
```
requirements = python3,kivy, cffi
```
### Logs
```
/home/bobf/.buildozer/android/platform/android-ndk-r25b/toolchains/llvm/prebuilt/linux-x86_64/bin/clang -target aarch64-linux-android21 -fomit-frame-pointer -march=armv8-a -fPIC -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -target aarch64-linux-android21 -fomit-frame-pointer -march=armv8-a -fPIC -I/home/bobf/ex/hello/.buildozer/android/platform/build-arm64-v8a/build/other_builds/libffi/arm64-v8a__ndk_target_21/libffi/include -I/home/bobf/ex/hello/.buildozer/android/platform/build-arm64-v8a/build/other_builds/python3/arm64-v8a__ndk_target_21/python3/Include -DANDROID -I/home/bobf/.buildozer/android/platform/android-ndk-r25b/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include -I/home/bobf/ex/hello/.buildozer/android/platform/build-arm64-v8a/build/python-installs/apptest/arm64-v8a/include/python3.1 -fPIC -DUSE__THREAD -DHAVE_SYNC_SYNCHRONIZE -I/home/bobf/ex/hello/.buildozer/android/platform/build-arm64-v8a/build/other_builds/hostpython3/desktop/hostpython3/Include -I/home/bobf/ex/hello/.buildozer/android/platform/build-arm64-v8a/build/other_builds/hostpython3/desktop/hostpython3/native-build -c c/_cffi_backend.c -o build/temp.linux-x86_64-3.10/c/_cffi_backend.o
c/_cffi_backend.c:407:23: error: expression is not assignable
Py_REFCNT(ct) = 43;
~~~~~~~~~~~~~ ^
c/_cffi_backend.c:410:23: error: expression is not assignable
Py_REFCNT(ct) = 0;
~~~~~~~~~~~~~ ^
```
</issue>
<code>
[start of pythonforandroid/recipes/cffi/__init__.py]
1 import os
2 from pythonforandroid.recipe import CompiledComponentsPythonRecipe
3
4
5 class CffiRecipe(CompiledComponentsPythonRecipe):
6 """
7 Extra system dependencies: autoconf, automake and libtool.
8 """
9 name = 'cffi'
10 version = '1.13.2'
11 url = 'https://pypi.python.org/packages/source/c/cffi/cffi-{version}.tar.gz'
12
13 depends = ['setuptools', 'pycparser', 'libffi']
14
15 patches = ['disable-pkg-config.patch']
16
17 # call_hostpython_via_targetpython = False
18 install_in_hostpython = True
19
20 def get_hostrecipe_env(self, arch=None):
21 # fixes missing ffi.h on some host systems (e.g. gentoo)
22 env = super().get_hostrecipe_env(arch)
23 libffi = self.get_recipe('libffi', self.ctx)
24 includes = libffi.get_include_dirs(arch)
25 env['FFI_INC'] = ",".join(includes)
26 return env
27
28 def get_recipe_env(self, arch=None):
29 env = super().get_recipe_env(arch)
30 libffi = self.get_recipe('libffi', self.ctx)
31 includes = libffi.get_include_dirs(arch)
32 env['CFLAGS'] = ' -I'.join([env.get('CFLAGS', '')] + includes)
33 env['CFLAGS'] += ' -I{}'.format(self.ctx.python_recipe.include_root(arch.arch))
34 env['LDFLAGS'] = (env.get('CFLAGS', '') + ' -L' +
35 self.ctx.get_libs_dir(arch.arch))
36 env['LDFLAGS'] += ' -L{}'.format(os.path.join(self.ctx.bootstrap.build_dir, 'libs', arch.arch))
37 # required for libc and libdl
38 env['LDFLAGS'] += ' -L{}'.format(arch.ndk_lib_dir_versioned)
39 env['PYTHONPATH'] = ':'.join([
40 self.ctx.get_site_packages_dir(arch),
41 env['BUILDLIB_PATH'],
42 ])
43 env['LDFLAGS'] += ' -L{}'.format(self.ctx.python_recipe.link_root(arch.arch))
44 env['LDFLAGS'] += ' -lpython{}'.format(self.ctx.python_recipe.link_version)
45 return env
46
47
48 recipe = CffiRecipe()
49
[end of pythonforandroid/recipes/cffi/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> |
diff --git a/pythonforandroid/recipes/cffi/__init__.py b/pythonforandroid/recipes/cffi/__init__.py
--- a/pythonforandroid/recipes/cffi/__init__.py
+++ b/pythonforandroid/recipes/cffi/__init__.py
@@ -7,7 +7,7 @@
Extra system dependencies: autoconf, automake and libtool.
"""
name = 'cffi'
- version = '1.13.2'
+ version = '1.15.1'
url = 'https://pypi.python.org/packages/source/c/cffi/cffi-{version}.tar.gz'
depends = ['setuptools', 'pycparser', 'libffi']
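The version bump matters because CPython 3.10 made `Py_REFCNT()` a plain accessor, so the `Py_REFCNT(ct) = ...` assignments in cffi 1.13.2 no longer compile (the exact error shown in the issue); later cffi releases avoid that pattern. A quick sanity check against a local python-for-android checkout, assuming the package is importable:

```python
# Confirms the recipe picked up the bumped cffi version after the patch.
from pythonforandroid.recipes.cffi import recipe

print(recipe.name, recipe.version)  # expected: cffi 1.15.1
```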
| {"golden_diff": "diff --git a/pythonforandroid/recipes/cffi/__init__.py b/pythonforandroid/recipes/cffi/__init__.py\n--- a/pythonforandroid/recipes/cffi/__init__.py\n+++ b/pythonforandroid/recipes/cffi/__init__.py\n@@ -7,7 +7,7 @@\n Extra system dependencies: autoconf, automake and libtool.\n \"\"\"\n name = 'cffi'\n- version = '1.13.2'\n+ version = '1.15.1'\n url = 'https://pypi.python.org/packages/source/c/cffi/cffi-{version}.tar.gz'\n \n depends = ['setuptools', 'pycparser', 'libffi']\n", "issue": "Python 3.10 cffi build fails\n<!--\r\nThe issue tracker is a tool to address bugs NOT a support platform.\r\nPlease use the Discord community or Stack Overflow for support questions,\r\nmore information at https://github.com/kivy/python-for-android#support\r\n-->\r\n\r\n### Checklist\r\n\r\n- [ x] the issue is indeed a bug and not a support request\r\n- [ x] issue doesn't already exist: https://github.com/kivy/python-for-android/issues\r\n- [ x] I have a short, runnable example that reproduces the issue\r\n- [x ] I reproduced the problem with the latest development version (`p4a.branch = develop`)\r\n- [ x] I used the grave accent (aka backticks) to format code or logs when appropriated\r\n\r\n### Versions\r\n\r\n- Python: 2.10.6 (OS)\r\n- OS: Ubuntu 22.04.2\r\n- Kivy: 2.2.0\r\n- Cython: 0.29.33\r\n- OpenJDK:\r\n\r\n### Description\r\n\r\np4a v2023.5.21 build of cffi fails\r\n\r\n### buildozer.spec\r\n\r\n\r\nSpec file:\r\n```\r\nrequirements = python3,kivy, cffi\r\n```\r\n\r\n### Logs\r\n\r\n```\r\n/home/bobf/.buildozer/android/platform/android-ndk-r25b/toolchains/llvm/prebuilt/linux-x86_64/bin/clang -target aarch64-linux-android21 -fomit-frame-pointer -march=armv8-a -fPIC -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -target aarch64-linux-android21 -fomit-frame-pointer -march=armv8-a -fPIC -I/home/bobf/ex/hello/.buildozer/android/platform/build-arm64-v8a/build/other_builds/libffi/arm64-v8a__ndk_target_21/libffi/include -I/home/bobf/ex/hello/.buildozer/android/platform/build-arm64-v8a/build/other_builds/python3/arm64-v8a__ndk_target_21/python3/Include -DANDROID -I/home/bobf/.buildozer/android/platform/android-ndk-r25b/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include -I/home/bobf/ex/hello/.buildozer/android/platform/build-arm64-v8a/build/python-installs/apptest/arm64-v8a/include/python3.1 -fPIC -DUSE__THREAD -DHAVE_SYNC_SYNCHRONIZE -I/home/bobf/ex/hello/.buildozer/android/platform/build-arm64-v8a/build/other_builds/hostpython3/desktop/hostpython3/Include -I/home/bobf/ex/hello/.buildozer/android/platform/build-arm64-v8a/build/other_builds/hostpython3/desktop/hostpython3/native-build -c c/_cffi_backend.c -o build/temp.linux-x86_64-3.10/c/_cffi_backend.o\r\nc/_cffi_backend.c:407:23: error: expression is not assignable\r\n Py_REFCNT(ct) = 43;\r\n ~~~~~~~~~~~~~ ^\r\nc/_cffi_backend.c:410:23: error: expression is not assignable\r\n Py_REFCNT(ct) = 0;\r\n ~~~~~~~~~~~~~ ^\r\n\r\n```\r\n\n", "before_files": [{"content": "import os\nfrom pythonforandroid.recipe import CompiledComponentsPythonRecipe\n\n\nclass CffiRecipe(CompiledComponentsPythonRecipe):\n \"\"\"\n Extra system dependencies: autoconf, automake and libtool.\n \"\"\"\n name = 'cffi'\n version = '1.13.2'\n url = 'https://pypi.python.org/packages/source/c/cffi/cffi-{version}.tar.gz'\n\n depends = ['setuptools', 'pycparser', 'libffi']\n\n patches = ['disable-pkg-config.patch']\n\n # call_hostpython_via_targetpython = False\n install_in_hostpython = True\n\n def get_hostrecipe_env(self, arch=None):\n # fixes 
missing ffi.h on some host systems (e.g. gentoo)\n env = super().get_hostrecipe_env(arch)\n libffi = self.get_recipe('libffi', self.ctx)\n includes = libffi.get_include_dirs(arch)\n env['FFI_INC'] = \",\".join(includes)\n return env\n\n def get_recipe_env(self, arch=None):\n env = super().get_recipe_env(arch)\n libffi = self.get_recipe('libffi', self.ctx)\n includes = libffi.get_include_dirs(arch)\n env['CFLAGS'] = ' -I'.join([env.get('CFLAGS', '')] + includes)\n env['CFLAGS'] += ' -I{}'.format(self.ctx.python_recipe.include_root(arch.arch))\n env['LDFLAGS'] = (env.get('CFLAGS', '') + ' -L' +\n self.ctx.get_libs_dir(arch.arch))\n env['LDFLAGS'] += ' -L{}'.format(os.path.join(self.ctx.bootstrap.build_dir, 'libs', arch.arch))\n # required for libc and libdl\n env['LDFLAGS'] += ' -L{}'.format(arch.ndk_lib_dir_versioned)\n env['PYTHONPATH'] = ':'.join([\n self.ctx.get_site_packages_dir(arch),\n env['BUILDLIB_PATH'],\n ])\n env['LDFLAGS'] += ' -L{}'.format(self.ctx.python_recipe.link_root(arch.arch))\n env['LDFLAGS'] += ' -lpython{}'.format(self.ctx.python_recipe.link_version)\n return env\n\n\nrecipe = CffiRecipe()\n", "path": "pythonforandroid/recipes/cffi/__init__.py"}]} | 1,878 | 154 |
gh_patches_debug_3903 | rasdani/github-patches | git_diff | archlinux__archinstall-1674 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
archinstall won't start due to TypeError
# Error
When I execute `archinstall` I get the following error:
```
...
File "/usr/lib/python3.10/site-packages/archinstall/lib/packages/packages.py", line 115, in installed_package
return LocalPackage(**package_info)
TypeError: LocalPackage.__init__() got an unexpected keyword argument 'warning'
```
# What I tried
I looked at the code of `installed_package` and found it executes `pacman -Q --info {package}`, so I tried to execute that with archlinux-keyring to see what the output was like.
I executed `pacman -Q --info archlinux-keyring` and got the following output
```
warning: config file /etc/pacman.conf, line 19: directive 'SyncFirst' in section 'options' not recognized
Name : archlinux-keyring
Version : 20230225-1
...
```
# Why this seems to be happening
## Code
https://github.com/archlinux/archinstall/blob/8f6cc07062968b259bebd346521ef685c16f89dc/archinstall/lib/packages/packages.py#L105-L115
## Explanation
Because the line `warning: config file /etc/pacman.conf, line 19: directive 'SyncFirst' in section 'options' not recognized` that I get as part of the output of `pacman -Q --info {package}` contains a colon, it is being interpreted as a key-value pair.
# Possible fix
Ignore all lines that start with 'warning'
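A minimal sketch of that idea, reusing the parsing loop from `installed_package` shown in the code section below (the `warning` guard is the only addition, and the maintainers' actual fix may look different):
```
def installed_package(package :str) -> LocalPackage:
	package_info = {}
	try:
		for line in run_pacman(f"-Q --info {package}"):
			# pacman warnings also contain ':' and would otherwise be parsed
			# as a bogus key/value pair, so skip them up front
			if line.startswith(b"warning"):
				continue
			if b':' in line:
				key, value = line.decode().split(':', 1)
				package_info[key.strip().lower().replace(' ', '_')] = value.strip()
	except SysCallError:
		pass

	return LocalPackage(**package_info)
```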
</issue>
<code>
[start of archinstall/lib/packages/packages.py]
1 import json
2 import ssl
3 from typing import Dict, Any, Tuple, List
4 from urllib.error import HTTPError
5 from urllib.parse import urlencode
6 from urllib.request import urlopen
7
8 from ..exceptions import PackageError, SysCallError
9 from ..models.dataclasses import PackageSearch, PackageSearchResult, LocalPackage
10 from ..pacman import run_pacman
11
12 BASE_URL_PKG_SEARCH = 'https://archlinux.org/packages/search/json/'
13 # BASE_URL_PKG_CONTENT = 'https://archlinux.org/packages/search/json/'
14 BASE_GROUP_URL = 'https://archlinux.org/groups/search/json/'
15
16
17 def _make_request(url: str, params: Dict) -> Any:
18 ssl_context = ssl.create_default_context()
19 ssl_context.check_hostname = False
20 ssl_context.verify_mode = ssl.CERT_NONE
21
22 encoded = urlencode(params)
23 full_url = f'{url}?{encoded}'
24
25 return urlopen(full_url, context=ssl_context)
26
27
28 def group_search(name :str) -> List[PackageSearchResult]:
29 # TODO UPSTREAM: Implement /json/ for the groups search
30 try:
31 response = _make_request(BASE_GROUP_URL, {'name': name})
32 except HTTPError as err:
33 if err.code == 404:
34 return []
35 else:
36 raise err
37
38 # Just to be sure some code didn't slip through the exception
39 data = response.read().decode('UTF-8')
40
41 return [PackageSearchResult(**package) for package in json.loads(data)['results']]
42
43
44 def package_search(package :str) -> PackageSearch:
45 """
46 Finds a specific package via the package database.
47 It makes a simple web-request, which might be a bit slow.
48 """
49 # TODO UPSTREAM: Implement bulk search, either support name=X&name=Y or split on space (%20 or ' ')
50 # TODO: utilize pacman cache first, upstream second.
51 response = _make_request(BASE_URL_PKG_SEARCH, {'name': package})
52
53 if response.code != 200:
54 raise PackageError(f"Could not locate package: [{response.code}] {response}")
55
56 data = response.read().decode('UTF-8')
57
58 return PackageSearch(**json.loads(data))
59
60
61 def find_package(package :str) -> List[PackageSearchResult]:
62 data = package_search(package)
63 results = []
64
65 for result in data.results:
66 if result.pkgname == package:
67 results.append(result)
68
69 # If we didn't find the package in the search results,
70 # odds are it's a group package
71 if not results:
72 # Check if the package is actually a group
73 for result in group_search(package):
74 results.append(result)
75
76 return results
77
78
79 def find_packages(*names :str) -> Dict[str, Any]:
80 """
81 This function returns the search results for many packages.
82 The function itself is rather slow, so consider not sending to
83 many packages to the search query.
84 """
85 result = {}
86 for package in names:
87 for found_package in find_package(package):
88 result[package] = found_package
89
90 return result
91
92
93 def validate_package_list(packages :list) -> Tuple[list, list]:
94 """
95 Validates a list of given packages.
96 return: Tuple of lists containing valid packavges in the first and invalid
97 packages in the second entry
98 """
99 valid_packages = {package for package in packages if find_package(package)}
100 invalid_packages = set(packages) - valid_packages
101
102 return list(valid_packages), list(invalid_packages)
103
104
105 def installed_package(package :str) -> LocalPackage:
106 package_info = {}
107 try:
108 for line in run_pacman(f"-Q --info {package}"):
109 if b':' in line:
110 key, value = line.decode().split(':', 1)
111 package_info[key.strip().lower().replace(' ', '_')] = value.strip()
112 except SysCallError:
113 pass
114
115 return LocalPackage(**package_info)
116
[end of archinstall/lib/packages/packages.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/archinstall/lib/packages/packages.py b/archinstall/lib/packages/packages.py
--- a/archinstall/lib/packages/packages.py
+++ b/archinstall/lib/packages/packages.py
@@ -1,3 +1,4 @@
+import dataclasses
import json
import ssl
from typing import Dict, Any, Tuple, List
@@ -112,4 +113,4 @@
except SysCallError:
pass
- return LocalPackage(**package_info)
+ return LocalPackage({field.name: package_info.get(field.name) for field in dataclasses.fields(LocalPackage)})
| {"golden_diff": "diff --git a/archinstall/lib/packages/packages.py b/archinstall/lib/packages/packages.py\n--- a/archinstall/lib/packages/packages.py\n+++ b/archinstall/lib/packages/packages.py\n@@ -1,3 +1,4 @@\n+import dataclasses\n import json\n import ssl\n from typing import Dict, Any, Tuple, List\n@@ -112,4 +113,4 @@\n \texcept SysCallError:\n \t\tpass\n \n-\treturn LocalPackage(**package_info)\n+\treturn LocalPackage({field.name: package_info.get(field.name) for field in dataclasses.fields(LocalPackage)})\n", "issue": "archinstall won't start due to TypeError\n# Error\r\nWhen I execute `archinstall` I get the following error:\r\n\r\n```\r\n...\r\nFile \"/usr/lib/python3.10/site-packages/archinstall/lib/packages/packages.py\", line 115, in installed_package\r\n return LocalPackage(**package_info)\r\nTypeError: LocalPackage.__init__() got an unexpected keyword argument 'warning'\r\n```\r\n# What I tried\r\nI looked at the code of `installed_package` and found it executes `pacman -Q --info {package}`, so I tried to execute that with archlinux-keyring to see what the output was like.\r\n\r\nI executed `pacman -Q --info archlinux-keyring` and got the following output\r\n```\r\nwarning: config file /etc/pacman.conf, line 19: directive 'SyncFirst' in section 'options' not recognized\r\nName \t: archlinux-keyring\r\nVersion : 20230225-1\r\n...\r\n```\r\n# Why this seems to be happening\r\n## Code\r\nhttps://github.com/archlinux/archinstall/blob/8f6cc07062968b259bebd346521ef685c16f89dc/archinstall/lib/packages/packages.py#L105-L115\r\n## Explanation \r\nBecause the line `warning: config file /etc/pacman.conf, line 19: directive 'SyncFirst' in section 'options' not recognized` I get as part of the output of `pacman -Q --info {package}` has a colon it is being interpreted as a key value pair.\r\n\r\n# Possible fix\r\nIgnore all lines that start with 'warning'\r\n\r\n\n", "before_files": [{"content": "import json\nimport ssl\nfrom typing import Dict, Any, Tuple, List\nfrom urllib.error import HTTPError\nfrom urllib.parse import urlencode\nfrom urllib.request import urlopen\n\nfrom ..exceptions import PackageError, SysCallError\nfrom ..models.dataclasses import PackageSearch, PackageSearchResult, LocalPackage\nfrom ..pacman import run_pacman\n\nBASE_URL_PKG_SEARCH = 'https://archlinux.org/packages/search/json/'\n# BASE_URL_PKG_CONTENT = 'https://archlinux.org/packages/search/json/'\nBASE_GROUP_URL = 'https://archlinux.org/groups/search/json/'\n\n\ndef _make_request(url: str, params: Dict) -> Any:\n\tssl_context = ssl.create_default_context()\n\tssl_context.check_hostname = False\n\tssl_context.verify_mode = ssl.CERT_NONE\n\n\tencoded = urlencode(params)\n\tfull_url = f'{url}?{encoded}'\n\n\treturn urlopen(full_url, context=ssl_context)\n\n\ndef group_search(name :str) -> List[PackageSearchResult]:\n\t# TODO UPSTREAM: Implement /json/ for the groups search\n\ttry:\n\t\tresponse = _make_request(BASE_GROUP_URL, {'name': name})\n\texcept HTTPError as err:\n\t\tif err.code == 404:\n\t\t\treturn []\n\t\telse:\n\t\t\traise err\n\n\t# Just to be sure some code didn't slip through the exception\n\tdata = response.read().decode('UTF-8')\n\n\treturn [PackageSearchResult(**package) for package in json.loads(data)['results']]\n\n\ndef package_search(package :str) -> PackageSearch:\n\t\"\"\"\n\tFinds a specific package via the package database.\n\tIt makes a simple web-request, which might be a bit slow.\n\t\"\"\"\n\t# TODO UPSTREAM: Implement bulk search, either support name=X&name=Y or split on space (%20 
or ' ')\n\t# TODO: utilize pacman cache first, upstream second.\n\tresponse = _make_request(BASE_URL_PKG_SEARCH, {'name': package})\n\n\tif response.code != 200:\n\t\traise PackageError(f\"Could not locate package: [{response.code}] {response}\")\n\n\tdata = response.read().decode('UTF-8')\n\n\treturn PackageSearch(**json.loads(data))\n\n\ndef find_package(package :str) -> List[PackageSearchResult]:\n\tdata = package_search(package)\n\tresults = []\n\n\tfor result in data.results:\n\t\tif result.pkgname == package:\n\t\t\tresults.append(result)\n\n\t# If we didn't find the package in the search results,\n\t# odds are it's a group package\n\tif not results:\n\t\t# Check if the package is actually a group\n\t\tfor result in group_search(package):\n\t\t\tresults.append(result)\n\n\treturn results\n\n\ndef find_packages(*names :str) -> Dict[str, Any]:\n\t\"\"\"\n\tThis function returns the search results for many packages.\n\tThe function itself is rather slow, so consider not sending to\n\tmany packages to the search query.\n\t\"\"\"\n\tresult = {}\n\tfor package in names:\n\t\tfor found_package in find_package(package):\n\t\t\tresult[package] = found_package\n\n\treturn result\n\n\ndef validate_package_list(packages :list) -> Tuple[list, list]:\n\t\"\"\"\n\tValidates a list of given packages.\n\treturn: Tuple of lists containing valid packavges in the first and invalid\n\tpackages in the second entry\n\t\"\"\"\n\tvalid_packages = {package for package in packages if find_package(package)}\n\tinvalid_packages = set(packages) - valid_packages\n\n\treturn list(valid_packages), list(invalid_packages)\n\n\ndef installed_package(package :str) -> LocalPackage:\n\tpackage_info = {}\n\ttry:\n\t\tfor line in run_pacman(f\"-Q --info {package}\"):\n\t\t\tif b':' in line:\n\t\t\t\tkey, value = line.decode().split(':', 1)\n\t\t\t\tpackage_info[key.strip().lower().replace(' ', '_')] = value.strip()\n\texcept SysCallError:\n\t\tpass\n\n\treturn LocalPackage(**package_info)\n", "path": "archinstall/lib/packages/packages.py"}]} | 2,006 | 123 |
gh_patches_debug_29273 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-577 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Old permalinks not mapped in single page API endpoint
### Describe the Bug
<!-- A clear and concise description of what the bug is. -->
The permalink is checked for correctness in the single page API endpoint, but in #119 we decided not to keep track of old permalinks. Instead, we will just map permalinks based on the last url parameter (the slug) and ignore the rest.
### Steps to Reproduce
1. Copy one page permalink
2. Move that page to another parent to change the permalink
3. Request this page in the single page API endpoint with the old permalink from step 1
### Expected Behavior
<!-- A clear and concise description of what you expected to happen. -->
The page should be found even if the permalink is old
### Actual Behavior
<!-- A clear and concise description of what actually happened. -->
The error `No Page matches the given url or id` is returned.
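A sketch of the slug-only lookup that this expected behaviour implies; the helper calls are taken from the view shown in the code section below, while relaxing the lookup to the slug and language is an assumption about one possible shape of the fix:
```
# inside the `url` branch of single_page(); `request`, `region` and
# `language_code` are the variables already available in that view
url = request.GET.get("url").strip("/")
page_translation_slug = url.split("/")[-1]  # ignore the rest of the permalink
page = get_object_or_404(
    region.pages,
    translations__slug=page_translation_slug,
    translations__language__code=language_code,
)
page_translation = page.get_public_translation(language_code)
if page_translation:
    return JsonResponse(transform_page(page_translation), safe=False)
```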
</issue>
<code>
[start of src/api/v3/single_page.py]
1 """
2 View to return a JSON representation of a single page. The page can
3 be selected via the id or the permalink.
4 """
5 from django.http import JsonResponse, Http404
6 from django.shortcuts import get_object_or_404
7
8 from cms.models import Region, Page
9 from .pages import transform_page
10
11
12 # pylint: disable=unused-argument
13 def single_page(request, region_slug, language_code):
14 """
15 View function returning the desired page as a JSON or a 404 if the
16 requested page does not exist.
17
18 :param request: The request that has been sent to the Django server
19 :type request: ~django.http.HttpRequest
20
21 :param region_slug: Slug defining the region
22 :type region_slug: str
23
24 :param language_code: Code to identify the desired language
25 :type language_code: str
26
27 :raises ~django.http.Http404: HTTP status 404 if the request is malformed or no page with the given id or url exists.
28
29 :return: Return a JSON with the requested page and a HTTP status 200.
30 :rtype: ~django.http.JsonResponse
31 """
32 region = Region.get_current_region(request)
33
34 if request.GET.get("id"):
35 page = get_object_or_404(region.pages, id=request.GET.get("id"))
36 page_translation = page.get_public_translation(language_code)
37 if page_translation:
38 return JsonResponse(transform_page(page_translation), safe=False)
39
40 elif request.GET.get("url"):
41 # Strip leading and trailing slashes to avoid ambiguous urls
42 url = request.GET.get("url").strip("/")
43 # Get potential page candidate by only filtering for the translation slug
44 page = get_object_or_404(
45 Page, region=region, translations__slug=url.split("/")[-1]
46 )
47 # Get most recent public revision of the page
48 page_translation = page.get_public_translation(language_code)
49 # Check if the whole path is correct, not only the slug
50 # TODO: Once we have a permalink mapping of old versions, we also have to check whether the permalink was valid in the past
51 if page_translation.permalink == url:
52 return JsonResponse(transform_page(page_translation), safe=False)
53
54 raise Http404("No Page matches the given url or id.")
55
[end of src/api/v3/single_page.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/api/v3/single_page.py b/src/api/v3/single_page.py
--- a/src/api/v3/single_page.py
+++ b/src/api/v3/single_page.py
@@ -5,7 +5,7 @@
from django.http import JsonResponse, Http404
from django.shortcuts import get_object_or_404
-from cms.models import Region, Page
+from cms.models import Region
from .pages import transform_page
@@ -40,15 +40,17 @@
elif request.GET.get("url"):
# Strip leading and trailing slashes to avoid ambiguous urls
url = request.GET.get("url").strip("/")
- # Get potential page candidate by only filtering for the translation slug
+ # The last path component of the url is the page translation slug
+ page_translation_slug = url.split("/")[-1]
+ # Get page by filtering for translation slug and translation language code
page = get_object_or_404(
- Page, region=region, translations__slug=url.split("/")[-1]
+ region.pages,
+ translations__slug=page_translation_slug,
+ translations__language__code=language_code,
)
# Get most recent public revision of the page
page_translation = page.get_public_translation(language_code)
- # Check if the whole path is correct, not only the slug
- # TODO: Once we have a permalink mapping of old versions, we also have to check whether the permalink was valid in the past
- if page_translation.permalink == url:
+ if page_translation:
return JsonResponse(transform_page(page_translation), safe=False)
raise Http404("No Page matches the given url or id.")
| {"golden_diff": "diff --git a/src/api/v3/single_page.py b/src/api/v3/single_page.py\n--- a/src/api/v3/single_page.py\n+++ b/src/api/v3/single_page.py\n@@ -5,7 +5,7 @@\n from django.http import JsonResponse, Http404\n from django.shortcuts import get_object_or_404\n \n-from cms.models import Region, Page\n+from cms.models import Region\n from .pages import transform_page\n \n \n@@ -40,15 +40,17 @@\n elif request.GET.get(\"url\"):\n # Strip leading and trailing slashes to avoid ambiguous urls\n url = request.GET.get(\"url\").strip(\"/\")\n- # Get potential page candidate by only filtering for the translation slug\n+ # The last path component of the url is the page translation slug\n+ page_translation_slug = url.split(\"/\")[-1]\n+ # Get page by filtering for translation slug and translation language code\n page = get_object_or_404(\n- Page, region=region, translations__slug=url.split(\"/\")[-1]\n+ region.pages,\n+ translations__slug=page_translation_slug,\n+ translations__language__code=language_code,\n )\n # Get most recent public revision of the page\n page_translation = page.get_public_translation(language_code)\n- # Check if the whole path is correct, not only the slug\n- # TODO: Once we have a permalink mapping of old versions, we also have to check whether the permalink was valid in the past\n- if page_translation.permalink == url:\n+ if page_translation:\n return JsonResponse(transform_page(page_translation), safe=False)\n \n raise Http404(\"No Page matches the given url or id.\")\n", "issue": "Old permalinks not mapped in single page API endpoint\n### Describe the Bug\r\n<!-- A clear and concise description of what the bug is. -->\r\nThe permalink is checked for correctness in the single page API endpoint, but in #119 we decided not to keep track of old permalinks. Instead, we will just map permalinks based on the last url parameter (the slug) and ignore the rest.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Copy one page permalink\r\n2. Move that page to another parent to change the permalink\r\n3. Request this page in the single page API endpoint with the old permalink from step 1\r\n\r\n### Expected Behavior\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nThe page should be found even if the permalink is old\r\n\r\n### Actual Behavior\r\n<!-- A clear and concise description of what actually happened. -->\r\nThe error `No Page matches the given url or id` is returned.\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nView to return a JSON representation of a single page. 
The page can\nbe selected via the id or the permalink.\n\"\"\"\nfrom django.http import JsonResponse, Http404\nfrom django.shortcuts import get_object_or_404\n\nfrom cms.models import Region, Page\nfrom .pages import transform_page\n\n\n# pylint: disable=unused-argument\ndef single_page(request, region_slug, language_code):\n \"\"\"\n View function returning the desired page as a JSON or a 404 if the\n requested page does not exist.\n\n :param request: The request that has been sent to the Django server\n :type request: ~django.http.HttpRequest\n\n :param region_slug: Slug defining the region\n :type region_slug: str\n\n :param language_code: Code to identify the desired language\n :type language_code: str\n\n :raises ~django.http.Http404: HTTP status 404 if the request is malformed or no page with the given id or url exists.\n\n :return: Return a JSON with the requested page and a HTTP status 200.\n :rtype: ~django.http.JsonResponse\n \"\"\"\n region = Region.get_current_region(request)\n\n if request.GET.get(\"id\"):\n page = get_object_or_404(region.pages, id=request.GET.get(\"id\"))\n page_translation = page.get_public_translation(language_code)\n if page_translation:\n return JsonResponse(transform_page(page_translation), safe=False)\n\n elif request.GET.get(\"url\"):\n # Strip leading and trailing slashes to avoid ambiguous urls\n url = request.GET.get(\"url\").strip(\"/\")\n # Get potential page candidate by only filtering for the translation slug\n page = get_object_or_404(\n Page, region=region, translations__slug=url.split(\"/\")[-1]\n )\n # Get most recent public revision of the page\n page_translation = page.get_public_translation(language_code)\n # Check if the whole path is correct, not only the slug\n # TODO: Once we have a permalink mapping of old versions, we also have to check whether the permalink was valid in the past\n if page_translation.permalink == url:\n return JsonResponse(transform_page(page_translation), safe=False)\n\n raise Http404(\"No Page matches the given url or id.\")\n", "path": "src/api/v3/single_page.py"}]} | 1,329 | 367 |
gh_patches_debug_21571 | rasdani/github-patches | git_diff | e-valuation__EvaP-1805 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove sass + ts compilation from ./manage.py run
These slow down starting the development server.
For developers actually changing TS or SASS files, they are not helpful enough, and those will likely start a `--watch` job anyway. We can simply provide a `./manage.py watch` script for this use case.
For anyone else, they simply slow down the development server startup, which is a bit annoying.
Opinions?
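A hypothetical sketch of such a command (evap does not necessarily ship this; the `--watch` flags and the module layout mirror the existing `run` command and are assumptions):
```
from subprocess import Popen  # nosec

from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Recompile scss/ts sources on change during development"

    def handle(self, *args, **options):
        self.stdout.write('Executing "manage.py scss --watch" and "manage.py ts compile --watch"')
        # Popen's context manager waits for both watchers, so this blocks until Ctrl-C
        with Popen(["./manage.py", "scss", "--watch"]), Popen(["./manage.py", "ts", "compile", "--watch"]):  # nosec
            pass
```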
</issue>
<code>
[start of evap/development/management/commands/run.py]
1 import sys
2
3 from django.core.management import execute_from_command_line
4 from django.core.management.base import BaseCommand
5
6
7 class Command(BaseCommand):
8 args = ""
9 help = 'Execute "runserver 0.0.0.0:8000"'
10
11 def handle(self, *args, **options):
12 self.stdout.write('Executing "manage.py scss"')
13 execute_from_command_line(["manage.py", "scss"])
14 self.stdout.write('Executing "manage.py ts compile"')
15 execute_from_command_line(["manage.py", "ts", "compile"])
16 self.stdout.write('Executing "manage.py runserver 0.0.0.0:8000"')
17 sys.argv = ["manage.py", "runserver", "0.0.0.0:8000"]
18 execute_from_command_line(sys.argv)
19
[end of evap/development/management/commands/run.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/evap/development/management/commands/run.py b/evap/development/management/commands/run.py
--- a/evap/development/management/commands/run.py
+++ b/evap/development/management/commands/run.py
@@ -1,4 +1,5 @@
import sys
+from subprocess import Popen # nosec
from django.core.management import execute_from_command_line
from django.core.management.base import BaseCommand
@@ -9,10 +10,8 @@
help = 'Execute "runserver 0.0.0.0:8000"'
def handle(self, *args, **options):
- self.stdout.write('Executing "manage.py scss"')
- execute_from_command_line(["manage.py", "scss"])
- self.stdout.write('Executing "manage.py ts compile"')
- execute_from_command_line(["manage.py", "ts", "compile"])
- self.stdout.write('Executing "manage.py runserver 0.0.0.0:8000"')
- sys.argv = ["manage.py", "runserver", "0.0.0.0:8000"]
- execute_from_command_line(sys.argv)
+ self.stdout.write('Executing "manage.py scss" and "manage.py ts compile"')
+ with Popen(["./manage.py", "scss"]), Popen(["./manage.py", "ts", "compile"]): # nosec
+ self.stdout.write('Executing "manage.py runserver 0.0.0.0:8000"')
+ sys.argv = ["manage.py", "runserver", "0.0.0.0:8000"]
+ execute_from_command_line(sys.argv)
| {"golden_diff": "diff --git a/evap/development/management/commands/run.py b/evap/development/management/commands/run.py\n--- a/evap/development/management/commands/run.py\n+++ b/evap/development/management/commands/run.py\n@@ -1,4 +1,5 @@\n import sys\n+from subprocess import Popen # nosec\n \n from django.core.management import execute_from_command_line\n from django.core.management.base import BaseCommand\n@@ -9,10 +10,8 @@\n help = 'Execute \"runserver 0.0.0.0:8000\"'\n \n def handle(self, *args, **options):\n- self.stdout.write('Executing \"manage.py scss\"')\n- execute_from_command_line([\"manage.py\", \"scss\"])\n- self.stdout.write('Executing \"manage.py ts compile\"')\n- execute_from_command_line([\"manage.py\", \"ts\", \"compile\"])\n- self.stdout.write('Executing \"manage.py runserver 0.0.0.0:8000\"')\n- sys.argv = [\"manage.py\", \"runserver\", \"0.0.0.0:8000\"]\n- execute_from_command_line(sys.argv)\n+ self.stdout.write('Executing \"manage.py scss\" and \"manage.py ts compile\"')\n+ with Popen([\"./manage.py\", \"scss\"]), Popen([\"./manage.py\", \"ts\", \"compile\"]): # nosec\n+ self.stdout.write('Executing \"manage.py runserver 0.0.0.0:8000\"')\n+ sys.argv = [\"manage.py\", \"runserver\", \"0.0.0.0:8000\"]\n+ execute_from_command_line(sys.argv)\n", "issue": "Remove sass + ts compilation from ./manage.py run\nThese slow down starting the development server.\r\n\r\nFor developers actually changing TS or SASS files, they are not helpful enough, and those will likely start a `--watch` job anyway. We can simply provide a \u00b4./manage.py watch` script for this use case.\r\n\r\nFor anyone else, they simply slow down the development server startup, which is a bit annoying.\r\n\r\nOpinions?\n", "before_files": [{"content": "import sys\n\nfrom django.core.management import execute_from_command_line\nfrom django.core.management.base import BaseCommand\n\n\nclass Command(BaseCommand):\n args = \"\"\n help = 'Execute \"runserver 0.0.0.0:8000\"'\n\n def handle(self, *args, **options):\n self.stdout.write('Executing \"manage.py scss\"')\n execute_from_command_line([\"manage.py\", \"scss\"])\n self.stdout.write('Executing \"manage.py ts compile\"')\n execute_from_command_line([\"manage.py\", \"ts\", \"compile\"])\n self.stdout.write('Executing \"manage.py runserver 0.0.0.0:8000\"')\n sys.argv = [\"manage.py\", \"runserver\", \"0.0.0.0:8000\"]\n execute_from_command_line(sys.argv)\n", "path": "evap/development/management/commands/run.py"}]} | 844 | 382 |
gh_patches_debug_562 | rasdani/github-patches | git_diff | mabel-dev__opteryx-1641 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
🪲 Python 3.9 tests stalling
### Thank you for taking the time to report a problem with Opteryx.
_To help us to respond to your request we ask that you try to provide the below detail about the bug._
**Describe the bug** _A clear and specific description of what the bug is. What the error, incorrect or unexpected behaviour was._
**Expected behaviour** _A clear and concise description of what you expected to happen._
**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._
~~~sql
~~~
**Additional context** _Add any other context about the problem here, for example what you have done to try to diagnose or workaround the problem._
</issue>
<code>
[start of opteryx/__version__.py]
1 __build__ = 477
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Store the version here so:
17 1) we don't load dependencies by storing it in __init__.py
18 2) we can import it in setup.py for the same reason
19 """
20 from enum import Enum # isort: skip
21
22
23 class VersionStatus(Enum):
24 ALPHA = "alpha"
25 BETA = "beta"
26 RELEASE = "release"
27
28
29 _major = 0
30 _minor = 15
31 _revision = 0
32 _status = VersionStatus.BETA
33
34 __author__ = "@joocer"
35 __version__ = f"{_major}.{_minor}.{_revision}" + (
36 f"-{_status.value}.{__build__}" if _status != VersionStatus.RELEASE else ""
37 )
38
[end of opteryx/__version__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opteryx/__version__.py b/opteryx/__version__.py
--- a/opteryx/__version__.py
+++ b/opteryx/__version__.py
@@ -1,4 +1,4 @@
-__build__ = 477
+__build__ = 482
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
| {"golden_diff": "diff --git a/opteryx/__version__.py b/opteryx/__version__.py\n--- a/opteryx/__version__.py\n+++ b/opteryx/__version__.py\n@@ -1,4 +1,4 @@\n-__build__ = 477\n+__build__ = 482\n \n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\n", "issue": "\ud83e\udeb2 Python 3.9 tests stalling\n### Thank you for taking the time to report a problem with Opteryx.\r\n_To help us to respond to your request we ask that you try to provide the below detail about the bug._\r\n\r\n**Describe the bug** _A clear and specific description of what the bug is. What the error, incorrect or unexpected behaviour was._\r\n\r\n\r\n**Expected behaviour** _A clear and concise description of what you expected to happen._\r\n\r\n\r\n**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._\r\n\r\n~~~sql\r\n\r\n~~~\r\n\r\n**Additional context** _Add any other context about the problem here, for example what you have done to try to diagnose or workaround the problem._\r\n\n", "before_files": [{"content": "__build__ = 477\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nStore the version here so:\n1) we don't load dependencies by storing it in __init__.py\n2) we can import it in setup.py for the same reason\n\"\"\"\nfrom enum import Enum # isort: skip\n\n\nclass VersionStatus(Enum):\n ALPHA = \"alpha\"\n BETA = \"beta\"\n RELEASE = \"release\"\n\n\n_major = 0\n_minor = 15\n_revision = 0\n_status = VersionStatus.BETA\n\n__author__ = \"@joocer\"\n__version__ = f\"{_major}.{_minor}.{_revision}\" + (\n f\"-{_status.value}.{__build__}\" if _status != VersionStatus.RELEASE else \"\"\n)\n", "path": "opteryx/__version__.py"}]} | 1,058 | 102 |
gh_patches_debug_36740 | rasdani/github-patches | git_diff | plotly__dash-1970 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Dropdown: Selected options not showing when the `value` contains a comma
As [reported on the forum by @ marcus](https://community.plotly.com/t/dcc-dropdown-bug-suspected-please-confirm-or-correct-me/60585):
Problem is in this spot: 'value': "50 , 10"
Specifically, a comma in the `value` string prevents that selection from being displayed. When the comma is removed or replaced (a dot causes no issue) it works fine, but when a comma is within the string, the selections are not displayed in the input box and are not removed from the `options`.
This only occurs in Multi-Value dropdowns. This is new in Dash V2.1.0; it works as expected in V2.0.0.
```
from dash import Dash, dcc, html, Input, Output
app = Dash(__name__)
app.layout = html.Div(
[
dcc.Dropdown(
id="dropdown",
className="inputbox-long",
options=[
{"label": "New York City", "value": "50, 10"},
{"label": "Montreal", "value": "MTL" },
{"label": "San Francisco", "value": "SF"},
],
placeholder="Select one or more",
multi=True,
),
html.Div(id="output"),
]
)
@app.callback(
Output("output", "children"), Input("dropdown", "value"),
)
def update(value):
return value
if __name__ == "__main__":
app.run_server(debug=True)
```

</issue>
<code>
[start of components/dash-html-components/dash_html_components_base/__init__.py]
1 """Vanilla HTML components for Dash"""
2
3 from ._imports_ import * # noqa: E402, F401, F403
4 from ._imports_ import __all__ # noqa: E402
5
6 import json
7 import os as _os
8 import sys as _sys
9 import dash as _dash
10
11 _basepath = _os.path.dirname(__file__)
12 _filepath = _os.path.abspath(_os.path.join(_basepath, "package-info.json"))
13 with open(_filepath) as f:
14 package = json.load(f)
15
16 package_name = package["name"].replace(" ", "_").replace("-", "_")
17 __version__ = package["version"]
18
19
20 # Module imports trigger a dash.development import, need to check this first
21 if not hasattr(_dash, "__plotly_dash") and not hasattr(_dash, "development"):
22 print(
23 "Dash was not successfully imported. Make sure you don't have a file "
24 "named \n'dash.py' in your current directory.",
25 file=_sys.stderr,
26 )
27 _sys.exit(1)
28
29 _current_path = _os.path.dirname(_os.path.abspath(__file__))
30
31
32 _this_module = "dash_html_components"
33
34 _js_dist = [
35 {
36 "relative_package_path": 'html/{}.min.js'.format(_this_module),
37 "external_url": (
38 "https://unpkg.com/dash-html-components@{}"
39 "/dash_html_components/dash_html_components.min.js"
40 ).format(__version__),
41 "namespace": "dash"
42 },
43 {
44 'relative_package_path': 'html/{}.min.js.map'.format(_this_module),
45 'external_url': (
46 'https://unpkg.com/dash-html-components@{}'
47 '/dash_html_components/dash_html_components.min.js.map'
48 ).format(__version__),
49 'namespace': 'dash',
50 'dynamic': True
51 }
52 ]
53
54 _css_dist = []
55
56
57 for _component in __all__:
58 setattr(locals()[_component], '_js_dist', _js_dist)
59 setattr(locals()[_component], '_css_dist', _css_dist)
60
[end of components/dash-html-components/dash_html_components_base/__init__.py]
[start of components/dash-html-components/setup.py]
1 import io
2 import json
3 from setuptools import setup
4
5 with open('package.json') as f:
6 package = json.load(f)
7
8 package_name = str(package["name"].replace(" ", "_").replace("-", "_"))
9
10 setup(
11 name='dash_html_components',
12 version=package["version"],
13 author=package['author'],
14 author_email='[email protected]',
15 packages=[package_name],
16 url='https://github.com/plotly/dash-html-components',
17 include_package_data=True,
18 license=package['license'],
19 description=package['description'] if 'description' in package else package_name,
20 long_description=io.open('README.md', encoding='utf-8').read(),
21 long_description_content_type='text/markdown',
22 install_requires=[]
23 )
24
[end of components/dash-html-components/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/components/dash-html-components/dash_html_components_base/__init__.py b/components/dash-html-components/dash_html_components_base/__init__.py
--- a/components/dash-html-components/dash_html_components_base/__init__.py
+++ b/components/dash-html-components/dash_html_components_base/__init__.py
@@ -33,27 +33,27 @@
_js_dist = [
{
- "relative_package_path": 'html/{}.min.js'.format(_this_module),
+ "relative_package_path": "html/{}.min.js".format(_this_module),
"external_url": (
"https://unpkg.com/dash-html-components@{}"
"/dash_html_components/dash_html_components.min.js"
).format(__version__),
- "namespace": "dash"
+ "namespace": "dash",
},
{
- 'relative_package_path': 'html/{}.min.js.map'.format(_this_module),
- 'external_url': (
- 'https://unpkg.com/dash-html-components@{}'
- '/dash_html_components/dash_html_components.min.js.map'
+ "relative_package_path": "html/{}.min.js.map".format(_this_module),
+ "external_url": (
+ "https://unpkg.com/dash-html-components@{}"
+ "/dash_html_components/dash_html_components.min.js.map"
).format(__version__),
- 'namespace': 'dash',
- 'dynamic': True
- }
+ "namespace": "dash",
+ "dynamic": True,
+ },
]
_css_dist = []
for _component in __all__:
- setattr(locals()[_component], '_js_dist', _js_dist)
- setattr(locals()[_component], '_css_dist', _css_dist)
+ setattr(locals()[_component], "_js_dist", _js_dist)
+ setattr(locals()[_component], "_css_dist", _css_dist)
diff --git a/components/dash-html-components/setup.py b/components/dash-html-components/setup.py
--- a/components/dash-html-components/setup.py
+++ b/components/dash-html-components/setup.py
@@ -2,22 +2,22 @@
import json
from setuptools import setup
-with open('package.json') as f:
+with open("package.json") as f:
package = json.load(f)
package_name = str(package["name"].replace(" ", "_").replace("-", "_"))
setup(
- name='dash_html_components',
+ name="dash_html_components",
version=package["version"],
- author=package['author'],
- author_email='[email protected]',
+ author=package["author"],
+ author_email="[email protected]",
packages=[package_name],
- url='https://github.com/plotly/dash-html-components',
+ url="https://github.com/plotly/dash-html-components",
include_package_data=True,
- license=package['license'],
- description=package['description'] if 'description' in package else package_name,
- long_description=io.open('README.md', encoding='utf-8').read(),
- long_description_content_type='text/markdown',
- install_requires=[]
+ license=package["license"],
+ description=package["description"] if "description" in package else package_name,
+ long_description=io.open("README.md", encoding="utf-8").read(),
+ long_description_content_type="text/markdown",
+ install_requires=[],
)
| {"golden_diff": "diff --git a/components/dash-html-components/dash_html_components_base/__init__.py b/components/dash-html-components/dash_html_components_base/__init__.py\n--- a/components/dash-html-components/dash_html_components_base/__init__.py\n+++ b/components/dash-html-components/dash_html_components_base/__init__.py\n@@ -33,27 +33,27 @@\n \n _js_dist = [\n {\n- \"relative_package_path\": 'html/{}.min.js'.format(_this_module),\n+ \"relative_package_path\": \"html/{}.min.js\".format(_this_module),\n \"external_url\": (\n \"https://unpkg.com/dash-html-components@{}\"\n \"/dash_html_components/dash_html_components.min.js\"\n ).format(__version__),\n- \"namespace\": \"dash\"\n+ \"namespace\": \"dash\",\n },\n {\n- 'relative_package_path': 'html/{}.min.js.map'.format(_this_module),\n- 'external_url': (\n- 'https://unpkg.com/dash-html-components@{}'\n- '/dash_html_components/dash_html_components.min.js.map'\n+ \"relative_package_path\": \"html/{}.min.js.map\".format(_this_module),\n+ \"external_url\": (\n+ \"https://unpkg.com/dash-html-components@{}\"\n+ \"/dash_html_components/dash_html_components.min.js.map\"\n ).format(__version__),\n- 'namespace': 'dash',\n- 'dynamic': True\n- }\n+ \"namespace\": \"dash\",\n+ \"dynamic\": True,\n+ },\n ]\n \n _css_dist = []\n \n \n for _component in __all__:\n- setattr(locals()[_component], '_js_dist', _js_dist)\n- setattr(locals()[_component], '_css_dist', _css_dist)\n+ setattr(locals()[_component], \"_js_dist\", _js_dist)\n+ setattr(locals()[_component], \"_css_dist\", _css_dist)\ndiff --git a/components/dash-html-components/setup.py b/components/dash-html-components/setup.py\n--- a/components/dash-html-components/setup.py\n+++ b/components/dash-html-components/setup.py\n@@ -2,22 +2,22 @@\n import json\n from setuptools import setup\n \n-with open('package.json') as f:\n+with open(\"package.json\") as f:\n package = json.load(f)\n \n package_name = str(package[\"name\"].replace(\" \", \"_\").replace(\"-\", \"_\"))\n \n setup(\n- name='dash_html_components',\n+ name=\"dash_html_components\",\n version=package[\"version\"],\n- author=package['author'],\n- author_email='[email protected]',\n+ author=package[\"author\"],\n+ author_email=\"[email protected]\",\n packages=[package_name],\n- url='https://github.com/plotly/dash-html-components',\n+ url=\"https://github.com/plotly/dash-html-components\",\n include_package_data=True,\n- license=package['license'],\n- description=package['description'] if 'description' in package else package_name,\n- long_description=io.open('README.md', encoding='utf-8').read(),\n- long_description_content_type='text/markdown',\n- install_requires=[]\n+ license=package[\"license\"],\n+ description=package[\"description\"] if \"description\" in package else package_name,\n+ long_description=io.open(\"README.md\", encoding=\"utf-8\").read(),\n+ long_description_content_type=\"text/markdown\",\n+ install_requires=[],\n )\n", "issue": "[BUG] Dropdown: Selected options not showing when the `value` contains a comma \nAs [reported on the forum by @ marcus](https://community.plotly.com/t/dcc-dropdown-bug-suspected-please-confirm-or-correct-me/60585):\r\n\r\nProblem is in this spot: 'value': \"50 , 10\"\r\nSpecifically comma in `value` string makes that selection is not being displayed. When coma is removed or replaced (dot creates no issue) it works fine, but when coma is within the string, the selections are not displayed in the input box and are not removed from the `options`\r\n\r\n This only occurs in Multi-Value dropdowns. 
This is new in Dash V2.1.0 It works as expected in V2.0.0\r\n\r\n\r\n\r\n\r\n```\r\nfrom dash import Dash, dcc, html, Input, Output\r\n\r\napp = Dash(__name__)\r\n\r\napp.layout = html.Div(\r\n [\r\n dcc.Dropdown(\r\n id=\"dropdown\",\r\n className=\"inputbox-long\",\r\n options=[\r\n {\"label\": \"New York City\", \"value\": \"50, 10\"},\r\n {\"label\": \"Montreal\", \"value\": \"MTL\" },\r\n {\"label\": \"San Francisco\", \"value\": \"SF\"},\r\n ],\r\n placeholder=\"Select one or more\",\r\n multi=True,\r\n ),\r\n html.Div(id=\"output\"),\r\n ]\r\n)\r\n\r\n\r\[email protected](\r\n Output(\"output\", \"children\"), Input(\"dropdown\", \"value\"),\r\n)\r\ndef update(value): \r\n return value\r\n\r\n\r\nif __name__ == \"__main__\":\r\n app.run_server(debug=True)\r\n```\r\n\r\n\n", "before_files": [{"content": "\"\"\"Vanilla HTML components for Dash\"\"\"\n\nfrom ._imports_ import * # noqa: E402, F401, F403\nfrom ._imports_ import __all__ # noqa: E402\n\nimport json\nimport os as _os\nimport sys as _sys\nimport dash as _dash\n\n_basepath = _os.path.dirname(__file__)\n_filepath = _os.path.abspath(_os.path.join(_basepath, \"package-info.json\"))\nwith open(_filepath) as f:\n package = json.load(f)\n\npackage_name = package[\"name\"].replace(\" \", \"_\").replace(\"-\", \"_\")\n__version__ = package[\"version\"]\n\n\n# Module imports trigger a dash.development import, need to check this first\nif not hasattr(_dash, \"__plotly_dash\") and not hasattr(_dash, \"development\"):\n print(\n \"Dash was not successfully imported. Make sure you don't have a file \"\n \"named \\n'dash.py' in your current directory.\",\n file=_sys.stderr,\n )\n _sys.exit(1)\n\n_current_path = _os.path.dirname(_os.path.abspath(__file__))\n\n\n_this_module = \"dash_html_components\"\n\n_js_dist = [\n {\n \"relative_package_path\": 'html/{}.min.js'.format(_this_module),\n \"external_url\": (\n \"https://unpkg.com/dash-html-components@{}\"\n \"/dash_html_components/dash_html_components.min.js\"\n ).format(__version__),\n \"namespace\": \"dash\"\n },\n {\n 'relative_package_path': 'html/{}.min.js.map'.format(_this_module),\n 'external_url': (\n 'https://unpkg.com/dash-html-components@{}'\n '/dash_html_components/dash_html_components.min.js.map'\n ).format(__version__),\n 'namespace': 'dash',\n 'dynamic': True\n }\n]\n\n_css_dist = []\n\n\nfor _component in __all__:\n setattr(locals()[_component], '_js_dist', _js_dist)\n setattr(locals()[_component], '_css_dist', _css_dist)\n", "path": "components/dash-html-components/dash_html_components_base/__init__.py"}, {"content": "import io\nimport json\nfrom setuptools import setup\n\nwith open('package.json') as f:\n package = json.load(f)\n\npackage_name = str(package[\"name\"].replace(\" \", \"_\").replace(\"-\", \"_\"))\n\nsetup(\n name='dash_html_components',\n version=package[\"version\"],\n author=package['author'],\n author_email='[email protected]',\n packages=[package_name],\n url='https://github.com/plotly/dash-html-components',\n include_package_data=True,\n license=package['license'],\n description=package['description'] if 'description' in package else package_name,\n long_description=io.open('README.md', encoding='utf-8').read(),\n long_description_content_type='text/markdown',\n install_requires=[]\n)\n", "path": "components/dash-html-components/setup.py"}]} | 1,757 | 765 |
gh_patches_debug_15670 | rasdani/github-patches | git_diff | Mailu__Mailu-1925 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Optimize Rainloop image
## Before you open your issue
- [X] Check if no issue or pull-request for this already exists.
- [X] Check [documentation](https://mailu.io/master/) and [FAQ](https://mailu.io/master/faq.html). (Tip, use the search function on the documentation page)
- [X] You understand `Mailu` is made by volunteers in their **free time** — be concise, civil and accept that delays can occur.
- [X] The title of the issue should be short and simple. It should contain specific terms related to the actual issue. Be specific while writing the title.
## Environment & Versions
### Environment
- [X] docker-compose
- [X] kubernetes
- [X] docker swarm
### Versions
Master
```
$> docker images
mailu/rainloop master 2ad8d1c29ff3 45 hours ago 607MB
```
Optimized version using NGINX:
```
user/rainloop master 7de9dee9286d 2 seconds ago 116MB
```
## Description
In order to resolve issues #1830 and #1200, it is required to reduce the build time, complexity and size of the images before multi-architecture builds can be supported. The current Rainloop image is 607MB and can be optimized by changing from Apache to NGINX with officially supported Docker images. This can also increase overall web performance.
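On the Python side the switch mainly means handing the container process over to nginx instead of Apache at the end of `webmails/rainloop/start.py`; a sketch of that change (user names and paths are assumptions that depend on the chosen base image):
```
os.system("chown -R nginx:nginx /data")
os.system("chmod -R a+rX /var/www/rainloop/")
# replaces the apache2-foreground exec
os.execv("/usr/sbin/nginx", ["nginx", "-g", "daemon off;"])
```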
</issue>
<code>
[start of webmails/rainloop/start.py]
1 #!/usr/bin/python3
2
3 import os
4 import shutil
5 import logging as log
6 import sys
7 from socrate import system, conf
8
9 log.basicConfig(stream=sys.stderr, level=os.environ.get("LOG_LEVEL", "WARNING"))
10
11 # Actual startup script
12 os.environ["FRONT_ADDRESS"] = system.resolve_address(os.environ.get("HOST_FRONT", "front"))
13 os.environ["IMAP_ADDRESS"] = system.resolve_address(os.environ.get("HOST_IMAP", "imap"))
14
15 os.environ["MAX_FILESIZE"] = str(int(int(os.environ.get("MESSAGE_SIZE_LIMIT"))*0.66/1048576))
16
17 base = "/data/_data_/_default_/"
18 shutil.rmtree(base + "domains/", ignore_errors=True)
19 os.makedirs(base + "domains", exist_ok=True)
20 os.makedirs(base + "configs", exist_ok=True)
21
22 conf.jinja("/default.ini", os.environ, "/data/_data_/_default_/domains/default.ini")
23 conf.jinja("/application.ini", os.environ, "/data/_data_/_default_/configs/application.ini")
24 conf.jinja("/php.ini", os.environ, "/usr/local/etc/php/conf.d/rainloop.ini")
25
26 os.system("chown -R www-data:www-data /data")
27 os.system("chmod -R a+rX /var/www/html/")
28
29 os.execv("/usr/local/bin/apache2-foreground", ["apache2-foreground"])
30
31
[end of webmails/rainloop/start.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/webmails/rainloop/start.py b/webmails/rainloop/start.py
--- a/webmails/rainloop/start.py
+++ b/webmails/rainloop/start.py
@@ -19,12 +19,11 @@
os.makedirs(base + "domains", exist_ok=True)
os.makedirs(base + "configs", exist_ok=True)
-conf.jinja("/default.ini", os.environ, "/data/_data_/_default_/domains/default.ini")
-conf.jinja("/application.ini", os.environ, "/data/_data_/_default_/configs/application.ini")
-conf.jinja("/php.ini", os.environ, "/usr/local/etc/php/conf.d/rainloop.ini")
+conf.jinja("/defaults/default.ini", os.environ, "/data/_data_/_default_/domains/default.ini")
+conf.jinja("/defaults/application.ini", os.environ, "/data/_data_/_default_/configs/application.ini")
+conf.jinja("/defaults/php.ini", os.environ, "/etc/php7/php.ini")
-os.system("chown -R www-data:www-data /data")
-os.system("chmod -R a+rX /var/www/html/")
-
-os.execv("/usr/local/bin/apache2-foreground", ["apache2-foreground"])
+os.system("chown -R nginx:nginx /data")
+os.system("chmod -R a+rX /var/www/rainloop/")
+os.execv("/usr/sbin/nginx", ["nginx", "-g", "daemon off;"])
| {"golden_diff": "diff --git a/webmails/rainloop/start.py b/webmails/rainloop/start.py\n--- a/webmails/rainloop/start.py\n+++ b/webmails/rainloop/start.py\n@@ -19,12 +19,11 @@\n os.makedirs(base + \"domains\", exist_ok=True)\n os.makedirs(base + \"configs\", exist_ok=True)\n \n-conf.jinja(\"/default.ini\", os.environ, \"/data/_data_/_default_/domains/default.ini\")\n-conf.jinja(\"/application.ini\", os.environ, \"/data/_data_/_default_/configs/application.ini\")\n-conf.jinja(\"/php.ini\", os.environ, \"/usr/local/etc/php/conf.d/rainloop.ini\")\n+conf.jinja(\"/defaults/default.ini\", os.environ, \"/data/_data_/_default_/domains/default.ini\")\n+conf.jinja(\"/defaults/application.ini\", os.environ, \"/data/_data_/_default_/configs/application.ini\")\n+conf.jinja(\"/defaults/php.ini\", os.environ, \"/etc/php7/php.ini\")\n \n-os.system(\"chown -R www-data:www-data /data\")\n-os.system(\"chmod -R a+rX /var/www/html/\")\n-\n-os.execv(\"/usr/local/bin/apache2-foreground\", [\"apache2-foreground\"])\n+os.system(\"chown -R nginx:nginx /data\")\n+os.system(\"chmod -R a+rX /var/www/rainloop/\")\n \n+os.execv(\"/usr/sbin/nginx\", [\"nginx\", \"-g\", \"daemon off;\"])\n", "issue": "Optimize Rainloop image\n## Before you open your issue\r\n- [X] Check if no issue or pull-request for this already exists.\r\n- [X] Check [documentation](https://mailu.io/master/) and [FAQ](https://mailu.io/master/faq.html). (Tip, use the search function on the documentation page)\r\n- [X] You understand `Mailu` is made by volunteers in their **free time** \u2014 be conscise, civil and accept that delays can occur.\r\n- [X] The title of the issue should be short and simple. It should contain specific terms related to the actual issue. Be specific while writing the title.\r\n\r\n## Environment & Versions\r\n### Environment\r\n - [X] docker-compose\r\n - [X] kubernetes\r\n - [X] docker swarm\r\n\r\n### Versions\r\nMaster\r\n```\r\n$> docker images\r\nmailu/rainloop master 2ad8d1c29ff3 45 hours ago 607MB\r\n```\r\nOptimized version using NGINX:\r\n```\r\nuser/rainloop master 7de9dee9286d 2 seconds ago 116MB\r\n```\r\n\r\n## Description\r\nIn order to resolve issues #1830 and #1200, it is required to reduce build time, complexity and image size of images before multi-architecture builds can be supported. The current Rainloop image size is 607MB and can be optimized when changing from Apache to NGINX with officially supported Docker images. 
This can also increase overall web performance.\n", "before_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport shutil\nimport logging as log\nimport sys\nfrom socrate import system, conf\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n\n# Actual startup script\nos.environ[\"FRONT_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_FRONT\", \"front\"))\nos.environ[\"IMAP_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_IMAP\", \"imap\"))\n\nos.environ[\"MAX_FILESIZE\"] = str(int(int(os.environ.get(\"MESSAGE_SIZE_LIMIT\"))*0.66/1048576))\n\nbase = \"/data/_data_/_default_/\"\nshutil.rmtree(base + \"domains/\", ignore_errors=True)\nos.makedirs(base + \"domains\", exist_ok=True)\nos.makedirs(base + \"configs\", exist_ok=True)\n\nconf.jinja(\"/default.ini\", os.environ, \"/data/_data_/_default_/domains/default.ini\")\nconf.jinja(\"/application.ini\", os.environ, \"/data/_data_/_default_/configs/application.ini\")\nconf.jinja(\"/php.ini\", os.environ, \"/usr/local/etc/php/conf.d/rainloop.ini\")\n\nos.system(\"chown -R www-data:www-data /data\")\nos.system(\"chmod -R a+rX /var/www/html/\")\n\nos.execv(\"/usr/local/bin/apache2-foreground\", [\"apache2-foreground\"])\n\n", "path": "webmails/rainloop/start.py"}]} | 1,222 | 316 |
gh_patches_debug_31146 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-2594 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
install.sh fails with empty secret_key on Mac OS Ventura
## Description
Mathesar fails to start because `secret_key` is empty in the `.env` file after running `install.sh`. The script itself also fails because of this, so the installation steps after that point never run.
Note: This happens on Mac OS Ventura, but seems to work fine on Big Sur
</issue>
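Editor's note (not part of the issue): the `get_connection_string()` helper in the file below splices raw credentials into an f-string, which produces a broken URL whenever the username or password contains reserved characters such as `@`, `:` or `/`. SQLAlchemy 1.4+ provides `URL.create`, which quotes each component; the sketch below is illustrative only and mirrors the direction of the patch shown later in this record.

```python
# Illustrative sketch (assumes SQLAlchemy 1.4+): build the Postgres URL with
# URL.create so that credentials containing "@", ":" or "/" survive intact.
from sqlalchemy import create_engine
from sqlalchemy.engine import URL


def make_engine(username, password, hostname, database, port=5432):
    url = URL.create(
        "postgresql",
        username=username,
        password=password,   # quoted by URL.create, unlike an f-string build
        host=hostname,
        port=port,
        database=database,
    )
    return create_engine(url, future=True)
```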
<code>
[start of db/engine.py]
1 import copy
2
3 from sqlalchemy import create_engine as sa_create_engine
4
5 from db.types.custom.base import CUSTOM_DB_TYPE_TO_SA_CLASS
6
7
8 def get_connection_string(username, password, hostname, database, port='5432'):
9 return f"postgresql://{username}:{password}@{hostname}:{port}/{database}"
10
11
12 def create_future_engine_with_custom_types(
13 username, password, hostname, database, port, *args, **kwargs
14 ):
15 engine = create_future_engine(
16 username, password, hostname, database, port, *args, **kwargs
17 )
18 # We need to add our custom types to any engine created for SQLALchemy use
19 # so that they can be used for reflection
20 add_custom_types_to_ischema_names(engine)
21 return engine
22
23
24 # TODO would an engine without ischema names updated ever be used? make it private if not
25 def create_future_engine(
26 username, password, hostname, database, port, *args, **kwargs
27 ):
28 conn_str = get_connection_string(
29 username, password, hostname, database, port
30 )
31 kwargs.update(future=True)
32 return create_engine(conn_str, *args, **kwargs)
33
34
35 # NOTE: used in testing, hence public
36 def create_engine(conn_str, *args, **kwargs):
37 """
38 Wrapper over sqlalchemy.create_engine that stops SA from propagating changes to ischema_names
39 across all engines. This is important for testing: without this intervention, fixtures become
40 randomly corrupted.
41 """
42 engine = sa_create_engine(conn_str, *args, **kwargs)
43 _make_ischema_names_unique(engine)
44 return engine
45
46
47 # TODO should refactor for this to be private
48 def add_custom_types_to_ischema_names(engine):
49 """
50 Updating the ischema_names dict changes which Postgres types are reflected into which SA
51 classes.
52 """
53 for db_type, sa_class in CUSTOM_DB_TYPE_TO_SA_CLASS.items():
54 db_type_id = db_type.id
55 engine.dialect.ischema_names[db_type_id] = sa_class
56
57
58 def get_dummy_engine():
59 """
60 In some cases we only need an engine to access the Postgres dialect. E.g. when examining the
61 ischema_names dict. In those cases, following is enough:
62 """
63 engine = create_engine("postgresql://", future=True)
64 add_custom_types_to_ischema_names(engine)
65 return engine
66
67
68 def _make_ischema_names_unique(engine):
69 """
70 For some reason, engine.dialect.ischema_names reference the same dict across different engines.
71 This resets it to a referentially unique copy of itself.
72 """
73 ischema_names = engine.dialect.ischema_names
74 ischema_names_copy = copy.deepcopy(ischema_names)
75 setattr(engine.dialect, "ischema_names", ischema_names_copy)
76
[end of db/engine.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/db/engine.py b/db/engine.py
--- a/db/engine.py
+++ b/db/engine.py
@@ -1,14 +1,11 @@
import copy
from sqlalchemy import create_engine as sa_create_engine
+from sqlalchemy.engine import URL
from db.types.custom.base import CUSTOM_DB_TYPE_TO_SA_CLASS
-def get_connection_string(username, password, hostname, database, port='5432'):
- return f"postgresql://{username}:{password}@{hostname}:{port}/{database}"
-
-
def create_future_engine_with_custom_types(
username, password, hostname, database, port, *args, **kwargs
):
@@ -25,21 +22,26 @@
def create_future_engine(
username, password, hostname, database, port, *args, **kwargs
):
- conn_str = get_connection_string(
- username, password, hostname, database, port
+ conn_url = URL.create(
+ "postgresql",
+ username=username,
+ password=password,
+ host=hostname,
+ database=database,
+ port=port,
)
kwargs.update(future=True)
- return create_engine(conn_str, *args, **kwargs)
+ return create_engine(conn_url, *args, **kwargs)
# NOTE: used in testing, hence public
-def create_engine(conn_str, *args, **kwargs):
+def create_engine(conn_url, *args, **kwargs):
"""
Wrapper over sqlalchemy.create_engine that stops SA from propagating changes to ischema_names
across all engines. This is important for testing: without this intervention, fixtures become
randomly corrupted.
"""
- engine = sa_create_engine(conn_str, *args, **kwargs)
+ engine = sa_create_engine(conn_url, *args, **kwargs)
_make_ischema_names_unique(engine)
return engine
| {"golden_diff": "diff --git a/db/engine.py b/db/engine.py\n--- a/db/engine.py\n+++ b/db/engine.py\n@@ -1,14 +1,11 @@\n import copy\n \n from sqlalchemy import create_engine as sa_create_engine\n+from sqlalchemy.engine import URL\n \n from db.types.custom.base import CUSTOM_DB_TYPE_TO_SA_CLASS\n \n \n-def get_connection_string(username, password, hostname, database, port='5432'):\n- return f\"postgresql://{username}:{password}@{hostname}:{port}/{database}\"\n-\n-\n def create_future_engine_with_custom_types(\n username, password, hostname, database, port, *args, **kwargs\n ):\n@@ -25,21 +22,26 @@\n def create_future_engine(\n username, password, hostname, database, port, *args, **kwargs\n ):\n- conn_str = get_connection_string(\n- username, password, hostname, database, port\n+ conn_url = URL.create(\n+ \"postgresql\",\n+ username=username,\n+ password=password,\n+ host=hostname,\n+ database=database,\n+ port=port,\n )\n kwargs.update(future=True)\n- return create_engine(conn_str, *args, **kwargs)\n+ return create_engine(conn_url, *args, **kwargs)\n \n \n # NOTE: used in testing, hence public\n-def create_engine(conn_str, *args, **kwargs):\n+def create_engine(conn_url, *args, **kwargs):\n \"\"\"\n Wrapper over sqlalchemy.create_engine that stops SA from propagating changes to ischema_names\n across all engines. This is important for testing: without this intervention, fixtures become\n randomly corrupted.\n \"\"\"\n- engine = sa_create_engine(conn_str, *args, **kwargs)\n+ engine = sa_create_engine(conn_url, *args, **kwargs)\n _make_ischema_names_unique(engine)\n return engine\n", "issue": "install.sh fails with empty secret_key on Mac OS Ventura\n## Description\r\n\r\nMathesar fails to start because secret_key is empty in .env file after running `install.sh`. The script also fails due to that and steps after that do not run.\r\n\r\nNote: This happens on Mac OS Ventura, but seems to work fine on Big Sur\n", "before_files": [{"content": "import copy\n\nfrom sqlalchemy import create_engine as sa_create_engine\n\nfrom db.types.custom.base import CUSTOM_DB_TYPE_TO_SA_CLASS\n\n\ndef get_connection_string(username, password, hostname, database, port='5432'):\n return f\"postgresql://{username}:{password}@{hostname}:{port}/{database}\"\n\n\ndef create_future_engine_with_custom_types(\n username, password, hostname, database, port, *args, **kwargs\n):\n engine = create_future_engine(\n username, password, hostname, database, port, *args, **kwargs\n )\n # We need to add our custom types to any engine created for SQLALchemy use\n # so that they can be used for reflection\n add_custom_types_to_ischema_names(engine)\n return engine\n\n\n# TODO would an engine without ischema names updated ever be used? make it private if not\ndef create_future_engine(\n username, password, hostname, database, port, *args, **kwargs\n):\n conn_str = get_connection_string(\n username, password, hostname, database, port\n )\n kwargs.update(future=True)\n return create_engine(conn_str, *args, **kwargs)\n\n\n# NOTE: used in testing, hence public\ndef create_engine(conn_str, *args, **kwargs):\n \"\"\"\n Wrapper over sqlalchemy.create_engine that stops SA from propagating changes to ischema_names\n across all engines. 
This is important for testing: without this intervention, fixtures become\n randomly corrupted.\n \"\"\"\n engine = sa_create_engine(conn_str, *args, **kwargs)\n _make_ischema_names_unique(engine)\n return engine\n\n\n# TODO should refactor for this to be private\ndef add_custom_types_to_ischema_names(engine):\n \"\"\"\n Updating the ischema_names dict changes which Postgres types are reflected into which SA\n classes.\n \"\"\"\n for db_type, sa_class in CUSTOM_DB_TYPE_TO_SA_CLASS.items():\n db_type_id = db_type.id\n engine.dialect.ischema_names[db_type_id] = sa_class\n\n\ndef get_dummy_engine():\n \"\"\"\n In some cases we only need an engine to access the Postgres dialect. E.g. when examining the\n ischema_names dict. In those cases, following is enough:\n \"\"\"\n engine = create_engine(\"postgresql://\", future=True)\n add_custom_types_to_ischema_names(engine)\n return engine\n\n\ndef _make_ischema_names_unique(engine):\n \"\"\"\n For some reason, engine.dialect.ischema_names reference the same dict across different engines.\n This resets it to a referentially unique copy of itself.\n \"\"\"\n ischema_names = engine.dialect.ischema_names\n ischema_names_copy = copy.deepcopy(ischema_names)\n setattr(engine.dialect, \"ischema_names\", ischema_names_copy)\n", "path": "db/engine.py"}]} | 1,338 | 397 |
gh_patches_debug_33551 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-5443 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Counting Comments on map popup and list items (2 issues - similar problem in a+)
**URL:** https://meinberlin-dev.liqd.net/mapideas/2023-01031/ ; https://meinberlin-dev.liqd.net/projekte/testprojekt-newsletter/
**user:** any
**expected behaviour:** the counting of comments should be consistent
**behaviour:**
1. The number of comments in the detail idea view is not the same anymore as the number in the idea overview (list & map). This is because the detail idea view now also counts child comments, while the idea overview doesn't. (see screenshot 1 vs. 2)
2. The counting in the detail view stops at 100 separate comments. If there are child comments, they are added on top of that count of 100, so the number is then also different from the idea overview. If I scroll down, new comments are loaded and the count shown at the top changes. This can be very confusing. (see screenshot 1, 2 & 3)
**important screensize:** any
**device & browser:** mac ff
**Comment/Question:**
Screenshot?
**1. screenshot of idea overview (map)**
<img width="821" alt="Bildschirmfoto 2023-08-01 um 15 36 52" src="https://github.com/liqd/a4-meinberlin/assets/113608720/ac6d7dd2-9785-49ad-85d4-f380cda6401d">
**2. screenshot of idea detail view with child comments**
<img width="847" alt="Bildschirmfoto 2023-08-01 um 15 37 17" src="https://github.com/liqd/a4-meinberlin/assets/113608720/45951686-f9d2-4acb-8615-8b75182ac943">
**3. screenshot of idea detail view with child comments and scrolled down**
<img width="972" alt="Bildschirmfoto 2023-08-01 um 15 37 40" src="https://github.com/liqd/a4-meinberlin/assets/113608720/3e2c3d16-0578-4a87-8f47-285d61e04be3">
</issue>
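Editor's note (not part of the report): the discrepancy boils down to one view counting only top-level comments while the other also counts replies. A generic Django ORM sketch of the "parents plus one level of replies" total is shown below; the model and related-name are placeholders, not necessarily the project's real ones.

```python
# Sketch with placeholder names: one query that counts comments plus their
# direct replies, so the overview and the detail view can share the same total.
from django.db.models import Count, Sum


def comments_including_replies(comment_qs):
    totals = comment_qs.annotate(
        reply_count=Count("child_comments", distinct=True)
    ).aggregate(total=Count("pk") + Sum("reply_count"))
    return totals["total"] or 0
```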
<code>
[start of meinberlin/apps/projects/templatetags/meinberlin_project_tags.py]
1 from django import template
2
3 from adhocracy4.comments.models import Comment
4 from adhocracy4.polls.models import Vote as Vote
5 from meinberlin.apps.budgeting.models import Proposal as budget_proposal
6 from meinberlin.apps.ideas.models import Idea
7 from meinberlin.apps.kiezkasse.models import Proposal as kiezkasse_proposal
8 from meinberlin.apps.likes.models import Like
9 from meinberlin.apps.livequestions.models import LiveQuestion
10 from meinberlin.apps.mapideas.models import MapIdea
11
12 register = template.Library()
13
14
15 @register.filter
16 def project_url(project):
17 if (
18 project.project_type == "meinberlin_bplan.Bplan"
19 or project.project_type == "meinberlin_extprojects.ExternalProject"
20 ):
21 return project.externalproject.url
22 return project.get_absolute_url()
23
24
25 @register.filter
26 def is_external(project):
27 return (
28 project.project_type == "meinberlin_bplan.Bplan"
29 or project.project_type == "meinberlin_extprojects.ExternalProject"
30 )
31
32
33 @register.simple_tag
34 def get_num_entries(module):
35 """Count all user-generated items."""
36 item_count = (
37 Idea.objects.filter(module=module).count()
38 + MapIdea.objects.filter(module=module).count()
39 + budget_proposal.objects.filter(module=module).count()
40 + kiezkasse_proposal.objects.filter(module=module).count()
41 + Comment.objects.filter(idea__module=module).count()
42 + Comment.objects.filter(mapidea__module=module).count()
43 + Comment.objects.filter(budget_proposal__module=module).count()
44 + Comment.objects.filter(kiezkasse_proposal__module=module).count()
45 + Comment.objects.filter(topic__module=module).count()
46 + Comment.objects.filter(maptopic__module=module).count()
47 + Comment.objects.filter(paragraph__chapter__module=module).count()
48 + Comment.objects.filter(chapter__module=module).count()
49 + Comment.objects.filter(poll__module=module).count()
50 + Vote.objects.filter(choice__question__poll__module=module).count()
51 + LiveQuestion.objects.filter(module=module).count()
52 + Like.objects.filter(question__module=module).count()
53 )
54 return item_count
55
[end of meinberlin/apps/projects/templatetags/meinberlin_project_tags.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py b/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py
--- a/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py
+++ b/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py
@@ -1,4 +1,7 @@
from django import template
+from django.db.models import Count
+from django.db.models import Q
+from django.db.models import Sum
from adhocracy4.comments.models import Comment
from adhocracy4.polls.models import Vote as Vote
@@ -38,17 +41,28 @@
+ MapIdea.objects.filter(module=module).count()
+ budget_proposal.objects.filter(module=module).count()
+ kiezkasse_proposal.objects.filter(module=module).count()
- + Comment.objects.filter(idea__module=module).count()
- + Comment.objects.filter(mapidea__module=module).count()
- + Comment.objects.filter(budget_proposal__module=module).count()
- + Comment.objects.filter(kiezkasse_proposal__module=module).count()
- + Comment.objects.filter(topic__module=module).count()
- + Comment.objects.filter(maptopic__module=module).count()
- + Comment.objects.filter(paragraph__chapter__module=module).count()
- + Comment.objects.filter(chapter__module=module).count()
- + Comment.objects.filter(poll__module=module).count()
+ Vote.objects.filter(choice__question__poll__module=module).count()
+ LiveQuestion.objects.filter(module=module).count()
+ Like.objects.filter(question__module=module).count()
)
- return item_count
+ comment_filter = (
+ Q(idea__module=module)
+ | Q(mapidea__module=module)
+ | Q(budget_proposal__module=module)
+ | Q(kiezkasse_proposal__module=module)
+ | Q(topic__module=module)
+ | Q(maptopic__module=module)
+ | Q(paragraph__chapter__module=module)
+ | Q(chapter__module=module)
+ | Q(poll__module=module)
+ )
+ comment_count = (
+ Comment.objects.filter(comment_filter)
+ .annotate(child_comment_count=Count("child_comments__pk", distinct=True))
+ .aggregate(comment_count=Count("pk") + Sum("child_comment_count"))[
+ "comment_count"
+ ]
+ )
+ if comment_count is None:
+ comment_count = 0
+ return item_count + comment_count
| {"golden_diff": "diff --git a/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py b/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py\n--- a/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py\n+++ b/meinberlin/apps/projects/templatetags/meinberlin_project_tags.py\n@@ -1,4 +1,7 @@\n from django import template\n+from django.db.models import Count\n+from django.db.models import Q\n+from django.db.models import Sum\n \n from adhocracy4.comments.models import Comment\n from adhocracy4.polls.models import Vote as Vote\n@@ -38,17 +41,28 @@\n + MapIdea.objects.filter(module=module).count()\n + budget_proposal.objects.filter(module=module).count()\n + kiezkasse_proposal.objects.filter(module=module).count()\n- + Comment.objects.filter(idea__module=module).count()\n- + Comment.objects.filter(mapidea__module=module).count()\n- + Comment.objects.filter(budget_proposal__module=module).count()\n- + Comment.objects.filter(kiezkasse_proposal__module=module).count()\n- + Comment.objects.filter(topic__module=module).count()\n- + Comment.objects.filter(maptopic__module=module).count()\n- + Comment.objects.filter(paragraph__chapter__module=module).count()\n- + Comment.objects.filter(chapter__module=module).count()\n- + Comment.objects.filter(poll__module=module).count()\n + Vote.objects.filter(choice__question__poll__module=module).count()\n + LiveQuestion.objects.filter(module=module).count()\n + Like.objects.filter(question__module=module).count()\n )\n- return item_count\n+ comment_filter = (\n+ Q(idea__module=module)\n+ | Q(mapidea__module=module)\n+ | Q(budget_proposal__module=module)\n+ | Q(kiezkasse_proposal__module=module)\n+ | Q(topic__module=module)\n+ | Q(maptopic__module=module)\n+ | Q(paragraph__chapter__module=module)\n+ | Q(chapter__module=module)\n+ | Q(poll__module=module)\n+ )\n+ comment_count = (\n+ Comment.objects.filter(comment_filter)\n+ .annotate(child_comment_count=Count(\"child_comments__pk\", distinct=True))\n+ .aggregate(comment_count=Count(\"pk\") + Sum(\"child_comment_count\"))[\n+ \"comment_count\"\n+ ]\n+ )\n+ if comment_count is None:\n+ comment_count = 0\n+ return item_count + comment_count\n", "issue": "Counting Comments on map popup and list items (2 issues - similar problem in a+)\n**URL:** https://meinberlin-dev.liqd.net/mapideas/2023-01031/ ; https://meinberlin-dev.liqd.net/projekte/testprojekt-newsletter/\r\n**user:** any\r\n**expected behaviour:** the counting of comments should be consistent\r\n**behaviour:** \r\n\r\n1. The number of comments in the detail idea view is not the same anymore as the number in the idea overview (list & map). This is because the detail ide view now counts as well child comments while the idea overview doesn't. (see screenshot 1 vs. 2)\r\n\r\n2. The counting in the detail view stops at 100 seperate comments. If there are child comments, it adds to counting of 100. The number is then also different to the idea overview. If I scroll down, then new comments are loaded and the counting number on top changes. This can be very confusing. (see screenshot 1, 2 & 3)\r\n\r\n**important screensize:** any\r\n**device & browser:** mac ff\r\n**Comment/Question:** \r\n\r\nScreenshot?\r\n**1. screenshot of idea overview (map)**\r\n<img width=\"821\" alt=\"Bildschirm\u00adfoto 2023-08-01 um 15 36 52\" src=\"https://github.com/liqd/a4-meinberlin/assets/113608720/ac6d7dd2-9785-49ad-85d4-f380cda6401d\">\r\n\r\n**2. 
screenshot of idea detail view with child comments**\r\n<img width=\"847\" alt=\"Bildschirm\u00adfoto 2023-08-01 um 15 37 17\" src=\"https://github.com/liqd/a4-meinberlin/assets/113608720/45951686-f9d2-4acb-8615-8b75182ac943\">\r\n\r\n**3. screenshot of idea detail view with child comments and scrolled down**\r\n<img width=\"972\" alt=\"Bildschirm\u00adfoto 2023-08-01 um 15 37 40\" src=\"https://github.com/liqd/a4-meinberlin/assets/113608720/3e2c3d16-0578-4a87-8f47-285d61e04be3\">\r\n\r\n\n", "before_files": [{"content": "from django import template\n\nfrom adhocracy4.comments.models import Comment\nfrom adhocracy4.polls.models import Vote as Vote\nfrom meinberlin.apps.budgeting.models import Proposal as budget_proposal\nfrom meinberlin.apps.ideas.models import Idea\nfrom meinberlin.apps.kiezkasse.models import Proposal as kiezkasse_proposal\nfrom meinberlin.apps.likes.models import Like\nfrom meinberlin.apps.livequestions.models import LiveQuestion\nfrom meinberlin.apps.mapideas.models import MapIdea\n\nregister = template.Library()\n\n\[email protected]\ndef project_url(project):\n if (\n project.project_type == \"meinberlin_bplan.Bplan\"\n or project.project_type == \"meinberlin_extprojects.ExternalProject\"\n ):\n return project.externalproject.url\n return project.get_absolute_url()\n\n\[email protected]\ndef is_external(project):\n return (\n project.project_type == \"meinberlin_bplan.Bplan\"\n or project.project_type == \"meinberlin_extprojects.ExternalProject\"\n )\n\n\[email protected]_tag\ndef get_num_entries(module):\n \"\"\"Count all user-generated items.\"\"\"\n item_count = (\n Idea.objects.filter(module=module).count()\n + MapIdea.objects.filter(module=module).count()\n + budget_proposal.objects.filter(module=module).count()\n + kiezkasse_proposal.objects.filter(module=module).count()\n + Comment.objects.filter(idea__module=module).count()\n + Comment.objects.filter(mapidea__module=module).count()\n + Comment.objects.filter(budget_proposal__module=module).count()\n + Comment.objects.filter(kiezkasse_proposal__module=module).count()\n + Comment.objects.filter(topic__module=module).count()\n + Comment.objects.filter(maptopic__module=module).count()\n + Comment.objects.filter(paragraph__chapter__module=module).count()\n + Comment.objects.filter(chapter__module=module).count()\n + Comment.objects.filter(poll__module=module).count()\n + Vote.objects.filter(choice__question__poll__module=module).count()\n + LiveQuestion.objects.filter(module=module).count()\n + Like.objects.filter(question__module=module).count()\n )\n return item_count\n", "path": "meinberlin/apps/projects/templatetags/meinberlin_project_tags.py"}]} | 1,728 | 599 |
gh_patches_debug_18860 | rasdani/github-patches | git_diff | Qiskit__qiskit-2755 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
inconsistency between CU1 and CU3 gate definitions
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues to confirm this idea does not exist. -->
### What is the expected enhancement?
This is not a bug or enhancement request as such, but seems like an internal inconsistency in Qiskit's gate definitions.
In [the gate definitions](https://github.com/Qiskit/qiskit-tutorials/blob/master/qiskit/terra/summary_of_quantum_operations.ipynb), U1 is defined as the diagonal matrix [1, 0, 0, e^(iλ)], while Rz is [e^(-iλ/2), 0, 0, e^(iλ/2)]; the two differ only by a global phase, which becomes a relative phase once the gate is controlled.
U3 is defined in the docs similarly to U1, i.e. U3 is a U1*Ry*U1 sequence, so U3(0,0,a) = U1(a). However, CU3 is defined in the docs in such a way that CU3(0,0,a) != CU1(a): CU3 is instead built from the Rz-style definition rather than the U1 one.
So:
U3(0,0,a) = U1(a)
CU3(0,0,a) != CU1(a)
This is a confusing set of definitions. I assume that these definitions were a conscious decision, and that you are aware of the inconsistency, but I don't understand why?
I hope this hasn't been asked already - I couldn't find a duplicate.
</issue>
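Editor's note (not part of the report): the claimed inequality is easy to check numerically. The sketch assumes a Qiskit version that exposes `CU1Gate`/`CU3Gate` in `qiskit.circuit.library`; with the old phase convention described above it prints `False`, and after the fix in this record's patch the two matrices agree.

```python
# Illustrative check of the report: compare the unitaries of CU1(a) and CU3(0, 0, a).
import numpy as np
from qiskit.circuit.library import CU1Gate, CU3Gate
from qiskit.quantum_info import Operator

a = 0.7
cu1 = Operator(CU1Gate(a)).data
cu3 = Operator(CU3Gate(0.0, 0.0, a)).data
print(np.allclose(cu1, cu3))  # False under the old convention, True after the fix
```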
<code>
[start of qiskit/extensions/standard/cu3.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """
16 controlled-u3 gate.
17 """
18 from qiskit.circuit import Gate
19 from qiskit.circuit import QuantumCircuit
20 from qiskit.circuit import QuantumRegister
21 from qiskit.extensions.standard.u1 import U1Gate
22 from qiskit.extensions.standard.u3 import U3Gate
23 from qiskit.extensions.standard.cx import CnotGate
24
25
26 class Cu3Gate(Gate):
27 """controlled-u3 gate."""
28
29 def __init__(self, theta, phi, lam):
30 """Create new cu3 gate."""
31 super().__init__("cu3", 2, [theta, phi, lam])
32
33 def _define(self):
34 """
35 gate cu3(theta,phi,lambda) c, t
36 { u1((lambda-phi)/2) t; cx c,t;
37 u3(-theta/2,0,-(phi+lambda)/2) t; cx c,t;
38 u3(theta/2,phi,0) t;
39 }
40 """
41 definition = []
42 q = QuantumRegister(2, "q")
43 rule = [
44 (U1Gate((self.params[2] - self.params[1]) / 2), [q[1]], []),
45 (CnotGate(), [q[0], q[1]], []),
46 (U3Gate(-self.params[0] / 2, 0, -(self.params[1] + self.params[2]) / 2), [q[1]], []),
47 (CnotGate(), [q[0], q[1]], []),
48 (U3Gate(self.params[0] / 2, self.params[1], 0), [q[1]], [])
49 ]
50 for inst in rule:
51 definition.append(inst)
52 self.definition = definition
53
54 def inverse(self):
55 """Invert this gate."""
56 return Cu3Gate(-self.params[0], -self.params[2], -self.params[1])
57
58
59 def cu3(self, theta, phi, lam, ctl, tgt):
60 """Apply cu3 from ctl to tgt with angle theta, phi, lam."""
61 return self.append(Cu3Gate(theta, phi, lam), [ctl, tgt], [])
62
63
64 QuantumCircuit.cu3 = cu3
65
[end of qiskit/extensions/standard/cu3.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qiskit/extensions/standard/cu3.py b/qiskit/extensions/standard/cu3.py
--- a/qiskit/extensions/standard/cu3.py
+++ b/qiskit/extensions/standard/cu3.py
@@ -33,7 +33,7 @@
def _define(self):
"""
gate cu3(theta,phi,lambda) c, t
- { u1((lambda-phi)/2) t; cx c,t;
+ { u1((lambda+phi)/2) c; u1((lambda-phi)/2) t; cx c,t;
u3(-theta/2,0,-(phi+lambda)/2) t; cx c,t;
u3(theta/2,phi,0) t;
}
@@ -41,6 +41,7 @@
definition = []
q = QuantumRegister(2, "q")
rule = [
+ (U1Gate((self.params[2] + self.params[1]) / 2), [q[0]], []),
(U1Gate((self.params[2] - self.params[1]) / 2), [q[1]], []),
(CnotGate(), [q[0], q[1]], []),
(U3Gate(-self.params[0] / 2, 0, -(self.params[1] + self.params[2]) / 2), [q[1]], []),
| {"golden_diff": "diff --git a/qiskit/extensions/standard/cu3.py b/qiskit/extensions/standard/cu3.py\n--- a/qiskit/extensions/standard/cu3.py\n+++ b/qiskit/extensions/standard/cu3.py\n@@ -33,7 +33,7 @@\n def _define(self):\n \"\"\"\n gate cu3(theta,phi,lambda) c, t\n- { u1((lambda-phi)/2) t; cx c,t;\n+ { u1((lambda+phi)/2) c; u1((lambda-phi)/2) t; cx c,t;\n u3(-theta/2,0,-(phi+lambda)/2) t; cx c,t;\n u3(theta/2,phi,0) t;\n }\n@@ -41,6 +41,7 @@\n definition = []\n q = QuantumRegister(2, \"q\")\n rule = [\n+ (U1Gate((self.params[2] + self.params[1]) / 2), [q[0]], []),\n (U1Gate((self.params[2] - self.params[1]) / 2), [q[1]], []),\n (CnotGate(), [q[0], q[1]], []),\n (U3Gate(-self.params[0] / 2, 0, -(self.params[1] + self.params[2]) / 2), [q[1]], []),\n", "issue": "inconsistency between CU1 and CU3 gate definitions\n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed -->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues to confirm this idea does not exist. -->\r\n\r\n### What is the expected enhancement?\r\n\r\nThis is not a bug or enhancement request as such, but seems like an internal inconsistency in Qiskit's gate definitions.\r\nIn [the gate definitions](https://github.com/Qiskit/qiskit-tutorials/blob/master/qiskit/terra/summary_of_quantum_operations.ipynb), U1 is defined as [1,0,0,e^(i\u03bb)], while an Rz is a [e^(-i\u03bb/2),0,0,e^(i\u03bb/2)].\r\n\r\nU3 is defined in the docs similarly to U1 - ie. a U3 is a U1*Ry*U1. Therefore, a U3(0,0,a) = U1(a). However, CU3 is defined in the docs in such a way that CU3(0,0,a) != CU1(a). CU3 is instead defined using the Rz definition, rather than the U1.\r\n\r\nSo: \r\nU3(0,0,a) = U1(a)\r\nCU3(0,0,a) != CU1(a)\r\n\r\nThis is a confusing set of definitions. I assume that these definitions were a conscious decision, and that you are aware of the inconsistency, but I don't understand why?\r\nI hope this hasn't been asked already - I couldn't find a duplicate.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017.\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"\ncontrolled-u3 gate.\n\"\"\"\nfrom qiskit.circuit import Gate\nfrom qiskit.circuit import QuantumCircuit\nfrom qiskit.circuit import QuantumRegister\nfrom qiskit.extensions.standard.u1 import U1Gate\nfrom qiskit.extensions.standard.u3 import U3Gate\nfrom qiskit.extensions.standard.cx import CnotGate\n\n\nclass Cu3Gate(Gate):\n \"\"\"controlled-u3 gate.\"\"\"\n\n def __init__(self, theta, phi, lam):\n \"\"\"Create new cu3 gate.\"\"\"\n super().__init__(\"cu3\", 2, [theta, phi, lam])\n\n def _define(self):\n \"\"\"\n gate cu3(theta,phi,lambda) c, t\n { u1((lambda-phi)/2) t; cx c,t;\n u3(-theta/2,0,-(phi+lambda)/2) t; cx c,t;\n u3(theta/2,phi,0) t;\n }\n \"\"\"\n definition = []\n q = QuantumRegister(2, \"q\")\n rule = [\n (U1Gate((self.params[2] - self.params[1]) / 2), [q[1]], []),\n (CnotGate(), [q[0], q[1]], []),\n (U3Gate(-self.params[0] / 2, 0, -(self.params[1] + self.params[2]) / 2), [q[1]], []),\n (CnotGate(), [q[0], q[1]], []),\n (U3Gate(self.params[0] / 2, self.params[1], 0), [q[1]], [])\n ]\n for inst in rule:\n definition.append(inst)\n self.definition = definition\n\n def inverse(self):\n \"\"\"Invert this gate.\"\"\"\n return Cu3Gate(-self.params[0], -self.params[2], -self.params[1])\n\n\ndef cu3(self, theta, phi, lam, ctl, tgt):\n \"\"\"Apply cu3 from ctl to tgt with angle theta, phi, lam.\"\"\"\n return self.append(Cu3Gate(theta, phi, lam), [ctl, tgt], [])\n\n\nQuantumCircuit.cu3 = cu3\n", "path": "qiskit/extensions/standard/cu3.py"}]} | 1,602 | 314 |
gh_patches_debug_251 | rasdani/github-patches | git_diff | pyjanitor-devs__pyjanitor-497 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[DOC] Clarify Python version requirements
# Brief Description of Fix
I was looking through documentation (for users and contributors), and it was unclear to me which python versions we actually support. It seems that we support python 3.6 + 3.7. This arose as I was updating the `pyproject.toml` file to avoid the warning:
```
--py36 is deprecated and will be removed in a future version. Use --target-version py36 instead.
```
Our current locations of explicit python versions are in:
- `pyproject.toml`
- `py36 = true`
- `environment-dev.yml`
- `- python >= 3.6`
- `.azure-pipelines/pipeline-master.yml`
- `python.version: "3.7"`
# Proposed Fix
If `pyjanitor` is in fact meant to function on 3.6+, we should
- Explicitly inform contributors that their code should be 3.6+ compatible
- Inform users which python versions the package requires, on the documentation site, PyPI etc
- Add `python_requires=">=3.6"` to `setup.py`
</issue>
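Editor's note (not part of the issue): the last bullet is a one-line change to `setup()`. A minimal sketch, with the other arguments abbreviated, just to show where `python_requires` goes:

```python
# Sketch: declare the minimum supported interpreter so pip refuses to install
# pyjanitor on Python < 3.6.
from setuptools import setup

setup(
    name="pyjanitor",
    packages=["janitor"],
    python_requires=">=3.6",  # the addition proposed above
)
```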
<code>
[start of setup.py]
1 from setuptools import setup
2
3
4 def requirements():
5 with open("requirements.txt", "r+") as f:
6 return f.read()
7
8
9 setup(
10 name="pyjanitor",
11 version="0.18.0",
12 description="Tools for cleaning pandas DataFrames",
13 author="Eric J. Ma",
14 author_email="[email protected]",
15 url="https://github.com/ericmjl/pyjanitor",
16 packages=["janitor"],
17 install_requires=requirements(),
18 )
19
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -15,4 +15,5 @@
url="https://github.com/ericmjl/pyjanitor",
packages=["janitor"],
install_requires=requirements(),
+ python_requires=">=3.6",
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -15,4 +15,5 @@\n url=\"https://github.com/ericmjl/pyjanitor\",\n packages=[\"janitor\"],\n install_requires=requirements(),\n+ python_requires=\">=3.6\",\n )\n", "issue": "[DOC] Clarify Python version requirements\n# Brief Description of Fix\r\n\r\nI was looking through documentation (for users and contributors), and it was unclear to me which python versions we actually support. It seems that we support python 3.6 + 3.7. This arose as I was updating the `pyproject.toml` file to avoid the warning:\r\n```\r\n--py36 is deprecated and will be removed in a future version. Use --target-version py36 instead.\r\n```\r\n\r\nOur current locations of explicit python versions are in:\r\n- `pyproject.toml`\r\n - `py36 = true`\r\n- `environment-dev.yml`\r\n - `- python >= 3.6`\r\n- `.azure-pipelines/pipeline-master.yml`\r\n - `python.version: \"3.7\"`\r\n\r\n# Proposed Fix\r\n\r\nIf `pyjanitor` is in fact meant to function on 3.6+, we should\r\n- Explicitly inform contributors that their code should be 3.6+ compatible\r\n- Inform users which python versions the package requires, on the documentation site, PyPI etc\r\n- Add `python_requires=\">=3.6\"` to `setup.py`\r\n\n", "before_files": [{"content": "from setuptools import setup\n\n\ndef requirements():\n with open(\"requirements.txt\", \"r+\") as f:\n return f.read()\n\n\nsetup(\n name=\"pyjanitor\",\n version=\"0.18.0\",\n description=\"Tools for cleaning pandas DataFrames\",\n author=\"Eric J. Ma\",\n author_email=\"[email protected]\",\n url=\"https://github.com/ericmjl/pyjanitor\",\n packages=[\"janitor\"],\n install_requires=requirements(),\n)\n", "path": "setup.py"}]} | 910 | 71 |
gh_patches_debug_16797 | rasdani/github-patches | git_diff | semgrep__semgrep-rules-1457 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
False positive for return-in-init when return in internal function
**Describe the bug**
[`return-in-init`](https://github.com/returntocorp/semgrep-rules/blob/master/python/lang/correctness/return-in-init.yaml) warns about a return statement in `__init__`. However, this may be valid if another function is defined within `__init__` and the `return` belongs to that inner function.
**To Reproduce**
```
class Odd:
def __init__(self, numbers):
def is_odd(n):
return n % 2 == 1
self.numbers = filter(is_odd, numbers)
```
```
$ semgrep --config=p/ci
test1.py
severity:error rule:python.lang.correctness.return-in-init.return-in-init: `return` should never appear inside a class __init__ function. This will cause a runtime error.
4: return n % 2 == 1
```
**Expected behavior**
I expect no error from `return-in-init` in this case.
**Priority**
How important is this to you?
- P2: annoying but not blocking me
</issue>
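Editor's note (not part of the report): the reproduction above is legal Python, because the `return` belongs to the nested `is_odd` function rather than to `__init__` itself; only returning a non-None value directly from `__init__` raises a `TypeError` at instantiation. A quick runtime check:

```python
# The nested return is fine at runtime; it exits is_odd, not __init__.
class Odd:
    def __init__(self, numbers):
        def is_odd(n):
            return n % 2 == 1
        self.numbers = filter(is_odd, numbers)


print(list(Odd(range(6)).numbers))  # [1, 3, 5]
```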
<code>
[start of python/lang/correctness/return-in-init.py]
1 class A:
2 def __init__(a, b, c):
3 # ruleid:return-in-init
4 return A(a, b, c)
5
6
7 class B:
8 def __init__(a, b, c):
9 # ok:return-in-init
10 return
11
12
13 class C:
14 def __init__(a, b, c):
15 # ruleid:yield-in-init
16 yield
17
18
19 class D:
20 def __init__():
21 # ruleid:yield-in-init
22 yield 5
23
24
25 def __init__(a, b, c):
26 # ok:yield-in-init
27 return A(a, b, c)
28
29
30 def __init__(a, b, c):
31 # ok:yield-in-init
32 yield
33
34
35 def __init__():
36 # ok:yield-in-init
37 yield 5
38
39
40 class E:
41 def func1():
42 if not hello:
43 # ok:yield-in-init
44 yield 5
45 # ok:yield-in-init
46 yield other
47
48
49 class F:
50 def __init__():
51 pass
52
53 def func1():
54 # ok:return-in-init
55 return 5
56
57 def func2():
58 # ok:return-in-init
59 return
60
61
62 class G:
63 def __init__():
64 pass
65
66 def func1():
67 # ok:yield-in-init
68 yield 5
69
70 def func2():
71 # ok:yield-in-init
72 yield
73
74 class H:
75 def __init__(self, x):
76 # ok:return-in-init
77 return None
78
[end of python/lang/correctness/return-in-init.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/lang/correctness/return-in-init.py b/python/lang/correctness/return-in-init.py
--- a/python/lang/correctness/return-in-init.py
+++ b/python/lang/correctness/return-in-init.py
@@ -75,3 +75,41 @@
def __init__(self, x):
# ok:return-in-init
return None
+
+class Odd:
+ def __init__(self, numbers):
+ def is_odd(n):
+ # ok:return-in-init
+ return n % 2 == 1
+ self.numbers = filter(is_odd, numbers)
+
+ # todoruleid:return-in-init
+ return self.numbers
+
+class Even:
+ def __init__(self):
+ class EvenNumber:
+ def __init__(self, n):
+ self.n = n
+ # todoruleid:return-in-init
+ return n
+
+ def is_even(self):
+ # ok:return-in-init
+ return self.n % 2 == 0
+
+ self.number = EvenNumber()
+
+ def not_init(self):
+ class EvenNumber:
+ def __init__(self, n):
+ self.n = n
+ # ruleid:return-in-init
+ return n
+
+ def is_even(self):
+ # ok:return-in-init
+ return self.n % 2 == 0
+
+ # ok:return-in-init
+ return EvenNumber()
| {"golden_diff": "diff --git a/python/lang/correctness/return-in-init.py b/python/lang/correctness/return-in-init.py\n--- a/python/lang/correctness/return-in-init.py\n+++ b/python/lang/correctness/return-in-init.py\n@@ -75,3 +75,41 @@\n def __init__(self, x):\n # ok:return-in-init\n return None\n+\n+class Odd:\n+ def __init__(self, numbers):\n+ def is_odd(n):\n+ # ok:return-in-init\n+ return n % 2 == 1\n+ self.numbers = filter(is_odd, numbers)\n+\n+ # todoruleid:return-in-init\n+ return self.numbers\n+\n+class Even:\n+ def __init__(self):\n+ class EvenNumber:\n+ def __init__(self, n):\n+ self.n = n\n+ # todoruleid:return-in-init\n+ return n\n+\n+ def is_even(self):\n+ # ok:return-in-init\n+ return self.n % 2 == 0\n+\n+ self.number = EvenNumber()\n+\n+ def not_init(self):\n+ class EvenNumber:\n+ def __init__(self, n):\n+ self.n = n\n+ # ruleid:return-in-init\n+ return n\n+\n+ def is_even(self):\n+ # ok:return-in-init\n+ return self.n % 2 == 0\n+\n+ # ok:return-in-init\n+ return EvenNumber()\n", "issue": "False positive for return-in-init when return in internal function\n**Describe the bug**\r\n\r\n[`return-in-init`](https://github.com/returntocorp/semgrep-rules/blob/master/python/lang/correctness/return-in-init.yaml) warns about a return statement in `__init__`. However, this may be valid if another function is defined within `__init__` and return is used there.\r\n\r\n**To Reproduce**\r\n\r\n```\r\nclass Odd:\r\n def __init__(self, numbers):\r\n def is_odd(n):\r\n return n % 2 == 1\r\n self.numbers = filter(is_odd, numbers)\r\n```\r\n\r\n```\r\n$ semgrep --config=p/ci\r\ntest1.py\r\nseverity:error rule:python.lang.correctness.return-in-init.return-in-init: `return` should never appear inside a class __init__ function. This will cause a runtime error.\r\n4: return n % 2 == 1\r\n```\r\n\r\n**Expected behavior**\r\n\r\nI expect no error from `return-in-init` in this case.\r\n\r\n**Priority**\r\nHow important is this to you?\r\n- P2: annoying but not blocking me\r\n\n", "before_files": [{"content": "class A:\n def __init__(a, b, c):\n # ruleid:return-in-init\n return A(a, b, c)\n\n\nclass B:\n def __init__(a, b, c):\n # ok:return-in-init\n return\n\n\nclass C:\n def __init__(a, b, c):\n # ruleid:yield-in-init\n yield\n\n\nclass D:\n def __init__():\n # ruleid:yield-in-init\n yield 5\n\n\ndef __init__(a, b, c):\n # ok:yield-in-init\n return A(a, b, c)\n\n\ndef __init__(a, b, c):\n # ok:yield-in-init\n yield\n\n\ndef __init__():\n # ok:yield-in-init\n yield 5\n\n\nclass E:\n def func1():\n if not hello:\n # ok:yield-in-init\n yield 5\n # ok:yield-in-init\n yield other\n\n\nclass F:\n def __init__():\n pass\n\n def func1():\n # ok:return-in-init\n return 5\n\n def func2():\n # ok:return-in-init\n return\n\n\nclass G:\n def __init__():\n pass\n\n def func1():\n # ok:yield-in-init\n yield 5\n\n def func2():\n # ok:yield-in-init\n yield\n\nclass H:\n def __init__(self, x):\n # ok:return-in-init\n return None\n", "path": "python/lang/correctness/return-in-init.py"}]} | 1,274 | 335 |
gh_patches_debug_40443 | rasdani/github-patches | git_diff | DDMAL__CantusDB-1280 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
We need a new API that displays concordances information for all chants in the database
In an email from Jan:
> The intensive process of getting all the data from CD via individual json-cid requests (59.000+) is running already on the 3rd day (and not finished yet) but this will not keep the Cantus data fresh in the Cantus Index API in the long term.
>
> The solution would be to regularly create a large JSON file export of all the CD chants (with the same fields as in json-cid exports) and make it available as a file to download. An example of such json export is here: https://austriamanus.org/files/concordances-export.json
> This kind of data transfer works also with the MMMO database which has approximately half the amount of data compared to a CD. I believe it would also be the best solution for CD.
This will not be difficult. We can use the code in our `json-con` API, but return all chants rather than filtering them by Cantus ID.
What's a good path for this API to live at? `/json-concordances-export`?
</issue>
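Editor's note (not part of the issue): whatever the final URL is called, the cheap way to expose the export is to stream the pre-built file from disk instead of regenerating the JSON on every request. A purely illustrative Django sketch follows; the view name is invented, and the cache path simply matches the `FILEPATH` used by the management command shown below.

```python
# Illustrative only: serve the cached concordances file written by the
# update_cached_concordances command shown below.
import os
from django.http import FileResponse, Http404

CACHE_FILE = "api_cache/concordances.json"  # matches FILEPATH in the command


def concordances_export(request):
    if not os.path.exists(CACHE_FILE):
        raise Http404("Concordances cache has not been generated yet.")
    return FileResponse(open(CACHE_FILE, "rb"), content_type="application/json")
```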
<code>
[start of django/cantusdb_project/main_app/management/commands/update_cached_concordances.py]
1 import ujson
2 import os
3 from sys import stdout
4 from datetime import datetime
5 from collections import defaultdict
6 from django.db.models.query import QuerySet
7 from django.core.management.base import BaseCommand
8 from main_app.models import Chant
9
10
11 class Command(BaseCommand):
12 def handle(self, *args, **kwargs) -> None:
13 CACHE_DIR: str = "api_cache"
14 FILEPATH: str = f"{CACHE_DIR}/concordances.json"
15 start_time: str = datetime.now().isoformat()
16 stdout.write(f"Running update_cached_concordances at {start_time}.\n")
17 concordances: dict = get_concordances()
18 write_time: str = datetime.now().isoformat()
19 metadata: dict = {
20 "last_updated": write_time,
21 }
22 data_and_metadata: dict = {
23 "data": concordances,
24 "metadata": metadata,
25 }
26 stdout.write(f"Attempting to make directory at {CACHE_DIR} to hold cache: ")
27 try:
28 os.mkdir(CACHE_DIR)
29 stdout.write(f"successfully created directory at {CACHE_DIR}.\n")
30 except FileExistsError:
31 stdout.write(f"directory at {CACHE_DIR} already exists.\n")
32 stdout.write(f"Writing concordances to {FILEPATH} at {write_time}.\n")
33 with open(FILEPATH, "w") as json_file:
34 ujson.dump(data_and_metadata, json_file)
35 end_time = datetime.now().isoformat()
36 stdout.write(
37 f"Concordances successfully written to {FILEPATH} at {end_time}.\n\n"
38 )
39
40
41 def get_concordances() -> dict:
42 DOMAIN: str = "https://cantusdatabase.org"
43
44 stdout.write("Querying database for published chants\n")
45 published_chants: QuerySet[Chant] = Chant.objects.filter(source__published=True)
46 values: QuerySet[dict] = published_chants.select_related(
47 "source",
48 "feast",
49 "genre",
50 "office",
51 ).values(
52 "id",
53 "source_id",
54 "source__siglum",
55 "folio",
56 "c_sequence",
57 "incipit",
58 "feast__name",
59 "genre__name",
60 "office__name",
61 "position",
62 "cantus_id",
63 "image_link",
64 "mode",
65 "manuscript_full_text_std_spelling",
66 "volpiano",
67 )
68
69 stdout.write("Processing chants\n")
70 concordances: defaultdict = defaultdict(list)
71 for chant in values:
72 source_id: int = chant["source_id"]
73 source_absolute_url: str = f"{DOMAIN}/source/{source_id}/"
74 chant_id: int = chant["id"]
75 chant_absolute_url: str = f"{DOMAIN}/chant/{chant_id}/"
76
77 concordances[chant["cantus_id"]].append(
78 {
79 "siglum": chant["source__siglum"],
80 "srclink": source_absolute_url,
81 "chantlink": chant_absolute_url,
82 "folio": chant["folio"],
83 "sequence": chant["c_sequence"],
84 "incipit": chant["incipit"],
85 "feast": chant["feast__name"],
86 "genre": chant["genre__name"],
87 "office": chant["office__name"],
88 "position": chant["position"],
89 "cantus_id": chant["cantus_id"],
90 "image": chant["image_link"],
91 "mode": chant["mode"],
92 "full_text": chant["manuscript_full_text_std_spelling"],
93 "melody": chant["volpiano"],
94 "db": "CD",
95 }
96 )
97
98 stdout.write(f"All chants processed - found {len(concordances)} Cantus IDs\n")
99
100 return dict(concordances)
101
[end of django/cantusdb_project/main_app/management/commands/update_cached_concordances.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py b/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py
--- a/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py
+++ b/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py
@@ -1,6 +1,7 @@
import ujson
import os
from sys import stdout
+from typing import Optional
from datetime import datetime
from collections import defaultdict
from django.db.models.query import QuerySet
@@ -8,10 +9,27 @@
from main_app.models import Chant
+# Usage: `python manage.py update_cached_concordances`
+# or `python manage.py update_cached_concordances -d "/path/to/directory/in/which/to/save/concordances"`
+
+
class Command(BaseCommand):
+ def add_arguments(self, parser):
+ parser.add_argument(
+ "-d",
+ "--directory",
+ help="Optional filepath specifying a directory to output concordances",
+ type=str,
+ )
+
def handle(self, *args, **kwargs) -> None:
- CACHE_DIR: str = "api_cache"
- FILEPATH: str = f"{CACHE_DIR}/concordances.json"
+ cache_dir: Optional[str] = kwargs["directory"]
+ if not cache_dir:
+ # this default directory should match the value in docker-compose.yml,
+ # at services:django:volumes:api_cache_volume
+ cache_dir = "/resources/api_cache"
+
+ filepath: str = f"{cache_dir}/concordances.json"
start_time: str = datetime.now().isoformat()
stdout.write(f"Running update_cached_concordances at {start_time}.\n")
concordances: dict = get_concordances()
@@ -23,22 +41,29 @@
"data": concordances,
"metadata": metadata,
}
- stdout.write(f"Attempting to make directory at {CACHE_DIR} to hold cache: ")
+ stdout.write(f"Attempting to make directory at {cache_dir} to hold cache: ")
try:
- os.mkdir(CACHE_DIR)
- stdout.write(f"successfully created directory at {CACHE_DIR}.\n")
+ os.mkdir(cache_dir)
+ stdout.write(f"successfully created directory at {cache_dir}.\n")
except FileExistsError:
- stdout.write(f"directory at {CACHE_DIR} already exists.\n")
- stdout.write(f"Writing concordances to {FILEPATH} at {write_time}.\n")
- with open(FILEPATH, "w") as json_file:
+ stdout.write(f"directory at {cache_dir} already exists.\n")
+ stdout.write(f"Writing concordances to {filepath} at {write_time}.\n")
+ with open(filepath, "w") as json_file:
ujson.dump(data_and_metadata, json_file)
end_time = datetime.now().isoformat()
stdout.write(
- f"Concordances successfully written to {FILEPATH} at {end_time}.\n\n"
+ f"Concordances successfully written to {filepath} at {end_time}.\n\n"
)
def get_concordances() -> dict:
+ """Fetch all published chants in the database, group them by Cantus ID, and return
+ a dictionary containing information on each of these chants.
+
+ Returns:
+ dict: A dictionary where each key is a Cantus ID and each value is a list all
+ published chants in the database with that Cantus ID.
+ """
DOMAIN: str = "https://cantusdatabase.org"
stdout.write("Querying database for published chants\n")
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py b/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py\n--- a/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py\n+++ b/django/cantusdb_project/main_app/management/commands/update_cached_concordances.py\n@@ -1,6 +1,7 @@\n import ujson\n import os\n from sys import stdout\n+from typing import Optional\n from datetime import datetime\n from collections import defaultdict\n from django.db.models.query import QuerySet\n@@ -8,10 +9,27 @@\n from main_app.models import Chant\n \n \n+# Usage: `python manage.py update_cached_concordances`\n+# or `python manage.py update_cached_concordances -d \"/path/to/directory/in/which/to/save/concordances\"`\n+\n+\n class Command(BaseCommand):\n+ def add_arguments(self, parser):\n+ parser.add_argument(\n+ \"-d\",\n+ \"--directory\",\n+ help=\"Optional filepath specifying a directory to output concordances\",\n+ type=str,\n+ )\n+\n def handle(self, *args, **kwargs) -> None:\n- CACHE_DIR: str = \"api_cache\"\n- FILEPATH: str = f\"{CACHE_DIR}/concordances.json\"\n+ cache_dir: Optional[str] = kwargs[\"directory\"]\n+ if not cache_dir:\n+ # this default directory should match the value in docker-compose.yml,\n+ # at services:django:volumes:api_cache_volume\n+ cache_dir = \"/resources/api_cache\"\n+\n+ filepath: str = f\"{cache_dir}/concordances.json\"\n start_time: str = datetime.now().isoformat()\n stdout.write(f\"Running update_cached_concordances at {start_time}.\\n\")\n concordances: dict = get_concordances()\n@@ -23,22 +41,29 @@\n \"data\": concordances,\n \"metadata\": metadata,\n }\n- stdout.write(f\"Attempting to make directory at {CACHE_DIR} to hold cache: \")\n+ stdout.write(f\"Attempting to make directory at {cache_dir} to hold cache: \")\n try:\n- os.mkdir(CACHE_DIR)\n- stdout.write(f\"successfully created directory at {CACHE_DIR}.\\n\")\n+ os.mkdir(cache_dir)\n+ stdout.write(f\"successfully created directory at {cache_dir}.\\n\")\n except FileExistsError:\n- stdout.write(f\"directory at {CACHE_DIR} already exists.\\n\")\n- stdout.write(f\"Writing concordances to {FILEPATH} at {write_time}.\\n\")\n- with open(FILEPATH, \"w\") as json_file:\n+ stdout.write(f\"directory at {cache_dir} already exists.\\n\")\n+ stdout.write(f\"Writing concordances to {filepath} at {write_time}.\\n\")\n+ with open(filepath, \"w\") as json_file:\n ujson.dump(data_and_metadata, json_file)\n end_time = datetime.now().isoformat()\n stdout.write(\n- f\"Concordances successfully written to {FILEPATH} at {end_time}.\\n\\n\"\n+ f\"Concordances successfully written to {filepath} at {end_time}.\\n\\n\"\n )\n \n \n def get_concordances() -> dict:\n+ \"\"\"Fetch all published chants in the database, group them by Cantus ID, and return\n+ a dictionary containing information on each of these chants.\n+\n+ Returns:\n+ dict: A dictionary where each key is a Cantus ID and each value is a list all\n+ published chants in the database with that Cantus ID.\n+ \"\"\"\n DOMAIN: str = \"https://cantusdatabase.org\"\n \n stdout.write(\"Querying database for published chants\\n\")\n", "issue": "We need a new API that displays concordances information for all chants in the database\nIn an email from Jan:\r\n\r\n> The intensive process of getting all the data from CD via individual json-cid requests (59.000+) is running already on the 3rd day (and not finished yet) but this will not keep the Cantus data fresh in the Cantus Index 
API in the long term.\r\n> \r\n> The solution would be to regularly create a large JSON file export of all the CD chants (with the same fields as in json-cid exports) and make it available as a file to download. An example of such json export is here: https://austriamanus.org/files/concordances-export.json\r\n> This kind of data transfer works also with the MMMO database which has approximately half the amount of data compared to a CD. I believe it would also be the best solution for CD.\r\n\r\nThis will not be difficult. We can use the code in our `json-con` API, but return all chants rather than filtering them by Cantus ID.\r\n\r\nWhat's a good path for this API to live at? `/json-concordances-export`?\n", "before_files": [{"content": "import ujson\nimport os\nfrom sys import stdout\nfrom datetime import datetime\nfrom collections import defaultdict\nfrom django.db.models.query import QuerySet\nfrom django.core.management.base import BaseCommand\nfrom main_app.models import Chant\n\n\nclass Command(BaseCommand):\n def handle(self, *args, **kwargs) -> None:\n CACHE_DIR: str = \"api_cache\"\n FILEPATH: str = f\"{CACHE_DIR}/concordances.json\"\n start_time: str = datetime.now().isoformat()\n stdout.write(f\"Running update_cached_concordances at {start_time}.\\n\")\n concordances: dict = get_concordances()\n write_time: str = datetime.now().isoformat()\n metadata: dict = {\n \"last_updated\": write_time,\n }\n data_and_metadata: dict = {\n \"data\": concordances,\n \"metadata\": metadata,\n }\n stdout.write(f\"Attempting to make directory at {CACHE_DIR} to hold cache: \")\n try:\n os.mkdir(CACHE_DIR)\n stdout.write(f\"successfully created directory at {CACHE_DIR}.\\n\")\n except FileExistsError:\n stdout.write(f\"directory at {CACHE_DIR} already exists.\\n\")\n stdout.write(f\"Writing concordances to {FILEPATH} at {write_time}.\\n\")\n with open(FILEPATH, \"w\") as json_file:\n ujson.dump(data_and_metadata, json_file)\n end_time = datetime.now().isoformat()\n stdout.write(\n f\"Concordances successfully written to {FILEPATH} at {end_time}.\\n\\n\"\n )\n\n\ndef get_concordances() -> dict:\n DOMAIN: str = \"https://cantusdatabase.org\"\n\n stdout.write(\"Querying database for published chants\\n\")\n published_chants: QuerySet[Chant] = Chant.objects.filter(source__published=True)\n values: QuerySet[dict] = published_chants.select_related(\n \"source\",\n \"feast\",\n \"genre\",\n \"office\",\n ).values(\n \"id\",\n \"source_id\",\n \"source__siglum\",\n \"folio\",\n \"c_sequence\",\n \"incipit\",\n \"feast__name\",\n \"genre__name\",\n \"office__name\",\n \"position\",\n \"cantus_id\",\n \"image_link\",\n \"mode\",\n \"manuscript_full_text_std_spelling\",\n \"volpiano\",\n )\n\n stdout.write(\"Processing chants\\n\")\n concordances: defaultdict = defaultdict(list)\n for chant in values:\n source_id: int = chant[\"source_id\"]\n source_absolute_url: str = f\"{DOMAIN}/source/{source_id}/\"\n chant_id: int = chant[\"id\"]\n chant_absolute_url: str = f\"{DOMAIN}/chant/{chant_id}/\"\n\n concordances[chant[\"cantus_id\"]].append(\n {\n \"siglum\": chant[\"source__siglum\"],\n \"srclink\": source_absolute_url,\n \"chantlink\": chant_absolute_url,\n \"folio\": chant[\"folio\"],\n \"sequence\": chant[\"c_sequence\"],\n \"incipit\": chant[\"incipit\"],\n \"feast\": chant[\"feast__name\"],\n \"genre\": chant[\"genre__name\"],\n \"office\": chant[\"office__name\"],\n \"position\": chant[\"position\"],\n \"cantus_id\": chant[\"cantus_id\"],\n \"image\": chant[\"image_link\"],\n \"mode\": chant[\"mode\"],\n 
\"full_text\": chant[\"manuscript_full_text_std_spelling\"],\n \"melody\": chant[\"volpiano\"],\n \"db\": \"CD\",\n }\n )\n\n stdout.write(f\"All chants processed - found {len(concordances)} Cantus IDs\\n\")\n\n return dict(concordances)\n", "path": "django/cantusdb_project/main_app/management/commands/update_cached_concordances.py"}]} | 1,814 | 832 |
gh_patches_debug_1450 | rasdani/github-patches | git_diff | pyca__cryptography-3731 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
release infrastructure doesn't handle "out of order" releases
Specifically if we issue an `0.X` release, then an `0.X+1` release, and then we go to do an `0.X.1` release, the wheel automation won't work, since it builds a wheel for the latest release.
</issue>
<code>
[start of release.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import getpass
8 import io
9 import os
10 import subprocess
11 import time
12
13 import click
14
15 from clint.textui.progress import Bar as ProgressBar
16
17 import requests
18
19
20 JENKINS_URL = (
21 "https://ci.cryptography.io/job/cryptography-support-jobs/"
22 "job/wheel-builder"
23 )
24
25
26 def run(*args, **kwargs):
27 kwargs.setdefault("stderr", subprocess.STDOUT)
28 try:
29 subprocess.check_output(list(args), **kwargs)
30 except subprocess.CalledProcessError as e:
31 # Reraise this with a different type so that str(e) is something with
32 # stdout in it.
33 raise Exception(e.cmd, e.returncode, e.output)
34
35
36 def wait_for_build_completed(session):
37 # Wait 20 seconds before actually checking if the build is complete, to
38 # ensure that it had time to really start.
39 time.sleep(20)
40 while True:
41 response = session.get(
42 "{0}/lastBuild/api/json/".format(JENKINS_URL),
43 headers={
44 "Accept": "application/json",
45 }
46 )
47 response.raise_for_status()
48 if not response.json()["building"]:
49 assert response.json()["result"] == "SUCCESS"
50 break
51 time.sleep(0.1)
52
53
54 def download_artifacts(session):
55 response = session.get(
56 "{0}/lastBuild/api/json/".format(JENKINS_URL),
57 headers={
58 "Accept": "application/json"
59 }
60 )
61 response.raise_for_status()
62 json_response = response.json()
63 assert not json_response["building"]
64 assert json_response["result"] == "SUCCESS"
65
66 paths = []
67
68 for artifact in json_response["artifacts"]:
69 response = session.get(
70 "{0}artifact/{1}".format(
71 json_response["url"], artifact["relativePath"]
72 ), stream=True
73 )
74 assert response.headers["content-length"]
75 print("Downloading {0}".format(artifact["fileName"]))
76 bar = ProgressBar(
77 expected_size=int(response.headers["content-length"]),
78 filled_char="="
79 )
80 content = io.BytesIO()
81 for data in response.iter_content(chunk_size=8192):
82 content.write(data)
83 bar.show(content.tell())
84 assert bar.expected_size == content.tell()
85 bar.done()
86 out_path = os.path.join(
87 os.path.dirname(__file__),
88 "dist",
89 artifact["fileName"],
90 )
91 with open(out_path, "wb") as f:
92 f.write(content.getvalue())
93 paths.append(out_path)
94 return paths
95
96
97 @click.command()
98 @click.argument("version")
99 def release(version):
100 """
101 ``version`` should be a string like '0.4' or '1.0'.
102 """
103 run("git", "tag", "-s", version, "-m", "{0} release".format(version))
104 run("git", "push", "--tags")
105
106 run("python", "setup.py", "sdist")
107 run("python", "setup.py", "sdist", "bdist_wheel", cwd="vectors/")
108
109 run(
110 "twine", "upload", "-s", "dist/cryptography-{0}*".format(version),
111 "vectors/dist/cryptography_vectors-{0}*".format(version), shell=True
112 )
113
114 session = requests.Session()
115
116 # This tells the CDN to delete the cached response for the URL. We do this
117 # so that the Jenkins builders will see the new sdist immediately when they
118 # go to build the wheels.
119 response = session.request(
120 "PURGE", "https://pypi.python.org/simple/cryptography/"
121 )
122 response.raise_for_status()
123
124 token = getpass.getpass("Input the Jenkins token: ")
125 response = session.get(
126 "{0}/build".format(JENKINS_URL),
127 params={
128 "token": token,
129 "cause": "Building wheels for {0}".format(version)
130 }
131 )
132 response.raise_for_status()
133 wait_for_build_completed(session)
134 paths = download_artifacts(session)
135 run("twine", "upload", " ".join(paths))
136
137
138 if __name__ == "__main__":
139 release()
140
[end of release.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/release.py b/release.py
--- a/release.py
+++ b/release.py
@@ -126,6 +126,7 @@
"{0}/build".format(JENKINS_URL),
params={
"token": token,
+ "BUILD_VERSION": version,
"cause": "Building wheels for {0}".format(version)
}
)
| {"golden_diff": "diff --git a/release.py b/release.py\n--- a/release.py\n+++ b/release.py\n@@ -126,6 +126,7 @@\n \"{0}/build\".format(JENKINS_URL),\n params={\n \"token\": token,\n+ \"BUILD_VERSION\": version,\n \"cause\": \"Building wheels for {0}\".format(version)\n }\n )\n", "issue": "release infrastrucutre doesn't handle \"out of order\" releases\nSpecifically if we issue an `0.X` release, then an `0.X+1` release, and then we go to do an `0.X.1` release, the wheel automation won't work, since it builds a wheel for the latest release.\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport getpass\nimport io\nimport os\nimport subprocess\nimport time\n\nimport click\n\nfrom clint.textui.progress import Bar as ProgressBar\n\nimport requests\n\n\nJENKINS_URL = (\n \"https://ci.cryptography.io/job/cryptography-support-jobs/\"\n \"job/wheel-builder\"\n)\n\n\ndef run(*args, **kwargs):\n kwargs.setdefault(\"stderr\", subprocess.STDOUT)\n try:\n subprocess.check_output(list(args), **kwargs)\n except subprocess.CalledProcessError as e:\n # Reraise this with a different type so that str(e) is something with\n # stdout in it.\n raise Exception(e.cmd, e.returncode, e.output)\n\n\ndef wait_for_build_completed(session):\n # Wait 20 seconds before actually checking if the build is complete, to\n # ensure that it had time to really start.\n time.sleep(20)\n while True:\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n headers={\n \"Accept\": \"application/json\",\n }\n )\n response.raise_for_status()\n if not response.json()[\"building\"]:\n assert response.json()[\"result\"] == \"SUCCESS\"\n break\n time.sleep(0.1)\n\n\ndef download_artifacts(session):\n response = session.get(\n \"{0}/lastBuild/api/json/\".format(JENKINS_URL),\n headers={\n \"Accept\": \"application/json\"\n }\n )\n response.raise_for_status()\n json_response = response.json()\n assert not json_response[\"building\"]\n assert json_response[\"result\"] == \"SUCCESS\"\n\n paths = []\n\n for artifact in json_response[\"artifacts\"]:\n response = session.get(\n \"{0}artifact/{1}\".format(\n json_response[\"url\"], artifact[\"relativePath\"]\n ), stream=True\n )\n assert response.headers[\"content-length\"]\n print(\"Downloading {0}\".format(artifact[\"fileName\"]))\n bar = ProgressBar(\n expected_size=int(response.headers[\"content-length\"]),\n filled_char=\"=\"\n )\n content = io.BytesIO()\n for data in response.iter_content(chunk_size=8192):\n content.write(data)\n bar.show(content.tell())\n assert bar.expected_size == content.tell()\n bar.done()\n out_path = os.path.join(\n os.path.dirname(__file__),\n \"dist\",\n artifact[\"fileName\"],\n )\n with open(out_path, \"wb\") as f:\n f.write(content.getvalue())\n paths.append(out_path)\n return paths\n\n\[email protected]()\[email protected](\"version\")\ndef release(version):\n \"\"\"\n ``version`` should be a string like '0.4' or '1.0'.\n \"\"\"\n run(\"git\", \"tag\", \"-s\", version, \"-m\", \"{0} release\".format(version))\n run(\"git\", \"push\", \"--tags\")\n\n run(\"python\", \"setup.py\", \"sdist\")\n run(\"python\", \"setup.py\", \"sdist\", \"bdist_wheel\", cwd=\"vectors/\")\n\n run(\n \"twine\", \"upload\", \"-s\", \"dist/cryptography-{0}*\".format(version),\n 
\"vectors/dist/cryptography_vectors-{0}*\".format(version), shell=True\n )\n\n session = requests.Session()\n\n # This tells the CDN to delete the cached response for the URL. We do this\n # so that the Jenkins builders will see the new sdist immediately when they\n # go to build the wheels.\n response = session.request(\n \"PURGE\", \"https://pypi.python.org/simple/cryptography/\"\n )\n response.raise_for_status()\n\n token = getpass.getpass(\"Input the Jenkins token: \")\n response = session.get(\n \"{0}/build\".format(JENKINS_URL),\n params={\n \"token\": token,\n \"cause\": \"Building wheels for {0}\".format(version)\n }\n )\n response.raise_for_status()\n wait_for_build_completed(session)\n paths = download_artifacts(session)\n run(\"twine\", \"upload\", \" \".join(paths))\n\n\nif __name__ == \"__main__\":\n release()\n", "path": "release.py"}]} | 1,864 | 83 |
gh_patches_debug_7406 | rasdani/github-patches | git_diff | interlegis__sapl-1191 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Integration of SAPL 3.1 and Portal Modelo
</issue>
<code>
[start of sapl/base/templatetags/common_tags.py]
1 from compressor.utils import get_class
2 from django import template
3
4 from sapl.base.models import AppConfig
5 from sapl.materia.models import DocumentoAcessorio, MateriaLegislativa
6 from sapl.norma.models import NormaJuridica
7 from sapl.parlamentares.models import Filiacao
8
9 register = template.Library()
10
11
12 @register.simple_tag
13 def field_verbose_name(instance, field_name):
14 return instance._meta.get_field(field_name).verbose_name
15
16
17 @register.simple_tag
18 def fieldclass_verbose_name(class_name, field_name):
19 cls = get_class(class_name)
20 return cls._meta.get_field(field_name).verbose_name
21
22
23 @register.simple_tag
24 def model_verbose_name(class_name):
25 model = get_class(class_name)
26 return model._meta.verbose_name
27
28
29 @register.simple_tag
30 def model_verbose_name_plural(class_name):
31 model = get_class(class_name)
32 return model._meta.verbose_name_plural
33
34
35 @register.filter
36 def lookup(d, key):
37 return d[key] if key in d else []
38
39
40 @register.filter
41 def isinst(value, class_str):
42 classe = value.__class__.__name__
43 return classe == class_str
44
45
46 @register.filter
47 def get_add_perm(value, arg):
48 perm = value
49 view = arg
50
51 try:
52 nome_app = view.__class__.model._meta.app_label
53 except AttributeError:
54 return None
55 nome_model = view.__class__.model.__name__.lower()
56 can_add = '.add_' + nome_model
57
58 return perm.__contains__(nome_app + can_add)
59
60
61 @register.filter
62 def get_change_perm(value, arg):
63 perm = value
64 view = arg
65
66 try:
67 nome_app = view.__class__.model._meta.app_label
68 except AttributeError:
69 return None
70 nome_model = view.__class__.model.__name__.lower()
71 can_change = '.change_' + nome_model
72
73 return perm.__contains__(nome_app + can_change)
74
75
76 @register.filter
77 def get_delete_perm(value, arg):
78 perm = value
79 view = arg
80
81 try:
82 nome_app = view.__class__.model._meta.app_label
83 except AttributeError:
84 return None
85 nome_model = view.__class__.model.__name__.lower()
86 can_delete = '.delete_' + nome_model
87
88 return perm.__contains__(nome_app + can_delete)
89
90
91 @register.filter
92 def ultima_filiacao(value):
93 parlamentar = value
94
95 ultima_filiacao = Filiacao.objects.filter(
96 parlamentar=parlamentar).order_by('-data').first()
97
98 if ultima_filiacao:
99 return ultima_filiacao.partido
100 else:
101 return None
102
103
104 @register.filter
105 def get_config_attr(attribute):
106 return AppConfig.attr(attribute)
107
108
109 @register.filter
110 def str2intabs(value):
111 if not isinstance(value, str):
112 return ''
113 try:
114 v = int(value)
115 v = abs(v)
116 return v
117 except:
118 return ''
119
120
121 @register.filter
122 def url(value):
123 if value.startswith('http://') or value.startswith('https://'):
124 return True
125 return False
126
127
128 @register.filter
129 def cronometro_to_seconds(value):
130 if not AppConfig.attr('cronometro_' + value):
131 return 0
132
133 m, s, x = AppConfig.attr(
134 'cronometro_' + value).isoformat().split(':')
135
136 return 60 * int(m) + int(s)
137
138
139 @register.filter
140 def to_list_pk(object_list):
141 return [o.pk for o in object_list]
142
143
144 @register.filter
145 def search_get_model(object):
146 if type(object) == MateriaLegislativa:
147 return 'm'
148 elif type(object) == DocumentoAcessorio:
149 return 'd'
150 elif type(object) == NormaJuridica:
151 return 'n'
152
153 return None
154
[end of sapl/base/templatetags/common_tags.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sapl/base/templatetags/common_tags.py b/sapl/base/templatetags/common_tags.py
--- a/sapl/base/templatetags/common_tags.py
+++ b/sapl/base/templatetags/common_tags.py
@@ -117,6 +117,23 @@
except:
return ''
[email protected]
+def has_iframe(request):
+
+ iframe = request.session.get('iframe', False)
+ if not iframe and 'iframe' in request.GET:
+ ival = request.GET['iframe']
+ if ival and int(ival) == 1:
+ request.session['iframe'] = True
+ return True
+ elif 'iframe' in request.GET:
+ ival = request.GET['iframe']
+ if ival and int(ival) == 0:
+ del request.session['iframe']
+ return False
+
+ return iframe
+
@register.filter
def url(value):
| {"golden_diff": "diff --git a/sapl/base/templatetags/common_tags.py b/sapl/base/templatetags/common_tags.py\n--- a/sapl/base/templatetags/common_tags.py\n+++ b/sapl/base/templatetags/common_tags.py\n@@ -117,6 +117,23 @@\n except:\n return ''\n \[email protected]\n+def has_iframe(request):\n+\n+ iframe = request.session.get('iframe', False)\n+ if not iframe and 'iframe' in request.GET:\n+ ival = request.GET['iframe']\n+ if ival and int(ival) == 1:\n+ request.session['iframe'] = True\n+ return True\n+ elif 'iframe' in request.GET:\n+ ival = request.GET['iframe']\n+ if ival and int(ival) == 0:\n+ del request.session['iframe']\n+ return False\n+\n+ return iframe\n+\n \n @register.filter\n def url(value):\n", "issue": "Integra\u00e7\u00e3o do SAPL 3.1 e Portal Modelo\n\n", "before_files": [{"content": "from compressor.utils import get_class\nfrom django import template\n\nfrom sapl.base.models import AppConfig\nfrom sapl.materia.models import DocumentoAcessorio, MateriaLegislativa\nfrom sapl.norma.models import NormaJuridica\nfrom sapl.parlamentares.models import Filiacao\n\nregister = template.Library()\n\n\[email protected]_tag\ndef field_verbose_name(instance, field_name):\n return instance._meta.get_field(field_name).verbose_name\n\n\[email protected]_tag\ndef fieldclass_verbose_name(class_name, field_name):\n cls = get_class(class_name)\n return cls._meta.get_field(field_name).verbose_name\n\n\[email protected]_tag\ndef model_verbose_name(class_name):\n model = get_class(class_name)\n return model._meta.verbose_name\n\n\[email protected]_tag\ndef model_verbose_name_plural(class_name):\n model = get_class(class_name)\n return model._meta.verbose_name_plural\n\n\[email protected]\ndef lookup(d, key):\n return d[key] if key in d else []\n\n\[email protected]\ndef isinst(value, class_str):\n classe = value.__class__.__name__\n return classe == class_str\n\n\[email protected]\ndef get_add_perm(value, arg):\n perm = value\n view = arg\n\n try:\n nome_app = view.__class__.model._meta.app_label\n except AttributeError:\n return None\n nome_model = view.__class__.model.__name__.lower()\n can_add = '.add_' + nome_model\n\n return perm.__contains__(nome_app + can_add)\n\n\[email protected]\ndef get_change_perm(value, arg):\n perm = value\n view = arg\n\n try:\n nome_app = view.__class__.model._meta.app_label\n except AttributeError:\n return None\n nome_model = view.__class__.model.__name__.lower()\n can_change = '.change_' + nome_model\n\n return perm.__contains__(nome_app + can_change)\n\n\[email protected]\ndef get_delete_perm(value, arg):\n perm = value\n view = arg\n\n try:\n nome_app = view.__class__.model._meta.app_label\n except AttributeError:\n return None\n nome_model = view.__class__.model.__name__.lower()\n can_delete = '.delete_' + nome_model\n\n return perm.__contains__(nome_app + can_delete)\n\n\[email protected]\ndef ultima_filiacao(value):\n parlamentar = value\n\n ultima_filiacao = Filiacao.objects.filter(\n parlamentar=parlamentar).order_by('-data').first()\n\n if ultima_filiacao:\n return ultima_filiacao.partido\n else:\n return None\n\n\[email protected]\ndef get_config_attr(attribute):\n return AppConfig.attr(attribute)\n\n\[email protected]\ndef str2intabs(value):\n if not isinstance(value, str):\n return ''\n try:\n v = int(value)\n v = abs(v)\n return v\n except:\n return ''\n\n\[email protected]\ndef url(value):\n if value.startswith('http://') or value.startswith('https://'):\n return True\n return False\n\n\[email protected]\ndef cronometro_to_seconds(value):\n if not 
AppConfig.attr('cronometro_' + value):\n return 0\n\n m, s, x = AppConfig.attr(\n 'cronometro_' + value).isoformat().split(':')\n\n return 60 * int(m) + int(s)\n\n\[email protected]\ndef to_list_pk(object_list):\n return [o.pk for o in object_list]\n\n\[email protected]\ndef search_get_model(object):\n if type(object) == MateriaLegislativa:\n return 'm'\n elif type(object) == DocumentoAcessorio:\n return 'd'\n elif type(object) == NormaJuridica:\n return 'n'\n\n return None\n", "path": "sapl/base/templatetags/common_tags.py"}]} | 1,779 | 219 |
gh_patches_debug_29041 | rasdani/github-patches | git_diff | CTFd__CTFd-1699 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unnecessary ping event
**Environment**:
- CTFd Version/Commit: 3.1.1, latest commit
- Operating System: any
- Web Browser and Version: any
In the comment you said "Immediately yield a ping event to force Response headers to be set", but this event lies inside the `while True` loop, which results in an unnecessary ping event every 5 seconds.
I believe that's unintended behavior, though it doesn't break anything.
https://github.com/CTFd/CTFd/blob/4c31dc23e8cfa0308367732d603b16e01871b00e/CTFd/utils/events/__init__.py#L57-L67
</issue>
<code>
[start of CTFd/utils/events/__init__.py]
1 import json
2 from collections import defaultdict
3 from queue import Queue
4
5 from gevent import Timeout, spawn
6 from tenacity import retry, wait_exponential
7
8 from CTFd.cache import cache
9 from CTFd.utils import string_types
10
11
12 class ServerSentEvent(object):
13 def __init__(self, data, type=None, id=None):
14 self.data = data
15 self.type = type
16 self.id = id
17
18 def __str__(self):
19 if isinstance(self.data, string_types):
20 data = self.data
21 else:
22 data = json.dumps(self.data)
23 lines = ["data:{value}".format(value=line) for line in data.splitlines()]
24 if self.type:
25 lines.insert(0, "event:{value}".format(value=self.type))
26 if self.id:
27 lines.append("id:{value}".format(value=self.id))
28 return "\n".join(lines) + "\n\n"
29
30 def to_dict(self):
31 d = {"data": self.data}
32 if self.type:
33 d["type"] = self.type
34 if self.id:
35 d["id"] = self.id
36 return d
37
38
39 class EventManager(object):
40 def __init__(self):
41 self.clients = {}
42
43 def publish(self, data, type=None, channel="ctf"):
44 event = ServerSentEvent(data, type=type)
45 message = event.to_dict()
46 for client in list(self.clients.values()):
47 client[channel].put(message)
48 return len(self.clients)
49
50 def listen(self):
51 pass
52
53 def subscribe(self, channel="ctf"):
54 q = defaultdict(Queue)
55 self.clients[id(q)] = q
56 try:
57 while True:
58 try:
59 # Immediately yield a ping event to force Response headers to be set
60 # or else some reverse proxies will incorrectly buffer SSE
61 yield ServerSentEvent(data="", type="ping")
62
63 with Timeout(5):
64 message = q[channel].get()
65 yield ServerSentEvent(**message)
66 except Timeout:
67 yield ServerSentEvent(data="", type="ping")
68 finally:
69 del self.clients[id(q)]
70 del q
71
72
73 class RedisEventManager(EventManager):
74 def __init__(self):
75 super(EventManager, self).__init__()
76 self.client = cache.cache._write_client
77 self.clients = {}
78
79 def publish(self, data, type=None, channel="ctf"):
80 event = ServerSentEvent(data, type=type)
81 message = json.dumps(event.to_dict())
82 return self.client.publish(message=message, channel=channel)
83
84 def listen(self, channel="ctf"):
85 @retry(wait=wait_exponential(min=1, max=30))
86 def _listen():
87 while True:
88 pubsub = self.client.pubsub()
89 pubsub.subscribe(channel)
90 try:
91 while True:
92 message = pubsub.get_message(
93 ignore_subscribe_messages=True, timeout=5
94 )
95 if message:
96 if message["type"] == "message":
97 event = json.loads(message["data"])
98 for client in list(self.clients.values()):
99 client[channel].put(event)
100 finally:
101 pubsub.close()
102
103 spawn(_listen)
104
105 def subscribe(self, channel="ctf"):
106 q = defaultdict(Queue)
107 self.clients[id(q)] = q
108 try:
109 while True:
110 try:
111 # Immediately yield a ping event to force Response headers to be set
112 # or else some reverse proxies will incorrectly buffer SSE
113 yield ServerSentEvent(data="", type="ping")
114
115 with Timeout(5):
116 message = q[channel].get()
117 yield ServerSentEvent(**message)
118 except Timeout:
119 yield ServerSentEvent(data="", type="ping")
120 finally:
121 del self.clients[id(q)]
122 del q
123
[end of CTFd/utils/events/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/CTFd/utils/events/__init__.py b/CTFd/utils/events/__init__.py
--- a/CTFd/utils/events/__init__.py
+++ b/CTFd/utils/events/__init__.py
@@ -54,12 +54,11 @@
q = defaultdict(Queue)
self.clients[id(q)] = q
try:
+ # Immediately yield a ping event to force Response headers to be set
+ # or else some reverse proxies will incorrectly buffer SSE
+ yield ServerSentEvent(data="", type="ping")
while True:
try:
- # Immediately yield a ping event to force Response headers to be set
- # or else some reverse proxies will incorrectly buffer SSE
- yield ServerSentEvent(data="", type="ping")
-
with Timeout(5):
message = q[channel].get()
yield ServerSentEvent(**message)
@@ -106,12 +105,11 @@
q = defaultdict(Queue)
self.clients[id(q)] = q
try:
+ # Immediately yield a ping event to force Response headers to be set
+ # or else some reverse proxies will incorrectly buffer SSE
+ yield ServerSentEvent(data="", type="ping")
while True:
try:
- # Immediately yield a ping event to force Response headers to be set
- # or else some reverse proxies will incorrectly buffer SSE
- yield ServerSentEvent(data="", type="ping")
-
with Timeout(5):
message = q[channel].get()
yield ServerSentEvent(**message)
| {"golden_diff": "diff --git a/CTFd/utils/events/__init__.py b/CTFd/utils/events/__init__.py\n--- a/CTFd/utils/events/__init__.py\n+++ b/CTFd/utils/events/__init__.py\n@@ -54,12 +54,11 @@\n q = defaultdict(Queue)\n self.clients[id(q)] = q\n try:\n+ # Immediately yield a ping event to force Response headers to be set\n+ # or else some reverse proxies will incorrectly buffer SSE\n+ yield ServerSentEvent(data=\"\", type=\"ping\")\n while True:\n try:\n- # Immediately yield a ping event to force Response headers to be set\n- # or else some reverse proxies will incorrectly buffer SSE\n- yield ServerSentEvent(data=\"\", type=\"ping\")\n-\n with Timeout(5):\n message = q[channel].get()\n yield ServerSentEvent(**message)\n@@ -106,12 +105,11 @@\n q = defaultdict(Queue)\n self.clients[id(q)] = q\n try:\n+ # Immediately yield a ping event to force Response headers to be set\n+ # or else some reverse proxies will incorrectly buffer SSE\n+ yield ServerSentEvent(data=\"\", type=\"ping\")\n while True:\n try:\n- # Immediately yield a ping event to force Response headers to be set\n- # or else some reverse proxies will incorrectly buffer SSE\n- yield ServerSentEvent(data=\"\", type=\"ping\")\n-\n with Timeout(5):\n message = q[channel].get()\n yield ServerSentEvent(**message)\n", "issue": "Unnecessary ping event\n**Environment**:\r\n\r\n- CTFd Version/Commit: 3.1.1, latest commit\r\n- Operating System: any\r\n- Web Browser and Version: any\r\n\r\nin the comment you said \"Immediately yield a ping event to force Response headers to be set\", but this event seems to lies inside the while True loop, which results to an unnecessary ping event every 5 seconds.\r\nI believe that's an unintended behavior, though it doesn't break anything.\r\n\r\nhttps://github.com/CTFd/CTFd/blob/4c31dc23e8cfa0308367732d603b16e01871b00e/CTFd/utils/events/__init__.py#L57-L67\n", "before_files": [{"content": "import json\nfrom collections import defaultdict\nfrom queue import Queue\n\nfrom gevent import Timeout, spawn\nfrom tenacity import retry, wait_exponential\n\nfrom CTFd.cache import cache\nfrom CTFd.utils import string_types\n\n\nclass ServerSentEvent(object):\n def __init__(self, data, type=None, id=None):\n self.data = data\n self.type = type\n self.id = id\n\n def __str__(self):\n if isinstance(self.data, string_types):\n data = self.data\n else:\n data = json.dumps(self.data)\n lines = [\"data:{value}\".format(value=line) for line in data.splitlines()]\n if self.type:\n lines.insert(0, \"event:{value}\".format(value=self.type))\n if self.id:\n lines.append(\"id:{value}\".format(value=self.id))\n return \"\\n\".join(lines) + \"\\n\\n\"\n\n def to_dict(self):\n d = {\"data\": self.data}\n if self.type:\n d[\"type\"] = self.type\n if self.id:\n d[\"id\"] = self.id\n return d\n\n\nclass EventManager(object):\n def __init__(self):\n self.clients = {}\n\n def publish(self, data, type=None, channel=\"ctf\"):\n event = ServerSentEvent(data, type=type)\n message = event.to_dict()\n for client in list(self.clients.values()):\n client[channel].put(message)\n return len(self.clients)\n\n def listen(self):\n pass\n\n def subscribe(self, channel=\"ctf\"):\n q = defaultdict(Queue)\n self.clients[id(q)] = q\n try:\n while True:\n try:\n # Immediately yield a ping event to force Response headers to be set\n # or else some reverse proxies will incorrectly buffer SSE\n yield ServerSentEvent(data=\"\", type=\"ping\")\n\n with Timeout(5):\n message = q[channel].get()\n yield ServerSentEvent(**message)\n except Timeout:\n yield 
ServerSentEvent(data=\"\", type=\"ping\")\n finally:\n del self.clients[id(q)]\n del q\n\n\nclass RedisEventManager(EventManager):\n def __init__(self):\n super(EventManager, self).__init__()\n self.client = cache.cache._write_client\n self.clients = {}\n\n def publish(self, data, type=None, channel=\"ctf\"):\n event = ServerSentEvent(data, type=type)\n message = json.dumps(event.to_dict())\n return self.client.publish(message=message, channel=channel)\n\n def listen(self, channel=\"ctf\"):\n @retry(wait=wait_exponential(min=1, max=30))\n def _listen():\n while True:\n pubsub = self.client.pubsub()\n pubsub.subscribe(channel)\n try:\n while True:\n message = pubsub.get_message(\n ignore_subscribe_messages=True, timeout=5\n )\n if message:\n if message[\"type\"] == \"message\":\n event = json.loads(message[\"data\"])\n for client in list(self.clients.values()):\n client[channel].put(event)\n finally:\n pubsub.close()\n\n spawn(_listen)\n\n def subscribe(self, channel=\"ctf\"):\n q = defaultdict(Queue)\n self.clients[id(q)] = q\n try:\n while True:\n try:\n # Immediately yield a ping event to force Response headers to be set\n # or else some reverse proxies will incorrectly buffer SSE\n yield ServerSentEvent(data=\"\", type=\"ping\")\n\n with Timeout(5):\n message = q[channel].get()\n yield ServerSentEvent(**message)\n except Timeout:\n yield ServerSentEvent(data=\"\", type=\"ping\")\n finally:\n del self.clients[id(q)]\n del q\n", "path": "CTFd/utils/events/__init__.py"}]} | 1,774 | 341 |
gh_patches_debug_971 | rasdani/github-patches | git_diff | docker__docker-py-1204 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Issue with requests dependency
I found that commit 95d9306d2a1fd22dffb12a0548abf2d2f744ed9d excludes requests 2.11 because of a bug that is now fixed in requests 2.11.1. That's giving me a version conflict with another module in my project:
```
pkg_resources.ContextualVersionConflict: (requests 2.11.1 (..............), Requirement.parse('requests<2.11,>=2.5.2'), {'docker-py'})
```
Can we allow requests 2.11.1 ?
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 from setuptools import setup
6
7
8 ROOT_DIR = os.path.dirname(__file__)
9 SOURCE_DIR = os.path.join(ROOT_DIR)
10
11 requirements = [
12 'requests >= 2.5.2, < 2.11',
13 'six >= 1.4.0',
14 'websocket-client >= 0.32.0',
15 'docker-pycreds >= 0.2.1'
16 ]
17
18 if sys.platform == 'win32':
19 requirements.append('pypiwin32 >= 219')
20
21 extras_require = {
22 ':python_version < "3.5"': 'backports.ssl_match_hostname >= 3.5',
23 ':python_version < "3.3"': 'ipaddress >= 1.0.16',
24 }
25
26 version = None
27 exec(open('docker/version.py').read())
28
29 with open('./test-requirements.txt') as test_reqs_txt:
30 test_requirements = [line for line in test_reqs_txt]
31
32
33 setup(
34 name="docker-py",
35 version=version,
36 description="Python client for Docker.",
37 url='https://github.com/docker/docker-py/',
38 packages=[
39 'docker', 'docker.api', 'docker.auth', 'docker.transport',
40 'docker.utils', 'docker.utils.ports', 'docker.ssladapter',
41 'docker.types',
42 ],
43 install_requires=requirements,
44 tests_require=test_requirements,
45 extras_require=extras_require,
46 zip_safe=False,
47 test_suite='tests',
48 classifiers=[
49 'Development Status :: 4 - Beta',
50 'Environment :: Other Environment',
51 'Intended Audience :: Developers',
52 'Operating System :: OS Independent',
53 'Programming Language :: Python',
54 'Programming Language :: Python :: 2',
55 'Programming Language :: Python :: 2.6',
56 'Programming Language :: Python :: 2.7',
57 'Programming Language :: Python :: 3',
58 'Programming Language :: Python :: 3.3',
59 'Programming Language :: Python :: 3.4',
60 'Programming Language :: Python :: 3.5',
61 'Topic :: Utilities',
62 'License :: OSI Approved :: Apache Software License',
63 ],
64 )
65
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -9,7 +9,7 @@
SOURCE_DIR = os.path.join(ROOT_DIR)
requirements = [
- 'requests >= 2.5.2, < 2.11',
+ 'requests >= 2.5.2',
'six >= 1.4.0',
'websocket-client >= 0.32.0',
'docker-pycreds >= 0.2.1'
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -9,7 +9,7 @@\n SOURCE_DIR = os.path.join(ROOT_DIR)\n \n requirements = [\n- 'requests >= 2.5.2, < 2.11',\n+ 'requests >= 2.5.2',\n 'six >= 1.4.0',\n 'websocket-client >= 0.32.0',\n 'docker-pycreds >= 0.2.1'\n", "issue": "Issue with requests dependency\nI found that commit 95d9306d2a1fd22dffb12a0548abf2d2f744ed9d excludes requests 2.11 for a bug that is fixed now on requests 2.11.1. And that's giving me a version conflict with another of the modules on my project:\n\n```\npkg_resources.ContextualVersionConflict: (requests 2.11.1 (..............), Requirement.parse('requests<2.11,>=2.5.2'), {'docker-py'})\n```\n\nCan we allow requests 2.11.1 ?\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nfrom setuptools import setup\n\n\nROOT_DIR = os.path.dirname(__file__)\nSOURCE_DIR = os.path.join(ROOT_DIR)\n\nrequirements = [\n 'requests >= 2.5.2, < 2.11',\n 'six >= 1.4.0',\n 'websocket-client >= 0.32.0',\n 'docker-pycreds >= 0.2.1'\n]\n\nif sys.platform == 'win32':\n requirements.append('pypiwin32 >= 219')\n\nextras_require = {\n ':python_version < \"3.5\"': 'backports.ssl_match_hostname >= 3.5',\n ':python_version < \"3.3\"': 'ipaddress >= 1.0.16',\n}\n\nversion = None\nexec(open('docker/version.py').read())\n\nwith open('./test-requirements.txt') as test_reqs_txt:\n test_requirements = [line for line in test_reqs_txt]\n\n\nsetup(\n name=\"docker-py\",\n version=version,\n description=\"Python client for Docker.\",\n url='https://github.com/docker/docker-py/',\n packages=[\n 'docker', 'docker.api', 'docker.auth', 'docker.transport',\n 'docker.utils', 'docker.utils.ports', 'docker.ssladapter',\n 'docker.types',\n ],\n install_requires=requirements,\n tests_require=test_requirements,\n extras_require=extras_require,\n zip_safe=False,\n test_suite='tests',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Topic :: Utilities',\n 'License :: OSI Approved :: Apache Software License',\n ],\n)\n", "path": "setup.py"}]} | 1,267 | 113 |
gh_patches_debug_63551 | rasdani/github-patches | git_diff | falconry__falcon-602 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Hoist HTTPStatus into falcon top-level namespace
I.e., add an import line to `falcon/__init__.py`
</issue>
<code>
[start of falcon/__init__.py]
1 # Copyright 2013 by Rackspace Hosting, Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 HTTP_METHODS = (
16 'CONNECT',
17 'DELETE',
18 'GET',
19 'HEAD',
20 'OPTIONS',
21 'PATCH',
22 'POST',
23 'PUT',
24 'TRACE',
25 )
26
27 DEFAULT_MEDIA_TYPE = 'application/json; charset=utf-8'
28
29
30 # Hoist classes and functions into the falcon namespace
31 from falcon.version import __version__ # NOQA
32 from falcon.api import API, DEFAULT_MEDIA_TYPE # NOQA
33 from falcon.status_codes import * # NOQA
34 from falcon.errors import * # NOQA
35 from falcon.redirects import * # NOQA
36 from falcon.http_error import HTTPError # NOQA
37 from falcon.util import * # NOQA
38 from falcon.hooks import before, after # NOQA
39 from falcon.request import Request, RequestOptions # NOQA
40 from falcon.response import Response # NOQA
41
[end of falcon/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/falcon/__init__.py b/falcon/__init__.py
--- a/falcon/__init__.py
+++ b/falcon/__init__.py
@@ -34,6 +34,7 @@
from falcon.errors import * # NOQA
from falcon.redirects import * # NOQA
from falcon.http_error import HTTPError # NOQA
+from falcon.http_status import HTTPStatus # NOQA
from falcon.util import * # NOQA
from falcon.hooks import before, after # NOQA
from falcon.request import Request, RequestOptions # NOQA
| {"golden_diff": "diff --git a/falcon/__init__.py b/falcon/__init__.py\n--- a/falcon/__init__.py\n+++ b/falcon/__init__.py\n@@ -34,6 +34,7 @@\n from falcon.errors import * # NOQA\n from falcon.redirects import * # NOQA\n from falcon.http_error import HTTPError # NOQA\n+from falcon.http_status import HTTPStatus # NOQA\n from falcon.util import * # NOQA\n from falcon.hooks import before, after # NOQA\n from falcon.request import Request, RequestOptions # NOQA\n", "issue": "Hoist HTTPStatus into falcon top-level namespace\nI.e., add an import line to `falcon/__init__.py`\n\n", "before_files": [{"content": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nHTTP_METHODS = (\n 'CONNECT',\n 'DELETE',\n 'GET',\n 'HEAD',\n 'OPTIONS',\n 'PATCH',\n 'POST',\n 'PUT',\n 'TRACE',\n)\n\nDEFAULT_MEDIA_TYPE = 'application/json; charset=utf-8'\n\n\n# Hoist classes and functions into the falcon namespace\nfrom falcon.version import __version__ # NOQA\nfrom falcon.api import API, DEFAULT_MEDIA_TYPE # NOQA\nfrom falcon.status_codes import * # NOQA\nfrom falcon.errors import * # NOQA\nfrom falcon.redirects import * # NOQA\nfrom falcon.http_error import HTTPError # NOQA\nfrom falcon.util import * # NOQA\nfrom falcon.hooks import before, after # NOQA\nfrom falcon.request import Request, RequestOptions # NOQA\nfrom falcon.response import Response # NOQA\n", "path": "falcon/__init__.py"}]} | 970 | 137 |
gh_patches_debug_22620 | rasdani/github-patches | git_diff | getnikola__nikola-1582 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Build fails with IPython 3.0
Trying to use IPython notebooks with the current dev version of IPython (3.0.0) fails the build with some warnings etc. because the `nbformat` interface has changed a little:
```
...WARNING: UserWarning: .../ipython-dev/IPython/nbformat/current.py:19: IPython.nbformat.current is deprecated.
- use IPython.nbformat for read/write/validate public API
- use IPython.nbformat.vX directly to composing notebooks of a particular version
...
... WARNING: UserWarning: .../ipython-dev/IPython/nbformat/current.py:75: reads_json is deprecated, use reads
...
AttributeError: cells
```
This is fairly easily fixed and I will send a PR shortly.
</issue>
<code>
[start of nikola/plugins/compile/ipynb/__init__.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2013-2015 Damián Avila and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 """Implementation of compile_html based on nbconvert."""
28
29 from __future__ import unicode_literals, print_function
30 import io
31 import os
32
33 try:
34 from IPython.nbconvert.exporters import HTMLExporter
35 from IPython.nbformat import current as nbformat
36 from IPython.config import Config
37 flag = True
38 except ImportError:
39 flag = None
40
41 from nikola.plugin_categories import PageCompiler
42 from nikola.utils import makedirs, req_missing
43
44
45 class CompileIPynb(PageCompiler):
46 """Compile IPynb into HTML."""
47
48 name = "ipynb"
49 supports_onefile = False
50 demote_headers = True
51
52 def compile_html(self, source, dest, is_two_file=True):
53 if flag is None:
54 req_missing(['ipython>=1.1.0'], 'build this site (compile ipynb)')
55 makedirs(os.path.dirname(dest))
56 HTMLExporter.default_template = 'basic'
57 c = Config(self.site.config['IPYNB_CONFIG'])
58 exportHtml = HTMLExporter(config=c)
59 with io.open(dest, "w+", encoding="utf8") as out_file:
60 with io.open(source, "r", encoding="utf8") as in_file:
61 nb = in_file.read()
62 nb_json = nbformat.reads_json(nb)
63 (body, resources) = exportHtml.from_notebook_node(nb_json)
64 out_file.write(body)
65
66 def create_post(self, path, **kw):
67 content = kw.pop('content', None)
68 onefile = kw.pop('onefile', False)
69 # is_page is not needed to create the file
70 kw.pop('is_page', False)
71
72 makedirs(os.path.dirname(path))
73 if onefile:
74 raise Exception('The one-file format is not supported by this compiler.')
75 with io.open(path, "w+", encoding="utf8") as fd:
76 if not content.startswith("Write your"):
77 fd.write(content)
78 else:
79 fd.write("""{
80 "metadata": {
81 "name": ""
82 },
83 "nbformat": 3,
84 "nbformat_minor": 0,
85 "worksheets": [
86 {
87 "cells": [
88 {
89 "cell_type": "code",
90 "collapsed": false,
91 "input": [],
92 "language": "python",
93 "metadata": {},
94 "outputs": []
95 }
96 ],
97 "metadata": {}
98 }
99 ]
100 }""")
101
[end of nikola/plugins/compile/ipynb/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nikola/plugins/compile/ipynb/__init__.py b/nikola/plugins/compile/ipynb/__init__.py
--- a/nikola/plugins/compile/ipynb/__init__.py
+++ b/nikola/plugins/compile/ipynb/__init__.py
@@ -31,8 +31,15 @@
import os
try:
+ import IPython
from IPython.nbconvert.exporters import HTMLExporter
- from IPython.nbformat import current as nbformat
+ if IPython.version_info[0] >= 3: # API changed with 3.0.0
+ from IPython import nbformat
+ current_nbformat = nbformat.current_nbformat
+ else:
+ import IPython.nbformat.current as nbformat
+ current_nbformat = 'json'
+
from IPython.config import Config
flag = True
except ImportError:
@@ -58,8 +65,7 @@
exportHtml = HTMLExporter(config=c)
with io.open(dest, "w+", encoding="utf8") as out_file:
with io.open(source, "r", encoding="utf8") as in_file:
- nb = in_file.read()
- nb_json = nbformat.reads_json(nb)
+ nb_json = nbformat.read(in_file, current_nbformat)
(body, resources) = exportHtml.from_notebook_node(nb_json)
out_file.write(body)
| {"golden_diff": "diff --git a/nikola/plugins/compile/ipynb/__init__.py b/nikola/plugins/compile/ipynb/__init__.py\n--- a/nikola/plugins/compile/ipynb/__init__.py\n+++ b/nikola/plugins/compile/ipynb/__init__.py\n@@ -31,8 +31,15 @@\n import os\n \n try:\n+ import IPython\n from IPython.nbconvert.exporters import HTMLExporter\n- from IPython.nbformat import current as nbformat\n+ if IPython.version_info[0] >= 3: # API changed with 3.0.0\n+ from IPython import nbformat\n+ current_nbformat = nbformat.current_nbformat\n+ else:\n+ import IPython.nbformat.current as nbformat\n+ current_nbformat = 'json'\n+\n from IPython.config import Config\n flag = True\n except ImportError:\n@@ -58,8 +65,7 @@\n exportHtml = HTMLExporter(config=c)\n with io.open(dest, \"w+\", encoding=\"utf8\") as out_file:\n with io.open(source, \"r\", encoding=\"utf8\") as in_file:\n- nb = in_file.read()\n- nb_json = nbformat.reads_json(nb)\n+ nb_json = nbformat.read(in_file, current_nbformat)\n (body, resources) = exportHtml.from_notebook_node(nb_json)\n out_file.write(body)\n", "issue": "Build fails with IPython 3.0\nTrying to use ipython notebooks with the current dev version of IPython (3.0.0) fails building with some warnings etc. because the `nbformat` interface has changed a little:\n\n```\n...WARNING: UserWarning: .../ipython-dev/IPython/nbformat/current.py:19: IPython.nbformat.current is deprecated.\n\n- use IPython.nbformat for read/write/validate public API\n- use IPython.nbformat.vX directly to composing notebooks of a particular version\n...\n... WARNING: UserWarning: .../ipython-dev/IPython/nbformat/current.py:75: reads_json is deprecated, use reads\n...\nAttributeError: cells\n```\n\nThis is fairly easily fixed and I will send a PR shortly.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2013-2015 Dami\u00e1n Avila and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n\"\"\"Implementation of compile_html based on nbconvert.\"\"\"\n\nfrom __future__ import unicode_literals, print_function\nimport io\nimport os\n\ntry:\n from IPython.nbconvert.exporters import HTMLExporter\n from IPython.nbformat import current as nbformat\n from IPython.config import Config\n flag = True\nexcept ImportError:\n flag = None\n\nfrom nikola.plugin_categories import PageCompiler\nfrom nikola.utils import makedirs, req_missing\n\n\nclass CompileIPynb(PageCompiler):\n \"\"\"Compile IPynb into HTML.\"\"\"\n\n name = \"ipynb\"\n supports_onefile = False\n demote_headers = True\n\n def compile_html(self, source, dest, is_two_file=True):\n if flag is None:\n req_missing(['ipython>=1.1.0'], 'build this site (compile ipynb)')\n makedirs(os.path.dirname(dest))\n HTMLExporter.default_template = 'basic'\n c = Config(self.site.config['IPYNB_CONFIG'])\n exportHtml = HTMLExporter(config=c)\n with io.open(dest, \"w+\", encoding=\"utf8\") as out_file:\n with io.open(source, \"r\", encoding=\"utf8\") as in_file:\n nb = in_file.read()\n nb_json = nbformat.reads_json(nb)\n (body, resources) = exportHtml.from_notebook_node(nb_json)\n out_file.write(body)\n\n def create_post(self, path, **kw):\n content = kw.pop('content', None)\n onefile = kw.pop('onefile', False)\n # is_page is not needed to create the file\n kw.pop('is_page', False)\n\n makedirs(os.path.dirname(path))\n if onefile:\n raise Exception('The one-file format is not supported by this compiler.')\n with io.open(path, \"w+\", encoding=\"utf8\") as fd:\n if not content.startswith(\"Write your\"):\n fd.write(content)\n else:\n fd.write(\"\"\"{\n \"metadata\": {\n \"name\": \"\"\n },\n \"nbformat\": 3,\n \"nbformat_minor\": 0,\n \"worksheets\": [\n {\n \"cells\": [\n {\n \"cell_type\": \"code\",\n \"collapsed\": false,\n \"input\": [],\n \"language\": \"python\",\n \"metadata\": {},\n \"outputs\": []\n }\n ],\n \"metadata\": {}\n }\n ]\n}\"\"\")\n", "path": "nikola/plugins/compile/ipynb/__init__.py"}]} | 1,689 | 317 |
gh_patches_debug_11428 | rasdani/github-patches | git_diff | saleor__saleor-11825 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: Unable to update Warehouse address
### What are you trying to achieve?
I'm trying to update the warehouse address, with the country set to "UK". According to the addressValidationRules query, the required fields are
```
streetAddress1",
"city",
"postalCode"
```
### Steps to reproduce the problem
1. When updating a shipping zone (or creating a new one), select the country UK
2. Fill all fields with the necessary information
3. Try to save changes
### What did you expect to happen?
Being able to update the warehouse address properly.
### Logs
Api responds with error -> Error code REQUIRED on field countryAreaAPI
### Environment
Saleor version: 3.10
</issue>
<code>
[start of saleor/account/forms.py]
1 from phonenumbers.phonenumberutil import country_code_for_region
2
3 from .i18n import AddressMetaForm, get_address_form_class
4
5
6 def get_address_form(
7 data, country_code, initial=None, instance=None, enable_normalization=True, **kwargs
8 ):
9 country_form = AddressMetaForm(data, initial=initial)
10 if country_form.is_valid():
11 country_code = country_form.cleaned_data["country"]
12
13 if initial is None and country_code:
14 initial = {}
15 if country_code:
16 initial["phone"] = "+{}".format(country_code_for_region(country_code))
17
18 address_form_class = get_address_form_class(country_code)
19
20 if instance is not None:
21 address_form_class = get_address_form_class(instance.country.code)
22 address_form = address_form_class(
23 data, instance=instance, enable_normalization=enable_normalization, **kwargs
24 )
25 else:
26 initial_address = initial
27 address_form = address_form_class(
28 data or None,
29 initial=initial_address,
30 enable_normalization=enable_normalization,
31 **kwargs,
32 )
33
34 if hasattr(address_form.fields["country_area"], "choices"):
35 choices = address_form.fields["country_area"].choices
36 choices = [(choice[1], choice[1]) for choice in choices]
37 address_form.fields["country_area"].choices = choices
38 return address_form
39
[end of saleor/account/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/saleor/account/forms.py b/saleor/account/forms.py
--- a/saleor/account/forms.py
+++ b/saleor/account/forms.py
@@ -14,11 +14,9 @@
initial = {}
if country_code:
initial["phone"] = "+{}".format(country_code_for_region(country_code))
-
address_form_class = get_address_form_class(country_code)
if instance is not None:
- address_form_class = get_address_form_class(instance.country.code)
address_form = address_form_class(
data, instance=instance, enable_normalization=enable_normalization, **kwargs
)
| {"golden_diff": "diff --git a/saleor/account/forms.py b/saleor/account/forms.py\n--- a/saleor/account/forms.py\n+++ b/saleor/account/forms.py\n@@ -14,11 +14,9 @@\n initial = {}\n if country_code:\n initial[\"phone\"] = \"+{}\".format(country_code_for_region(country_code))\n-\n address_form_class = get_address_form_class(country_code)\n \n if instance is not None:\n- address_form_class = get_address_form_class(instance.country.code)\n address_form = address_form_class(\n data, instance=instance, enable_normalization=enable_normalization, **kwargs\n )\n", "issue": "Bug: Unable to update Warehouse address\n### What are you trying to achieve?\n\nI'm trying to update the warehouse update, with the country set to \"UK\", according to addressValidationRules query, the required fields are \r\n```\r\nstreetAddress1\",\r\n\"city\",\r\n\"postalCode\"\r\n```\n\n### Steps to reproduce the problem\n\n1. In shipping zone update/creating a new on select country UK\r\n2. Fill all fields with the necessary information\r\n3. Try to save changes\n\n### What did you expect to happen?\n\nBeing able to update the warehouse address properly.\n\n### Logs\n\nApi responds with error -> Error code REQUIRED on field countryAreaAPI\n\n### Environment\n\nSaleor version: 3.10\r\n\n", "before_files": [{"content": "from phonenumbers.phonenumberutil import country_code_for_region\n\nfrom .i18n import AddressMetaForm, get_address_form_class\n\n\ndef get_address_form(\n data, country_code, initial=None, instance=None, enable_normalization=True, **kwargs\n):\n country_form = AddressMetaForm(data, initial=initial)\n if country_form.is_valid():\n country_code = country_form.cleaned_data[\"country\"]\n\n if initial is None and country_code:\n initial = {}\n if country_code:\n initial[\"phone\"] = \"+{}\".format(country_code_for_region(country_code))\n\n address_form_class = get_address_form_class(country_code)\n\n if instance is not None:\n address_form_class = get_address_form_class(instance.country.code)\n address_form = address_form_class(\n data, instance=instance, enable_normalization=enable_normalization, **kwargs\n )\n else:\n initial_address = initial\n address_form = address_form_class(\n data or None,\n initial=initial_address,\n enable_normalization=enable_normalization,\n **kwargs,\n )\n\n if hasattr(address_form.fields[\"country_area\"], \"choices\"):\n choices = address_form.fields[\"country_area\"].choices\n choices = [(choice[1], choice[1]) for choice in choices]\n address_form.fields[\"country_area\"].choices = choices\n return address_form\n", "path": "saleor/account/forms.py"}]} | 1,041 | 138 |
gh_patches_debug_19322 | rasdani/github-patches | git_diff | psf__black-3282 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support formatting Jupyter Notebooks in GitHub Actions
**Is your feature request related to a problem? Please describe.**
I'm trying to set up a GitHub Action that runs Black on a project that includes *.py and *.ipynb files, but the default action does not include the Jupyter extra. I followed the integration described in [this piece of documentation](https://black.readthedocs.io/en/stable/integrations/github_actions.html), but the option to include the Jupyter extra (`black[jupyter]`) is not available.
**Describe the solution you'd like**
If the action included an argument to include the Jupyter extra, the GitHub Action would work as expected (as it does when using `pip install black[jupyter]` locally).
**Describe alternatives you've considered**
I considered a custom GitHub Action and installing Black manually, but found out that modifying part of the action available in this repository is cleaner and would bring support to users with a similar need without affecting those that already use the GitHub Action.
**Additional context**
I was trying different things out and arrived at a solution that works as expected and can be included in this project without affecting users who already use the GitHub Action. **Add a new option to the GitHub Action to enable the Jupyter extra dependency**. I think that a boolean value might do the trick, and using `false` as the default maintains the current behavior.
``` diff
diff --git a/action.yml b/action.yml
index cfa6ef9..ed6c32e 100644
--- a/action.yml
+++ b/action.yml
@@ -8,6 +8,10 @@ inputs:
'--check --diff'"
required: false
default: "--check --diff"
+ jupyter:
+ description: "Include the required extra dependencies to format Jupyter Notebooks."
+ required: false
+ default: false
src:
description: "Source to run Black. Default: '.'"
required: false
@@ -38,6 +42,7 @@ runs:
# TODO: Remove once https://github.com/actions/runner/issues/665 is fixed.
INPUT_OPTIONS: ${{ inputs.options }}
INPUT_SRC: ${{ inputs.src }}
+ INPUT_JUPYTER: ${{ inputs.jupyter }}
INPUT_BLACK_ARGS: ${{ inputs.black_args }}
INPUT_VERSION: ${{ inputs.version }}
pythonioencoding: utf-8
```
In this file, if the flag is enabled (if the `INPUT_JUPYTER` envar has a true value) then the `jupyter` extra is included in the installation step. Colorama is already included by default.
```diff
diff --git a/action/main.py b/action/main.py
index cd920f5..fbf6e73 100644
--- a/action/main.py
+++ b/action/main.py
@@ -10,11 +10,16 @@ ENV_BIN = ENV_PATH / ("Scripts" if sys.platform == "win32" else "bin")
OPTIONS = os.getenv("INPUT_OPTIONS", default="")
SRC = os.getenv("INPUT_SRC", default="")
BLACK_ARGS = os.getenv("INPUT_BLACK_ARGS", default="")
+JUPYTER = os.getenv("INPUT_JUPYTER")
VERSION = os.getenv("INPUT_VERSION", default="")
run([sys.executable, "-m", "venv", str(ENV_PATH)], check=True)
-req = "black[colorama]"
+
+if JUPYTER:
+ req = "black[colorama,jupyter]"
+else:
+ req = "black[colorama]"
if VERSION:
req += f"=={VERSION}"
pip_proc = run(
```
The only difference would be visible in case I want to use the Jupyter extra, which can be enabled by passing the value explicitly:
```diff
jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: psf/black@stable
+ jupyter: true
options: "--check --diff --verbose"
```
I forked this project to test the GitHub Action and it does work as expected (https://github.com/aaossa/black/commit/7af4287355003cd44e0febd8fe88e92f205db324). If you agree with this feature request, I can submit a PR with these changes and update the relevant documentation 👌
</issue>
<code>
[start of action/main.py]
1 import os
2 import shlex
3 import sys
4 from pathlib import Path
5 from subprocess import PIPE, STDOUT, run
6
7 ACTION_PATH = Path(os.environ["GITHUB_ACTION_PATH"])
8 ENV_PATH = ACTION_PATH / ".black-env"
9 ENV_BIN = ENV_PATH / ("Scripts" if sys.platform == "win32" else "bin")
10 OPTIONS = os.getenv("INPUT_OPTIONS", default="")
11 SRC = os.getenv("INPUT_SRC", default="")
12 BLACK_ARGS = os.getenv("INPUT_BLACK_ARGS", default="")
13 VERSION = os.getenv("INPUT_VERSION", default="")
14
15 run([sys.executable, "-m", "venv", str(ENV_PATH)], check=True)
16
17 version_specifier = VERSION
18 if VERSION and VERSION[0] in "0123456789":
19 version_specifier = f"=={VERSION}"
20 req = f"black[colorama]{version_specifier}"
21 pip_proc = run(
22 [str(ENV_BIN / "python"), "-m", "pip", "install", req],
23 stdout=PIPE,
24 stderr=STDOUT,
25 encoding="utf-8",
26 )
27 if pip_proc.returncode:
28 print(pip_proc.stdout)
29 print("::error::Failed to install Black.", flush=True)
30 sys.exit(pip_proc.returncode)
31
32
33 base_cmd = [str(ENV_BIN / "black")]
34 if BLACK_ARGS:
35 # TODO: remove after a while since this is deprecated in favour of SRC + OPTIONS.
36 proc = run([*base_cmd, *shlex.split(BLACK_ARGS)])
37 else:
38 proc = run([*base_cmd, *shlex.split(OPTIONS), *shlex.split(SRC)])
39
40 sys.exit(proc.returncode)
41
[end of action/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/action/main.py b/action/main.py
--- a/action/main.py
+++ b/action/main.py
@@ -9,6 +9,7 @@
ENV_BIN = ENV_PATH / ("Scripts" if sys.platform == "win32" else "bin")
OPTIONS = os.getenv("INPUT_OPTIONS", default="")
SRC = os.getenv("INPUT_SRC", default="")
+JUPYTER = os.getenv("INPUT_JUPYTER") == "true"
BLACK_ARGS = os.getenv("INPUT_BLACK_ARGS", default="")
VERSION = os.getenv("INPUT_VERSION", default="")
@@ -17,7 +18,11 @@
version_specifier = VERSION
if VERSION and VERSION[0] in "0123456789":
version_specifier = f"=={VERSION}"
-req = f"black[colorama]{version_specifier}"
+if JUPYTER:
+ extra_deps = "[colorama,jupyter]"
+else:
+ extra_deps = "[colorama]"
+req = f"black{extra_deps}{version_specifier}"
pip_proc = run(
[str(ENV_BIN / "python"), "-m", "pip", "install", req],
stdout=PIPE,
| {"golden_diff": "diff --git a/action/main.py b/action/main.py\n--- a/action/main.py\n+++ b/action/main.py\n@@ -9,6 +9,7 @@\n ENV_BIN = ENV_PATH / (\"Scripts\" if sys.platform == \"win32\" else \"bin\")\n OPTIONS = os.getenv(\"INPUT_OPTIONS\", default=\"\")\n SRC = os.getenv(\"INPUT_SRC\", default=\"\")\n+JUPYTER = os.getenv(\"INPUT_JUPYTER\") == \"true\"\n BLACK_ARGS = os.getenv(\"INPUT_BLACK_ARGS\", default=\"\")\n VERSION = os.getenv(\"INPUT_VERSION\", default=\"\")\n \n@@ -17,7 +18,11 @@\n version_specifier = VERSION\n if VERSION and VERSION[0] in \"0123456789\":\n version_specifier = f\"=={VERSION}\"\n-req = f\"black[colorama]{version_specifier}\"\n+if JUPYTER:\n+ extra_deps = \"[colorama,jupyter]\"\n+else:\n+ extra_deps = \"[colorama]\"\n+req = f\"black{extra_deps}{version_specifier}\"\n pip_proc = run(\n [str(ENV_BIN / \"python\"), \"-m\", \"pip\", \"install\", req],\n stdout=PIPE,\n", "issue": "Support formatting Jupyter Notebooks in GitHub Actions\n**Is your feature request related to a problem? Please describe.**\r\n\r\nI'm trying to setup a GitHub Action that runs Black on a project that includes *.py and *.ipynb files, but the default action does not include the Jupyter extra. I followed the integration described in [this piece of documentation](https://black.readthedocs.io/en/stable/integrations/github_actions.html) but the option to include the Jupyter extra (`black[jupyter]`) is not available.\r\n\r\n**Describe the solution you'd like**\r\n\r\nIf the action included an argument to include the Jupyter extra, the GitHub Action would work in as expected (when using `pip install black[jupyter]` locally).\r\n\r\n**Describe alternatives you've considered**\r\n\r\nI considered a custom GitHub Action and installing Black manually, but found out that modifying part of the action available in this repository is cleaner and would bring support to users with a similar need without affecting those that already use the GitHub Action.\r\n\r\n**Additional context**\r\n\r\nI was trying different things out and arrived to a solution that works as expected and can be included in this project without affecting users that already use the GitHub Action. **Add a new option to the GitHub Action to enable the Jupyter extra dependency**. I think that a boolean value might do the trick and using `false` as default maintains the current behavior.\r\n\r\n``` diff\r\ndiff --git a/action.yml b/action.yml\r\nindex cfa6ef9..ed6c32e 100644\r\n--- a/action.yml\r\n+++ b/action.yml\r\n@@ -8,6 +8,10 @@ inputs:\r\n '--check --diff'\"\r\n required: false\r\n default: \"--check --diff\"\r\n+ jupyter:\r\n+ description: \"Include the required extra dependencies to format Jupyter Notebooks.\"\r\n+ required: false\r\n+ default: false\r\n src:\r\n description: \"Source to run Black. Default: '.'\"\r\n required: false\r\n@@ -38,6 +42,7 @@ runs:\r\n # TODO: Remove once https://github.com/actions/runner/issues/665 is fixed.\r\n INPUT_OPTIONS: ${{ inputs.options }}\r\n INPUT_SRC: ${{ inputs.src }}\r\n+ INPUT_JUPYTER: ${{ inputs.jupyter }}\r\n INPUT_BLACK_ARGS: ${{ inputs.black_args }}\r\n INPUT_VERSION: ${{ inputs.version }}\r\n pythonioencoding: utf-8\r\n```\r\n\r\nIn this file, if the flag is enabled (if the `INPUT_JUPYTER` envar has a true value) then the `jupyter` extra is included in the installation step. Colorama is already included by default. 
\r\n\r\n```diff\r\ndiff --git a/action/main.py b/action/main.py\r\nindex cd920f5..fbf6e73 100644\r\n--- a/action/main.py\r\n+++ b/action/main.py\r\n@@ -10,11 +10,16 @@ ENV_BIN = ENV_PATH / (\"Scripts\" if sys.platform == \"win32\" else \"bin\")\r\n OPTIONS = os.getenv(\"INPUT_OPTIONS\", default=\"\")\r\n SRC = os.getenv(\"INPUT_SRC\", default=\"\")\r\n BLACK_ARGS = os.getenv(\"INPUT_BLACK_ARGS\", default=\"\")\r\n+JUPYTER = os.getenv(\"INPUT_JUPYTER\")\r\n VERSION = os.getenv(\"INPUT_VERSION\", default=\"\")\r\n\r\n run([sys.executable, \"-m\", \"venv\", str(ENV_PATH)], check=True)\r\n\r\n-req = \"black[colorama]\"\r\n+\r\n+if JUPYTER:\r\n+ req = \"black[colorama,jupyter]\"\r\n+else:\r\n+ req = \"black[colorama]\"\r\n if VERSION:\r\n req += f\"=={VERSION}\"\r\n pip_proc = run(\r\n```\r\n\r\nThe only difference would be visible in case I want to use the Jupyter extra, which can be enabled by passing the value explicitly:\r\n\r\n```diff\r\njobs:\r\n lint:\r\n runs-on: ubuntu-latest\r\n steps:\r\n - uses: actions/checkout@v2\r\n - uses: psf/black@stable\r\n+ jupyter: true\r\n options: \"--check --diff --verbose\"\r\n\r\n```\r\n\r\nI forked this project to test the GitHub Action and it does work as expected (https://github.com/aaossa/black/commit/7af4287355003cd44e0febd8fe88e92f205db324). If you agree with this feature request, I can submit a PR with these changes and update the relevant documentation \ud83d\udc4c \r\n\r\n\n", "before_files": [{"content": "import os\nimport shlex\nimport sys\nfrom pathlib import Path\nfrom subprocess import PIPE, STDOUT, run\n\nACTION_PATH = Path(os.environ[\"GITHUB_ACTION_PATH\"])\nENV_PATH = ACTION_PATH / \".black-env\"\nENV_BIN = ENV_PATH / (\"Scripts\" if sys.platform == \"win32\" else \"bin\")\nOPTIONS = os.getenv(\"INPUT_OPTIONS\", default=\"\")\nSRC = os.getenv(\"INPUT_SRC\", default=\"\")\nBLACK_ARGS = os.getenv(\"INPUT_BLACK_ARGS\", default=\"\")\nVERSION = os.getenv(\"INPUT_VERSION\", default=\"\")\n\nrun([sys.executable, \"-m\", \"venv\", str(ENV_PATH)], check=True)\n\nversion_specifier = VERSION\nif VERSION and VERSION[0] in \"0123456789\":\n version_specifier = f\"=={VERSION}\"\nreq = f\"black[colorama]{version_specifier}\"\npip_proc = run(\n [str(ENV_BIN / \"python\"), \"-m\", \"pip\", \"install\", req],\n stdout=PIPE,\n stderr=STDOUT,\n encoding=\"utf-8\",\n)\nif pip_proc.returncode:\n print(pip_proc.stdout)\n print(\"::error::Failed to install Black.\", flush=True)\n sys.exit(pip_proc.returncode)\n\n\nbase_cmd = [str(ENV_BIN / \"black\")]\nif BLACK_ARGS:\n # TODO: remove after a while since this is deprecated in favour of SRC + OPTIONS.\n proc = run([*base_cmd, *shlex.split(BLACK_ARGS)])\nelse:\n proc = run([*base_cmd, *shlex.split(OPTIONS), *shlex.split(SRC)])\n\nsys.exit(proc.returncode)\n", "path": "action/main.py"}]} | 1,925 | 257 |
gh_patches_debug_57398 | rasdani/github-patches | git_diff | translate__pootle-5797 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pootle_fs not expiring cache_keys
When a project uses Pootle FS, stats are not updated. We have to manually call `pootle flush_cache --lru --django-cache` to update them.
</issue>
<code>
[start of pootle/apps/pootle_revision/receivers.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 from django.db.models.signals import post_save, pre_delete
10 from django.dispatch import receiver
11
12 from pootle.core.delegate import revision_updater
13 from pootle_app.models import Directory
14 from pootle_data.models import StoreData
15 from pootle_store.models import Store
16
17
18 @receiver(post_save, sender=StoreData)
19 def handle_storedata_save(**kwargs):
20 revision_updater.get(Store)(
21 context=kwargs["instance"].store).update(keys=["stats", "checks"])
22
23
24 @receiver(post_save, sender=Directory)
25 def handle_directory_save(**kwargs):
26 if kwargs.get("created"):
27 return
28 revision_updater.get(Directory)(
29 context=kwargs["instance"]).update(keys=["stats", "checks"])
30
31
32 @receiver(pre_delete, sender=Directory)
33 def handle_directory_delete(**kwargs):
34 revision_updater.get(Directory)(
35 context=kwargs["instance"].parent).update(keys=["stats", "checks"])
36
[end of pootle/apps/pootle_revision/receivers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pootle/apps/pootle_revision/receivers.py b/pootle/apps/pootle_revision/receivers.py
--- a/pootle/apps/pootle_revision/receivers.py
+++ b/pootle/apps/pootle_revision/receivers.py
@@ -23,10 +23,12 @@
@receiver(post_save, sender=Directory)
def handle_directory_save(**kwargs):
- if kwargs.get("created"):
- return
+ context = (
+ kwargs["instance"].parent
+ if kwargs.get("created")
+ else kwargs["instance"])
revision_updater.get(Directory)(
- context=kwargs["instance"]).update(keys=["stats", "checks"])
+ context=context).update(keys=["stats", "checks"])
@receiver(pre_delete, sender=Directory)
| {"golden_diff": "diff --git a/pootle/apps/pootle_revision/receivers.py b/pootle/apps/pootle_revision/receivers.py\n--- a/pootle/apps/pootle_revision/receivers.py\n+++ b/pootle/apps/pootle_revision/receivers.py\n@@ -23,10 +23,12 @@\n \n @receiver(post_save, sender=Directory)\n def handle_directory_save(**kwargs):\n- if kwargs.get(\"created\"):\n- return\n+ context = (\n+ kwargs[\"instance\"].parent\n+ if kwargs.get(\"created\")\n+ else kwargs[\"instance\"])\n revision_updater.get(Directory)(\n- context=kwargs[\"instance\"]).update(keys=[\"stats\", \"checks\"])\n+ context=context).update(keys=[\"stats\", \"checks\"])\n \n \n @receiver(pre_delete, sender=Directory)\n", "issue": "pootle_fs not expiring cache_keys\nWhen a project uses pootle FS, stats are not updated. We have to manually call `pootle flush_cache --lru --django-cache` to update it manually.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.db.models.signals import post_save, pre_delete\nfrom django.dispatch import receiver\n\nfrom pootle.core.delegate import revision_updater\nfrom pootle_app.models import Directory\nfrom pootle_data.models import StoreData\nfrom pootle_store.models import Store\n\n\n@receiver(post_save, sender=StoreData)\ndef handle_storedata_save(**kwargs):\n revision_updater.get(Store)(\n context=kwargs[\"instance\"].store).update(keys=[\"stats\", \"checks\"])\n\n\n@receiver(post_save, sender=Directory)\ndef handle_directory_save(**kwargs):\n if kwargs.get(\"created\"):\n return\n revision_updater.get(Directory)(\n context=kwargs[\"instance\"]).update(keys=[\"stats\", \"checks\"])\n\n\n@receiver(pre_delete, sender=Directory)\ndef handle_directory_delete(**kwargs):\n revision_updater.get(Directory)(\n context=kwargs[\"instance\"].parent).update(keys=[\"stats\", \"checks\"])\n", "path": "pootle/apps/pootle_revision/receivers.py"}]} | 927 | 179 |
gh_patches_debug_27040 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-1005 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Imported ratings added as reviews
During a Goodreads import, star ratings seem to be added as Reviews rather than ReviewRatings.
</issue>
<code>
[start of bookwyrm/importers/importer.py]
1 """ handle reading a csv from an external service, defaults are from GoodReads """
2 import csv
3 import logging
4
5 from bookwyrm import models
6 from bookwyrm.models import ImportJob, ImportItem
7 from bookwyrm.tasks import app
8
9 logger = logging.getLogger(__name__)
10
11
12 class Importer:
13 """Generic class for csv data import from an outside service"""
14
15 service = "Unknown"
16 delimiter = ","
17 encoding = "UTF-8"
18 mandatory_fields = ["Title", "Author"]
19
20 def create_job(self, user, csv_file, include_reviews, privacy):
21 """check over a csv and creates a database entry for the job"""
22 job = ImportJob.objects.create(
23 user=user, include_reviews=include_reviews, privacy=privacy
24 )
25 for index, entry in enumerate(
26 list(csv.DictReader(csv_file, delimiter=self.delimiter))
27 ):
28 if not all(x in entry for x in self.mandatory_fields):
29 raise ValueError("Author and title must be in data.")
30 entry = self.parse_fields(entry)
31 self.save_item(job, index, entry)
32 return job
33
34 def save_item(self, job, index, data): # pylint: disable=no-self-use
35 """creates and saves an import item"""
36 ImportItem(job=job, index=index, data=data).save()
37
38 def parse_fields(self, entry):
39 """updates csv data with additional info"""
40 entry.update({"import_source": self.service})
41 return entry
42
43 def create_retry_job(self, user, original_job, items):
44 """retry items that didn't import"""
45 job = ImportJob.objects.create(
46 user=user,
47 include_reviews=original_job.include_reviews,
48 privacy=original_job.privacy,
49 retry=True,
50 )
51 for item in items:
52 self.save_item(job, item.index, item.data)
53 return job
54
55 def start_import(self, job):
56 """initalizes a csv import job"""
57 result = import_data.delay(self.service, job.id)
58 job.task_id = result.id
59 job.save()
60
61
62 @app.task
63 def import_data(source, job_id):
64 """does the actual lookup work in a celery task"""
65 job = ImportJob.objects.get(id=job_id)
66 try:
67 for item in job.items.all():
68 try:
69 item.resolve()
70 except Exception as e: # pylint: disable=broad-except
71 logger.exception(e)
72 item.fail_reason = "Error loading book"
73 item.save()
74 continue
75
76 if item.book:
77 item.save()
78
79 # shelves book and handles reviews
80 handle_imported_book(
81 source, job.user, item, job.include_reviews, job.privacy
82 )
83 else:
84 item.fail_reason = "Could not find a match for book"
85 item.save()
86 finally:
87 job.complete = True
88 job.save()
89
90
91 def handle_imported_book(source, user, item, include_reviews, privacy):
92 """process a csv and then post about it"""
93 if isinstance(item.book, models.Work):
94 item.book = item.book.default_edition
95 if not item.book:
96 return
97
98 existing_shelf = models.ShelfBook.objects.filter(book=item.book, user=user).exists()
99
100 # shelve the book if it hasn't been shelved already
101 if item.shelf and not existing_shelf:
102 desired_shelf = models.Shelf.objects.get(identifier=item.shelf, user=user)
103 models.ShelfBook.objects.create(book=item.book, shelf=desired_shelf, user=user)
104
105 for read in item.reads:
106 # check for an existing readthrough with the same dates
107 if models.ReadThrough.objects.filter(
108 user=user,
109 book=item.book,
110 start_date=read.start_date,
111 finish_date=read.finish_date,
112 ).exists():
113 continue
114 read.book = item.book
115 read.user = user
116 read.save()
117
118 if include_reviews and (item.rating or item.review):
119 review_title = (
120 "Review of {!r} on {!r}".format(
121 item.book.title,
122 source,
123 )
124 if item.review
125 else ""
126 )
127
128 # we don't know the publication date of the review,
129 # but "now" is a bad guess
130 published_date_guess = item.date_read or item.date_added
131 models.Review.objects.create(
132 user=user,
133 book=item.book,
134 name=review_title,
135 content=item.review,
136 rating=item.rating,
137 published_date=published_date_guess,
138 privacy=privacy,
139 )
140
[end of bookwyrm/importers/importer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bookwyrm/importers/importer.py b/bookwyrm/importers/importer.py
--- a/bookwyrm/importers/importer.py
+++ b/bookwyrm/importers/importer.py
@@ -116,24 +116,33 @@
read.save()
if include_reviews and (item.rating or item.review):
- review_title = (
- "Review of {!r} on {!r}".format(
- item.book.title,
- source,
- )
- if item.review
- else ""
- )
-
# we don't know the publication date of the review,
# but "now" is a bad guess
published_date_guess = item.date_read or item.date_added
- models.Review.objects.create(
- user=user,
- book=item.book,
- name=review_title,
- content=item.review,
- rating=item.rating,
- published_date=published_date_guess,
- privacy=privacy,
- )
+ if item.review:
+ review_title = (
+ "Review of {!r} on {!r}".format(
+ item.book.title,
+ source,
+ )
+ if item.review
+ else ""
+ )
+ models.Review.objects.create(
+ user=user,
+ book=item.book,
+ name=review_title,
+ content=item.review,
+ rating=item.rating,
+ published_date=published_date_guess,
+ privacy=privacy,
+ )
+ else:
+ # just a rating
+ models.ReviewRating.objects.create(
+ user=user,
+ book=item.book,
+ rating=item.rating,
+ published_date=published_date_guess,
+ privacy=privacy,
+ )
| {"golden_diff": "diff --git a/bookwyrm/importers/importer.py b/bookwyrm/importers/importer.py\n--- a/bookwyrm/importers/importer.py\n+++ b/bookwyrm/importers/importer.py\n@@ -116,24 +116,33 @@\n read.save()\n \n if include_reviews and (item.rating or item.review):\n- review_title = (\n- \"Review of {!r} on {!r}\".format(\n- item.book.title,\n- source,\n- )\n- if item.review\n- else \"\"\n- )\n-\n # we don't know the publication date of the review,\n # but \"now\" is a bad guess\n published_date_guess = item.date_read or item.date_added\n- models.Review.objects.create(\n- user=user,\n- book=item.book,\n- name=review_title,\n- content=item.review,\n- rating=item.rating,\n- published_date=published_date_guess,\n- privacy=privacy,\n- )\n+ if item.review:\n+ review_title = (\n+ \"Review of {!r} on {!r}\".format(\n+ item.book.title,\n+ source,\n+ )\n+ if item.review\n+ else \"\"\n+ )\n+ models.Review.objects.create(\n+ user=user,\n+ book=item.book,\n+ name=review_title,\n+ content=item.review,\n+ rating=item.rating,\n+ published_date=published_date_guess,\n+ privacy=privacy,\n+ )\n+ else:\n+ # just a rating\n+ models.ReviewRating.objects.create(\n+ user=user,\n+ book=item.book,\n+ rating=item.rating,\n+ published_date=published_date_guess,\n+ privacy=privacy,\n+ )\n", "issue": "Imported ratings added as reviews\nDuring a goodreads import, star ratings seem to be added as Reviews, rather than ReviewRatings\n", "before_files": [{"content": "\"\"\" handle reading a csv from an external service, defaults are from GoodReads \"\"\"\nimport csv\nimport logging\n\nfrom bookwyrm import models\nfrom bookwyrm.models import ImportJob, ImportItem\nfrom bookwyrm.tasks import app\n\nlogger = logging.getLogger(__name__)\n\n\nclass Importer:\n \"\"\"Generic class for csv data import from an outside service\"\"\"\n\n service = \"Unknown\"\n delimiter = \",\"\n encoding = \"UTF-8\"\n mandatory_fields = [\"Title\", \"Author\"]\n\n def create_job(self, user, csv_file, include_reviews, privacy):\n \"\"\"check over a csv and creates a database entry for the job\"\"\"\n job = ImportJob.objects.create(\n user=user, include_reviews=include_reviews, privacy=privacy\n )\n for index, entry in enumerate(\n list(csv.DictReader(csv_file, delimiter=self.delimiter))\n ):\n if not all(x in entry for x in self.mandatory_fields):\n raise ValueError(\"Author and title must be in data.\")\n entry = self.parse_fields(entry)\n self.save_item(job, index, entry)\n return job\n\n def save_item(self, job, index, data): # pylint: disable=no-self-use\n \"\"\"creates and saves an import item\"\"\"\n ImportItem(job=job, index=index, data=data).save()\n\n def parse_fields(self, entry):\n \"\"\"updates csv data with additional info\"\"\"\n entry.update({\"import_source\": self.service})\n return entry\n\n def create_retry_job(self, user, original_job, items):\n \"\"\"retry items that didn't import\"\"\"\n job = ImportJob.objects.create(\n user=user,\n include_reviews=original_job.include_reviews,\n privacy=original_job.privacy,\n retry=True,\n )\n for item in items:\n self.save_item(job, item.index, item.data)\n return job\n\n def start_import(self, job):\n \"\"\"initalizes a csv import job\"\"\"\n result = import_data.delay(self.service, job.id)\n job.task_id = result.id\n job.save()\n\n\[email protected]\ndef import_data(source, job_id):\n \"\"\"does the actual lookup work in a celery task\"\"\"\n job = ImportJob.objects.get(id=job_id)\n try:\n for item in job.items.all():\n try:\n item.resolve()\n except Exception as e: # pylint: 
disable=broad-except\n logger.exception(e)\n item.fail_reason = \"Error loading book\"\n item.save()\n continue\n\n if item.book:\n item.save()\n\n # shelves book and handles reviews\n handle_imported_book(\n source, job.user, item, job.include_reviews, job.privacy\n )\n else:\n item.fail_reason = \"Could not find a match for book\"\n item.save()\n finally:\n job.complete = True\n job.save()\n\n\ndef handle_imported_book(source, user, item, include_reviews, privacy):\n \"\"\"process a csv and then post about it\"\"\"\n if isinstance(item.book, models.Work):\n item.book = item.book.default_edition\n if not item.book:\n return\n\n existing_shelf = models.ShelfBook.objects.filter(book=item.book, user=user).exists()\n\n # shelve the book if it hasn't been shelved already\n if item.shelf and not existing_shelf:\n desired_shelf = models.Shelf.objects.get(identifier=item.shelf, user=user)\n models.ShelfBook.objects.create(book=item.book, shelf=desired_shelf, user=user)\n\n for read in item.reads:\n # check for an existing readthrough with the same dates\n if models.ReadThrough.objects.filter(\n user=user,\n book=item.book,\n start_date=read.start_date,\n finish_date=read.finish_date,\n ).exists():\n continue\n read.book = item.book\n read.user = user\n read.save()\n\n if include_reviews and (item.rating or item.review):\n review_title = (\n \"Review of {!r} on {!r}\".format(\n item.book.title,\n source,\n )\n if item.review\n else \"\"\n )\n\n # we don't know the publication date of the review,\n # but \"now\" is a bad guess\n published_date_guess = item.date_read or item.date_added\n models.Review.objects.create(\n user=user,\n book=item.book,\n name=review_title,\n content=item.review,\n rating=item.rating,\n published_date=published_date_guess,\n privacy=privacy,\n )\n", "path": "bookwyrm/importers/importer.py"}]} | 1,846 | 381 |
gh_patches_debug_22233 | rasdani/github-patches | git_diff | statsmodels__statsmodels-4999 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[MAINT/CLN] remove function explicitly marked as duplicate
In the function docstring:
`duplicate: Skipper added sm.tools.drop_missing`
**Update:** The relevant function is not used outside this module; nor is the other function in this module.
</issue>
<code>
[start of statsmodels/tools/wrappers.py]
1 # -*- coding: utf-8 -*-
2 """Convenience Wrappers
3
4 Created on Sat Oct 30 14:56:35 2010
5
6 Author: josef-pktd
7 License: BSD
8 """
9
10 import numpy as np
11 import statsmodels.api as sm
12 from statsmodels import GLS, WLS, OLS
13
14 def remove_nanrows(y, x):
15 '''remove common rows in [y,x] that contain at least one nan
16
17 TODO: this should be made more flexible,
18 arbitrary number of arrays and 1d or 2d arrays
19
20 duplicate: Skipper added sm.tools.drop_missing
21
22 '''
23 mask = ~np.isnan(y)
24 mask *= ~(np.isnan(x).any(-1)) #* or &
25 y = y[mask]
26 x = x[mask]
27 return y, x
28
29
30 def linmod(y, x, weights=None, sigma=None, add_const=True, filter_missing=True,
31 **kwds):
32 '''get linear model with extra options for entry
33
34 dispatches to regular model class and does not wrap the output
35
36 If several options are exclusive, for example sigma and weights, then the
37 chosen class depends on the implementation sequence.
38 '''
39
40 if filter_missing:
41 y, x = remove_nanrows(y, x)
42 #do the same for masked arrays
43
44 if add_const:
45 x = sm.add_constant(x, prepend=True)
46
47 if not sigma is None:
48 return GLS(y, x, sigma=sigma, **kwds)
49 elif not weights is None:
50 return WLS(y, x, weights=weights, **kwds)
51 else:
52 return OLS(y, x, **kwds)
53
[end of statsmodels/tools/wrappers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/statsmodels/tools/wrappers.py b/statsmodels/tools/wrappers.py
deleted file mode 100644
--- a/statsmodels/tools/wrappers.py
+++ /dev/null
@@ -1,52 +0,0 @@
-# -*- coding: utf-8 -*-
-"""Convenience Wrappers
-
-Created on Sat Oct 30 14:56:35 2010
-
-Author: josef-pktd
-License: BSD
-"""
-
-import numpy as np
-import statsmodels.api as sm
-from statsmodels import GLS, WLS, OLS
-
-def remove_nanrows(y, x):
- '''remove common rows in [y,x] that contain at least one nan
-
- TODO: this should be made more flexible,
- arbitrary number of arrays and 1d or 2d arrays
-
- duplicate: Skipper added sm.tools.drop_missing
-
- '''
- mask = ~np.isnan(y)
- mask *= ~(np.isnan(x).any(-1)) #* or &
- y = y[mask]
- x = x[mask]
- return y, x
-
-
-def linmod(y, x, weights=None, sigma=None, add_const=True, filter_missing=True,
- **kwds):
- '''get linear model with extra options for entry
-
- dispatches to regular model class and does not wrap the output
-
- If several options are exclusive, for example sigma and weights, then the
- chosen class depends on the implementation sequence.
- '''
-
- if filter_missing:
- y, x = remove_nanrows(y, x)
- #do the same for masked arrays
-
- if add_const:
- x = sm.add_constant(x, prepend=True)
-
- if not sigma is None:
- return GLS(y, x, sigma=sigma, **kwds)
- elif not weights is None:
- return WLS(y, x, weights=weights, **kwds)
- else:
- return OLS(y, x, **kwds)
| {"golden_diff": "diff --git a/statsmodels/tools/wrappers.py b/statsmodels/tools/wrappers.py\ndeleted file mode 100644\n--- a/statsmodels/tools/wrappers.py\n+++ /dev/null\n@@ -1,52 +0,0 @@\n-# -*- coding: utf-8 -*-\n-\"\"\"Convenience Wrappers\n-\n-Created on Sat Oct 30 14:56:35 2010\n-\n-Author: josef-pktd\n-License: BSD\n-\"\"\"\n-\n-import numpy as np\n-import statsmodels.api as sm\n-from statsmodels import GLS, WLS, OLS\n-\n-def remove_nanrows(y, x):\n- '''remove common rows in [y,x] that contain at least one nan\n-\n- TODO: this should be made more flexible,\n- arbitrary number of arrays and 1d or 2d arrays\n-\n- duplicate: Skipper added sm.tools.drop_missing\n-\n- '''\n- mask = ~np.isnan(y)\n- mask *= ~(np.isnan(x).any(-1)) #* or &\n- y = y[mask]\n- x = x[mask]\n- return y, x\n-\n-\n-def linmod(y, x, weights=None, sigma=None, add_const=True, filter_missing=True,\n- **kwds):\n- '''get linear model with extra options for entry\n-\n- dispatches to regular model class and does not wrap the output\n-\n- If several options are exclusive, for example sigma and weights, then the\n- chosen class depends on the implementation sequence.\n- '''\n-\n- if filter_missing:\n- y, x = remove_nanrows(y, x)\n- #do the same for masked arrays\n-\n- if add_const:\n- x = sm.add_constant(x, prepend=True)\n-\n- if not sigma is None:\n- return GLS(y, x, sigma=sigma, **kwds)\n- elif not weights is None:\n- return WLS(y, x, weights=weights, **kwds)\n- else:\n- return OLS(y, x, **kwds)\n", "issue": "[MAINT/CLN] remove function explicitly marked as duplicate\nIn the function docstring:\r\n`duplicate: Skipper added sm.tools.drop_missing`\r\n\r\n<b>update</b> The relevant function is not used outside this module; nor is the other function in this module.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Convenience Wrappers\n\nCreated on Sat Oct 30 14:56:35 2010\n\nAuthor: josef-pktd\nLicense: BSD\n\"\"\"\n\nimport numpy as np\nimport statsmodels.api as sm\nfrom statsmodels import GLS, WLS, OLS\n\ndef remove_nanrows(y, x):\n '''remove common rows in [y,x] that contain at least one nan\n\n TODO: this should be made more flexible,\n arbitrary number of arrays and 1d or 2d arrays\n\n duplicate: Skipper added sm.tools.drop_missing\n\n '''\n mask = ~np.isnan(y)\n mask *= ~(np.isnan(x).any(-1)) #* or &\n y = y[mask]\n x = x[mask]\n return y, x\n\n\ndef linmod(y, x, weights=None, sigma=None, add_const=True, filter_missing=True,\n **kwds):\n '''get linear model with extra options for entry\n\n dispatches to regular model class and does not wrap the output\n\n If several options are exclusive, for example sigma and weights, then the\n chosen class depends on the implementation sequence.\n '''\n\n if filter_missing:\n y, x = remove_nanrows(y, x)\n #do the same for masked arrays\n\n if add_const:\n x = sm.add_constant(x, prepend=True)\n\n if not sigma is None:\n return GLS(y, x, sigma=sigma, **kwds)\n elif not weights is None:\n return WLS(y, x, weights=weights, **kwds)\n else:\n return OLS(y, x, **kwds)\n", "path": "statsmodels/tools/wrappers.py"}]} | 1,069 | 465 |
gh_patches_debug_4669 | rasdani/github-patches | git_diff | joke2k__faker-1441 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Change in Python 3.9.5 (and 3.8.10) causes Faker's list_module() to fail
* Faker version: 8.1.2
* OS: macOS 11.3.1
A [regression in Python](https://bugs.python.org/issue44061) breaks Faker, specifically [this line of code in Faker](https://github.com/joke2k/faker/blob/master/faker/utils/loading.py#L35) that calls `pkgutil.iter_modules([path])`.
It's not clear to me from the discussion in that python bug report exactly how they intend to resolve the issue, but I thought I'd flag this here.
### Steps to reproduce
1. Install python 3.9.5 or 3.8.10
1. Install faker
1. `import faker`
### Expected behavior
`import faker` should succeed
### Actual behavior
`import faker` raises an exception
```shell
>>> import faker
>>> import faker
Traceback (most recent call last):
File "/python/3.9/lib/python3.9/pkgutil.py", line 416, in get_importer
importer = sys.path_importer_cache[path_item]
KeyError: PosixPath('/venv/lib/python3.9/site-packages/faker/providers')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/venv/lib/python3.9/site-packages/faker/__init__.py", line 1, in <module>
from faker.factory import Factory
File "/venv/lib/python3.9/site-packages/faker/factory.py", line 7, in <module>
from faker.config import AVAILABLE_LOCALES, DEFAULT_LOCALE, PROVIDERS
File "/venv/lib/python3.9/site-packages/faker/config.py", line 11, in <module>
PROVIDERS = find_available_providers(
File "/venv/lib/python3.9/site-packages/faker/utils/loading.py", line 57, in find_available_providers
for mod in list_module(providers_mod) if mod != '__pycache__'
File "/venv/lib/python3.9/site-packages/faker/utils/loading.py", line 35, in list_module
return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]
File "/venv/lib/python3.9/site-packages/faker/utils/loading.py", line 35, in <listcomp>
return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]
File "/python/3.9/lib/python3.9/pkgutil.py", line 130, in iter_modules
for i in importers:
File "/python/3.9/lib/python3.9/pkgutil.py", line 420, in get_importer
importer = path_hook(path_item)
File "<frozen importlib._bootstrap_external>", line 1601, in path_hook_for_FileFinder
File "<frozen importlib._bootstrap_external>", line 1476, in __init__
File "<frozen importlib._bootstrap_external>", line 177, in _path_isabs
AttributeError: 'PosixPath' object has no attribute 'startswith'
```
</issue>
<code>
[start of faker/utils/loading.py]
1 import pkgutil
2 import sys
3
4 from importlib import import_module
5 from pathlib import Path
6 from types import ModuleType
7 from typing import List, Set
8
9
10 def get_path(module: ModuleType) -> str:
11 if getattr(sys, 'frozen', False):
12 # frozen
13
14 if getattr(sys, '_MEIPASS', False):
15 # PyInstaller
16 lib_dir = Path(getattr(sys, '_MEIPASS'))
17 else:
18 # others
19 lib_dir = Path(sys.executable).parent / 'lib'
20
21 path = lib_dir.joinpath(*module.__package__.split("."))
22 else:
23 # unfrozen
24 path = Path(module.__file__).parent
25 return str(path)
26
27
28 def list_module(module: ModuleType) -> List[str]:
29 path = get_path(module)
30
31 if getattr(sys, '_MEIPASS', False):
32 # PyInstaller
33 return [file.parent.name for file in Path(path).glob('*/__init__.py')]
34 else:
35 return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]
36
37
38 def find_available_locales(providers: List[str]) -> List[str]:
39 available_locales: Set[str] = set()
40
41 for provider_path in providers:
42
43 provider_module = import_module(provider_path)
44 if getattr(provider_module, 'localized', False):
45 langs = list_module(provider_module)
46 available_locales.update(langs)
47 available_locales: List[str] = sorted(available_locales)
48 return available_locales
49
50
51 def find_available_providers(modules: List[ModuleType]) -> List[str]:
52 available_providers = set()
53 for providers_mod in modules:
54 if providers_mod.__package__:
55 providers = [
56 '.'.join([providers_mod.__package__, mod])
57 for mod in list_module(providers_mod) if mod != '__pycache__'
58 ]
59 available_providers.update(providers)
60 return sorted(available_providers)
61
[end of faker/utils/loading.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/faker/utils/loading.py b/faker/utils/loading.py
--- a/faker/utils/loading.py
+++ b/faker/utils/loading.py
@@ -32,7 +32,7 @@
# PyInstaller
return [file.parent.name for file in Path(path).glob('*/__init__.py')]
else:
- return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]
+ return [name for _, name, is_pkg in pkgutil.iter_modules([str(path)]) if is_pkg]
def find_available_locales(providers: List[str]) -> List[str]:
| {"golden_diff": "diff --git a/faker/utils/loading.py b/faker/utils/loading.py\n--- a/faker/utils/loading.py\n+++ b/faker/utils/loading.py\n@@ -32,7 +32,7 @@\n # PyInstaller\n return [file.parent.name for file in Path(path).glob('*/__init__.py')]\n else:\n- return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]\n+ return [name for _, name, is_pkg in pkgutil.iter_modules([str(path)]) if is_pkg]\n \n \n def find_available_locales(providers: List[str]) -> List[str]:\n", "issue": "Change in Python 3.9.5 (and 3.8.10) causes Faker's list_module() to fail\n* Faker version: 8.1.2\r\n* OS: macOS 11.3.1\r\n\r\nA [regression in Python](https://bugs.python.org/issue44061) breaks Faker, specifically [this line of code in Faker](https://github.com/joke2k/faker/blob/master/faker/utils/loading.py#L35) that calls `pkgutil.iter_modules([path])`.\r\n\r\nIt's not clear to me from the discussion in that python bug report exactly how they intend to resolve the issue, but I thought I'd flag this here.\r\n\r\n### Steps to reproduce\r\n\r\n1. Install python 3.9.5 or 3.8.10\r\n1. Install faker\r\n1. `import faker`\r\n\r\n### Expected behavior\r\n\r\n`import faker` should succeed\r\n\r\n### Actual behavior\r\n\r\n`import faker` raises an exception\r\n\r\n```shell\r\n>>> import faker\r\n>>> import faker\r\nTraceback (most recent call last):\r\n File \"/python/3.9/lib/python3.9/pkgutil.py\", line 416, in get_importer\r\n importer = sys.path_importer_cache[path_item]\r\nKeyError: PosixPath('/venv/lib/python3.9/site-packages/faker/providers')\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/venv/lib/python3.9/site-packages/faker/__init__.py\", line 1, in <module>\r\n from faker.factory import Factory\r\n File \"/venv/lib/python3.9/site-packages/faker/factory.py\", line 7, in <module>\r\n from faker.config import AVAILABLE_LOCALES, DEFAULT_LOCALE, PROVIDERS\r\n File \"/venv/lib/python3.9/site-packages/faker/config.py\", line 11, in <module>\r\n PROVIDERS = find_available_providers(\r\n File \"/venv/lib/python3.9/site-packages/faker/utils/loading.py\", line 57, in find_available_providers\r\n for mod in list_module(providers_mod) if mod != '__pycache__'\r\n File \"/venv/lib/python3.9/site-packages/faker/utils/loading.py\", line 35, in list_module\r\n return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]\r\n File \"/venv/lib/python3.9/site-packages/faker/utils/loading.py\", line 35, in <listcomp>\r\n return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]\r\n File \"/python/3.9/lib/python3.9/pkgutil.py\", line 130, in iter_modules\r\n for i in importers:\r\n File \"/python/3.9/lib/python3.9/pkgutil.py\", line 420, in get_importer\r\n importer = path_hook(path_item)\r\n File \"<frozen importlib._bootstrap_external>\", line 1601, in path_hook_for_FileFinder\r\n File \"<frozen importlib._bootstrap_external>\", line 1476, in __init__\r\n File \"<frozen importlib._bootstrap_external>\", line 177, in _path_isabs\r\nAttributeError: 'PosixPath' object has no attribute 'startswith'\r\n```\n", "before_files": [{"content": "import pkgutil\nimport sys\n\nfrom importlib import import_module\nfrom pathlib import Path\nfrom types import ModuleType\nfrom typing import List, Set\n\n\ndef get_path(module: ModuleType) -> str:\n if getattr(sys, 'frozen', False):\n # frozen\n\n if getattr(sys, '_MEIPASS', False):\n # PyInstaller\n lib_dir = Path(getattr(sys, 
'_MEIPASS'))\n else:\n # others\n lib_dir = Path(sys.executable).parent / 'lib'\n\n path = lib_dir.joinpath(*module.__package__.split(\".\"))\n else:\n # unfrozen\n path = Path(module.__file__).parent\n return str(path)\n\n\ndef list_module(module: ModuleType) -> List[str]:\n path = get_path(module)\n\n if getattr(sys, '_MEIPASS', False):\n # PyInstaller\n return [file.parent.name for file in Path(path).glob('*/__init__.py')]\n else:\n return [name for _, name, is_pkg in pkgutil.iter_modules([path]) if is_pkg]\n\n\ndef find_available_locales(providers: List[str]) -> List[str]:\n available_locales: Set[str] = set()\n\n for provider_path in providers:\n\n provider_module = import_module(provider_path)\n if getattr(provider_module, 'localized', False):\n langs = list_module(provider_module)\n available_locales.update(langs)\n available_locales: List[str] = sorted(available_locales)\n return available_locales\n\n\ndef find_available_providers(modules: List[ModuleType]) -> List[str]:\n available_providers = set()\n for providers_mod in modules:\n if providers_mod.__package__:\n providers = [\n '.'.join([providers_mod.__package__, mod])\n for mod in list_module(providers_mod) if mod != '__pycache__'\n ]\n available_providers.update(providers)\n return sorted(available_providers)\n", "path": "faker/utils/loading.py"}]} | 1,804 | 136 |
gh_patches_debug_14375 | rasdani/github-patches | git_diff | mabel-dev__opteryx-1467 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
🪲 Column Names not Aliased
**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._
Example from user
~~~sql
SELECT *
FROM $planets AS P
INNER JOIN $satellites AS S
ON P.id = S.id
~~~
Simplified example
~~~sql
SELECT *
FROM $planets
INNER JOIN $satellites
ON $planets.id = $satellites.id
~~~
**Additional context** _Add any other context about the problem here, for example what you have done to try to diagnose or workaround the problem._
</issue>
<code>
[start of opteryx/operators/exit_node.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 """
14 Exit Node
15
16 This is a SQL Query Execution Plan Node.
17
18 This does the final preparation before returning results to users.
19
20 This does two things that the projection node doesn't do:
21 - renames columns from the internal names
22 - removes all columns not being returned to the user
23
24 This node doesn't do any calculations, it is a pure Projection.
25 """
26 import time
27 from typing import Generator
28
29 from opteryx.exceptions import AmbiguousIdentifierError
30 from opteryx.exceptions import InvalidInternalStateError
31 from opteryx.models import QueryProperties
32 from opteryx.operators import BasePlanNode
33
34
35 class ExitNode(BasePlanNode):
36 def __init__(self, properties: QueryProperties, **config):
37 super().__init__(properties=properties)
38 self.columns = config.get("columns", [])
39
40 @property
41 def config(self): # pragma: no cover
42 return None
43
44 @property
45 def name(self): # pragma: no cover
46 return "Exit"
47
48 def execute(self) -> Generator:
49 start = time.monotonic_ns()
50 morsels = self._producers[0] # type:ignore
51
52 final_columns = []
53 final_names = []
54 for column in self.columns:
55 final_columns.append(column.schema_column.identity)
56 final_names.append(column.current_name)
57
58 if len(final_columns) != len(set(final_columns)): # pragma: no cover
59 from collections import Counter
60
61 duplicates = [column for column, count in Counter(final_columns).items() if count > 1]
62 matches = {a for a, b in zip(final_names, final_columns) if b in duplicates}
63 raise AmbiguousIdentifierError(
64 message=f"Query result contains multiple instances of the same column(s) - `{'`, `'.join(matches)}`"
65 )
66
67 self.statistics.time_exiting += time.monotonic_ns() - start
68 for morsel in morsels.execute():
69 start = time.monotonic_ns()
70 if not set(final_columns).issubset(morsel.column_names): # pragma: no cover
71 mapping = {name: int_name for name, int_name in zip(final_columns, final_names)}
72 missing_references = {
73 mapping.get(ref): ref for ref in final_columns if ref not in morsel.column_names
74 }
75
76 raise InvalidInternalStateError(
77 f"The following fields were not in the resultset - {', '.join(missing_references.keys())}"
78 )
79
80 morsel = morsel.select(final_columns)
81 morsel = morsel.rename_columns(final_names)
82
83 self.statistics.time_exiting += time.monotonic_ns() - start
84 yield morsel
85 start = time.monotonic_ns()
86
[end of opteryx/operators/exit_node.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opteryx/operators/exit_node.py b/opteryx/operators/exit_node.py
--- a/opteryx/operators/exit_node.py
+++ b/opteryx/operators/exit_node.py
@@ -64,6 +64,14 @@
message=f"Query result contains multiple instances of the same column(s) - `{'`, `'.join(matches)}`"
)
+ if len(set(final_names)) != len(final_names): # we have duplicate names
+ final_names = []
+ for column in self.columns:
+ if column.schema_column.origin:
+ final_names.append(f"{column.schema_column.origin[0]}.{column.current_name}")
+ else:
+ final_names.append(column.qualified_name)
+
self.statistics.time_exiting += time.monotonic_ns() - start
for morsel in morsels.execute():
start = time.monotonic_ns()
| {"golden_diff": "diff --git a/opteryx/operators/exit_node.py b/opteryx/operators/exit_node.py\n--- a/opteryx/operators/exit_node.py\n+++ b/opteryx/operators/exit_node.py\n@@ -64,6 +64,14 @@\n message=f\"Query result contains multiple instances of the same column(s) - `{'`, `'.join(matches)}`\"\n )\n \n+ if len(set(final_names)) != len(final_names): # we have duplicate names\n+ final_names = []\n+ for column in self.columns:\n+ if column.schema_column.origin:\n+ final_names.append(f\"{column.schema_column.origin[0]}.{column.current_name}\")\n+ else:\n+ final_names.append(column.qualified_name)\n+\n self.statistics.time_exiting += time.monotonic_ns() - start\n for morsel in morsels.execute():\n start = time.monotonic_ns()\n", "issue": "\ud83e\udeb2 Column Names not Aliased\n\r\n**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._\r\n\r\nExample from user\r\n~~~sql\r\nSELECT *\r\n FROM $planets AS P\r\n INNER JOIN $satellites AS S\r\n ON P.id = S.id\r\n~~~\r\n\r\nSimplified example\r\n~~~sql\r\nSELECT *\r\n FROM $planets\r\n INNER JOIN $satellites\r\n ON $planets.id = $satellites.id\r\n~~~\r\n\r\n**Additional context** _Add any other context about the problem here, for example what you have done to try to diagnose or workaround the problem._\r\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nExit Node\n\nThis is a SQL Query Execution Plan Node.\n\nThis does the final preparation before returning results to users.\n\nThis does two things that the projection node doesn't do:\n - renames columns from the internal names\n - removes all columns not being returned to the user\n\nThis node doesn't do any calculations, it is a pure Projection.\n\"\"\"\nimport time\nfrom typing import Generator\n\nfrom opteryx.exceptions import AmbiguousIdentifierError\nfrom opteryx.exceptions import InvalidInternalStateError\nfrom opteryx.models import QueryProperties\nfrom opteryx.operators import BasePlanNode\n\n\nclass ExitNode(BasePlanNode):\n def __init__(self, properties: QueryProperties, **config):\n super().__init__(properties=properties)\n self.columns = config.get(\"columns\", [])\n\n @property\n def config(self): # pragma: no cover\n return None\n\n @property\n def name(self): # pragma: no cover\n return \"Exit\"\n\n def execute(self) -> Generator:\n start = time.monotonic_ns()\n morsels = self._producers[0] # type:ignore\n\n final_columns = []\n final_names = []\n for column in self.columns:\n final_columns.append(column.schema_column.identity)\n final_names.append(column.current_name)\n\n if len(final_columns) != len(set(final_columns)): # pragma: no cover\n from collections import Counter\n\n duplicates = [column for column, count in Counter(final_columns).items() if count > 1]\n matches = {a for a, b in zip(final_names, final_columns) if b in duplicates}\n raise AmbiguousIdentifierError(\n message=f\"Query result contains multiple instances of the same column(s) 
- `{'`, `'.join(matches)}`\"\n )\n\n self.statistics.time_exiting += time.monotonic_ns() - start\n for morsel in morsels.execute():\n start = time.monotonic_ns()\n if not set(final_columns).issubset(morsel.column_names): # pragma: no cover\n mapping = {name: int_name for name, int_name in zip(final_columns, final_names)}\n missing_references = {\n mapping.get(ref): ref for ref in final_columns if ref not in morsel.column_names\n }\n\n raise InvalidInternalStateError(\n f\"The following fields were not in the resultset - {', '.join(missing_references.keys())}\"\n )\n\n morsel = morsel.select(final_columns)\n morsel = morsel.rename_columns(final_names)\n\n self.statistics.time_exiting += time.monotonic_ns() - start\n yield morsel\n start = time.monotonic_ns()\n", "path": "opteryx/operators/exit_node.py"}]} | 1,561 | 199 |
gh_patches_debug_26167 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-969 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Indent JSON data while exporting it as Python code
I was testing out a web API and used the "Export flow as Python code" feature for the first time as a user, and noticed a possible improvement.
Currently we just export the `flow.request.body` as is (independent of its content type), but mitmproxy's interface is smart and renders different bodies differently (e.g. it indents JSON).
I think we could add this indent behaviour while exporting things as code too.
</issue>
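A minimal sketch of the requested behaviour: pretty-print a JSON body instead of emitting it as a raw string, in the spirit of the accepted diff shown further down. The content-type check is deliberately simplified (the real change uses netlib's `parse_content_type`).

```python
import json

def format_body(headers, body):
    # Emit an indented JSON literal when the body is JSON, otherwise keep it raw.
    if headers.get("content-type", "").startswith("application/json"):
        try:
            obj = json.loads(body)
        except ValueError:
            return "data = '''%s'''" % body
        return "json = %s" % json.dumps(obj, indent=4, separators=(",", ": "))
    return "data = '''%s'''" % body

print(format_body({"content-type": "application/json"}, '{"a": 1, "b": [2, 3]}'))
```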
<code>
[start of mitmproxy/flow_export.py]
1 import urllib
2 import netlib.http
3 from textwrap import dedent
4
5
6 def curl_command(flow):
7 data = "curl "
8
9 for k, v in flow.request.headers.fields:
10 data += "-H '%s:%s' " % (k, v)
11
12 if flow.request.method != "GET":
13 data += "-X %s " % flow.request.method
14
15 full_url = flow.request.scheme + "://" + flow.request.host + flow.request.path
16 data += "'%s'" % full_url
17
18 if flow.request.content:
19 data += " --data-binary '%s'" % flow.request.content
20
21 return data
22
23
24 def python_code(flow):
25 code = dedent("""
26 import requests
27
28 url = '{url}'
29 {headers}{params}{data}
30 response = requests.request(
31 method='{method}',
32 url=url,{args}
33 )
34
35 print(response.text)
36 """).strip()
37
38 components = map(lambda x: urllib.quote(x, safe=""), flow.request.path_components)
39 url = flow.request.scheme + "://" + flow.request.host + "/" + "/".join(components)
40
41 args = ""
42 headers = ""
43 if flow.request.headers:
44 lines = [" '%s': '%s',\n" % (k, v) for k, v in flow.request.headers.fields]
45 headers += "\nheaders = {\n%s}\n" % "".join(lines)
46 args += "\n headers=headers,"
47
48 params = ""
49 if flow.request.query:
50 lines = [" '%s': '%s',\n" % (k, v) for k, v in flow.request.query]
51 params = "\nparams = {\n%s}\n" % "".join(lines)
52 args += "\n params=params,"
53
54 data = ""
55 if flow.request.body:
56 data = "\ndata = '''%s'''\n" % flow.request.body
57 args += "\n data=data,"
58
59 code = code.format(
60 url=url,
61 headers=headers,
62 params=params,
63 data=data,
64 method=flow.request.method,
65 args=args,
66 )
67
68 return code
69
70
71 def raw_request(flow):
72 data = netlib.http.http1.assemble_request(flow.request)
73 return data
74
[end of mitmproxy/flow_export.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mitmproxy/flow_export.py b/mitmproxy/flow_export.py
--- a/mitmproxy/flow_export.py
+++ b/mitmproxy/flow_export.py
@@ -1,7 +1,10 @@
+import json
import urllib
-import netlib.http
from textwrap import dedent
+import netlib.http
+from netlib.utils import parse_content_type
+
def curl_command(flow):
data = "curl "
@@ -53,8 +56,16 @@
data = ""
if flow.request.body:
- data = "\ndata = '''%s'''\n" % flow.request.body
- args += "\n data=data,"
+ json_obj = is_json(flow.request.headers, flow.request.body)
+ if json_obj:
+ # Without the separators field json.dumps() produces
+ # trailing white spaces: https://bugs.python.org/issue16333
+ data = json.dumps(json_obj, indent=4, separators=(',', ': '))
+ data = "\njson = %s\n" % data
+ args += "\n json=json,"
+ else:
+ data = "\ndata = '''%s'''\n" % flow.request.body
+ args += "\n data=data,"
code = code.format(
url=url,
@@ -71,3 +82,14 @@
def raw_request(flow):
data = netlib.http.http1.assemble_request(flow.request)
return data
+
+
+def is_json(headers, content):
+ if headers:
+ ct = parse_content_type(headers.get("content-type", ""))
+ if ct and "%s/%s" % (ct[0], ct[1]) == "application/json":
+ try:
+ return json.loads(content)
+ except ValueError:
+ return False
+ return False
| {"golden_diff": "diff --git a/mitmproxy/flow_export.py b/mitmproxy/flow_export.py\n--- a/mitmproxy/flow_export.py\n+++ b/mitmproxy/flow_export.py\n@@ -1,7 +1,10 @@\n+import json\n import urllib\n-import netlib.http\n from textwrap import dedent\n \n+import netlib.http\n+from netlib.utils import parse_content_type\n+\n \n def curl_command(flow):\n data = \"curl \"\n@@ -53,8 +56,16 @@\n \n data = \"\"\n if flow.request.body:\n- data = \"\\ndata = '''%s'''\\n\" % flow.request.body\n- args += \"\\n data=data,\"\n+ json_obj = is_json(flow.request.headers, flow.request.body)\n+ if json_obj:\n+ # Without the separators field json.dumps() produces\n+ # trailing white spaces: https://bugs.python.org/issue16333\n+ data = json.dumps(json_obj, indent=4, separators=(',', ': '))\n+ data = \"\\njson = %s\\n\" % data\n+ args += \"\\n json=json,\"\n+ else:\n+ data = \"\\ndata = '''%s'''\\n\" % flow.request.body\n+ args += \"\\n data=data,\"\n \n code = code.format(\n url=url,\n@@ -71,3 +82,14 @@\n def raw_request(flow):\n data = netlib.http.http1.assemble_request(flow.request)\n return data\n+\n+\n+def is_json(headers, content):\n+ if headers:\n+ ct = parse_content_type(headers.get(\"content-type\", \"\"))\n+ if ct and \"%s/%s\" % (ct[0], ct[1]) == \"application/json\":\n+ try:\n+ return json.loads(content)\n+ except ValueError:\n+ return False\n+ return False\n", "issue": "Indent JSON data while exporting it as Python code\nI was testing out a web API and used the \"Export flow as Python code\" feature for the first time as user, and noticed an improvement.\n\nCurrently we just export the `flow.request.body` as is (independent of it's content type) but mitmproxy's interface is smart and renders different bodies differently (for eg. it indents JSON)\n\nI think we could add this indent behaviour while exporting things as code too.\n\n", "before_files": [{"content": "import urllib\nimport netlib.http\nfrom textwrap import dedent\n\n\ndef curl_command(flow):\n data = \"curl \"\n\n for k, v in flow.request.headers.fields:\n data += \"-H '%s:%s' \" % (k, v)\n\n if flow.request.method != \"GET\":\n data += \"-X %s \" % flow.request.method\n\n full_url = flow.request.scheme + \"://\" + flow.request.host + flow.request.path\n data += \"'%s'\" % full_url\n\n if flow.request.content:\n data += \" --data-binary '%s'\" % flow.request.content\n\n return data\n\n\ndef python_code(flow):\n code = dedent(\"\"\"\n import requests\n\n url = '{url}'\n {headers}{params}{data}\n response = requests.request(\n method='{method}',\n url=url,{args}\n )\n\n print(response.text)\n \"\"\").strip()\n\n components = map(lambda x: urllib.quote(x, safe=\"\"), flow.request.path_components)\n url = flow.request.scheme + \"://\" + flow.request.host + \"/\" + \"/\".join(components)\n\n args = \"\"\n headers = \"\"\n if flow.request.headers:\n lines = [\" '%s': '%s',\\n\" % (k, v) for k, v in flow.request.headers.fields]\n headers += \"\\nheaders = {\\n%s}\\n\" % \"\".join(lines)\n args += \"\\n headers=headers,\"\n\n params = \"\"\n if flow.request.query:\n lines = [\" '%s': '%s',\\n\" % (k, v) for k, v in flow.request.query]\n params = \"\\nparams = {\\n%s}\\n\" % \"\".join(lines)\n args += \"\\n params=params,\"\n\n data = \"\"\n if flow.request.body:\n data = \"\\ndata = '''%s'''\\n\" % flow.request.body\n args += \"\\n data=data,\"\n\n code = code.format(\n url=url,\n headers=headers,\n params=params,\n data=data,\n method=flow.request.method,\n args=args,\n )\n\n return code\n\n\ndef raw_request(flow):\n data = 
netlib.http.http1.assemble_request(flow.request)\n return data\n", "path": "mitmproxy/flow_export.py"}]} | 1,261 | 409 |
gh_patches_debug_252 | rasdani/github-patches | git_diff | google-deepmind__dm-haiku-48 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Jax version upgrade (AttributeError: CallPrimitive)
Using the current version of master 66f9c69 of Haiku, I am getting the following error on Colab
```
AttributeError Traceback (most recent call last)
<ipython-input-3-3a9e6adbfff5> in <module>()
----> 1 import haiku as hk
/usr/local/lib/python3.6/dist-packages/haiku/__init__.py in <module>()
17
18 from haiku import data_structures
---> 19 from haiku import experimental
20 from haiku import initializers
21 from haiku import nets
/usr/local/lib/python3.6/dist-packages/haiku/experimental.py in <module>()
22 from haiku._src.base import custom_getter
23 from haiku._src.base import ParamContext
---> 24 from haiku._src.dot import to_dot
25 from haiku._src.lift import lift
26 from haiku._src.module import profiler_name_scopes
/usr/local/lib/python3.6/dist-packages/haiku/_src/dot.py in <module>()
23
24 from haiku._src import data_structures
---> 25 from haiku._src import module
26 from haiku._src import utils
27 import jax
/usr/local/lib/python3.6/dist-packages/haiku/_src/module.py in <module>()
26 from haiku._src import base
27 from haiku._src import data_structures
---> 28 from haiku._src import named_call
29 from haiku._src import utils
30 import jax.numpy as jnp
/usr/local/lib/python3.6/dist-packages/haiku/_src/named_call.py in <module>()
29
30 # Registering named call as a primitive
---> 31 named_call_p = core.CallPrimitive('named_call')
32 # named_call is implemented as a plain core.call and only diverges
33 # under compilation (see named_call_translation_rule)
AttributeError: module 'jax.core' has no attribute 'CallPrimitive'
```
I believe that's because Haiku now requires `jax>=0.1.71`, while the version by default on Colab is `jax==0.1.69`. `CallPrimitive` was introduced in jax 0.1.71.
https://github.com/google/jax/blob/1545a29e6d69a7b3c7fdf9a49b38004759a9fbfa/jax/core.py#L1106-L1115
To reproduce (inside a Colab):
```python
import jax
print(jax.__version__) # 0.1.69
!pip install -q git+https://github.com/deepmind/dm-haiku
import haiku as hk
```
Run `!pip install -q --upgrade jax jaxlib` first in your Colab to fix this issue.
</issue>
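Two quick checks related to this report. The first verifies whether the running JAX exposes the symbol Haiku imports (the traceback above shows it missing on 0.1.69); the second mirrors the packaging-side fix from the diff later in this record, which simply raises the minimum pinned versions.

```python
import jax
from jax import core

print(jax.__version__)                 # 0.1.69 on the stock Colab runtime per the report
print(hasattr(core, "CallPrimitive"))  # False until jax >= 0.1.71

# Packaging-side fix (floor versions taken from the accepted diff below):
EXTRA_PACKAGES = {
    "jax": ["jax>=0.1.71"],
    "jaxlib": ["jaxlib>=0.1.49"],
}
```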
<code>
[start of setup.py]
1 # Lint as: python3
2 # Copyright 2019 DeepMind Technologies Limited. All Rights Reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 # ==============================================================================
16 """Setup for pip package."""
17
18 from setuptools import find_namespace_packages
19 from setuptools import setup
20
21
22 def _get_version():
23 with open('haiku/__init__.py') as fp:
24 for line in fp:
25 if line.startswith('__version__'):
26 g = {}
27 exec(line, g) # pylint: disable=exec-used
28 return g['__version__']
29 raise ValueError('`__version__` not defined in `haiku/__init__.py`')
30
31
32 def _parse_requirements(requirements_txt_path):
33 with open(requirements_txt_path) as fp:
34 return fp.read().splitlines()
35
36
37 _VERSION = _get_version()
38
39 EXTRA_PACKAGES = {
40 'jax': ['jax>=0.1.55'],
41 'jaxlib': ['jaxlib>=0.1.37'],
42 }
43
44 setup(
45 name='dm-haiku',
46 version=_VERSION,
47 url='https://github.com/deepmind/dm-haiku',
48 license='Apache 2.0',
49 author='DeepMind',
50 description='Haiku is a library for building neural networks in JAX.',
51 long_description=open('README.md').read(),
52 long_description_content_type='text/markdown',
53 author_email='[email protected]',
54 # Contained modules and scripts.
55 packages=find_namespace_packages(exclude=['*_test.py']),
56 install_requires=_parse_requirements('requirements.txt'),
57 extras_require=EXTRA_PACKAGES,
58 tests_require=_parse_requirements('requirements-test.txt'),
59 requires_python='>=3.6',
60 include_package_data=True,
61 zip_safe=False,
62 # PyPI package information.
63 classifiers=[
64 'Development Status :: 4 - Beta',
65 'Intended Audience :: Developers',
66 'Intended Audience :: Education',
67 'Intended Audience :: Science/Research',
68 'License :: OSI Approved :: Apache Software License',
69 'Programming Language :: Python :: 3',
70 'Programming Language :: Python :: 3.6',
71 'Programming Language :: Python :: 3.7',
72 'Topic :: Scientific/Engineering :: Mathematics',
73 'Topic :: Software Development :: Libraries :: Python Modules',
74 'Topic :: Software Development :: Libraries',
75 ],
76 )
77
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -37,8 +37,8 @@
_VERSION = _get_version()
EXTRA_PACKAGES = {
- 'jax': ['jax>=0.1.55'],
- 'jaxlib': ['jaxlib>=0.1.37'],
+ 'jax': ['jax>=0.1.71'],
+ 'jaxlib': ['jaxlib>=0.1.49'],
}
setup(
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -37,8 +37,8 @@\n _VERSION = _get_version()\n \n EXTRA_PACKAGES = {\n- 'jax': ['jax>=0.1.55'],\n- 'jaxlib': ['jaxlib>=0.1.37'],\n+ 'jax': ['jax>=0.1.71'],\n+ 'jaxlib': ['jaxlib>=0.1.49'],\n }\n \n setup(\n", "issue": "Jax version upgrade (AttributeError: CallPrimitive)\nUsing the current version of master 66f9c69 of Haiku, I am getting the following error on Colab\r\n```\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-3-3a9e6adbfff5> in <module>()\r\n----> 1 import haiku as hk\r\n\r\n/usr/local/lib/python3.6/dist-packages/haiku/__init__.py in <module>()\r\n 17 \r\n 18 from haiku import data_structures\r\n---> 19 from haiku import experimental\r\n 20 from haiku import initializers\r\n 21 from haiku import nets\r\n\r\n/usr/local/lib/python3.6/dist-packages/haiku/experimental.py in <module>()\r\n 22 from haiku._src.base import custom_getter\r\n 23 from haiku._src.base import ParamContext\r\n---> 24 from haiku._src.dot import to_dot\r\n 25 from haiku._src.lift import lift\r\n 26 from haiku._src.module import profiler_name_scopes\r\n\r\n/usr/local/lib/python3.6/dist-packages/haiku/_src/dot.py in <module>()\r\n 23 \r\n 24 from haiku._src import data_structures\r\n---> 25 from haiku._src import module\r\n 26 from haiku._src import utils\r\n 27 import jax\r\n\r\n/usr/local/lib/python3.6/dist-packages/haiku/_src/module.py in <module>()\r\n 26 from haiku._src import base\r\n 27 from haiku._src import data_structures\r\n---> 28 from haiku._src import named_call\r\n 29 from haiku._src import utils\r\n 30 import jax.numpy as jnp\r\n\r\n/usr/local/lib/python3.6/dist-packages/haiku/_src/named_call.py in <module>()\r\n 29 \r\n 30 # Registering named call as a primitive\r\n---> 31 named_call_p = core.CallPrimitive('named_call')\r\n 32 # named_call is implemented as a plain core.call and only diverges\r\n 33 # under compilation (see named_call_translation_rule)\r\n\r\nAttributeError: module 'jax.core' has no attribute 'CallPrimitive'\r\n```\r\n\r\nI believe that's because Haiku now requires `jax>=0.1.71`, while the version by default on Colab is `jax==0.1.69`. `CallPrimitive` was introduced in jax 0.1.71.\r\nhttps://github.com/google/jax/blob/1545a29e6d69a7b3c7fdf9a49b38004759a9fbfa/jax/core.py#L1106-L1115\r\n\r\nTo reproduce (inside a Colab):\r\n```python\r\nimport jax\r\nprint(jax.__version__) # 0.1.69\r\n\r\n!pip install -q git+https://github.com/deepmind/dm-haiku\r\nimport haiku as hk\r\n```\r\n\r\nRun `!pip install -q --upgrade jax jaxlib` first in your Colab to fix this issue.\n", "before_files": [{"content": "# Lint as: python3\n# Copyright 2019 DeepMind Technologies Limited. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Setup for pip package.\"\"\"\n\nfrom setuptools import find_namespace_packages\nfrom setuptools import setup\n\n\ndef _get_version():\n with open('haiku/__init__.py') as fp:\n for line in fp:\n if line.startswith('__version__'):\n g = {}\n exec(line, g) # pylint: disable=exec-used\n return g['__version__']\n raise ValueError('`__version__` not defined in `haiku/__init__.py`')\n\n\ndef _parse_requirements(requirements_txt_path):\n with open(requirements_txt_path) as fp:\n return fp.read().splitlines()\n\n\n_VERSION = _get_version()\n\nEXTRA_PACKAGES = {\n 'jax': ['jax>=0.1.55'],\n 'jaxlib': ['jaxlib>=0.1.37'],\n}\n\nsetup(\n name='dm-haiku',\n version=_VERSION,\n url='https://github.com/deepmind/dm-haiku',\n license='Apache 2.0',\n author='DeepMind',\n description='Haiku is a library for building neural networks in JAX.',\n long_description=open('README.md').read(),\n long_description_content_type='text/markdown',\n author_email='[email protected]',\n # Contained modules and scripts.\n packages=find_namespace_packages(exclude=['*_test.py']),\n install_requires=_parse_requirements('requirements.txt'),\n extras_require=EXTRA_PACKAGES,\n tests_require=_parse_requirements('requirements-test.txt'),\n requires_python='>=3.6',\n include_package_data=True,\n zip_safe=False,\n # PyPI package information.\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Libraries',\n ],\n)\n", "path": "setup.py"}]} | 2,001 | 114 |
gh_patches_debug_3065 | rasdani/github-patches | git_diff | coala__coala-3348 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wrong doc string syntax in coalib.bearlib.aspects.Root
The doc string of the `Root` aspectclass has a formatting issue at https://github.com/coala/coala/blob/master/coalib/bearlib/aspects/__init__.py#L61
You can see the wrongly rendered result at https://api.coala.io/en/latest/coalib.bearlib.aspects.html#module-coalib.bearlib.aspects
</issue>
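For reference, the corrected fragment exactly as the diff further down applies it: the stray `>` becomes a colon and a blank line separates the sentence from the doctest, so the example renders as intended.

```python
FIXED_FRAGMENT = '''
    If no settings are given, the defaults will be taken:

    >>> LineLength('Python').tastes
    {'max_line_length': 80}
'''
```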
<code>
[start of coalib/bearlib/aspects/__init__.py]
1 from .base import aspectbase
2 from .meta import aspectclass
3 from .taste import Taste, TasteError
4
5 __all__ = ['Root', 'Taste', 'TasteError', 'aspectclass']
6
7
8 class Root(aspectbase, metaclass=aspectclass):
9 """
10 The root aspectclass.
11
12 Define sub-aspectclasses with class-bound ``.subaspect`` decorator.
13 Definition string is taken from doc-string of decorated class.
14 Remaining docs are taken from a nested ``docs`` class.
15 Tastes are defined as class attributes that are instances of
16 :class:`coalib.bearlib.aspectclasses.Taste`.
17
18 >>> @Root.subaspect
19 ... class Formatting:
20 ... \"""
21 ... A parent aspect for code formatting aspects...
22 ... \"""
23
24 We can now create subaspects like this:
25
26 >>> @Formatting.subaspect
27 ... class LineLength:
28 ... \"""
29 ... This aspect controls the length of a line...
30 ... \"""
31 ... class docs:
32 ... example = "..."
33 ... example_language = "..."
34 ... importance_reason = "..."
35 ... fix_suggestions = "..."
36 ...
37 ... max_line_length = Taste[int](
38 ... "Maximum length allowed for a line.",
39 ... (80, 90, 120), default=80)
40
41 The representation will show the full "path" to the leaf of the tree:
42
43 >>> Root.Formatting.LineLength
44 <aspectclass 'Root.Formatting.LineLength'>
45
46 We can see, which settings are availables:
47
48 >>> Formatting.tastes
49 {}
50 >>> LineLength.tastes
51 {'max_line_length': <....Taste[int] object at ...>}
52
53 And instantiate the aspect with the values, they will be automatically
54 converted:
55
56 >>> Formatting('Python')
57 <coalib.bearlib.aspects.Root.Formatting object at 0x...>
58 >>> LineLength('Python', max_line_length="100").tastes
59 {'max_line_length': 100}
60
61 If no settings are given, the defaults will be taken>
62 >>> LineLength('Python').tastes
63 {'max_line_length': 80}
64
65 Tastes can also be made available for only specific languages:
66
67 >>> from coalib.bearlib.languages import Language
68 >>> @Language
69 ... class GreaterTrumpScript:
70 ... pass
71
72 >>> @Formatting.subaspect
73 ... class Greatness:
74 ... \"""
75 ... This aspect controls the greatness of a file...
76 ... \"""
77 ...
78 ... min_greatness = Taste[int](
79 ... "Minimum greatness factor needed for a TrumpScript file. "
80 ... "This is fact.",
81 ... (1000000, 1000000000, 1000000000000), default=1000000,
82 ... languages=('GreaterTrumpScript' ,))
83
84 >>> Greatness.tastes
85 {'min_greatness': <....Taste[int] object at ...>}
86 >>> Greatness('GreaterTrumpScript').tastes
87 {'min_greatness': 1000000}
88 >>> Greatness('GreaterTrumpScript', min_greatness=1000000000000).tastes
89 {'min_greatness': 1000000000000}
90
91 >>> Greatness('Python').tastes
92 {}
93
94 >>> Greatness('Python', min_greatness=1000000000)
95 ... # doctest: +NORMALIZE_WHITESPACE
96 Traceback (most recent call last):
97 ...
98 coalib.bearlib.aspects.taste.TasteError:
99 Root.Formatting.Greatness.min_greatness is not available ...
100
101 >>> Greatness('Python').min_greatness
102 ... # doctest: +NORMALIZE_WHITESPACE
103 Traceback (most recent call last):
104 ...
105 coalib.bearlib.aspects.taste.TasteError:
106 Root.Formatting.Greatness.min_greatness is not available ...
107 """
108 parent = None
109
110 _tastes = {}
111
[end of coalib/bearlib/aspects/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/coalib/bearlib/aspects/__init__.py b/coalib/bearlib/aspects/__init__.py
--- a/coalib/bearlib/aspects/__init__.py
+++ b/coalib/bearlib/aspects/__init__.py
@@ -58,7 +58,8 @@
>>> LineLength('Python', max_line_length="100").tastes
{'max_line_length': 100}
- If no settings are given, the defaults will be taken>
+ If no settings are given, the defaults will be taken:
+
>>> LineLength('Python').tastes
{'max_line_length': 80}
| {"golden_diff": "diff --git a/coalib/bearlib/aspects/__init__.py b/coalib/bearlib/aspects/__init__.py\n--- a/coalib/bearlib/aspects/__init__.py\n+++ b/coalib/bearlib/aspects/__init__.py\n@@ -58,7 +58,8 @@\n >>> LineLength('Python', max_line_length=\"100\").tastes\n {'max_line_length': 100}\n \n- If no settings are given, the defaults will be taken>\n+ If no settings are given, the defaults will be taken:\n+\n >>> LineLength('Python').tastes\n {'max_line_length': 80}\n", "issue": "Wrong doc string syntax in coalib.bearlib.aspects.Root\nThe doc string of the `Root` aspectclass has a formatting issue at https://github.com/coala/coala/blob/master/coalib/bearlib/aspects/__init__.py#L61\r\n\r\nYou can see the wrongly rendered result at https://api.coala.io/en/latest/coalib.bearlib.aspects.html#module-coalib.bearlib.aspects\n", "before_files": [{"content": "from .base import aspectbase\nfrom .meta import aspectclass\nfrom .taste import Taste, TasteError\n\n__all__ = ['Root', 'Taste', 'TasteError', 'aspectclass']\n\n\nclass Root(aspectbase, metaclass=aspectclass):\n \"\"\"\n The root aspectclass.\n\n Define sub-aspectclasses with class-bound ``.subaspect`` decorator.\n Definition string is taken from doc-string of decorated class.\n Remaining docs are taken from a nested ``docs`` class.\n Tastes are defined as class attributes that are instances of\n :class:`coalib.bearlib.aspectclasses.Taste`.\n\n >>> @Root.subaspect\n ... class Formatting:\n ... \\\"\"\"\n ... A parent aspect for code formatting aspects...\n ... \\\"\"\"\n\n We can now create subaspects like this:\n\n >>> @Formatting.subaspect\n ... class LineLength:\n ... \\\"\"\"\n ... This aspect controls the length of a line...\n ... \\\"\"\"\n ... class docs:\n ... example = \"...\"\n ... example_language = \"...\"\n ... importance_reason = \"...\"\n ... fix_suggestions = \"...\"\n ...\n ... max_line_length = Taste[int](\n ... \"Maximum length allowed for a line.\",\n ... (80, 90, 120), default=80)\n\n The representation will show the full \"path\" to the leaf of the tree:\n\n >>> Root.Formatting.LineLength\n <aspectclass 'Root.Formatting.LineLength'>\n\n We can see, which settings are availables:\n\n >>> Formatting.tastes\n {}\n >>> LineLength.tastes\n {'max_line_length': <....Taste[int] object at ...>}\n\n And instantiate the aspect with the values, they will be automatically\n converted:\n\n >>> Formatting('Python')\n <coalib.bearlib.aspects.Root.Formatting object at 0x...>\n >>> LineLength('Python', max_line_length=\"100\").tastes\n {'max_line_length': 100}\n\n If no settings are given, the defaults will be taken>\n >>> LineLength('Python').tastes\n {'max_line_length': 80}\n\n Tastes can also be made available for only specific languages:\n\n >>> from coalib.bearlib.languages import Language\n >>> @Language\n ... class GreaterTrumpScript:\n ... pass\n\n >>> @Formatting.subaspect\n ... class Greatness:\n ... \\\"\"\"\n ... This aspect controls the greatness of a file...\n ... \\\"\"\"\n ...\n ... min_greatness = Taste[int](\n ... \"Minimum greatness factor needed for a TrumpScript file. \"\n ... \"This is fact.\",\n ... (1000000, 1000000000, 1000000000000), default=1000000,\n ... 
languages=('GreaterTrumpScript' ,))\n\n >>> Greatness.tastes\n {'min_greatness': <....Taste[int] object at ...>}\n >>> Greatness('GreaterTrumpScript').tastes\n {'min_greatness': 1000000}\n >>> Greatness('GreaterTrumpScript', min_greatness=1000000000000).tastes\n {'min_greatness': 1000000000000}\n\n >>> Greatness('Python').tastes\n {}\n\n >>> Greatness('Python', min_greatness=1000000000)\n ... # doctest: +NORMALIZE_WHITESPACE\n Traceback (most recent call last):\n ...\n coalib.bearlib.aspects.taste.TasteError:\n Root.Formatting.Greatness.min_greatness is not available ...\n\n >>> Greatness('Python').min_greatness\n ... # doctest: +NORMALIZE_WHITESPACE\n Traceback (most recent call last):\n ...\n coalib.bearlib.aspects.taste.TasteError:\n Root.Formatting.Greatness.min_greatness is not available ...\n \"\"\"\n parent = None\n\n _tastes = {}\n", "path": "coalib/bearlib/aspects/__init__.py"}]} | 1,801 | 152 |
gh_patches_debug_17047 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-2079 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Skipped Baggage entries in propagation still count against max entries
The decrement operation should be moved after the last `continue` block if the over-long entry is truly meant to be skipped; otherwise this behavior should probably be documented and tested for.
https://github.com/open-telemetry/opentelemetry-python/blob/4250078e43ddb24c88e19270c7af01ae63336fb9/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py#L57-L65
</issue>
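A standalone sketch of the reordering the issue asks for (simplified relative to `W3CBaggagePropagator.extract`; names here are illustrative): only entries that are actually accepted should consume the pair budget.

```python
def parse_baggage(entries, max_pairs=180, max_pair_length=4096):
    accepted = {}
    remaining = max_pairs
    for entry in entries:
        if len(entry) > max_pair_length:
            continue                      # skipped entry: budget untouched
        try:
            name, value = entry.split("=", 1)
        except ValueError:
            continue                      # malformed entry: budget untouched
        accepted[name.strip()] = value.strip()
        remaining -= 1                    # decrement only after acceptance
        if remaining == 0:
            break
    return accepted

print(parse_baggage(["a=1", "not-a-pair", "b=2"], max_pairs=2))
# {'a': '1', 'b': '2'}  -- the malformed entry no longer uses up a slot
```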
<code>
[start of opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 import typing
16 from urllib.parse import quote_plus, unquote_plus
17
18 from opentelemetry.baggage import get_all, set_baggage
19 from opentelemetry.context import get_current
20 from opentelemetry.context.context import Context
21 from opentelemetry.propagators import textmap
22
23
24 class W3CBaggagePropagator(textmap.TextMapPropagator):
25 """Extracts and injects Baggage which is used to annotate telemetry."""
26
27 _MAX_HEADER_LENGTH = 8192
28 _MAX_PAIR_LENGTH = 4096
29 _MAX_PAIRS = 180
30 _BAGGAGE_HEADER_NAME = "baggage"
31
32 def extract(
33 self,
34 carrier: textmap.CarrierT,
35 context: typing.Optional[Context] = None,
36 getter: textmap.Getter = textmap.default_getter,
37 ) -> Context:
38 """Extract Baggage from the carrier.
39
40 See
41 `opentelemetry.propagators.textmap.TextMapPropagator.extract`
42 """
43
44 if context is None:
45 context = get_current()
46
47 header = _extract_first_element(
48 getter.get(carrier, self._BAGGAGE_HEADER_NAME)
49 )
50
51 if not header or len(header) > self._MAX_HEADER_LENGTH:
52 return context
53
54 baggage_entries = header.split(",")
55 total_baggage_entries = self._MAX_PAIRS
56 for entry in baggage_entries:
57 if total_baggage_entries <= 0:
58 return context
59 total_baggage_entries -= 1
60 if len(entry) > self._MAX_PAIR_LENGTH:
61 continue
62 try:
63 name, value = entry.split("=", 1)
64 except Exception: # pylint: disable=broad-except
65 continue
66 context = set_baggage(
67 unquote_plus(name).strip(),
68 unquote_plus(value).strip(),
69 context=context,
70 )
71
72 return context
73
74 def inject(
75 self,
76 carrier: textmap.CarrierT,
77 context: typing.Optional[Context] = None,
78 setter: textmap.Setter = textmap.default_setter,
79 ) -> None:
80 """Injects Baggage into the carrier.
81
82 See
83 `opentelemetry.propagators.textmap.TextMapPropagator.inject`
84 """
85 baggage_entries = get_all(context=context)
86 if not baggage_entries:
87 return
88
89 baggage_string = _format_baggage(baggage_entries)
90 setter.set(carrier, self._BAGGAGE_HEADER_NAME, baggage_string)
91
92 @property
93 def fields(self) -> typing.Set[str]:
94 """Returns a set with the fields set in `inject`."""
95 return {self._BAGGAGE_HEADER_NAME}
96
97
98 def _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:
99 return ",".join(
100 quote_plus(str(key)) + "=" + quote_plus(str(value))
101 for key, value in baggage_entries.items()
102 )
103
104
105 def _extract_first_element(
106 items: typing.Optional[typing.Iterable[textmap.CarrierT]],
107 ) -> typing.Optional[textmap.CarrierT]:
108 if items is None:
109 return None
110 return next(iter(items), None)
111
[end of opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py
--- a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py
+++ b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py
@@ -54,9 +54,6 @@
baggage_entries = header.split(",")
total_baggage_entries = self._MAX_PAIRS
for entry in baggage_entries:
- if total_baggage_entries <= 0:
- return context
- total_baggage_entries -= 1
if len(entry) > self._MAX_PAIR_LENGTH:
continue
try:
@@ -68,6 +65,9 @@
unquote_plus(value).strip(),
context=context,
)
+ total_baggage_entries -= 1
+ if total_baggage_entries == 0:
+ break
return context
| {"golden_diff": "diff --git a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py\n--- a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py\n+++ b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py\n@@ -54,9 +54,6 @@\n baggage_entries = header.split(\",\")\n total_baggage_entries = self._MAX_PAIRS\n for entry in baggage_entries:\n- if total_baggage_entries <= 0:\n- return context\n- total_baggage_entries -= 1\n if len(entry) > self._MAX_PAIR_LENGTH:\n continue\n try:\n@@ -68,6 +65,9 @@\n unquote_plus(value).strip(),\n context=context,\n )\n+ total_baggage_entries -= 1\n+ if total_baggage_entries == 0:\n+ break\n \n return context\n", "issue": "Skipped Baggage entries in propagation still count against max entries\nThe decrement operation should be moved after the last continue block if the over-long entry is truly skipped, otherwise this behavior should probably be documented/tested for.\r\n\r\nhttps://github.com/open-telemetry/opentelemetry-python/blob/4250078e43ddb24c88e19270c7af01ae63336fb9/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py#L57-L65\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport typing\nfrom urllib.parse import quote_plus, unquote_plus\n\nfrom opentelemetry.baggage import get_all, set_baggage\nfrom opentelemetry.context import get_current\nfrom opentelemetry.context.context import Context\nfrom opentelemetry.propagators import textmap\n\n\nclass W3CBaggagePropagator(textmap.TextMapPropagator):\n \"\"\"Extracts and injects Baggage which is used to annotate telemetry.\"\"\"\n\n _MAX_HEADER_LENGTH = 8192\n _MAX_PAIR_LENGTH = 4096\n _MAX_PAIRS = 180\n _BAGGAGE_HEADER_NAME = \"baggage\"\n\n def extract(\n self,\n carrier: textmap.CarrierT,\n context: typing.Optional[Context] = None,\n getter: textmap.Getter = textmap.default_getter,\n ) -> Context:\n \"\"\"Extract Baggage from the carrier.\n\n See\n `opentelemetry.propagators.textmap.TextMapPropagator.extract`\n \"\"\"\n\n if context is None:\n context = get_current()\n\n header = _extract_first_element(\n getter.get(carrier, self._BAGGAGE_HEADER_NAME)\n )\n\n if not header or len(header) > self._MAX_HEADER_LENGTH:\n return context\n\n baggage_entries = header.split(\",\")\n total_baggage_entries = self._MAX_PAIRS\n for entry in baggage_entries:\n if total_baggage_entries <= 0:\n return context\n total_baggage_entries -= 1\n if len(entry) > self._MAX_PAIR_LENGTH:\n continue\n try:\n name, value = entry.split(\"=\", 1)\n except Exception: # pylint: disable=broad-except\n continue\n context = set_baggage(\n unquote_plus(name).strip(),\n unquote_plus(value).strip(),\n context=context,\n )\n\n return context\n\n def inject(\n self,\n carrier: textmap.CarrierT,\n context: typing.Optional[Context] = None,\n setter: textmap.Setter = textmap.default_setter,\n ) -> None:\n \"\"\"Injects Baggage into the carrier.\n\n 
See\n `opentelemetry.propagators.textmap.TextMapPropagator.inject`\n \"\"\"\n baggage_entries = get_all(context=context)\n if not baggage_entries:\n return\n\n baggage_string = _format_baggage(baggage_entries)\n setter.set(carrier, self._BAGGAGE_HEADER_NAME, baggage_string)\n\n @property\n def fields(self) -> typing.Set[str]:\n \"\"\"Returns a set with the fields set in `inject`.\"\"\"\n return {self._BAGGAGE_HEADER_NAME}\n\n\ndef _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:\n return \",\".join(\n quote_plus(str(key)) + \"=\" + quote_plus(str(value))\n for key, value in baggage_entries.items()\n )\n\n\ndef _extract_first_element(\n items: typing.Optional[typing.Iterable[textmap.CarrierT]],\n) -> typing.Optional[textmap.CarrierT]:\n if items is None:\n return None\n return next(iter(items), None)\n", "path": "opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py"}]} | 1,719 | 222 |
gh_patches_debug_3979 | rasdani/github-patches | git_diff | pyca__cryptography-1246 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Need binding to void GENERAL_NAMES_free(GENERAL_NAMES *)
The calls to the d2i methods on the altSubjectName extension return a dynamically allocated memory object that must be freed, so a binding for GENERAL_NAMES_free should be exposed from hazmat so that higher-level code can avoid memory leaks. Not sure which module should expose the binding, but I used the x509v3.py module in the proposed solution: https://github.com/crc32a/cryptography/commit/24df02646de1e5c1773c9048076b5d67d4c5c0fa
This affects pyOpenSSL issue https://github.com/pyca/pyopenssl/issues/139, and an example of its usage to avoid memory leaks is
https://github.com/rackerlabs/pyopenssl/commit/a479a74820619da13dfab8925cf49c4f766b6536
</issue>
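The requested change is a one-line cdef addition, matching the diff later in this record. The caller-side pattern below is hypothetical: apart from `GENERAL_NAMES_free` and `sk_GENERAL_NAME_num`, which exist in these bindings, the names are placeholders for whatever higher-level code obtains the `GENERAL_NAMES *`.

```python
FUNCTIONS_ADDITION = "void GENERAL_NAMES_free(GENERAL_NAMES *);"

# Hypothetical caller pattern once the symbol is bound:
# names = decode_subject_alt_name(extension)   # yields a GENERAL_NAMES *
# try:
#     count = lib.sk_GENERAL_NAME_num(names)
# finally:
#     lib.GENERAL_NAMES_free(names)            # releases the d2i allocation
```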
<code>
[start of cryptography/hazmat/bindings/openssl/x509v3.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16 INCLUDES = """
17 #include <openssl/x509v3.h>
18 """
19
20 TYPES = """
21 typedef struct {
22 X509 *issuer_cert;
23 X509 *subject_cert;
24 ...;
25 } X509V3_CTX;
26
27 typedef void * (*X509V3_EXT_D2I)(void *, const unsigned char **, long);
28
29 typedef struct {
30 ASN1_ITEM_EXP *it;
31 X509V3_EXT_D2I d2i;
32 ...;
33 } X509V3_EXT_METHOD;
34
35 static const int GEN_OTHERNAME;
36 static const int GEN_EMAIL;
37 static const int GEN_X400;
38 static const int GEN_DNS;
39 static const int GEN_URI;
40 static const int GEN_DIRNAME;
41 static const int GEN_EDIPARTY;
42 static const int GEN_IPADD;
43 static const int GEN_RID;
44
45 typedef struct {
46 ...;
47 } OTHERNAME;
48
49 typedef struct {
50 ...;
51 } EDIPARTYNAME;
52
53 typedef struct {
54 int type;
55 union {
56 char *ptr;
57 OTHERNAME *otherName; /* otherName */
58 ASN1_IA5STRING *rfc822Name;
59 ASN1_IA5STRING *dNSName;
60 ASN1_TYPE *x400Address;
61 X509_NAME *directoryName;
62 EDIPARTYNAME *ediPartyName;
63 ASN1_IA5STRING *uniformResourceIdentifier;
64 ASN1_OCTET_STRING *iPAddress;
65 ASN1_OBJECT *registeredID;
66
67 /* Old names */
68 ASN1_OCTET_STRING *ip; /* iPAddress */
69 X509_NAME *dirn; /* dirn */
70 ASN1_IA5STRING *ia5; /* rfc822Name, dNSName, */
71 /* uniformResourceIdentifier */
72 ASN1_OBJECT *rid; /* registeredID */
73 ASN1_TYPE *other; /* x400Address */
74 } d;
75 ...;
76 } GENERAL_NAME;
77
78 typedef struct stack_st_GENERAL_NAME GENERAL_NAMES;
79 """
80
81 FUNCTIONS = """
82 void X509V3_set_ctx(X509V3_CTX *, X509 *, X509 *, X509_REQ *, X509_CRL *, int);
83 X509_EXTENSION *X509V3_EXT_nconf(CONF *, X509V3_CTX *, char *, char *);
84 int GENERAL_NAME_print(BIO *, GENERAL_NAME *);
85 """
86
87 MACROS = """
88 void *X509V3_set_ctx_nodb(X509V3_CTX *);
89 int sk_GENERAL_NAME_num(struct stack_st_GENERAL_NAME *);
90 int sk_GENERAL_NAME_push(struct stack_st_GENERAL_NAME *, GENERAL_NAME *);
91 GENERAL_NAME *sk_GENERAL_NAME_value(struct stack_st_GENERAL_NAME *, int);
92
93 /* These aren't macros these functions are all const X on openssl > 1.0.x */
94 const X509V3_EXT_METHOD *X509V3_EXT_get(X509_EXTENSION *);
95 const X509V3_EXT_METHOD *X509V3_EXT_get_nid(int);
96 """
97
98 CUSTOMIZATIONS = """
99 """
100
101 CONDITIONAL_NAMES = {}
102
[end of cryptography/hazmat/bindings/openssl/x509v3.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cryptography/hazmat/bindings/openssl/x509v3.py b/cryptography/hazmat/bindings/openssl/x509v3.py
--- a/cryptography/hazmat/bindings/openssl/x509v3.py
+++ b/cryptography/hazmat/bindings/openssl/x509v3.py
@@ -82,6 +82,7 @@
void X509V3_set_ctx(X509V3_CTX *, X509 *, X509 *, X509_REQ *, X509_CRL *, int);
X509_EXTENSION *X509V3_EXT_nconf(CONF *, X509V3_CTX *, char *, char *);
int GENERAL_NAME_print(BIO *, GENERAL_NAME *);
+void GENERAL_NAMES_free(GENERAL_NAMES *);
"""
MACROS = """
| {"golden_diff": "diff --git a/cryptography/hazmat/bindings/openssl/x509v3.py b/cryptography/hazmat/bindings/openssl/x509v3.py\n--- a/cryptography/hazmat/bindings/openssl/x509v3.py\n+++ b/cryptography/hazmat/bindings/openssl/x509v3.py\n@@ -82,6 +82,7 @@\n void X509V3_set_ctx(X509V3_CTX *, X509 *, X509 *, X509_REQ *, X509_CRL *, int);\n X509_EXTENSION *X509V3_EXT_nconf(CONF *, X509V3_CTX *, char *, char *);\n int GENERAL_NAME_print(BIO *, GENERAL_NAME *);\n+void GENERAL_NAMES_free(GENERAL_NAMES *);\n \"\"\"\n \n MACROS = \"\"\"\n", "issue": "Need binding to void GENERAL_NAMES_free(GENERAL_NAMES *)\nthe function call to d2i methods on the altSubjectName extension returned a dynamicly allocated memory object that must be garbage collected so binding for GENERAL_NAMES_free should be exposed from hazmat so that higher level code can avoid memory leaks. Not sure which module should expose the binding but I used x509v3.py module in the Proposed solution https://github.com/crc32a/cryptography/commit/24df02646de1e5c1773c9048076b5d67d4c5c0fa\n\nthis effects issue https://github.com/pyca/pyopenssl/issues/139 of pyopenssl and an example of its usage to avoid memory leaks is\nhttps://github.com/rackerlabs/pyopenssl/commit/a479a74820619da13dfab8925cf49c4f766b6536\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nINCLUDES = \"\"\"\n#include <openssl/x509v3.h>\n\"\"\"\n\nTYPES = \"\"\"\ntypedef struct {\n X509 *issuer_cert;\n X509 *subject_cert;\n ...;\n} X509V3_CTX;\n\ntypedef void * (*X509V3_EXT_D2I)(void *, const unsigned char **, long);\n\ntypedef struct {\n ASN1_ITEM_EXP *it;\n X509V3_EXT_D2I d2i;\n ...;\n} X509V3_EXT_METHOD;\n\nstatic const int GEN_OTHERNAME;\nstatic const int GEN_EMAIL;\nstatic const int GEN_X400;\nstatic const int GEN_DNS;\nstatic const int GEN_URI;\nstatic const int GEN_DIRNAME;\nstatic const int GEN_EDIPARTY;\nstatic const int GEN_IPADD;\nstatic const int GEN_RID;\n\ntypedef struct {\n ...;\n} OTHERNAME;\n\ntypedef struct {\n ...;\n} EDIPARTYNAME;\n\ntypedef struct {\n int type;\n union {\n char *ptr;\n OTHERNAME *otherName; /* otherName */\n ASN1_IA5STRING *rfc822Name;\n ASN1_IA5STRING *dNSName;\n ASN1_TYPE *x400Address;\n X509_NAME *directoryName;\n EDIPARTYNAME *ediPartyName;\n ASN1_IA5STRING *uniformResourceIdentifier;\n ASN1_OCTET_STRING *iPAddress;\n ASN1_OBJECT *registeredID;\n\n /* Old names */\n ASN1_OCTET_STRING *ip; /* iPAddress */\n X509_NAME *dirn; /* dirn */\n ASN1_IA5STRING *ia5; /* rfc822Name, dNSName, */\n /* uniformResourceIdentifier */\n ASN1_OBJECT *rid; /* registeredID */\n ASN1_TYPE *other; /* x400Address */\n } d;\n ...;\n} GENERAL_NAME;\n\ntypedef struct stack_st_GENERAL_NAME GENERAL_NAMES;\n\"\"\"\n\nFUNCTIONS = \"\"\"\nvoid X509V3_set_ctx(X509V3_CTX *, X509 *, X509 *, X509_REQ *, X509_CRL *, int);\nX509_EXTENSION *X509V3_EXT_nconf(CONF *, X509V3_CTX *, char *, char *);\nint GENERAL_NAME_print(BIO *, GENERAL_NAME *);\n\"\"\"\n\nMACROS = \"\"\"\nvoid 
*X509V3_set_ctx_nodb(X509V3_CTX *);\nint sk_GENERAL_NAME_num(struct stack_st_GENERAL_NAME *);\nint sk_GENERAL_NAME_push(struct stack_st_GENERAL_NAME *, GENERAL_NAME *);\nGENERAL_NAME *sk_GENERAL_NAME_value(struct stack_st_GENERAL_NAME *, int);\n\n/* These aren't macros these functions are all const X on openssl > 1.0.x */\nconst X509V3_EXT_METHOD *X509V3_EXT_get(X509_EXTENSION *);\nconst X509V3_EXT_METHOD *X509V3_EXT_get_nid(int);\n\"\"\"\n\nCUSTOMIZATIONS = \"\"\"\n\"\"\"\n\nCONDITIONAL_NAMES = {}\n", "path": "cryptography/hazmat/bindings/openssl/x509v3.py"}]} | 1,779 | 187 |
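A brief consumer-side sketch of why exposing `GENERAL_NAMES_free` matters for the record above: the structure returned by a d2i call must be freed by the caller. Only `Binding`, `ffi.gc`, and the newly bound `lib.GENERAL_NAMES_free` come from the patch and the cffi/cryptography APIs; the wrapper function and how the raw pointer is obtained are assumptions.

```python
# Sketch only: attach GENERAL_NAMES_free as a cffi destructor so a GENERAL_NAMES*
# obtained from a d2i call is released when the Python wrapper is collected.
from cryptography.hazmat.bindings.openssl.binding import Binding

binding = Binding()
ffi, lib = binding.ffi, binding.lib

def owned_general_names(raw_ptr):
    # raw_ptr is assumed to be a freshly allocated GENERAL_NAMES* from a d2i call
    return ffi.gc(raw_ptr, lib.GENERAL_NAMES_free)
```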
gh_patches_debug_5324 | rasdani/github-patches | git_diff | deepchecks__deepchecks-968 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[FEAT][CV] Add conditions to checks missing conditions
Some checks are missing conditions:
- [x] Heatmap
- [x] Image Drift
- [x] Train Test Drift
- [x] Robustness
</issue>
<code>
[start of deepchecks/vision/suites/default_suites.py]
1 # ----------------------------------------------------------------------------
2 # Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)
3 #
4 # This file is part of Deepchecks.
5 # Deepchecks is distributed under the terms of the GNU Affero General
6 # Public License (version 3 or later).
7 # You should have received a copy of the GNU Affero General Public License
8 # along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.
9 # ----------------------------------------------------------------------------
10 #
11 """Functions for loading the default (built-in) vision suites for various validation stages.
12
13 Each function returns a new suite that is initialized with a list of checks and default conditions.
14 It is possible to customize these suites by editing the checks and conditions inside it after the suites' creation.
15 """
16 from deepchecks.vision.checks import ClassPerformance, TrainTestLabelDrift, MeanAveragePrecisionReport, \
17 MeanAverageRecallReport, ImagePropertyDrift, ImageDatasetDrift, SimpleModelComparison, ConfusionMatrixReport, \
18 RobustnessReport, TrainTestPredictionDrift
19 from deepchecks.vision import Suite
20
21
22 __all__ = ['train_test_validation', 'model_evaluation', 'full_suite']
23
24 from deepchecks.vision.checks.distribution import HeatmapComparison
25
26
27 def train_test_validation() -> Suite:
28 """Create a suite that is meant to validate correctness of train-test split, including integrity, \
29 distribution and leakage checks."""
30 return Suite(
31 'Train Test Validation Suite',
32 HeatmapComparison(),
33 TrainTestLabelDrift(),
34 TrainTestPredictionDrift(),
35 ImagePropertyDrift().add_condition_drift_score_not_greater_than(),
36 ImageDatasetDrift()
37 )
38
39
40 def model_evaluation() -> Suite:
41 """Create a suite that is meant to test model performance and overfit."""
42 return Suite(
43 'Model Evaluation Suite',
44 ClassPerformance(),
45 MeanAveragePrecisionReport(),
46 MeanAverageRecallReport(),
47 SimpleModelComparison(),
48 ConfusionMatrixReport(),
49 RobustnessReport().add_condition_degradation_not_greater_than()
50 )
51
52
53 def full_suite() -> Suite:
54 """Create a suite that includes many of the implemented checks, for a quick overview of your model and data."""
55 return Suite(
56 'Full Suite',
57 model_evaluation(),
58 train_test_validation(),
59 )
60
[end of deepchecks/vision/suites/default_suites.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/deepchecks/vision/suites/default_suites.py b/deepchecks/vision/suites/default_suites.py
--- a/deepchecks/vision/suites/default_suites.py
+++ b/deepchecks/vision/suites/default_suites.py
@@ -31,7 +31,7 @@
'Train Test Validation Suite',
HeatmapComparison(),
TrainTestLabelDrift(),
- TrainTestPredictionDrift(),
+ TrainTestPredictionDrift().add_condition_drift_score_not_greater_than(),
ImagePropertyDrift().add_condition_drift_score_not_greater_than(),
ImageDatasetDrift()
)
| {"golden_diff": "diff --git a/deepchecks/vision/suites/default_suites.py b/deepchecks/vision/suites/default_suites.py\n--- a/deepchecks/vision/suites/default_suites.py\n+++ b/deepchecks/vision/suites/default_suites.py\n@@ -31,7 +31,7 @@\n 'Train Test Validation Suite',\n HeatmapComparison(),\n TrainTestLabelDrift(),\n- TrainTestPredictionDrift(),\n+ TrainTestPredictionDrift().add_condition_drift_score_not_greater_than(),\n ImagePropertyDrift().add_condition_drift_score_not_greater_than(),\n ImageDatasetDrift()\n )\n", "issue": "[FEAT][CV] Add conditions to checks missing conditions\nSome checks are missing conditions:\r\n\r\n- [x] Heatmap\r\n- [x] Image Drift\r\n- [x] Train Test Drift\r\n- [x] Robustness \n", "before_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021-2022 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"Functions for loading the default (built-in) vision suites for various validation stages.\n\nEach function returns a new suite that is initialized with a list of checks and default conditions.\nIt is possible to customize these suites by editing the checks and conditions inside it after the suites' creation.\n\"\"\"\nfrom deepchecks.vision.checks import ClassPerformance, TrainTestLabelDrift, MeanAveragePrecisionReport, \\\n MeanAverageRecallReport, ImagePropertyDrift, ImageDatasetDrift, SimpleModelComparison, ConfusionMatrixReport, \\\n RobustnessReport, TrainTestPredictionDrift\nfrom deepchecks.vision import Suite\n\n\n__all__ = ['train_test_validation', 'model_evaluation', 'full_suite']\n\nfrom deepchecks.vision.checks.distribution import HeatmapComparison\n\n\ndef train_test_validation() -> Suite:\n \"\"\"Create a suite that is meant to validate correctness of train-test split, including integrity, \\\n distribution and leakage checks.\"\"\"\n return Suite(\n 'Train Test Validation Suite',\n HeatmapComparison(),\n TrainTestLabelDrift(),\n TrainTestPredictionDrift(),\n ImagePropertyDrift().add_condition_drift_score_not_greater_than(),\n ImageDatasetDrift()\n )\n\n\ndef model_evaluation() -> Suite:\n \"\"\"Create a suite that is meant to test model performance and overfit.\"\"\"\n return Suite(\n 'Model Evaluation Suite',\n ClassPerformance(),\n MeanAveragePrecisionReport(),\n MeanAverageRecallReport(),\n SimpleModelComparison(),\n ConfusionMatrixReport(),\n RobustnessReport().add_condition_degradation_not_greater_than()\n )\n\n\ndef full_suite() -> Suite:\n \"\"\"Create a suite that includes many of the implemented checks, for a quick overview of your model and data.\"\"\"\n return Suite(\n 'Full Suite',\n model_evaluation(),\n train_test_validation(),\n )\n", "path": "deepchecks/vision/suites/default_suites.py"}]} | 1,192 | 144 |
gh_patches_debug_35089 | rasdani/github-patches | git_diff | aio-libs__aiohttp-2237 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AttributeError: 'NoneType' object has no attribute 'errno'
## Long story short
Trying to resolve a domain which is an alias for another one, which does not have an A or CNAME record, raises AttributeError: 'NoneType' object has no attribute 'errno'
## Expected behaviour
Raise an error correctly, socket.gaierror probably.
## Actual behaviour
```Traceback (most recent call last):
File "xtest.py", line 16, in <module>
process()
File "/usr/lib/python3.6/asyncio/base_events.py", line 449, in run_until_complete
return future.result()
File "/usr/lib/python3.6/asyncio/tasks.py", line 239, in _step
result = coro.send(None)
File "/myenv/lib/python3.6/site-packages/aiohttp/helpers.py", line 72, in send
return self._coro.send(arg)
File "/myenv/lib/python3.6/site-packages/aiohttp/client.py", line 233, in _request
conn = yield from self._connector.connect(req)
File "/myenv/lib/python3.6/site-packages/aiohttp/connector.py", line 378, in connect
proto = yield from self._create_connection(req)
File "/myenv/lib/python3.6/site-packages/aiohttp/connector.py", line 687, in _create_connection
_, proto = yield from self._create_direct_connection(req)
File "/myenv/lib/python3.6/site-packages/aiohttp/connector.py", line 735, in _create_direct_connection
exc.errno,
AttributeError: 'NoneType' object has no attribute 'errno'
```
## Steps to reproduce
This script will reproduce the error.
```
import asyncio
import aiohttp
from aiohttp.resolver import AsyncResolver
def process():
url = 'http://esly.win/'
resolver = AsyncResolver()
conn = aiohttp.TCPConnector(resolver=resolver, verify_ssl=False)
session = aiohttp.ClientSession(connector=conn)
return session.get(url)
loop = asyncio.get_event_loop()
loop.run_until_complete(
process()
)
```
If I use the session without setting the connector it first raises a socket.gaierror but then
> During handling of the above exception, another exception occurred...
And the same traceback appears.
## Your environment
Python 3.6.0b2
Ubuntu 10.10
aiohttp==2.2,5
Also happens with aiohttp==2.3.0a0 (installed from git on 29/Aug/2017)
</issue>
<code>
[start of aiohttp/resolver.py]
1 import asyncio
2 import socket
3
4 from .abc import AbstractResolver
5
6
7 __all__ = ('ThreadedResolver', 'AsyncResolver', 'DefaultResolver')
8
9 try:
10 import aiodns
11 # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname')
12 except ImportError: # pragma: no cover
13 aiodns = None
14
15 aiodns_default = False
16
17
18 class ThreadedResolver(AbstractResolver):
19 """Use Executor for synchronous getaddrinfo() calls, which defaults to
20 concurrent.futures.ThreadPoolExecutor.
21 """
22
23 def __init__(self, loop=None):
24 if loop is None:
25 loop = asyncio.get_event_loop()
26 self._loop = loop
27
28 @asyncio.coroutine
29 def resolve(self, host, port=0, family=socket.AF_INET):
30 infos = yield from self._loop.getaddrinfo(
31 host, port, type=socket.SOCK_STREAM, family=family)
32
33 hosts = []
34 for family, _, proto, _, address in infos:
35 hosts.append(
36 {'hostname': host,
37 'host': address[0], 'port': address[1],
38 'family': family, 'proto': proto,
39 'flags': socket.AI_NUMERICHOST})
40
41 return hosts
42
43 @asyncio.coroutine
44 def close(self):
45 pass
46
47
48 class AsyncResolver(AbstractResolver):
49 """Use the `aiodns` package to make asynchronous DNS lookups"""
50
51 def __init__(self, loop=None, *args, **kwargs):
52 if loop is None:
53 loop = asyncio.get_event_loop()
54
55 if aiodns is None:
56 raise RuntimeError("Resolver requires aiodns library")
57
58 self._loop = loop
59 self._resolver = aiodns.DNSResolver(*args, loop=loop, **kwargs)
60
61 if not hasattr(self._resolver, 'gethostbyname'):
62 # aiodns 1.1 is not available, fallback to DNSResolver.query
63 self.resolve = self.resolve_with_query
64
65 @asyncio.coroutine
66 def resolve(self, host, port=0, family=socket.AF_INET):
67 hosts = []
68 resp = yield from self._resolver.gethostbyname(host, family)
69
70 for address in resp.addresses:
71 hosts.append(
72 {'hostname': host,
73 'host': address, 'port': port,
74 'family': family, 'proto': 0,
75 'flags': socket.AI_NUMERICHOST})
76 return hosts
77
78 @asyncio.coroutine
79 def resolve_with_query(self, host, port=0, family=socket.AF_INET):
80 if family == socket.AF_INET6:
81 qtype = 'AAAA'
82 else:
83 qtype = 'A'
84
85 hosts = []
86 resp = yield from self._resolver.query(host, qtype)
87
88 for rr in resp:
89 hosts.append(
90 {'hostname': host,
91 'host': rr.host, 'port': port,
92 'family': family, 'proto': 0,
93 'flags': socket.AI_NUMERICHOST})
94
95 return hosts
96
97 @asyncio.coroutine
98 def close(self):
99 return self._resolver.cancel()
100
101
102 DefaultResolver = AsyncResolver if aiodns_default else ThreadedResolver
103
[end of aiohttp/resolver.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/aiohttp/resolver.py b/aiohttp/resolver.py
--- a/aiohttp/resolver.py
+++ b/aiohttp/resolver.py
@@ -60,31 +60,42 @@
if not hasattr(self._resolver, 'gethostbyname'):
# aiodns 1.1 is not available, fallback to DNSResolver.query
- self.resolve = self.resolve_with_query
+ self.resolve = self._resolve_with_query
@asyncio.coroutine
def resolve(self, host, port=0, family=socket.AF_INET):
+ try:
+ resp = yield from self._resolver.gethostbyname(host, family)
+ except aiodns.error.DNSError as exc:
+ msg = exc.args[1] if len(exc.args) >= 1 else "DNS lookup failed"
+ raise OSError(msg) from exc
hosts = []
- resp = yield from self._resolver.gethostbyname(host, family)
-
for address in resp.addresses:
hosts.append(
{'hostname': host,
'host': address, 'port': port,
'family': family, 'proto': 0,
'flags': socket.AI_NUMERICHOST})
+
+ if not hosts:
+ raise OSError("DNS lookup failed")
+
return hosts
@asyncio.coroutine
- def resolve_with_query(self, host, port=0, family=socket.AF_INET):
+ def _resolve_with_query(self, host, port=0, family=socket.AF_INET):
if family == socket.AF_INET6:
qtype = 'AAAA'
else:
qtype = 'A'
- hosts = []
- resp = yield from self._resolver.query(host, qtype)
+ try:
+ resp = yield from self._resolver.query(host, qtype)
+ except aiodns.error.DNSError as exc:
+ msg = exc.args[1] if len(exc.args) >= 1 else "DNS lookup failed"
+ raise OSError(msg) from exc
+ hosts = []
for rr in resp:
hosts.append(
{'hostname': host,
@@ -92,6 +103,9 @@
'family': family, 'proto': 0,
'flags': socket.AI_NUMERICHOST})
+ if not hosts:
+ raise OSError("DNS lookup failed")
+
return hosts
@asyncio.coroutine
| {"golden_diff": "diff --git a/aiohttp/resolver.py b/aiohttp/resolver.py\n--- a/aiohttp/resolver.py\n+++ b/aiohttp/resolver.py\n@@ -60,31 +60,42 @@\n \n if not hasattr(self._resolver, 'gethostbyname'):\n # aiodns 1.1 is not available, fallback to DNSResolver.query\n- self.resolve = self.resolve_with_query\n+ self.resolve = self._resolve_with_query\n \n @asyncio.coroutine\n def resolve(self, host, port=0, family=socket.AF_INET):\n+ try:\n+ resp = yield from self._resolver.gethostbyname(host, family)\n+ except aiodns.error.DNSError as exc:\n+ msg = exc.args[1] if len(exc.args) >= 1 else \"DNS lookup failed\"\n+ raise OSError(msg) from exc\n hosts = []\n- resp = yield from self._resolver.gethostbyname(host, family)\n-\n for address in resp.addresses:\n hosts.append(\n {'hostname': host,\n 'host': address, 'port': port,\n 'family': family, 'proto': 0,\n 'flags': socket.AI_NUMERICHOST})\n+\n+ if not hosts:\n+ raise OSError(\"DNS lookup failed\")\n+\n return hosts\n \n @asyncio.coroutine\n- def resolve_with_query(self, host, port=0, family=socket.AF_INET):\n+ def _resolve_with_query(self, host, port=0, family=socket.AF_INET):\n if family == socket.AF_INET6:\n qtype = 'AAAA'\n else:\n qtype = 'A'\n \n- hosts = []\n- resp = yield from self._resolver.query(host, qtype)\n+ try:\n+ resp = yield from self._resolver.query(host, qtype)\n+ except aiodns.error.DNSError as exc:\n+ msg = exc.args[1] if len(exc.args) >= 1 else \"DNS lookup failed\"\n+ raise OSError(msg) from exc\n \n+ hosts = []\n for rr in resp:\n hosts.append(\n {'hostname': host,\n@@ -92,6 +103,9 @@\n 'family': family, 'proto': 0,\n 'flags': socket.AI_NUMERICHOST})\n \n+ if not hosts:\n+ raise OSError(\"DNS lookup failed\")\n+\n return hosts\n \n @asyncio.coroutine\n", "issue": "AttributeError: 'NoneType' object has no attribute 'errno'\n## Long story short\r\n\r\nTrying to resolve a domain which is an alias for another one, which does not have an A or CNAME record, raises AttributeError: 'NoneType' object has no attribute 'errno'\r\n\r\n## Expected behaviour\r\n\r\nRaise an error correctly, socket.gaierror probably.\r\n\r\n## Actual behaviour\r\n\r\n```Traceback (most recent call last):\r\n File \"xtest.py\", line 16, in <module>\r\n process()\r\n File \"/usr/lib/python3.6/asyncio/base_events.py\", line 449, in run_until_complete\r\n return future.result()\r\n File \"/usr/lib/python3.6/asyncio/tasks.py\", line 239, in _step\r\n result = coro.send(None)\r\n File \"/myenv/lib/python3.6/site-packages/aiohttp/helpers.py\", line 72, in send\r\n return self._coro.send(arg)\r\n File \"/myenv/lib/python3.6/site-packages/aiohttp/client.py\", line 233, in _request\r\n conn = yield from self._connector.connect(req)\r\n File \"/myenv/lib/python3.6/site-packages/aiohttp/connector.py\", line 378, in connect\r\n proto = yield from self._create_connection(req)\r\n File \"/myenv/lib/python3.6/site-packages/aiohttp/connector.py\", line 687, in _create_connection\r\n _, proto = yield from self._create_direct_connection(req)\r\n File \"/myenv/lib/python3.6/site-packages/aiohttp/connector.py\", line 735, in _create_direct_connection\r\n exc.errno,\r\nAttributeError: 'NoneType' object has no attribute 'errno'\r\n```\r\n\r\n## Steps to reproduce\r\n\r\nThis script will reproduce the error.\r\n\r\n```\r\nimport asyncio\r\nimport aiohttp\r\nfrom aiohttp.resolver import AsyncResolver\r\n\r\ndef process():\r\n url = 'http://esly.win/'\r\n resolver = AsyncResolver()\r\n conn = aiohttp.TCPConnector(resolver=resolver, verify_ssl=False)\r\n session = 
aiohttp.ClientSession(connector=conn)\r\n return session.get(url)\r\n\r\nloop = asyncio.get_event_loop()\r\nloop.run_until_complete(\r\n process()\r\n)\r\n```\r\n\r\nIf I use the session without setting the connector it first raises a socket.gaierror but then \r\n> During handling of the above exception, another exception occurred...\r\n\r\nAnd the same traceback appears.\r\n\r\n## Your environment\r\nPython 3.6.0b2\r\nUbuntu 10.10\r\naiohttp==2.2,5 \r\nAlso happens with aiohttp==2.3.0a0 (installed from git on 29/Aug/2017)\n", "before_files": [{"content": "import asyncio\nimport socket\n\nfrom .abc import AbstractResolver\n\n\n__all__ = ('ThreadedResolver', 'AsyncResolver', 'DefaultResolver')\n\ntry:\n import aiodns\n # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname')\nexcept ImportError: # pragma: no cover\n aiodns = None\n\naiodns_default = False\n\n\nclass ThreadedResolver(AbstractResolver):\n \"\"\"Use Executor for synchronous getaddrinfo() calls, which defaults to\n concurrent.futures.ThreadPoolExecutor.\n \"\"\"\n\n def __init__(self, loop=None):\n if loop is None:\n loop = asyncio.get_event_loop()\n self._loop = loop\n\n @asyncio.coroutine\n def resolve(self, host, port=0, family=socket.AF_INET):\n infos = yield from self._loop.getaddrinfo(\n host, port, type=socket.SOCK_STREAM, family=family)\n\n hosts = []\n for family, _, proto, _, address in infos:\n hosts.append(\n {'hostname': host,\n 'host': address[0], 'port': address[1],\n 'family': family, 'proto': proto,\n 'flags': socket.AI_NUMERICHOST})\n\n return hosts\n\n @asyncio.coroutine\n def close(self):\n pass\n\n\nclass AsyncResolver(AbstractResolver):\n \"\"\"Use the `aiodns` package to make asynchronous DNS lookups\"\"\"\n\n def __init__(self, loop=None, *args, **kwargs):\n if loop is None:\n loop = asyncio.get_event_loop()\n\n if aiodns is None:\n raise RuntimeError(\"Resolver requires aiodns library\")\n\n self._loop = loop\n self._resolver = aiodns.DNSResolver(*args, loop=loop, **kwargs)\n\n if not hasattr(self._resolver, 'gethostbyname'):\n # aiodns 1.1 is not available, fallback to DNSResolver.query\n self.resolve = self.resolve_with_query\n\n @asyncio.coroutine\n def resolve(self, host, port=0, family=socket.AF_INET):\n hosts = []\n resp = yield from self._resolver.gethostbyname(host, family)\n\n for address in resp.addresses:\n hosts.append(\n {'hostname': host,\n 'host': address, 'port': port,\n 'family': family, 'proto': 0,\n 'flags': socket.AI_NUMERICHOST})\n return hosts\n\n @asyncio.coroutine\n def resolve_with_query(self, host, port=0, family=socket.AF_INET):\n if family == socket.AF_INET6:\n qtype = 'AAAA'\n else:\n qtype = 'A'\n\n hosts = []\n resp = yield from self._resolver.query(host, qtype)\n\n for rr in resp:\n hosts.append(\n {'hostname': host,\n 'host': rr.host, 'port': port,\n 'family': family, 'proto': 0,\n 'flags': socket.AI_NUMERICHOST})\n\n return hosts\n\n @asyncio.coroutine\n def close(self):\n return self._resolver.cancel()\n\n\nDefaultResolver = AsyncResolver if aiodns_default else ThreadedResolver\n", "path": "aiohttp/resolver.py"}]} | 2,016 | 526 |
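The core of the fix above is translating `aiodns.error.DNSError` into `OSError`, so the connector's errno handling sees a socket-style exception instead of crashing on an exception without `.errno`. A stripped-down sketch of that pattern in async/await form (the helper name is made up; the original code uses `yield from` coroutines):

```python
# Minimal sketch of the DNSError -> OSError translation introduced by the patch.
import aiodns

async def gethostbyname_or_oserror(resolver, host, family):
    try:
        return await resolver.gethostbyname(host, family)
    except aiodns.error.DNSError as exc:
        msg = exc.args[1] if len(exc.args) >= 1 else "DNS lookup failed"
        raise OSError(msg) from exc
```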
gh_patches_debug_13455 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-3811 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
gcp serverless runtime error on implicit boto dependency
reported in gitter, gcp functions should not need to depend on boto3, looks like some of the securityhub work caused an implicit dependency on boto3.
```
textPayload: "ModuleNotFoundError: No module named 'boto3'" - Getting this error for the cloud function to stop a instance in GCP
instance-off
qte7iow5dhzi
Traceback (most recent call last): File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 346, in run_http_function result = _function_handler.invoke_user_function(flask.request) File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 217, in invoke_user_function return call_user_function(request_or_event) File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 210, in call_user_function return self._user_function(request_or_event) File "/user_code/main.py", line 21, in run from c7n_gcp.handler import run File "/user_code/c7n_gcp/handler.py", line 24, in <module> from c7n_gcp.entry import initialize_gcp File "/user_code/c7n_gcp/entry.py", line 18, in <module> import c7n_gcp.resources.bigquery File "/user_code/c7n_gcp/resources/bigquery.py", line 16, in <module> from c7n_gcp.query import QueryResourceManager, TypeInfo File "/user_code/c7n_gcp/query.py", line 23, in <module> from c7n.filters import FilterRegistry File "/user_code/c7n/filters/init.py", line 32, in <module> from .securityhub import SecurityHubFindingFilter File "/user_code/c7n/filters/securityhub.py", line 19, in <module> from c7n.resources import aws File "/user_code/c7n/resources/aws.py", line 31, in <module> import boto3 ModuleNotFoundError: No module named 'boto3
```
</issue>
<code>
[start of c7n/filters/securityhub.py]
1 # Copyright 2019 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from __future__ import absolute_import, division, print_function, unicode_literals
15
16 from c7n.utils import local_session, type_schema
17 from .core import Filter
18 from c7n.manager import resources
19 from c7n.resources import aws
20
21
22 class SecurityHubFindingFilter(Filter):
23 """Check if there are Security Hub Findings related to the resources
24 """
25 schema = type_schema(
26 'finding',
27 # Many folks do an aggregator region, allow them to use that
28 # for filtering.
29 region={'type': 'string'},
30 query={'type': 'object'})
31
32 permissions = ('securityhub:GetFindings',)
33 annotation_key = 'c7n:finding-filter'
34 query_shape = 'AwsSecurityFindingFilters'
35
36 def validate(self):
37 query = self.data.get('query')
38 if query:
39 aws.shape_validate(query, self.query_shape, 'securityhub')
40
41 def process(self, resources, event=None):
42 client = local_session(
43 self.manager.session_factory).client(
44 'securityhub', region_name=self.data.get('region'))
45 found = []
46 params = dict(self.data.get('query', {}))
47
48 for r_arn, resource in zip(self.manager.get_arns(resources), resources):
49 params['ResourceId'] = [{"Value": r_arn, "Comparison": "EQUALS"}]
50 findings = client.get_findings(Filters=params).get("Findings")
51 if len(findings) > 0:
52 resource[self.annotation_key] = findings
53 found.append(resource)
54 return found
55
56 @classmethod
57 def register_resources(klass, registry, resource_class):
58 """ meta model subscriber on resource registration.
59
60 SecurityHub Findings Filter
61 """
62 for rtype, resource_manager in registry.items():
63 if not resource_manager.has_arn():
64 continue
65 if 'post-finding' in resource_manager.action_registry:
66 continue
67 resource_class.filter_registry.register('finding', klass)
68
69
70 resources.subscribe(resources.EVENT_REGISTER, SecurityHubFindingFilter.register_resources)
71
[end of c7n/filters/securityhub.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/c7n/filters/securityhub.py b/c7n/filters/securityhub.py
--- a/c7n/filters/securityhub.py
+++ b/c7n/filters/securityhub.py
@@ -16,7 +16,6 @@
from c7n.utils import local_session, type_schema
from .core import Filter
from c7n.manager import resources
-from c7n.resources import aws
class SecurityHubFindingFilter(Filter):
@@ -36,6 +35,7 @@
def validate(self):
query = self.data.get('query')
if query:
+ from c7n.resources import aws
aws.shape_validate(query, self.query_shape, 'securityhub')
def process(self, resources, event=None):
| {"golden_diff": "diff --git a/c7n/filters/securityhub.py b/c7n/filters/securityhub.py\n--- a/c7n/filters/securityhub.py\n+++ b/c7n/filters/securityhub.py\n@@ -16,7 +16,6 @@\n from c7n.utils import local_session, type_schema\n from .core import Filter\n from c7n.manager import resources\n-from c7n.resources import aws\n \n \n class SecurityHubFindingFilter(Filter):\n@@ -36,6 +35,7 @@\n def validate(self):\n query = self.data.get('query')\n if query:\n+ from c7n.resources import aws\n aws.shape_validate(query, self.query_shape, 'securityhub')\n \n def process(self, resources, event=None):\n", "issue": "gcp serverless runtime error on implicit boto dependency\nreported in gitter, gcp functions should not need to depend on boto3, looks like some of the securityhub work caused an implicit dependency on boto3.\r\n\r\n```\r\ntextPayload: \"ModuleNotFoundError: No module named 'boto3'\" - Getting this error for the cloud function to stop a instance in GCP\r\ninstance-off\r\nqte7iow5dhzi\r\nTraceback (most recent call last): File \"/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py\", line 346, in run_http_function result = _function_handler.invoke_user_function(flask.request) File \"/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py\", line 217, in invoke_user_function return call_user_function(request_or_event) File \"/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py\", line 210, in call_user_function return self._user_function(request_or_event) File \"/user_code/main.py\", line 21, in run from c7n_gcp.handler import run File \"/user_code/c7n_gcp/handler.py\", line 24, in <module> from c7n_gcp.entry import initialize_gcp File \"/user_code/c7n_gcp/entry.py\", line 18, in <module> import c7n_gcp.resources.bigquery File \"/user_code/c7n_gcp/resources/bigquery.py\", line 16, in <module> from c7n_gcp.query import QueryResourceManager, TypeInfo File \"/user_code/c7n_gcp/query.py\", line 23, in <module> from c7n.filters import FilterRegistry File \"/user_code/c7n/filters/init.py\", line 32, in <module> from .securityhub import SecurityHubFindingFilter File \"/user_code/c7n/filters/securityhub.py\", line 19, in <module> from c7n.resources import aws File \"/user_code/c7n/resources/aws.py\", line 31, in <module> import boto3 ModuleNotFoundError: No module named 'boto3\r\n```\n", "before_files": [{"content": "# Copyright 2019 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom c7n.utils import local_session, type_schema\nfrom .core import Filter\nfrom c7n.manager import resources\nfrom c7n.resources import aws\n\n\nclass SecurityHubFindingFilter(Filter):\n \"\"\"Check if there are Security Hub Findings related to the resources\n \"\"\"\n schema = type_schema(\n 'finding',\n # Many folks do an aggregator region, allow them to use that\n # for filtering.\n region={'type': 'string'},\n query={'type': 'object'})\n\n permissions = 
('securityhub:GetFindings',)\n annotation_key = 'c7n:finding-filter'\n query_shape = 'AwsSecurityFindingFilters'\n\n def validate(self):\n query = self.data.get('query')\n if query:\n aws.shape_validate(query, self.query_shape, 'securityhub')\n\n def process(self, resources, event=None):\n client = local_session(\n self.manager.session_factory).client(\n 'securityhub', region_name=self.data.get('region'))\n found = []\n params = dict(self.data.get('query', {}))\n\n for r_arn, resource in zip(self.manager.get_arns(resources), resources):\n params['ResourceId'] = [{\"Value\": r_arn, \"Comparison\": \"EQUALS\"}]\n findings = client.get_findings(Filters=params).get(\"Findings\")\n if len(findings) > 0:\n resource[self.annotation_key] = findings\n found.append(resource)\n return found\n\n @classmethod\n def register_resources(klass, registry, resource_class):\n \"\"\" meta model subscriber on resource registration.\n\n SecurityHub Findings Filter\n \"\"\"\n for rtype, resource_manager in registry.items():\n if not resource_manager.has_arn():\n continue\n if 'post-finding' in resource_manager.action_registry:\n continue\n resource_class.filter_registry.register('finding', klass)\n\n\nresources.subscribe(resources.EVENT_REGISTER, SecurityHubFindingFilter.register_resources)\n", "path": "c7n/filters/securityhub.py"}]} | 1,691 | 164 |
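The fix above is a deferred import; sketched generically, the pattern is to keep the boto3-backed module out of module import time so non-AWS runtimes (such as GCP Cloud Functions) never load it. The function below is a simplified stand-in, not the real c7n API:

```python
# Pattern sketch: import the AWS module only on the code path that needs it.
def validate_query(data, shape, service):
    query = data.get("query")
    if query:
        from c7n.resources import aws  # deferred: pulls in boto3 only here
        aws.shape_validate(query, shape, service)
```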
gh_patches_debug_20277 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-1080 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Reduce detail level of timestamp on posts
**Is your feature request related to a problem? Please describe.**
I think the time when a post was posted is a tad too detailed. For posts in the last 24h, it changes every time you refresh.

**Describe the solution you'd like**
I think the firstmost unit would be enough.
Also, after a few days (I suggest 3), the date (Apr 28) rather than "2 weeks(, 4 days in the current version)" seems a bit more helpful. After 1 year, the date could be shown in "Apr 2021",
This is subjective of course, but imho Bookwyrm is a platform where the "when" doesn't really matter (in comparison to e.g. Mastodon where many are posting news and other stuff where the temporal context is more important).
**Describe alternatives you've considered**
Hovering over the time could show the exact time as a tooltip. I think of this rather as an addition than an alternative and think both would complement each other.
</issue>
<code>
[start of bookwyrm/templatetags/status_display.py]
1 """ template filters """
2 from django import template
3
4 from bookwyrm import models
5 from bookwyrm.templatetags.utilities import get_user_identifier
6
7
8 register = template.Library()
9
10
11 @register.filter(name="mentions")
12 def get_mentions(status, user):
13 """people to @ in a reply: the parent and all mentions"""
14 mentions = set([status.user] + list(status.mention_users.all()))
15 return (
16 " ".join("@" + get_user_identifier(m) for m in mentions if not m == user) + " "
17 )
18
19
20 @register.filter(name="replies")
21 def get_replies(status):
22 """get all direct replies to a status"""
23 # TODO: this limit could cause problems
24 return models.Status.objects.filter(
25 reply_parent=status,
26 deleted=False,
27 ).select_subclasses()[:10]
28
29
30 @register.filter(name="parent")
31 def get_parent(status):
32 """get the reply parent for a status"""
33 return (
34 models.Status.objects.filter(id=status.reply_parent_id)
35 .select_subclasses()
36 .get()
37 )
38
39
40 @register.filter(name="boosted_status")
41 def get_boosted(boost):
42 """load a boosted status. have to do this or it won't get foreign keys"""
43 return models.Status.objects.select_subclasses().get(id=boost.boosted_status.id)
44
[end of bookwyrm/templatetags/status_display.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bookwyrm/templatetags/status_display.py b/bookwyrm/templatetags/status_display.py
--- a/bookwyrm/templatetags/status_display.py
+++ b/bookwyrm/templatetags/status_display.py
@@ -1,6 +1,8 @@
""" template filters """
+from dateutil.relativedelta import relativedelta
from django import template
-
+from django.contrib.humanize.templatetags.humanize import naturaltime, naturalday
+from django.utils import timezone
from bookwyrm import models
from bookwyrm.templatetags.utilities import get_user_identifier
@@ -41,3 +43,17 @@
def get_boosted(boost):
"""load a boosted status. have to do this or it won't get foreign keys"""
return models.Status.objects.select_subclasses().get(id=boost.boosted_status.id)
+
+
[email protected](name="published_date")
+def get_published_date(date):
+ """less verbose combo of humanize filters"""
+ if not date:
+ return ""
+ now = timezone.now()
+ delta = relativedelta(now, date)
+ if delta.years:
+ return naturalday(date)
+ if delta.days:
+ return naturalday(date, "M j")
+ return naturaltime(date)
| {"golden_diff": "diff --git a/bookwyrm/templatetags/status_display.py b/bookwyrm/templatetags/status_display.py\n--- a/bookwyrm/templatetags/status_display.py\n+++ b/bookwyrm/templatetags/status_display.py\n@@ -1,6 +1,8 @@\n \"\"\" template filters \"\"\"\n+from dateutil.relativedelta import relativedelta\n from django import template\n-\n+from django.contrib.humanize.templatetags.humanize import naturaltime, naturalday\n+from django.utils import timezone\n from bookwyrm import models\n from bookwyrm.templatetags.utilities import get_user_identifier\n \n@@ -41,3 +43,17 @@\n def get_boosted(boost):\n \"\"\"load a boosted status. have to do this or it won't get foreign keys\"\"\"\n return models.Status.objects.select_subclasses().get(id=boost.boosted_status.id)\n+\n+\[email protected](name=\"published_date\")\n+def get_published_date(date):\n+ \"\"\"less verbose combo of humanize filters\"\"\"\n+ if not date:\n+ return \"\"\n+ now = timezone.now()\n+ delta = relativedelta(now, date)\n+ if delta.years:\n+ return naturalday(date)\n+ if delta.days:\n+ return naturalday(date, \"M j\")\n+ return naturaltime(date)\n", "issue": "Reduce detail level of timestamp on posts\n**Is your feature request related to a problem? Please describe.**\r\nI think the time when a post was posted is a tad too detailed. For posts in the last 24h, it changes every time you refresh.\r\n\r\n\r\n**Describe the solution you'd like**\r\nI think the firstmost unit would be enough.\r\n\r\nAlso, after a few days (I suggest 3), the date (Apr 28) rather than \"2 weeks(, 4 days in the current version)\" seems a bit more helpful. After 1 year, the date could be shown in \"Apr 2021\",\r\n\r\nThis is subjective of course, but imho Bookwyrm is a platform where the \"when\" doesn't really matter (in comparison to e.g. Mastodon where many are posting news and other stuff where the temporal context is more important). \r\n\r\n**Describe alternatives you've considered**\r\nHovering over the time could show the exact time as a tooltip. I think of this rather as an addition than an alternative and think both would complement each other.\n", "before_files": [{"content": "\"\"\" template filters \"\"\"\nfrom django import template\n\nfrom bookwyrm import models\nfrom bookwyrm.templatetags.utilities import get_user_identifier\n\n\nregister = template.Library()\n\n\[email protected](name=\"mentions\")\ndef get_mentions(status, user):\n \"\"\"people to @ in a reply: the parent and all mentions\"\"\"\n mentions = set([status.user] + list(status.mention_users.all()))\n return (\n \" \".join(\"@\" + get_user_identifier(m) for m in mentions if not m == user) + \" \"\n )\n\n\[email protected](name=\"replies\")\ndef get_replies(status):\n \"\"\"get all direct replies to a status\"\"\"\n # TODO: this limit could cause problems\n return models.Status.objects.filter(\n reply_parent=status,\n deleted=False,\n ).select_subclasses()[:10]\n\n\[email protected](name=\"parent\")\ndef get_parent(status):\n \"\"\"get the reply parent for a status\"\"\"\n return (\n models.Status.objects.filter(id=status.reply_parent_id)\n .select_subclasses()\n .get()\n )\n\n\[email protected](name=\"boosted_status\")\ndef get_boosted(boost):\n \"\"\"load a boosted status. have to do this or it won't get foreign keys\"\"\"\n return models.Status.objects.select_subclasses().get(id=boost.boosted_status.id)\n", "path": "bookwyrm/templatetags/status_display.py"}]} | 1,193 | 292 |
gh_patches_debug_35071 | rasdani/github-patches | git_diff | microsoft__playwright-python-53 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Auto release on PyPi on tags
General interest in that? Should be pretty easy with GitHub Actions, only have to set the a Pypi API key on your end.
Example: https://github.com/microsoft/playwright-python/new/master?filename=.github%2Fworkflows%2Fpython-publish.yml&workflow_template=python-publish
</issue>
<code>
[start of upload_package.py]
1 # Copyright (c) Microsoft Corporation.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import subprocess
16
17 subprocess.run("python -m twine upload dist/*", shell=True)
18
[end of upload_package.py]
[start of setup.py]
1 # Copyright (c) Microsoft Corporation.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import setuptools
16
17 with open("README.md", "r", encoding="utf-8") as fh:
18 long_description = fh.read()
19
20 setuptools.setup(
21 name="playwright",
22 version="0.0.3",
23 author="Microsoft Corporation",
24 author_email="",
25 description="A high-level API to automate web browsers",
26 long_description=long_description,
27 long_description_content_type="text/markdown",
28 url="https://github.com/Microsoft/playwright-python",
29 packages=setuptools.find_packages(),
30 include_package_data=True,
31 install_requires=["pyee", "typing-extensions",],
32 classifiers=[
33 "Topic :: Software Development :: Testing",
34 "Topic :: Internet :: WWW/HTTP :: Browsers",
35 "Intended Audience :: Developers",
36 "Programming Language :: Python :: 3",
37 "Programming Language :: Python :: 3.7",
38 "Programming Language :: Python :: 3.8",
39 "License :: OSI Approved :: Apache Software License",
40 "Operating System :: OS Independent",
41 ],
42 python_requires=">=3.7",
43 )
44
[end of setup.py]
[start of playwright/__init__.py]
1 # Copyright (c) Microsoft Corporation.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from playwright.main import playwright_object
16 import playwright.helper as helper
17
18 chromium = playwright_object.chromium
19 firefox = playwright_object.firefox
20 webkit = playwright_object.webkit
21 devices = playwright_object.devices
22 browser_types = playwright_object.browser_types
23 Error = helper.Error
24 TimeoutError = helper.TimeoutError
25
26 __all__ = [
27 "browser_types",
28 "chromium",
29 "firefox",
30 "webkit",
31 "devices",
32 "Error",
33 "TimeoutError",
34 ]
35
[end of playwright/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/playwright/__init__.py b/playwright/__init__.py
--- a/playwright/__init__.py
+++ b/playwright/__init__.py
@@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+from playwright._repo_version import version as __version__ # noqa:F401
from playwright.main import playwright_object
import playwright.helper as helper
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -19,16 +19,15 @@
setuptools.setup(
name="playwright",
- version="0.0.3",
author="Microsoft Corporation",
author_email="",
description="A high-level API to automate web browsers",
long_description=long_description,
long_description_content_type="text/markdown",
url="https://github.com/Microsoft/playwright-python",
- packages=setuptools.find_packages(),
+ packages=["playwright"],
include_package_data=True,
- install_requires=["pyee", "typing-extensions",],
+ install_requires=["pyee", "typing-extensions"],
classifiers=[
"Topic :: Software Development :: Testing",
"Topic :: Internet :: WWW/HTTP :: Browsers",
@@ -40,4 +39,10 @@
"Operating System :: OS Independent",
],
python_requires=">=3.7",
+ use_scm_version={
+ "version_scheme": "post-release",
+ "write_to": "playwright/_repo_version.py",
+ "write_to_template": 'version = "{version}"\n',
+ },
+ setup_requires=["setuptools_scm"],
)
diff --git a/upload_package.py b/upload_package.py
deleted file mode 100644
--- a/upload_package.py
+++ /dev/null
@@ -1,17 +0,0 @@
-# Copyright (c) Microsoft Corporation.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import subprocess
-
-subprocess.run("python -m twine upload dist/*", shell=True)
| {"golden_diff": "diff --git a/playwright/__init__.py b/playwright/__init__.py\n--- a/playwright/__init__.py\n+++ b/playwright/__init__.py\n@@ -12,6 +12,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+from playwright._repo_version import version as __version__ # noqa:F401\n from playwright.main import playwright_object\n import playwright.helper as helper\n \ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -19,16 +19,15 @@\n \n setuptools.setup(\n name=\"playwright\",\n- version=\"0.0.3\",\n author=\"Microsoft Corporation\",\n author_email=\"\",\n description=\"A high-level API to automate web browsers\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/Microsoft/playwright-python\",\n- packages=setuptools.find_packages(),\n+ packages=[\"playwright\"],\n include_package_data=True,\n- install_requires=[\"pyee\", \"typing-extensions\",],\n+ install_requires=[\"pyee\", \"typing-extensions\"],\n classifiers=[\n \"Topic :: Software Development :: Testing\",\n \"Topic :: Internet :: WWW/HTTP :: Browsers\",\n@@ -40,4 +39,10 @@\n \"Operating System :: OS Independent\",\n ],\n python_requires=\">=3.7\",\n+ use_scm_version={\n+ \"version_scheme\": \"post-release\",\n+ \"write_to\": \"playwright/_repo_version.py\",\n+ \"write_to_template\": 'version = \"{version}\"\\n',\n+ },\n+ setup_requires=[\"setuptools_scm\"],\n )\ndiff --git a/upload_package.py b/upload_package.py\ndeleted file mode 100644\n--- a/upload_package.py\n+++ /dev/null\n@@ -1,17 +0,0 @@\n-# Copyright (c) Microsoft Corporation.\n-#\n-# Licensed under the Apache License, Version 2.0 (the \"License\");\n-# you may not use this file except in compliance with the License.\n-# You may obtain a copy of the License at\n-#\n-# http://www.apache.org/licenses/LICENSE-2.0\n-#\n-# Unless required by applicable law or agreed to in writing, software\n-# distributed under the License is distributed on an \"AS IS\" BASIS,\n-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n-# See the License for the specific language governing permissions and\n-# limitations under the License.\n-\n-import subprocess\n-\n-subprocess.run(\"python -m twine upload dist/*\", shell=True)\n", "issue": "Auto release on PyPi on tags\nGeneral interest in that? 
Should be pretty easy with GitHub Actions, only have to set the a Pypi API key on your end.\r\n\r\nExample: https://github.com/microsoft/playwright-python/new/master?filename=.github%2Fworkflows%2Fpython-publish.yml&workflow_template=python-publish\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport subprocess\n\nsubprocess.run(\"python -m twine upload dist/*\", shell=True)\n", "path": "upload_package.py"}, {"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport setuptools\n\nwith open(\"README.md\", \"r\", encoding=\"utf-8\") as fh:\n long_description = fh.read()\n\nsetuptools.setup(\n name=\"playwright\",\n version=\"0.0.3\",\n author=\"Microsoft Corporation\",\n author_email=\"\",\n description=\"A high-level API to automate web browsers\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/Microsoft/playwright-python\",\n packages=setuptools.find_packages(),\n include_package_data=True,\n install_requires=[\"pyee\", \"typing-extensions\",],\n classifiers=[\n \"Topic :: Software Development :: Testing\",\n \"Topic :: Internet :: WWW/HTTP :: Browsers\",\n \"Intended Audience :: Developers\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n ],\n python_requires=\">=3.7\",\n)\n", "path": "setup.py"}, {"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom playwright.main import playwright_object\nimport playwright.helper as helper\n\nchromium = playwright_object.chromium\nfirefox = playwright_object.firefox\nwebkit = playwright_object.webkit\ndevices = playwright_object.devices\nbrowser_types = playwright_object.browser_types\nError = helper.Error\nTimeoutError = 
helper.TimeoutError\n\n__all__ = [\n \"browser_types\",\n \"chromium\",\n \"firefox\",\n \"webkit\",\n \"devices\",\n \"Error\",\n \"TimeoutError\",\n]\n", "path": "playwright/__init__.py"}]} | 1,524 | 580 |
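With the `setuptools_scm` configuration introduced above, the released version is derived from the git tag instead of the hard-coded `version="0.0.3"`; a tiny sketch of what that exposes at runtime (the concrete version string is illustrative):

```python
# After the change, setuptools_scm writes playwright/_repo_version.py at build
# time and __init__.py re-exports it, so tag-driven releases need no manual bump.
import playwright

print(playwright.__version__)  # e.g. "0.0.4" when built from a v0.0.4 tag
```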
gh_patches_debug_166 | rasdani/github-patches | git_diff | goauthentik__authentik-9516 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
2024.4.0 LongRunningTransaction
**Describe the bug**
Prometheus alert for a long running transaction.
I think the transaction is
```
SELECT pg_advisory_unlock($1)
```
**To Reproduce**
No activity, sitting idle
**Expected behavior**
Shouldn't have the alert
**Screenshots**
**Logs**
**Version and Deployment (please complete the following information):**
2024.4.0 kubernetes
**Additional context**
Add any other context about the problem here.
</issue>
<code>
[start of lifecycle/migrate.py]
1 #!/usr/bin/env python
2 """System Migration handler"""
3 from importlib.util import module_from_spec, spec_from_file_location
4 from inspect import getmembers, isclass
5 from os import environ, system
6 from pathlib import Path
7 from typing import Any
8
9 from psycopg import Connection, Cursor, connect
10 from structlog.stdlib import get_logger
11
12 from authentik.lib.config import CONFIG
13
14 LOGGER = get_logger()
15 ADV_LOCK_UID = 1000
16 LOCKED = False
17
18
19 class CommandError(Exception):
20 """Error raised when a system_crit command fails"""
21
22
23 class BaseMigration:
24 """Base System Migration"""
25
26 cur: Cursor
27 con: Connection
28
29 def __init__(self, cur: Any, con: Any):
30 self.cur = cur
31 self.con = con
32
33 def system_crit(self, command: str):
34 """Run system command"""
35 LOGGER.debug("Running system_crit command", command=command)
36 retval = system(command) # nosec
37 if retval != 0:
38 raise CommandError("Migration error")
39
40 def fake_migration(self, *app_migration: tuple[str, str]):
41 """Fake apply a list of migrations, arguments are
42 expected to be tuples of (app_label, migration_name)"""
43 for app, _migration in app_migration:
44 self.system_crit(f"./manage.py migrate {app} {_migration} --fake")
45
46 def needs_migration(self) -> bool:
47 """Return true if Migration needs to be run"""
48 return False
49
50 def run(self):
51 """Run the actual migration"""
52
53
54 def wait_for_lock(cursor: Cursor):
55 """lock an advisory lock to prevent multiple instances from migrating at once"""
56 LOGGER.info("waiting to acquire database lock")
57 cursor.execute("SELECT pg_advisory_lock(%s)", (ADV_LOCK_UID,))
58
59 global LOCKED # noqa: PLW0603
60 LOCKED = True
61
62
63 def release_lock(cursor: Cursor):
64 """Release database lock"""
65 if not LOCKED:
66 return
67 LOGGER.info("releasing database lock")
68 cursor.execute("SELECT pg_advisory_unlock(%s)", (ADV_LOCK_UID,))
69
70
71 def run_migrations():
72 conn = connect(
73 dbname=CONFIG.get("postgresql.name"),
74 user=CONFIG.get("postgresql.user"),
75 password=CONFIG.get("postgresql.password"),
76 host=CONFIG.get("postgresql.host"),
77 port=CONFIG.get_int("postgresql.port"),
78 sslmode=CONFIG.get("postgresql.sslmode"),
79 sslrootcert=CONFIG.get("postgresql.sslrootcert"),
80 sslcert=CONFIG.get("postgresql.sslcert"),
81 sslkey=CONFIG.get("postgresql.sslkey"),
82 )
83 curr = conn.cursor()
84 try:
85 for migration_path in Path(__file__).parent.absolute().glob("system_migrations/*.py"):
86 spec = spec_from_file_location("lifecycle.system_migrations", migration_path)
87 if not spec:
88 continue
89 mod = module_from_spec(spec)
90 spec.loader.exec_module(mod)
91
92 for name, sub in getmembers(mod, isclass):
93 if name != "Migration":
94 continue
95 migration = sub(curr, conn)
96 if migration.needs_migration():
97 wait_for_lock(curr)
98 LOGGER.info("Migration needs to be applied", migration=migration_path.name)
99 migration.run()
100 LOGGER.info("Migration finished applying", migration=migration_path.name)
101 release_lock(curr)
102 LOGGER.info("applying django migrations")
103 environ.setdefault("DJANGO_SETTINGS_MODULE", "authentik.root.settings")
104 wait_for_lock(curr)
105 try:
106 from django.core.management import execute_from_command_line
107 except ImportError as exc:
108 raise ImportError(
109 "Couldn't import Django. Are you sure it's installed and "
110 "available on your PYTHONPATH environment variable? Did you "
111 "forget to activate a virtual environment?"
112 ) from exc
113 execute_from_command_line(["", "migrate_schemas"])
114 execute_from_command_line(["", "migrate_schemas", "--schema", "template", "--tenant"])
115 execute_from_command_line(
116 ["", "check"] + ([] if CONFIG.get_bool("debug") else ["--deploy"])
117 )
118 finally:
119 release_lock(curr)
120
121
122 if __name__ == "__main__":
123 run_migrations()
124
[end of lifecycle/migrate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lifecycle/migrate.py b/lifecycle/migrate.py
--- a/lifecycle/migrate.py
+++ b/lifecycle/migrate.py
@@ -117,6 +117,8 @@
)
finally:
release_lock(curr)
+ curr.close()
+ conn.close()
if __name__ == "__main__":
| {"golden_diff": "diff --git a/lifecycle/migrate.py b/lifecycle/migrate.py\n--- a/lifecycle/migrate.py\n+++ b/lifecycle/migrate.py\n@@ -117,6 +117,8 @@\n )\n finally:\n release_lock(curr)\n+ curr.close()\n+ conn.close()\n \n \n if __name__ == \"__main__\":\n", "issue": "2024.4.0 LongRunningTransaction\n**Describe the bug**\r\nPrometheus alert for a long running transaction.\r\n\r\nI think the transaction is\r\n\r\n```\r\nSELECT pg_advisory_unlock($1)\r\n```\r\n\r\n**To Reproduce**\r\nNo activity, sitting idle\r\n\r\n**Expected behavior**\r\nShouldn't have the alert\r\n\r\n**Screenshots**\r\n\r\n**Logs**\r\n\r\n**Version and Deployment (please complete the following information):**\r\n2024.4.0 kubernetes\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"System Migration handler\"\"\"\nfrom importlib.util import module_from_spec, spec_from_file_location\nfrom inspect import getmembers, isclass\nfrom os import environ, system\nfrom pathlib import Path\nfrom typing import Any\n\nfrom psycopg import Connection, Cursor, connect\nfrom structlog.stdlib import get_logger\n\nfrom authentik.lib.config import CONFIG\n\nLOGGER = get_logger()\nADV_LOCK_UID = 1000\nLOCKED = False\n\n\nclass CommandError(Exception):\n \"\"\"Error raised when a system_crit command fails\"\"\"\n\n\nclass BaseMigration:\n \"\"\"Base System Migration\"\"\"\n\n cur: Cursor\n con: Connection\n\n def __init__(self, cur: Any, con: Any):\n self.cur = cur\n self.con = con\n\n def system_crit(self, command: str):\n \"\"\"Run system command\"\"\"\n LOGGER.debug(\"Running system_crit command\", command=command)\n retval = system(command) # nosec\n if retval != 0:\n raise CommandError(\"Migration error\")\n\n def fake_migration(self, *app_migration: tuple[str, str]):\n \"\"\"Fake apply a list of migrations, arguments are\n expected to be tuples of (app_label, migration_name)\"\"\"\n for app, _migration in app_migration:\n self.system_crit(f\"./manage.py migrate {app} {_migration} --fake\")\n\n def needs_migration(self) -> bool:\n \"\"\"Return true if Migration needs to be run\"\"\"\n return False\n\n def run(self):\n \"\"\"Run the actual migration\"\"\"\n\n\ndef wait_for_lock(cursor: Cursor):\n \"\"\"lock an advisory lock to prevent multiple instances from migrating at once\"\"\"\n LOGGER.info(\"waiting to acquire database lock\")\n cursor.execute(\"SELECT pg_advisory_lock(%s)\", (ADV_LOCK_UID,))\n\n global LOCKED # noqa: PLW0603\n LOCKED = True\n\n\ndef release_lock(cursor: Cursor):\n \"\"\"Release database lock\"\"\"\n if not LOCKED:\n return\n LOGGER.info(\"releasing database lock\")\n cursor.execute(\"SELECT pg_advisory_unlock(%s)\", (ADV_LOCK_UID,))\n\n\ndef run_migrations():\n conn = connect(\n dbname=CONFIG.get(\"postgresql.name\"),\n user=CONFIG.get(\"postgresql.user\"),\n password=CONFIG.get(\"postgresql.password\"),\n host=CONFIG.get(\"postgresql.host\"),\n port=CONFIG.get_int(\"postgresql.port\"),\n sslmode=CONFIG.get(\"postgresql.sslmode\"),\n sslrootcert=CONFIG.get(\"postgresql.sslrootcert\"),\n sslcert=CONFIG.get(\"postgresql.sslcert\"),\n sslkey=CONFIG.get(\"postgresql.sslkey\"),\n )\n curr = conn.cursor()\n try:\n for migration_path in Path(__file__).parent.absolute().glob(\"system_migrations/*.py\"):\n spec = spec_from_file_location(\"lifecycle.system_migrations\", migration_path)\n if not spec:\n continue\n mod = module_from_spec(spec)\n spec.loader.exec_module(mod)\n\n for name, sub in getmembers(mod, isclass):\n if name 
!= \"Migration\":\n continue\n migration = sub(curr, conn)\n if migration.needs_migration():\n wait_for_lock(curr)\n LOGGER.info(\"Migration needs to be applied\", migration=migration_path.name)\n migration.run()\n LOGGER.info(\"Migration finished applying\", migration=migration_path.name)\n release_lock(curr)\n LOGGER.info(\"applying django migrations\")\n environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"authentik.root.settings\")\n wait_for_lock(curr)\n try:\n from django.core.management import execute_from_command_line\n except ImportError as exc:\n raise ImportError(\n \"Couldn't import Django. Are you sure it's installed and \"\n \"available on your PYTHONPATH environment variable? Did you \"\n \"forget to activate a virtual environment?\"\n ) from exc\n execute_from_command_line([\"\", \"migrate_schemas\"])\n execute_from_command_line([\"\", \"migrate_schemas\", \"--schema\", \"template\", \"--tenant\"])\n execute_from_command_line(\n [\"\", \"check\"] + ([] if CONFIG.get_bool(\"debug\") else [\"--deploy\"])\n )\n finally:\n release_lock(curr)\n\n\nif __name__ == \"__main__\":\n run_migrations()\n", "path": "lifecycle/migrate.py"}]} | 1,815 | 76 |
gh_patches_debug_31693 | rasdani/github-patches | git_diff | mlflow__mlflow-10923 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Security Vulnerability
Please check it here https://huntr.com/bounties/e3d7a994-bfd6-4772-ac9b-9aee1aa16a5f/
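The linked report itself is not quoted in this record, but the fix recorded further down adds `validate_path_is_safe` to `download_artifacts` and `_download_file`, which points at how a user-supplied `artifact_path` is joined onto the artifact directory. As a hedged illustration of the underlying Python behaviour (paths here are hypothetical, standard library only):
```python
# Why os.path.join + os.path.normpath alone do not confine a relative path.
import os

artifact_dir = "/srv/mlruns/0/abc123/artifacts"   # hypothetical artifact root

# Leading ".." components survive normpath, and the later abspath() resolves them.
artifact_path = "../../../../../etc/passwd"
joined = os.path.join(artifact_dir, os.path.normpath(artifact_path))
print(os.path.abspath(joined))        # -> /etc/passwd, outside artifact_dir

# An absolute second argument makes os.path.join discard the artifact root entirely.
print(os.path.join(artifact_dir, "/etc/passwd"))  # -> /etc/passwd
```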
</issue>
<code>
[start of mlflow/store/artifact/local_artifact_repo.py]
1 import os
2 import shutil
3
4 from mlflow.store.artifact.artifact_repo import ArtifactRepository, verify_artifact_path
5 from mlflow.utils.file_utils import (
6 get_file_info,
7 list_all,
8 local_file_uri_to_path,
9 mkdir,
10 relative_path_to_artifact_path,
11 )
12
13
14 class LocalArtifactRepository(ArtifactRepository):
15 """Stores artifacts as files in a local directory."""
16
17 def __init__(self, *args, **kwargs):
18 super().__init__(*args, **kwargs)
19 self._artifact_dir = local_file_uri_to_path(self.artifact_uri)
20
21 @property
22 def artifact_dir(self):
23 return self._artifact_dir
24
25 def log_artifact(self, local_file, artifact_path=None):
26 verify_artifact_path(artifact_path)
27 # NOTE: The artifact_path is expected to be in posix format.
28 # Posix paths work fine on windows but just in case we normalize it here.
29 if artifact_path:
30 artifact_path = os.path.normpath(artifact_path)
31
32 artifact_dir = (
33 os.path.join(self.artifact_dir, artifact_path) if artifact_path else self.artifact_dir
34 )
35 if not os.path.exists(artifact_dir):
36 mkdir(artifact_dir)
37 try:
38 shutil.copy2(local_file, os.path.join(artifact_dir, os.path.basename(local_file)))
39 except shutil.SameFileError:
40 pass
41
42 def _is_directory(self, artifact_path):
43 # NOTE: The path is expected to be in posix format.
44 # Posix paths work fine on windows but just in case we normalize it here.
45 path = os.path.normpath(artifact_path) if artifact_path else ""
46 list_dir = os.path.join(self.artifact_dir, path) if path else self.artifact_dir
47 return os.path.isdir(list_dir)
48
49 def log_artifacts(self, local_dir, artifact_path=None):
50 verify_artifact_path(artifact_path)
51 # NOTE: The artifact_path is expected to be in posix format.
52 # Posix paths work fine on windows but just in case we normalize it here.
53 if artifact_path:
54 artifact_path = os.path.normpath(artifact_path)
55 artifact_dir = (
56 os.path.join(self.artifact_dir, artifact_path) if artifact_path else self.artifact_dir
57 )
58 if not os.path.exists(artifact_dir):
59 mkdir(artifact_dir)
60 shutil.copytree(src=local_dir, dst=artifact_dir, dirs_exist_ok=True)
61
62 def download_artifacts(self, artifact_path, dst_path=None):
63 """
64 Artifacts tracked by ``LocalArtifactRepository`` already exist on the local filesystem.
65 If ``dst_path`` is ``None``, the absolute filesystem path of the specified artifact is
66 returned. If ``dst_path`` is not ``None``, the local artifact is copied to ``dst_path``.
67
68 :param artifact_path: Relative source path to the desired artifacts.
69 :param dst_path: Absolute path of the local filesystem destination directory to which to
70 download the specified artifacts. This directory must already exist. If
71 unspecified, the absolute path of the local artifact will be returned.
72
73 :return: Absolute path of the local filesystem location containing the desired artifacts.
74 """
75 if dst_path:
76 return super().download_artifacts(artifact_path, dst_path)
77 # NOTE: The artifact_path is expected to be in posix format.
78 # Posix paths work fine on windows but just in case we normalize it here.
79 local_artifact_path = os.path.join(self.artifact_dir, os.path.normpath(artifact_path))
80 if not os.path.exists(local_artifact_path):
81 raise OSError(f"No such file or directory: '{local_artifact_path}'")
82 return os.path.abspath(local_artifact_path)
83
84 def list_artifacts(self, path=None):
85 # NOTE: The path is expected to be in posix format.
86 # Posix paths work fine on windows but just in case we normalize it here.
87 if path:
88 path = os.path.normpath(path)
89 list_dir = os.path.join(self.artifact_dir, path) if path else self.artifact_dir
90 if os.path.isdir(list_dir):
91 artifact_files = list_all(list_dir, full_path=True)
92 infos = [
93 get_file_info(
94 f, relative_path_to_artifact_path(os.path.relpath(f, self.artifact_dir))
95 )
96 for f in artifact_files
97 ]
98 return sorted(infos, key=lambda f: f.path)
99 else:
100 return []
101
102 def _download_file(self, remote_file_path, local_path):
103 # NOTE: The remote_file_path is expected to be in posix format.
104 # Posix paths work fine on windows but just in case we normalize it here.
105 remote_file_path = os.path.join(self.artifact_dir, os.path.normpath(remote_file_path))
106 shutil.copy2(remote_file_path, local_path)
107
108 def delete_artifacts(self, artifact_path=None):
109 artifact_path = local_file_uri_to_path(
110 os.path.join(self._artifact_dir, artifact_path) if artifact_path else self._artifact_dir
111 )
112
113 if os.path.exists(artifact_path):
114 shutil.rmtree(artifact_path)
115
[end of mlflow/store/artifact/local_artifact_repo.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mlflow/store/artifact/local_artifact_repo.py b/mlflow/store/artifact/local_artifact_repo.py
--- a/mlflow/store/artifact/local_artifact_repo.py
+++ b/mlflow/store/artifact/local_artifact_repo.py
@@ -9,6 +9,7 @@
mkdir,
relative_path_to_artifact_path,
)
+from mlflow.utils.uri import validate_path_is_safe
class LocalArtifactRepository(ArtifactRepository):
@@ -74,8 +75,9 @@
"""
if dst_path:
return super().download_artifacts(artifact_path, dst_path)
- # NOTE: The artifact_path is expected to be in posix format.
+ # NOTE: The artifact_path is expected to be a relative path in posix format.
# Posix paths work fine on windows but just in case we normalize it here.
+ artifact_path = validate_path_is_safe(artifact_path)
local_artifact_path = os.path.join(self.artifact_dir, os.path.normpath(artifact_path))
if not os.path.exists(local_artifact_path):
raise OSError(f"No such file or directory: '{local_artifact_path}'")
@@ -100,8 +102,9 @@
return []
def _download_file(self, remote_file_path, local_path):
- # NOTE: The remote_file_path is expected to be in posix format.
+ # NOTE: The remote_file_path is expected to be a relative path in posix format.
# Posix paths work fine on windows but just in case we normalize it here.
+ remote_file_path = validate_path_is_safe(remote_file_path)
remote_file_path = os.path.join(self.artifact_dir, os.path.normpath(remote_file_path))
shutil.copy2(remote_file_path, local_path)
| {"golden_diff": "diff --git a/mlflow/store/artifact/local_artifact_repo.py b/mlflow/store/artifact/local_artifact_repo.py\n--- a/mlflow/store/artifact/local_artifact_repo.py\n+++ b/mlflow/store/artifact/local_artifact_repo.py\n@@ -9,6 +9,7 @@\n mkdir,\n relative_path_to_artifact_path,\n )\n+from mlflow.utils.uri import validate_path_is_safe\n \n \n class LocalArtifactRepository(ArtifactRepository):\n@@ -74,8 +75,9 @@\n \"\"\"\n if dst_path:\n return super().download_artifacts(artifact_path, dst_path)\n- # NOTE: The artifact_path is expected to be in posix format.\n+ # NOTE: The artifact_path is expected to be a relative path in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n+ artifact_path = validate_path_is_safe(artifact_path)\n local_artifact_path = os.path.join(self.artifact_dir, os.path.normpath(artifact_path))\n if not os.path.exists(local_artifact_path):\n raise OSError(f\"No such file or directory: '{local_artifact_path}'\")\n@@ -100,8 +102,9 @@\n return []\n \n def _download_file(self, remote_file_path, local_path):\n- # NOTE: The remote_file_path is expected to be in posix format.\n+ # NOTE: The remote_file_path is expected to be a relative path in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n+ remote_file_path = validate_path_is_safe(remote_file_path)\n remote_file_path = os.path.join(self.artifact_dir, os.path.normpath(remote_file_path))\n shutil.copy2(remote_file_path, local_path)\n", "issue": "[BUG] Security Vulnerability\nPlease check it here https://huntr.com/bounties/e3d7a994-bfd6-4772-ac9b-9aee1aa16a5f/\n", "before_files": [{"content": "import os\nimport shutil\n\nfrom mlflow.store.artifact.artifact_repo import ArtifactRepository, verify_artifact_path\nfrom mlflow.utils.file_utils import (\n get_file_info,\n list_all,\n local_file_uri_to_path,\n mkdir,\n relative_path_to_artifact_path,\n)\n\n\nclass LocalArtifactRepository(ArtifactRepository):\n \"\"\"Stores artifacts as files in a local directory.\"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._artifact_dir = local_file_uri_to_path(self.artifact_uri)\n\n @property\n def artifact_dir(self):\n return self._artifact_dir\n\n def log_artifact(self, local_file, artifact_path=None):\n verify_artifact_path(artifact_path)\n # NOTE: The artifact_path is expected to be in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n if artifact_path:\n artifact_path = os.path.normpath(artifact_path)\n\n artifact_dir = (\n os.path.join(self.artifact_dir, artifact_path) if artifact_path else self.artifact_dir\n )\n if not os.path.exists(artifact_dir):\n mkdir(artifact_dir)\n try:\n shutil.copy2(local_file, os.path.join(artifact_dir, os.path.basename(local_file)))\n except shutil.SameFileError:\n pass\n\n def _is_directory(self, artifact_path):\n # NOTE: The path is expected to be in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n path = os.path.normpath(artifact_path) if artifact_path else \"\"\n list_dir = os.path.join(self.artifact_dir, path) if path else self.artifact_dir\n return os.path.isdir(list_dir)\n\n def log_artifacts(self, local_dir, artifact_path=None):\n verify_artifact_path(artifact_path)\n # NOTE: The artifact_path is expected to be in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n if artifact_path:\n artifact_path = os.path.normpath(artifact_path)\n artifact_dir = (\n 
os.path.join(self.artifact_dir, artifact_path) if artifact_path else self.artifact_dir\n )\n if not os.path.exists(artifact_dir):\n mkdir(artifact_dir)\n shutil.copytree(src=local_dir, dst=artifact_dir, dirs_exist_ok=True)\n\n def download_artifacts(self, artifact_path, dst_path=None):\n \"\"\"\n Artifacts tracked by ``LocalArtifactRepository`` already exist on the local filesystem.\n If ``dst_path`` is ``None``, the absolute filesystem path of the specified artifact is\n returned. If ``dst_path`` is not ``None``, the local artifact is copied to ``dst_path``.\n\n :param artifact_path: Relative source path to the desired artifacts.\n :param dst_path: Absolute path of the local filesystem destination directory to which to\n download the specified artifacts. This directory must already exist. If\n unspecified, the absolute path of the local artifact will be returned.\n\n :return: Absolute path of the local filesystem location containing the desired artifacts.\n \"\"\"\n if dst_path:\n return super().download_artifacts(artifact_path, dst_path)\n # NOTE: The artifact_path is expected to be in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n local_artifact_path = os.path.join(self.artifact_dir, os.path.normpath(artifact_path))\n if not os.path.exists(local_artifact_path):\n raise OSError(f\"No such file or directory: '{local_artifact_path}'\")\n return os.path.abspath(local_artifact_path)\n\n def list_artifacts(self, path=None):\n # NOTE: The path is expected to be in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n if path:\n path = os.path.normpath(path)\n list_dir = os.path.join(self.artifact_dir, path) if path else self.artifact_dir\n if os.path.isdir(list_dir):\n artifact_files = list_all(list_dir, full_path=True)\n infos = [\n get_file_info(\n f, relative_path_to_artifact_path(os.path.relpath(f, self.artifact_dir))\n )\n for f in artifact_files\n ]\n return sorted(infos, key=lambda f: f.path)\n else:\n return []\n\n def _download_file(self, remote_file_path, local_path):\n # NOTE: The remote_file_path is expected to be in posix format.\n # Posix paths work fine on windows but just in case we normalize it here.\n remote_file_path = os.path.join(self.artifact_dir, os.path.normpath(remote_file_path))\n shutil.copy2(remote_file_path, local_path)\n\n def delete_artifacts(self, artifact_path=None):\n artifact_path = local_file_uri_to_path(\n os.path.join(self._artifact_dir, artifact_path) if artifact_path else self._artifact_dir\n )\n\n if os.path.exists(artifact_path):\n shutil.rmtree(artifact_path)\n", "path": "mlflow/store/artifact/local_artifact_repo.py"}]} | 1,924 | 378 |
gh_patches_debug_25018 | rasdani/github-patches | git_diff | magenta__magenta-1851 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
KeyError: 'tfds_data_dir' (GANSynth)
Hi, I got this error on the GANSynth demo Colab. How can I resolve it?

</issue>
<code>
[start of magenta/models/gansynth/gansynth_generate.py]
1 # Copyright 2020 The Magenta Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # Lint as: python3
16 r"""Generate samples with a pretrained GANSynth model.
17
18 To use a config of hyperparameters and manual hparams:
19 >>> python magenta/models/gansynth/generate.py \
20 >>> --ckpt_dir=/path/to/ckpt/dir --output_dir=/path/to/output/dir \
21 >>> --midi_file=/path/to/file.mid
22
23 If a MIDI file is specified, notes are synthesized with interpolation between
24 latent vectors in time. If no MIDI file is given, a random batch of notes is
25 synthesized.
26 """
27
28 import os
29
30 import absl.flags
31 from magenta.models.gansynth.lib import flags as lib_flags
32 from magenta.models.gansynth.lib import generate_util as gu
33 from magenta.models.gansynth.lib import model as lib_model
34 from magenta.models.gansynth.lib import util
35 import tensorflow.compat.v1 as tf
36
37
38 absl.flags.DEFINE_string('ckpt_dir',
39 '/tmp/gansynth/acoustic_only',
40 'Path to the base directory of pretrained checkpoints.'
41 'The base directory should contain many '
42 '"stage_000*" subdirectories.')
43 absl.flags.DEFINE_string('output_dir',
44 '/tmp/gansynth/samples',
45 'Path to directory to save wave files.')
46 absl.flags.DEFINE_string('midi_file',
47 '',
48 'Path to a MIDI file (.mid) to synthesize.')
49 absl.flags.DEFINE_integer('batch_size', 8, 'Batch size for generation.')
50 absl.flags.DEFINE_float('secs_per_instrument', 6.0,
51 'In random interpolations, the seconds it takes to '
52 'interpolate from one instrument to another.')
53
54 FLAGS = absl.flags.FLAGS
55 tf.logging.set_verbosity(tf.logging.INFO)
56
57
58 def main(unused_argv):
59 absl.flags.FLAGS.alsologtostderr = True
60
61 # Load the model
62 flags = lib_flags.Flags({'batch_size_schedule': [FLAGS.batch_size]})
63 model = lib_model.Model.load_from_path(FLAGS.ckpt_dir, flags)
64
65 # Make an output directory if it doesn't exist
66 output_dir = util.expand_path(FLAGS.output_dir)
67 if not tf.gfile.Exists(output_dir):
68 tf.gfile.MakeDirs(output_dir)
69
70 if FLAGS.midi_file:
71 # If a MIDI file is provided, synthesize interpolations across the clip
72 unused_ns, notes = gu.load_midi(FLAGS.midi_file)
73
74 # Distribute latent vectors linearly in time
75 z_instruments, t_instruments = gu.get_random_instruments(
76 model,
77 notes['end_times'][-1],
78 secs_per_instrument=FLAGS.secs_per_instrument)
79
80 # Get latent vectors for each note
81 z_notes = gu.get_z_notes(notes['start_times'], z_instruments, t_instruments)
82
83 # Generate audio for each note
84 print('Generating {} samples...'.format(len(z_notes)))
85 audio_notes = model.generate_samples_from_z(z_notes, notes['pitches'])
86
87 # Make a single audio clip
88 audio_clip = gu.combine_notes(audio_notes,
89 notes['start_times'],
90 notes['end_times'],
91 notes['velocities'])
92
93 # Write the wave files
94 fname = os.path.join(output_dir, 'generated_clip.wav')
95 gu.save_wav(audio_clip, fname)
96 else:
97 # Otherwise, just generate a batch of random sounds
98 waves = model.generate_samples(FLAGS.batch_size)
99 # Write the wave files
100 for i in range(len(waves)):
101 fname = os.path.join(output_dir, 'generated_{}.wav'.format(i))
102 gu.save_wav(waves[i], fname)
103
104
105 def console_entry_point():
106 tf.disable_v2_behavior()
107 tf.app.run(main)
108
109
110 if __name__ == '__main__':
111 console_entry_point()
112
[end of magenta/models/gansynth/gansynth_generate.py]
[start of magenta/version.py]
1 # Copyright 2020 The Magenta Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 r"""Separate file for storing the current version of Magenta.
16
17 Stored in a separate file so that setup.py can reference the version without
18 pulling in all the dependencies in __init__.py.
19 """
20
21 __version__ = '2.1.2'
22
[end of magenta/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/magenta/models/gansynth/gansynth_generate.py b/magenta/models/gansynth/gansynth_generate.py
--- a/magenta/models/gansynth/gansynth_generate.py
+++ b/magenta/models/gansynth/gansynth_generate.py
@@ -50,6 +50,9 @@
absl.flags.DEFINE_float('secs_per_instrument', 6.0,
'In random interpolations, the seconds it takes to '
'interpolate from one instrument to another.')
+absl.flags.DEFINE_string('tfds_data_dir',
+ 'gs://tfds-data/datasets',
+ 'Data directory for the TFDS dataset used to train.')
FLAGS = absl.flags.FLAGS
tf.logging.set_verbosity(tf.logging.INFO)
@@ -59,7 +62,11 @@
absl.flags.FLAGS.alsologtostderr = True
# Load the model
- flags = lib_flags.Flags({'batch_size_schedule': [FLAGS.batch_size]})
+ flags = lib_flags.Flags(
+ {
+ 'batch_size_schedule': [FLAGS.batch_size],
+ 'tfds_data_dir': FLAGS.tfds_data_dir
+ })
model = lib_model.Model.load_from_path(FLAGS.ckpt_dir, flags)
# Make an output directory if it doesn't exist
diff --git a/magenta/version.py b/magenta/version.py
--- a/magenta/version.py
+++ b/magenta/version.py
@@ -18,4 +18,4 @@
pulling in all the dependencies in __init__.py.
"""
-__version__ = '2.1.2'
+__version__ = '2.1.3'
| {"golden_diff": "diff --git a/magenta/models/gansynth/gansynth_generate.py b/magenta/models/gansynth/gansynth_generate.py\n--- a/magenta/models/gansynth/gansynth_generate.py\n+++ b/magenta/models/gansynth/gansynth_generate.py\n@@ -50,6 +50,9 @@\n absl.flags.DEFINE_float('secs_per_instrument', 6.0,\n 'In random interpolations, the seconds it takes to '\n 'interpolate from one instrument to another.')\n+absl.flags.DEFINE_string('tfds_data_dir',\n+ 'gs://tfds-data/datasets',\n+ 'Data directory for the TFDS dataset used to train.')\n \n FLAGS = absl.flags.FLAGS\n tf.logging.set_verbosity(tf.logging.INFO)\n@@ -59,7 +62,11 @@\n absl.flags.FLAGS.alsologtostderr = True\n \n # Load the model\n- flags = lib_flags.Flags({'batch_size_schedule': [FLAGS.batch_size]})\n+ flags = lib_flags.Flags(\n+ {\n+ 'batch_size_schedule': [FLAGS.batch_size],\n+ 'tfds_data_dir': FLAGS.tfds_data_dir\n+ })\n model = lib_model.Model.load_from_path(FLAGS.ckpt_dir, flags)\n \n # Make an output directory if it doesn't exist\ndiff --git a/magenta/version.py b/magenta/version.py\n--- a/magenta/version.py\n+++ b/magenta/version.py\n@@ -18,4 +18,4 @@\n pulling in all the dependencies in __init__.py.\n \"\"\"\n \n-__version__ = '2.1.2'\n+__version__ = '2.1.3'\n", "issue": "KeyError: 'tfds_data_dir'(GANSynth)\nHi, I got this error on GANSynth demo colab . How can I resolve it?\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2020 The Magenta Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Lint as: python3\nr\"\"\"Generate samples with a pretrained GANSynth model.\n\nTo use a config of hyperparameters and manual hparams:\n>>> python magenta/models/gansynth/generate.py \\\n>>> --ckpt_dir=/path/to/ckpt/dir --output_dir=/path/to/output/dir \\\n>>> --midi_file=/path/to/file.mid\n\nIf a MIDI file is specified, notes are synthesized with interpolation between\nlatent vectors in time. 
If no MIDI file is given, a random batch of notes is\nsynthesized.\n\"\"\"\n\nimport os\n\nimport absl.flags\nfrom magenta.models.gansynth.lib import flags as lib_flags\nfrom magenta.models.gansynth.lib import generate_util as gu\nfrom magenta.models.gansynth.lib import model as lib_model\nfrom magenta.models.gansynth.lib import util\nimport tensorflow.compat.v1 as tf\n\n\nabsl.flags.DEFINE_string('ckpt_dir',\n '/tmp/gansynth/acoustic_only',\n 'Path to the base directory of pretrained checkpoints.'\n 'The base directory should contain many '\n '\"stage_000*\" subdirectories.')\nabsl.flags.DEFINE_string('output_dir',\n '/tmp/gansynth/samples',\n 'Path to directory to save wave files.')\nabsl.flags.DEFINE_string('midi_file',\n '',\n 'Path to a MIDI file (.mid) to synthesize.')\nabsl.flags.DEFINE_integer('batch_size', 8, 'Batch size for generation.')\nabsl.flags.DEFINE_float('secs_per_instrument', 6.0,\n 'In random interpolations, the seconds it takes to '\n 'interpolate from one instrument to another.')\n\nFLAGS = absl.flags.FLAGS\ntf.logging.set_verbosity(tf.logging.INFO)\n\n\ndef main(unused_argv):\n absl.flags.FLAGS.alsologtostderr = True\n\n # Load the model\n flags = lib_flags.Flags({'batch_size_schedule': [FLAGS.batch_size]})\n model = lib_model.Model.load_from_path(FLAGS.ckpt_dir, flags)\n\n # Make an output directory if it doesn't exist\n output_dir = util.expand_path(FLAGS.output_dir)\n if not tf.gfile.Exists(output_dir):\n tf.gfile.MakeDirs(output_dir)\n\n if FLAGS.midi_file:\n # If a MIDI file is provided, synthesize interpolations across the clip\n unused_ns, notes = gu.load_midi(FLAGS.midi_file)\n\n # Distribute latent vectors linearly in time\n z_instruments, t_instruments = gu.get_random_instruments(\n model,\n notes['end_times'][-1],\n secs_per_instrument=FLAGS.secs_per_instrument)\n\n # Get latent vectors for each note\n z_notes = gu.get_z_notes(notes['start_times'], z_instruments, t_instruments)\n\n # Generate audio for each note\n print('Generating {} samples...'.format(len(z_notes)))\n audio_notes = model.generate_samples_from_z(z_notes, notes['pitches'])\n\n # Make a single audio clip\n audio_clip = gu.combine_notes(audio_notes,\n notes['start_times'],\n notes['end_times'],\n notes['velocities'])\n\n # Write the wave files\n fname = os.path.join(output_dir, 'generated_clip.wav')\n gu.save_wav(audio_clip, fname)\n else:\n # Otherwise, just generate a batch of random sounds\n waves = model.generate_samples(FLAGS.batch_size)\n # Write the wave files\n for i in range(len(waves)):\n fname = os.path.join(output_dir, 'generated_{}.wav'.format(i))\n gu.save_wav(waves[i], fname)\n\n\ndef console_entry_point():\n tf.disable_v2_behavior()\n tf.app.run(main)\n\n\nif __name__ == '__main__':\n console_entry_point()\n", "path": "magenta/models/gansynth/gansynth_generate.py"}, {"content": "# Copyright 2020 The Magenta Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nr\"\"\"Separate file for storing the current version of Magenta.\n\nStored in a separate file so that setup.py 
can reference the version without\npulling in all the dependencies in __init__.py.\n\"\"\"\n\n__version__ = '2.1.2'\n", "path": "magenta/version.py"}]} | 2,043 | 358 |
gh_patches_debug_38325 | rasdani/github-patches | git_diff | encode__starlette-8 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Request should present a scope-like interface
The `Request` class should present a dict-like interface so that it can be used in the same way as `scope`. Should also allow it to be instantiated without a `receive` channel being set initially.
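A minimal sketch of the interface being asked for, using `collections.abc.Mapping` so key access, iteration, and `len()` all defer to the wrapped scope, with the receive channel optional at construction time. Method and attribute names here are illustrative, not a settled API:
```python
from collections.abc import Mapping


class Request(Mapping):
    def __init__(self, scope, receive=None):
        self._scope = scope
        self._receive = receive

    def __getitem__(self, key):
        return self._scope[key]

    def __iter__(self):
        return iter(self._scope)

    def __len__(self):
        return len(self._scope)

    def set_receive_channel(self, receive):
        # Attach the channel later, once the ASGI callable receives it.
        self._receive = receive


# Usage: the request behaves like the scope dict it wraps.
request = Request({"method": "GET", "path": "/"})
assert request["method"] == "GET"
assert set(request) == {"method", "path"}
```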
</issue>
<code>
[start of starlette/request.py]
1 from starlette.datastructures import URL, Headers, QueryParams
2 import json
3
4
5 class Request:
6 def __init__(self, scope, receive):
7 self._scope = scope
8 self._receive = receive
9 self._stream_consumed = False
10
11 @property
12 def method(self):
13 return self._scope["method"]
14
15 @property
16 def url(self):
17 if not hasattr(self, "_url"):
18 scheme = self._scope["scheme"]
19 host, port = self._scope["server"]
20 path = self._scope["path"]
21 query_string = self._scope["query_string"]
22
23 if (scheme == "http" and port != 80) or (scheme == "https" and port != 443):
24 url = "%s://%s:%s%s" % (scheme, host, port, path)
25 else:
26 url = "%s://%s%s" % (scheme, host, path)
27
28 if query_string:
29 url += "?" + query_string.decode()
30
31 self._url = URL(url)
32 return self._url
33
34 @property
35 def headers(self):
36 if not hasattr(self, "_headers"):
37 self._headers = Headers(
38 [
39 (key.decode(), value.decode())
40 for key, value in self._scope["headers"]
41 ]
42 )
43 return self._headers
44
45 @property
46 def query_params(self):
47 if not hasattr(self, "_query_params"):
48 query_string = self._scope["query_string"].decode()
49 self._query_params = QueryParams(query_string)
50 return self._query_params
51
52 async def stream(self):
53 if hasattr(self, "_body"):
54 yield self._body
55 return
56
57 if self._stream_consumed:
58 raise RuntimeError("Stream consumed")
59
60 self._stream_consumed = True
61 while True:
62 message = await self._receive()
63 if message["type"] == "http.request":
64 yield message.get("body", b"")
65 if not message.get("more_body", False):
66 break
67
68 async def body(self):
69 if not hasattr(self, "_body"):
70 body = b""
71 async for chunk in self.stream():
72 body += chunk
73 self._body = body
74 return self._body
75
76 async def json(self):
77 if not hasattr(self, "_json"):
78 body = await self.body()
79 self._json = json.loads(body)
80 return self._json
81
[end of starlette/request.py]
[start of starlette/decorators.py]
1 from starlette.request import Request
2 from starlette.response import Response
3 from starlette.types import ASGIInstance, Receive, Send, Scope
4
5
6 def asgi_application(func):
7 def app(scope: Scope) -> ASGIInstance:
8 async def awaitable(receive: Receive, send: Send) -> None:
9 request = Request(scope, receive)
10 response = func(request)
11 await response(receive, send)
12
13 return awaitable
14
15 return app
16
[end of starlette/decorators.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/starlette/decorators.py b/starlette/decorators.py
--- a/starlette/decorators.py
+++ b/starlette/decorators.py
@@ -5,8 +5,10 @@
def asgi_application(func):
def app(scope: Scope) -> ASGIInstance:
+ request = Request(scope)
+
async def awaitable(receive: Receive, send: Send) -> None:
- request = Request(scope, receive)
+ request.set_receive_channel(receive)
response = func(request)
await response(receive, send)
diff --git a/starlette/request.py b/starlette/request.py
--- a/starlette/request.py
+++ b/starlette/request.py
@@ -1,19 +1,33 @@
from starlette.datastructures import URL, Headers, QueryParams
+from collections.abc import Mapping
import json
+import typing
-class Request:
- def __init__(self, scope, receive):
+class Request(Mapping):
+ def __init__(self, scope, receive=None):
self._scope = scope
self._receive = receive
self._stream_consumed = False
+ def __getitem__(self, key):
+ return self._scope[key]
+
+ def __iter__(self):
+ return iter(self._scope)
+
+ def __len__(self):
+ return len(self._scope)
+
+ def set_receive_channel(self, receive):
+ self._receive = receive
+
@property
- def method(self):
+ def method(self) -> str:
return self._scope["method"]
@property
- def url(self):
+ def url(self) -> URL:
if not hasattr(self, "_url"):
scheme = self._scope["scheme"]
host, port = self._scope["server"]
@@ -32,7 +46,7 @@
return self._url
@property
- def headers(self):
+ def headers(self) -> Headers:
if not hasattr(self, "_headers"):
self._headers = Headers(
[
@@ -43,7 +57,7 @@
return self._headers
@property
- def query_params(self):
+ def query_params(self) -> QueryParams:
if not hasattr(self, "_query_params"):
query_string = self._scope["query_string"].decode()
self._query_params = QueryParams(query_string)
@@ -57,6 +71,9 @@
if self._stream_consumed:
raise RuntimeError("Stream consumed")
+ if self._receive is None:
+ raise RuntimeError("Receive channel has not been made available")
+
self._stream_consumed = True
while True:
message = await self._receive()
| {"golden_diff": "diff --git a/starlette/decorators.py b/starlette/decorators.py\n--- a/starlette/decorators.py\n+++ b/starlette/decorators.py\n@@ -5,8 +5,10 @@\n \n def asgi_application(func):\n def app(scope: Scope) -> ASGIInstance:\n+ request = Request(scope)\n+\n async def awaitable(receive: Receive, send: Send) -> None:\n- request = Request(scope, receive)\n+ request.set_receive_channel(receive)\n response = func(request)\n await response(receive, send)\n \ndiff --git a/starlette/request.py b/starlette/request.py\n--- a/starlette/request.py\n+++ b/starlette/request.py\n@@ -1,19 +1,33 @@\n from starlette.datastructures import URL, Headers, QueryParams\n+from collections.abc import Mapping\n import json\n+import typing\n \n \n-class Request:\n- def __init__(self, scope, receive):\n+class Request(Mapping):\n+ def __init__(self, scope, receive=None):\n self._scope = scope\n self._receive = receive\n self._stream_consumed = False\n \n+ def __getitem__(self, key):\n+ return self._scope[key]\n+\n+ def __iter__(self):\n+ return iter(self._scope)\n+\n+ def __len__(self):\n+ return len(self._scope)\n+\n+ def set_receive_channel(self, receive):\n+ self._receive = receive\n+\n @property\n- def method(self):\n+ def method(self) -> str:\n return self._scope[\"method\"]\n \n @property\n- def url(self):\n+ def url(self) -> URL:\n if not hasattr(self, \"_url\"):\n scheme = self._scope[\"scheme\"]\n host, port = self._scope[\"server\"]\n@@ -32,7 +46,7 @@\n return self._url\n \n @property\n- def headers(self):\n+ def headers(self) -> Headers:\n if not hasattr(self, \"_headers\"):\n self._headers = Headers(\n [\n@@ -43,7 +57,7 @@\n return self._headers\n \n @property\n- def query_params(self):\n+ def query_params(self) -> QueryParams:\n if not hasattr(self, \"_query_params\"):\n query_string = self._scope[\"query_string\"].decode()\n self._query_params = QueryParams(query_string)\n@@ -57,6 +71,9 @@\n if self._stream_consumed:\n raise RuntimeError(\"Stream consumed\")\n \n+ if self._receive is None:\n+ raise RuntimeError(\"Receive channel has not been made available\")\n+\n self._stream_consumed = True\n while True:\n message = await self._receive()\n", "issue": "Request should present a scope-like interface\nThe `Request` class should present a dict-like interface so that it can be used in the same way as `scope`. 
Should also allow it to be instantiated without a `receive` channel being set initially.\n", "before_files": [{"content": "from starlette.datastructures import URL, Headers, QueryParams\nimport json\n\n\nclass Request:\n def __init__(self, scope, receive):\n self._scope = scope\n self._receive = receive\n self._stream_consumed = False\n\n @property\n def method(self):\n return self._scope[\"method\"]\n\n @property\n def url(self):\n if not hasattr(self, \"_url\"):\n scheme = self._scope[\"scheme\"]\n host, port = self._scope[\"server\"]\n path = self._scope[\"path\"]\n query_string = self._scope[\"query_string\"]\n\n if (scheme == \"http\" and port != 80) or (scheme == \"https\" and port != 443):\n url = \"%s://%s:%s%s\" % (scheme, host, port, path)\n else:\n url = \"%s://%s%s\" % (scheme, host, path)\n\n if query_string:\n url += \"?\" + query_string.decode()\n\n self._url = URL(url)\n return self._url\n\n @property\n def headers(self):\n if not hasattr(self, \"_headers\"):\n self._headers = Headers(\n [\n (key.decode(), value.decode())\n for key, value in self._scope[\"headers\"]\n ]\n )\n return self._headers\n\n @property\n def query_params(self):\n if not hasattr(self, \"_query_params\"):\n query_string = self._scope[\"query_string\"].decode()\n self._query_params = QueryParams(query_string)\n return self._query_params\n\n async def stream(self):\n if hasattr(self, \"_body\"):\n yield self._body\n return\n\n if self._stream_consumed:\n raise RuntimeError(\"Stream consumed\")\n\n self._stream_consumed = True\n while True:\n message = await self._receive()\n if message[\"type\"] == \"http.request\":\n yield message.get(\"body\", b\"\")\n if not message.get(\"more_body\", False):\n break\n\n async def body(self):\n if not hasattr(self, \"_body\"):\n body = b\"\"\n async for chunk in self.stream():\n body += chunk\n self._body = body\n return self._body\n\n async def json(self):\n if not hasattr(self, \"_json\"):\n body = await self.body()\n self._json = json.loads(body)\n return self._json\n", "path": "starlette/request.py"}, {"content": "from starlette.request import Request\nfrom starlette.response import Response\nfrom starlette.types import ASGIInstance, Receive, Send, Scope\n\n\ndef asgi_application(func):\n def app(scope: Scope) -> ASGIInstance:\n async def awaitable(receive: Receive, send: Send) -> None:\n request = Request(scope, receive)\n response = func(request)\n await response(receive, send)\n\n return awaitable\n\n return app\n", "path": "starlette/decorators.py"}]} | 1,404 | 600 |
gh_patches_debug_16899 | rasdani/github-patches | git_diff | open-mmlab__mmdetection-1099 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ImportError: cannot import name 'build_sampler' from 'mmdet.core.bbox.assign_sampling'
I have successfully installed mmdetection with the command "pip install -v -e .", but I ran into the following problem when testing the import. Would anyone help me?
(lab) gpuserver@ubuntu:~/ht/labs/mmdetection-master$ python
Python 3.7.3 (default, Mar 27 2019, 22:11:17)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from mmdet.apis import init_detector
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/apis/__init__.py", line 2, in <module>
from .inference import inference_detector, init_detector, show_result
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/apis/inference.py", line 9, in <module>
from mmdet.core import get_classes
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/__init__.py", line 1, in <module>
from .anchor import * # noqa: F401, F403
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/anchor/__init__.py", line 2, in <module>
from .anchor_target import anchor_inside_flags, anchor_target
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/anchor/anchor_target.py", line 3, in <module>
from ..bbox import PseudoSampler, assign_and_sample, bbox2delta, build_assigner
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/bbox/__init__.py", line 1, in <module>
from .assign_sampling import assign_and_sample, build_assigner, build_sampler
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/bbox/assign_sampling.py", line 3, in <module>
from . import assigners, samplers
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/bbox/samplers/__init__.py", line 2, in <module>
from .combined_sampler import CombinedSampler
File "/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/bbox/samplers/combined_sampler.py", line 1, in <module>
from ..assign_sampling import build_sampler
ImportError: cannot import name 'build_sampler' from 'mmdet.core.bbox.assign_sampling' (/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/bbox/assign_sampling.py)
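The traceback is a circular import: `assign_sampling.py` imports the `samplers` package at module level, and while that package initialises, `combined_sampler.py` imports `build_sampler` back from the still partially-executed `assign_sampling` module. A stripped-down, self-contained reproduction of the same cycle (module names are stand-ins, written to a temporary directory so it runs as one script):
```python
import sys
import tempfile
import textwrap
from pathlib import Path

pkg = Path(tempfile.mkdtemp())
(pkg / "assign_sampling.py").write_text(textwrap.dedent("""
    import combined_sampler  # runs before build_sampler below is defined

    def build_sampler(cfg):
        return "sampler"
"""))
(pkg / "combined_sampler.py").write_text(textwrap.dedent("""
    from assign_sampling import build_sampler  # module only partially initialised
"""))

sys.path.insert(0, str(pkg))
try:
    import assign_sampling  # noqa: F401
except ImportError as exc:
    print("Reproduced:", exc)  # cannot import name 'build_sampler' ...
```
The usual remedies are to defer the cyclic import into the function that needs it or to reorder the package-level imports so the cycle is broken, which is what the fix below does in `mmdet/core/bbox/__init__.py`.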
</issue>
<code>
[start of mmdet/core/bbox/__init__.py]
1 from .assign_sampling import assign_and_sample, build_assigner, build_sampler
2 from .assigners import AssignResult, BaseAssigner, MaxIoUAssigner
3 from .bbox_target import bbox_target
4 from .geometry import bbox_overlaps
5 from .samplers import (BaseSampler, CombinedSampler,
6 InstanceBalancedPosSampler, IoUBalancedNegSampler,
7 PseudoSampler, RandomSampler, SamplingResult)
8 from .transforms import (bbox2delta, bbox2result, bbox2roi, bbox_flip,
9 bbox_mapping, bbox_mapping_back, delta2bbox,
10 distance2bbox, roi2bbox)
11
12 __all__ = [
13 'bbox_overlaps', 'BaseAssigner', 'MaxIoUAssigner', 'AssignResult',
14 'BaseSampler', 'PseudoSampler', 'RandomSampler',
15 'InstanceBalancedPosSampler', 'IoUBalancedNegSampler', 'CombinedSampler',
16 'SamplingResult', 'build_assigner', 'build_sampler', 'assign_and_sample',
17 'bbox2delta', 'delta2bbox', 'bbox_flip', 'bbox_mapping',
18 'bbox_mapping_back', 'bbox2roi', 'roi2bbox', 'bbox2result',
19 'distance2bbox', 'bbox_target'
20 ]
21
[end of mmdet/core/bbox/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mmdet/core/bbox/__init__.py b/mmdet/core/bbox/__init__.py
--- a/mmdet/core/bbox/__init__.py
+++ b/mmdet/core/bbox/__init__.py
@@ -1,4 +1,3 @@
-from .assign_sampling import assign_and_sample, build_assigner, build_sampler
from .assigners import AssignResult, BaseAssigner, MaxIoUAssigner
from .bbox_target import bbox_target
from .geometry import bbox_overlaps
@@ -9,6 +8,9 @@
bbox_mapping, bbox_mapping_back, delta2bbox,
distance2bbox, roi2bbox)
+from .assign_sampling import ( # isort:skip, avoid recursive imports
+ assign_and_sample, build_assigner, build_sampler)
+
__all__ = [
'bbox_overlaps', 'BaseAssigner', 'MaxIoUAssigner', 'AssignResult',
'BaseSampler', 'PseudoSampler', 'RandomSampler',
| {"golden_diff": "diff --git a/mmdet/core/bbox/__init__.py b/mmdet/core/bbox/__init__.py\n--- a/mmdet/core/bbox/__init__.py\n+++ b/mmdet/core/bbox/__init__.py\n@@ -1,4 +1,3 @@\n-from .assign_sampling import assign_and_sample, build_assigner, build_sampler\n from .assigners import AssignResult, BaseAssigner, MaxIoUAssigner\n from .bbox_target import bbox_target\n from .geometry import bbox_overlaps\n@@ -9,6 +8,9 @@\n bbox_mapping, bbox_mapping_back, delta2bbox,\n distance2bbox, roi2bbox)\n \n+from .assign_sampling import ( # isort:skip, avoid recursive imports\n+ assign_and_sample, build_assigner, build_sampler)\n+\n __all__ = [\n 'bbox_overlaps', 'BaseAssigner', 'MaxIoUAssigner', 'AssignResult',\n 'BaseSampler', 'PseudoSampler', 'RandomSampler',\n", "issue": "ImportError: cannot import name 'build_sampler' from 'mmdet.core.bbox.assign_sampling' \nI have successful install the mmdetection by the command \"pip install -v -e .\". But I have the problem in the test. Would anyone help me ?\r\n\r\n(lab) gpuserver@ubuntu:~/ht/labs/mmdetection-master$ python\r\nPython 3.7.3 (default, Mar 27 2019, 22:11:17) \r\n[GCC 7.3.0] :: Anaconda, Inc. on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from mmdet.apis import init_detector\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/gpuserver/ht/labs/mmdetection-master/mmdet/apis/__init__.py\", line 2, in <module>\r\n from .inference import inference_detector, init_detector, show_result\r\n File \"/home/gpuserver/ht/labs/mmdetection-master/mmdet/apis/inference.py\", line 9, in <module>\r\n from mmdet.core import get_classes\r\n File \"/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/__init__.py\", line 1, in <module>\r\n from .anchor import * # noqa: F401, F403\r\n File \"/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/anchor/__init__.py\", line 2, in <module>\r\n from .anchor_target import anchor_inside_flags, anchor_target\r\n File \"/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/anchor/anchor_target.py\", line 3, in <module>\r\n from ..bbox import PseudoSampler, assign_and_sample, bbox2delta, build_assigner\r\n File \"/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/bbox/__init__.py\", line 1, in <module>\r\n from .assign_sampling import assign_and_sample, build_assigner, build_sampler\r\n File \"/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/bbox/assign_sampling.py\", line 3, in <module>\r\n from . 
import assigners, samplers\r\n File \"/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/bbox/samplers/__init__.py\", line 2, in <module>\r\n from .combined_sampler import CombinedSampler\r\n File \"/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/bbox/samplers/combined_sampler.py\", line 1, in <module>\r\n from ..assign_sampling import build_sampler\r\nImportError: cannot import name 'build_sampler' from 'mmdet.core.bbox.assign_sampling' (/home/gpuserver/ht/labs/mmdetection-master/mmdet/core/bbox/assign_sampling.py)\n", "before_files": [{"content": "from .assign_sampling import assign_and_sample, build_assigner, build_sampler\nfrom .assigners import AssignResult, BaseAssigner, MaxIoUAssigner\nfrom .bbox_target import bbox_target\nfrom .geometry import bbox_overlaps\nfrom .samplers import (BaseSampler, CombinedSampler,\n InstanceBalancedPosSampler, IoUBalancedNegSampler,\n PseudoSampler, RandomSampler, SamplingResult)\nfrom .transforms import (bbox2delta, bbox2result, bbox2roi, bbox_flip,\n bbox_mapping, bbox_mapping_back, delta2bbox,\n distance2bbox, roi2bbox)\n\n__all__ = [\n 'bbox_overlaps', 'BaseAssigner', 'MaxIoUAssigner', 'AssignResult',\n 'BaseSampler', 'PseudoSampler', 'RandomSampler',\n 'InstanceBalancedPosSampler', 'IoUBalancedNegSampler', 'CombinedSampler',\n 'SamplingResult', 'build_assigner', 'build_sampler', 'assign_and_sample',\n 'bbox2delta', 'delta2bbox', 'bbox_flip', 'bbox_mapping',\n 'bbox_mapping_back', 'bbox2roi', 'roi2bbox', 'bbox2result',\n 'distance2bbox', 'bbox_target'\n]\n", "path": "mmdet/core/bbox/__init__.py"}]} | 1,499 | 218 |
gh_patches_debug_35992 | rasdani/github-patches | git_diff | safe-global__safe-config-service-8 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add Safe App Provider information
The service should (optionally) include information about the provider of the application.
The provider information should have at least the following fields:
```json
{
"name" : <string>,
"url": <string>
}
```
</issue>
<code>
[start of src/safe_apps/admin.py]
1 from django.contrib import admin
2
3 from .models import SafeApp
4
5 admin.site.register(SafeApp)
6
[end of src/safe_apps/admin.py]
[start of src/safe_apps/models.py]
1 from django.contrib.postgres.fields import ArrayField
2 from django.db import models
3
4
5 class SafeApp(models.Model):
6 url = models.URLField(primary_key=True)
7 name = models.CharField(max_length=200)
8 icon_url = models.URLField()
9 description = models.CharField(max_length=200)
10 networks = ArrayField(models.IntegerField())
11
[end of src/safe_apps/models.py]
[start of src/safe_apps/migrations/0001_initial.py]
1 # Generated by Django 3.2 on 2021-04-28 13:39
2
3 import django.contrib.postgres.fields
4 from django.db import migrations, models
5
6
7 class Migration(migrations.Migration):
8
9 initial = True
10
11 dependencies = [
12 ]
13
14 operations = [
15 migrations.CreateModel(
16 name='SafeApp',
17 fields=[
18 ('url', models.URLField(primary_key=True, serialize=False)),
19 ('name', models.CharField(max_length=200)),
20 ('icon_url', models.URLField()),
21 ('description', models.CharField(max_length=200)),
22 ('networks', django.contrib.postgres.fields.ArrayField(base_field=models.IntegerField(), size=None)),
23 ],
24 ),
25 ]
26
[end of src/safe_apps/migrations/0001_initial.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/safe_apps/admin.py b/src/safe_apps/admin.py
--- a/src/safe_apps/admin.py
+++ b/src/safe_apps/admin.py
@@ -1,5 +1,6 @@
from django.contrib import admin
-from .models import SafeApp
+from .models import SafeApp, Provider
-admin.site.register(SafeApp)
+models = [SafeApp, Provider]
+admin.site.register(models)
diff --git a/src/safe_apps/migrations/0001_initial.py b/src/safe_apps/migrations/0001_initial.py
--- a/src/safe_apps/migrations/0001_initial.py
+++ b/src/safe_apps/migrations/0001_initial.py
@@ -1,7 +1,8 @@
-# Generated by Django 3.2 on 2021-04-28 13:39
+# Generated by Django 3.2 on 2021-04-29 08:22
import django.contrib.postgres.fields
from django.db import migrations, models
+import django.db.models.deletion
class Migration(migrations.Migration):
@@ -12,6 +13,13 @@
]
operations = [
+ migrations.CreateModel(
+ name='Provider',
+ fields=[
+ ('url', models.URLField(primary_key=True, serialize=False)),
+ ('name', models.CharField(max_length=200)),
+ ],
+ ),
migrations.CreateModel(
name='SafeApp',
fields=[
@@ -20,6 +28,7 @@
('icon_url', models.URLField()),
('description', models.CharField(max_length=200)),
('networks', django.contrib.postgres.fields.ArrayField(base_field=models.IntegerField(), size=None)),
+ ('provider', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='safe_apps.provider')),
],
),
]
diff --git a/src/safe_apps/models.py b/src/safe_apps/models.py
--- a/src/safe_apps/models.py
+++ b/src/safe_apps/models.py
@@ -2,9 +2,21 @@
from django.db import models
+class Provider(models.Model):
+ url = models.URLField(primary_key=True)
+ name = models.CharField(max_length=200)
+
+ def __str__(self):
+ return f'{self.name} | {self.url}'
+
+
class SafeApp(models.Model):
url = models.URLField(primary_key=True)
name = models.CharField(max_length=200)
icon_url = models.URLField()
description = models.CharField(max_length=200)
networks = ArrayField(models.IntegerField())
+ provider = models.ForeignKey(Provider, null=True, on_delete=models.SET_NULL)
+
+ def __str__(self):
+ return f'{self.name} | {self.url} | networks={self.networks}'
| {"golden_diff": "diff --git a/src/safe_apps/admin.py b/src/safe_apps/admin.py\n--- a/src/safe_apps/admin.py\n+++ b/src/safe_apps/admin.py\n@@ -1,5 +1,6 @@\n from django.contrib import admin\n \n-from .models import SafeApp\n+from .models import SafeApp, Provider\n \n-admin.site.register(SafeApp)\n+models = [SafeApp, Provider]\n+admin.site.register(models)\ndiff --git a/src/safe_apps/migrations/0001_initial.py b/src/safe_apps/migrations/0001_initial.py\n--- a/src/safe_apps/migrations/0001_initial.py\n+++ b/src/safe_apps/migrations/0001_initial.py\n@@ -1,7 +1,8 @@\n-# Generated by Django 3.2 on 2021-04-28 13:39\n+# Generated by Django 3.2 on 2021-04-29 08:22\n \n import django.contrib.postgres.fields\n from django.db import migrations, models\n+import django.db.models.deletion\n \n \n class Migration(migrations.Migration):\n@@ -12,6 +13,13 @@\n ]\n \n operations = [\n+ migrations.CreateModel(\n+ name='Provider',\n+ fields=[\n+ ('url', models.URLField(primary_key=True, serialize=False)),\n+ ('name', models.CharField(max_length=200)),\n+ ],\n+ ),\n migrations.CreateModel(\n name='SafeApp',\n fields=[\n@@ -20,6 +28,7 @@\n ('icon_url', models.URLField()),\n ('description', models.CharField(max_length=200)),\n ('networks', django.contrib.postgres.fields.ArrayField(base_field=models.IntegerField(), size=None)),\n+ ('provider', models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, to='safe_apps.provider')),\n ],\n ),\n ]\ndiff --git a/src/safe_apps/models.py b/src/safe_apps/models.py\n--- a/src/safe_apps/models.py\n+++ b/src/safe_apps/models.py\n@@ -2,9 +2,21 @@\n from django.db import models\n \n \n+class Provider(models.Model):\n+ url = models.URLField(primary_key=True)\n+ name = models.CharField(max_length=200)\n+\n+ def __str__(self):\n+ return f'{self.name} | {self.url}'\n+\n+\n class SafeApp(models.Model):\n url = models.URLField(primary_key=True)\n name = models.CharField(max_length=200)\n icon_url = models.URLField()\n description = models.CharField(max_length=200)\n networks = ArrayField(models.IntegerField())\n+ provider = models.ForeignKey(Provider, null=True, on_delete=models.SET_NULL)\n+\n+ def __str__(self):\n+ return f'{self.name} | {self.url} | networks={self.networks}'\n", "issue": "Add Safe App Provider information\nThe service should (optionally) include information about the provider of the application.\r\n\r\nThe provider information should have at least the following fields:\r\n\r\n```json\r\n{\r\n \"name\" : <string>,\r\n \"url\": <string>\r\n}\r\n```\n", "before_files": [{"content": "from django.contrib import admin\n\nfrom .models import SafeApp\n\nadmin.site.register(SafeApp)\n", "path": "src/safe_apps/admin.py"}, {"content": "from django.contrib.postgres.fields import ArrayField\nfrom django.db import models\n\n\nclass SafeApp(models.Model):\n url = models.URLField(primary_key=True)\n name = models.CharField(max_length=200)\n icon_url = models.URLField()\n description = models.CharField(max_length=200)\n networks = ArrayField(models.IntegerField())\n", "path": "src/safe_apps/models.py"}, {"content": "# Generated by Django 3.2 on 2021-04-28 13:39\n\nimport django.contrib.postgres.fields\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n initial = True\n\n dependencies = [\n ]\n\n operations = [\n migrations.CreateModel(\n name='SafeApp',\n fields=[\n ('url', models.URLField(primary_key=True, serialize=False)),\n ('name', models.CharField(max_length=200)),\n ('icon_url', models.URLField()),\n ('description', 
models.CharField(max_length=200)),\n ('networks', django.contrib.postgres.fields.ArrayField(base_field=models.IntegerField(), size=None)),\n ],\n ),\n ]\n", "path": "src/safe_apps/migrations/0001_initial.py"}]} | 963 | 635 |
gh_patches_debug_17476 | rasdani/github-patches | git_diff | ray-project__ray-840 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Not possible to install Ray from git
I want to install Ray for Python 3.
```
$ apt-get install -y cmake pkg-config python3-dev build-essential autoconf curl libtool libboost-all-dev unzip
$ pip3 install git+https://github.com/ray-project/ray.git@37282330c0ea687fd1b983176dce85731fcf189d#subdirectory=python
```
But this tries to install it for python2.7, and not python3, failing with:
```
CMake Error at cmake_modules/FindNumPy.cmake:62 (message):
NumPy import failure:
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: No module named numpy
```
Because numpy is installed only for Python 3.
</issue>
<code>
[start of python/setup.py]
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4
5 import os
6 import shutil
7 import subprocess
8 import sys
9
10 from setuptools import setup, find_packages, Distribution
11 import setuptools.command.build_ext as _build_ext
12
13 # Ideally, we could include these files by putting them in a
14 # MANIFEST.in or using the package_data argument to setup, but the
15 # MANIFEST.in gets applied at the very beginning when setup.py runs
16 # before these files have been created, so we have to move the files
17 # manually.
18 ray_files = [
19 "ray/core/src/common/thirdparty/redis/src/redis-server",
20 "ray/core/src/common/redis_module/libray_redis_module.so",
21 "ray/core/src/plasma/plasma_store",
22 "ray/core/src/plasma/plasma_manager",
23 "ray/core/src/local_scheduler/local_scheduler",
24 "ray/core/src/local_scheduler/liblocal_scheduler_library.so",
25 "ray/core/src/numbuf/libnumbuf.so",
26 "ray/core/src/global_scheduler/global_scheduler",
27 "ray/WebUI.ipynb"
28 ]
29
30
31 class build_ext(_build_ext.build_ext):
32 def run(self):
33 # Note: We are passing in sys.executable so that we use the same
34 # version of Python to build pyarrow inside the build.sh script. Note
35 # that certain flags will not be passed along such as --user or sudo.
36 # TODO(rkn): Fix this.
37 subprocess.check_call(["../build.sh", sys.executable])
38
39 # We also need to install pyarrow along with Ray, so make sure that the
40 # relevant non-Python pyarrow files get copied.
41 pyarrow_files = [
42 os.path.join("ray/pyarrow_files/pyarrow", filename)
43 for filename in os.listdir("./ray/pyarrow_files/pyarrow")
44 if not os.path.isdir(os.path.join("ray/pyarrow_files/pyarrow",
45 filename))]
46
47 files_to_include = ray_files + pyarrow_files
48
49 for filename in files_to_include:
50 self.move_file(filename)
51 # Copy over the autogenerated flatbuffer Python bindings.
52 generated_python_directory = "ray/core/generated"
53 for filename in os.listdir(generated_python_directory):
54 if filename[-3:] == ".py":
55 self.move_file(os.path.join(generated_python_directory,
56 filename))
57
58 def move_file(self, filename):
59 # TODO(rkn): This feels very brittle. It may not handle all cases. See
60 # https://github.com/apache/arrow/blob/master/python/setup.py for an
61 # example.
62 source = filename
63 destination = os.path.join(self.build_lib, filename)
64 # Create the target directory if it doesn't already exist.
65 parent_directory = os.path.dirname(destination)
66 if not os.path.exists(parent_directory):
67 os.makedirs(parent_directory)
68 print("Copying {} to {}.".format(source, destination))
69 shutil.copy(source, destination)
70
71
72 class BinaryDistribution(Distribution):
73 def has_ext_modules(self):
74 return True
75
76
77 setup(name="ray",
78 version="0.1.2",
79 packages=find_packages(),
80 cmdclass={"build_ext": build_ext},
81 # The BinaryDistribution argument triggers build_ext.
82 distclass=BinaryDistribution,
83 install_requires=["numpy",
84 "funcsigs",
85 "click",
86 "colorama",
87 "psutil",
88 "redis",
89 "cloudpickle >= 0.2.2",
90 "flatbuffers"],
91 entry_points={"console_scripts": ["ray=ray.scripts.scripts:main"]},
92 include_package_data=True,
93 zip_safe=False,
94 license="Apache 2.0")
95
[end of python/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/setup.py b/python/setup.py
--- a/python/setup.py
+++ b/python/setup.py
@@ -81,12 +81,15 @@
# The BinaryDistribution argument triggers build_ext.
distclass=BinaryDistribution,
install_requires=["numpy",
+ "cython",
"funcsigs",
"click",
"colorama",
"psutil",
"redis",
"cloudpickle >= 0.2.2",
+ # The six module is required by pyarrow.
+ "six >= 1.0.0",
"flatbuffers"],
entry_points={"console_scripts": ["ray=ray.scripts.scripts:main"]},
include_package_data=True,
| {"golden_diff": "diff --git a/python/setup.py b/python/setup.py\n--- a/python/setup.py\n+++ b/python/setup.py\n@@ -81,12 +81,15 @@\n # The BinaryDistribution argument triggers build_ext.\n distclass=BinaryDistribution,\n install_requires=[\"numpy\",\n+ \"cython\",\n \"funcsigs\",\n \"click\",\n \"colorama\",\n \"psutil\",\n \"redis\",\n \"cloudpickle >= 0.2.2\",\n+ # The six module is required by pyarrow.\n+ \"six >= 1.0.0\",\n \"flatbuffers\"],\n entry_points={\"console_scripts\": [\"ray=ray.scripts.scripts:main\"]},\n include_package_data=True,\n", "issue": "Not possible to install Ray from git\nI want to install Ray for Python 3.\r\n\r\n```\r\n$ apt-get install -y cmake pkg-config python3-dev build-essential autoconf curl libtool libboost-all-dev unzip\r\n$ pip3 install git+https://github.com/ray-project/ray.git@37282330c0ea687fd1b983176dce85731fcf189d#subdirectory=python\r\n```\r\n\r\nBut this tries to install it for python2.7, and not python3, failing with:\r\n\r\n```\r\n CMake Error at cmake_modules/FindNumPy.cmake:62 (message):\r\n NumPy import failure:\r\n \r\n Traceback (most recent call last):\r\n \r\n File \"<string>\", line 1, in <module>\r\n \r\n ImportError: No module named numpy\r\n```\r\n\r\nBecause numpy is installed only for Python 3.\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport shutil\nimport subprocess\nimport sys\n\nfrom setuptools import setup, find_packages, Distribution\nimport setuptools.command.build_ext as _build_ext\n\n# Ideally, we could include these files by putting them in a\n# MANIFEST.in or using the package_data argument to setup, but the\n# MANIFEST.in gets applied at the very beginning when setup.py runs\n# before these files have been created, so we have to move the files\n# manually.\nray_files = [\n \"ray/core/src/common/thirdparty/redis/src/redis-server\",\n \"ray/core/src/common/redis_module/libray_redis_module.so\",\n \"ray/core/src/plasma/plasma_store\",\n \"ray/core/src/plasma/plasma_manager\",\n \"ray/core/src/local_scheduler/local_scheduler\",\n \"ray/core/src/local_scheduler/liblocal_scheduler_library.so\",\n \"ray/core/src/numbuf/libnumbuf.so\",\n \"ray/core/src/global_scheduler/global_scheduler\",\n \"ray/WebUI.ipynb\"\n]\n\n\nclass build_ext(_build_ext.build_ext):\n def run(self):\n # Note: We are passing in sys.executable so that we use the same\n # version of Python to build pyarrow inside the build.sh script. Note\n # that certain flags will not be passed along such as --user or sudo.\n # TODO(rkn): Fix this.\n subprocess.check_call([\"../build.sh\", sys.executable])\n\n # We also need to install pyarrow along with Ray, so make sure that the\n # relevant non-Python pyarrow files get copied.\n pyarrow_files = [\n os.path.join(\"ray/pyarrow_files/pyarrow\", filename)\n for filename in os.listdir(\"./ray/pyarrow_files/pyarrow\")\n if not os.path.isdir(os.path.join(\"ray/pyarrow_files/pyarrow\",\n filename))]\n\n files_to_include = ray_files + pyarrow_files\n\n for filename in files_to_include:\n self.move_file(filename)\n # Copy over the autogenerated flatbuffer Python bindings.\n generated_python_directory = \"ray/core/generated\"\n for filename in os.listdir(generated_python_directory):\n if filename[-3:] == \".py\":\n self.move_file(os.path.join(generated_python_directory,\n filename))\n\n def move_file(self, filename):\n # TODO(rkn): This feels very brittle. It may not handle all cases. 
See\n # https://github.com/apache/arrow/blob/master/python/setup.py for an\n # example.\n source = filename\n destination = os.path.join(self.build_lib, filename)\n # Create the target directory if it doesn't already exist.\n parent_directory = os.path.dirname(destination)\n if not os.path.exists(parent_directory):\n os.makedirs(parent_directory)\n print(\"Copying {} to {}.\".format(source, destination))\n shutil.copy(source, destination)\n\n\nclass BinaryDistribution(Distribution):\n def has_ext_modules(self):\n return True\n\n\nsetup(name=\"ray\",\n version=\"0.1.2\",\n packages=find_packages(),\n cmdclass={\"build_ext\": build_ext},\n # The BinaryDistribution argument triggers build_ext.\n distclass=BinaryDistribution,\n install_requires=[\"numpy\",\n \"funcsigs\",\n \"click\",\n \"colorama\",\n \"psutil\",\n \"redis\",\n \"cloudpickle >= 0.2.2\",\n \"flatbuffers\"],\n entry_points={\"console_scripts\": [\"ray=ray.scripts.scripts:main\"]},\n include_package_data=True,\n zip_safe=False,\n license=\"Apache 2.0\")\n", "path": "python/setup.py"}]} | 1,684 | 157 |
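Two things are entangled in the record above: the reported symptom (the build running under the wrong interpreter, so `numpy` appears missing) and the recorded fix (declaring the missing build dependencies). A short, runnable sketch of the version- and platform-gated requirements pattern used in that `setup.py`, plus a line showing which interpreter is actually executing the build step; the package pins are illustrative only.

```python
import os
import sys

# Which interpreter is running this build step -- the first thing to check when
# "pip install git+..." appears to target the wrong Python.
print(sys.executable, sys.version_info[:2])

install_requires = ["numpy", "cython", "six >= 1.0.0"]  # build-time imports must be declared
if sys.version_info[:2] < (2, 7):
    install_requires += ["argparse>=1.2.1", "importlib>=1.0.2"]
if os.name == "nt":
    install_requires.append("pyreadline==2.0")

print(install_requires)
```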
gh_patches_debug_15745 | rasdani/github-patches | git_diff | pwndbg__pwndbg-874 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pwndbg gets stuck in GDB 10.1
<!--
Before reporting a new issue, make sure that we do not have any duplicates already open.
If there is one it might be good to take part in the discussion there.
Please make sure you have checked that the issue persists on LATEST pwndbg version.
Below is a template for BUG REPORTS.
Don't include it if this is a FEATURE REQUEST.
-->
### Description
pwndbg gets stuck in GDB 10.1
<!--
Briefly describe the problem you are having in a few paragraphs.
-->
### Steps to reproduce

<!--
What do we have to do to reproduce the problem?
If this is connected to particular C/asm code,
please provide the smallest C code that reproduces the issue.
-->
### My setup
<!--
Show us your gdb/python/pwndbg/OS/IDA Pro version (depending on your case).
NOTE: We are currently supporting only Ubuntu installations.
It is known that pwndbg is not fully working e.g. on Arch Linux (the heap stuff is not working there).
If you would like to change this situation - help us improving pwndbg and supporting other distros!
This can be displayed in pwndbg through `version` command.
If it is somehow unavailable, use:
* `show version` - for gdb
* `py import sys; print(sys.version)` - for python
* pwndbg version/git commit id
-->
```
Gdb: 10.1
Python: 3.8.6 (default, Sep 30 2020, 04:00:38) [GCC 10.2.0]
Pwndbg: 1.1.0
Capstone: 4.0.1024
Unicorn: 1.0.2
```
</issue>
<code>
[start of pwndbg/heap/__init__.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import pwndbg.heap.heap
5 import pwndbg.symbol
6
7 current = None
8
9 heap_chain_limit = pwndbg.config.Parameter('heap-dereference-limit', 8, 'number of bins to dereference')
10
11 @pwndbg.events.new_objfile
12 def update():
13 import pwndbg.heap.dlmalloc
14 import pwndbg.heap.ptmalloc
15
16 global current
17
18
19 if pwndbg.symbol.address('ptmalloc_init'):
20 current = pwndbg.heap.ptmalloc.Heap()
21
22 else:
23 # Default to ptmalloc heap for now until
24 # there are more implementations
25 current = pwndbg.heap.ptmalloc.Heap()
26
[end of pwndbg/heap/__init__.py]
[start of pwndbg/heap/dlmalloc.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import gdb
5
6 import pwndbg.events
7 import pwndbg.typeinfo
8
9
10 class Heap(pwndbg.heap.heap.BaseHeap):
11 pass
12
[end of pwndbg/heap/dlmalloc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pwndbg/heap/__init__.py b/pwndbg/heap/__init__.py
--- a/pwndbg/heap/__init__.py
+++ b/pwndbg/heap/__init__.py
@@ -8,18 +8,8 @@
heap_chain_limit = pwndbg.config.Parameter('heap-dereference-limit', 8, 'number of bins to dereference')
[email protected]_objfile
[email protected]
def update():
- import pwndbg.heap.dlmalloc
import pwndbg.heap.ptmalloc
-
global current
-
-
- if pwndbg.symbol.address('ptmalloc_init'):
- current = pwndbg.heap.ptmalloc.Heap()
-
- else:
- # Default to ptmalloc heap for now until
- # there are more implementations
- current = pwndbg.heap.ptmalloc.Heap()
+ current = pwndbg.heap.ptmalloc.Heap()
diff --git a/pwndbg/heap/dlmalloc.py b/pwndbg/heap/dlmalloc.py
deleted file mode 100644
--- a/pwndbg/heap/dlmalloc.py
+++ /dev/null
@@ -1,11 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-
-import gdb
-
-import pwndbg.events
-import pwndbg.typeinfo
-
-
-class Heap(pwndbg.heap.heap.BaseHeap):
- pass
| {"golden_diff": "diff --git a/pwndbg/heap/__init__.py b/pwndbg/heap/__init__.py\n--- a/pwndbg/heap/__init__.py\n+++ b/pwndbg/heap/__init__.py\n@@ -8,18 +8,8 @@\n \n heap_chain_limit = pwndbg.config.Parameter('heap-dereference-limit', 8, 'number of bins to dereference')\n \[email protected]_objfile\[email protected]\n def update():\n- import pwndbg.heap.dlmalloc\n import pwndbg.heap.ptmalloc\n-\n global current\n-\n-\n- if pwndbg.symbol.address('ptmalloc_init'):\n- current = pwndbg.heap.ptmalloc.Heap()\n-\n- else:\n- # Default to ptmalloc heap for now until\n- # there are more implementations\n- current = pwndbg.heap.ptmalloc.Heap()\n+ current = pwndbg.heap.ptmalloc.Heap()\ndiff --git a/pwndbg/heap/dlmalloc.py b/pwndbg/heap/dlmalloc.py\ndeleted file mode 100644\n--- a/pwndbg/heap/dlmalloc.py\n+++ /dev/null\n@@ -1,11 +0,0 @@\n-#!/usr/bin/env python\n-# -*- coding: utf-8 -*-\n-\n-import gdb\n-\n-import pwndbg.events\n-import pwndbg.typeinfo\n-\n-\n-class Heap(pwndbg.heap.heap.BaseHeap):\n- pass\n", "issue": "pwndbg gets stuck in GDB 10.1\n<!--\r\nBefore reporting a new issue, make sure that we do not have any duplicates already open.\r\nIf there is one it might be good to take part in the discussion there.\r\n\r\nPlease make sure you have checked that the issue persists on LATEST pwndbg version.\r\n\r\nBelow is a template for BUG REPORTS.\r\nDon't include it if this is a FEATURE REQUEST.\r\n-->\r\n\r\n\r\n### Description\r\npwndbg gets stuck in GDB 10.1\r\n<!--\r\nBriefly describe the problem you are having in a few paragraphs.\r\n-->\r\n\r\n### Steps to reproduce\r\n\r\n\r\n<!--\r\nWhat do we have to do to reproduce the problem?\r\nIf this is connected to particular C/asm code, \r\nplease provide the smallest C code that reproduces the issue.\r\n-->\r\n\r\n### My setup\r\n\r\n<!--\r\nShow us your gdb/python/pwndbg/OS/IDA Pro version (depending on your case).\r\n\r\nNOTE: We are currently supporting only Ubuntu installations.\r\nIt is known that pwndbg is not fully working e.g. on Arch Linux (the heap stuff is not working there).\r\nIf you would like to change this situation - help us improving pwndbg and supporting other distros!\r\n\r\nThis can be displayed in pwndbg through `version` command.\r\n\r\nIf it is somehow unavailable, use:\r\n* `show version` - for gdb\r\n* `py import sys; print(sys.version)` - for python\r\n* pwndbg version/git commit id\r\n-->\r\n```\r\nGdb: 10.1\r\nPython: 3.8.6 (default, Sep 30 2020, 04:00:38) [GCC 10.2.0]\r\nPwndbg: 1.1.0\r\nCapstone: 4.0.1024\r\nUnicorn: 1.0.2\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport pwndbg.heap.heap\nimport pwndbg.symbol\n\ncurrent = None\n\nheap_chain_limit = pwndbg.config.Parameter('heap-dereference-limit', 8, 'number of bins to dereference')\n\[email protected]_objfile\ndef update():\n import pwndbg.heap.dlmalloc\n import pwndbg.heap.ptmalloc\n\n global current\n\n\n if pwndbg.symbol.address('ptmalloc_init'):\n current = pwndbg.heap.ptmalloc.Heap()\n\n else:\n # Default to ptmalloc heap for now until\n # there are more implementations\n current = pwndbg.heap.ptmalloc.Heap()\n", "path": "pwndbg/heap/__init__.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport gdb\n\nimport pwndbg.events\nimport pwndbg.typeinfo\n\n\nclass Heap(pwndbg.heap.heap.BaseHeap):\n pass\n", "path": "pwndbg/heap/dlmalloc.py"}]} | 1,313 | 327 |
gh_patches_debug_14934 | rasdani/github-patches | git_diff | hylang__hy-932 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Multiple implementations overwrite each others' entrypoint hooks
It's not uncommon to find Python 2 and Python 3 coexisting on the same system, and it's not unreasonable for the user to also want Hy for both Pythons. However, where Python handles this gracefully by providing separate executables (`python2` and `python3` alongside `python`, `pip2` and `pip3` alongside `pip` etc) Hy simply bulldozes an existing `hy`, `hyc` etc.
Sorry if this has already been reported; I tried to look for previous issues but nothing much came to mind when I was trying to play guess-the-keyword.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # Copyright (c) 2012, 2013 Paul Tagliamonte <[email protected]>
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining a
5 # copy of this software and associated documentation files (the "Software"),
6 # to deal in the Software without restriction, including without limitation
7 # the rights to use, copy, modify, merge, publish, distribute, sublicense,
8 # and/or sell copies of the Software, and to permit persons to whom the
9 # Software is furnished to do so, subject to the following conditions:
10 #
11 # The above copyright notice and this permission notice shall be included in
12 # all copies or substantial portions of the Software.
13 #
14 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
15 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
16 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
17 # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
18 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
19 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
20 # DEALINGS IN THE SOFTWARE.
21
22 import os
23 import re
24 import sys
25
26 from setuptools import find_packages, setup
27
28 PKG = "hy"
29 VERSIONFILE = os.path.join(PKG, "version.py")
30 verstr = "unknown"
31 try:
32 verstrline = open(VERSIONFILE, "rt").read()
33 except EnvironmentError:
34 pass # Okay, there is no version file.
35 else:
36 VSRE = r"^__version__ = ['\"]([^'\"]*)['\"]"
37 mo = re.search(VSRE, verstrline, re.M)
38 if mo:
39 __version__ = mo.group(1)
40 else:
41 msg = "if %s.py exists, it is required to be well-formed" % VERSIONFILE
42 raise RuntimeError(msg)
43
44 long_description = """Hy is a Python <--> Lisp layer. It helps
45 make things work nicer, and lets Python and the Hy lisp variant play
46 nice together. """
47
48 install_requires = ['rply>=0.7.0', 'astor>=0.5', 'clint>=0.4']
49 if sys.version_info[:2] < (2, 7):
50 install_requires.append('argparse>=1.2.1')
51 install_requires.append('importlib>=1.0.2')
52 if os.name == 'nt':
53 install_requires.append('pyreadline==2.0')
54
55 setup(
56 name=PKG,
57 version=__version__,
58 install_requires=install_requires,
59 entry_points={
60 'console_scripts': [
61 'hy = hy.cmdline:hy_main',
62 'hyc = hy.cmdline:hyc_main',
63 'hy2py = hy.cmdline:hy2py_main',
64 ]
65 },
66 packages=find_packages(exclude=['tests*']),
67 package_data={
68 'hy.contrib': ['*.hy'],
69 'hy.core': ['*.hy'],
70 },
71 author="Paul Tagliamonte",
72 author_email="[email protected]",
73 long_description=long_description,
74 description='Lisp and Python love each other.',
75 license="Expat",
76 url="http://hylang.org/",
77 platforms=['any'],
78 classifiers=[
79 "Development Status :: 4 - Beta",
80 "Intended Audience :: Developers",
81 "License :: DFSG approved",
82 "License :: OSI Approved :: MIT License", # Really "Expat". Ugh.
83 "Operating System :: OS Independent",
84 "Programming Language :: Lisp",
85 "Programming Language :: Python",
86 "Programming Language :: Python :: 2",
87 "Programming Language :: Python :: 2.6",
88 "Programming Language :: Python :: 2.7",
89 "Programming Language :: Python :: 3",
90 "Programming Language :: Python :: 3.3",
91 "Programming Language :: Python :: 3.4",
92 "Topic :: Software Development :: Code Generators",
93 "Topic :: Software Development :: Compilers",
94 "Topic :: Software Development :: Libraries",
95 ]
96 )
97
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -52,6 +52,8 @@
if os.name == 'nt':
install_requires.append('pyreadline==2.0')
+ver = sys.version_info[0]
+
setup(
name=PKG,
version=__version__,
@@ -59,8 +61,11 @@
entry_points={
'console_scripts': [
'hy = hy.cmdline:hy_main',
+ 'hy%d = hy.cmdline:hy_main' % ver,
'hyc = hy.cmdline:hyc_main',
+ 'hyc%d = hy.cmdline:hyc_main' % ver,
'hy2py = hy.cmdline:hy2py_main',
+ 'hy2py%d = hy.cmdline:hy2py_main' % ver,
]
},
packages=find_packages(exclude=['tests*']),
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -52,6 +52,8 @@\n if os.name == 'nt':\n install_requires.append('pyreadline==2.0')\n \n+ver = sys.version_info[0]\n+\n setup(\n name=PKG,\n version=__version__,\n@@ -59,8 +61,11 @@\n entry_points={\n 'console_scripts': [\n 'hy = hy.cmdline:hy_main',\n+ 'hy%d = hy.cmdline:hy_main' % ver,\n 'hyc = hy.cmdline:hyc_main',\n+ 'hyc%d = hy.cmdline:hyc_main' % ver,\n 'hy2py = hy.cmdline:hy2py_main',\n+ 'hy2py%d = hy.cmdline:hy2py_main' % ver,\n ]\n },\n packages=find_packages(exclude=['tests*']),\n", "issue": "Multiple implementations overwrite each others' entrypoint hooks\nIt's not uncommon to find Python 2 and Python 3 coexisting on the same system, and it's not unreasonable for the user to also want Hy for both Pythons. However, where Python handles this gracefully by providing separate executables (`python2` and `python3` alongside `python`, `pip2` and `pip3` alongside `pip` etc) Hy simply bulldozes an existing `hy`, `hyc` etc.\n\nSorry if this has already been reported; I tried to look for previous issues but nothing much came to mind when I was trying to play guess-the-keyword.\n\nMultiple implementations overwrite each others' entrypoint hooks\nIt's not uncommon to find Python 2 and Python 3 coexisting on the same system, and it's not unreasonable for the user to also want Hy for both Pythons. However, where Python handles this gracefully by providing separate executables (`python2` and `python3` alongside `python`, `pip2` and `pip3` alongside `pip` etc) Hy simply bulldozes an existing `hy`, `hyc` etc.\n\nSorry if this has already been reported; I tried to look for previous issues but nothing much came to mind when I was trying to play guess-the-keyword.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# Copyright (c) 2012, 2013 Paul Tagliamonte <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport os\nimport re\nimport sys\n\nfrom setuptools import find_packages, setup\n\nPKG = \"hy\"\nVERSIONFILE = os.path.join(PKG, \"version.py\")\nverstr = \"unknown\"\ntry:\n verstrline = open(VERSIONFILE, \"rt\").read()\nexcept EnvironmentError:\n pass # Okay, there is no version file.\nelse:\n VSRE = r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\"\n mo = re.search(VSRE, verstrline, re.M)\n if mo:\n __version__ = mo.group(1)\n else:\n msg = \"if %s.py exists, it is required to be well-formed\" % VERSIONFILE\n raise RuntimeError(msg)\n\nlong_description = \"\"\"Hy is a Python <--> Lisp layer. It helps\nmake things work nicer, and lets Python and the Hy lisp variant play\nnice together. \"\"\"\n\ninstall_requires = ['rply>=0.7.0', 'astor>=0.5', 'clint>=0.4']\nif sys.version_info[:2] < (2, 7):\n install_requires.append('argparse>=1.2.1')\n install_requires.append('importlib>=1.0.2')\nif os.name == 'nt':\n install_requires.append('pyreadline==2.0')\n\nsetup(\n name=PKG,\n version=__version__,\n install_requires=install_requires,\n entry_points={\n 'console_scripts': [\n 'hy = hy.cmdline:hy_main',\n 'hyc = hy.cmdline:hyc_main',\n 'hy2py = hy.cmdline:hy2py_main',\n ]\n },\n packages=find_packages(exclude=['tests*']),\n package_data={\n 'hy.contrib': ['*.hy'],\n 'hy.core': ['*.hy'],\n },\n author=\"Paul Tagliamonte\",\n author_email=\"[email protected]\",\n long_description=long_description,\n description='Lisp and Python love each other.',\n license=\"Expat\",\n url=\"http://hylang.org/\",\n platforms=['any'],\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: DFSG approved\",\n \"License :: OSI Approved :: MIT License\", # Really \"Expat\". Ugh.\n \"Operating System :: OS Independent\",\n \"Programming Language :: Lisp\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.6\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Topic :: Software Development :: Code Generators\",\n \"Topic :: Software Development :: Compilers\",\n \"Topic :: Software Development :: Libraries\",\n ]\n)\n", "path": "setup.py"}]} | 1,867 | 204 |
gh_patches_debug_28507 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-1815 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Worker hangs when it connects to the PS.
The worker waits until the gRPC channel to the PS is ready using `channel_ready_future`. We should set a timeout and retry.

</issue>
<code>
[start of elasticdl/python/worker/main.py]
1 import grpc
2
3 from elasticdl.python.common import log_utils
4 from elasticdl.python.common.args import parse_worker_args
5 from elasticdl.python.common.grpc_utils import build_channel
6 from elasticdl.python.worker.worker import Worker
7
8
9 def main():
10 args = parse_worker_args()
11 logger = log_utils.get_logger(__name__)
12 logger.info("Starting worker %d", args.worker_id)
13 if args.master_addr is None:
14 raise ValueError("master_addr is missing for worker")
15
16 master_channel = build_channel(args.master_addr)
17
18 ps_channels = []
19 if args.ps_addrs:
20 ps_addrs = args.ps_addrs.split(",")
21
22 for addr in ps_addrs:
23 # addr is in the form as "ps-pod-name.namespace.svc:port"
24 channel = build_channel(addr)
25
26 # Wait the channel is ready by a Future object.
27 grpc.channel_ready_future(channel).result()
28 logger.info(
29 "grpc channel %s to connect pod %s is ready"
30 % (addr, addr.split(".")[0])
31 )
32 ps_channels.append(channel)
33
34 worker = Worker(
35 args,
36 channel=master_channel,
37 ps_channels=ps_channels,
38 set_parallelism=True,
39 )
40 worker.run()
41
42
43 if __name__ == "__main__":
44 main()
45
[end of elasticdl/python/worker/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticdl/python/worker/main.py b/elasticdl/python/worker/main.py
--- a/elasticdl/python/worker/main.py
+++ b/elasticdl/python/worker/main.py
@@ -5,6 +5,9 @@
from elasticdl.python.common.grpc_utils import build_channel
from elasticdl.python.worker.worker import Worker
+CONNECT_PS_MAX_RETRIES = 3
+CONNECT_PS_TIMEOUT = 60
+
def main():
args = parse_worker_args()
@@ -23,13 +26,29 @@
# addr is in the form as "ps-pod-name.namespace.svc:port"
channel = build_channel(addr)
- # Wait the channel is ready by a Future object.
- grpc.channel_ready_future(channel).result()
- logger.info(
- "grpc channel %s to connect pod %s is ready"
- % (addr, addr.split(".")[0])
- )
- ps_channels.append(channel)
+ succeeded = False
+ for i in range(CONNECT_PS_MAX_RETRIES):
+ try:
+ grpc.channel_ready_future(channel).result(
+ timeout=CONNECT_PS_TIMEOUT
+ )
+ logger.info(
+ "grpc channel %s to connect pod %s is ready"
+ % (addr, addr.split(".")[0])
+ )
+ ps_channels.append(channel)
+ succeeded = True
+ break
+ except grpc.FutureTimeoutError:
+ logger.warning(
+ "Failed to connect pod %s with %d retry"
+ % (addr.split(".")[0], i)
+ )
+ if not succeeded:
+ raise TimeoutError(
+ "Time out to connect pod %s with 3 retries"
+ % addr.split(".")[0]
+ )
worker = Worker(
args,
| {"golden_diff": "diff --git a/elasticdl/python/worker/main.py b/elasticdl/python/worker/main.py\n--- a/elasticdl/python/worker/main.py\n+++ b/elasticdl/python/worker/main.py\n@@ -5,6 +5,9 @@\n from elasticdl.python.common.grpc_utils import build_channel\n from elasticdl.python.worker.worker import Worker\n \n+CONNECT_PS_MAX_RETRIES = 3\n+CONNECT_PS_TIMEOUT = 60\n+\n \n def main():\n args = parse_worker_args()\n@@ -23,13 +26,29 @@\n # addr is in the form as \"ps-pod-name.namespace.svc:port\"\n channel = build_channel(addr)\n \n- # Wait the channel is ready by a Future object.\n- grpc.channel_ready_future(channel).result()\n- logger.info(\n- \"grpc channel %s to connect pod %s is ready\"\n- % (addr, addr.split(\".\")[0])\n- )\n- ps_channels.append(channel)\n+ succeeded = False\n+ for i in range(CONNECT_PS_MAX_RETRIES):\n+ try:\n+ grpc.channel_ready_future(channel).result(\n+ timeout=CONNECT_PS_TIMEOUT\n+ )\n+ logger.info(\n+ \"grpc channel %s to connect pod %s is ready\"\n+ % (addr, addr.split(\".\")[0])\n+ )\n+ ps_channels.append(channel)\n+ succeeded = True\n+ break\n+ except grpc.FutureTimeoutError:\n+ logger.warning(\n+ \"Failed to connect pod %s with %d retry\"\n+ % (addr.split(\".\")[0], i)\n+ )\n+ if not succeeded:\n+ raise TimeoutError(\n+ \"Time out to connect pod %s with 3 retries\"\n+ % addr.split(\".\")[0]\n+ )\n \n worker = Worker(\n args,\n", "issue": "Worker hangs when connects to PS.\nThe worker will wait the grpc channel of PS is ready using `channel_ready_future`. We should set timeout and retry.\r\n\r\n\n", "before_files": [{"content": "import grpc\n\nfrom elasticdl.python.common import log_utils\nfrom elasticdl.python.common.args import parse_worker_args\nfrom elasticdl.python.common.grpc_utils import build_channel\nfrom elasticdl.python.worker.worker import Worker\n\n\ndef main():\n args = parse_worker_args()\n logger = log_utils.get_logger(__name__)\n logger.info(\"Starting worker %d\", args.worker_id)\n if args.master_addr is None:\n raise ValueError(\"master_addr is missing for worker\")\n\n master_channel = build_channel(args.master_addr)\n\n ps_channels = []\n if args.ps_addrs:\n ps_addrs = args.ps_addrs.split(\",\")\n\n for addr in ps_addrs:\n # addr is in the form as \"ps-pod-name.namespace.svc:port\"\n channel = build_channel(addr)\n\n # Wait the channel is ready by a Future object.\n grpc.channel_ready_future(channel).result()\n logger.info(\n \"grpc channel %s to connect pod %s is ready\"\n % (addr, addr.split(\".\")[0])\n )\n ps_channels.append(channel)\n\n worker = Worker(\n args,\n channel=master_channel,\n ps_channels=ps_channels,\n set_parallelism=True,\n )\n worker.run()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "elasticdl/python/worker/main.py"}]} | 991 | 398 |
gh_patches_debug_31741 | rasdani/github-patches | git_diff | pyro-ppl__pyro-738 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support arbitrary sample_shape in Rejector
This will require the `proposal` to accept an optional `sample_shape` arg and the `log_prob_accept()` method to support broadcasting.
</issue>
<code>
[start of pyro/distributions/rejector.py]
1 from __future__ import absolute_import, division, print_function
2
3 import torch
4 from pyro.distributions.distribution import Distribution
5 from pyro.distributions.score_parts import ScoreParts
6 from pyro.distributions.util import copy_docs_from
7
8
9 @copy_docs_from(Distribution)
10 class Rejector(Distribution):
11 """
12 Rejection sampled distribution given an acceptance rate function.
13
14 :param Distribution propose: A proposal distribution that samples batched
15 propsals via `propose()`.
16 :param callable log_prob_accept: A callable that inputs a batch of
17 proposals and returns a batch of log acceptance probabilities.
18 :param log_scale: Total log probability of acceptance.
19 """
20 stateful = True
21 reparameterized = True
22
23 def __init__(self, propose, log_prob_accept, log_scale):
24 self.propose = propose
25 self.log_prob_accept = log_prob_accept
26 self._log_scale = log_scale
27
28 # These LRU(1) caches allow work to be shared across different method calls.
29 self._log_prob_accept_cache = None, None
30 self._propose_batch_log_pdf_cache = None, None
31
32 def _log_prob_accept(self, x):
33 if x is not self._log_prob_accept_cache[0]:
34 self._log_prob_accept_cache = x, self.log_prob_accept(x) - self._log_scale
35 return self._log_prob_accept_cache[1]
36
37 def _propose_batch_log_pdf(self, x):
38 if x is not self._propose_batch_log_pdf_cache[0]:
39 self._propose_batch_log_pdf_cache = x, self.propose.log_prob(x)
40 return self._propose_batch_log_pdf_cache[1]
41
42 def sample(self, sample_shape=torch.Size()):
43 if sample_shape:
44 raise ValueError("Arbitrary `sample_shape` not supported by Rejector class.")
45 # Implements parallel batched accept-reject sampling.
46 x = self.propose()
47 log_prob_accept = self.log_prob_accept(x)
48 probs = torch.exp(log_prob_accept).clamp_(0.0, 1.0)
49 done = torch.bernoulli(probs).byte()
50 while not done.all():
51 proposed_x = self.propose()
52 log_prob_accept = self.log_prob_accept(proposed_x)
53 prob_accept = torch.exp(log_prob_accept).clamp_(0.0, 1.0)
54 accept = torch.bernoulli(prob_accept).byte() & ~done
55 if accept.any():
56 x[accept] = proposed_x[accept]
57 done |= accept
58 return x
59
60 def log_prob(self, x):
61 return self._propose_batch_log_pdf(x) + self._log_prob_accept(x)
62
63 def score_parts(self, x):
64 score_function = self._log_prob_accept(x)
65 log_pdf = self.log_prob(x)
66 return ScoreParts(log_pdf, score_function, log_pdf)
67
[end of pyro/distributions/rejector.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyro/distributions/rejector.py b/pyro/distributions/rejector.py
--- a/pyro/distributions/rejector.py
+++ b/pyro/distributions/rejector.py
@@ -12,7 +12,8 @@
Rejection sampled distribution given an acceptance rate function.
:param Distribution propose: A proposal distribution that samples batched
- propsals via `propose()`.
+ proposals via ``propose()``. :meth:`sample` supports a ``sample_shape``
+ arg only if ``propose()`` supports a ``sample_shape`` arg.
:param callable log_prob_accept: A callable that inputs a batch of
proposals and returns a batch of log acceptance probabilities.
:param log_scale: Total log probability of acceptance.
@@ -40,15 +41,13 @@
return self._propose_batch_log_pdf_cache[1]
def sample(self, sample_shape=torch.Size()):
- if sample_shape:
- raise ValueError("Arbitrary `sample_shape` not supported by Rejector class.")
# Implements parallel batched accept-reject sampling.
- x = self.propose()
+ x = self.propose(sample_shape) if sample_shape else self.propose()
log_prob_accept = self.log_prob_accept(x)
probs = torch.exp(log_prob_accept).clamp_(0.0, 1.0)
done = torch.bernoulli(probs).byte()
while not done.all():
- proposed_x = self.propose()
+ proposed_x = self.propose(sample_shape) if sample_shape else self.propose()
log_prob_accept = self.log_prob_accept(proposed_x)
prob_accept = torch.exp(log_prob_accept).clamp_(0.0, 1.0)
accept = torch.bernoulli(prob_accept).byte() & ~done
| {"golden_diff": "diff --git a/pyro/distributions/rejector.py b/pyro/distributions/rejector.py\n--- a/pyro/distributions/rejector.py\n+++ b/pyro/distributions/rejector.py\n@@ -12,7 +12,8 @@\n Rejection sampled distribution given an acceptance rate function.\n \n :param Distribution propose: A proposal distribution that samples batched\n- propsals via `propose()`.\n+ proposals via ``propose()``. :meth:`sample` supports a ``sample_shape``\n+ arg only if ``propose()`` supports a ``sample_shape`` arg.\n :param callable log_prob_accept: A callable that inputs a batch of\n proposals and returns a batch of log acceptance probabilities.\n :param log_scale: Total log probability of acceptance.\n@@ -40,15 +41,13 @@\n return self._propose_batch_log_pdf_cache[1]\n \n def sample(self, sample_shape=torch.Size()):\n- if sample_shape:\n- raise ValueError(\"Arbitrary `sample_shape` not supported by Rejector class.\")\n # Implements parallel batched accept-reject sampling.\n- x = self.propose()\n+ x = self.propose(sample_shape) if sample_shape else self.propose()\n log_prob_accept = self.log_prob_accept(x)\n probs = torch.exp(log_prob_accept).clamp_(0.0, 1.0)\n done = torch.bernoulli(probs).byte()\n while not done.all():\n- proposed_x = self.propose()\n+ proposed_x = self.propose(sample_shape) if sample_shape else self.propose()\n log_prob_accept = self.log_prob_accept(proposed_x)\n prob_accept = torch.exp(log_prob_accept).clamp_(0.0, 1.0)\n accept = torch.bernoulli(prob_accept).byte() & ~done\n", "issue": "Support arbitrary sample_shape in Rejector\nThis will require the `proposal` to accept an optional `sample_shape` arg and the `log_prob_accept()` method to support broadcasting.\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport torch\nfrom pyro.distributions.distribution import Distribution\nfrom pyro.distributions.score_parts import ScoreParts\nfrom pyro.distributions.util import copy_docs_from\n\n\n@copy_docs_from(Distribution)\nclass Rejector(Distribution):\n \"\"\"\n Rejection sampled distribution given an acceptance rate function.\n\n :param Distribution propose: A proposal distribution that samples batched\n propsals via `propose()`.\n :param callable log_prob_accept: A callable that inputs a batch of\n proposals and returns a batch of log acceptance probabilities.\n :param log_scale: Total log probability of acceptance.\n \"\"\"\n stateful = True\n reparameterized = True\n\n def __init__(self, propose, log_prob_accept, log_scale):\n self.propose = propose\n self.log_prob_accept = log_prob_accept\n self._log_scale = log_scale\n\n # These LRU(1) caches allow work to be shared across different method calls.\n self._log_prob_accept_cache = None, None\n self._propose_batch_log_pdf_cache = None, None\n\n def _log_prob_accept(self, x):\n if x is not self._log_prob_accept_cache[0]:\n self._log_prob_accept_cache = x, self.log_prob_accept(x) - self._log_scale\n return self._log_prob_accept_cache[1]\n\n def _propose_batch_log_pdf(self, x):\n if x is not self._propose_batch_log_pdf_cache[0]:\n self._propose_batch_log_pdf_cache = x, self.propose.log_prob(x)\n return self._propose_batch_log_pdf_cache[1]\n\n def sample(self, sample_shape=torch.Size()):\n if sample_shape:\n raise ValueError(\"Arbitrary `sample_shape` not supported by Rejector class.\")\n # Implements parallel batched accept-reject sampling.\n x = self.propose()\n log_prob_accept = self.log_prob_accept(x)\n probs = torch.exp(log_prob_accept).clamp_(0.0, 1.0)\n done = 
torch.bernoulli(probs).byte()\n while not done.all():\n proposed_x = self.propose()\n log_prob_accept = self.log_prob_accept(proposed_x)\n prob_accept = torch.exp(log_prob_accept).clamp_(0.0, 1.0)\n accept = torch.bernoulli(prob_accept).byte() & ~done\n if accept.any():\n x[accept] = proposed_x[accept]\n done |= accept\n return x\n\n def log_prob(self, x):\n return self._propose_batch_log_pdf(x) + self._log_prob_accept(x)\n\n def score_parts(self, x):\n score_function = self._log_prob_accept(x)\n log_pdf = self.log_prob(x)\n return ScoreParts(log_pdf, score_function, log_pdf)\n", "path": "pyro/distributions/rejector.py"}]} | 1,321 | 399 |
gh_patches_debug_10405 | rasdani/github-patches | git_diff | e-valuation__EvaP-340 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
A user must never evaluate him/herself
In rare cases a user can be both a participant and a contributor in a course (e.g. a participating student who is also a tutor for the exercises).
The system has to make sure that this user can't answer questions related to him/herself.
</issue>
<code>
[start of evap/student/views.py]
1 from django.contrib import messages
2 from django.core.exceptions import PermissionDenied
3 from django.db import transaction
4 from django.shortcuts import get_object_or_404, redirect, render_to_response
5 from django.template import RequestContext
6 from django.utils.datastructures import SortedDict
7 from django.utils.translation import ugettext as _
8
9 from evap.evaluation.auth import login_required
10 from evap.evaluation.models import Course, Semester
11 from evap.evaluation.tools import questionnaires_and_contributions
12 from evap.student.forms import QuestionsForm
13 from evap.student.tools import make_form_identifier
14
15 from datetime import datetime
16
17
18 @login_required
19 def index(request):
20 # retrieve all courses, which the user can evaluate at some point
21 users_courses = Course.objects.filter(
22 participants=request.user
23 ).exclude(
24 voters=request.user
25 )
26 # split up into current and future courses
27 current_courses = users_courses.filter(state='inEvaluation')
28 future_courses = users_courses.filter(state='approved')
29
30 return render_to_response(
31 "student_index.html",
32 dict(current_courses=current_courses,
33 future_courses=future_courses),
34 context_instance=RequestContext(request))
35
36
37 @login_required
38 def vote(request, course_id):
39 # retrieve course and make sure that the user is allowed to vote
40 course = get_object_or_404(Course, id=course_id)
41 if not course.can_user_vote(request.user):
42 raise PermissionDenied
43
44 # build forms
45 forms = SortedDict()
46 for questionnaire, contribution in questionnaires_and_contributions(course):
47 form = QuestionsForm(request.POST or None, contribution=contribution, questionnaire=questionnaire)
48 forms[(contribution, questionnaire)] = form
49
50 if all(form.is_valid() for form in forms.values()):
51 # begin vote operation
52 with transaction.commit_on_success():
53 for (contribution, questionnaire), form in forms.items():
54 for question in questionnaire.question_set.all():
55 identifier = make_form_identifier(contribution, questionnaire, question)
56 value = form.cleaned_data.get(identifier)
57
58 if type(value) in [str, unicode]:
59 value = value.strip()
60
61 if value == 6: #no answer
62 value = None
63
64 # store the answer if one was given
65 if value:
66 question.answer_class.objects.create(
67 contribution=contribution,
68 question=question,
69 answer=value)
70
71 # remember that the user voted already
72 course.voters.add(request.user)
73
74 messages.add_message(request, messages.INFO, _("Your vote was recorded."))
75 return redirect('evap.student.views.index')
76 else:
77 return render_to_response(
78 "student_vote.html",
79 dict(forms=forms.values(),
80 course=course),
81 context_instance=RequestContext(request))
82
[end of evap/student/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/evap/student/views.py b/evap/student/views.py
--- a/evap/student/views.py
+++ b/evap/student/views.py
@@ -45,6 +45,8 @@
forms = SortedDict()
for questionnaire, contribution in questionnaires_and_contributions(course):
form = QuestionsForm(request.POST or None, contribution=contribution, questionnaire=questionnaire)
+ if form.contribution.contributor == request.user:
+ continue # users shall not vote about themselves
forms[(contribution, questionnaire)] = form
if all(form.is_valid() for form in forms.values()):
| {"golden_diff": "diff --git a/evap/student/views.py b/evap/student/views.py\n--- a/evap/student/views.py\n+++ b/evap/student/views.py\n@@ -45,6 +45,8 @@\n forms = SortedDict()\n for questionnaire, contribution in questionnaires_and_contributions(course):\n form = QuestionsForm(request.POST or None, contribution=contribution, questionnaire=questionnaire)\n+ if form.contribution.contributor == request.user:\n+ continue # users shall not vote about themselves\n forms[(contribution, questionnaire)] = form\n \n if all(form.is_valid() for form in forms.values()):\n", "issue": "A user must never evaluate him/herself\nIn rare cases a user can be participant and contributor in a course (e.g. participating student and tutor for the exercises at the same time).\nThe system has to make sure that this user can't answer questions related to him/herself.\n\n", "before_files": [{"content": "from django.contrib import messages\nfrom django.core.exceptions import PermissionDenied\nfrom django.db import transaction\nfrom django.shortcuts import get_object_or_404, redirect, render_to_response\nfrom django.template import RequestContext\nfrom django.utils.datastructures import SortedDict\nfrom django.utils.translation import ugettext as _\n\nfrom evap.evaluation.auth import login_required\nfrom evap.evaluation.models import Course, Semester\nfrom evap.evaluation.tools import questionnaires_and_contributions\nfrom evap.student.forms import QuestionsForm\nfrom evap.student.tools import make_form_identifier\n\nfrom datetime import datetime\n\n\n@login_required\ndef index(request):\n # retrieve all courses, which the user can evaluate at some point\n users_courses = Course.objects.filter(\n participants=request.user\n ).exclude(\n voters=request.user\n )\n # split up into current and future courses\n current_courses = users_courses.filter(state='inEvaluation')\n future_courses = users_courses.filter(state='approved')\n\n return render_to_response(\n \"student_index.html\",\n dict(current_courses=current_courses,\n future_courses=future_courses),\n context_instance=RequestContext(request))\n\n\n@login_required\ndef vote(request, course_id):\n # retrieve course and make sure that the user is allowed to vote\n course = get_object_or_404(Course, id=course_id)\n if not course.can_user_vote(request.user):\n raise PermissionDenied\n\n # build forms\n forms = SortedDict()\n for questionnaire, contribution in questionnaires_and_contributions(course):\n form = QuestionsForm(request.POST or None, contribution=contribution, questionnaire=questionnaire)\n forms[(contribution, questionnaire)] = form\n\n if all(form.is_valid() for form in forms.values()):\n # begin vote operation\n with transaction.commit_on_success():\n for (contribution, questionnaire), form in forms.items():\n for question in questionnaire.question_set.all():\n identifier = make_form_identifier(contribution, questionnaire, question)\n value = form.cleaned_data.get(identifier)\n\n if type(value) in [str, unicode]:\n value = value.strip()\n\n if value == 6: #no answer\n value = None\n\n # store the answer if one was given\n if value:\n question.answer_class.objects.create(\n contribution=contribution,\n question=question,\n answer=value)\n\n # remember that the user voted already\n course.voters.add(request.user)\n\n messages.add_message(request, messages.INFO, _(\"Your vote was recorded.\"))\n return redirect('evap.student.views.index')\n else:\n return render_to_response(\n \"student_vote.html\",\n dict(forms=forms.values(),\n course=course),\n 
context_instance=RequestContext(request))\n", "path": "evap/student/views.py"}]} | 1,328 | 135 |
gh_patches_debug_5905 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-746 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Deprecation notice about urllib3[secure]
### Description
pyOpenSSL and urllib3[secure] are deprecated in the upcoming release (1.26.12)
https://github.com/urllib3/urllib3/issues/2680
Removed 'urllib3[secure] < 1.25 ; python_version < "3.5"' and 'urllib3[secure] < 2 ; python_version >= "3.5"'
Closes #746
</issue>
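
For context, the change described above boils down to pinning plain `urllib3` instead of the deprecated `urllib3[secure]` extra. A hedged sketch of such a dependency entry follows; the version bounds are illustrative assumptions.

```python
# Illustrative sketch of dependency pins without the deprecated "[secure]" extra.
install_requires = [
    'urllib3 < 2 ; python_version >= "3.5"',  # plain urllib3, no [secure]
    "wrapt>=1.10,<2.0",
]
```
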
<code>
[start of setup.py]
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import os
5 import sys
6
7 from setuptools import Extension, find_packages, setup
8
9 with open("README.md", "r") as fp:
10 long_description = fp.read()
11
12 packages = find_packages("src")
13 if sys.version_info < (3, 6):
14 packages = [p for p in packages if not p.startswith("scout_apm.async_")]
15
16 compile_extensions = (
17 # Python 3+
18 sys.version_info >= (3,)
19 # Not Jython
20 and not sys.platform.startswith("java")
21 # Not PyPy
22 and "__pypy__" not in sys.builtin_module_names
23 # Not explicitly disabled
24 and (os.environ.get("SCOUT_DISABLE_EXTENSIONS", "") == "")
25 )
26 if compile_extensions:
27 ext_modules = [
28 Extension(
29 name=str("scout_apm.core._objtrace"),
30 sources=[str("src/scout_apm/core/_objtrace.c")],
31 optional=True,
32 )
33 ]
34 else:
35 ext_modules = []
36
37 setup(
38 name="scout_apm",
39 version="2.26.1",
40 description="Scout Application Performance Monitoring Agent",
41 long_description=long_description,
42 long_description_content_type="text/markdown",
43 url="https://github.com/scoutapp/scout_apm_python",
44 project_urls={
45 "Documentation": "https://docs.scoutapm.com/#python-agent",
46 "Changelog": (
47 "https://github.com/scoutapp/scout_apm_python/blob/master/CHANGELOG.md"
48 ),
49 },
50 author="Scout",
51 author_email="[email protected]",
52 license="MIT",
53 zip_safe=False,
54 python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4",
55 packages=packages,
56 package_dir={str(""): str("src")},
57 ext_modules=ext_modules,
58 entry_points={
59 "console_scripts": [
60 "core-agent-manager = scout_apm.core.cli.core_agent_manager:main"
61 ]
62 },
63 install_requires=[
64 'asgiref ; python_version >= "3.5"',
65 'contextvars ; python_version >= "3.6" and python_version < "3.7"',
66 'importlib-metadata ; python_version < "3.8"',
67 "psutil>=5,<6",
68 'urllib3[secure] < 1.25 ; python_version < "3.5"',
69 'urllib3[secure] < 2 ; python_version >= "3.5"',
70 "wrapt>=1.10,<2.0",
71 ],
72 keywords=["apm", "performance monitoring", "development"],
73 classifiers=[
74 "Development Status :: 5 - Production/Stable",
75 "Framework :: Bottle",
76 "Framework :: Django",
77 "Framework :: Django :: 1.8",
78 "Framework :: Django :: 1.9",
79 "Framework :: Django :: 1.10",
80 "Framework :: Django :: 1.11",
81 "Framework :: Django :: 2.0",
82 "Framework :: Django :: 2.1",
83 "Framework :: Django :: 2.2",
84 "Framework :: Django :: 3.0",
85 "Framework :: Django :: 3.1",
86 "Framework :: Django :: 3.2",
87 "Framework :: Django :: 4.0",
88 "Framework :: Flask",
89 "Framework :: Pyramid",
90 "Intended Audience :: Developers",
91 "Topic :: System :: Monitoring",
92 "License :: OSI Approved :: MIT License",
93 "Operating System :: MacOS",
94 "Operating System :: POSIX",
95 "Operating System :: POSIX :: Linux",
96 "Programming Language :: Python :: 2",
97 "Programming Language :: Python :: 2.7",
98 "Programming Language :: Python :: 3",
99 "Programming Language :: Python :: 3.4",
100 "Programming Language :: Python :: 3.5",
101 "Programming Language :: Python :: 3.6",
102 "Programming Language :: Python :: 3.7",
103 "Programming Language :: Python :: 3.8",
104 "Programming Language :: Python :: 3.9",
105 "Programming Language :: Python :: 3.10",
106 ],
107 )
108
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -66,7 +66,7 @@
'importlib-metadata ; python_version < "3.8"',
"psutil>=5,<6",
'urllib3[secure] < 1.25 ; python_version < "3.5"',
- 'urllib3[secure] < 2 ; python_version >= "3.5"',
+ 'urllib3 < 2 ; python_version >= "3.5"',
"wrapt>=1.10,<2.0",
],
keywords=["apm", "performance monitoring", "development"],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -66,7 +66,7 @@\n 'importlib-metadata ; python_version < \"3.8\"',\n \"psutil>=5,<6\",\n 'urllib3[secure] < 1.25 ; python_version < \"3.5\"',\n- 'urllib3[secure] < 2 ; python_version >= \"3.5\"',\n+ 'urllib3 < 2 ; python_version >= \"3.5\"',\n \"wrapt>=1.10,<2.0\",\n ],\n keywords=[\"apm\", \"performance monitoring\", \"development\"],\n", "issue": "Deprecation notice about urllib3[secure]\n ### Description\r\n\r\n pyOpenSSL and urllib3[secure] are deprecated in the upcoming release (1.26.12)\r\n https://github.com/urllib3/urllib3/issues/2680\r\n Removed 'urllib3[secure] < 1.25 ; python_version < \"3.5\"' and 'urllib3[secure] < 2 ; python_version >= \"3.5\"'\r\n\r\nCloses #746\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport os\nimport sys\n\nfrom setuptools import Extension, find_packages, setup\n\nwith open(\"README.md\", \"r\") as fp:\n long_description = fp.read()\n\npackages = find_packages(\"src\")\nif sys.version_info < (3, 6):\n packages = [p for p in packages if not p.startswith(\"scout_apm.async_\")]\n\ncompile_extensions = (\n # Python 3+\n sys.version_info >= (3,)\n # Not Jython\n and not sys.platform.startswith(\"java\")\n # Not PyPy\n and \"__pypy__\" not in sys.builtin_module_names\n # Not explicitly disabled\n and (os.environ.get(\"SCOUT_DISABLE_EXTENSIONS\", \"\") == \"\")\n)\nif compile_extensions:\n ext_modules = [\n Extension(\n name=str(\"scout_apm.core._objtrace\"),\n sources=[str(\"src/scout_apm/core/_objtrace.c\")],\n optional=True,\n )\n ]\nelse:\n ext_modules = []\n\nsetup(\n name=\"scout_apm\",\n version=\"2.26.1\",\n description=\"Scout Application Performance Monitoring Agent\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/scoutapp/scout_apm_python\",\n project_urls={\n \"Documentation\": \"https://docs.scoutapm.com/#python-agent\",\n \"Changelog\": (\n \"https://github.com/scoutapp/scout_apm_python/blob/master/CHANGELOG.md\"\n ),\n },\n author=\"Scout\",\n author_email=\"[email protected]\",\n license=\"MIT\",\n zip_safe=False,\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4\",\n packages=packages,\n package_dir={str(\"\"): str(\"src\")},\n ext_modules=ext_modules,\n entry_points={\n \"console_scripts\": [\n \"core-agent-manager = scout_apm.core.cli.core_agent_manager:main\"\n ]\n },\n install_requires=[\n 'asgiref ; python_version >= \"3.5\"',\n 'contextvars ; python_version >= \"3.6\" and python_version < \"3.7\"',\n 'importlib-metadata ; python_version < \"3.8\"',\n \"psutil>=5,<6\",\n 'urllib3[secure] < 1.25 ; python_version < \"3.5\"',\n 'urllib3[secure] < 2 ; python_version >= \"3.5\"',\n \"wrapt>=1.10,<2.0\",\n ],\n keywords=[\"apm\", \"performance monitoring\", \"development\"],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Framework :: Bottle\",\n \"Framework :: Django\",\n \"Framework :: Django :: 1.8\",\n \"Framework :: Django :: 1.9\",\n \"Framework :: Django :: 1.10\",\n \"Framework :: Django :: 1.11\",\n \"Framework :: Django :: 2.0\",\n \"Framework :: Django :: 2.1\",\n \"Framework :: Django :: 2.2\",\n \"Framework :: Django :: 3.0\",\n \"Framework :: Django :: 3.1\",\n \"Framework :: Django :: 3.2\",\n \"Framework :: Django :: 4.0\",\n \"Framework :: Flask\",\n \"Framework :: Pyramid\",\n \"Intended Audience :: Developers\",\n \"Topic :: System :: 
Monitoring\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: MacOS\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n ],\n)\n", "path": "setup.py"}]} | 1,785 | 147 |
gh_patches_debug_31075 | rasdani/github-patches | git_diff | databricks__koalas-104 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix pypi description
See https://pypi.org/project/databricks-koalas/
It just pulls in our entire README in markdown format.
</issue>
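
A common alternative, sketched here for illustration, is to hand PyPI a short, hand-written summary instead of the raw README. The wording and field values below are placeholders, not the project's actual text.

```python
from setuptools import setup

# Illustrative sketch only: a curated PyPI description instead of README.md.
DESCRIPTION = "Pandas DataFrame API on Apache Spark"
LONG_DESCRIPTION = (
    "Koalas augments Apache Spark's Python DataFrame API to be compatible "
    "with pandas, without pulling badges or relative links from the README."
)

setup(
    name="koalas",
    description=DESCRIPTION,
    long_description=LONG_DESCRIPTION,
)
```
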
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 #
4 # Copyright (C) 2019 Databricks, Inc.
5 #
6 # Licensed under the Apache License, Version 2.0 (the "License");
7 # you may not use this file except in compliance with the License.
8 # You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing, software
13 # distributed under the License is distributed on an "AS IS" BASIS,
14 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
17 #
18
19 from setuptools import setup
20
21
22 install_requires = [
23 'pandas>=0.23',
24 'decorator',
25 'pyarrow>=0.10,<0.11', # See https://github.com/databricks/spark-pandas/issues/26
26 ]
27
28 setup(
29 name='databricks-koalas',
30 version='0.0.6',
31 packages=['databricks', 'databricks.koalas', 'databricks.koalas.dask',
32 'databricks.koalas.missing'],
33 extras_require={
34 'spark': ['pyspark>=2.4.0'],
35 },
36 install_requires=install_requires,
37 author="Timothy Hunter",
38 author_email="[email protected]",
39 license='http://www.apache.org/licenses/LICENSE-2.0',
40 long_description=open('README.md').read(),
41 )
42
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,24 +18,46 @@
from setuptools import setup
+DESCRIPTION = "Pandas DataFrame API on Apache Spark"
-install_requires = [
- 'pandas>=0.23',
- 'decorator',
- 'pyarrow>=0.10,<0.11', # See https://github.com/databricks/spark-pandas/issues/26
-]
+LONG_DESCRIPTION = """
+Koalas makes data scientists more productive when interacting with big data,
+by augmenting Apache Spark's Python DataFrame API to be compatible with
+Pandas'.
+
+Pandas is the de facto standard (single-node) dataframe implementation in
+Python, while Spark is the de facto standard for big data processing.
+With this package, data scientists can:
+
+- Be immediately productive with Spark, with no learning curve, if one
+ is already familiar with Pandas.
+- Have a single codebase that works both with Pandas (tests, smaller datasets)
+ and with Spark (distributed datasets).
+"""
setup(
- name='databricks-koalas',
+ name='koalas',
version='0.0.6',
packages=['databricks', 'databricks.koalas', 'databricks.koalas.dask',
'databricks.koalas.missing'],
extras_require={
'spark': ['pyspark>=2.4.0'],
},
- install_requires=install_requires,
- author="Timothy Hunter",
- author_email="[email protected]",
+ python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*',
+ install_requires=[
+ 'pandas>=0.23',
+ 'decorator',
+ 'pyarrow>=0.10,<0.11', # See https://github.com/databricks/spark-pandas/issues/26
+ ],
+ maintainer="Databricks",
+ maintainer_email="[email protected]",
license='http://www.apache.org/licenses/LICENSE-2.0',
- long_description=open('README.md').read(),
+ url="https://github.com/databricks/spark-pandas",
+ project_urls={
+ 'Bug Tracker': 'https://github.com/databricks/spark-pandas/issues',
+ # 'Documentation': '',
+ 'Source Code': 'https://github.com/databricks/spark-pandas'
+ },
+ description=DESCRIPTION,
+ long_description=LONG_DESCRIPTION,
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -18,24 +18,46 @@\n \n from setuptools import setup\n \n+DESCRIPTION = \"Pandas DataFrame API on Apache Spark\"\n \n-install_requires = [\n- 'pandas>=0.23',\n- 'decorator',\n- 'pyarrow>=0.10,<0.11', # See https://github.com/databricks/spark-pandas/issues/26\n-]\n+LONG_DESCRIPTION = \"\"\"\n+Koalas makes data scientists more productive when interacting with big data,\n+by augmenting Apache Spark's Python DataFrame API to be compatible with\n+Pandas'.\n+\n+Pandas is the de facto standard (single-node) dataframe implementation in\n+Python, while Spark is the de facto standard for big data processing.\n+With this package, data scientists can:\n+\n+- Be immediately productive with Spark, with no learning curve, if one\n+ is already familiar with Pandas.\n+- Have a single codebase that works both with Pandas (tests, smaller datasets)\n+ and with Spark (distributed datasets).\n+\"\"\"\n \n setup(\n- name='databricks-koalas',\n+ name='koalas',\n version='0.0.6',\n packages=['databricks', 'databricks.koalas', 'databricks.koalas.dask',\n 'databricks.koalas.missing'],\n extras_require={\n 'spark': ['pyspark>=2.4.0'],\n },\n- install_requires=install_requires,\n- author=\"Timothy Hunter\",\n- author_email=\"[email protected]\",\n+ python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*',\n+ install_requires=[\n+ 'pandas>=0.23',\n+ 'decorator',\n+ 'pyarrow>=0.10,<0.11', # See https://github.com/databricks/spark-pandas/issues/26\n+ ],\n+ maintainer=\"Databricks\",\n+ maintainer_email=\"[email protected]\",\n license='http://www.apache.org/licenses/LICENSE-2.0',\n- long_description=open('README.md').read(),\n+ url=\"https://github.com/databricks/spark-pandas\",\n+ project_urls={\n+ 'Bug Tracker': 'https://github.com/databricks/spark-pandas/issues',\n+ # 'Documentation': '',\n+ 'Source Code': 'https://github.com/databricks/spark-pandas'\n+ },\n+ description=DESCRIPTION,\n+ long_description=LONG_DESCRIPTION,\n )\n", "issue": "Fix pypi description\nSee https://pypi.org/project/databricks-koalas/\r\n\r\nIt just pulls in our entire README in markdown format.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n#\n# Copyright (C) 2019 Databricks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\nfrom setuptools import setup\n\n\ninstall_requires = [\n 'pandas>=0.23',\n 'decorator',\n 'pyarrow>=0.10,<0.11', # See https://github.com/databricks/spark-pandas/issues/26\n]\n\nsetup(\n name='databricks-koalas',\n version='0.0.6',\n packages=['databricks', 'databricks.koalas', 'databricks.koalas.dask',\n 'databricks.koalas.missing'],\n extras_require={\n 'spark': ['pyspark>=2.4.0'],\n },\n install_requires=install_requires,\n author=\"Timothy Hunter\",\n author_email=\"[email protected]\",\n license='http://www.apache.org/licenses/LICENSE-2.0',\n long_description=open('README.md').read(),\n)\n", "path": "setup.py"}]} | 979 | 598 |
gh_patches_debug_31276 | rasdani/github-patches | git_diff | pulp__pulpcore-3857 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Global tasks like repair or reclaim should probably not run in parallel
**Version**
Please provide the versions of the pulpcore and plugin packages in use, and how they are installed. If you are using Pulp via Katello, please provide the Katello version.
**Describe the bug**
Global tasks that involve all repos, like repair or reclaim, should run one at a time, like orphan cleanup does (https://github.com/pulp/pulpcore/blob/main/pulpcore/app/viewsets/orphans.py#L29).
**To Reproduce**
Steps to reproduce the behavior:
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here. Please provide links to any previous discussions via Discourse or Bugzilla.
</issue>
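
To make the suggestion concrete, here is a rough sketch of the orphan-cleanup pattern the report links to: dispatch the task with an exclusive resource so concurrent requests queue up instead of running in parallel. The lock URI and the surrounding helper are illustrative assumptions, not the final implementation.

```python
from pulpcore.app.tasks import reclaim_space
from pulpcore.tasking.tasks import dispatch

# Illustrative sketch only -- the lock URI below is an assumed value.
def dispatch_reclaim_serialized(repo_pks, keeplist_rv_pks):
    return dispatch(
        reclaim_space,
        # An exclusive resource makes a second reclaim wait for the first,
        # the same idea orphan cleanup uses for its global lock.
        exclusive_resources=["/api/v3/repositories/reclaim_space/"],
        kwargs={"repo_pks": repo_pks, "keeplist_rv_pks": keeplist_rv_pks},
    )
```
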
<code>
[start of pulpcore/app/views/repair.py]
1 from drf_spectacular.utils import extend_schema
2 from rest_framework.views import APIView
3
4 from pulpcore.app.response import OperationPostponedResponse
5 from pulpcore.app.serializers import AsyncOperationResponseSerializer, RepairSerializer
6 from pulpcore.app.tasks import repair_all_artifacts
7 from pulpcore.tasking.tasks import dispatch
8
9
10 class RepairView(APIView):
11 @extend_schema(
12 description=(
13 "Trigger an asynchronous task that checks for missing "
14 "or corrupted artifacts, and attempts to redownload them."
15 ),
16 summary="Repair Artifact Storage",
17 request=RepairSerializer,
18 responses={202: AsyncOperationResponseSerializer},
19 )
20 def post(self, request):
21 """
22 Repair artifacts.
23 """
24 serializer = RepairSerializer(data=request.data)
25 serializer.is_valid(raise_exception=True)
26
27 verify_checksums = serializer.validated_data["verify_checksums"]
28
29 task = dispatch(repair_all_artifacts, args=[verify_checksums])
30
31 return OperationPostponedResponse(task, request)
32
[end of pulpcore/app/views/repair.py]
[start of pulpcore/app/viewsets/reclaim.py]
1 from drf_spectacular.utils import extend_schema
2 from rest_framework.viewsets import ViewSet
3
4 from pulpcore.app.response import OperationPostponedResponse
5 from pulpcore.app.serializers import AsyncOperationResponseSerializer, ReclaimSpaceSerializer
6 from pulpcore.app.tasks import reclaim_space
7 from pulpcore.tasking.tasks import dispatch
8
9
10 class ReclaimSpaceViewSet(ViewSet):
11 """
12 Viewset for reclaim disk space endpoint.
13 """
14
15 serializer_class = ReclaimSpaceSerializer
16
17 @extend_schema(
18 description="Trigger an asynchronous space reclaim operation.",
19 responses={202: AsyncOperationResponseSerializer},
20 )
21 def reclaim(self, request):
22 """
23 Triggers an asynchronous space reclaim operation.
24 """
25 serializer = ReclaimSpaceSerializer(data=request.data)
26 serializer.is_valid(raise_exception=True)
27
28 repos = serializer.validated_data.get("repo_hrefs", [])
29 keeplist = serializer.validated_data.get("repo_versions_keeplist", [])
30 reclaim_repo_pks = []
31 keeplist_rv_pks = []
32 for repo in repos:
33 reclaim_repo_pks.append(repo.pk)
34 for rv in keeplist:
35 repos.append(rv.repository)
36 keeplist_rv_pks.append(rv.pk)
37
38 task = dispatch(
39 reclaim_space,
40 shared_resources=repos,
41 kwargs={
42 "repo_pks": reclaim_repo_pks,
43 "keeplist_rv_pks": keeplist_rv_pks,
44 },
45 )
46
47 return OperationPostponedResponse(task, request)
48
[end of pulpcore/app/viewsets/reclaim.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pulpcore/app/views/repair.py b/pulpcore/app/views/repair.py
--- a/pulpcore/app/views/repair.py
+++ b/pulpcore/app/views/repair.py
@@ -1,4 +1,5 @@
from drf_spectacular.utils import extend_schema
+from django.conf import settings
from rest_framework.views import APIView
from pulpcore.app.response import OperationPostponedResponse
@@ -26,6 +27,9 @@
verify_checksums = serializer.validated_data["verify_checksums"]
- task = dispatch(repair_all_artifacts, args=[verify_checksums])
+ uri = "/api/v3/repair/"
+ if settings.DOMAIN_ENABLED:
+ uri = f"/{request.pulp_domain.name}{uri}"
+ task = dispatch(repair_all_artifacts, exclusive_resources=[uri], args=[verify_checksums])
return OperationPostponedResponse(task, request)
diff --git a/pulpcore/app/viewsets/reclaim.py b/pulpcore/app/viewsets/reclaim.py
--- a/pulpcore/app/viewsets/reclaim.py
+++ b/pulpcore/app/viewsets/reclaim.py
@@ -1,4 +1,5 @@
from drf_spectacular.utils import extend_schema
+from django.conf import settings
from rest_framework.viewsets import ViewSet
from pulpcore.app.response import OperationPostponedResponse
@@ -35,8 +36,17 @@
repos.append(rv.repository)
keeplist_rv_pks.append(rv.pk)
+ if repos:
+ exclusive_resources = None
+ else:
+ uri = "/api/v3/repositories/reclaim_space/"
+ if settings.DOMAIN_ENABLED:
+ uri = f"/{request.pulp_domain.name}{uri}"
+ exclusive_resources = [uri]
+
task = dispatch(
reclaim_space,
+ exclusive_resources=exclusive_resources,
shared_resources=repos,
kwargs={
"repo_pks": reclaim_repo_pks,
| {"golden_diff": "diff --git a/pulpcore/app/views/repair.py b/pulpcore/app/views/repair.py\n--- a/pulpcore/app/views/repair.py\n+++ b/pulpcore/app/views/repair.py\n@@ -1,4 +1,5 @@\n from drf_spectacular.utils import extend_schema\n+from django.conf import settings\n from rest_framework.views import APIView\n \n from pulpcore.app.response import OperationPostponedResponse\n@@ -26,6 +27,9 @@\n \n verify_checksums = serializer.validated_data[\"verify_checksums\"]\n \n- task = dispatch(repair_all_artifacts, args=[verify_checksums])\n+ uri = \"/api/v3/repair/\"\n+ if settings.DOMAIN_ENABLED:\n+ uri = f\"/{request.pulp_domain.name}{uri}\"\n+ task = dispatch(repair_all_artifacts, exclusive_resources=[uri], args=[verify_checksums])\n \n return OperationPostponedResponse(task, request)\ndiff --git a/pulpcore/app/viewsets/reclaim.py b/pulpcore/app/viewsets/reclaim.py\n--- a/pulpcore/app/viewsets/reclaim.py\n+++ b/pulpcore/app/viewsets/reclaim.py\n@@ -1,4 +1,5 @@\n from drf_spectacular.utils import extend_schema\n+from django.conf import settings\n from rest_framework.viewsets import ViewSet\n \n from pulpcore.app.response import OperationPostponedResponse\n@@ -35,8 +36,17 @@\n repos.append(rv.repository)\n keeplist_rv_pks.append(rv.pk)\n \n+ if repos:\n+ exclusive_resources = None\n+ else:\n+ uri = \"/api/v3/repositories/reclaim_space/\"\n+ if settings.DOMAIN_ENABLED:\n+ uri = f\"/{request.pulp_domain.name}{uri}\"\n+ exclusive_resources = [uri]\n+\n task = dispatch(\n reclaim_space,\n+ exclusive_resources=exclusive_resources,\n shared_resources=repos,\n kwargs={\n \"repo_pks\": reclaim_repo_pks,\n", "issue": "Global tasks like repair or reclaim should probably not run in parallel\n**Version**\r\nPlease provide the versions of the pulpcore and plugin packages in use, and how they are installed. If you are using Pulp via Katello, please provide the Katello version.\r\n\r\n**Describe the bug**\r\nGlobal tasks that involve all repos like repair or reclaim should run one at a time, like orphan cleanup https://github.com/pulp/pulpcore/blob/main/pulpcore/app/viewsets/orphans.py#L29\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n\r\n**Expected behavior**\r\nA clear and concise description of what you expected to happen.\r\n\r\n**Additional context**\r\nAdd any other context about the problem here. 
Please provide links to any previous discussions via Discourse or Bugzilla.\r\n\n", "before_files": [{"content": "from drf_spectacular.utils import extend_schema\nfrom rest_framework.views import APIView\n\nfrom pulpcore.app.response import OperationPostponedResponse\nfrom pulpcore.app.serializers import AsyncOperationResponseSerializer, RepairSerializer\nfrom pulpcore.app.tasks import repair_all_artifacts\nfrom pulpcore.tasking.tasks import dispatch\n\n\nclass RepairView(APIView):\n @extend_schema(\n description=(\n \"Trigger an asynchronous task that checks for missing \"\n \"or corrupted artifacts, and attempts to redownload them.\"\n ),\n summary=\"Repair Artifact Storage\",\n request=RepairSerializer,\n responses={202: AsyncOperationResponseSerializer},\n )\n def post(self, request):\n \"\"\"\n Repair artifacts.\n \"\"\"\n serializer = RepairSerializer(data=request.data)\n serializer.is_valid(raise_exception=True)\n\n verify_checksums = serializer.validated_data[\"verify_checksums\"]\n\n task = dispatch(repair_all_artifacts, args=[verify_checksums])\n\n return OperationPostponedResponse(task, request)\n", "path": "pulpcore/app/views/repair.py"}, {"content": "from drf_spectacular.utils import extend_schema\nfrom rest_framework.viewsets import ViewSet\n\nfrom pulpcore.app.response import OperationPostponedResponse\nfrom pulpcore.app.serializers import AsyncOperationResponseSerializer, ReclaimSpaceSerializer\nfrom pulpcore.app.tasks import reclaim_space\nfrom pulpcore.tasking.tasks import dispatch\n\n\nclass ReclaimSpaceViewSet(ViewSet):\n \"\"\"\n Viewset for reclaim disk space endpoint.\n \"\"\"\n\n serializer_class = ReclaimSpaceSerializer\n\n @extend_schema(\n description=\"Trigger an asynchronous space reclaim operation.\",\n responses={202: AsyncOperationResponseSerializer},\n )\n def reclaim(self, request):\n \"\"\"\n Triggers an asynchronous space reclaim operation.\n \"\"\"\n serializer = ReclaimSpaceSerializer(data=request.data)\n serializer.is_valid(raise_exception=True)\n\n repos = serializer.validated_data.get(\"repo_hrefs\", [])\n keeplist = serializer.validated_data.get(\"repo_versions_keeplist\", [])\n reclaim_repo_pks = []\n keeplist_rv_pks = []\n for repo in repos:\n reclaim_repo_pks.append(repo.pk)\n for rv in keeplist:\n repos.append(rv.repository)\n keeplist_rv_pks.append(rv.pk)\n\n task = dispatch(\n reclaim_space,\n shared_resources=repos,\n kwargs={\n \"repo_pks\": reclaim_repo_pks,\n \"keeplist_rv_pks\": keeplist_rv_pks,\n },\n )\n\n return OperationPostponedResponse(task, request)\n", "path": "pulpcore/app/viewsets/reclaim.py"}]} | 1,404 | 434 |
gh_patches_debug_6402 | rasdani/github-patches | git_diff | engnadeau__pybotics-425 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Strip links/badges/images from README for PyPi
- While the new PyPi Warehouse supports markdown, images/badges/links are slow to load and will not work if they use relative paths (e.g., https://test.pypi.org/project/pybotics/201803222157/#description)
- These elements should be stripped from the README prior to upload
</issue>
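
One possible approach, sketched for illustration only: filter badge and image markup out of the README text before passing it to `setup()`. The regular expressions below are assumptions about what counts as a badge, not the project's chosen rules.

```python
import re

# Illustrative sketch: drop markdown badges/images from README text before
# using it as the PyPI long description.
def strip_readme_for_pypi(text: str) -> str:
    # linked badges: [](https://ci.example.org)
    text = re.sub(r"\[!\[[^\]]*\]\([^)]*\)\]\([^)]*\)", "", text)
    # plain images: 

    text = re.sub(r"!\[[^\]]*\]\([^)]*\)", "", text)
    # collapse the blank lines left behind
    text = re.sub(r"\n{3,}", "\n\n", text)
    return text.strip()


if __name__ == "__main__":
    sample = "# pybotics\n\n[](https://ci.example.org)\n\nPython Toolbox for Robotics\n"
    print(strip_readme_for_pypi(sample))
```
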
<code>
[start of setup.py]
1 """Setup module."""
2 import logging
3 from pathlib import Path
4
5 from setuptools import find_packages, setup # type: ignore
6
7
8 def main() -> None:
9 """Run setup."""
10 # run setup
11 setup(name='pybotics',
12 packages=find_packages(include=['pybotics']),
13 url='https://github.com/nnadeau/pybotics',
14 license='MIT',
15 author='Nicholas Nadeau',
16 author_email='[email protected]',
17 description='Python Toolbox for Robotics',
18 long_description=get_readme(),
19 long_description_content_type='text/markdown',
20 use_scm_version=True,
21 setup_requires=['setuptools_scm'],
22 install_requires=get_requirements(), # type: ignore
23 tests_require=['pytest'],
24 classifiers=[
25 'Development Status :: 4 - Beta',
26 'Intended Audience :: Developers',
27 'Intended Audience :: Education',
28 'Intended Audience :: End Users/Desktop',
29 'Intended Audience :: Manufacturing',
30 'Intended Audience :: Science/Research',
31 'Topic :: Education',
32 'Topic :: Scientific/Engineering',
33 'Topic :: Scientific/Engineering :: Artificial Intelligence',
34 'Topic :: Scientific/Engineering :: Human Machine Interfaces',
35 'Topic :: Scientific/Engineering :: Mathematics',
36 'Topic :: Scientific/Engineering :: Physics',
37 'Topic :: Utilities',
38 'License :: OSI Approved :: MIT License',
39 'Programming Language :: Python :: 3 :: Only',
40 'Programming Language :: Python :: 3',
41 'Programming Language :: Python :: 3.4',
42 'Programming Language :: Python :: 3.5',
43 'Programming Language :: Python :: 3.6',
44 ],
45 keywords='python robot robotics research '
46 'automation kinematics geometry')
47
48
49 def get_readme() -> str:
50 """Get README text."""
51 # description
52 readme_path = Path(__file__).parent / 'README.md'
53 logging.info('README path: {}'.format(readme_path.resolve()))
54 with open(str(readme_path)) as f:
55 readme = f.read()
56 return readme
57
58
59 # don't want to import typing... so ignore
60 def get_requirements(): # type: ignore
61 """Get requirements list."""
62 # requirements
63 requirements_path = Path(__file__).parent / 'requirements.txt'
64 logging.info('Requirements path: {}'.format(requirements_path.resolve()))
65 with open(str(requirements_path)) as f:
66 requirements = f.read().splitlines()
67 for i, req in enumerate(requirements):
68 requirements[i] = req.split()[0]
69 return requirements
70
71
72 if __name__ == '__main__':
73 logging.basicConfig(level=logging.INFO)
74 main()
75
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -18,7 +18,7 @@
long_description=get_readme(),
long_description_content_type='text/markdown',
use_scm_version=True,
- setup_requires=['setuptools_scm'],
+ setup_requires=['setuptools', 'setuptools_scm'],
install_requires=get_requirements(), # type: ignore
tests_require=['pytest'],
classifiers=[
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -18,7 +18,7 @@\n long_description=get_readme(),\n long_description_content_type='text/markdown',\n use_scm_version=True,\n- setup_requires=['setuptools_scm'],\n+ setup_requires=['setuptools', 'setuptools_scm'],\n install_requires=get_requirements(), # type: ignore\n tests_require=['pytest'],\n classifiers=[\n", "issue": "Strip links/badges/images from README for PyPi\n- While the new PyPi Warehouse supports markdown, images/badges/links are slow to load and will not work if they use relative paths (e.g., https://test.pypi.org/project/pybotics/201803222157/#description)\r\n- These elements should be stripped from the README prior to upload\n", "before_files": [{"content": "\"\"\"Setup module.\"\"\"\nimport logging\nfrom pathlib import Path\n\nfrom setuptools import find_packages, setup # type: ignore\n\n\ndef main() -> None:\n \"\"\"Run setup.\"\"\"\n # run setup\n setup(name='pybotics',\n packages=find_packages(include=['pybotics']),\n url='https://github.com/nnadeau/pybotics',\n license='MIT',\n author='Nicholas Nadeau',\n author_email='[email protected]',\n description='Python Toolbox for Robotics',\n long_description=get_readme(),\n long_description_content_type='text/markdown',\n use_scm_version=True,\n setup_requires=['setuptools_scm'],\n install_requires=get_requirements(), # type: ignore\n tests_require=['pytest'],\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: End Users/Desktop',\n 'Intended Audience :: Manufacturing',\n 'Intended Audience :: Science/Research',\n 'Topic :: Education',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Scientific/Engineering :: Human Machine Interfaces',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Scientific/Engineering :: Physics',\n 'Topic :: Utilities',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 3 :: Only',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n keywords='python robot robotics research '\n 'automation kinematics geometry')\n\n\ndef get_readme() -> str:\n \"\"\"Get README text.\"\"\"\n # description\n readme_path = Path(__file__).parent / 'README.md'\n logging.info('README path: {}'.format(readme_path.resolve()))\n with open(str(readme_path)) as f:\n readme = f.read()\n return readme\n\n\n# don't want to import typing... so ignore\ndef get_requirements(): # type: ignore\n \"\"\"Get requirements list.\"\"\"\n # requirements\n requirements_path = Path(__file__).parent / 'requirements.txt'\n logging.info('Requirements path: {}'.format(requirements_path.resolve()))\n with open(str(requirements_path)) as f:\n requirements = f.read().splitlines()\n for i, req in enumerate(requirements):\n requirements[i] = req.split()[0]\n return requirements\n\n\nif __name__ == '__main__':\n logging.basicConfig(level=logging.INFO)\n main()\n", "path": "setup.py"}]} | 1,317 | 102 |
gh_patches_debug_32678 | rasdani/github-patches | git_diff | raspiblitz__raspiblitz-1227 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Move BlitzTUI log to RAM disk
The RAM disk (`/var/cache/raspiblitz/`) is a nice way to avoid reads and writes to either the SD card or the external disk for non-persistent data. Several things can be moved; the BlitzTUI log should be a prime example.
</issue>
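
A minimal sketch of the idea, assuming the RAM disk mount point named above; the per-user subdirectory and the home-directory fallback are illustrative choices rather than the final BlitzTUI behaviour.

```python
import getpass
import os

# Illustrative sketch: prefer the RAM disk for the non-persistent BlitzTUI log,
# fall back to the home directory when the RAM disk is not mounted.
def blitz_tui_log_path() -> str:
    ram_disk = "/var/cache/raspiblitz"
    if os.path.isdir(ram_disk):
        user_dir = os.path.join(ram_disk, getpass.getuser())
        os.makedirs(user_dir, exist_ok=True)
        return os.path.join(user_dir, "blitz-tui.log")
    return os.path.abspath(os.path.expanduser("~/blitz-tui.log"))
```
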
<code>
[start of home.admin/BlitzTUI/blitztui/version.py]
1 """ Store the version here so:
2 # 1) we don't load dependencies by storing it in __init__.py
3 # 2) we can import it in setup.py for the same reason
4 # 3) we can import it into your module module
5 """
6
7 __version_info__ = ('0', '47', '0')
8 __version__ = '.'.join(__version_info__)
9
[end of home.admin/BlitzTUI/blitztui/version.py]
[start of home.admin/BlitzTUI/blitztui/file_logger.py]
1 import json
2 import logging
3 import logging.config
4 import os
5 import sys
6
7 IS_WIN32_ENV = sys.platform == "win32"
8
9
10 def setup_logging(default_path=os.path.abspath(os.path.expanduser('~/.blitz-tui.json')), log_level="INFO"):
11 """Setup logging configuration"""
12 path = default_path
13 if os.path.exists(path):
14 with open(path, 'rt') as f:
15 config = json.load(f)
16 logging.config.dictConfig(config)
17
18 else: # if $default_path does not exist use the following default log setup
19
20 if IS_WIN32_ENV:
21 log_file = "blitz-tui.log"
22 else:
23 log_file = os.path.abspath(os.path.expanduser('~/blitz-tui.log'))
24
25 default_config_as_dict = dict(
26 version=1,
27 disable_existing_loggers=False,
28 formatters={'simple': {'format': '%(asctime)s - %(levelname)s - %(message)s'},
29 'extended': {
30 'format': '%(asctime)s - %(name)s - %(levelname)s - %(module)s:%(lineno)d - %(message)s'}},
31 handlers={'console': {'class': 'logging.StreamHandler',
32 'level': 'INFO',
33 'formatter': 'extended',
34 'stream': 'ext://sys.stdout'},
35 'file_handler': {'class': 'logging.handlers.RotatingFileHandler',
36 'level': log_level,
37 'formatter': 'extended',
38 'filename': log_file,
39 'maxBytes': 10485760,
40 'backupCount': 0,
41 'encoding': 'utf8'}},
42 loggers={'infoblitz': {'level': 'DEBUG',
43 'handlers': ['console', 'file_handler'],
44 'propagate': 'no'}},
45 root={'level': 'DEBUG', 'handlers': ['console', 'file_handler']}
46 )
47
48 logging.config.dictConfig(default_config_as_dict)
49
[end of home.admin/BlitzTUI/blitztui/file_logger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/home.admin/BlitzTUI/blitztui/file_logger.py b/home.admin/BlitzTUI/blitztui/file_logger.py
--- a/home.admin/BlitzTUI/blitztui/file_logger.py
+++ b/home.admin/BlitzTUI/blitztui/file_logger.py
@@ -1,3 +1,4 @@
+import getpass
import json
import logging
import logging.config
@@ -20,7 +21,14 @@
if IS_WIN32_ENV:
log_file = "blitz-tui.log"
else:
- log_file = os.path.abspath(os.path.expanduser('~/blitz-tui.log'))
+ if os.path.isdir('/var/cache/raspiblitz'):
+ try:
+ os.mkdir('/var/cache/raspiblitz/{}'.format(getpass.getuser()))
+ except FileExistsError:
+ pass
+ log_file = os.path.abspath('/var/cache/raspiblitz/{}/blitz-tui.log'.format(getpass.getuser()))
+ else:
+ log_file = os.path.abspath(os.path.expanduser('~/blitz-tui.log'))
default_config_as_dict = dict(
version=1,
@@ -36,8 +44,8 @@
'level': log_level,
'formatter': 'extended',
'filename': log_file,
- 'maxBytes': 10485760,
- 'backupCount': 0,
+ 'maxBytes': 2*1024*1024, # 2 MB
+ 'backupCount': 1,
'encoding': 'utf8'}},
loggers={'infoblitz': {'level': 'DEBUG',
'handlers': ['console', 'file_handler'],
diff --git a/home.admin/BlitzTUI/blitztui/version.py b/home.admin/BlitzTUI/blitztui/version.py
--- a/home.admin/BlitzTUI/blitztui/version.py
+++ b/home.admin/BlitzTUI/blitztui/version.py
@@ -4,5 +4,5 @@
# 3) we can import it into your module module
"""
-__version_info__ = ('0', '47', '0')
+__version_info__ = ('0', '48', '1')
__version__ = '.'.join(__version_info__)
| {"golden_diff": "diff --git a/home.admin/BlitzTUI/blitztui/file_logger.py b/home.admin/BlitzTUI/blitztui/file_logger.py\n--- a/home.admin/BlitzTUI/blitztui/file_logger.py\n+++ b/home.admin/BlitzTUI/blitztui/file_logger.py\n@@ -1,3 +1,4 @@\n+import getpass\n import json\n import logging\n import logging.config\n@@ -20,7 +21,14 @@\n if IS_WIN32_ENV:\n log_file = \"blitz-tui.log\"\n else:\n- log_file = os.path.abspath(os.path.expanduser('~/blitz-tui.log'))\n+ if os.path.isdir('/var/cache/raspiblitz'):\n+ try:\n+ os.mkdir('/var/cache/raspiblitz/{}'.format(getpass.getuser()))\n+ except FileExistsError:\n+ pass\n+ log_file = os.path.abspath('/var/cache/raspiblitz/{}/blitz-tui.log'.format(getpass.getuser()))\n+ else:\n+ log_file = os.path.abspath(os.path.expanduser('~/blitz-tui.log'))\n \n default_config_as_dict = dict(\n version=1,\n@@ -36,8 +44,8 @@\n 'level': log_level,\n 'formatter': 'extended',\n 'filename': log_file,\n- 'maxBytes': 10485760,\n- 'backupCount': 0,\n+ 'maxBytes': 2*1024*1024, # 2 MB\n+ 'backupCount': 1,\n 'encoding': 'utf8'}},\n loggers={'infoblitz': {'level': 'DEBUG',\n 'handlers': ['console', 'file_handler'],\ndiff --git a/home.admin/BlitzTUI/blitztui/version.py b/home.admin/BlitzTUI/blitztui/version.py\n--- a/home.admin/BlitzTUI/blitztui/version.py\n+++ b/home.admin/BlitzTUI/blitztui/version.py\n@@ -4,5 +4,5 @@\n # 3) we can import it into your module module\n \"\"\"\n \n-__version_info__ = ('0', '47', '0')\n+__version_info__ = ('0', '48', '1')\n __version__ = '.'.join(__version_info__)\n", "issue": "Move BlitzTUI log to RAM disk\nThe RAM disk (`/var/cache/raspiblitz/`) is a nice way to avoid reads and writes to either the SD card or the external disk for non-persistent data. Several things can be moved; the BlitzTUI log should be a prime example.\n", "before_files": [{"content": "\"\"\" Store the version here so:\n# 1) we don't load dependencies by storing it in __init__.py\n# 2) we can import it in setup.py for the same reason\n# 3) we can import it into your module module\n\"\"\"\n\n__version_info__ = ('0', '47', '0')\n__version__ = '.'.join(__version_info__)\n", "path": "home.admin/BlitzTUI/blitztui/version.py"}, {"content": "import json\nimport logging\nimport logging.config\nimport os\nimport sys\n\nIS_WIN32_ENV = sys.platform == \"win32\"\n\n\ndef setup_logging(default_path=os.path.abspath(os.path.expanduser('~/.blitz-tui.json')), log_level=\"INFO\"):\n \"\"\"Setup logging configuration\"\"\"\n path = default_path\n if os.path.exists(path):\n with open(path, 'rt') as f:\n config = json.load(f)\n logging.config.dictConfig(config)\n\n else: # if $default_path does not exist use the following default log setup\n\n if IS_WIN32_ENV:\n log_file = \"blitz-tui.log\"\n else:\n log_file = os.path.abspath(os.path.expanduser('~/blitz-tui.log'))\n\n default_config_as_dict = dict(\n version=1,\n disable_existing_loggers=False,\n formatters={'simple': {'format': '%(asctime)s - %(levelname)s - %(message)s'},\n 'extended': {\n 'format': '%(asctime)s - %(name)s - %(levelname)s - %(module)s:%(lineno)d - %(message)s'}},\n handlers={'console': {'class': 'logging.StreamHandler',\n 'level': 'INFO',\n 'formatter': 'extended',\n 'stream': 'ext://sys.stdout'},\n 'file_handler': {'class': 'logging.handlers.RotatingFileHandler',\n 'level': log_level,\n 'formatter': 'extended',\n 'filename': log_file,\n 'maxBytes': 10485760,\n 'backupCount': 0,\n 'encoding': 'utf8'}},\n loggers={'infoblitz': {'level': 'DEBUG',\n 'handlers': ['console', 'file_handler'],\n 'propagate': 'no'}},\n root={'level': 
'DEBUG', 'handlers': ['console', 'file_handler']}\n )\n\n logging.config.dictConfig(default_config_as_dict)\n", "path": "home.admin/BlitzTUI/blitztui/file_logger.py"}]} | 1,246 | 524 |
gh_patches_debug_1532 | rasdani/github-patches | git_diff | mne-tools__mne-bids-259 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update text before release
The setup.py still states that this is experimental. I think it's grown up a fair bit :) This is the text that ends up on PyPI, so we should update it to reflect the package's aim.
https://github.com/mne-tools/mne-bids/blob/f8f267b45ac36e1600ea9ceb5540299e1bf3ab21/setup.py#L17
</issue>
<code>
[start of setup.py]
1 #! /usr/bin/env python
2 """Setup MNE-BIDS."""
3 import os
4 from setuptools import setup, find_packages
5
6 # get the version
7 version = None
8 with open(os.path.join('mne_bids', '__init__.py'), 'r') as fid:
9 for line in (line.strip() for line in fid):
10 if line.startswith('__version__'):
11 version = line.split('=')[1].strip().strip('\'')
12 break
13 if version is None:
14 raise RuntimeError('Could not determine version')
15
16
17 descr = """Experimental code for BIDS using MNE."""
18
19 DISTNAME = 'mne-bids'
20 DESCRIPTION = descr
21 MAINTAINER = 'Mainak Jas'
22 MAINTAINER_EMAIL = '[email protected]'
23 URL = 'https://mne-tools.github.io/mne-bids/'
24 LICENSE = 'BSD (3-clause)'
25 DOWNLOAD_URL = 'http://github.com/mne-tools/mne-bids'
26 VERSION = version
27
28 if __name__ == "__main__":
29 setup(name=DISTNAME,
30 maintainer=MAINTAINER,
31 maintainer_email=MAINTAINER_EMAIL,
32 description=DESCRIPTION,
33 license=LICENSE,
34 url=URL,
35 version=VERSION,
36 download_url=DOWNLOAD_URL,
37 long_description=open('README.rst').read(),
38 long_description_content_type='text/x-rst',
39 classifiers=[
40 'Intended Audience :: Science/Research',
41 'Intended Audience :: Developers',
42 'License :: OSI Approved',
43 'Programming Language :: Python',
44 'Topic :: Software Development',
45 'Topic :: Scientific/Engineering',
46 'Operating System :: Microsoft :: Windows',
47 'Operating System :: POSIX',
48 'Operating System :: Unix',
49 'Operating System :: MacOS',
50 ],
51 platforms='any',
52 packages=find_packages(),
53 scripts=['bin/mne_bids']
54 )
55
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -14,7 +14,8 @@
raise RuntimeError('Could not determine version')
-descr = """Experimental code for BIDS using MNE."""
+descr = ('An MNE project for organizing and formatting MEG and EEG data '
+ 'according to the BIDS specification.')
DISTNAME = 'mne-bids'
DESCRIPTION = descr
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -14,7 +14,8 @@\n raise RuntimeError('Could not determine version')\n \n \n-descr = \"\"\"Experimental code for BIDS using MNE.\"\"\"\n+descr = ('An MNE project for organizing and formatting MEG and EEG data '\n+ 'according to the BIDS specification.')\n \n DISTNAME = 'mne-bids'\n DESCRIPTION = descr\n", "issue": "Update text before release\nThe setup.py still states that this is experimental. I think it's grown up a fair bit :) this is the text that ends up on pypi. we should update it to reflect the package's aim\r\n\r\nhttps://github.com/mne-tools/mne-bids/blob/f8f267b45ac36e1600ea9ceb5540299e1bf3ab21/setup.py#L17\n", "before_files": [{"content": "#! /usr/bin/env python\n\"\"\"Setup MNE-BIDS.\"\"\"\nimport os\nfrom setuptools import setup, find_packages\n\n# get the version\nversion = None\nwith open(os.path.join('mne_bids', '__init__.py'), 'r') as fid:\n for line in (line.strip() for line in fid):\n if line.startswith('__version__'):\n version = line.split('=')[1].strip().strip('\\'')\n break\nif version is None:\n raise RuntimeError('Could not determine version')\n\n\ndescr = \"\"\"Experimental code for BIDS using MNE.\"\"\"\n\nDISTNAME = 'mne-bids'\nDESCRIPTION = descr\nMAINTAINER = 'Mainak Jas'\nMAINTAINER_EMAIL = '[email protected]'\nURL = 'https://mne-tools.github.io/mne-bids/'\nLICENSE = 'BSD (3-clause)'\nDOWNLOAD_URL = 'http://github.com/mne-tools/mne-bids'\nVERSION = version\n\nif __name__ == \"__main__\":\n setup(name=DISTNAME,\n maintainer=MAINTAINER,\n maintainer_email=MAINTAINER_EMAIL,\n description=DESCRIPTION,\n license=LICENSE,\n url=URL,\n version=VERSION,\n download_url=DOWNLOAD_URL,\n long_description=open('README.rst').read(),\n long_description_content_type='text/x-rst',\n classifiers=[\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved',\n 'Programming Language :: Python',\n 'Topic :: Software Development',\n 'Topic :: Scientific/Engineering',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Operating System :: Unix',\n 'Operating System :: MacOS',\n ],\n platforms='any',\n packages=find_packages(),\n scripts=['bin/mne_bids']\n )\n", "path": "setup.py"}]} | 1,121 | 98 |
gh_patches_debug_22071 | rasdani/github-patches | git_diff | pre-commit__pre-commit-1919 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Failure to get the container id
With #1888 in place, there's a regression when running inside a container. There's an assumption in https://github.com/pre-commit/pre-commit/blob/master/pre_commit/languages/docker.py#L32 that the hostname is the container ID which is not always the case (it's easy enough to set a different hostname with `docker run --hostname foo`). It causes the `docker inspect` command that follows to fail. A more reliable way to get the container id is from `/proc/1/cpuset` or from the first line in `/proc/1/cgroup` which is already checked in `_is_in_docker`.
Thanks @asottile and @okainov for your work on #1387 and pre-commit in general.
</issue>
<code>
[start of pre_commit/languages/docker.py]
1 import hashlib
2 import json
3 import os
4 import socket
5 from typing import Sequence
6 from typing import Tuple
7
8 import pre_commit.constants as C
9 from pre_commit.hook import Hook
10 from pre_commit.languages import helpers
11 from pre_commit.prefix import Prefix
12 from pre_commit.util import clean_path_on_failure
13 from pre_commit.util import cmd_output_b
14
15 ENVIRONMENT_DIR = 'docker'
16 PRE_COMMIT_LABEL = 'PRE_COMMIT'
17 get_default_version = helpers.basic_get_default_version
18 healthy = helpers.basic_healthy
19
20
21 def _is_in_docker() -> bool:
22 try:
23 with open('/proc/1/cgroup', 'rb') as f:
24 return b'docker' in f.read()
25 except FileNotFoundError:
26 return False
27
28
29 def _get_docker_path(path: str) -> str:
30 if not _is_in_docker():
31 return path
32 hostname = socket.gethostname()
33
34 _, out, _ = cmd_output_b('docker', 'inspect', hostname)
35
36 container, = json.loads(out)
37 for mount in container['Mounts']:
38 src_path = mount['Source']
39 to_path = mount['Destination']
40 if os.path.commonpath((path, to_path)) == to_path:
41 # So there is something in common,
42 # and we can proceed remapping it
43 return path.replace(to_path, src_path)
44 # we're in Docker, but the path is not mounted, cannot really do anything,
45 # so fall back to original path
46 return path
47
48
49 def md5(s: str) -> str: # pragma: win32 no cover
50 return hashlib.md5(s.encode()).hexdigest()
51
52
53 def docker_tag(prefix: Prefix) -> str: # pragma: win32 no cover
54 md5sum = md5(os.path.basename(prefix.prefix_dir)).lower()
55 return f'pre-commit-{md5sum}'
56
57
58 def build_docker_image(
59 prefix: Prefix,
60 *,
61 pull: bool,
62 ) -> None: # pragma: win32 no cover
63 cmd: Tuple[str, ...] = (
64 'docker', 'build',
65 '--tag', docker_tag(prefix),
66 '--label', PRE_COMMIT_LABEL,
67 )
68 if pull:
69 cmd += ('--pull',)
70 # This must come last for old versions of docker. See #477
71 cmd += ('.',)
72 helpers.run_setup_cmd(prefix, cmd)
73
74
75 def install_environment(
76 prefix: Prefix, version: str, additional_dependencies: Sequence[str],
77 ) -> None: # pragma: win32 no cover
78 helpers.assert_version_default('docker', version)
79 helpers.assert_no_additional_deps('docker', additional_dependencies)
80
81 directory = prefix.path(
82 helpers.environment_dir(ENVIRONMENT_DIR, C.DEFAULT),
83 )
84
85 # Docker doesn't really have relevant disk environment, but pre-commit
86 # still needs to cleanup its state files on failure
87 with clean_path_on_failure(directory):
88 build_docker_image(prefix, pull=True)
89 os.mkdir(directory)
90
91
92 def get_docker_user() -> Tuple[str, ...]: # pragma: win32 no cover
93 try:
94 return ('-u', f'{os.getuid()}:{os.getgid()}')
95 except AttributeError:
96 return ()
97
98
99 def docker_cmd() -> Tuple[str, ...]: # pragma: win32 no cover
100 return (
101 'docker', 'run',
102 '--rm',
103 *get_docker_user(),
104 # https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from
105 # The `Z` option tells Docker to label the content with a private
106 # unshared label. Only the current container can use a private volume.
107 '-v', f'{_get_docker_path(os.getcwd())}:/src:rw,Z',
108 '--workdir', '/src',
109 )
110
111
112 def run_hook(
113 hook: Hook,
114 file_args: Sequence[str],
115 color: bool,
116 ) -> Tuple[int, bytes]: # pragma: win32 no cover
117 # Rebuild the docker image in case it has gone missing, as many people do
118 # automated cleanup of docker images.
119 build_docker_image(hook.prefix, pull=False)
120
121 entry_exe, *cmd_rest = hook.cmd
122
123 entry_tag = ('--entrypoint', entry_exe, docker_tag(hook.prefix))
124 cmd = (*docker_cmd(), *entry_tag, *cmd_rest)
125 return helpers.run_xargs(hook, cmd, file_args, color=color)
126
[end of pre_commit/languages/docker.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/languages/docker.py b/pre_commit/languages/docker.py
--- a/pre_commit/languages/docker.py
+++ b/pre_commit/languages/docker.py
@@ -1,7 +1,6 @@
import hashlib
import json
import os
-import socket
from typing import Sequence
from typing import Tuple
@@ -26,12 +25,24 @@
return False
+def _get_container_id() -> str:
+ # It's assumed that we already check /proc/1/cgroup in _is_in_docker. The
+ # cpuset cgroup controller existed since cgroups were introduced so this
+ # way of getting the container ID is pretty reliable.
+ with open('/proc/1/cgroup', 'rb') as f:
+ for line in f.readlines():
+ if line.split(b':')[1] == b'cpuset':
+ return os.path.basename(line.split(b':')[2]).strip().decode()
+ raise RuntimeError('Failed to find the container ID in /proc/1/cgroup.')
+
+
def _get_docker_path(path: str) -> str:
if not _is_in_docker():
return path
- hostname = socket.gethostname()
- _, out, _ = cmd_output_b('docker', 'inspect', hostname)
+ container_id = _get_container_id()
+
+ _, out, _ = cmd_output_b('docker', 'inspect', container_id)
container, = json.loads(out)
for mount in container['Mounts']:
| {"golden_diff": "diff --git a/pre_commit/languages/docker.py b/pre_commit/languages/docker.py\n--- a/pre_commit/languages/docker.py\n+++ b/pre_commit/languages/docker.py\n@@ -1,7 +1,6 @@\n import hashlib\n import json\n import os\n-import socket\n from typing import Sequence\n from typing import Tuple\n \n@@ -26,12 +25,24 @@\n return False\n \n \n+def _get_container_id() -> str:\n+ # It's assumed that we already check /proc/1/cgroup in _is_in_docker. The\n+ # cpuset cgroup controller existed since cgroups were introduced so this\n+ # way of getting the container ID is pretty reliable.\n+ with open('/proc/1/cgroup', 'rb') as f:\n+ for line in f.readlines():\n+ if line.split(b':')[1] == b'cpuset':\n+ return os.path.basename(line.split(b':')[2]).strip().decode()\n+ raise RuntimeError('Failed to find the container ID in /proc/1/cgroup.')\n+\n+\n def _get_docker_path(path: str) -> str:\n if not _is_in_docker():\n return path\n- hostname = socket.gethostname()\n \n- _, out, _ = cmd_output_b('docker', 'inspect', hostname)\n+ container_id = _get_container_id()\n+\n+ _, out, _ = cmd_output_b('docker', 'inspect', container_id)\n \n container, = json.loads(out)\n for mount in container['Mounts']:\n", "issue": "Failure to get the container id\nWith #1888 in place, there's a regression when running inside a container. There's an assumption in https://github.com/pre-commit/pre-commit/blob/master/pre_commit/languages/docker.py#L32 that the hostname is the container ID which is not always the case (it's easy enough to set a different hostname with `docker run --hostname foo`). It causes the `docker inspect` command that follows to fail. A more reliable way to get the container id is from `/proc/1/cpuset` or from the first line in `/proc/1/cgroup` which is already checked in `_is_in_docker`.\r\n\r\nThanks @asottile and @okainov for your work on #1387 and pre-commit in general.\n", "before_files": [{"content": "import hashlib\nimport json\nimport os\nimport socket\nfrom typing import Sequence\nfrom typing import Tuple\n\nimport pre_commit.constants as C\nfrom pre_commit.hook import Hook\nfrom pre_commit.languages import helpers\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output_b\n\nENVIRONMENT_DIR = 'docker'\nPRE_COMMIT_LABEL = 'PRE_COMMIT'\nget_default_version = helpers.basic_get_default_version\nhealthy = helpers.basic_healthy\n\n\ndef _is_in_docker() -> bool:\n try:\n with open('/proc/1/cgroup', 'rb') as f:\n return b'docker' in f.read()\n except FileNotFoundError:\n return False\n\n\ndef _get_docker_path(path: str) -> str:\n if not _is_in_docker():\n return path\n hostname = socket.gethostname()\n\n _, out, _ = cmd_output_b('docker', 'inspect', hostname)\n\n container, = json.loads(out)\n for mount in container['Mounts']:\n src_path = mount['Source']\n to_path = mount['Destination']\n if os.path.commonpath((path, to_path)) == to_path:\n # So there is something in common,\n # and we can proceed remapping it\n return path.replace(to_path, src_path)\n # we're in Docker, but the path is not mounted, cannot really do anything,\n # so fall back to original path\n return path\n\n\ndef md5(s: str) -> str: # pragma: win32 no cover\n return hashlib.md5(s.encode()).hexdigest()\n\n\ndef docker_tag(prefix: Prefix) -> str: # pragma: win32 no cover\n md5sum = md5(os.path.basename(prefix.prefix_dir)).lower()\n return f'pre-commit-{md5sum}'\n\n\ndef build_docker_image(\n prefix: Prefix,\n *,\n pull: bool,\n) -> None: # pragma: win32 no 
cover\n cmd: Tuple[str, ...] = (\n 'docker', 'build',\n '--tag', docker_tag(prefix),\n '--label', PRE_COMMIT_LABEL,\n )\n if pull:\n cmd += ('--pull',)\n # This must come last for old versions of docker. See #477\n cmd += ('.',)\n helpers.run_setup_cmd(prefix, cmd)\n\n\ndef install_environment(\n prefix: Prefix, version: str, additional_dependencies: Sequence[str],\n) -> None: # pragma: win32 no cover\n helpers.assert_version_default('docker', version)\n helpers.assert_no_additional_deps('docker', additional_dependencies)\n\n directory = prefix.path(\n helpers.environment_dir(ENVIRONMENT_DIR, C.DEFAULT),\n )\n\n # Docker doesn't really have relevant disk environment, but pre-commit\n # still needs to cleanup its state files on failure\n with clean_path_on_failure(directory):\n build_docker_image(prefix, pull=True)\n os.mkdir(directory)\n\n\ndef get_docker_user() -> Tuple[str, ...]: # pragma: win32 no cover\n try:\n return ('-u', f'{os.getuid()}:{os.getgid()}')\n except AttributeError:\n return ()\n\n\ndef docker_cmd() -> Tuple[str, ...]: # pragma: win32 no cover\n return (\n 'docker', 'run',\n '--rm',\n *get_docker_user(),\n # https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from\n # The `Z` option tells Docker to label the content with a private\n # unshared label. Only the current container can use a private volume.\n '-v', f'{_get_docker_path(os.getcwd())}:/src:rw,Z',\n '--workdir', '/src',\n )\n\n\ndef run_hook(\n hook: Hook,\n file_args: Sequence[str],\n color: bool,\n) -> Tuple[int, bytes]: # pragma: win32 no cover\n # Rebuild the docker image in case it has gone missing, as many people do\n # automated cleanup of docker images.\n build_docker_image(hook.prefix, pull=False)\n\n entry_exe, *cmd_rest = hook.cmd\n\n entry_tag = ('--entrypoint', entry_exe, docker_tag(hook.prefix))\n cmd = (*docker_cmd(), *entry_tag, *cmd_rest)\n return helpers.run_xargs(hook, cmd, file_args, color=color)\n", "path": "pre_commit/languages/docker.py"}]} | 1,936 | 333 |
gh_patches_debug_6310 | rasdani/github-patches | git_diff | kornia__kornia-1421 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PyPI tarball missing required files
### Describe the bug
The tarball uploaded to PyPI does not contain `requirements/*` files which are required to run the `setup.py` file.
### Reproduction steps
```bash
$ wget https://files.pythonhosted.org/packages/source/k/kornia/kornia-0.6.0.tar.gz
$ tar zxf kornia-0.6.0.tar.gz
$ cd kornia-0.6.0
$ python setup.py install
...
Traceback (most recent call last):
File "setup.py", line 43, in <module>
"x": load_requirements("requirements/x.txt"),
File "setup.py", line 38, in load_requirements
with open(filename) as f:
FileNotFoundError: [Errno 2] No such file or directory: 'requirements/x.txt'
```
### Expected behavior
I would expect the `setup.py` to function correctly. I believe there's a setuptools option to control which files get included in the upload tarball.
### Environment
```shell
- PyTorch Version (e.g., 1.0): 1.10
- OS (e.g., Linux): macOS
- How you installed PyTorch (`conda`, `pip`, source): `spack`
- Build command you used (if compiling from source): `python setup.py install`
- Python version: 3.8.11
- CUDA/cuDNN version: N/A
- GPU models and configuration: N/A
- Any other relevant information: N/A
```
### Additional context
_No response_
</issue>
<code>
[start of setup.py]
1 # Welcome to the Kornia setup.py.
2 #
3 import re
4 import sys
5
6 # Make sure that kornia is running on Python 3.6.0 or later
7 # (to avoid running into this bug: https://bugs.python.org/issue29246)
8
9 if sys.version_info < (3, 6, 0):
10 raise RuntimeError("Kornia requires Python 3.6.0 or later.")
11
12
13 from setuptools import find_packages, setup
14
15
16 def find_version(file_path: str) -> str:
17 version_file = open(file_path).read()
18 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", version_file, re.M)
19 if not version_match:
20 raise RuntimeError(f"Unable to find version string in {file_path}")
21 return version_match.group(1)
22
23
24 VERSION = find_version("kornia/_version.py")
25
26
27 # NOTE: kornia MUST only require PyTorch
28 requirements = [
29 'torch>=1.8.1', 'packaging',
30 ]
31
32 # open readme file and set long description
33 with open("README.md", encoding="utf-8") as fh:
34 long_description = fh.read()
35
36
37 def load_requirements(filename: str):
38 with open(filename) as f:
39 return [x.strip() for x in f.readlines() if "-r" != x[0:2]]
40
41
42 requirements_extras = {
43 "x": load_requirements("requirements/x.txt"),
44 "dev": load_requirements("requirements/dev.txt")
45 }
46 requirements_extras["all"] = requirements_extras["x"] + requirements_extras["dev"]
47
48
49 if __name__ == '__main__':
50 setup(
51 name='kornia',
52 version=VERSION,
53 author='Edgar Riba',
54 author_email='[email protected]',
55 url='https://www.kornia.org',
56 download_url='https://github.com/kornia/kornia',
57 license='Apache License 2.0',
58 description='Open Source Differentiable Computer Vision Library for PyTorch',
59 long_description=long_description,
60 long_description_content_type='text/markdown',
61 python_requires='>=3.6',
62 setup_requires=['pytest-runner'],
63 tests_require=['pytest'],
64 packages=find_packages(exclude=('docs', 'test', 'examples')),
65 package_data={"kornia": ["py.typed"]},
66 zip_safe=True,
67 install_requires=requirements,
68 extras_require=requirements_extras,
69 keywords=['computer vision', 'deep learning', 'pytorch'],
70 project_urls={
71 "Bug Tracker": "https://github.com/kornia/kornia/issues",
72 "Documentation": "https://kornia.readthedocs.io/en/latest",
73 "Source Code": "https://github.com/kornia/kornia",
74 },
75 classifiers=[
76 'Environment :: GPU',
77 'Environment :: Console',
78 'Natural Language :: English',
79 # How mature is this project? Common values are
80 # 3 - Alpha, 4 - Beta, 5 - Production/Stable
81 'Development Status :: 4 - Beta',
82 # Indicate who your project is intended for
83 'Intended Audience :: Developers',
84 'Intended Audience :: Education',
85 'Intended Audience :: Science/Research',
86 'Intended Audience :: Information Technology',
87 'Topic :: Software Development :: Libraries',
88 'Topic :: Scientific/Engineering :: Artificial Intelligence',
89 'Topic :: Scientific/Engineering :: Image Processing',
90 # Pick your license as you wish
91 'License :: OSI Approved :: Apache Software License',
92 'Operating System :: OS Independent',
93 # Specify the Python versions you support here. In particular, ensure
94 # that you indicate whether you support Python 2, Python 3 or both.
95 'Programming Language :: Python :: 3',
96 'Programming Language :: Python :: 3.6',
97 'Programming Language :: Python :: 3.7',
98 'Programming Language :: Python :: 3.8',
99 ],
100 )
101
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -63,6 +63,7 @@
tests_require=['pytest'],
packages=find_packages(exclude=('docs', 'test', 'examples')),
package_data={"kornia": ["py.typed"]},
+ data_files=[('', ['requirements/x.txt', 'requirements/dev.txt'])],
zip_safe=True,
install_requires=requirements,
extras_require=requirements_extras,
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -63,6 +63,7 @@\n tests_require=['pytest'],\n packages=find_packages(exclude=('docs', 'test', 'examples')),\n package_data={\"kornia\": [\"py.typed\"]},\n+ data_files=[('', ['requirements/x.txt', 'requirements/dev.txt'])],\n zip_safe=True,\n install_requires=requirements,\n extras_require=requirements_extras,\n", "issue": "PyPI tarball missing required files\n### Describe the bug\r\n\r\nThe tarball uploaded to PyPI does not contain `requirements/*` files which are required to run the `setup.py` file.\r\n\r\n### Reproduction steps\r\n\r\n```bash\r\n$ wget https://files.pythonhosted.org/packages/source/k/kornia/kornia-0.6.0.tar.gz\r\n$ tar zxf kornia-0.6.0.tar.gz\r\n$ cd kornia-0.6.0\r\n$ python setup.py install\r\n...\r\nTraceback (most recent call last):\r\n File \"setup.py\", line 43, in <module>\r\n \"x\": load_requirements(\"requirements/x.txt\"),\r\n File \"setup.py\", line 38, in load_requirements\r\n with open(filename) as f:\r\nFileNotFoundError: [Errno 2] No such file or directory: 'requirements/x.txt'\r\n```\r\n\r\n\r\n### Expected behavior\r\n\r\nI would expect the `setup.py` to function correctly. I believe there's a setuptools option to control which files get included in the upload tarball.\r\n\r\n### Environment\r\n\r\n```shell\r\n- PyTorch Version (e.g., 1.0): 1.10\r\n- OS (e.g., Linux): macOS\r\n- How you installed PyTorch (`conda`, `pip`, source): `spack`\r\n- Build command you used (if compiling from source): `python setup.py install`\r\n- Python version: 3.8.11\r\n- CUDA/cuDNN version: N/A\r\n- GPU models and configuration: N/A\r\n- Any other relevant information: N/A\r\n```\r\n\r\n\r\n### Additional context\r\n\r\n_No response_\n", "before_files": [{"content": "# Welcome to the Kornia setup.py.\n#\nimport re\nimport sys\n\n# Make sure that kornia is running on Python 3.6.0 or later\n# (to avoid running into this bug: https://bugs.python.org/issue29246)\n\nif sys.version_info < (3, 6, 0):\n raise RuntimeError(\"Kornia requires Python 3.6.0 or later.\")\n\n\nfrom setuptools import find_packages, setup\n\n\ndef find_version(file_path: str) -> str:\n version_file = open(file_path).read()\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file, re.M)\n if not version_match:\n raise RuntimeError(f\"Unable to find version string in {file_path}\")\n return version_match.group(1)\n\n\nVERSION = find_version(\"kornia/_version.py\")\n\n\n# NOTE: kornia MUST only require PyTorch\nrequirements = [\n 'torch>=1.8.1', 'packaging',\n]\n\n# open readme file and set long description\nwith open(\"README.md\", encoding=\"utf-8\") as fh:\n long_description = fh.read()\n\n\ndef load_requirements(filename: str):\n with open(filename) as f:\n return [x.strip() for x in f.readlines() if \"-r\" != x[0:2]]\n\n\nrequirements_extras = {\n \"x\": load_requirements(\"requirements/x.txt\"),\n \"dev\": load_requirements(\"requirements/dev.txt\")\n}\nrequirements_extras[\"all\"] = requirements_extras[\"x\"] + requirements_extras[\"dev\"]\n\n\nif __name__ == '__main__':\n setup(\n name='kornia',\n version=VERSION,\n author='Edgar Riba',\n author_email='[email protected]',\n url='https://www.kornia.org',\n download_url='https://github.com/kornia/kornia',\n license='Apache License 2.0',\n description='Open Source Differentiable Computer Vision Library for PyTorch',\n long_description=long_description,\n long_description_content_type='text/markdown',\n python_requires='>=3.6',\n 
setup_requires=['pytest-runner'],\n tests_require=['pytest'],\n packages=find_packages(exclude=('docs', 'test', 'examples')),\n package_data={\"kornia\": [\"py.typed\"]},\n zip_safe=True,\n install_requires=requirements,\n extras_require=requirements_extras,\n keywords=['computer vision', 'deep learning', 'pytorch'],\n project_urls={\n \"Bug Tracker\": \"https://github.com/kornia/kornia/issues\",\n \"Documentation\": \"https://kornia.readthedocs.io/en/latest\",\n \"Source Code\": \"https://github.com/kornia/kornia\",\n },\n classifiers=[\n 'Environment :: GPU',\n 'Environment :: Console',\n 'Natural Language :: English',\n # How mature is this project? Common values are\n # 3 - Alpha, 4 - Beta, 5 - Production/Stable\n 'Development Status :: 4 - Beta',\n # Indicate who your project is intended for\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Information Technology',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Scientific/Engineering :: Image Processing',\n # Pick your license as you wish\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n # Specify the Python versions you support here. In particular, ensure\n # that you indicate whether you support Python 2, Python 3 or both.\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n )\n", "path": "setup.py"}]} | 1,928 | 104 |
gh_patches_debug_16698 | rasdani/github-patches | git_diff | GeotrekCE__Geotrek-admin-2462 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Outdoor - Orientations
- [x] Add the field names to the filters
</issue>
<code>
[start of mapentity/filters.py]
1 from django.db.models.fields.related import ManyToOneRel
2 from django.conf import settings
3
4 from django_filters import FilterSet, Filter
5 from django_filters.filterset import get_model_field
6 from django.contrib.gis import forms
7
8 from .settings import app_settings, API_SRID
9 from .widgets import HiddenGeometryWidget
10
11
12 class PolygonFilter(Filter):
13
14 field_class = forms.PolygonField
15
16 def __init__(self, *args, **kwargs):
17 kwargs.setdefault('field_name', app_settings['GEOM_FIELD_NAME'])
18 kwargs.setdefault('widget', HiddenGeometryWidget)
19 kwargs.setdefault('lookup_expr', 'intersects')
20 super(PolygonFilter, self).__init__(*args, **kwargs)
21
22
23 class PythonPolygonFilter(PolygonFilter):
24
25 def filter(self, qs, value):
26 if not value:
27 return qs
28 if not value.srid:
29 value.srid = API_SRID
30 value.transform(settings.SRID)
31 filtered = []
32 for o in qs.all():
33 geom = getattr(o, self.field_name)
34 if geom and geom.valid and not geom.empty:
35 if getattr(geom, self.lookup_expr)(value):
36 filtered.append(o.pk)
37 else:
38 filtered.append(o.pk)
39 return qs.filter(pk__in=filtered)
40
41
42 class BaseMapEntityFilterSet(FilterSet):
43 def __init__(self, *args, **kwargs):
44 super(BaseMapEntityFilterSet, self).__init__(*args, **kwargs)
45 self.__bypass_labels()
46
47 def __bypass_labels(self):
48 """
49 These hacks allow to bypass field labels. Using either placeholders,
50 empty choices label, etc. This allows to greatly save space in form layout,
51 which is required for concise filter forms.
52 """
53 for fieldname in self.base_filters.keys():
54 field = self.form.fields[fieldname]
55 if isinstance(field, forms.MultiValueField):
56 for i, widget in enumerate(field.widget.widgets):
57 self.__set_placeholder(field.fields[i], widget)
58 elif isinstance(field, forms.ChoiceField):
59 field.empty_label = field.label
60 self.__set_placeholder(field, field.widget)
61 elif isinstance(field, forms.NullBooleanField):
62 choices = [(u'1', field.label)] + field.widget.choices[1:]
63 field.widget.choices = choices
64 self.__set_placeholder(field, field.widget)
65 else:
66 self.__set_placeholder(field, field.widget)
67
68 def __set_placeholder(self, field, widget):
69 field.help_text = '' # Hide help text
70 widget.attrs['placeholder'] = field.label
71 widget.attrs['data-placeholder'] = field.label
72 widget.attrs['title'] = field.label
73 widget.attrs['data-label'] = field.label
74
75 @classmethod
76 def add_filter(cls, name, filter_=None):
77 field = get_model_field(cls._meta.model, name)
78 if filter_ is None:
79 if isinstance(field, ManyToOneRel):
80 filter_ = cls.filter_for_reverse_field(field, name)
81 else:
82 filter_ = cls.filter_for_field(field, name)
83 cls.base_filters[name] = filter_
84
85 @classmethod
86 def add_filters(cls, filters):
87 for name, filter_ in filters.items():
88 filter_.field_name = name
89 cls.add_filter(name, filter_)
90
91
92 class MapEntityFilterSet(BaseMapEntityFilterSet):
93 bbox = PolygonFilter()
94
95 class Meta:
96 fields = ['bbox']
97
[end of mapentity/filters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mapentity/filters.py b/mapentity/filters.py
--- a/mapentity/filters.py
+++ b/mapentity/filters.py
@@ -2,6 +2,7 @@
from django.conf import settings
from django_filters import FilterSet, Filter
+from django_filters.fields import ChoiceField
from django_filters.filterset import get_model_field
from django.contrib.gis import forms
@@ -42,6 +43,9 @@
class BaseMapEntityFilterSet(FilterSet):
def __init__(self, *args, **kwargs):
super(BaseMapEntityFilterSet, self).__init__(*args, **kwargs)
+ for filter_ in self.filters.values():
+ if filter_.field_class == ChoiceField:
+ filter_.extra.setdefault('empty_label', filter_.label)
self.__bypass_labels()
def __bypass_labels(self):
| {"golden_diff": "diff --git a/mapentity/filters.py b/mapentity/filters.py\n--- a/mapentity/filters.py\n+++ b/mapentity/filters.py\n@@ -2,6 +2,7 @@\n from django.conf import settings\n \n from django_filters import FilterSet, Filter\n+from django_filters.fields import ChoiceField\n from django_filters.filterset import get_model_field\n from django.contrib.gis import forms\n \n@@ -42,6 +43,9 @@\n class BaseMapEntityFilterSet(FilterSet):\n def __init__(self, *args, **kwargs):\n super(BaseMapEntityFilterSet, self).__init__(*args, **kwargs)\n+ for filter_ in self.filters.values():\n+ if filter_.field_class == ChoiceField:\n+ filter_.extra.setdefault('empty_label', filter_.label)\n self.__bypass_labels()\n \n def __bypass_labels(self):\n", "issue": "Outdoor - Orientations\n- [x] Ajouter les noms des champs dans les filtres\n", "before_files": [{"content": "from django.db.models.fields.related import ManyToOneRel\nfrom django.conf import settings\n\nfrom django_filters import FilterSet, Filter\nfrom django_filters.filterset import get_model_field\nfrom django.contrib.gis import forms\n\nfrom .settings import app_settings, API_SRID\nfrom .widgets import HiddenGeometryWidget\n\n\nclass PolygonFilter(Filter):\n\n field_class = forms.PolygonField\n\n def __init__(self, *args, **kwargs):\n kwargs.setdefault('field_name', app_settings['GEOM_FIELD_NAME'])\n kwargs.setdefault('widget', HiddenGeometryWidget)\n kwargs.setdefault('lookup_expr', 'intersects')\n super(PolygonFilter, self).__init__(*args, **kwargs)\n\n\nclass PythonPolygonFilter(PolygonFilter):\n\n def filter(self, qs, value):\n if not value:\n return qs\n if not value.srid:\n value.srid = API_SRID\n value.transform(settings.SRID)\n filtered = []\n for o in qs.all():\n geom = getattr(o, self.field_name)\n if geom and geom.valid and not geom.empty:\n if getattr(geom, self.lookup_expr)(value):\n filtered.append(o.pk)\n else:\n filtered.append(o.pk)\n return qs.filter(pk__in=filtered)\n\n\nclass BaseMapEntityFilterSet(FilterSet):\n def __init__(self, *args, **kwargs):\n super(BaseMapEntityFilterSet, self).__init__(*args, **kwargs)\n self.__bypass_labels()\n\n def __bypass_labels(self):\n \"\"\"\n These hacks allow to bypass field labels. Using either placeholders,\n empty choices label, etc. 
This allows to greatly save space in form layout,\n which is required for concise filter forms.\n \"\"\"\n for fieldname in self.base_filters.keys():\n field = self.form.fields[fieldname]\n if isinstance(field, forms.MultiValueField):\n for i, widget in enumerate(field.widget.widgets):\n self.__set_placeholder(field.fields[i], widget)\n elif isinstance(field, forms.ChoiceField):\n field.empty_label = field.label\n self.__set_placeholder(field, field.widget)\n elif isinstance(field, forms.NullBooleanField):\n choices = [(u'1', field.label)] + field.widget.choices[1:]\n field.widget.choices = choices\n self.__set_placeholder(field, field.widget)\n else:\n self.__set_placeholder(field, field.widget)\n\n def __set_placeholder(self, field, widget):\n field.help_text = '' # Hide help text\n widget.attrs['placeholder'] = field.label\n widget.attrs['data-placeholder'] = field.label\n widget.attrs['title'] = field.label\n widget.attrs['data-label'] = field.label\n\n @classmethod\n def add_filter(cls, name, filter_=None):\n field = get_model_field(cls._meta.model, name)\n if filter_ is None:\n if isinstance(field, ManyToOneRel):\n filter_ = cls.filter_for_reverse_field(field, name)\n else:\n filter_ = cls.filter_for_field(field, name)\n cls.base_filters[name] = filter_\n\n @classmethod\n def add_filters(cls, filters):\n for name, filter_ in filters.items():\n filter_.field_name = name\n cls.add_filter(name, filter_)\n\n\nclass MapEntityFilterSet(BaseMapEntityFilterSet):\n bbox = PolygonFilter()\n\n class Meta:\n fields = ['bbox']\n", "path": "mapentity/filters.py"}]} | 1,465 | 187 |
gh_patches_debug_27824 | rasdani/github-patches | git_diff | pytorch__ignite-976 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve Frequency
## 🚀 Feature
If we would like to log datapoints/second every 100 iterations, we would most probably do it like this
```python
wps_metric = Frequency(output_transformer=lambda x: x['ntokens'])
wps_metric.attach(trainer, name='wps', event_name=Events.ITERATION_COMPLETED(every=100))
```
However, it seems like this won't take into account all other iterations while computing the total number of tokens.
```python
class Frequency(Metric):
....
def attach(self, engine, name, event_name=Events.ITERATION_COMPLETED):
engine.add_event_handler(Events.EPOCH_STARTED, self.started)
engine.add_event_handler(event_name, self.iteration_completed)
engine.add_event_handler(event_name, self.completed, name)
```
IMO, should be
```python
class Frequency(Metric):
....
def attach(self, engine, name, event_name=Events.ITERATION_COMPLETED):
engine.add_event_handler(Events.EPOCH_STARTED, self.started)
engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)
engine.add_event_handler(event_name, self.completed, name)
```
cc @erip
</issue>
<code>
[start of ignite/metrics/frequency.py]
1 import torch
2 import torch.distributed as dist
3
4 from ignite.engine import Events
5 from ignite.metrics import Metric
6 from ignite.handlers.timing import Timer
7 from ignite.metrics.metric import sync_all_reduce, reinit__is_reduced
8
9
10 class Frequency(Metric):
11 """Provides metrics for the number of examples processed per second.
12
13 Examples:
14
15 .. code-block:: python
16
17 # Compute number of tokens processed
18 wps_metric = Frequency(output_transform=lambda x: x['ntokens'])
19 wps_metric.attach(trainer, name='wps')
20 # Logging with TQDM
21 ProgressBar(persist=True).attach(trainer, metric_names=['wps'])
22 # Progress bar will looks like
23 # Epoch [2/10]: [12/24] 50%|█████ , wps=400 [00:17<1:23]
24 """
25
26 def __init__(self, output_transform=lambda x: x, device=None):
27 self._timer = None
28 self._acc = None
29 self._n = None
30 self._elapsed = None
31 super(Frequency, self).__init__(output_transform=output_transform, device=device)
32
33 @reinit__is_reduced
34 def reset(self):
35 self._timer = Timer()
36 self._acc = 0
37 self._n = 0
38 self._elapsed = 0.0
39 super(Frequency, self).reset()
40
41 @reinit__is_reduced
42 def update(self, output):
43 self._acc += output
44 self._n = self._acc
45 self._elapsed = torch.tensor(self._timer.value(), device=self._device)
46
47 @sync_all_reduce("_n", "_elapsed")
48 def compute(self):
49 time_divisor = 1.0
50
51 if dist.is_available() and dist.is_initialized():
52 time_divisor *= dist.get_world_size()
53
54 # Returns the average processed objects per second across all workers
55 return self._n / self._elapsed.item() * time_divisor
56
57 def completed(self, engine, name):
58 engine.state.metrics[name] = int(self.compute())
59
60 def attach(self, engine, name, event_name=Events.ITERATION_COMPLETED):
61 engine.add_event_handler(Events.EPOCH_STARTED, self.started)
62 engine.add_event_handler(event_name, self.iteration_completed)
63 engine.add_event_handler(event_name, self.completed, name)
64
[end of ignite/metrics/frequency.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ignite/metrics/frequency.py b/ignite/metrics/frequency.py
--- a/ignite/metrics/frequency.py
+++ b/ignite/metrics/frequency.py
@@ -19,8 +19,21 @@
wps_metric.attach(trainer, name='wps')
# Logging with TQDM
ProgressBar(persist=True).attach(trainer, metric_names=['wps'])
- # Progress bar will looks like
+ # Progress bar will look like
# Epoch [2/10]: [12/24] 50%|█████ , wps=400 [00:17<1:23]
+
+
+ To compute examples processed per second every 50th iteration:
+
+ .. code-block:: python
+
+ # Compute number of tokens processed
+ wps_metric = Frequency(output_transform=lambda x: x['ntokens'])
+ wps_metric.attach(trainer, name='wps', event_name=Events.ITERATION_COMPLETED(every=50))
+ # Logging with TQDM
+ ProgressBar(persist=True).attach(trainer, metric_names=['wps'])
+ # Progress bar will look like
+ # Epoch [2/10]: [50/100] 50%|█████ , wps=400 [00:17<00:35]
"""
def __init__(self, output_transform=lambda x: x, device=None):
@@ -59,5 +72,5 @@
def attach(self, engine, name, event_name=Events.ITERATION_COMPLETED):
engine.add_event_handler(Events.EPOCH_STARTED, self.started)
- engine.add_event_handler(event_name, self.iteration_completed)
+ engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)
engine.add_event_handler(event_name, self.completed, name)
| {"golden_diff": "diff --git a/ignite/metrics/frequency.py b/ignite/metrics/frequency.py\n--- a/ignite/metrics/frequency.py\n+++ b/ignite/metrics/frequency.py\n@@ -19,8 +19,21 @@\n wps_metric.attach(trainer, name='wps')\n # Logging with TQDM\n ProgressBar(persist=True).attach(trainer, metric_names=['wps'])\n- # Progress bar will looks like\n+ # Progress bar will look like\n # Epoch [2/10]: [12/24] 50%|\u2588\u2588\u2588\u2588\u2588 , wps=400 [00:17<1:23]\n+\n+\n+ To compute examples processed per second every 50th iteration:\n+\n+ .. code-block:: python\n+\n+ # Compute number of tokens processed\n+ wps_metric = Frequency(output_transform=lambda x: x['ntokens'])\n+ wps_metric.attach(trainer, name='wps', event_name=Events.ITERATION_COMPLETED(every=50))\n+ # Logging with TQDM\n+ ProgressBar(persist=True).attach(trainer, metric_names=['wps'])\n+ # Progress bar will look like\n+ # Epoch [2/10]: [50/100] 50%|\u2588\u2588\u2588\u2588\u2588 , wps=400 [00:17<00:35]\n \"\"\"\n \n def __init__(self, output_transform=lambda x: x, device=None):\n@@ -59,5 +72,5 @@\n \n def attach(self, engine, name, event_name=Events.ITERATION_COMPLETED):\n engine.add_event_handler(Events.EPOCH_STARTED, self.started)\n- engine.add_event_handler(event_name, self.iteration_completed)\n+ engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)\n engine.add_event_handler(event_name, self.completed, name)\n", "issue": "Improve Frequency\n## \ud83d\ude80 Feature\r\n\r\nIf we would like to log datapoints/second every 100 iterations, we most probably do like this \r\n```python\r\nwps_metric = Frequency(output_transformer=lambda x: x['ntokens'])\r\nwps_metric.attach(trainer, name='wps', event_name=Events.ITERATION_COMPLETED(every=100))\r\n```\r\nhowever, seems like this wont take into account all other iterations while computing the total number of tokens.\r\n```python\r\nclass Frequency(Metric):\r\n ....\r\n def attach(self, engine, name, event_name=Events.ITERATION_COMPLETED):\r\n engine.add_event_handler(Events.EPOCH_STARTED, self.started)\r\n engine.add_event_handler(event_name, self.iteration_completed)\r\n engine.add_event_handler(event_name, self.completed, name)\r\n```\r\nIMO, should be \r\n```python\r\nclass Frequency(Metric):\r\n ....\r\n def attach(self, engine, name, event_name=Events.ITERATION_COMPLETED):\r\n engine.add_event_handler(Events.EPOCH_STARTED, self.started)\r\n engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)\r\n engine.add_event_handler(event_name, self.completed, name)\r\n```\r\n\r\ncc @erip \r\n\nImprove Frequency\n## \ud83d\ude80 Feature\r\n\r\nIf we would like to log datapoints/second every 100 iterations, we most probably do like this \r\n```python\r\nwps_metric = Frequency(output_transformer=lambda x: x['ntokens'])\r\nwps_metric.attach(trainer, name='wps', event_name=Events.ITERATION_COMPLETED(every=100))\r\n```\r\nhowever, seems like this wont take into account all other iterations while computing the total number of tokens.\r\n```python\r\nclass Frequency(Metric):\r\n ....\r\n def attach(self, engine, name, event_name=Events.ITERATION_COMPLETED):\r\n engine.add_event_handler(Events.EPOCH_STARTED, self.started)\r\n engine.add_event_handler(event_name, self.iteration_completed)\r\n engine.add_event_handler(event_name, self.completed, name)\r\n```\r\nIMO, should be \r\n```python\r\nclass Frequency(Metric):\r\n ....\r\n def attach(self, engine, name, event_name=Events.ITERATION_COMPLETED):\r\n engine.add_event_handler(Events.EPOCH_STARTED, 
self.started)\r\n engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)\r\n engine.add_event_handler(event_name, self.completed, name)\r\n```\r\n\r\ncc @erip \r\n\n", "before_files": [{"content": "import torch\nimport torch.distributed as dist\n\nfrom ignite.engine import Events\nfrom ignite.metrics import Metric\nfrom ignite.handlers.timing import Timer\nfrom ignite.metrics.metric import sync_all_reduce, reinit__is_reduced\n\n\nclass Frequency(Metric):\n \"\"\"Provides metrics for the number of examples processed per second.\n\n Examples:\n\n .. code-block:: python\n\n # Compute number of tokens processed\n wps_metric = Frequency(output_transform=lambda x: x['ntokens'])\n wps_metric.attach(trainer, name='wps')\n # Logging with TQDM\n ProgressBar(persist=True).attach(trainer, metric_names=['wps'])\n # Progress bar will looks like\n # Epoch [2/10]: [12/24] 50%|\u2588\u2588\u2588\u2588\u2588 , wps=400 [00:17<1:23]\n \"\"\"\n\n def __init__(self, output_transform=lambda x: x, device=None):\n self._timer = None\n self._acc = None\n self._n = None\n self._elapsed = None\n super(Frequency, self).__init__(output_transform=output_transform, device=device)\n\n @reinit__is_reduced\n def reset(self):\n self._timer = Timer()\n self._acc = 0\n self._n = 0\n self._elapsed = 0.0\n super(Frequency, self).reset()\n\n @reinit__is_reduced\n def update(self, output):\n self._acc += output\n self._n = self._acc\n self._elapsed = torch.tensor(self._timer.value(), device=self._device)\n\n @sync_all_reduce(\"_n\", \"_elapsed\")\n def compute(self):\n time_divisor = 1.0\n\n if dist.is_available() and dist.is_initialized():\n time_divisor *= dist.get_world_size()\n\n # Returns the average processed objects per second across all workers\n return self._n / self._elapsed.item() * time_divisor\n\n def completed(self, engine, name):\n engine.state.metrics[name] = int(self.compute())\n\n def attach(self, engine, name, event_name=Events.ITERATION_COMPLETED):\n engine.add_event_handler(Events.EPOCH_STARTED, self.started)\n engine.add_event_handler(event_name, self.iteration_completed)\n engine.add_event_handler(event_name, self.completed, name)\n", "path": "ignite/metrics/frequency.py"}]} | 1,699 | 426 |
gh_patches_debug_24002 | rasdani/github-patches | git_diff | bridgecrewio__checkov-831 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CKV_K8S_31 failure with RuntimeDefault configured for workloads
**Describe the bug**
Please see #710
**To Reproduce**
Please see #710
**Expected behavior**
Please see #710
**Additional context**
The bug reported in #710 needs to be fixed for workloads in https://github.com/bridgecrewio/checkov/blob/master/checkov/kubernetes/checks/Seccomp.py#L44:L48 as well.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
....
spec:
...
...
template:
...
...
spec:
....
....
securityContext:
allowPrivilegeEscalation: false
seccompProfile:
type: RuntimeDefault
```
**Related PRs**
#711
</issue>
<code>
[start of checkov/kubernetes/checks/Seccomp.py]
1 import dpath
2
3 from checkov.common.models.enums import CheckCategories, CheckResult
4 from checkov.kubernetes.base_spec_check import BaseK8Check
5
6
7 class Seccomp(BaseK8Check):
8
9 def __init__(self):
10 # CIS-1.5 5.7.2
11 name = "Ensure that the seccomp profile is set to docker/default or runtime/default"
12 id = "CKV_K8S_31"
13 # Location: Pod.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod
14 # Location: CronJob.spec.jobTemplate.spec.template.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod
15 # Location: *.spec.template.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod
16 # Location: *.spec.securityContext.seccompProfile.type
17 supported_kind = ['Pod', 'Deployment', 'DaemonSet', 'StatefulSet', 'ReplicaSet', 'ReplicationController', 'Job', 'CronJob']
18 categories = [CheckCategories.KUBERNETES]
19 super().__init__(name=name, id=id, categories=categories, supported_entities=supported_kind)
20
21 def get_resource_id(self, conf):
22 if "namespace" in conf["metadata"]:
23 return "{}.{}.{}".format(conf["kind"], conf["metadata"]["name"], conf["metadata"]["namespace"])
24 else:
25 return "{}.{}.default".format(conf["kind"], conf["metadata"]["name"])
26
27 def scan_spec_conf(self, conf):
28 metadata = {}
29
30 if conf['kind'] == 'Pod':
31 security_profile = dpath.search(conf, 'spec/securityContext/seccompProfile/type')
32 if security_profile:
33 security_profile = dpath.get(conf, 'spec/securityContext/seccompProfile/type')
34 return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED
35 if "metadata" in conf:
36 metadata = conf["metadata"]
37 elif conf['kind'] == 'CronJob':
38 if "spec" in conf:
39 if "jobTemplate" in conf["spec"]:
40 if "spec" in conf["spec"]["jobTemplate"]:
41 if "template" in conf["spec"]["jobTemplate"]["spec"]:
42 if "metadata" in conf["spec"]["jobTemplate"]["spec"]["template"]:
43 metadata = conf["spec"]["jobTemplate"]["spec"]["template"]["metadata"]
44 else:
45 if "spec" in conf:
46 if "template" in conf["spec"]:
47 if "metadata" in conf["spec"]["template"]:
48 metadata = conf["spec"]["template"]["metadata"]
49
50 if metadata:
51 if "annotations" in metadata and isinstance(metadata['annotations'], dict):
52 if "seccomp.security.alpha.kubernetes.io/pod" in metadata["annotations"]:
53 if ("docker/default" in metadata["annotations"]["seccomp.security.alpha.kubernetes.io/pod"] or
54 "runtime/default" in metadata["annotations"]["seccomp.security.alpha.kubernetes.io/pod"]):
55 return CheckResult.PASSED
56 return CheckResult.FAILED
57
58
59 check = Seccomp()
60
[end of checkov/kubernetes/checks/Seccomp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/kubernetes/checks/Seccomp.py b/checkov/kubernetes/checks/Seccomp.py
--- a/checkov/kubernetes/checks/Seccomp.py
+++ b/checkov/kubernetes/checks/Seccomp.py
@@ -34,6 +34,20 @@
return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED
if "metadata" in conf:
metadata = conf["metadata"]
+ if conf['kind'] == 'Deployment':
+ security_profile = dpath.search(conf, 'spec/template/spec/securityContext/seccompProfile/type')
+ if security_profile:
+ security_profile = dpath.get(conf, 'spec/template/spec/securityContext/seccompProfile/type')
+ return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED
+ if "metadata" in conf:
+ metadata = conf["metadata"]
+ if conf['kind'] == 'StatefulSet':
+ security_profile = dpath.search(conf, 'spec/template/spec/securityContext/seccompProfile/type')
+ if security_profile:
+ security_profile = dpath.get(conf, 'spec/template/spec/securityContext/seccompProfile/type')
+ return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED
+ if "metadata" in conf:
+ metadata = conf["metadata"]
elif conf['kind'] == 'CronJob':
if "spec" in conf:
if "jobTemplate" in conf["spec"]:
| {"golden_diff": "diff --git a/checkov/kubernetes/checks/Seccomp.py b/checkov/kubernetes/checks/Seccomp.py\n--- a/checkov/kubernetes/checks/Seccomp.py\n+++ b/checkov/kubernetes/checks/Seccomp.py\n@@ -34,6 +34,20 @@\n return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED\n if \"metadata\" in conf:\n metadata = conf[\"metadata\"]\n+ if conf['kind'] == 'Deployment':\n+ security_profile = dpath.search(conf, 'spec/template/spec/securityContext/seccompProfile/type')\n+ if security_profile:\n+ security_profile = dpath.get(conf, 'spec/template/spec/securityContext/seccompProfile/type')\n+ return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED\n+ if \"metadata\" in conf:\n+ metadata = conf[\"metadata\"]\n+ if conf['kind'] == 'StatefulSet':\n+ security_profile = dpath.search(conf, 'spec/template/spec/securityContext/seccompProfile/type')\n+ if security_profile:\n+ security_profile = dpath.get(conf, 'spec/template/spec/securityContext/seccompProfile/type')\n+ return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED\n+ if \"metadata\" in conf:\n+ metadata = conf[\"metadata\"] \n elif conf['kind'] == 'CronJob':\n if \"spec\" in conf:\n if \"jobTemplate\" in conf[\"spec\"]:\n", "issue": "CKV_K8S_31 failure with RuntimeDefault configured for workloads\n**Describe the bug**\r\nPlease see #710\r\n\r\n**To Reproduce**\r\nPlease see #710\r\n\r\n**Expected behavior**\r\nPlease see #710\r\n\r\n**Additional context**\r\nThe bug reported in #710 needs to be fixed for workloads in https://github.com/bridgecrewio/checkov/blob/master/checkov/kubernetes/checks/Seccomp.py#L44:L48 as well.\r\n\r\n```yaml\r\napiVersion: apps/v1\r\nkind: Deployment\r\nmetadata:\r\n....\r\nspec:\r\n...\r\n...\r\n template:\r\n ...\r\n ...\r\n spec:\r\n ....\r\n ....\r\n securityContext:\r\n allowPrivilegeEscalation: false\r\n seccompProfile:\r\n type: RuntimeDefault\r\n```\r\n\r\n**Related PRs**\r\n#711 \r\n\n", "before_files": [{"content": "import dpath\n\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.kubernetes.base_spec_check import BaseK8Check\n\n\nclass Seccomp(BaseK8Check):\n\n def __init__(self):\n # CIS-1.5 5.7.2\n name = \"Ensure that the seccomp profile is set to docker/default or runtime/default\"\n id = \"CKV_K8S_31\"\n # Location: Pod.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod\n # Location: CronJob.spec.jobTemplate.spec.template.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod\n # Location: *.spec.template.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod\n # Location: *.spec.securityContext.seccompProfile.type\n supported_kind = ['Pod', 'Deployment', 'DaemonSet', 'StatefulSet', 'ReplicaSet', 'ReplicationController', 'Job', 'CronJob']\n categories = [CheckCategories.KUBERNETES]\n super().__init__(name=name, id=id, categories=categories, supported_entities=supported_kind)\n\n def get_resource_id(self, conf):\n if \"namespace\" in conf[\"metadata\"]:\n return \"{}.{}.{}\".format(conf[\"kind\"], conf[\"metadata\"][\"name\"], conf[\"metadata\"][\"namespace\"])\n else:\n return \"{}.{}.default\".format(conf[\"kind\"], conf[\"metadata\"][\"name\"])\n\n def scan_spec_conf(self, conf):\n metadata = {}\n\n if conf['kind'] == 'Pod':\n security_profile = dpath.search(conf, 'spec/securityContext/seccompProfile/type')\n if security_profile:\n security_profile = dpath.get(conf, 'spec/securityContext/seccompProfile/type')\n return CheckResult.PASSED if 
security_profile == 'RuntimeDefault' else CheckResult.FAILED\n if \"metadata\" in conf:\n metadata = conf[\"metadata\"]\n elif conf['kind'] == 'CronJob':\n if \"spec\" in conf:\n if \"jobTemplate\" in conf[\"spec\"]:\n if \"spec\" in conf[\"spec\"][\"jobTemplate\"]:\n if \"template\" in conf[\"spec\"][\"jobTemplate\"][\"spec\"]:\n if \"metadata\" in conf[\"spec\"][\"jobTemplate\"][\"spec\"][\"template\"]:\n metadata = conf[\"spec\"][\"jobTemplate\"][\"spec\"][\"template\"][\"metadata\"]\n else:\n if \"spec\" in conf:\n if \"template\" in conf[\"spec\"]:\n if \"metadata\" in conf[\"spec\"][\"template\"]:\n metadata = conf[\"spec\"][\"template\"][\"metadata\"]\n\n if metadata:\n if \"annotations\" in metadata and isinstance(metadata['annotations'], dict):\n if \"seccomp.security.alpha.kubernetes.io/pod\" in metadata[\"annotations\"]:\n if (\"docker/default\" in metadata[\"annotations\"][\"seccomp.security.alpha.kubernetes.io/pod\"] or\n \"runtime/default\" in metadata[\"annotations\"][\"seccomp.security.alpha.kubernetes.io/pod\"]):\n return CheckResult.PASSED\n return CheckResult.FAILED\n\n\ncheck = Seccomp()\n", "path": "checkov/kubernetes/checks/Seccomp.py"}]} | 1,497 | 339 |
gh_patches_debug_18031 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-2725 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Internal server error when importing CSVs with long names
Follow the same steps as reported in #2634 and observe the error from the screenshot below:
API: `http://localhost/api/db/v0/tables/12/records/?limit=500&offset=0`
<img width="1512" alt="Screenshot 2023-03-20 at 5 29 52 AM" src="https://user-images.githubusercontent.com/11032856/226218521-75355de8-eee0-4b5e-9a9c-47aa3ff67da2.png">
</issue>
<code>
[start of db/identifiers.py]
1 import hashlib
2
3
4 def truncate_if_necessary(identifier):
5 """
6 Takes an identifier and returns it, truncating it, if it is too long. The truncated version
7 will end with a hash of the passed identifier, therefore column name collision should be very
8 rare.
9
10 Iteratively removes characters from the end of the identifier, until the resulting string, with
11 the suffix hash of the identifier appended, is short enough that it doesn't need to be truncated
12 anymore. Whitespace is trimmed from the truncated identifier before appending the suffix.
13 """
14 assert type(identifier) is str
15 if not is_identifier_too_long(identifier):
16 return identifier
17 right_side = "-" + _get_truncation_hash(identifier)
18 identifier_length = len(identifier)
19 assert len(right_side) < identifier_length # Sanity check
20 range_of_num_of_chars_to_remove = range(1, identifier_length)
21 for num_of_chars_to_remove in range_of_num_of_chars_to_remove:
22 left_side = identifier[:num_of_chars_to_remove * -1]
23 left_side = left_side.rstrip()
24 truncated_identifier = left_side + right_side
25 if not is_identifier_too_long(truncated_identifier):
26 return truncated_identifier
27 raise Exception(
28 "Acceptable truncation not found; should never happen."
29 )
30
31
32 def is_identifier_too_long(identifier):
33 postgres_identifier_size_limit = 63
34 size = _get_size_of_identifier_in_bytes(identifier)
35 return size > postgres_identifier_size_limit
36
37
38 def _get_truncation_hash(identifier):
39 """
40 Produces an 8-character string hash of the passed identifier.
41
42 Using hash function blake2s, because it seems fairly recommended and it seems to be better
43 suited for shorter digests than blake2b. We want short digests to not take up too much of the
44 truncated identifier in whose construction this will be used.
45 """
46 h = hashlib.blake2s(digest_size=4)
47 bytes = _get_identifier_in_bytes(identifier)
48 h.update(bytes)
49 return h.hexdigest()
50
51
52 def _get_size_of_identifier_in_bytes(s):
53 bytes = _get_identifier_in_bytes(s)
54 return len(bytes)
55
56
57 def _get_identifier_in_bytes(s):
58 """
59 Afaict, following Postgres doc [0] says that UTF-8 supports all languages; therefore, different
60 server locale configurations should not break this.
61
62 [0] https://www.postgresql.org/docs/13/multibyte.html
63 """
64 return s.encode('utf-8')
65
[end of db/identifiers.py]
</code>
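For context on the truncation helper above: PostgreSQL limits identifiers by *byte* length rather than character count, so imported CSV headers with multi-byte characters can exceed the limit well before 63 characters. The sketch below only illustrates that distinction; the 63-byte constant mirrors the module above, and any lower "safe" threshold a fix might pick would be an assumption.

```
# Illustrative sketch: identifier length is measured in encoded bytes,
# so multi-byte UTF-8 characters count more than once.
POSTGRES_IDENTIFIER_SIZE_LIMIT = 63


def identifier_byte_length(identifier: str) -> int:
    return len(identifier.encode("utf-8"))


for name in ("a" * 63, "a" * 64, "ü" * 32):
    size = identifier_byte_length(name)
    print(len(name), "chars ->", size, "bytes; too long:", size > POSTGRES_IDENTIFIER_SIZE_LIMIT)
```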
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/db/identifiers.py b/db/identifiers.py
--- a/db/identifiers.py
+++ b/db/identifiers.py
@@ -1,6 +1,9 @@
import hashlib
+POSTGRES_IDENTIFIER_SIZE_LIMIT = 63
+
+
def truncate_if_necessary(identifier):
"""
Takes an identifier and returns it, truncating it, if it is too long. The truncated version
@@ -30,9 +33,13 @@
def is_identifier_too_long(identifier):
- postgres_identifier_size_limit = 63
+ # TODO we should support POSTGRES_IDENTIFIER_SIZE_LIMIT here;
+ # Our current limit due to an unknown bug that manifests at least
+ # when importing CSVs seems to be 57 bytes. Here we're setting it even
+ # lower just in case.
+ our_temporary_identifier_size_limit = 48
size = _get_size_of_identifier_in_bytes(identifier)
- return size > postgres_identifier_size_limit
+ return size > our_temporary_identifier_size_limit
def _get_truncation_hash(identifier):
| {"golden_diff": "diff --git a/db/identifiers.py b/db/identifiers.py\n--- a/db/identifiers.py\n+++ b/db/identifiers.py\n@@ -1,6 +1,9 @@\n import hashlib\n \n \n+POSTGRES_IDENTIFIER_SIZE_LIMIT = 63\n+\n+\n def truncate_if_necessary(identifier):\n \"\"\"\n Takes an identifier and returns it, truncating it, if it is too long. The truncated version\n@@ -30,9 +33,13 @@\n \n \n def is_identifier_too_long(identifier):\n- postgres_identifier_size_limit = 63\n+ # TODO we should support POSTGRES_IDENTIFIER_SIZE_LIMIT here;\n+ # Our current limit due to an unknown bug that manifests at least\n+ # when importing CSVs seems to be 57 bytes. Here we're setting it even\n+ # lower just in case.\n+ our_temporary_identifier_size_limit = 48\n size = _get_size_of_identifier_in_bytes(identifier)\n- return size > postgres_identifier_size_limit\n+ return size > our_temporary_identifier_size_limit\n \n \n def _get_truncation_hash(identifier):\n", "issue": "Internal server error when importing CSVs with long names\nFollow the same steps as reported in #2634 and observer the error from the screenshot below: \r\n\r\nAPI: `http://localhost/api/db/v0/tables/12/records/?limit=500&offset=0`\r\n\r\n<img width=\"1512\" alt=\"Screenshot 2023-03-20 at 5 29 52 AM\" src=\"https://user-images.githubusercontent.com/11032856/226218521-75355de8-eee0-4b5e-9a9c-47aa3ff67da2.png\">\r\n\n", "before_files": [{"content": "import hashlib\n\n\ndef truncate_if_necessary(identifier):\n \"\"\"\n Takes an identifier and returns it, truncating it, if it is too long. The truncated version\n will end with a hash of the passed identifier, therefore column name collision should be very\n rare.\n\n Iteratively removes characters from the end of the identifier, until the resulting string, with\n the suffix hash of the identifier appended, is short enough that it doesn't need to be truncated\n anymore. Whitespace is trimmed from the truncated identifier before appending the suffix.\n \"\"\"\n assert type(identifier) is str\n if not is_identifier_too_long(identifier):\n return identifier\n right_side = \"-\" + _get_truncation_hash(identifier)\n identifier_length = len(identifier)\n assert len(right_side) < identifier_length # Sanity check\n range_of_num_of_chars_to_remove = range(1, identifier_length)\n for num_of_chars_to_remove in range_of_num_of_chars_to_remove:\n left_side = identifier[:num_of_chars_to_remove * -1]\n left_side = left_side.rstrip()\n truncated_identifier = left_side + right_side\n if not is_identifier_too_long(truncated_identifier):\n return truncated_identifier\n raise Exception(\n \"Acceptable truncation not found; should never happen.\"\n )\n\n\ndef is_identifier_too_long(identifier):\n postgres_identifier_size_limit = 63\n size = _get_size_of_identifier_in_bytes(identifier)\n return size > postgres_identifier_size_limit\n\n\ndef _get_truncation_hash(identifier):\n \"\"\"\n Produces an 8-character string hash of the passed identifier.\n\n Using hash function blake2s, because it seems fairly recommended and it seems to be better\n suited for shorter digests than blake2b. 
We want short digests to not take up too much of the\n truncated identifier in whose construction this will be used.\n \"\"\"\n h = hashlib.blake2s(digest_size=4)\n bytes = _get_identifier_in_bytes(identifier)\n h.update(bytes)\n return h.hexdigest()\n\n\ndef _get_size_of_identifier_in_bytes(s):\n bytes = _get_identifier_in_bytes(s)\n return len(bytes)\n\n\ndef _get_identifier_in_bytes(s):\n \"\"\"\n Afaict, following Postgres doc [0] says that UTF-8 supports all languages; therefore, different\n server locale configurations should not break this.\n\n [0] https://www.postgresql.org/docs/13/multibyte.html\n \"\"\"\n return s.encode('utf-8')\n", "path": "db/identifiers.py"}]} | 1,358 | 241 |
gh_patches_debug_7763 | rasdani/github-patches | git_diff | plotly__dash-808 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Defer `pytest` import?
Looks like `pytest` isn't the safest dependency, causing issues with scikit-learn: https://community.plot.ly/t/pytest-transient-dependency/25383
Could we move the `import pytest` into the testing module/class/function itself and not require it upon install? We could even have a separate install with setup.py's "extras" feature (https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies) like `pip install dash[testing]` or something.
</issue>
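Below is a minimal sketch of the "extras" idea the issue floats, with a hypothetical package name and placeholder requirements rather than Dash's real ones: `pytest` moves out of `install_requires` and into an optional extra, so a plain `pip install` never pulls it in, while `pip install example-package[testing]` does.

```
# Hedged sketch only: the package name, versions and "testing" extra contents
# are placeholders, not Dash's actual packaging.
from setuptools import setup, find_packages

setup(
    name="example-package",
    version="0.0.1",
    packages=find_packages(exclude=["tests*"]),
    install_requires=[
        "Flask>=1.0",        # runtime dependencies only; no pytest here
    ],
    extras_require={
        "testing": [         # pulled in by `pip install example-package[testing]`
            "pytest",
        ],
    },
)
```

Pairing this with a deferred `import pytest` inside the testing helpers keeps the import from running at all for users who only install the core package.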
<code>
[start of setup.py]
1 import io
2 from setuptools import setup, find_packages
3
4 main_ns = {}
5 exec(open("dash/version.py").read(), main_ns) # pylint: disable=exec-used
6
7
8 def read_req_file(req_type):
9 with open("requires-{}.txt".format(req_type)) as fp:
10 requires = (line.strip() for line in fp)
11 return [req for req in requires if req and not req.startswith("#")]
12
13
14 setup(
15 name="dash",
16 version=main_ns["__version__"],
17 author="chris p",
18 author_email="[email protected]",
19 packages=find_packages(exclude=["tests*"]),
20 include_package_data=True,
21 license="MIT",
22 description=(
23 "A Python framework for building reactive web-apps. "
24 "Developed by Plotly."
25 ),
26 long_description=io.open("README.md", encoding="utf-8").read(),
27 long_description_content_type="text/markdown",
28 install_requires=read_req_file("install"),
29 extras_require={"ci": read_req_file("ci")},
30 entry_points={
31 "console_scripts": [
32 "dash-generate-components ="
33 " dash.development.component_generator:cli"
34 ],
35 "pytest11": ["dash = dash.testing.plugin"],
36 },
37 url="https://plot.ly/dash",
38 classifiers=[
39 "Development Status :: 5 - Production/Stable",
40 "Environment :: Web Environment",
41 "Framework :: Flask",
42 "Intended Audience :: Developers",
43 "Intended Audience :: Education",
44 "Intended Audience :: Financial and Insurance Industry",
45 "Intended Audience :: Healthcare Industry",
46 "Intended Audience :: Manufacturing",
47 "Intended Audience :: Science/Research",
48 "License :: OSI Approved :: MIT License",
49 "Programming Language :: Python",
50 "Programming Language :: Python :: 2",
51 "Programming Language :: Python :: 2.7",
52 "Programming Language :: Python :: 3",
53 "Programming Language :: Python :: 3.3",
54 "Programming Language :: Python :: 3.4",
55 "Programming Language :: Python :: 3.5",
56 "Programming Language :: Python :: 3.6",
57 "Programming Language :: Python :: 3.7",
58 "Topic :: Database :: Front-Ends",
59 "Topic :: Office/Business :: Financial :: Spreadsheet",
60 "Topic :: Scientific/Engineering :: Visualization",
61 "Topic :: Software Development :: Libraries :: Application Frameworks",
62 "Topic :: Software Development :: Widget Sets",
63 ],
64 )
65
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -26,7 +26,10 @@
long_description=io.open("README.md", encoding="utf-8").read(),
long_description_content_type="text/markdown",
install_requires=read_req_file("install"),
- extras_require={"ci": read_req_file("ci")},
+ extras_require={
+ "ci": read_req_file("ci"),
+ "testing": read_req_file("testing"),
+ },
entry_points={
"console_scripts": [
"dash-generate-components ="
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -26,7 +26,10 @@\n long_description=io.open(\"README.md\", encoding=\"utf-8\").read(),\n long_description_content_type=\"text/markdown\",\n install_requires=read_req_file(\"install\"),\n- extras_require={\"ci\": read_req_file(\"ci\")},\n+ extras_require={\n+ \"ci\": read_req_file(\"ci\"),\n+ \"testing\": read_req_file(\"testing\"),\n+ },\n entry_points={\n \"console_scripts\": [\n \"dash-generate-components =\"\n", "issue": "Defer `pytest` import?\nLooks like `pytest` isn't the safest dependency, causing issues with scikit-learn: https://community.plot.ly/t/pytest-transient-dependency/25383\r\n\r\nCould we move the `import pytest` into the testing module/class/function itself and not require it upon install? We could even have a separate install with setup.py's \"extras\" feature (https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies) like `pip install dash[testing]` or something.\n", "before_files": [{"content": "import io\nfrom setuptools import setup, find_packages\n\nmain_ns = {}\nexec(open(\"dash/version.py\").read(), main_ns) # pylint: disable=exec-used\n\n\ndef read_req_file(req_type):\n with open(\"requires-{}.txt\".format(req_type)) as fp:\n requires = (line.strip() for line in fp)\n return [req for req in requires if req and not req.startswith(\"#\")]\n\n\nsetup(\n name=\"dash\",\n version=main_ns[\"__version__\"],\n author=\"chris p\",\n author_email=\"[email protected]\",\n packages=find_packages(exclude=[\"tests*\"]),\n include_package_data=True,\n license=\"MIT\",\n description=(\n \"A Python framework for building reactive web-apps. \"\n \"Developed by Plotly.\"\n ),\n long_description=io.open(\"README.md\", encoding=\"utf-8\").read(),\n long_description_content_type=\"text/markdown\",\n install_requires=read_req_file(\"install\"),\n extras_require={\"ci\": read_req_file(\"ci\")},\n entry_points={\n \"console_scripts\": [\n \"dash-generate-components =\"\n \" dash.development.component_generator:cli\"\n ],\n \"pytest11\": [\"dash = dash.testing.plugin\"],\n },\n url=\"https://plot.ly/dash\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Framework :: Flask\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Financial and Insurance Industry\",\n \"Intended Audience :: Healthcare Industry\",\n \"Intended Audience :: Manufacturing\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Database :: Front-Ends\",\n \"Topic :: Office/Business :: Financial :: Spreadsheet\",\n \"Topic :: Scientific/Engineering :: Visualization\",\n \"Topic :: Software Development :: Libraries :: Application Frameworks\",\n \"Topic :: Software Development :: Widget Sets\",\n ],\n)\n", "path": "setup.py"}]} | 1,302 | 132 |
gh_patches_debug_42715 | rasdani/github-patches | git_diff | openai__gym-1878 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Box bound precision warning
I get this warning a lot when using Box environments:
```
.../gym/logger.py:30: UserWarning: WARN: Box bound precision lowered by casting to float32
```
This is particularly annoying because the [default dtype for Box](https://github.com/openai/gym/blob/master/gym/spaces/box.py#L24) is already `np.float32`.
</issue>
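A small reproduction sketch, assuming a gym release matching the listing below: scalar bounds are expanded with `np.full()`, which produces float64 arrays, so even a plain construction with the default float32 dtype trips the precision check.

```
# Reproduction sketch (gym version assumed to match the module below).
import numpy as np
from gym.spaces import Box

# np.full() yields float64 bound arrays, so this warns about casting to float32:
space = Box(low=-1.0, high=2.0, shape=(3, 4), dtype=np.float32)

# Bounds that already match the dtype keep the precisions equal and stay silent:
quiet = Box(
    low=np.full((3, 4), -1.0, dtype=np.float32),
    high=np.full((3, 4), 2.0, dtype=np.float32),
    dtype=np.float32,
)
```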
<code>
[start of gym/spaces/box.py]
1 import numpy as np
2
3 from .space import Space
4 from gym import logger
5
6
7 class Box(Space):
8 """
9 A (possibly unbounded) box in R^n. Specifically, a Box represents the
10 Cartesian product of n closed intervals. Each interval has the form of one
11 of [a, b], (-oo, b], [a, oo), or (-oo, oo).
12
13 There are two common use cases:
14
15 * Identical bound for each dimension::
16 >>> Box(low=-1.0, high=2.0, shape=(3, 4), dtype=np.float32)
17 Box(3, 4)
18
19 * Independent bound for each dimension::
20 >>> Box(low=np.array([-1.0, -2.0]), high=np.array([2.0, 4.0]), dtype=np.float32)
21 Box(2,)
22
23 """
24 def __init__(self, low, high, shape=None, dtype=np.float32):
25 assert dtype is not None, 'dtype must be explicitly provided. '
26 self.dtype = np.dtype(dtype)
27
28 if shape is None:
29 assert low.shape == high.shape, 'box dimension mismatch. '
30 self.shape = low.shape
31 self.low = low
32 self.high = high
33 else:
34 assert np.isscalar(low) and np.isscalar(high), 'box requires scalar bounds. '
35 self.shape = tuple(shape)
36 self.low = np.full(self.shape, low)
37 self.high = np.full(self.shape, high)
38
39 def _get_precision(dtype):
40 if np.issubdtype(dtype, np.floating):
41 return np.finfo(dtype).precision
42 else:
43 return np.inf
44 low_precision = _get_precision(self.low.dtype)
45 high_precision = _get_precision(self.high.dtype)
46 dtype_precision = _get_precision(self.dtype)
47 if min(low_precision, high_precision) > dtype_precision:
48 logger.warn("Box bound precision lowered by casting to {}".format(self.dtype))
49 self.low = self.low.astype(self.dtype)
50 self.high = self.high.astype(self.dtype)
51
52 # Boolean arrays which indicate the interval type for each coordinate
53 self.bounded_below = -np.inf < self.low
54 self.bounded_above = np.inf > self.high
55
56 super(Box, self).__init__(self.shape, self.dtype)
57
58 def is_bounded(self, manner="both"):
59 below = np.all(self.bounded_below)
60 above = np.all(self.bounded_above)
61 if manner == "both":
62 return below and above
63 elif manner == "below":
64 return below
65 elif manner == "above":
66 return above
67 else:
68 raise ValueError("manner is not in {'below', 'above', 'both'}")
69
70 def sample(self):
71 """
72 Generates a single random sample inside of the Box.
73
74 In creating a sample of the box, each coordinate is sampled according to
75 the form of the interval:
76
77 * [a, b] : uniform distribution
78 * [a, oo) : shifted exponential distribution
79 * (-oo, b] : shifted negative exponential distribution
80 * (-oo, oo) : normal distribution
81 """
82 high = self.high if self.dtype.kind == 'f' \
83 else self.high.astype('int64') + 1
84 sample = np.empty(self.shape)
85
86 # Masking arrays which classify the coordinates according to interval
87 # type
88 unbounded = ~self.bounded_below & ~self.bounded_above
89 upp_bounded = ~self.bounded_below & self.bounded_above
90 low_bounded = self.bounded_below & ~self.bounded_above
91 bounded = self.bounded_below & self.bounded_above
92
93
94 # Vectorized sampling by interval type
95 sample[unbounded] = self.np_random.normal(
96 size=unbounded[unbounded].shape)
97
98 sample[low_bounded] = self.np_random.exponential(
99 size=low_bounded[low_bounded].shape) + self.low[low_bounded]
100
101 sample[upp_bounded] = -self.np_random.exponential(
102 size=upp_bounded[upp_bounded].shape) + self.high[upp_bounded]
103
104 sample[bounded] = self.np_random.uniform(low=self.low[bounded],
105 high=high[bounded],
106 size=bounded[bounded].shape)
107 if self.dtype.kind == 'i':
108 sample = np.floor(sample)
109
110 return sample.astype(self.dtype)
111
112 def contains(self, x):
113 if isinstance(x, list):
114 x = np.array(x) # Promote list to array for contains check
115 return x.shape == self.shape and np.all(x >= self.low) and np.all(x <= self.high)
116
117 def to_jsonable(self, sample_n):
118 return np.array(sample_n).tolist()
119
120 def from_jsonable(self, sample_n):
121 return [np.asarray(sample) for sample in sample_n]
122
123 def __repr__(self):
124 return "Box" + str(self.shape)
125
126 def __eq__(self, other):
127 return isinstance(other, Box) and (self.shape == other.shape) and np.allclose(self.low, other.low) and np.allclose(self.high, other.high)
128
[end of gym/spaces/box.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gym/spaces/box.py b/gym/spaces/box.py
--- a/gym/spaces/box.py
+++ b/gym/spaces/box.py
@@ -9,13 +9,13 @@
A (possibly unbounded) box in R^n. Specifically, a Box represents the
Cartesian product of n closed intervals. Each interval has the form of one
of [a, b], (-oo, b], [a, oo), or (-oo, oo).
-
+
There are two common use cases:
-
+
* Identical bound for each dimension::
>>> Box(low=-1.0, high=2.0, shape=(3, 4), dtype=np.float32)
Box(3, 4)
-
+
* Independent bound for each dimension::
>>> Box(low=np.array([-1.0, -2.0]), high=np.array([2.0, 4.0]), dtype=np.float32)
Box(2,)
@@ -33,8 +33,8 @@
else:
assert np.isscalar(low) and np.isscalar(high), 'box requires scalar bounds. '
self.shape = tuple(shape)
- self.low = np.full(self.shape, low)
- self.high = np.full(self.shape, high)
+ self.low = np.full(self.shape, low, dtype=dtype)
+ self.high = np.full(self.shape, high, dtype=dtype)
def _get_precision(dtype):
if np.issubdtype(dtype, np.floating):
@@ -69,12 +69,12 @@
def sample(self):
"""
- Generates a single random sample inside of the Box.
+ Generates a single random sample inside of the Box.
In creating a sample of the box, each coordinate is sampled according to
the form of the interval:
-
- * [a, b] : uniform distribution
+
+ * [a, b] : uniform distribution
* [a, oo) : shifted exponential distribution
* (-oo, b] : shifted negative exponential distribution
* (-oo, oo) : normal distribution
@@ -89,7 +89,7 @@
upp_bounded = ~self.bounded_below & self.bounded_above
low_bounded = self.bounded_below & ~self.bounded_above
bounded = self.bounded_below & self.bounded_above
-
+
# Vectorized sampling by interval type
sample[unbounded] = self.np_random.normal(
@@ -97,18 +97,18 @@
sample[low_bounded] = self.np_random.exponential(
size=low_bounded[low_bounded].shape) + self.low[low_bounded]
-
+
sample[upp_bounded] = -self.np_random.exponential(
size=upp_bounded[upp_bounded].shape) + self.high[upp_bounded]
-
- sample[bounded] = self.np_random.uniform(low=self.low[bounded],
+
+ sample[bounded] = self.np_random.uniform(low=self.low[bounded],
high=high[bounded],
size=bounded[bounded].shape)
if self.dtype.kind == 'i':
sample = np.floor(sample)
return sample.astype(self.dtype)
-
+
def contains(self, x):
if isinstance(x, list):
x = np.array(x) # Promote list to array for contains check
| {"golden_diff": "diff --git a/gym/spaces/box.py b/gym/spaces/box.py\n--- a/gym/spaces/box.py\n+++ b/gym/spaces/box.py\n@@ -9,13 +9,13 @@\n A (possibly unbounded) box in R^n. Specifically, a Box represents the\n Cartesian product of n closed intervals. Each interval has the form of one\n of [a, b], (-oo, b], [a, oo), or (-oo, oo).\n- \n+\n There are two common use cases:\n- \n+\n * Identical bound for each dimension::\n >>> Box(low=-1.0, high=2.0, shape=(3, 4), dtype=np.float32)\n Box(3, 4)\n- \n+\n * Independent bound for each dimension::\n >>> Box(low=np.array([-1.0, -2.0]), high=np.array([2.0, 4.0]), dtype=np.float32)\n Box(2,)\n@@ -33,8 +33,8 @@\n else:\n assert np.isscalar(low) and np.isscalar(high), 'box requires scalar bounds. '\n self.shape = tuple(shape)\n- self.low = np.full(self.shape, low)\n- self.high = np.full(self.shape, high)\n+ self.low = np.full(self.shape, low, dtype=dtype)\n+ self.high = np.full(self.shape, high, dtype=dtype)\n \n def _get_precision(dtype):\n if np.issubdtype(dtype, np.floating):\n@@ -69,12 +69,12 @@\n \n def sample(self):\n \"\"\"\n- Generates a single random sample inside of the Box. \n+ Generates a single random sample inside of the Box.\n \n In creating a sample of the box, each coordinate is sampled according to\n the form of the interval:\n- \n- * [a, b] : uniform distribution \n+\n+ * [a, b] : uniform distribution\n * [a, oo) : shifted exponential distribution\n * (-oo, b] : shifted negative exponential distribution\n * (-oo, oo) : normal distribution\n@@ -89,7 +89,7 @@\n upp_bounded = ~self.bounded_below & self.bounded_above\n low_bounded = self.bounded_below & ~self.bounded_above\n bounded = self.bounded_below & self.bounded_above\n- \n+\n \n # Vectorized sampling by interval type\n sample[unbounded] = self.np_random.normal(\n@@ -97,18 +97,18 @@\n \n sample[low_bounded] = self.np_random.exponential(\n size=low_bounded[low_bounded].shape) + self.low[low_bounded]\n- \n+\n sample[upp_bounded] = -self.np_random.exponential(\n size=upp_bounded[upp_bounded].shape) + self.high[upp_bounded]\n- \n- sample[bounded] = self.np_random.uniform(low=self.low[bounded], \n+\n+ sample[bounded] = self.np_random.uniform(low=self.low[bounded],\n high=high[bounded],\n size=bounded[bounded].shape)\n if self.dtype.kind == 'i':\n sample = np.floor(sample)\n \n return sample.astype(self.dtype)\n- \n+\n def contains(self, x):\n if isinstance(x, list):\n x = np.array(x) # Promote list to array for contains check\n", "issue": "Box bound precision warning\nI get this warning a lot when using Box environments:\r\n\r\n```\r\n.../gym/logger.py:30: UserWarning: WARN: Box bound precision lowered by casting to float32\r\n```\r\nThis is particularly annoying, especially because the [default dtype for Box is](https://github.com/openai/gym/blob/master/gym/spaces/box.py#L24) `np.float(32)`\n", "before_files": [{"content": "import numpy as np\n\nfrom .space import Space\nfrom gym import logger\n\n\nclass Box(Space):\n \"\"\"\n A (possibly unbounded) box in R^n. Specifically, a Box represents the\n Cartesian product of n closed intervals. 
Each interval has the form of one\n of [a, b], (-oo, b], [a, oo), or (-oo, oo).\n \n There are two common use cases:\n \n * Identical bound for each dimension::\n >>> Box(low=-1.0, high=2.0, shape=(3, 4), dtype=np.float32)\n Box(3, 4)\n \n * Independent bound for each dimension::\n >>> Box(low=np.array([-1.0, -2.0]), high=np.array([2.0, 4.0]), dtype=np.float32)\n Box(2,)\n\n \"\"\"\n def __init__(self, low, high, shape=None, dtype=np.float32):\n assert dtype is not None, 'dtype must be explicitly provided. '\n self.dtype = np.dtype(dtype)\n\n if shape is None:\n assert low.shape == high.shape, 'box dimension mismatch. '\n self.shape = low.shape\n self.low = low\n self.high = high\n else:\n assert np.isscalar(low) and np.isscalar(high), 'box requires scalar bounds. '\n self.shape = tuple(shape)\n self.low = np.full(self.shape, low)\n self.high = np.full(self.shape, high)\n\n def _get_precision(dtype):\n if np.issubdtype(dtype, np.floating):\n return np.finfo(dtype).precision\n else:\n return np.inf\n low_precision = _get_precision(self.low.dtype)\n high_precision = _get_precision(self.high.dtype)\n dtype_precision = _get_precision(self.dtype)\n if min(low_precision, high_precision) > dtype_precision:\n logger.warn(\"Box bound precision lowered by casting to {}\".format(self.dtype))\n self.low = self.low.astype(self.dtype)\n self.high = self.high.astype(self.dtype)\n\n # Boolean arrays which indicate the interval type for each coordinate\n self.bounded_below = -np.inf < self.low\n self.bounded_above = np.inf > self.high\n\n super(Box, self).__init__(self.shape, self.dtype)\n\n def is_bounded(self, manner=\"both\"):\n below = np.all(self.bounded_below)\n above = np.all(self.bounded_above)\n if manner == \"both\":\n return below and above\n elif manner == \"below\":\n return below\n elif manner == \"above\":\n return above\n else:\n raise ValueError(\"manner is not in {'below', 'above', 'both'}\")\n\n def sample(self):\n \"\"\"\n Generates a single random sample inside of the Box. 
\n\n In creating a sample of the box, each coordinate is sampled according to\n the form of the interval:\n \n * [a, b] : uniform distribution \n * [a, oo) : shifted exponential distribution\n * (-oo, b] : shifted negative exponential distribution\n * (-oo, oo) : normal distribution\n \"\"\"\n high = self.high if self.dtype.kind == 'f' \\\n else self.high.astype('int64') + 1\n sample = np.empty(self.shape)\n\n # Masking arrays which classify the coordinates according to interval\n # type\n unbounded = ~self.bounded_below & ~self.bounded_above\n upp_bounded = ~self.bounded_below & self.bounded_above\n low_bounded = self.bounded_below & ~self.bounded_above\n bounded = self.bounded_below & self.bounded_above\n \n\n # Vectorized sampling by interval type\n sample[unbounded] = self.np_random.normal(\n size=unbounded[unbounded].shape)\n\n sample[low_bounded] = self.np_random.exponential(\n size=low_bounded[low_bounded].shape) + self.low[low_bounded]\n \n sample[upp_bounded] = -self.np_random.exponential(\n size=upp_bounded[upp_bounded].shape) + self.high[upp_bounded]\n \n sample[bounded] = self.np_random.uniform(low=self.low[bounded], \n high=high[bounded],\n size=bounded[bounded].shape)\n if self.dtype.kind == 'i':\n sample = np.floor(sample)\n\n return sample.astype(self.dtype)\n \n def contains(self, x):\n if isinstance(x, list):\n x = np.array(x) # Promote list to array for contains check\n return x.shape == self.shape and np.all(x >= self.low) and np.all(x <= self.high)\n\n def to_jsonable(self, sample_n):\n return np.array(sample_n).tolist()\n\n def from_jsonable(self, sample_n):\n return [np.asarray(sample) for sample in sample_n]\n\n def __repr__(self):\n return \"Box\" + str(self.shape)\n\n def __eq__(self, other):\n return isinstance(other, Box) and (self.shape == other.shape) and np.allclose(self.low, other.low) and np.allclose(self.high, other.high)\n", "path": "gym/spaces/box.py"}]} | 2,042 | 770 |
gh_patches_debug_30451 | rasdani/github-patches | git_diff | bids-standard__pybids-447 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update prep_zenodo.py to only count commits in grabbit up to 0.2.6
With #369, we're dropping the grabbit dependency, so changes there will no longer contribute to pybids.
</issue>
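A hedged sketch of the requested behaviour, with the clone location and ref handling assumed for illustration: rather than reading a co-dependency's latest tag, the shortlog would be capped at a pinned ref (grabbit 0.2.6), so later grabbit commits stop counting toward pybids.

```
# Illustrative only: the directory handling and pinned ref are assumptions,
# not the script's final shape.
from subprocess import run, PIPE


def shortlog_counts(repo_dir, ref):
    """Return (committer, count) pairs for commits reachable from `ref`."""
    shortlog = run(
        ["git", "-C", repo_dir, "shortlog", "-ns", ref],
        stdout=PIPE, check=True,
    )
    return [
        line.split("\t", 1)[::-1]
        for line in shortlog.stdout.decode().splitlines()
        if line
    ]


# e.g. after cloning grabbit into a temporary directory:
# counts = shortlog_counts(str(tmp_grabbit_dir), "0.2.6")
```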
<code>
[start of tools/prep_zenodo.py]
1 #!/usr/bin/env python3
2 import git
3 import json
4 from subprocess import run, PIPE, CalledProcessError
5 from pathlib import Path
6 from tempfile import TemporaryDirectory
7
8
9 def decommify(name):
10 return ' '.join(name.split(', ')[::-1])
11
12
13 # List of repositories whose commits should be counted as contributions
14 codependents = ['https://github.com/grabbles/grabbit.git']
15
16 # Last shablona commit
17 origin_commit = 'd72caaf5933907ed699d57faddaec7bfc836ce6f'
18
19 git_root = Path(git.Repo('.', search_parent_directories=True).working_dir)
20 zenodo_file = git_root / '.zenodo.json'
21
22 zenodo = json.loads(zenodo_file.read_text()) if zenodo_file.exists() else {}
23
24 orig_creators = zenodo.get('creators', [])
25 creator_map = {decommify(creator['name']): creator
26 for creator in orig_creators}
27
28 shortlog = run(['git', 'shortlog', '-ns', f'{origin_commit}..'], stdout=PIPE)
29 counts = [line.split('\t', 1)[::-1]
30 for line in shortlog.stdout.decode().split('\n') if line]
31
32 # Get additional commit counts from dependencies
33 with TemporaryDirectory() as tmpdir:
34 tmppath = Path(tmpdir)
35 for repo in codependents:
36 repo_dir = str(tmppath / repo.rsplit('/', 1)[1].split('.', 1)[0])
37 try:
38 clone = run(['git', 'clone', repo, repo_dir], check=True)
39 except CalledProcessError as err:
40 raise RuntimeError("Could not clone {}".format(repo)) from err
41 tag = run(['git', '-C', repo_dir, 'tag'], stdout=PIPE)
42 latest_tag = tag.stdout.decode().strip().rsplit('\n', 1)[1]
43 dep_shortlog = run(
44 ['git', '-C', repo_dir, 'shortlog', '-ns', latest_tag],
45 stdout=PIPE)
46 counts.extend(line.split('\t', 1)[::-1]
47 for line in dep_shortlog.stdout.decode().split('\n')
48 if line)
49
50 commit_counts = {}
51 for committer, commits in counts:
52 commit_counts[committer] = commit_counts.get(committer, 0) + int(commits)
53
54 # Stable sort:
55 # Number of commits in reverse order
56 # Ties broken by alphabetical order of first name
57 committers = [committer
58 for committer, _ in sorted(commit_counts.items(),
59 key=lambda x: (-x[1], x[0]))]
60
61 # Tal to the top
62 first_author = 'Tal Yarkoni'
63 if committers[0] != first_author:
64 committers.remove(first_author)
65 committers.insert(0, first_author)
66
67 creators = [
68 creator_map.get(committer, {'name': committer})
69 for committer in committers
70 ]
71
72 zenodo['creators'] = creators
73 zenodo_file.write_text(json.dumps(zenodo, indent=2, sort_keys=True) + '\n')
74
[end of tools/prep_zenodo.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tools/prep_zenodo.py b/tools/prep_zenodo.py
--- a/tools/prep_zenodo.py
+++ b/tools/prep_zenodo.py
@@ -11,7 +11,7 @@
# List of repositories whose commits should be counted as contributions
-codependents = ['https://github.com/grabbles/grabbit.git']
+codependents = [('https://github.com/grabbles/grabbit.git', '0.2.6')]
# Last shablona commit
origin_commit = 'd72caaf5933907ed699d57faddaec7bfc836ce6f'
@@ -33,15 +33,23 @@
with TemporaryDirectory() as tmpdir:
tmppath = Path(tmpdir)
for repo in codependents:
+ try:
+ repo, ref = repo
+ except (TypeError, ValueError):
+ ref = None
repo_dir = str(tmppath / repo.rsplit('/', 1)[1].split('.', 1)[0])
try:
- clone = run(['git', 'clone', repo, repo_dir], check=True)
+ clone = run(['git', 'clone', '-q', repo, repo_dir], check=True)
except CalledProcessError as err:
raise RuntimeError("Could not clone {}".format(repo)) from err
- tag = run(['git', '-C', repo_dir, 'tag'], stdout=PIPE)
- latest_tag = tag.stdout.decode().strip().rsplit('\n', 1)[1]
+
+ if ref is None:
+ tag = run(['git', '-C', repo_dir, 'tag'], stdout=PIPE)
+ # latest tag
+ ref = tag.stdout.decode().strip().rsplit('\n', 1)[1]
+
dep_shortlog = run(
- ['git', '-C', repo_dir, 'shortlog', '-ns', latest_tag],
+ ['git', '-C', repo_dir, 'shortlog', '-ns', ref],
stdout=PIPE)
counts.extend(line.split('\t', 1)[::-1]
for line in dep_shortlog.stdout.decode().split('\n')
| {"golden_diff": "diff --git a/tools/prep_zenodo.py b/tools/prep_zenodo.py\n--- a/tools/prep_zenodo.py\n+++ b/tools/prep_zenodo.py\n@@ -11,7 +11,7 @@\n \n \n # List of repositories whose commits should be counted as contributions\n-codependents = ['https://github.com/grabbles/grabbit.git']\n+codependents = [('https://github.com/grabbles/grabbit.git', '0.2.6')]\n \n # Last shablona commit\n origin_commit = 'd72caaf5933907ed699d57faddaec7bfc836ce6f'\n@@ -33,15 +33,23 @@\n with TemporaryDirectory() as tmpdir:\n tmppath = Path(tmpdir)\n for repo in codependents:\n+ try:\n+ repo, ref = repo\n+ except (TypeError, ValueError):\n+ ref = None\n repo_dir = str(tmppath / repo.rsplit('/', 1)[1].split('.', 1)[0])\n try:\n- clone = run(['git', 'clone', repo, repo_dir], check=True)\n+ clone = run(['git', 'clone', '-q', repo, repo_dir], check=True)\n except CalledProcessError as err:\n raise RuntimeError(\"Could not clone {}\".format(repo)) from err\n- tag = run(['git', '-C', repo_dir, 'tag'], stdout=PIPE)\n- latest_tag = tag.stdout.decode().strip().rsplit('\\n', 1)[1]\n+\n+ if ref is None:\n+ tag = run(['git', '-C', repo_dir, 'tag'], stdout=PIPE)\n+ # latest tag\n+ ref = tag.stdout.decode().strip().rsplit('\\n', 1)[1]\n+\n dep_shortlog = run(\n- ['git', '-C', repo_dir, 'shortlog', '-ns', latest_tag],\n+ ['git', '-C', repo_dir, 'shortlog', '-ns', ref],\n stdout=PIPE)\n counts.extend(line.split('\\t', 1)[::-1]\n for line in dep_shortlog.stdout.decode().split('\\n')\n", "issue": "Update prep_zenodo.py to only count commits in grabbit up to 0.2.6\nWith #369, we're dropping the grabbit dependency, so changes there will no longer contribute to pybids.\n", "before_files": [{"content": "#!/usr/bin/env python3\nimport git\nimport json\nfrom subprocess import run, PIPE, CalledProcessError\nfrom pathlib import Path\nfrom tempfile import TemporaryDirectory\n\n\ndef decommify(name):\n return ' '.join(name.split(', ')[::-1])\n\n\n# List of repositories whose commits should be counted as contributions\ncodependents = ['https://github.com/grabbles/grabbit.git']\n\n# Last shablona commit\norigin_commit = 'd72caaf5933907ed699d57faddaec7bfc836ce6f'\n\ngit_root = Path(git.Repo('.', search_parent_directories=True).working_dir)\nzenodo_file = git_root / '.zenodo.json'\n\nzenodo = json.loads(zenodo_file.read_text()) if zenodo_file.exists() else {}\n\norig_creators = zenodo.get('creators', [])\ncreator_map = {decommify(creator['name']): creator\n for creator in orig_creators}\n\nshortlog = run(['git', 'shortlog', '-ns', f'{origin_commit}..'], stdout=PIPE)\ncounts = [line.split('\\t', 1)[::-1]\n for line in shortlog.stdout.decode().split('\\n') if line]\n\n# Get additional commit counts from dependencies\nwith TemporaryDirectory() as tmpdir:\n tmppath = Path(tmpdir)\n for repo in codependents:\n repo_dir = str(tmppath / repo.rsplit('/', 1)[1].split('.', 1)[0])\n try:\n clone = run(['git', 'clone', repo, repo_dir], check=True)\n except CalledProcessError as err:\n raise RuntimeError(\"Could not clone {}\".format(repo)) from err\n tag = run(['git', '-C', repo_dir, 'tag'], stdout=PIPE)\n latest_tag = tag.stdout.decode().strip().rsplit('\\n', 1)[1]\n dep_shortlog = run(\n ['git', '-C', repo_dir, 'shortlog', '-ns', latest_tag],\n stdout=PIPE)\n counts.extend(line.split('\\t', 1)[::-1]\n for line in dep_shortlog.stdout.decode().split('\\n')\n if line)\n\ncommit_counts = {}\nfor committer, commits in counts:\n commit_counts[committer] = commit_counts.get(committer, 0) + int(commits)\n\n# Stable sort:\n# Number of commits in reverse 
order\n# Ties broken by alphabetical order of first name\ncommitters = [committer\n for committer, _ in sorted(commit_counts.items(),\n key=lambda x: (-x[1], x[0]))]\n\n# Tal to the top\nfirst_author = 'Tal Yarkoni'\nif committers[0] != first_author:\n committers.remove(first_author)\n committers.insert(0, first_author)\n\ncreators = [\n creator_map.get(committer, {'name': committer})\n for committer in committers\n ]\n\nzenodo['creators'] = creators\nzenodo_file.write_text(json.dumps(zenodo, indent=2, sort_keys=True) + '\\n')\n", "path": "tools/prep_zenodo.py"}]} | 1,403 | 485 |
gh_patches_debug_33722 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-1225 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
redis-py-cluster new API 2.0.0
### Which version of dd-trace-py are you using?
lastest 0.34.0
### Which version of the libraries are you using?
redis-py-cluster 2.0.0
### How can we reproduce your problem?
Change the redis-py-cluster version in https://github.com/DataDog/dd-trace-py/blob/master/tox.ini
### What is the result that you get?
ERROR. It's no longer StrictRedisCluster but just RedisCluster.
### What is the result that you expected?
Moving to the new API.
</issue>
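A hedged sketch of one way to bridge the rename, assuming the attribute names implied by the 1.x-to-2.0.0 change rather than the shipped integration: detect the installed version and pick the class to instrument accordingly.

```
# Sketch only: version detection and class names are assumptions based on the
# 1.x -> 2.0.0 rename (StrictRedisCluster -> RedisCluster).
import rediscluster

# 1.x exposes __version__ as a tuple; 2.x also ships a VERSION tuple.
REDISCLUSTER_VERSION = getattr(rediscluster, "VERSION", rediscluster.__version__)

if REDISCLUSTER_VERSION >= (2, 0, 0):
    cluster_cls = rediscluster.RedisCluster          # new 2.x name
    pipeline_cls = rediscluster.ClusterPipeline
else:
    cluster_cls = rediscluster.StrictRedisCluster    # legacy 1.x name
    pipeline_cls = rediscluster.StrictClusterPipeline

# The tracer would then wrap cluster_cls.execute_command, cluster_cls.pipeline
# and pipeline_cls.execute just as the module below does for the 1.x names.
```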
<code>
[start of ddtrace/contrib/rediscluster/patch.py]
1 # 3p
2 import rediscluster
3 from ddtrace.vendor import wrapt
4
5 # project
6 from ddtrace import config
7 from ...constants import ANALYTICS_SAMPLE_RATE_KEY, SPAN_MEASURED_KEY
8 from ...pin import Pin
9 from ...ext import SpanTypes, redis as redisx
10 from ...utils.wrappers import unwrap
11 from ..redis.patch import traced_execute_command, traced_pipeline
12 from ..redis.util import format_command_args
13
14
15 def patch():
16 """Patch the instrumented methods
17 """
18 if getattr(rediscluster, '_datadog_patch', False):
19 return
20 setattr(rediscluster, '_datadog_patch', True)
21
22 _w = wrapt.wrap_function_wrapper
23 _w('rediscluster', 'StrictRedisCluster.execute_command', traced_execute_command)
24 _w('rediscluster', 'StrictRedisCluster.pipeline', traced_pipeline)
25 _w('rediscluster', 'StrictClusterPipeline.execute', traced_execute_pipeline)
26 Pin(service=redisx.DEFAULT_SERVICE, app=redisx.APP).onto(rediscluster.StrictRedisCluster)
27
28
29 def unpatch():
30 if getattr(rediscluster, '_datadog_patch', False):
31 setattr(rediscluster, '_datadog_patch', False)
32 unwrap(rediscluster.StrictRedisCluster, 'execute_command')
33 unwrap(rediscluster.StrictRedisCluster, 'pipeline')
34 unwrap(rediscluster.StrictClusterPipeline, 'execute')
35
36
37 #
38 # tracing functions
39 #
40
41 def traced_execute_pipeline(func, instance, args, kwargs):
42 pin = Pin.get_from(instance)
43 if not pin or not pin.enabled():
44 return func(*args, **kwargs)
45
46 cmds = [format_command_args(c.args) for c in instance.command_stack]
47 resource = '\n'.join(cmds)
48 tracer = pin.tracer
49 with tracer.trace(redisx.CMD, resource=resource, service=pin.service, span_type=SpanTypes.REDIS) as s:
50 s.set_tag(SPAN_MEASURED_KEY)
51 s.set_tag(redisx.RAWCMD, resource)
52 s.set_metric(redisx.PIPELINE_LEN, len(instance.command_stack))
53
54 # set analytics sample rate if enabled
55 s.set_tag(
56 ANALYTICS_SAMPLE_RATE_KEY,
57 config.rediscluster.get_analytics_sample_rate()
58 )
59
60 return func(*args, **kwargs)
61
[end of ddtrace/contrib/rediscluster/patch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ddtrace/contrib/rediscluster/patch.py b/ddtrace/contrib/rediscluster/patch.py
--- a/ddtrace/contrib/rediscluster/patch.py
+++ b/ddtrace/contrib/rediscluster/patch.py
@@ -12,6 +12,11 @@
from ..redis.util import format_command_args
+# DEV: In `2.0.0` `__version__` is a string and `VERSION` is a tuple,
+# but in `1.x.x` `__version__` is a tuple annd `VERSION` does not exist
+REDISCLUSTER_VERSION = getattr(rediscluster, 'VERSION', rediscluster.__version__)
+
+
def patch():
"""Patch the instrumented methods
"""
@@ -20,18 +25,30 @@
setattr(rediscluster, '_datadog_patch', True)
_w = wrapt.wrap_function_wrapper
- _w('rediscluster', 'StrictRedisCluster.execute_command', traced_execute_command)
- _w('rediscluster', 'StrictRedisCluster.pipeline', traced_pipeline)
- _w('rediscluster', 'StrictClusterPipeline.execute', traced_execute_pipeline)
- Pin(service=redisx.DEFAULT_SERVICE, app=redisx.APP).onto(rediscluster.StrictRedisCluster)
+ if REDISCLUSTER_VERSION >= (2, 0, 0):
+ _w('rediscluster', 'RedisCluster.execute_command', traced_execute_command)
+ _w('rediscluster', 'RedisCluster.pipeline', traced_pipeline)
+ _w('rediscluster', 'ClusterPipeline.execute', traced_execute_pipeline)
+ Pin(service=redisx.DEFAULT_SERVICE, app=redisx.APP).onto(rediscluster.RedisCluster)
+ else:
+ _w('rediscluster', 'StrictRedisCluster.execute_command', traced_execute_command)
+ _w('rediscluster', 'StrictRedisCluster.pipeline', traced_pipeline)
+ _w('rediscluster', 'StrictClusterPipeline.execute', traced_execute_pipeline)
+ Pin(service=redisx.DEFAULT_SERVICE, app=redisx.APP).onto(rediscluster.StrictRedisCluster)
def unpatch():
if getattr(rediscluster, '_datadog_patch', False):
setattr(rediscluster, '_datadog_patch', False)
- unwrap(rediscluster.StrictRedisCluster, 'execute_command')
- unwrap(rediscluster.StrictRedisCluster, 'pipeline')
- unwrap(rediscluster.StrictClusterPipeline, 'execute')
+
+ if REDISCLUSTER_VERSION >= (2, 0, 0):
+ unwrap(rediscluster.RedisCluster, 'execute_command')
+ unwrap(rediscluster.RedisCluster, 'pipeline')
+ unwrap(rediscluster.ClusterPipeline, 'execute')
+ else:
+ unwrap(rediscluster.StrictRedisCluster, 'execute_command')
+ unwrap(rediscluster.StrictRedisCluster, 'pipeline')
+ unwrap(rediscluster.StrictClusterPipeline, 'execute')
#
| {"golden_diff": "diff --git a/ddtrace/contrib/rediscluster/patch.py b/ddtrace/contrib/rediscluster/patch.py\n--- a/ddtrace/contrib/rediscluster/patch.py\n+++ b/ddtrace/contrib/rediscluster/patch.py\n@@ -12,6 +12,11 @@\n from ..redis.util import format_command_args\n \n \n+# DEV: In `2.0.0` `__version__` is a string and `VERSION` is a tuple,\n+# but in `1.x.x` `__version__` is a tuple annd `VERSION` does not exist\n+REDISCLUSTER_VERSION = getattr(rediscluster, 'VERSION', rediscluster.__version__)\n+\n+\n def patch():\n \"\"\"Patch the instrumented methods\n \"\"\"\n@@ -20,18 +25,30 @@\n setattr(rediscluster, '_datadog_patch', True)\n \n _w = wrapt.wrap_function_wrapper\n- _w('rediscluster', 'StrictRedisCluster.execute_command', traced_execute_command)\n- _w('rediscluster', 'StrictRedisCluster.pipeline', traced_pipeline)\n- _w('rediscluster', 'StrictClusterPipeline.execute', traced_execute_pipeline)\n- Pin(service=redisx.DEFAULT_SERVICE, app=redisx.APP).onto(rediscluster.StrictRedisCluster)\n+ if REDISCLUSTER_VERSION >= (2, 0, 0):\n+ _w('rediscluster', 'RedisCluster.execute_command', traced_execute_command)\n+ _w('rediscluster', 'RedisCluster.pipeline', traced_pipeline)\n+ _w('rediscluster', 'ClusterPipeline.execute', traced_execute_pipeline)\n+ Pin(service=redisx.DEFAULT_SERVICE, app=redisx.APP).onto(rediscluster.RedisCluster)\n+ else:\n+ _w('rediscluster', 'StrictRedisCluster.execute_command', traced_execute_command)\n+ _w('rediscluster', 'StrictRedisCluster.pipeline', traced_pipeline)\n+ _w('rediscluster', 'StrictClusterPipeline.execute', traced_execute_pipeline)\n+ Pin(service=redisx.DEFAULT_SERVICE, app=redisx.APP).onto(rediscluster.StrictRedisCluster)\n \n \n def unpatch():\n if getattr(rediscluster, '_datadog_patch', False):\n setattr(rediscluster, '_datadog_patch', False)\n- unwrap(rediscluster.StrictRedisCluster, 'execute_command')\n- unwrap(rediscluster.StrictRedisCluster, 'pipeline')\n- unwrap(rediscluster.StrictClusterPipeline, 'execute')\n+\n+ if REDISCLUSTER_VERSION >= (2, 0, 0):\n+ unwrap(rediscluster.RedisCluster, 'execute_command')\n+ unwrap(rediscluster.RedisCluster, 'pipeline')\n+ unwrap(rediscluster.ClusterPipeline, 'execute')\n+ else:\n+ unwrap(rediscluster.StrictRedisCluster, 'execute_command')\n+ unwrap(rediscluster.StrictRedisCluster, 'pipeline')\n+ unwrap(rediscluster.StrictClusterPipeline, 'execute')\n \n \n #\n", "issue": "redis-py-cluster new API 2.0.0\n### Which version of dd-trace-py are you using?\r\n lastest 0.34.0\r\n\r\n### Which version of the libraries are you using?\r\n\r\nredis-py-cluster 2.0.0\r\n\r\n### How can we reproduce your problem?\r\n\r\nchange https://github.com/DataDog/dd-trace-py/blob/master/tox.ini redis-py-cluster version\r\n\r\n### What is the result that you get?\r\n\r\nERROR. 
It's no more StrictRedisCluster but just RedisCluster\r\n\r\n### What is result that you expected?\r\n\r\nmoving to new api\r\n\n", "before_files": [{"content": "# 3p\nimport rediscluster\nfrom ddtrace.vendor import wrapt\n\n# project\nfrom ddtrace import config\nfrom ...constants import ANALYTICS_SAMPLE_RATE_KEY, SPAN_MEASURED_KEY\nfrom ...pin import Pin\nfrom ...ext import SpanTypes, redis as redisx\nfrom ...utils.wrappers import unwrap\nfrom ..redis.patch import traced_execute_command, traced_pipeline\nfrom ..redis.util import format_command_args\n\n\ndef patch():\n \"\"\"Patch the instrumented methods\n \"\"\"\n if getattr(rediscluster, '_datadog_patch', False):\n return\n setattr(rediscluster, '_datadog_patch', True)\n\n _w = wrapt.wrap_function_wrapper\n _w('rediscluster', 'StrictRedisCluster.execute_command', traced_execute_command)\n _w('rediscluster', 'StrictRedisCluster.pipeline', traced_pipeline)\n _w('rediscluster', 'StrictClusterPipeline.execute', traced_execute_pipeline)\n Pin(service=redisx.DEFAULT_SERVICE, app=redisx.APP).onto(rediscluster.StrictRedisCluster)\n\n\ndef unpatch():\n if getattr(rediscluster, '_datadog_patch', False):\n setattr(rediscluster, '_datadog_patch', False)\n unwrap(rediscluster.StrictRedisCluster, 'execute_command')\n unwrap(rediscluster.StrictRedisCluster, 'pipeline')\n unwrap(rediscluster.StrictClusterPipeline, 'execute')\n\n\n#\n# tracing functions\n#\n\ndef traced_execute_pipeline(func, instance, args, kwargs):\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n return func(*args, **kwargs)\n\n cmds = [format_command_args(c.args) for c in instance.command_stack]\n resource = '\\n'.join(cmds)\n tracer = pin.tracer\n with tracer.trace(redisx.CMD, resource=resource, service=pin.service, span_type=SpanTypes.REDIS) as s:\n s.set_tag(SPAN_MEASURED_KEY)\n s.set_tag(redisx.RAWCMD, resource)\n s.set_metric(redisx.PIPELINE_LEN, len(instance.command_stack))\n\n # set analytics sample rate if enabled\n s.set_tag(\n ANALYTICS_SAMPLE_RATE_KEY,\n config.rediscluster.get_analytics_sample_rate()\n )\n\n return func(*args, **kwargs)\n", "path": "ddtrace/contrib/rediscluster/patch.py"}]} | 1,272 | 614 |
gh_patches_debug_34609 | rasdani/github-patches | git_diff | microsoft__botbuilder-python-285 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add support for Message Reactions to ActivityHandler
ActivityHandler should be extended to include MessageReactions. This has now been added to the C# and JavaScript SDKs.
Here is a pointer to the JavaScript implementation:
https://github.com/microsoft/botbuilder-js/pull/1038
</issue>
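A hedged sketch of what the extension could look like, modelled loosely on the linked JavaScript change; the method names and dispatch details are assumptions, not the merged API. Message-reaction activities get their own handler, which fans out to overridable added/removed hooks.

```
# Sketch only: names and signatures are assumed from the other SDKs.
from typing import List

from botbuilder.core import TurnContext
from botbuilder.schema import MessageReaction


class ReactionHandlerSketch:
    async def on_message_reaction_activity(self, turn_context: TurnContext):
        # Fan the activity out to the two overridable hooks below.
        if turn_context.activity.reactions_added is not None:
            await self.on_reactions_added(
                turn_context.activity.reactions_added, turn_context
            )
        if turn_context.activity.reactions_removed is not None:
            await self.on_reactions_removed(
                turn_context.activity.reactions_removed, turn_context
            )

    async def on_reactions_added(
        self, message_reactions: List[MessageReaction], turn_context: TurnContext
    ):
        return

    async def on_reactions_removed(
        self, message_reactions: List[MessageReaction], turn_context: TurnContext
    ):
        return
```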
<code>
[start of libraries/botbuilder-core/botbuilder/core/activity_handler.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from botbuilder.schema import ActivityTypes, ChannelAccount
5 from .turn_context import TurnContext
6
7
8 class ActivityHandler:
9 async def on_turn(self, turn_context: TurnContext):
10 if turn_context is None:
11 raise TypeError("ActivityHandler.on_turn(): turn_context cannot be None.")
12
13 if hasattr(turn_context, "activity") and turn_context.activity is None:
14 raise TypeError(
15 "ActivityHandler.on_turn(): turn_context must have a non-None activity."
16 )
17
18 if (
19 hasattr(turn_context.activity, "type")
20 and turn_context.activity.type is None
21 ):
22 raise TypeError(
23 "ActivityHandler.on_turn(): turn_context activity must have a non-None type."
24 )
25
26 if turn_context.activity.type == ActivityTypes.message:
27 await self.on_message_activity(turn_context)
28 elif turn_context.activity.type == ActivityTypes.conversation_update:
29 await self.on_conversation_update_activity(turn_context)
30 elif turn_context.activity.type == ActivityTypes.event:
31 await self.on_event_activity(turn_context)
32 else:
33 await self.on_unrecognized_activity_type(turn_context)
34
35 async def on_message_activity( # pylint: disable=unused-argument
36 self, turn_context: TurnContext
37 ):
38 return
39
40 async def on_conversation_update_activity(self, turn_context: TurnContext):
41 if (
42 turn_context.activity.members_added is not None
43 and turn_context.activity.members_added
44 ):
45 return await self.on_members_added_activity(
46 turn_context.activity.members_added, turn_context
47 )
48 if (
49 turn_context.activity.members_removed is not None
50 and turn_context.activity.members_removed
51 ):
52 return await self.on_members_removed_activity(
53 turn_context.activity.members_removed, turn_context
54 )
55 return
56
57 async def on_members_added_activity(
58 self, members_added: ChannelAccount, turn_context: TurnContext
59 ): # pylint: disable=unused-argument
60 return
61
62 async def on_members_removed_activity(
63 self, members_removed: ChannelAccount, turn_context: TurnContext
64 ): # pylint: disable=unused-argument
65 return
66
67 async def on_event_activity(self, turn_context: TurnContext):
68 if turn_context.activity.name == "tokens/response":
69 return await self.on_token_response_event(turn_context)
70
71 return await self.on_event(turn_context)
72
73 async def on_token_response_event( # pylint: disable=unused-argument
74 self, turn_context: TurnContext
75 ):
76 return
77
78 async def on_event( # pylint: disable=unused-argument
79 self, turn_context: TurnContext
80 ):
81 return
82
83 async def on_unrecognized_activity_type( # pylint: disable=unused-argument
84 self, turn_context: TurnContext
85 ):
86 return
87
[end of libraries/botbuilder-core/botbuilder/core/activity_handler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/libraries/botbuilder-core/botbuilder/core/activity_handler.py b/libraries/botbuilder-core/botbuilder/core/activity_handler.py
--- a/libraries/botbuilder-core/botbuilder/core/activity_handler.py
+++ b/libraries/botbuilder-core/botbuilder/core/activity_handler.py
@@ -1,7 +1,8 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
+from typing import List
-from botbuilder.schema import ActivityTypes, ChannelAccount
+from botbuilder.schema import ActivityTypes, ChannelAccount, MessageReaction
from .turn_context import TurnContext
@@ -27,6 +28,8 @@
await self.on_message_activity(turn_context)
elif turn_context.activity.type == ActivityTypes.conversation_update:
await self.on_conversation_update_activity(turn_context)
+ elif turn_context.activity.type == ActivityTypes.message_reaction:
+ await self.on_message_reaction_activity(turn_context)
elif turn_context.activity.type == ActivityTypes.event:
await self.on_event_activity(turn_context)
else:
@@ -64,6 +67,27 @@
): # pylint: disable=unused-argument
return
+ async def on_message_reaction_activity(self, turn_context: TurnContext):
+ if turn_context.activity.reactions_added is not None:
+ await self.on_reactions_added(
+ turn_context.activity.reactions_added, turn_context
+ )
+
+ if turn_context.activity.reactions_removed is not None:
+ await self.on_reactions_removed(
+ turn_context.activity.reactions_removed, turn_context
+ )
+
+ async def on_reactions_added( # pylint: disable=unused-argument
+ self, message_reactions: List[MessageReaction], turn_context: TurnContext
+ ):
+ return
+
+ async def on_reactions_removed( # pylint: disable=unused-argument
+ self, message_reactions: List[MessageReaction], turn_context: TurnContext
+ ):
+ return
+
async def on_event_activity(self, turn_context: TurnContext):
if turn_context.activity.name == "tokens/response":
return await self.on_token_response_event(turn_context)
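For context, a minimal sketch of how a bot might consume the hooks added by this patch. The bot class name and reply strings are illustrative (not part of the patch), and the imports assume the usual `botbuilder.core` / `botbuilder.schema` exports:

```python
from typing import List

from botbuilder.core import ActivityHandler, TurnContext
from botbuilder.schema import MessageReaction


class ReactionBot(ActivityHandler):
    # Dispatched from on_message_reaction_activity for reactions added to a prior message.
    async def on_reactions_added(
        self, message_reactions: List[MessageReaction], turn_context: TurnContext
    ):
        for reaction in message_reactions:
            await turn_context.send_activity(f"Saw a '{reaction.type}' reaction being added.")

    # Dispatched from on_message_reaction_activity for reactions that were removed.
    async def on_reactions_removed(
        self, message_reactions: List[MessageReaction], turn_context: TurnContext
    ):
        for reaction in message_reactions:
            await turn_context.send_activity(f"Saw a '{reaction.type}' reaction being removed.")
```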
| {"golden_diff": "diff --git a/libraries/botbuilder-core/botbuilder/core/activity_handler.py b/libraries/botbuilder-core/botbuilder/core/activity_handler.py\n--- a/libraries/botbuilder-core/botbuilder/core/activity_handler.py\n+++ b/libraries/botbuilder-core/botbuilder/core/activity_handler.py\n@@ -1,7 +1,8 @@\n # Copyright (c) Microsoft Corporation. All rights reserved.\n # Licensed under the MIT License.\n+from typing import List\n \n-from botbuilder.schema import ActivityTypes, ChannelAccount\n+from botbuilder.schema import ActivityTypes, ChannelAccount, MessageReaction\n from .turn_context import TurnContext\n \n \n@@ -27,6 +28,8 @@\n await self.on_message_activity(turn_context)\n elif turn_context.activity.type == ActivityTypes.conversation_update:\n await self.on_conversation_update_activity(turn_context)\n+ elif turn_context.activity.type == ActivityTypes.message_reaction:\n+ await self.on_message_reaction_activity(turn_context)\n elif turn_context.activity.type == ActivityTypes.event:\n await self.on_event_activity(turn_context)\n else:\n@@ -64,6 +67,27 @@\n ): # pylint: disable=unused-argument\n return\n \n+ async def on_message_reaction_activity(self, turn_context: TurnContext):\n+ if turn_context.activity.reactions_added is not None:\n+ await self.on_reactions_added(\n+ turn_context.activity.reactions_added, turn_context\n+ )\n+\n+ if turn_context.activity.reactions_removed is not None:\n+ await self.on_reactions_removed(\n+ turn_context.activity.reactions_removed, turn_context\n+ )\n+\n+ async def on_reactions_added( # pylint: disable=unused-argument\n+ self, message_reactions: List[MessageReaction], turn_context: TurnContext\n+ ):\n+ return\n+\n+ async def on_reactions_removed( # pylint: disable=unused-argument\n+ self, message_reactions: List[MessageReaction], turn_context: TurnContext\n+ ):\n+ return\n+\n async def on_event_activity(self, turn_context: TurnContext):\n if turn_context.activity.name == \"tokens/response\":\n return await self.on_token_response_event(turn_context)\n", "issue": "Add support for Message Reactions to ActivityHandler \nActivityHandler should be extended to include MessageReactions. This has now been added to the C# and The JavaScript.\r\n\r\nHere is a pointer to the JavaScript implementation:\r\n\r\nhttps://github.com/microsoft/botbuilder-js/pull/1038\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\n\nfrom botbuilder.schema import ActivityTypes, ChannelAccount\nfrom .turn_context import TurnContext\n\n\nclass ActivityHandler:\n async def on_turn(self, turn_context: TurnContext):\n if turn_context is None:\n raise TypeError(\"ActivityHandler.on_turn(): turn_context cannot be None.\")\n\n if hasattr(turn_context, \"activity\") and turn_context.activity is None:\n raise TypeError(\n \"ActivityHandler.on_turn(): turn_context must have a non-None activity.\"\n )\n\n if (\n hasattr(turn_context.activity, \"type\")\n and turn_context.activity.type is None\n ):\n raise TypeError(\n \"ActivityHandler.on_turn(): turn_context activity must have a non-None type.\"\n )\n\n if turn_context.activity.type == ActivityTypes.message:\n await self.on_message_activity(turn_context)\n elif turn_context.activity.type == ActivityTypes.conversation_update:\n await self.on_conversation_update_activity(turn_context)\n elif turn_context.activity.type == ActivityTypes.event:\n await self.on_event_activity(turn_context)\n else:\n await self.on_unrecognized_activity_type(turn_context)\n\n async def on_message_activity( # pylint: disable=unused-argument\n self, turn_context: TurnContext\n ):\n return\n\n async def on_conversation_update_activity(self, turn_context: TurnContext):\n if (\n turn_context.activity.members_added is not None\n and turn_context.activity.members_added\n ):\n return await self.on_members_added_activity(\n turn_context.activity.members_added, turn_context\n )\n if (\n turn_context.activity.members_removed is not None\n and turn_context.activity.members_removed\n ):\n return await self.on_members_removed_activity(\n turn_context.activity.members_removed, turn_context\n )\n return\n\n async def on_members_added_activity(\n self, members_added: ChannelAccount, turn_context: TurnContext\n ): # pylint: disable=unused-argument\n return\n\n async def on_members_removed_activity(\n self, members_removed: ChannelAccount, turn_context: TurnContext\n ): # pylint: disable=unused-argument\n return\n\n async def on_event_activity(self, turn_context: TurnContext):\n if turn_context.activity.name == \"tokens/response\":\n return await self.on_token_response_event(turn_context)\n\n return await self.on_event(turn_context)\n\n async def on_token_response_event( # pylint: disable=unused-argument\n self, turn_context: TurnContext\n ):\n return\n\n async def on_event( # pylint: disable=unused-argument\n self, turn_context: TurnContext\n ):\n return\n\n async def on_unrecognized_activity_type( # pylint: disable=unused-argument\n self, turn_context: TurnContext\n ):\n return\n", "path": "libraries/botbuilder-core/botbuilder/core/activity_handler.py"}]} | 1,387 | 475 |
gh_patches_debug_4811 | rasdani/github-patches | git_diff | pytorch__text-254 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use getattr rather than __dict__ in Batch (adds support for __slots__ in Example subclasses)
This is a proposal to change [one line of code](https://github.com/pytorch/text/blob/c839a7934930819be7e240ea972e4d600966afdc/torchtext/data/batch.py#L27) in Batch.py
I suggest `[x.__dict__[name] for x in data]` should become `[getattr(x, name) for x in data]`
A major advantage of doing this is compatibility with `__slots__`. A class that is going to be instantiated for every data point is an ideal use case for `__slots__`, which reduces per-instance memory overhead. It makes sense for specific projects to subclass Example using `__slots__` with the known fields of the project. If you do, the instances will have empty `__dict__`s, but the slots can still be accessed via `getattr`.
I don't _think_ this change would break anything...
</issue>
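To make the proposal concrete, here is a small self-contained sketch; the class names are illustrative stand-ins, not torchtext's real `Example`:

```python
class Example:                     # stand-in for torchtext.data.Example (no __slots__)
    pass


class SlottedExample(Example):     # a project-specific subclass that opts into __slots__
    __slots__ = ("text", "label")

    def __init__(self, text, label):
        self.text = text
        self.label = label


ex = SlottedExample(["some", "tokens"], "positive")
print(getattr(ex, "text"))   # ['some', 'tokens'] -- slot values are found by attribute lookup
print(ex.__dict__)           # {} -- nothing ever lands in the instance dict
# ex.__dict__["text"] would raise KeyError, which is exactly what the proposed
# change to Batch avoids by using getattr(x, name) instead.
```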
<code>
[start of torchtext/data/batch.py]
1 from torch import typename
2 from torch.tensor import _TensorBase
3
4
5 class Batch(object):
6 """Defines a batch of examples along with its Fields.
7
8 Attributes:
9 batch_size: Number of examples in the batch.
10 dataset: A reference to the dataset object the examples come from
11 (which itself contains the dataset's Field objects).
12 train: Whether the batch is from a training set.
13
14 Also stores the Variable for each column in the batch as an attribute.
15 """
16
17 def __init__(self, data=None, dataset=None, device=None, train=True):
18 """Create a Batch from a list of examples."""
19 if data is not None:
20 self.batch_size = len(data)
21 self.dataset = dataset
22 self.train = train
23 self.fields = dataset.fields.keys() # copy field names
24
25 for (name, field) in dataset.fields.items():
26 if field is not None:
27 batch = [x.__dict__[name] for x in data]
28 setattr(self, name, field.process(batch, device=device, train=train))
29
30 @classmethod
31 def fromvars(cls, dataset, batch_size, train=True, **kwargs):
32 """Create a Batch directly from a number of Variables."""
33 batch = cls()
34 batch.batch_size = batch_size
35 batch.dataset = dataset
36 batch.train = train
37 for k, v in kwargs.items():
38 setattr(batch, k, v)
39 return batch
40
41 def __repr__(self):
42 return str(self)
43
44 def __str__(self):
45 if not self.__dict__:
46 return 'Empty {} instance'.format(typename(self))
47
48 var_strs = '\n'.join(['\t[.' + name + ']' + ":" + _short_str(getattr(self, name))
49 for name in self.fields if hasattr(self, name)])
50
51 data_str = (' from {}'.format(self.dataset.name.upper())
52 if hasattr(self.dataset, 'name') and
53 isinstance(self.dataset.name, str) else '')
54
55 strt = '[{} of size {}{}]\n{}'.format(typename(self),
56 self.batch_size, data_str, var_strs)
57 return '\n' + strt
58
59 def __len__(self):
60 return self.batch_size
61
62
63 def _short_str(tensor):
64 # unwrap variable to tensor
65 if hasattr(tensor, 'data'):
66 tensor = tensor.data
67
68 # fallback in case of wrong argument type
69 if issubclass(type(tensor), _TensorBase) is False:
70 return str(tensor)
71
72 # copied from torch _tensor_str
73 size_str = 'x'.join(str(size) for size in tensor.size())
74 device_str = '' if not tensor.is_cuda else \
75 ' (GPU {})'.format(tensor.get_device())
76 strt = '[{} of size {}{}]'.format(typename(tensor),
77 size_str, device_str)
78 return strt
79
[end of torchtext/data/batch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchtext/data/batch.py b/torchtext/data/batch.py
--- a/torchtext/data/batch.py
+++ b/torchtext/data/batch.py
@@ -24,7 +24,7 @@
for (name, field) in dataset.fields.items():
if field is not None:
- batch = [x.__dict__[name] for x in data]
+ batch = [getattr(x, name) for x in data]
setattr(self, name, field.process(batch, device=device, train=train))
@classmethod
| {"golden_diff": "diff --git a/torchtext/data/batch.py b/torchtext/data/batch.py\n--- a/torchtext/data/batch.py\n+++ b/torchtext/data/batch.py\n@@ -24,7 +24,7 @@\n \n for (name, field) in dataset.fields.items():\n if field is not None:\n- batch = [x.__dict__[name] for x in data]\n+ batch = [getattr(x, name) for x in data]\n setattr(self, name, field.process(batch, device=device, train=train))\n \n @classmethod\n", "issue": "Use getattr rather than __dict__ in Batch (adds support for __slots__ in Example subclasses)\nThis is a proposal to change [one line of code](https://github.com/pytorch/text/blob/c839a7934930819be7e240ea972e4d600966afdc/torchtext/data/batch.py#L27) in Batch.py\r\n\r\nI suggest `[x.__dict__[name] for x in data]` should become `[getattr(x, name) for x in data]`\r\n\r\nA major advantage to doing this is compatibility with `__slots__`. A class that is going to be instantiated for every data point is an ideal use-case for `__slots__`, which reduces per-instance memory overhead. It makes sense for specific projects to subclass Example using `__slots__` with the known fields of the project. If you do, the instances will have empty `__dicts__` but the slots can be accessed via `getattr`.\r\n\r\nI don't _think_ this change would break anything...\n", "before_files": [{"content": "from torch import typename\nfrom torch.tensor import _TensorBase\n\n\nclass Batch(object):\n \"\"\"Defines a batch of examples along with its Fields.\n\n Attributes:\n batch_size: Number of examples in the batch.\n dataset: A reference to the dataset object the examples come from\n (which itself contains the dataset's Field objects).\n train: Whether the batch is from a training set.\n\n Also stores the Variable for each column in the batch as an attribute.\n \"\"\"\n\n def __init__(self, data=None, dataset=None, device=None, train=True):\n \"\"\"Create a Batch from a list of examples.\"\"\"\n if data is not None:\n self.batch_size = len(data)\n self.dataset = dataset\n self.train = train\n self.fields = dataset.fields.keys() # copy field names\n\n for (name, field) in dataset.fields.items():\n if field is not None:\n batch = [x.__dict__[name] for x in data]\n setattr(self, name, field.process(batch, device=device, train=train))\n\n @classmethod\n def fromvars(cls, dataset, batch_size, train=True, **kwargs):\n \"\"\"Create a Batch directly from a number of Variables.\"\"\"\n batch = cls()\n batch.batch_size = batch_size\n batch.dataset = dataset\n batch.train = train\n for k, v in kwargs.items():\n setattr(batch, k, v)\n return batch\n\n def __repr__(self):\n return str(self)\n\n def __str__(self):\n if not self.__dict__:\n return 'Empty {} instance'.format(typename(self))\n\n var_strs = '\\n'.join(['\\t[.' 
+ name + ']' + \":\" + _short_str(getattr(self, name))\n for name in self.fields if hasattr(self, name)])\n\n data_str = (' from {}'.format(self.dataset.name.upper())\n if hasattr(self.dataset, 'name') and\n isinstance(self.dataset.name, str) else '')\n\n strt = '[{} of size {}{}]\\n{}'.format(typename(self),\n self.batch_size, data_str, var_strs)\n return '\\n' + strt\n\n def __len__(self):\n return self.batch_size\n\n\ndef _short_str(tensor):\n # unwrap variable to tensor\n if hasattr(tensor, 'data'):\n tensor = tensor.data\n\n # fallback in case of wrong argument type\n if issubclass(type(tensor), _TensorBase) is False:\n return str(tensor)\n\n # copied from torch _tensor_str\n size_str = 'x'.join(str(size) for size in tensor.size())\n device_str = '' if not tensor.is_cuda else \\\n ' (GPU {})'.format(tensor.get_device())\n strt = '[{} of size {}{}]'.format(typename(tensor),\n size_str, device_str)\n return strt\n", "path": "torchtext/data/batch.py"}]} | 1,525 | 125 |
gh_patches_debug_4743 | rasdani/github-patches | git_diff | netket__netket-1112 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Upgrade `flakehell` in the pre-commit hook
It seems that `flakehell` is not actively maintained, and it is incompatible with `flake8` 4.x, released last October (see flakehell/flakehell#22). That issue has not been resolved after several months, so if a new developer of NetKet runs `pre-commit install-hooks`, it will simply fail.
We may use [this fix](https://github.com/flakehell/flakehell/pull/23#issuecomment-985879201), or switch to [flakeheaven](https://github.com/flakeheaven/flakeheaven), which seems more actively maintained.
</issue>
<code>
[start of setup.py]
1 from setuptools import setup, find_packages
2
3 DEV_DEPENDENCIES = [
4 "pytest>=6",
5 "pytest-xdist>=2",
6 "coverage>=5",
7 "pytest-cov>=2.10.1",
8 "networkx~=2.4",
9 "flaky>=3.7",
10 "pre-commit",
11 "black==22.1.0",
12 "flakehell>=0.9",
13 ]
14 MPI_DEPENDENCIES = ["mpi4py>=3.0.1, <4", "mpi4jax~=0.3.1"]
15 EXTRA_DEPENDENCIES = ["tensorboardx>=2.0.0", "openfermion>=1.0.0"]
16 BASE_DEPENDENCIES = [
17 "numpy~=1.18",
18 "scipy>=1.5.3, <2",
19 "tqdm~=4.60",
20 "plum-dispatch~=1.5.1",
21 "numba>=0.52, <0.56",
22 "igraph~=0.9.8",
23 "jax>=0.2.23, <0.4",
24 "jaxlib>=0.1.69",
25 "flax>=0.3.5, <0.5",
26 "orjson~=3.4",
27 "optax>=0.1.1, <0.2",
28 "numba4jax>=0.0.3, <0.1",
29 ]
30
31 setup(
32 name="netket",
33 author="Giuseppe Carleo et al.",
34 url="http://github.com/netket/netket",
35 author_email="[email protected]",
36 license="Apache 2.0",
37 description="Netket : Machine Learning techniques for many-body quantum systems.",
38 long_description="""NetKet is an open-source project delivering cutting-edge
39 methods for the study of many-body quantum systems with artificial
40 neural networks and machine learning techniques.""",
41 classifiers=[
42 "Programming Language :: Python :: 3",
43 "Development Status :: 5 - Production/Stable",
44 "Intended Audience :: Science/Research",
45 "License :: OSI Approved :: Apache Software License",
46 "Operating System :: MacOS :: MacOS X",
47 "Operating System :: POSIX :: Linux",
48 "Operating System :: Unix",
49 "Topic :: Scientific/Engineering :: Physics",
50 ],
51 packages=find_packages(),
52 install_requires=BASE_DEPENDENCIES,
53 python_requires=">=3.7",
54 extras_require={
55 "dev": DEV_DEPENDENCIES,
56 "mpi": MPI_DEPENDENCIES,
57 "extra": EXTRA_DEPENDENCIES,
58 "all": MPI_DEPENDENCIES + DEV_DEPENDENCIES + EXTRA_DEPENDENCIES,
59 },
60 )
61
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -6,10 +6,9 @@
"coverage>=5",
"pytest-cov>=2.10.1",
"networkx~=2.4",
- "flaky>=3.7",
- "pre-commit",
+ "pre-commit>=2.7",
"black==22.1.0",
- "flakehell>=0.9",
+ "flake8==4.0.1",
]
MPI_DEPENDENCIES = ["mpi4py>=3.0.1, <4", "mpi4jax~=0.3.1"]
EXTRA_DEPENDENCIES = ["tensorboardx>=2.0.0", "openfermion>=1.0.0"]
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -6,10 +6,9 @@\n \"coverage>=5\",\n \"pytest-cov>=2.10.1\",\n \"networkx~=2.4\",\n- \"flaky>=3.7\",\n- \"pre-commit\",\n+ \"pre-commit>=2.7\",\n \"black==22.1.0\",\n- \"flakehell>=0.9\",\n+ \"flake8==4.0.1\",\n ]\n MPI_DEPENDENCIES = [\"mpi4py>=3.0.1, <4\", \"mpi4jax~=0.3.1\"]\n EXTRA_DEPENDENCIES = [\"tensorboardx>=2.0.0\", \"openfermion>=1.0.0\"]\n", "issue": "Upgrade `flakehell` in the pre-commit hook\nIt seems that `flakehell` is not actively maintained, and it is incompatible with `flake8 4.x` released in last October (see flakehell/flakehell#22). That issue is not resolved after a few months. If a new developer of NetKet runs `pre-commit install-hooks`, it will just fail.\r\n\r\nWe may use [this fix](https://github.com/flakehell/flakehell/pull/23#issuecomment-985879201), or change it to [flakeheaven](https://github.com/flakeheaven/flakeheaven) which seems more actively maintained.\n", "before_files": [{"content": "from setuptools import setup, find_packages\n\nDEV_DEPENDENCIES = [\n \"pytest>=6\",\n \"pytest-xdist>=2\",\n \"coverage>=5\",\n \"pytest-cov>=2.10.1\",\n \"networkx~=2.4\",\n \"flaky>=3.7\",\n \"pre-commit\",\n \"black==22.1.0\",\n \"flakehell>=0.9\",\n]\nMPI_DEPENDENCIES = [\"mpi4py>=3.0.1, <4\", \"mpi4jax~=0.3.1\"]\nEXTRA_DEPENDENCIES = [\"tensorboardx>=2.0.0\", \"openfermion>=1.0.0\"]\nBASE_DEPENDENCIES = [\n \"numpy~=1.18\",\n \"scipy>=1.5.3, <2\",\n \"tqdm~=4.60\",\n \"plum-dispatch~=1.5.1\",\n \"numba>=0.52, <0.56\",\n \"igraph~=0.9.8\",\n \"jax>=0.2.23, <0.4\",\n \"jaxlib>=0.1.69\",\n \"flax>=0.3.5, <0.5\",\n \"orjson~=3.4\",\n \"optax>=0.1.1, <0.2\",\n \"numba4jax>=0.0.3, <0.1\",\n]\n\nsetup(\n name=\"netket\",\n author=\"Giuseppe Carleo et al.\",\n url=\"http://github.com/netket/netket\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n description=\"Netket : Machine Learning techniques for many-body quantum systems.\",\n long_description=\"\"\"NetKet is an open-source project delivering cutting-edge\n methods for the study of many-body quantum systems with artificial\n neural networks and machine learning techniques.\"\"\",\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Unix\",\n \"Topic :: Scientific/Engineering :: Physics\",\n ],\n packages=find_packages(),\n install_requires=BASE_DEPENDENCIES,\n python_requires=\">=3.7\",\n extras_require={\n \"dev\": DEV_DEPENDENCIES,\n \"mpi\": MPI_DEPENDENCIES,\n \"extra\": EXTRA_DEPENDENCIES,\n \"all\": MPI_DEPENDENCIES + DEV_DEPENDENCIES + EXTRA_DEPENDENCIES,\n },\n)\n", "path": "setup.py"}]} | 1,358 | 176 |
gh_patches_debug_27640 | rasdani/github-patches | git_diff | wagtail__wagtail-6301 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wagtail should not change month name translations
### Issue Summary
Wagtail's translations override the month name translations (at least for the Slovenian language), which changes how dates are formatted.
### Steps to Reproduce
With wagtail installed:
```python
>>> from django.utils.translation import activate
>>> activate("sl")
>>> from django.utils import formats
...
>>> from datetime import date
>>> formats.date_format(date.today())
'5. Avgust 2020'
```
It should be (and without wagtail installed it is) `5. avgust 2020`.
* I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: (yes / no)
yes
### Technical details
* Python version: Run `python --version`.
Python 3.7.1
* Django version: Look in your requirements.txt, or run `pip show django | grep Version`.
Version: 2.2.14
* Wagtail version: Look at the bottom of the Settings menu in the Wagtail admin, or run `pip show wagtail | grep Version:`.
Version: 2.9.2
</issue>
<code>
[start of wagtail/admin/localization.py]
1 import pytz
2
3 from django.conf import settings
4 from django.utils.translation import gettext as _
5 from django.utils.translation import gettext_lazy
6
7
8 # Wagtail languages with >=90% coverage
9 # This list is manually maintained
10 WAGTAILADMIN_PROVIDED_LANGUAGES = [
11 ('ar', gettext_lazy('Arabic')),
12 ('ca', gettext_lazy('Catalan')),
13 ('cs', gettext_lazy('Czech')),
14 ('de', gettext_lazy('German')),
15 ('el', gettext_lazy('Greek')),
16 ('en', gettext_lazy('English')),
17 ('es', gettext_lazy('Spanish')),
18 ('fi', gettext_lazy('Finnish')),
19 ('fr', gettext_lazy('French')),
20 ('gl', gettext_lazy('Galician')),
21 ('hu', gettext_lazy('Hungarian')),
22 ('id-id', gettext_lazy('Indonesian')),
23 ('is-is', gettext_lazy('Icelandic')),
24 ('it', gettext_lazy('Italian')),
25 ('ja', gettext_lazy('Japanese')),
26 ('ko', gettext_lazy('Korean')),
27 ('lt', gettext_lazy('Lithuanian')),
28 ('mn', gettext_lazy('Mongolian')),
29 ('nb', gettext_lazy('Norwegian Bokmål')),
30 ('nl-nl', gettext_lazy('Netherlands Dutch')),
31 ('fa', gettext_lazy('Persian')),
32 ('pl', gettext_lazy('Polish')),
33 ('pt-br', gettext_lazy('Brazilian Portuguese')),
34 ('pt-pt', gettext_lazy('Portuguese')),
35 ('ro', gettext_lazy('Romanian')),
36 ('ru', gettext_lazy('Russian')),
37 ('sv', gettext_lazy('Swedish')),
38 ('sk-sk', gettext_lazy('Slovak')),
39 ('th', gettext_lazy('Thai')),
40 ('tr', gettext_lazy('Turkish')),
41 ('tr-tr', gettext_lazy('Turkish (Turkey)')),
42 ('uk', gettext_lazy('Ukrainian')),
43 ('zh-hans', gettext_lazy('Chinese (Simplified)')),
44 ('zh-hant', gettext_lazy('Chinese (Traditional)')),
45 ]
46
47
48 # Translatable strings to be made available to JavaScript code
49 # as the wagtailConfig.STRINGS object
50 def get_js_translation_strings():
51 return {
52 'DELETE': _('Delete'),
53 'EDIT': _('Edit'),
54 'PAGE': _('Page'),
55 'PAGES': _('Pages'),
56 'LOADING': _('Loading…'),
57 'NO_RESULTS': _('No results'),
58 'SERVER_ERROR': _('Server Error'),
59 'SEE_ALL': _('See all'),
60 'CLOSE_EXPLORER': _('Close explorer'),
61 'ALT_TEXT': _('Alt text'),
62 'WRITE_HERE': _('Write here…'),
63 'HORIZONTAL_LINE': _('Horizontal line'),
64 'LINE_BREAK': _('Line break'),
65 'UNDO': _('Undo'),
66 'REDO': _('Redo'),
67 'RELOAD_PAGE': _('Reload the page'),
68 'RELOAD_EDITOR': _('Reload saved content'),
69 'SHOW_LATEST_CONTENT': _('Show latest content'),
70 'SHOW_ERROR': _('Show error'),
71 'EDITOR_CRASH': _('The editor just crashed. Content has been reset to the last saved version.'),
72 'BROKEN_LINK': _('Broken link'),
73 'MISSING_DOCUMENT': _('Missing document'),
74 'CLOSE': _('Close'),
75 'EDIT_PAGE': _('Edit \'{title}\''),
76 'VIEW_CHILD_PAGES_OF_PAGE': _('View child pages of \'{title}\''),
77 'PAGE_EXPLORER': _('Page explorer'),
78
79 'MONTHS': [
80 _('January'),
81 _('February'),
82 _('March'),
83 _('April'),
84 _('May'),
85 _('June'),
86 _('July'),
87 _('August'),
88 _('September'),
89 _('October'),
90 _('November'),
91 _('December')
92 ],
93 'WEEKDAYS': [
94 _('Sunday'),
95 _('Monday'),
96 _('Tuesday'),
97 _('Wednesday'),
98 _('Thursday'),
99 _('Friday'),
100 _('Saturday')
101 ],
102 'WEEKDAYS_SHORT': [
103 _('Sun'),
104 _('Mon'),
105 _('Tue'),
106 _('Wed'),
107 _('Thu'),
108 _('Fri'),
109 _('Sat')
110 ]
111 }
112
113
114 def get_available_admin_languages():
115 return getattr(settings, 'WAGTAILADMIN_PERMITTED_LANGUAGES', WAGTAILADMIN_PROVIDED_LANGUAGES)
116
117
118 def get_available_admin_time_zones():
119 return getattr(settings, 'WAGTAIL_USER_TIME_ZONES', pytz.common_timezones)
120
[end of wagtail/admin/localization.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wagtail/admin/localization.py b/wagtail/admin/localization.py
--- a/wagtail/admin/localization.py
+++ b/wagtail/admin/localization.py
@@ -1,6 +1,7 @@
import pytz
from django.conf import settings
+from django.utils.dates import MONTHS, WEEKDAYS, WEEKDAYS_ABBR
from django.utils.translation import gettext as _
from django.utils.translation import gettext_lazy
@@ -76,38 +77,12 @@
'VIEW_CHILD_PAGES_OF_PAGE': _('View child pages of \'{title}\''),
'PAGE_EXPLORER': _('Page explorer'),
- 'MONTHS': [
- _('January'),
- _('February'),
- _('March'),
- _('April'),
- _('May'),
- _('June'),
- _('July'),
- _('August'),
- _('September'),
- _('October'),
- _('November'),
- _('December')
- ],
- 'WEEKDAYS': [
- _('Sunday'),
- _('Monday'),
- _('Tuesday'),
- _('Wednesday'),
- _('Thursday'),
- _('Friday'),
- _('Saturday')
- ],
- 'WEEKDAYS_SHORT': [
- _('Sun'),
- _('Mon'),
- _('Tue'),
- _('Wed'),
- _('Thu'),
- _('Fri'),
- _('Sat')
- ]
+ 'MONTHS': [str(m) for m in MONTHS.values()],
+
+ # Django's WEEKDAYS list begins on Monday, but ours should start on Sunday, so start
+ # counting from -1 and use modulo 7 to get an array index
+ 'WEEKDAYS': [str(WEEKDAYS[d % 7]) for d in range(-1, 6)],
+ 'WEEKDAYS_SHORT': [str(WEEKDAYS_ABBR[d % 7]) for d in range(-1, 6)],
}
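As a quick illustration of the reindexing used in the patch above, here is the same trick in plain Python with a stand-in dictionary for `django.utils.dates.WEEKDAYS` (the real mapping holds lazy translated strings):

```python
# Django's WEEKDAYS mapping is keyed 0..6 starting on Monday; the admin's date picker
# expects Sunday first, hence range(-1, 6) and the modulo-7 index.
WEEKDAYS = {0: "Monday", 1: "Tuesday", 2: "Wednesday", 3: "Thursday",
            4: "Friday", 5: "Saturday", 6: "Sunday"}   # stand-in for django.utils.dates.WEEKDAYS

sunday_first = [WEEKDAYS[d % 7] for d in range(-1, 6)]
print(sunday_first)
# ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday']
```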
| {"golden_diff": "diff --git a/wagtail/admin/localization.py b/wagtail/admin/localization.py\n--- a/wagtail/admin/localization.py\n+++ b/wagtail/admin/localization.py\n@@ -1,6 +1,7 @@\n import pytz\n \n from django.conf import settings\n+from django.utils.dates import MONTHS, WEEKDAYS, WEEKDAYS_ABBR\n from django.utils.translation import gettext as _\n from django.utils.translation import gettext_lazy\n \n@@ -76,38 +77,12 @@\n 'VIEW_CHILD_PAGES_OF_PAGE': _('View child pages of \\'{title}\\''),\n 'PAGE_EXPLORER': _('Page explorer'),\n \n- 'MONTHS': [\n- _('January'),\n- _('February'),\n- _('March'),\n- _('April'),\n- _('May'),\n- _('June'),\n- _('July'),\n- _('August'),\n- _('September'),\n- _('October'),\n- _('November'),\n- _('December')\n- ],\n- 'WEEKDAYS': [\n- _('Sunday'),\n- _('Monday'),\n- _('Tuesday'),\n- _('Wednesday'),\n- _('Thursday'),\n- _('Friday'),\n- _('Saturday')\n- ],\n- 'WEEKDAYS_SHORT': [\n- _('Sun'),\n- _('Mon'),\n- _('Tue'),\n- _('Wed'),\n- _('Thu'),\n- _('Fri'),\n- _('Sat')\n- ]\n+ 'MONTHS': [str(m) for m in MONTHS.values()],\n+\n+ # Django's WEEKDAYS list begins on Monday, but ours should start on Sunday, so start\n+ # counting from -1 and use modulo 7 to get an array index\n+ 'WEEKDAYS': [str(WEEKDAYS[d % 7]) for d in range(-1, 6)],\n+ 'WEEKDAYS_SHORT': [str(WEEKDAYS_ABBR[d % 7]) for d in range(-1, 6)],\n }\n", "issue": "Wagtail should not change month name translations\n### Issue Summary\r\n\r\nWagtail translations overrides month name translations (at least for Slovenian language) which changes how dates are formatted.\r\n\r\n### Steps to Reproduce\r\n\r\nWith wagtail installed:\r\n\r\n```python\r\n>>> from django.utils.translation import activate \r\n>>> activate(\"sl\") \r\n>>> from django.utils import formats \r\n... \r\n>>> from datetime import date \r\n>>> formats.date_format(date.today()) \r\n'5. Avgust 2020'\r\n```\r\n\r\nIt should be (and without wagtail installed it is) `5. 
avgust 2020`.\r\n\r\n* I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: (yes / no)\r\n\r\nyes\r\n\r\n### Technical details\r\n\r\n* Python version: Run `python --version`.\r\n\r\nPython 3.7.1\r\n\r\nDjango version: Look in your requirements.txt, or run `pip show django | grep Version`.\r\n\r\nVersion: 2.2.14\r\n\r\n* Wagtail version: Look at the bottom of the Settings menu in the Wagtail admin, or run `pip show wagtail | grep Version:`.\r\n\r\nVersion: 2.9.2\r\n\n", "before_files": [{"content": "import pytz\n\nfrom django.conf import settings\nfrom django.utils.translation import gettext as _\nfrom django.utils.translation import gettext_lazy\n\n\n# Wagtail languages with >=90% coverage\n# This list is manually maintained\nWAGTAILADMIN_PROVIDED_LANGUAGES = [\n ('ar', gettext_lazy('Arabic')),\n ('ca', gettext_lazy('Catalan')),\n ('cs', gettext_lazy('Czech')),\n ('de', gettext_lazy('German')),\n ('el', gettext_lazy('Greek')),\n ('en', gettext_lazy('English')),\n ('es', gettext_lazy('Spanish')),\n ('fi', gettext_lazy('Finnish')),\n ('fr', gettext_lazy('French')),\n ('gl', gettext_lazy('Galician')),\n ('hu', gettext_lazy('Hungarian')),\n ('id-id', gettext_lazy('Indonesian')),\n ('is-is', gettext_lazy('Icelandic')),\n ('it', gettext_lazy('Italian')),\n ('ja', gettext_lazy('Japanese')),\n ('ko', gettext_lazy('Korean')),\n ('lt', gettext_lazy('Lithuanian')),\n ('mn', gettext_lazy('Mongolian')),\n ('nb', gettext_lazy('Norwegian Bokm\u00e5l')),\n ('nl-nl', gettext_lazy('Netherlands Dutch')),\n ('fa', gettext_lazy('Persian')),\n ('pl', gettext_lazy('Polish')),\n ('pt-br', gettext_lazy('Brazilian Portuguese')),\n ('pt-pt', gettext_lazy('Portuguese')),\n ('ro', gettext_lazy('Romanian')),\n ('ru', gettext_lazy('Russian')),\n ('sv', gettext_lazy('Swedish')),\n ('sk-sk', gettext_lazy('Slovak')),\n ('th', gettext_lazy('Thai')),\n ('tr', gettext_lazy('Turkish')),\n ('tr-tr', gettext_lazy('Turkish (Turkey)')),\n ('uk', gettext_lazy('Ukrainian')),\n ('zh-hans', gettext_lazy('Chinese (Simplified)')),\n ('zh-hant', gettext_lazy('Chinese (Traditional)')),\n]\n\n\n# Translatable strings to be made available to JavaScript code\n# as the wagtailConfig.STRINGS object\ndef get_js_translation_strings():\n return {\n 'DELETE': _('Delete'),\n 'EDIT': _('Edit'),\n 'PAGE': _('Page'),\n 'PAGES': _('Pages'),\n 'LOADING': _('Loading\u2026'),\n 'NO_RESULTS': _('No results'),\n 'SERVER_ERROR': _('Server Error'),\n 'SEE_ALL': _('See all'),\n 'CLOSE_EXPLORER': _('Close explorer'),\n 'ALT_TEXT': _('Alt text'),\n 'WRITE_HERE': _('Write here\u2026'),\n 'HORIZONTAL_LINE': _('Horizontal line'),\n 'LINE_BREAK': _('Line break'),\n 'UNDO': _('Undo'),\n 'REDO': _('Redo'),\n 'RELOAD_PAGE': _('Reload the page'),\n 'RELOAD_EDITOR': _('Reload saved content'),\n 'SHOW_LATEST_CONTENT': _('Show latest content'),\n 'SHOW_ERROR': _('Show error'),\n 'EDITOR_CRASH': _('The editor just crashed. 
Content has been reset to the last saved version.'),\n 'BROKEN_LINK': _('Broken link'),\n 'MISSING_DOCUMENT': _('Missing document'),\n 'CLOSE': _('Close'),\n 'EDIT_PAGE': _('Edit \\'{title}\\''),\n 'VIEW_CHILD_PAGES_OF_PAGE': _('View child pages of \\'{title}\\''),\n 'PAGE_EXPLORER': _('Page explorer'),\n\n 'MONTHS': [\n _('January'),\n _('February'),\n _('March'),\n _('April'),\n _('May'),\n _('June'),\n _('July'),\n _('August'),\n _('September'),\n _('October'),\n _('November'),\n _('December')\n ],\n 'WEEKDAYS': [\n _('Sunday'),\n _('Monday'),\n _('Tuesday'),\n _('Wednesday'),\n _('Thursday'),\n _('Friday'),\n _('Saturday')\n ],\n 'WEEKDAYS_SHORT': [\n _('Sun'),\n _('Mon'),\n _('Tue'),\n _('Wed'),\n _('Thu'),\n _('Fri'),\n _('Sat')\n ]\n }\n\n\ndef get_available_admin_languages():\n return getattr(settings, 'WAGTAILADMIN_PERMITTED_LANGUAGES', WAGTAILADMIN_PROVIDED_LANGUAGES)\n\n\ndef get_available_admin_time_zones():\n return getattr(settings, 'WAGTAIL_USER_TIME_ZONES', pytz.common_timezones)\n", "path": "wagtail/admin/localization.py"}]} | 1,973 | 432 |
gh_patches_debug_29474 | rasdani/github-patches | git_diff | borgbackup__borg-1193 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
borgbackup build failure when using OpenSSL 1.1.0
https://groups.google.com/d/msg/linux.debian.devel/53fq9S-Qpp4/V_0pPtdzBQAJ
</issue>
<code>
[start of borg/testsuite/crypto.py]
1 from binascii import hexlify
2
3 from ..crypto import AES, bytes_to_long, bytes_to_int, long_to_bytes
4 from . import BaseTestCase
5
6
7 class CryptoTestCase(BaseTestCase):
8
9 def test_bytes_to_int(self):
10 self.assert_equal(bytes_to_int(b'\0\0\0\1'), 1)
11
12 def test_bytes_to_long(self):
13 self.assert_equal(bytes_to_long(b'\0\0\0\0\0\0\0\1'), 1)
14 self.assert_equal(long_to_bytes(1), b'\0\0\0\0\0\0\0\1')
15
16 def test_aes(self):
17 key = b'X' * 32
18 data = b'foo' * 10
19 # encrypt
20 aes = AES(is_encrypt=True, key=key)
21 self.assert_equal(bytes_to_long(aes.iv, 8), 0)
22 cdata = aes.encrypt(data)
23 self.assert_equal(hexlify(cdata), b'c6efb702de12498f34a2c2bbc8149e759996d08bf6dc5c610aefc0c3a466')
24 self.assert_equal(bytes_to_long(aes.iv, 8), 2)
25 # decrypt
26 aes = AES(is_encrypt=False, key=key)
27 self.assert_equal(bytes_to_long(aes.iv, 8), 0)
28 pdata = aes.decrypt(cdata)
29 self.assert_equal(data, pdata)
30 self.assert_equal(bytes_to_long(aes.iv, 8), 2)
31
[end of borg/testsuite/crypto.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/borg/testsuite/crypto.py b/borg/testsuite/crypto.py
--- a/borg/testsuite/crypto.py
+++ b/borg/testsuite/crypto.py
@@ -1,6 +1,7 @@
from binascii import hexlify
from ..crypto import AES, bytes_to_long, bytes_to_int, long_to_bytes
+from ..crypto import increment_iv, bytes16_to_int, int_to_bytes16
from . import BaseTestCase
@@ -13,6 +14,27 @@
self.assert_equal(bytes_to_long(b'\0\0\0\0\0\0\0\1'), 1)
self.assert_equal(long_to_bytes(1), b'\0\0\0\0\0\0\0\1')
+ def test_bytes16_to_int(self):
+ self.assert_equal(bytes16_to_int(b'\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1'), 1)
+ self.assert_equal(int_to_bytes16(1), b'\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1')
+ self.assert_equal(bytes16_to_int(b'\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0\0'), 2 ** 64)
+ self.assert_equal(int_to_bytes16(2 ** 64), b'\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0\0')
+
+ def test_increment_iv(self):
+ iv0 = b'\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0'
+ iv1 = b'\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1'
+ iv2 = b'\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\2'
+ self.assert_equal(increment_iv(iv0, 0), iv0)
+ self.assert_equal(increment_iv(iv0, 1), iv1)
+ self.assert_equal(increment_iv(iv0, 2), iv2)
+ iva = b'\0\0\0\0\0\0\0\0\xff\xff\xff\xff\xff\xff\xff\xff'
+ ivb = b'\0\0\0\0\0\0\0\1\x00\x00\x00\x00\x00\x00\x00\x00'
+ ivc = b'\0\0\0\0\0\0\0\1\x00\x00\x00\x00\x00\x00\x00\x01'
+ self.assert_equal(increment_iv(iva, 0), iva)
+ self.assert_equal(increment_iv(iva, 1), ivb)
+ self.assert_equal(increment_iv(iva, 2), ivc)
+ self.assert_equal(increment_iv(iv0, 2**64), ivb)
+
def test_aes(self):
key = b'X' * 32
data = b'foo' * 10
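For readers unfamiliar with the 16-byte big-endian helpers exercised by the new tests, the expected constants can be cross-checked with the standard library alone. This is only a sketch; `bytes16_to_int`/`int_to_bytes16` and `increment_iv` are assumed to follow this big-endian convention:

```python
value = 2 ** 64
big_endian = value.to_bytes(16, "big")
assert big_endian == b'\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0\0'   # same constant as in the test
assert int.from_bytes(big_endian, "big") == value

# increment_iv(iv0, 2 ** 64) is expected to carry into the upper 8 bytes,
# which is what the ivb constant in the test above encodes.
```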
| {"golden_diff": "diff --git a/borg/testsuite/crypto.py b/borg/testsuite/crypto.py\n--- a/borg/testsuite/crypto.py\n+++ b/borg/testsuite/crypto.py\n@@ -1,6 +1,7 @@\n from binascii import hexlify\n \n from ..crypto import AES, bytes_to_long, bytes_to_int, long_to_bytes\n+from ..crypto import increment_iv, bytes16_to_int, int_to_bytes16\n from . import BaseTestCase\n \n \n@@ -13,6 +14,27 @@\n self.assert_equal(bytes_to_long(b'\\0\\0\\0\\0\\0\\0\\0\\1'), 1)\n self.assert_equal(long_to_bytes(1), b'\\0\\0\\0\\0\\0\\0\\0\\1')\n \n+ def test_bytes16_to_int(self):\n+ self.assert_equal(bytes16_to_int(b'\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\1'), 1)\n+ self.assert_equal(int_to_bytes16(1), b'\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\1')\n+ self.assert_equal(bytes16_to_int(b'\\0\\0\\0\\0\\0\\0\\0\\1\\0\\0\\0\\0\\0\\0\\0\\0'), 2 ** 64)\n+ self.assert_equal(int_to_bytes16(2 ** 64), b'\\0\\0\\0\\0\\0\\0\\0\\1\\0\\0\\0\\0\\0\\0\\0\\0')\n+\n+ def test_increment_iv(self):\n+ iv0 = b'\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0'\n+ iv1 = b'\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\1'\n+ iv2 = b'\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\2'\n+ self.assert_equal(increment_iv(iv0, 0), iv0)\n+ self.assert_equal(increment_iv(iv0, 1), iv1)\n+ self.assert_equal(increment_iv(iv0, 2), iv2)\n+ iva = b'\\0\\0\\0\\0\\0\\0\\0\\0\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff'\n+ ivb = b'\\0\\0\\0\\0\\0\\0\\0\\1\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00'\n+ ivc = b'\\0\\0\\0\\0\\0\\0\\0\\1\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01'\n+ self.assert_equal(increment_iv(iva, 0), iva)\n+ self.assert_equal(increment_iv(iva, 1), ivb)\n+ self.assert_equal(increment_iv(iva, 2), ivc)\n+ self.assert_equal(increment_iv(iv0, 2**64), ivb)\n+\n def test_aes(self):\n key = b'X' * 32\n data = b'foo' * 10\n", "issue": "borgbackup build failure when using OpenSSL 1.1.0\nhttps://groups.google.com/d/msg/linux.debian.devel/53fq9S-Qpp4/V_0pPtdzBQAJ\n\n", "before_files": [{"content": "from binascii import hexlify\n\nfrom ..crypto import AES, bytes_to_long, bytes_to_int, long_to_bytes\nfrom . import BaseTestCase\n\n\nclass CryptoTestCase(BaseTestCase):\n\n def test_bytes_to_int(self):\n self.assert_equal(bytes_to_int(b'\\0\\0\\0\\1'), 1)\n\n def test_bytes_to_long(self):\n self.assert_equal(bytes_to_long(b'\\0\\0\\0\\0\\0\\0\\0\\1'), 1)\n self.assert_equal(long_to_bytes(1), b'\\0\\0\\0\\0\\0\\0\\0\\1')\n\n def test_aes(self):\n key = b'X' * 32\n data = b'foo' * 10\n # encrypt\n aes = AES(is_encrypt=True, key=key)\n self.assert_equal(bytes_to_long(aes.iv, 8), 0)\n cdata = aes.encrypt(data)\n self.assert_equal(hexlify(cdata), b'c6efb702de12498f34a2c2bbc8149e759996d08bf6dc5c610aefc0c3a466')\n self.assert_equal(bytes_to_long(aes.iv, 8), 2)\n # decrypt\n aes = AES(is_encrypt=False, key=key)\n self.assert_equal(bytes_to_long(aes.iv, 8), 0)\n pdata = aes.decrypt(cdata)\n self.assert_equal(data, pdata)\n self.assert_equal(bytes_to_long(aes.iv, 8), 2)\n", "path": "borg/testsuite/crypto.py"}]} | 988 | 780 |
gh_patches_debug_7561 | rasdani/github-patches | git_diff | mdn__kuma-6693 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
T - TypeError 'count.toLocaleString' in SSR
https://sentry.prod.mozaws.net/operations/mdn-prod/issues/7090931/
```
TypeError: Cannot read property 'toLocaleString' of undefined
File "/app/kuma/javascript/dist/ssr.js", line 22, in kt
'{snip} ocuments found for "%(query)s" in %(locale)s.',a),{count:a.toLocaleString(),locale:n,query:c})," ",t))}function jt(e){var t=e.locale;return( {snip}
File "/app/kuma/javascript/dist/ssr.js", line 22, in a
'{snip} .state);null!=d&&(f.state=r({},f.state,d))}}else if(D={},f=a(o.props,i,s),null==(f=G(a,o.props,f,i))||null==f.render)return void he(e=f,a);i {snip}
File "/app/kuma/javascript/dist/ssr.js", line 22, in ye
'{snip} lement(e);){var i=e,c=i.type;if("function"!=typeof c)break;a(i,c)}return{child:e,context:t}}var ve=function(){function e(t,n){if(!(this inst {snip}
File "/app/kuma/javascript/dist/ssr.js", line 22, in e.render
'{snip} -- --\x3e"+I(n):(this.previousWasTextNode=!0,I(n));if(e=(t=ye(e,t,this.threadID)).child,t=t.context,null===e||!1===e)return"";if(!o.isValidE {snip}
File "/app/kuma/javascript/dist/ssr.js", line 22, in e.read
'{snip} +=c}else{var f=i.children[i.childIndex++],m="";try{m+=this.render(f,i.context,i.domNamespace)}catch(e){throw e}r.length<=this.suspenseDepth& {snip}
...
(5 additional frame(s) were not displayed)
```
</issue>
<code>
[start of kuma/search/views.py]
1 from django.shortcuts import render
2 from django.urls import reverse_lazy
3 from django.views.decorators.cache import never_cache
4 from django.views.decorators.http import require_GET
5 from django.views.generic import RedirectView
6 from ratelimit.decorators import ratelimit
7
8 from kuma.api.v1.views import search as search_api
9 from kuma.core.decorators import shared_cache_control
10 from kuma.core.utils import is_wiki
11
12 from .search import SearchView
13
14 # Since the search endpoint accepts user input (via query parameters) and its
15 # response is compressed, use rate limiting to mitigate the BREACH attack
16 # (see http://breachattack.com/). It still needs to allow a user to click
17 # the filter switches (bug 1426968).
18 # Alternate: forbid gzip by setting Content-Encoding: identity
19 @never_cache
20 @require_GET
21 @ratelimit(key="user_or_ip", rate="25/m", block=True)
22 def search(request, *args, **kwargs):
23 """
24 The search view.
25 """
26 if is_wiki(request):
27 return wiki_search(request, *args, **kwargs)
28
29 results = search_api(request, *args, **kwargs).data
30
31 # Determine if there were validation errors
32 error = results.get("error") or results.get("q")
33 # If q is returned in the data, there was a validation error for that field,
34 # so return 400 status.
35 status = 200 if results.get("q") is None else 400
36
37 context = {"results": {"results": None if error else results, "error": error}}
38
39 return render(request, "search/react.html", context, status=status)
40
41
42 wiki_search = SearchView.as_view()
43
44
45 class SearchRedirectView(RedirectView):
46 permanent = True
47
48 def get_redirect_url(self, *args, **kwargs):
49 query_string = self.request.META.get("QUERY_STRING")
50 url = reverse_lazy(
51 "api.v1.search", kwargs={"locale": self.request.LANGUAGE_CODE}
52 )
53 if query_string:
54 url += "?" + query_string
55 return url
56
57
58 @shared_cache_control(s_maxage=60 * 60 * 24 * 7)
59 def plugin(request):
60 """Render an OpenSearch Plugin."""
61 return render(
62 request,
63 "search/plugin.html",
64 {"locale": request.LANGUAGE_CODE},
65 content_type="application/opensearchdescription+xml",
66 )
67
[end of kuma/search/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kuma/search/views.py b/kuma/search/views.py
--- a/kuma/search/views.py
+++ b/kuma/search/views.py
@@ -33,6 +33,10 @@
# If q is returned in the data, there was a validation error for that field,
# so return 400 status.
status = 200 if results.get("q") is None else 400
+ # If there was an error with the pagination you'll get...
+ if results.get("detail"):
+ error = str(results["detail"])
+ status = 400
context = {"results": {"results": None if error else results, "error": error}}
| {"golden_diff": "diff --git a/kuma/search/views.py b/kuma/search/views.py\n--- a/kuma/search/views.py\n+++ b/kuma/search/views.py\n@@ -33,6 +33,10 @@\n # If q is returned in the data, there was a validation error for that field,\n # so return 400 status.\n status = 200 if results.get(\"q\") is None else 400\n+ # If there was an error with the pagination you'll get...\n+ if results.get(\"detail\"):\n+ error = str(results[\"detail\"])\n+ status = 400\n \n context = {\"results\": {\"results\": None if error else results, \"error\": error}}\n", "issue": "T - TypeError 'count.toLocaleString' in SSR\nhttps://sentry.prod.mozaws.net/operations/mdn-prod/issues/7090931/\n\n```\nTypeError: Cannot read property 'toLocaleString' of undefined\n File \"/app/kuma/javascript/dist/ssr.js\", line 22, in kt\n '{snip} ocuments found for \"%(query)s\" in %(locale)s.',a),{count:a.toLocaleString(),locale:n,query:c}),\" \",t))}function jt(e){var t=e.locale;return( {snip}\n File \"/app/kuma/javascript/dist/ssr.js\", line 22, in a\n '{snip} .state);null!=d&&(f.state=r({},f.state,d))}}else if(D={},f=a(o.props,i,s),null==(f=G(a,o.props,f,i))||null==f.render)return void he(e=f,a);i {snip}\n File \"/app/kuma/javascript/dist/ssr.js\", line 22, in ye\n '{snip} lement(e);){var i=e,c=i.type;if(\"function\"!=typeof c)break;a(i,c)}return{child:e,context:t}}var ve=function(){function e(t,n){if(!(this inst {snip}\n File \"/app/kuma/javascript/dist/ssr.js\", line 22, in e.render\n '{snip} -- --\\x3e\"+I(n):(this.previousWasTextNode=!0,I(n));if(e=(t=ye(e,t,this.threadID)).child,t=t.context,null===e||!1===e)return\"\";if(!o.isValidE {snip}\n File \"/app/kuma/javascript/dist/ssr.js\", line 22, in e.read\n '{snip} +=c}else{var f=i.children[i.childIndex++],m=\"\";try{m+=this.render(f,i.context,i.domNamespace)}catch(e){throw e}r.length<=this.suspenseDepth& {snip}\n...\n(5 additional frame(s) were not displayed)\n```\n", "before_files": [{"content": "from django.shortcuts import render\nfrom django.urls import reverse_lazy\nfrom django.views.decorators.cache import never_cache\nfrom django.views.decorators.http import require_GET\nfrom django.views.generic import RedirectView\nfrom ratelimit.decorators import ratelimit\n\nfrom kuma.api.v1.views import search as search_api\nfrom kuma.core.decorators import shared_cache_control\nfrom kuma.core.utils import is_wiki\n\nfrom .search import SearchView\n\n# Since the search endpoint accepts user input (via query parameters) and its\n# response is compressed, use rate limiting to mitigate the BREACH attack\n# (see http://breachattack.com/). 
It still needs to allow a user to click\n# the filter switches (bug 1426968).\n# Alternate: forbid gzip by setting Content-Encoding: identity\n@never_cache\n@require_GET\n@ratelimit(key=\"user_or_ip\", rate=\"25/m\", block=True)\ndef search(request, *args, **kwargs):\n \"\"\"\n The search view.\n \"\"\"\n if is_wiki(request):\n return wiki_search(request, *args, **kwargs)\n\n results = search_api(request, *args, **kwargs).data\n\n # Determine if there were validation errors\n error = results.get(\"error\") or results.get(\"q\")\n # If q is returned in the data, there was a validation error for that field,\n # so return 400 status.\n status = 200 if results.get(\"q\") is None else 400\n\n context = {\"results\": {\"results\": None if error else results, \"error\": error}}\n\n return render(request, \"search/react.html\", context, status=status)\n\n\nwiki_search = SearchView.as_view()\n\n\nclass SearchRedirectView(RedirectView):\n permanent = True\n\n def get_redirect_url(self, *args, **kwargs):\n query_string = self.request.META.get(\"QUERY_STRING\")\n url = reverse_lazy(\n \"api.v1.search\", kwargs={\"locale\": self.request.LANGUAGE_CODE}\n )\n if query_string:\n url += \"?\" + query_string\n return url\n\n\n@shared_cache_control(s_maxage=60 * 60 * 24 * 7)\ndef plugin(request):\n \"\"\"Render an OpenSearch Plugin.\"\"\"\n return render(\n request,\n \"search/plugin.html\",\n {\"locale\": request.LANGUAGE_CODE},\n content_type=\"application/opensearchdescription+xml\",\n )\n", "path": "kuma/search/views.py"}]} | 1,634 | 157 |
gh_patches_debug_66285 | rasdani/github-patches | git_diff | python-poetry__poetry-578 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Poetry run: ModuleOrPackageNotFound with implicit namespace packages (PEP420)
<!-- Checked checkbox should look like this: [x] -->
- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.
- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.
- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).
- **OS version and name**: Arch Linux 4.18.16
- **Poetry version**: 0.12.5
- **Link of a [Gist](https://gist.github.com/) with the contents of your pyproject.toml file**: https://gist.github.com/Kazy/692963f6a41c64d38f38ac9a3f95619a
## Issue
I'm using implicit namespace packages to organize the packages at work, which works well when specifying the `packages` value in `pyproject.toml` like this:
```toml
packages = [
{ include = "org" }
]
```
The file structure looks like this:
```
├── org
│ └── subpackage
│ ├── __init__.py
│ └── command
│ └── cli.py
└── pyproject.toml
```
The issue is that when you run `poetry run my-command`, you get:
```
[ModuleOrPackageNotFound]
No file/folder found for package org-subpackage-command
```
I have already found a fix and will follow up with a PR, but I wanted to create this issue in case my fix isn't the right one, and to make it easier to track on your side as well.
</issue>
<code>
[start of poetry/console/commands/run.py]
1 from .env_command import EnvCommand
2
3
4 class RunCommand(EnvCommand):
5 """
6 Runs a command in the appropriate environment.
7
8 run
9 { args* : The command and arguments/options to run. }
10 """
11
12 def handle(self):
13 args = self.argument("args")
14 script = args[0]
15 scripts = self.poetry.local_config.get("scripts")
16
17 if scripts and script in scripts:
18 return self.run_script(scripts[script], args)
19
20 return self.env.execute(*args)
21
22 def run_script(self, script, args):
23 if isinstance(script, dict):
24 script = script["callable"]
25
26 module, callable_ = script.split(":")
27
28 src_in_sys_path = "sys.path.append('src'); " if self._module.is_in_src() else ""
29
30 cmd = ["python", "-c"]
31
32 cmd += [
33 '"import sys; '
34 "from importlib import import_module; "
35 "sys.argv = {!r}; {}"
36 "import_module('{}').{}()\"".format(
37 args, src_in_sys_path, module, callable_
38 )
39 ]
40
41 return self.env.run(*cmd, shell=True, call=True)
42
43 @property
44 def _module(self):
45 from ...masonry.utils.module import Module
46
47 poetry = self.poetry
48 package = poetry.package
49 path = poetry.file.parent
50 module = Module(package.name, path.as_posix())
51 return module
52
53 def merge_application_definition(self, merge_args=True):
54 if self._application is None or (
55 self._application_definition_merged
56 and (self._application_definition_merged_with_args or not merge_args)
57 ):
58 return
59
60 if merge_args:
61 current_arguments = self._definition.get_arguments()
62 self._definition.set_arguments(
63 self._application.get_definition().get_arguments()
64 )
65 self._definition.add_arguments(current_arguments)
66
67 self._application_definition_merged = True
68 if merge_args:
69 self._application_definition_merged_with_args = True
70
[end of poetry/console/commands/run.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/poetry/console/commands/run.py b/poetry/console/commands/run.py
--- a/poetry/console/commands/run.py
+++ b/poetry/console/commands/run.py
@@ -47,7 +47,7 @@
poetry = self.poetry
package = poetry.package
path = poetry.file.parent
- module = Module(package.name, path.as_posix())
+ module = Module(package.name, path.as_posix(), package.packages)
return module
def merge_application_definition(self, merge_args=True):
| {"golden_diff": "diff --git a/poetry/console/commands/run.py b/poetry/console/commands/run.py\n--- a/poetry/console/commands/run.py\n+++ b/poetry/console/commands/run.py\n@@ -47,7 +47,7 @@\n poetry = self.poetry\n package = poetry.package\n path = poetry.file.parent\n- module = Module(package.name, path.as_posix())\n+ module = Module(package.name, path.as_posix(), package.packages)\n return module\n \n def merge_application_definition(self, merge_args=True):\n", "issue": "Poetry run: ModuleOrPackageNotFound with implicit namespace packages (PEP420)\n<!-- Checked checkbox should look like this: [x] -->\r\n- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.\r\n- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).\r\n\r\n- **OS version and name**: Arch Linux 4.18.16\r\n- **Poetry version**: 0.12.5\r\n- **Link of a [Gist](https://gist.github.com/) with the contents of your pyproject.toml file**: https://gist.github.com/Kazy/692963f6a41c64d38f38ac9a3f95619a\r\n\r\n## Issue\r\nI'm using implicit namespace packages to organize the packages at work, which works well by specifying the `packages` value in the `pyproject.toml` like that:\r\n```toml\r\npackages = [\r\n { include = \"org\" }\r\n]\r\n```\r\nThe file structure is like that:\r\n```\r\n\u251c\u2500\u2500 org\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 subpackage\r\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 __init__.py\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 command\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 cli.py\r\n\u2514\u2500\u2500 pyproject.toml\r\n```\r\n\r\nThe issue is when you try to do `poetry run my-command`, you get:\r\n```\r\n[ModuleOrPackageNotFound]\r\nNo file/folder found for package org-subpackage-command\r\n```\r\n\r\nI already found how to fix it and will follow with a PR, but I wanted to create the issue in case my fix isn't the right one, and to make organization easier on your side as well.\r\n\n", "before_files": [{"content": "from .env_command import EnvCommand\n\n\nclass RunCommand(EnvCommand):\n \"\"\"\n Runs a command in the appropriate environment.\n\n run\n { args* : The command and arguments/options to run. 
}\n \"\"\"\n\n def handle(self):\n args = self.argument(\"args\")\n script = args[0]\n scripts = self.poetry.local_config.get(\"scripts\")\n\n if scripts and script in scripts:\n return self.run_script(scripts[script], args)\n\n return self.env.execute(*args)\n\n def run_script(self, script, args):\n if isinstance(script, dict):\n script = script[\"callable\"]\n\n module, callable_ = script.split(\":\")\n\n src_in_sys_path = \"sys.path.append('src'); \" if self._module.is_in_src() else \"\"\n\n cmd = [\"python\", \"-c\"]\n\n cmd += [\n '\"import sys; '\n \"from importlib import import_module; \"\n \"sys.argv = {!r}; {}\"\n \"import_module('{}').{}()\\\"\".format(\n args, src_in_sys_path, module, callable_\n )\n ]\n\n return self.env.run(*cmd, shell=True, call=True)\n\n @property\n def _module(self):\n from ...masonry.utils.module import Module\n\n poetry = self.poetry\n package = poetry.package\n path = poetry.file.parent\n module = Module(package.name, path.as_posix())\n return module\n\n def merge_application_definition(self, merge_args=True):\n if self._application is None or (\n self._application_definition_merged\n and (self._application_definition_merged_with_args or not merge_args)\n ):\n return\n\n if merge_args:\n current_arguments = self._definition.get_arguments()\n self._definition.set_arguments(\n self._application.get_definition().get_arguments()\n )\n self._definition.add_arguments(current_arguments)\n\n self._application_definition_merged = True\n if merge_args:\n self._application_definition_merged_with_args = True\n", "path": "poetry/console/commands/run.py"}]} | 1,515 | 122 |
gh_patches_debug_25695 | rasdani/github-patches | git_diff | bokeh__bokeh-6665 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
export_csv example under 0.12.7dev11 doesn't resize table
Running the example export_csv from https://github.com/bokeh/bokeh/tree/master/examples/app/export_csv
under 0.12.7dev11, the table doesn't resize and the extra rows are filled with 'undefined' and '$NaN'.
The number of rows is 248 and doesn't change when moving the slider.
The rows after 248 are not shown.
Under 0.12.6 everything works perfectly.
</issue>
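
As an illustration of where the regression could sit, the following minimal sketch (invented data values; not part of the Bokeh example itself) exercises only the `ColumnDataSource` side of the `update()` callback shown in `main.py` below. If the printed column lengths track the filter value, the stale 'undefined'/'$NaN' rows point at the DataTable widget rather than at the data update.

```python
# Minimal, server-free check of the data-source side of the app.
# The salary figures below are made up; the real app reads salary_data.csv.
import pandas as pd
from bokeh.models import ColumnDataSource

df = pd.DataFrame({
    "name": ["a", "b", "c", "d"],
    "salary": [20_000, 80_000, 120_000, 200_000],
    "years_experience": [1, 4, 8, 15],
})
source = ColumnDataSource(data=dict())

def update(max_salary: int) -> None:
    current = df[df["salary"] <= max_salary].dropna()
    source.data = {
        "name": list(current.name),
        "salary": list(current.salary),
        "years_experience": list(current.years_experience),
    }

for max_salary in (50_000, 150_000, 250_000):
    update(max_salary)
    print(max_salary, len(source.data["name"]))  # expected: 1, 3, 4
```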
<code>
[start of examples/app/export_csv/main.py]
1 from os.path import dirname, join
2
3 import pandas as pd
4
5 from bokeh.layouts import row, widgetbox
6 from bokeh.models import ColumnDataSource, CustomJS
7 from bokeh.models.widgets import Slider, Button, DataTable, TableColumn, NumberFormatter
8 from bokeh.io import curdoc
9
10 df = pd.read_csv(join(dirname(__file__), 'salary_data.csv'))
11
12 source = ColumnDataSource(data=dict())
13
14 def update():
15 current = df[df['salary'] <= slider.value].dropna()
16 source.data = {
17 'name' : current.name,
18 'salary' : current.salary,
19 'years_experience' : current.years_experience,
20 }
21
22 slider = Slider(title="Max Salary", start=10000, end=250000, value=150000, step=1000)
23 slider.on_change('value', lambda attr, old, new: update())
24
25 button = Button(label="Download", button_type="success")
26 button.callback = CustomJS(args=dict(source=source),
27 code=open(join(dirname(__file__), "download.js")).read())
28
29 columns = [
30 TableColumn(field="name", title="Employee Name"),
31 TableColumn(field="salary", title="Income", formatter=NumberFormatter(format="$0,0.00")),
32 TableColumn(field="years_experience", title="Experience (years)")
33 ]
34
35 data_table = DataTable(source=source, columns=columns, width=800)
36
37 controls = widgetbox(slider, button)
38 table = widgetbox(data_table)
39
40 curdoc().add_root(row(controls, table))
41 curdoc().title = "Export CSV"
42
43 update()
44
[end of examples/app/export_csv/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/app/export_csv/main.py b/examples/app/export_csv/main.py
--- a/examples/app/export_csv/main.py
+++ b/examples/app/export_csv/main.py
@@ -4,7 +4,7 @@
from bokeh.layouts import row, widgetbox
from bokeh.models import ColumnDataSource, CustomJS
-from bokeh.models.widgets import Slider, Button, DataTable, TableColumn, NumberFormatter
+from bokeh.models.widgets import RangeSlider, Button, DataTable, TableColumn, NumberFormatter
from bokeh.io import curdoc
df = pd.read_csv(join(dirname(__file__), 'salary_data.csv'))
@@ -12,14 +12,14 @@
source = ColumnDataSource(data=dict())
def update():
- current = df[df['salary'] <= slider.value].dropna()
+ current = df[(df['salary'] >= slider.value[0]) & (df['salary'] <= slider.value[1])].dropna()
source.data = {
'name' : current.name,
'salary' : current.salary,
'years_experience' : current.years_experience,
}
-slider = Slider(title="Max Salary", start=10000, end=250000, value=150000, step=1000)
+slider = RangeSlider(title="Max Salary", start=10000, end=110000, value=(10000, 50000), step=1000, format="0,0")
slider.on_change('value', lambda attr, old, new: update())
button = Button(label="Download", button_type="success")
| {"golden_diff": "diff --git a/examples/app/export_csv/main.py b/examples/app/export_csv/main.py\n--- a/examples/app/export_csv/main.py\n+++ b/examples/app/export_csv/main.py\n@@ -4,7 +4,7 @@\n \n from bokeh.layouts import row, widgetbox\n from bokeh.models import ColumnDataSource, CustomJS\n-from bokeh.models.widgets import Slider, Button, DataTable, TableColumn, NumberFormatter\n+from bokeh.models.widgets import RangeSlider, Button, DataTable, TableColumn, NumberFormatter\n from bokeh.io import curdoc\n \n df = pd.read_csv(join(dirname(__file__), 'salary_data.csv'))\n@@ -12,14 +12,14 @@\n source = ColumnDataSource(data=dict())\n \n def update():\n- current = df[df['salary'] <= slider.value].dropna()\n+ current = df[(df['salary'] >= slider.value[0]) & (df['salary'] <= slider.value[1])].dropna()\n source.data = {\n 'name' : current.name,\n 'salary' : current.salary,\n 'years_experience' : current.years_experience,\n }\n \n-slider = Slider(title=\"Max Salary\", start=10000, end=250000, value=150000, step=1000)\n+slider = RangeSlider(title=\"Max Salary\", start=10000, end=110000, value=(10000, 50000), step=1000, format=\"0,0\")\n slider.on_change('value', lambda attr, old, new: update())\n \n button = Button(label=\"Download\", button_type=\"success\")\n", "issue": "export_csv example under 0.12.7dev11 doesn't resize table\nRunning example export_csv from https://github.com/bokeh/bokeh/tree/master/examples/app/export_csv\r\n\r\nunder 0.12.7dev11 the table doesn't resize and extra rows are filled with 'undefined', '$NaN'.\r\nThe number of rows is 248 and doesn't change when moving the slider.\r\nThe rows after 248 are not shown.\r\n\r\nUnder 0.12.6 everything works perfect.\n", "before_files": [{"content": "from os.path import dirname, join\n\nimport pandas as pd\n\nfrom bokeh.layouts import row, widgetbox\nfrom bokeh.models import ColumnDataSource, CustomJS\nfrom bokeh.models.widgets import Slider, Button, DataTable, TableColumn, NumberFormatter\nfrom bokeh.io import curdoc\n\ndf = pd.read_csv(join(dirname(__file__), 'salary_data.csv'))\n\nsource = ColumnDataSource(data=dict())\n\ndef update():\n current = df[df['salary'] <= slider.value].dropna()\n source.data = {\n 'name' : current.name,\n 'salary' : current.salary,\n 'years_experience' : current.years_experience,\n }\n\nslider = Slider(title=\"Max Salary\", start=10000, end=250000, value=150000, step=1000)\nslider.on_change('value', lambda attr, old, new: update())\n\nbutton = Button(label=\"Download\", button_type=\"success\")\nbutton.callback = CustomJS(args=dict(source=source),\n code=open(join(dirname(__file__), \"download.js\")).read())\n\ncolumns = [\n TableColumn(field=\"name\", title=\"Employee Name\"),\n TableColumn(field=\"salary\", title=\"Income\", formatter=NumberFormatter(format=\"$0,0.00\")),\n TableColumn(field=\"years_experience\", title=\"Experience (years)\")\n]\n\ndata_table = DataTable(source=source, columns=columns, width=800)\n\ncontrols = widgetbox(slider, button)\ntable = widgetbox(data_table)\n\ncurdoc().add_root(row(controls, table))\ncurdoc().title = \"Export CSV\"\n\nupdate()\n", "path": "examples/app/export_csv/main.py"}]} | 1,075 | 362 |
gh_patches_debug_21709 | rasdani/github-patches | git_diff | Lightning-AI__pytorch-lightning-706 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TensorBoardLogger and ModelCheckpoint are not using the same folder by default
## 🐛 Bug
(master branch)
By default, the TensorBoardLogger writes logs into `lightning_logs/0`, but ModelCheckpoint writes checkpoints into `lightning_logs/version_0`.
</issue>
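
To make the mismatch concrete, here is a small standard-library sketch (not PyTorch Lightning code) of the `version_<n>` directory scheme used by the checkpoint path. A logger whose `_get_next_version`, as in the listing below, only scans for bare integer directory names never sees these folders and starts its own `0`, `1`, ... series next to them.

```python
# Standard-library sketch of the "version_<n>" layout the two components
# need to agree on.  A scan that only accepts d.isdigit() misses these names.
import os
import tempfile

def next_version(root_dir: str) -> int:
    existing = []
    for d in os.listdir(root_dir):
        if os.path.isdir(os.path.join(root_dir, d)) and d.startswith("version_"):
            existing.append(int(d.split("_")[1]))
    return max(existing) + 1 if existing else 0

root = tempfile.mkdtemp()
for _ in range(3):
    os.makedirs(os.path.join(root, f"version_{next_version(root)}"))
print(sorted(os.listdir(root)))  # ['version_0', 'version_1', 'version_2']
```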
<code>
[start of pytorch_lightning/logging/tensorboard.py]
1 import os
2 from warnings import warn
3 from argparse import Namespace
4 from pkg_resources import parse_version
5
6 import torch
7 import pandas as pd
8 from torch.utils.tensorboard import SummaryWriter
9
10 from .base import LightningLoggerBase, rank_zero_only
11
12
13 class TensorBoardLogger(LightningLoggerBase):
14 r"""
15
16 Log to local file system in TensorBoard format
17
18 Implemented using :class:`torch.utils.tensorboard.SummaryWriter`. Logs are saved to
19 `os.path.join(save_dir, name, version)`
20
21 Example
22 --------
23
24 .. code-block:: python
25
26 logger = TensorBoardLogger("tb_logs", name="my_model")
27 trainer = Trainer(logger=logger)
28 trainer.train(model)
29
30 Args:
31 save_dir (str): Save directory
32 name (str): Experiment name. Defaults to "default".
33 version (int): Experiment version. If version is not specified the logger inspects the save
34 directory for existing versions, then automatically assigns the next available version.
35 \**kwargs (dict): Other arguments are passed directly to the :class:`SummaryWriter` constructor.
36
37 """
38 NAME_CSV_TAGS = 'meta_tags.csv'
39
40 def __init__(self, save_dir, name="default", version=None, **kwargs):
41 super().__init__()
42 self.save_dir = save_dir
43 self._name = name
44 self._version = version
45
46 self._experiment = None
47 self.tags = {}
48 self.kwargs = kwargs
49
50 @property
51 def experiment(self):
52 r"""
53
54 Actual tensorboard object. To use tensorboard features do the following.
55
56 Example::
57
58 self.logger.experiment.some_tensorboard_function()
59
60 """
61 if self._experiment is not None:
62 return self._experiment
63
64 root_dir = os.path.join(self.save_dir, self.name)
65 os.makedirs(root_dir, exist_ok=True)
66 log_dir = os.path.join(root_dir, str(self.version))
67 self._experiment = SummaryWriter(log_dir=log_dir, **self.kwargs)
68 return self._experiment
69
70 @rank_zero_only
71 def log_hyperparams(self, params):
72 if params is None:
73 return
74
75 # in case converting from namespace
76 if isinstance(params, Namespace):
77 params = vars(params)
78 params = dict(params)
79
80 if parse_version(torch.__version__) < parse_version("1.3.0"):
81 warn(
82 f"Hyperparameter logging is not available for Torch version {torch.__version__}."
83 " Skipping log_hyperparams. Upgrade to Torch 1.3.0 or above to enable"
84 " hyperparameter logging."
85 )
86 else:
87 # `add_hparams` requires both - hparams and metric
88 self.experiment.add_hparams(hparam_dict=params, metric_dict={})
89 # some alternative should be added
90 self.tags.update(params)
91
92 @rank_zero_only
93 def log_metrics(self, metrics, step=None):
94 for k, v in metrics.items():
95 if isinstance(v, torch.Tensor):
96 v = v.item()
97 self.experiment.add_scalar(k, v, step)
98
99 @rank_zero_only
100 def save(self):
101 try:
102 self.experiment.flush()
103 except AttributeError:
104 # you are using PT version (<v1.2) which does not have implemented flush
105 self.experiment._get_file_writer().flush()
106
107 # create a preudo standard path ala test-tube
108 dir_path = os.path.join(self.save_dir, self.name, 'version_%s' % self.version)
109 if not os.path.isdir(dir_path):
110 dir_path = self.save_dir
111 # prepare the file path
112 meta_tags_path = os.path.join(dir_path, self.NAME_CSV_TAGS)
113 # save the metatags file
114 df = pd.DataFrame({'key': list(self.tags.keys()),
115 'value': list(self.tags.values())})
116 df.to_csv(meta_tags_path, index=False)
117
118 @rank_zero_only
119 def finalize(self, status):
120 self.save()
121
122 @property
123 def name(self):
124 return self._name
125
126 @property
127 def version(self):
128 if self._version is None:
129 self._version = self._get_next_version()
130 return self._version
131
132 def _get_next_version(self):
133 root_dir = os.path.join(self.save_dir, self.name)
134 existing_versions = [
135 int(d) for d in os.listdir(root_dir) if os.path.isdir(os.path.join(root_dir, d)) and d.isdigit()
136 ]
137 if len(existing_versions) == 0:
138 return 0
139 else:
140 return max(existing_versions) + 1
141
[end of pytorch_lightning/logging/tensorboard.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pytorch_lightning/logging/tensorboard.py b/pytorch_lightning/logging/tensorboard.py
--- a/pytorch_lightning/logging/tensorboard.py
+++ b/pytorch_lightning/logging/tensorboard.py
@@ -63,7 +63,7 @@
root_dir = os.path.join(self.save_dir, self.name)
os.makedirs(root_dir, exist_ok=True)
- log_dir = os.path.join(root_dir, str(self.version))
+ log_dir = os.path.join(root_dir, "version_" + str(self.version))
self._experiment = SummaryWriter(log_dir=log_dir, **self.kwargs)
return self._experiment
@@ -131,9 +131,11 @@
def _get_next_version(self):
root_dir = os.path.join(self.save_dir, self.name)
- existing_versions = [
- int(d) for d in os.listdir(root_dir) if os.path.isdir(os.path.join(root_dir, d)) and d.isdigit()
- ]
+ existing_versions = []
+ for d in os.listdir(root_dir):
+ if os.path.isdir(os.path.join(root_dir, d)) and d.startswith("version_"):
+ existing_versions.append(int(d.split("_")[1]))
+
if len(existing_versions) == 0:
return 0
else:
| {"golden_diff": "diff --git a/pytorch_lightning/logging/tensorboard.py b/pytorch_lightning/logging/tensorboard.py\n--- a/pytorch_lightning/logging/tensorboard.py\n+++ b/pytorch_lightning/logging/tensorboard.py\n@@ -63,7 +63,7 @@\n \n root_dir = os.path.join(self.save_dir, self.name)\n os.makedirs(root_dir, exist_ok=True)\n- log_dir = os.path.join(root_dir, str(self.version))\n+ log_dir = os.path.join(root_dir, \"version_\" + str(self.version))\n self._experiment = SummaryWriter(log_dir=log_dir, **self.kwargs)\n return self._experiment\n \n@@ -131,9 +131,11 @@\n \n def _get_next_version(self):\n root_dir = os.path.join(self.save_dir, self.name)\n- existing_versions = [\n- int(d) for d in os.listdir(root_dir) if os.path.isdir(os.path.join(root_dir, d)) and d.isdigit()\n- ]\n+ existing_versions = []\n+ for d in os.listdir(root_dir):\n+ if os.path.isdir(os.path.join(root_dir, d)) and d.startswith(\"version_\"):\n+ existing_versions.append(int(d.split(\"_\")[1]))\n+\n if len(existing_versions) == 0:\n return 0\n else:\n", "issue": "TensorBoardLogger and ModelCheckpoint are not using the same folder by default\n## \ud83d\udc1b Bug\r\n(master branch)\r\nBy default, the TensorBoardLogger writes logs into `lightning_logs/0` but ModelCheckpoint writes checkpoint into `lightning_logs/version_0`.\n", "before_files": [{"content": "import os\nfrom warnings import warn\nfrom argparse import Namespace\nfrom pkg_resources import parse_version\n\nimport torch\nimport pandas as pd\nfrom torch.utils.tensorboard import SummaryWriter\n\nfrom .base import LightningLoggerBase, rank_zero_only\n\n\nclass TensorBoardLogger(LightningLoggerBase):\n r\"\"\"\n\n Log to local file system in TensorBoard format\n\n Implemented using :class:`torch.utils.tensorboard.SummaryWriter`. Logs are saved to\n `os.path.join(save_dir, name, version)`\n\n Example\n --------\n\n .. code-block:: python\n\n logger = TensorBoardLogger(\"tb_logs\", name=\"my_model\")\n trainer = Trainer(logger=logger)\n trainer.train(model)\n\n Args:\n save_dir (str): Save directory\n name (str): Experiment name. Defaults to \"default\".\n version (int): Experiment version. If version is not specified the logger inspects the save\n directory for existing versions, then automatically assigns the next available version.\n \\**kwargs (dict): Other arguments are passed directly to the :class:`SummaryWriter` constructor.\n\n \"\"\"\n NAME_CSV_TAGS = 'meta_tags.csv'\n\n def __init__(self, save_dir, name=\"default\", version=None, **kwargs):\n super().__init__()\n self.save_dir = save_dir\n self._name = name\n self._version = version\n\n self._experiment = None\n self.tags = {}\n self.kwargs = kwargs\n\n @property\n def experiment(self):\n r\"\"\"\n\n Actual tensorboard object. 
To use tensorboard features do the following.\n\n Example::\n\n self.logger.experiment.some_tensorboard_function()\n\n \"\"\"\n if self._experiment is not None:\n return self._experiment\n\n root_dir = os.path.join(self.save_dir, self.name)\n os.makedirs(root_dir, exist_ok=True)\n log_dir = os.path.join(root_dir, str(self.version))\n self._experiment = SummaryWriter(log_dir=log_dir, **self.kwargs)\n return self._experiment\n\n @rank_zero_only\n def log_hyperparams(self, params):\n if params is None:\n return\n\n # in case converting from namespace\n if isinstance(params, Namespace):\n params = vars(params)\n params = dict(params)\n\n if parse_version(torch.__version__) < parse_version(\"1.3.0\"):\n warn(\n f\"Hyperparameter logging is not available for Torch version {torch.__version__}.\"\n \" Skipping log_hyperparams. Upgrade to Torch 1.3.0 or above to enable\"\n \" hyperparameter logging.\"\n )\n else:\n # `add_hparams` requires both - hparams and metric\n self.experiment.add_hparams(hparam_dict=params, metric_dict={})\n # some alternative should be added\n self.tags.update(params)\n\n @rank_zero_only\n def log_metrics(self, metrics, step=None):\n for k, v in metrics.items():\n if isinstance(v, torch.Tensor):\n v = v.item()\n self.experiment.add_scalar(k, v, step)\n\n @rank_zero_only\n def save(self):\n try:\n self.experiment.flush()\n except AttributeError:\n # you are using PT version (<v1.2) which does not have implemented flush\n self.experiment._get_file_writer().flush()\n\n # create a preudo standard path ala test-tube\n dir_path = os.path.join(self.save_dir, self.name, 'version_%s' % self.version)\n if not os.path.isdir(dir_path):\n dir_path = self.save_dir\n # prepare the file path\n meta_tags_path = os.path.join(dir_path, self.NAME_CSV_TAGS)\n # save the metatags file\n df = pd.DataFrame({'key': list(self.tags.keys()),\n 'value': list(self.tags.values())})\n df.to_csv(meta_tags_path, index=False)\n\n @rank_zero_only\n def finalize(self, status):\n self.save()\n\n @property\n def name(self):\n return self._name\n\n @property\n def version(self):\n if self._version is None:\n self._version = self._get_next_version()\n return self._version\n\n def _get_next_version(self):\n root_dir = os.path.join(self.save_dir, self.name)\n existing_versions = [\n int(d) for d in os.listdir(root_dir) if os.path.isdir(os.path.join(root_dir, d)) and d.isdigit()\n ]\n if len(existing_versions) == 0:\n return 0\n else:\n return max(existing_versions) + 1\n", "path": "pytorch_lightning/logging/tensorboard.py"}]} | 1,908 | 286 |
gh_patches_debug_17184 | rasdani/github-patches | git_diff | comic__grand-challenge.org-33 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Do not print page title above each page
Having an h1 HOME on your home page looks stupid. Either remove this completely and show the currently selected page in the menu, or put the page title at the top of the content by default, so it is there by default but can be edited away if needed.
</issue>
<code>
[start of django/comicsite/views.py]
1 '''
2 Created on Jun 18, 2012
3
4 Testing views. Each of these views is referenced in urls.py
5
6 @author: Sjoerd
7 '''
8
9 from django.http import HttpResponse
10 from django.http import Http404
11 from django.shortcuts import render_to_response
12 from django.template import RequestContext
13
14 from comicsite.models import ComicSite,Page,ComicSiteException
15 from dataproviders import FileSystemDataProvider
16
17
18 def index(request):
19 return HttpResponse("ComicSite index page.",context_instance=RequestContext(request))
20
21
22 def site(request, site_short_name):
23 """ show a single COMIC site, default start page """
24 #TODO: Is it bad to use site name here, which is not the specified key?
25
26 site = getSite(site_short_name)
27
28 pages = getPages(site_short_name)
29
30 return render_to_response('page.html', {'site': site, 'page': pages[0], "pages":pages },context_instance=RequestContext(request))
31
32
33 def page(request, site_short_name, page_title):
34 """ show a single page on a site """
35
36 try:
37 p = Page.objects.get(ComicSite__short_name=site_short_name, title=page_title)
38 except Page.DoesNotExist:
39 raise Http404
40 pages = getPages(site_short_name)
41
42 return render_to_response('page.html', {'site': p.ComicSite, 'page': p, "pages":pages },context_instance=RequestContext(request))
43
44
45
46
47 def dataPage(request):
48 """ test function for data provider. Just get some files from provider and show them as list"""
49 #= r"D:\userdata\Sjoerd\Aptana Studio 3 Workspace\comic-django\django\static\files"
50
51 path = r"D:\userdata\Sjoerd\Aptana Studio 3 Workspace\comic-django\django\static\files"
52 dp = FileSystemDataProvider.FileSystemDataProvider(path)
53 images = dp.getImages()
54
55 htmlOut = "available files:"+", ".join(images)
56 p = createTestPage(html=htmlOut)
57 pages = [p]
58
59 return render_to_response('page.html', {'site': p.ComicSite, 'page': p, "pages":pages },context_instance=RequestContext(request))
60
61 # ======================================== not called directly from urls.py =========================================
62
63 def getSite(site_short_name):
64 try:
65 site = ComicSite.objects.get(short_name=site_short_name)
66 except ComicSite.DoesNotExist:
67 raise Http404
68 return site
69
70
71 def getPages(site_short_name):
72 """ get all pages of the given site from db"""
73 try:
74 pages = Page.objects.filter(ComicSite__short_name=site_short_name)
75 except Page.DoesNotExist:
76 raise Http404
77 return pages
78
79 # trying to follow pep 0008 here, finally.
80 def site_exists(site_short_name):
81 try:
82 site = ComicSite.objects.get(short_name=site_short_name)
83 return True
84 except ComicSite.DoesNotExist:
85 return False
86
87
88 # ====================================================== debug and test ==================================================
89 def createTestPage(title="testPage",html=""):
90 """ Create a quick mockup on the ComicSite 'Test'"""
91
92 if site_exists("test"):
93 #TODO log a warning here, no exception.
94 raise ComicSiteException("I am creating a spoof ComicSite called 'test' on the fly, by a project called 'test' was already defined in DB. This message should be a warning instead of an exception")
95
96 # if no site exists by that name, create it on the fly.
97 site = ComicSite()
98 site.short_name = "test"
99 site.name = "Test Page"
100 site.skin = ""
101
102 return Page(ComicSite=site,title=title,html=html)
103
104
105 def givePageHTML(page):
106 return "<h1>%s</h1> <p>%s</p>" %(page.title ,page.html)
[end of django/comicsite/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/django/comicsite/views.py b/django/comicsite/views.py
--- a/django/comicsite/views.py
+++ b/django/comicsite/views.py
@@ -27,7 +27,7 @@
pages = getPages(site_short_name)
- return render_to_response('page.html', {'site': site, 'page': pages[0], "pages":pages },context_instance=RequestContext(request))
+ return render_to_response('page.html', {'site': site, 'currentpage': pages[0], "pages":pages },context_instance=RequestContext(request))
def page(request, site_short_name, page_title):
@@ -39,7 +39,7 @@
raise Http404
pages = getPages(site_short_name)
- return render_to_response('page.html', {'site': p.ComicSite, 'page': p, "pages":pages },context_instance=RequestContext(request))
+ return render_to_response('page.html', {'site': p.ComicSite, 'currentpage': p, "pages":pages },context_instance=RequestContext(request))
| {"golden_diff": "diff --git a/django/comicsite/views.py b/django/comicsite/views.py\n--- a/django/comicsite/views.py\n+++ b/django/comicsite/views.py\n@@ -27,7 +27,7 @@\n \n pages = getPages(site_short_name)\n \n- return render_to_response('page.html', {'site': site, 'page': pages[0], \"pages\":pages },context_instance=RequestContext(request))\n+ return render_to_response('page.html', {'site': site, 'currentpage': pages[0], \"pages\":pages },context_instance=RequestContext(request))\n \n \n def page(request, site_short_name, page_title):\n@@ -39,7 +39,7 @@\n raise Http404\n pages = getPages(site_short_name)\n \n- return render_to_response('page.html', {'site': p.ComicSite, 'page': p, \"pages\":pages },context_instance=RequestContext(request))\n+ return render_to_response('page.html', {'site': p.ComicSite, 'currentpage': p, \"pages\":pages },context_instance=RequestContext(request))\n", "issue": "Do not print page title above each page\nHaving a h1 HOME on your home page looks stupid. Either remove this completely and show currently selected page in menu, or put page title at top of content by default, so it is there by default but can be edited away if needed \n\n", "before_files": [{"content": "'''\nCreated on Jun 18, 2012\n\nTesting views. Each of these views is referenced in urls.py \n\n@author: Sjoerd\n'''\n\nfrom django.http import HttpResponse\nfrom django.http import Http404\nfrom django.shortcuts import render_to_response\nfrom django.template import RequestContext\n\nfrom comicsite.models import ComicSite,Page,ComicSiteException\nfrom dataproviders import FileSystemDataProvider\n\n\ndef index(request):\n return HttpResponse(\"ComicSite index page.\",context_instance=RequestContext(request))\n\n\ndef site(request, site_short_name):\n \"\"\" show a single COMIC site, default start page \"\"\"\n #TODO: Is it bad to use site name here, which is not the specified key?\n \n site = getSite(site_short_name)\n \n pages = getPages(site_short_name)\n \n return render_to_response('page.html', {'site': site, 'page': pages[0], \"pages\":pages },context_instance=RequestContext(request))\n \n\ndef page(request, site_short_name, page_title):\n \"\"\" show a single page on a site \"\"\"\n \n try:\n p = Page.objects.get(ComicSite__short_name=site_short_name, title=page_title)\n except Page.DoesNotExist: \n raise Http404\n pages = getPages(site_short_name)\n \n return render_to_response('page.html', {'site': p.ComicSite, 'page': p, \"pages\":pages },context_instance=RequestContext(request))\n \n \n \n\ndef dataPage(request):\n \"\"\" test function for data provider. 
Just get some files from provider and show them as list\"\"\"\n #= r\"D:\\userdata\\Sjoerd\\Aptana Studio 3 Workspace\\comic-django\\django\\static\\files\"\n \n path = r\"D:\\userdata\\Sjoerd\\Aptana Studio 3 Workspace\\comic-django\\django\\static\\files\"\n dp = FileSystemDataProvider.FileSystemDataProvider(path)\n images = dp.getImages()\n \n htmlOut = \"available files:\"+\", \".join(images)\n p = createTestPage(html=htmlOut)\n pages = [p]\n \n return render_to_response('page.html', {'site': p.ComicSite, 'page': p, \"pages\":pages },context_instance=RequestContext(request))\n\n# ======================================== not called directly from urls.py =========================================\n\ndef getSite(site_short_name):\n try:\n site = ComicSite.objects.get(short_name=site_short_name)\n except ComicSite.DoesNotExist: \n raise Http404 \n return site \n \n \ndef getPages(site_short_name):\n \"\"\" get all pages of the given site from db\"\"\"\n try:\n pages = Page.objects.filter(ComicSite__short_name=site_short_name)\n except Page.DoesNotExist: \n raise Http404\n return pages\n\n# trying to follow pep 0008 here, finally.\ndef site_exists(site_short_name):\n try:\n site = ComicSite.objects.get(short_name=site_short_name)\n return True\n except ComicSite.DoesNotExist: \n return False\n \n \n# ====================================================== debug and test ==================================================\ndef createTestPage(title=\"testPage\",html=\"\"):\n \"\"\" Create a quick mockup on the ComicSite 'Test'\"\"\"\n \n if site_exists(\"test\"):\n #TODO log a warning here, no exception.\n raise ComicSiteException(\"I am creating a spoof ComicSite called 'test' on the fly, by a project called 'test' was already defined in DB. This message should be a warning instead of an exception\") \n \n # if no site exists by that name, create it on the fly.\n site = ComicSite()\n site.short_name = \"test\"\n site.name = \"Test Page\"\n site.skin = \"\"\n \n return Page(ComicSite=site,title=title,html=html)\n \n\ndef givePageHTML(page):\n return \"<h1>%s</h1> <p>%s</p>\" %(page.title ,page.html)", "path": "django/comicsite/views.py"}]} | 1,655 | 246 |
gh_patches_debug_19590 | rasdani/github-patches | git_diff | comic__grand-challenge.org-1913 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Change order of the blog posts
Currently all blog posts that are published on grand-challenge are sorted based on the date the post was initially created. We would like to change this to the date the post was published, such that the most recent posts are shown on top. I had contact with @jmsmkn on Slack, after which he suggested a few changes I could make in the code to change this. After discussing this with Kiran we thought it might be best to first create an issue here.
</issue>
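
A framework-free sketch of the requested behaviour (hypothetical `Post` class, not the Django model shown below): stamp the moment a post first becomes published and order the listing by that timestamp instead of by the creation time.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class Post:
    title: str
    created: datetime
    published_at: Optional[datetime] = None

    def publish(self, when: Optional[datetime] = None) -> None:
        # Stamp the publication moment only on the first transition to published.
        if self.published_at is None:
            self.published_at = when or datetime.now(timezone.utc)

old_post = Post("written first", created=datetime(2020, 1, 1, tzinfo=timezone.utc))
new_post = Post("written later", created=datetime(2021, 6, 1, tzinfo=timezone.utc))

new_post.publish(datetime(2021, 7, 1, tzinfo=timezone.utc))
old_post.publish(datetime(2021, 8, 1, tzinfo=timezone.utc))  # published most recently

listing: List[Post] = sorted(
    [new_post, old_post], key=lambda p: p.published_at, reverse=True
)
print([p.title for p in listing])  # ['written first', 'written later']
```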
<code>
[start of app/grandchallenge/blogs/models.py]
1 from django.conf import settings
2 from django.contrib.auth import get_user_model
3 from django.db import models
4 from django_extensions.db.fields import AutoSlugField
5 from simple_history.models import HistoricalRecords
6 from stdimage import JPEGField
7
8 from grandchallenge.core.storage import get_logo_path, public_s3_storage
9 from grandchallenge.subdomains.utils import reverse
10
11
12 class Tag(models.Model):
13 name = models.CharField(max_length=200, unique=True)
14 slug = AutoSlugField(populate_from="name", max_length=200)
15
16 def __str__(self):
17 return self.name
18
19
20 class Post(models.Model):
21 created = models.DateTimeField(auto_now_add=True)
22 modified = models.DateTimeField(auto_now=True)
23
24 title = models.CharField(max_length=1024)
25 slug = AutoSlugField(populate_from="title", max_length=1024)
26 description = models.TextField()
27 content = models.TextField()
28
29 authors = models.ManyToManyField(
30 to=get_user_model(), related_name="blog_authors"
31 )
32
33 logo = JPEGField(
34 upload_to=get_logo_path,
35 storage=public_s3_storage,
36 variations=settings.STDIMAGE_SOCIAL_VARIATIONS,
37 )
38
39 tags = models.ManyToManyField(to=Tag, blank=True, related_name="posts")
40
41 published = models.BooleanField(default=False)
42
43 history = HistoricalRecords()
44
45 class Meta:
46 ordering = ("-created",)
47
48 def __str__(self):
49 return self.title
50
51 def get_absolute_url(self):
52 return reverse("blogs:detail", kwargs={"slug": self.slug})
53
54 @property
55 def public(self):
56 return self.published
57
[end of app/grandchallenge/blogs/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/grandchallenge/blogs/models.py b/app/grandchallenge/blogs/models.py
--- a/app/grandchallenge/blogs/models.py
+++ b/app/grandchallenge/blogs/models.py
@@ -1,6 +1,7 @@
from django.conf import settings
from django.contrib.auth import get_user_model
from django.db import models
+from django.utils import timezone
from django_extensions.db.fields import AutoSlugField
from simple_history.models import HistoricalRecords
from stdimage import JPEGField
@@ -48,6 +49,16 @@
def __str__(self):
return self.title
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self._published_orig = self.published
+
+ def save(self, *args, **kwargs):
+ if self._published_orig is False and self.published is True:
+ self.created = timezone.now()
+
+ super().save(*args, **kwargs)
+
def get_absolute_url(self):
return reverse("blogs:detail", kwargs={"slug": self.slug})
| {"golden_diff": "diff --git a/app/grandchallenge/blogs/models.py b/app/grandchallenge/blogs/models.py\n--- a/app/grandchallenge/blogs/models.py\n+++ b/app/grandchallenge/blogs/models.py\n@@ -1,6 +1,7 @@\n from django.conf import settings\n from django.contrib.auth import get_user_model\n from django.db import models\n+from django.utils import timezone\n from django_extensions.db.fields import AutoSlugField\n from simple_history.models import HistoricalRecords\n from stdimage import JPEGField\n@@ -48,6 +49,16 @@\n def __str__(self):\n return self.title\n \n+ def __init__(self, *args, **kwargs):\n+ super().__init__(*args, **kwargs)\n+ self._published_orig = self.published\n+\n+ def save(self, *args, **kwargs):\n+ if self._published_orig is False and self.published is True:\n+ self.created = timezone.now()\n+\n+ super().save(*args, **kwargs)\n+\n def get_absolute_url(self):\n return reverse(\"blogs:detail\", kwargs={\"slug\": self.slug})\n", "issue": "Change order of the blog posts\nCurrently all blog posts that are published on grand-challenge are sorted based on the date the post was initially created. We would like to change this to the date the post was published such that the most recent post are shown on top. I had contact with @jmsmkn on slack after which he suggested a few changes I could make in the code to change this. After discussing this with Kiran we thought it might be best to first create an issue here. \r\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.contrib.auth import get_user_model\nfrom django.db import models\nfrom django_extensions.db.fields import AutoSlugField\nfrom simple_history.models import HistoricalRecords\nfrom stdimage import JPEGField\n\nfrom grandchallenge.core.storage import get_logo_path, public_s3_storage\nfrom grandchallenge.subdomains.utils import reverse\n\n\nclass Tag(models.Model):\n name = models.CharField(max_length=200, unique=True)\n slug = AutoSlugField(populate_from=\"name\", max_length=200)\n\n def __str__(self):\n return self.name\n\n\nclass Post(models.Model):\n created = models.DateTimeField(auto_now_add=True)\n modified = models.DateTimeField(auto_now=True)\n\n title = models.CharField(max_length=1024)\n slug = AutoSlugField(populate_from=\"title\", max_length=1024)\n description = models.TextField()\n content = models.TextField()\n\n authors = models.ManyToManyField(\n to=get_user_model(), related_name=\"blog_authors\"\n )\n\n logo = JPEGField(\n upload_to=get_logo_path,\n storage=public_s3_storage,\n variations=settings.STDIMAGE_SOCIAL_VARIATIONS,\n )\n\n tags = models.ManyToManyField(to=Tag, blank=True, related_name=\"posts\")\n\n published = models.BooleanField(default=False)\n\n history = HistoricalRecords()\n\n class Meta:\n ordering = (\"-created\",)\n\n def __str__(self):\n return self.title\n\n def get_absolute_url(self):\n return reverse(\"blogs:detail\", kwargs={\"slug\": self.slug})\n\n @property\n def public(self):\n return self.published\n", "path": "app/grandchallenge/blogs/models.py"}]} | 1,103 | 239 |
gh_patches_debug_643 | rasdani/github-patches | git_diff | pex-tool__pex-1925 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.107
On the docket:
+ [x] `git` username replaced with `****` redaction in lockfile for `git+ssh` direct references #1918
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.106"
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.106"
+__version__ = "2.1.107"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.106\"\n+__version__ = \"2.1.107\"\n", "issue": "Release 2.1.107\nOn the docket:\r\n+ [x] `git` username replaced with `****` redaction in lockfile for `git+ssh` direct references #1918\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.106\"\n", "path": "pex/version.py"}]} | 631 | 99 |
gh_patches_debug_1877 | rasdani/github-patches | git_diff | conan-io__conan-2921 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
local cache inconsistent after enabling short_paths in a recipe
To help us debug your issue please explain:
- [x] I've read the [CONTRIBUTING guide](https://raw.githubusercontent.com/conan-io/conan/develop/.github/CONTRIBUTING.md).
- [x] I've specified the Conan version, operating system version and any tool that can be relevant.
- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
Conan Version 1.3.3
Windows 10
With a package in the local cache whose recipe does NOT have `short_paths=True`, modify the recipe in the normal development folder to set `short_paths=True` and run `conan create`.
Folders in the local cache become inconsistent, showing both the folders from the previous `conan create` run and `.conan_link` files pointing to the short-paths folders.
This seems to not affect conan tool behavior when running commands, and works well if `short_paths` is removed once again.
</issue>
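
To picture the reported state, here is a standard-library sketch (hypothetical `.cache_link` name; not Conan's implementation) of a path-shortening helper that removes any pre-existing long-path directory before writing the redirect, so stale content from an earlier `conan create` cannot coexist with the link file.

```python
import os
import shutil
import tempfile

LINK_NAME = ".cache_link"   # stands in for Conan's .conan_link

def shorten(path: str, short_root: str) -> str:
    link = os.path.join(path, LINK_NAME)
    if os.path.exists(link):
        with open(link) as fh:
            return fh.read()
    if os.path.exists(path):          # the step the report is about
        shutil.rmtree(path)           # drop stale long-path content first
    os.makedirs(path)
    redirect = tempfile.mkdtemp(dir=short_root)
    with open(link, "w") as fh:
        fh.write(redirect)
    return redirect

cache = tempfile.mkdtemp()
short_root = tempfile.mkdtemp()
pkg = os.path.join(cache, "mypkg", "1.0", "build")

os.makedirs(pkg)
open(os.path.join(pkg, "stale.txt"), "w").close()   # produced without short paths
print(sorted(os.listdir(pkg)))       # ['stale.txt']
shorten(pkg, short_root)             # now enable the short-path behaviour
print(sorted(os.listdir(pkg)))       # ['.cache_link'] only; no stale files left
```

The key line is the `shutil.rmtree` call before the redirect is created; without it, both the old files and the link end up in the same folder.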
<code>
[start of conans/util/windows.py]
1 import os
2 import subprocess
3
4 from conans.util.files import load, mkdir, save, rmdir
5 import tempfile
6
7
8 CONAN_LINK = ".conan_link"
9
10
11 def conan_expand_user(path):
12 """ wrapper to the original expanduser function, to workaround python returning
13 verbatim %USERPROFILE% when some other app (git for windows) sets HOME envvar
14 """
15 # In win these variables should exist and point to user directory, which
16 # must exist. Using context to avoid permanent modification of os.environ
17 old_env = dict(os.environ)
18 try:
19 home = os.environ.get("HOME")
20 # Problematic cases of wrong HOME variable
21 # - HOME = %USERPROFILE% verbatim, as messed by some other tools
22 # - MSYS console, that defines a different user home in /c/mingw/msys/users/xxx
23 # In these cases, it is safe to remove it and rely on USERPROFILE directly
24 if home and (not os.path.exists(home) or
25 (os.getenv("MSYSTEM") and os.getenv("USERPROFILE"))):
26 del os.environ["HOME"]
27 result = os.path.expanduser(path)
28 finally:
29 os.environ.clear()
30 os.environ.update(old_env)
31 return result
32
33
34 def path_shortener(path, short_paths):
35 """ short_paths is 4-state:
36 False: Never shorten the path
37 True: Always shorten the path, create link if not existing
38 None: Use shorten path only if already exists, not create
39 """
40 if short_paths is False or os.getenv("CONAN_USER_HOME_SHORT") == "None":
41 return path
42 link = os.path.join(path, CONAN_LINK)
43 if os.path.exists(link):
44 return load(link)
45 elif short_paths is None:
46 return path
47
48 short_home = os.getenv("CONAN_USER_HOME_SHORT")
49 if not short_home:
50 drive = os.path.splitdrive(path)[0]
51 short_home = drive + "/.conan"
52 mkdir(short_home)
53
54 # Workaround for short_home living in NTFS file systems. Give full control permission to current user to avoid
55 # access problems in cygwin/msys2 windows subsystems when using short_home folder
56 try:
57 username = os.getenv("USERDOMAIN")
58 domainname = "%s\%s" % (username, os.environ["USERNAME"]) if username else os.environ["USERNAME"]
59 cmd = r'cacls %s /E /G "%s":F' % (short_home, domainname)
60 subprocess.check_output(cmd, stderr=subprocess.STDOUT) # Ignoring any returned output, make command quiet
61 except subprocess.CalledProcessError:
62 # cmd can fail if trying to set ACL in non NTFS drives, ignoring it.
63 pass
64
65 redirect = tempfile.mkdtemp(dir=short_home, prefix="")
66 # This "1" is the way to have a non-existing directory, so commands like
67 # shutil.copytree() to it, works. It can be removed without compromising the
68 # temp folder generator and conan-links consistency
69 redirect = os.path.join(redirect, "1")
70 save(link, redirect)
71 return redirect
72
73
74 def ignore_long_path_files(src_folder, build_folder, output):
75 def _filter(src, files):
76 filtered_files = []
77 for the_file in files:
78 source_path = os.path.join(src, the_file)
79 # Without storage path, just relative
80 rel_path = os.path.relpath(source_path, src_folder)
81 dest_path = os.path.normpath(os.path.join(build_folder, rel_path))
82 # it is NOT that "/" is counted as "\\" so it counts double
83 # seems a bug in python, overflows paths near the limit of 260,
84 if len(dest_path) >= 249:
85 filtered_files.append(the_file)
86 output.warn("Filename too long, file excluded: %s" % dest_path)
87 return filtered_files
88 return _filter
89
90
91 def rm_conandir(path):
92 """removal of a directory that might contain a link to a short path"""
93 link = os.path.join(path, CONAN_LINK)
94 if os.path.exists(link):
95 short_path = load(link)
96 rmdir(os.path.dirname(short_path))
97 rmdir(path)
98
[end of conans/util/windows.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conans/util/windows.py b/conans/util/windows.py
--- a/conans/util/windows.py
+++ b/conans/util/windows.py
@@ -45,6 +45,9 @@
elif short_paths is None:
return path
+ if os.path.exists(path):
+ rmdir(path)
+
short_home = os.getenv("CONAN_USER_HOME_SHORT")
if not short_home:
drive = os.path.splitdrive(path)[0]
| {"golden_diff": "diff --git a/conans/util/windows.py b/conans/util/windows.py\n--- a/conans/util/windows.py\n+++ b/conans/util/windows.py\n@@ -45,6 +45,9 @@\n elif short_paths is None:\n return path\n \n+ if os.path.exists(path):\n+ rmdir(path)\n+\n short_home = os.getenv(\"CONAN_USER_HOME_SHORT\")\n if not short_home:\n drive = os.path.splitdrive(path)[0]\n", "issue": "local cache inconsistent after enabling short_paths in a recipe\nTo help us debug your issue please explain:\r\n\r\n- [x] I've read the [CONTRIBUTING guide](https://raw.githubusercontent.com/conan-io/conan/develop/.github/CONTRIBUTING.md).\r\n- [x] I've specified the Conan version, operating system version and any tool that can be relevant.\r\n- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.\r\n\r\nConan Version 1.3.3\r\nWindows 10\r\n\r\nWith a package in local cache whose recipe does NOT have `short_paths=True`, modify in normal development folder the recipe and set `short_paths=True` and run conan create.\r\n\r\nFolders in local cache become inconsistent showing both folders from previous conan create run and .conan_link files pointing to the short paths folders.\r\n\r\nThis seems no not affect conan tool behavior when running commands and works well if `short_paths` is removed once again.\r\n\r\n\n", "before_files": [{"content": "import os\nimport subprocess\n\nfrom conans.util.files import load, mkdir, save, rmdir\nimport tempfile\n\n\nCONAN_LINK = \".conan_link\"\n\n\ndef conan_expand_user(path):\n \"\"\" wrapper to the original expanduser function, to workaround python returning\n verbatim %USERPROFILE% when some other app (git for windows) sets HOME envvar\n \"\"\"\n # In win these variables should exist and point to user directory, which\n # must exist. Using context to avoid permanent modification of os.environ\n old_env = dict(os.environ)\n try:\n home = os.environ.get(\"HOME\")\n # Problematic cases of wrong HOME variable\n # - HOME = %USERPROFILE% verbatim, as messed by some other tools\n # - MSYS console, that defines a different user home in /c/mingw/msys/users/xxx\n # In these cases, it is safe to remove it and rely on USERPROFILE directly\n if home and (not os.path.exists(home) or\n (os.getenv(\"MSYSTEM\") and os.getenv(\"USERPROFILE\"))):\n del os.environ[\"HOME\"]\n result = os.path.expanduser(path)\n finally:\n os.environ.clear()\n os.environ.update(old_env)\n return result\n\n\ndef path_shortener(path, short_paths):\n \"\"\" short_paths is 4-state:\n False: Never shorten the path\n True: Always shorten the path, create link if not existing\n None: Use shorten path only if already exists, not create\n \"\"\"\n if short_paths is False or os.getenv(\"CONAN_USER_HOME_SHORT\") == \"None\":\n return path\n link = os.path.join(path, CONAN_LINK)\n if os.path.exists(link):\n return load(link)\n elif short_paths is None:\n return path\n\n short_home = os.getenv(\"CONAN_USER_HOME_SHORT\")\n if not short_home:\n drive = os.path.splitdrive(path)[0]\n short_home = drive + \"/.conan\"\n mkdir(short_home)\n\n # Workaround for short_home living in NTFS file systems. 
Give full control permission to current user to avoid\n # access problems in cygwin/msys2 windows subsystems when using short_home folder\n try:\n username = os.getenv(\"USERDOMAIN\")\n domainname = \"%s\\%s\" % (username, os.environ[\"USERNAME\"]) if username else os.environ[\"USERNAME\"]\n cmd = r'cacls %s /E /G \"%s\":F' % (short_home, domainname)\n subprocess.check_output(cmd, stderr=subprocess.STDOUT) # Ignoring any returned output, make command quiet\n except subprocess.CalledProcessError:\n # cmd can fail if trying to set ACL in non NTFS drives, ignoring it.\n pass\n\n redirect = tempfile.mkdtemp(dir=short_home, prefix=\"\")\n # This \"1\" is the way to have a non-existing directory, so commands like\n # shutil.copytree() to it, works. It can be removed without compromising the\n # temp folder generator and conan-links consistency\n redirect = os.path.join(redirect, \"1\")\n save(link, redirect)\n return redirect\n\n\ndef ignore_long_path_files(src_folder, build_folder, output):\n def _filter(src, files):\n filtered_files = []\n for the_file in files:\n source_path = os.path.join(src, the_file)\n # Without storage path, just relative\n rel_path = os.path.relpath(source_path, src_folder)\n dest_path = os.path.normpath(os.path.join(build_folder, rel_path))\n # it is NOT that \"/\" is counted as \"\\\\\" so it counts double\n # seems a bug in python, overflows paths near the limit of 260,\n if len(dest_path) >= 249:\n filtered_files.append(the_file)\n output.warn(\"Filename too long, file excluded: %s\" % dest_path)\n return filtered_files\n return _filter\n\n\ndef rm_conandir(path):\n \"\"\"removal of a directory that might contain a link to a short path\"\"\"\n link = os.path.join(path, CONAN_LINK)\n if os.path.exists(link):\n short_path = load(link)\n rmdir(os.path.dirname(short_path))\n rmdir(path)\n", "path": "conans/util/windows.py"}]} | 1,853 | 101 |
gh_patches_debug_28256 | rasdani/github-patches | git_diff | meltano__meltano-8355 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
feature: Install multiple plugins of any type
### Feature scope
CLI (options, error messages, logging, etc.)
### Description
Currently, you can only leverage `meltano install` in parallel for all plugin types or all plugins of a specific type:
```sh
# all plugins
meltano install
# all extractors
meltano install [extractor|extractors]
# all loaders
meltano install [loader|loaders]
```
It would be great if you could install multiple plugins of any type - something like:
```sh
meltano install <extractor> <loader> <transformer> <utility>
```
This change would remove the need to specify a plugin type at all, since a plugin name is already unique to a Meltano project. This is currently not possible without a breaking change however, since a plugin type is required as the first argument to `meltano install` when specifying plugin names. #8228 introduced the `--from-file` option for `meltano config <plugin> set`, which accepts a special character `-` to refer to stdin - `meltano install` could reuse this or a similar concept to skip the plugin type argument and leverage parallel install:
```sh
meltano install - <extractor> <loader> <transformer> <utility>
```
Once a convention is established, this feature could be extended to `meltano remove` as well.
I have a POC of this working for `meltano install` locally, so happy to open a PR.
</issue>
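
A self-contained `click` sketch of the proposed interface (hypothetical plugin names; not Meltano's actual command, which is in `src/meltano/cli/install.py` below): the `-` sentinel takes the place of the plugin-type argument, and any number of plugin names may follow.

```python
import click

PLUGIN_TYPES = ("extractors", "loaders", "transformers", "utilities")
ANY = "-"

@click.command()
@click.argument("plugin_type", type=click.Choice((*PLUGIN_TYPES, ANY)), required=False)
@click.argument("plugin_name", nargs=-1, required=False)
def install(plugin_type, plugin_name):
    # "-" (or no type at all) means: resolve each name regardless of its type.
    scope = "any type" if plugin_type in (None, ANY) else plugin_type
    click.echo(f"would install {list(plugin_name) or 'all plugins'} ({scope})")

if __name__ == "__main__":
    install()
    # e.g.  python install.py - tap-github target-postgres dbt
```

Because `-` is not one of the existing plugin-type words, accepting it does not change how the current `meltano install extractors ...` form parses.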
<code>
[start of src/meltano/cli/install.py]
1 """CLI command `meltano install`."""
2
3 from __future__ import annotations
4
5 import typing as t
6
7 import click
8 import structlog
9
10 from meltano.cli.params import pass_project
11 from meltano.cli.utils import CliError, PartialInstrumentedCmd, install_plugins
12 from meltano.core.block.parser import BlockParser
13 from meltano.core.plugin import PluginType
14 from meltano.core.schedule_service import ScheduleService
15 from meltano.core.tracking.contexts import CliEvent, PluginsTrackingContext
16
17 if t.TYPE_CHECKING:
18 from meltano.core.project import Project
19 from meltano.core.tracking import Tracker
20
21 logger = structlog.getLogger(__name__)
22
23
24 @click.command(cls=PartialInstrumentedCmd, short_help="Install project dependencies.")
25 @click.argument(
26 "plugin_type",
27 type=click.Choice(PluginType.cli_arguments()),
28 required=False,
29 )
30 @click.argument("plugin_name", nargs=-1, required=False)
31 @click.option(
32 "--clean",
33 is_flag=True,
34 help="Completely reinstall a plugin rather than simply upgrading if necessary.",
35 )
36 @click.option(
37 "--parallelism",
38 "-p",
39 type=click.INT,
40 default=None,
41 help=(
42 "Limit the number of plugins to install in parallel. "
43 "Defaults to the number of cores."
44 ),
45 )
46 @click.option(
47 "--force",
48 "-f",
49 is_flag=True,
50 help="Ignore the required Python version declared by the plugins.",
51 )
52 @click.option(
53 "--schedule",
54 "-s",
55 "schedule_name",
56 help="Install all plugins from the given schedule.",
57 )
58 @click.pass_context
59 @pass_project(migrate=True)
60 def install( # noqa: C901
61 project: Project,
62 ctx: click.Context,
63 plugin_type: str,
64 plugin_name: str,
65 clean: bool,
66 parallelism: int,
67 force: bool,
68 schedule_name: str,
69 ):
70 """
71 Install all the dependencies of your project based on the meltano.yml file.
72
73 \b\nRead more at https://docs.meltano.com/reference/command-line-interface#install
74 """
75 tracker: Tracker = ctx.obj["tracker"]
76 try:
77 if plugin_type:
78 plugin_type = PluginType.from_cli_argument(plugin_type)
79 plugins = project.plugins.get_plugins_of_type(plugin_type)
80 if plugin_name:
81 plugins = [plugin for plugin in plugins if plugin.name in plugin_name]
82 else:
83 plugins = list(project.plugins.plugins())
84
85 if schedule_name:
86 schedule_plugins = _get_schedule_plugins(
87 ctx.obj["project"],
88 schedule_name,
89 )
90 plugins = list(set(plugins) & set(schedule_plugins))
91 except Exception:
92 tracker.track_command_event(CliEvent.aborted)
93 raise
94
95 click.echo(f"Installing {len(plugins)} plugins...")
96 tracker.add_contexts(
97 PluginsTrackingContext([(candidate, None) for candidate in plugins]),
98 )
99 tracker.track_command_event(CliEvent.inflight)
100
101 success = install_plugins(
102 project,
103 plugins,
104 parallelism=parallelism,
105 clean=clean,
106 force=force,
107 )
108 if not success:
109 tracker.track_command_event(CliEvent.failed)
110 raise CliError("Failed to install plugin(s)") # noqa: EM101
111 tracker.track_command_event(CliEvent.completed)
112
113
114 def _get_schedule_plugins(project: Project, schedule_name: str):
115 schedule_service = ScheduleService(project)
116 schedule_obj = schedule_service.find_schedule(schedule_name)
117 schedule_plugins = set()
118 if schedule_obj.elt_schedule:
119 for plugin_name in (schedule_obj.extractor, schedule_obj.loader):
120 schedule_plugins.add(project.plugins.find_plugin(plugin_name))
121 else:
122 task_sets = schedule_service.task_sets_service.get(schedule_obj.job)
123 for blocks in task_sets.flat_args_per_set:
124 parser = BlockParser(logger, project, blocks)
125 for plugin in parser.plugins:
126 schedule_plugins.add(
127 project.plugins.find_plugin(plugin.info.get("name"))
128 if plugin.type == PluginType.MAPPERS
129 else plugin,
130 )
131 return schedule_plugins
132
[end of src/meltano/cli/install.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/meltano/cli/install.py b/src/meltano/cli/install.py
--- a/src/meltano/cli/install.py
+++ b/src/meltano/cli/install.py
@@ -18,13 +18,15 @@
from meltano.core.project import Project
from meltano.core.tracking import Tracker
+ANY = "-"
+
logger = structlog.getLogger(__name__)
@click.command(cls=PartialInstrumentedCmd, short_help="Install project dependencies.")
@click.argument(
"plugin_type",
- type=click.Choice(PluginType.cli_arguments()),
+ type=click.Choice((*PluginType.cli_arguments(), ANY)),
required=False,
)
@click.argument("plugin_name", nargs=-1, required=False)
@@ -74,14 +76,15 @@
"""
tracker: Tracker = ctx.obj["tracker"]
try:
- if plugin_type:
+ if plugin_type and plugin_type != ANY:
plugin_type = PluginType.from_cli_argument(plugin_type)
plugins = project.plugins.get_plugins_of_type(plugin_type)
- if plugin_name:
- plugins = [plugin for plugin in plugins if plugin.name in plugin_name]
else:
plugins = list(project.plugins.plugins())
+ if plugin_name:
+ plugins = [plugin for plugin in plugins if plugin.name in plugin_name]
+
if schedule_name:
schedule_plugins = _get_schedule_plugins(
ctx.obj["project"],
| {"golden_diff": "diff --git a/src/meltano/cli/install.py b/src/meltano/cli/install.py\n--- a/src/meltano/cli/install.py\n+++ b/src/meltano/cli/install.py\n@@ -18,13 +18,15 @@\n from meltano.core.project import Project\n from meltano.core.tracking import Tracker\n \n+ANY = \"-\"\n+\n logger = structlog.getLogger(__name__)\n \n \n @click.command(cls=PartialInstrumentedCmd, short_help=\"Install project dependencies.\")\n @click.argument(\n \"plugin_type\",\n- type=click.Choice(PluginType.cli_arguments()),\n+ type=click.Choice((*PluginType.cli_arguments(), ANY)),\n required=False,\n )\n @click.argument(\"plugin_name\", nargs=-1, required=False)\n@@ -74,14 +76,15 @@\n \"\"\"\n tracker: Tracker = ctx.obj[\"tracker\"]\n try:\n- if plugin_type:\n+ if plugin_type and plugin_type != ANY:\n plugin_type = PluginType.from_cli_argument(plugin_type)\n plugins = project.plugins.get_plugins_of_type(plugin_type)\n- if plugin_name:\n- plugins = [plugin for plugin in plugins if plugin.name in plugin_name]\n else:\n plugins = list(project.plugins.plugins())\n \n+ if plugin_name:\n+ plugins = [plugin for plugin in plugins if plugin.name in plugin_name]\n+\n if schedule_name:\n schedule_plugins = _get_schedule_plugins(\n ctx.obj[\"project\"],\n", "issue": "feature: Install multiple plugins of any type\n### Feature scope\r\n\r\nCLI (options, error messages, logging, etc.)\r\n\r\n### Description\r\n\r\nCurrently, you can only leverage `meltano install` in parallel for all plugin types or all plugins of a specific type:\r\n\r\n```sh\r\n# all plugins\r\nmeltano install\r\n\r\n# all extractors\r\nmeltano install [extractor|extractors]\r\n\r\n# all loaders\r\nmeltano install [loader|loaders]\r\n```\r\n\r\nIt would be great if you could install multiple plugins of any type - something like:\r\n\r\n```sh\r\nmeltano install <extractor> <loader> <transformer> <utility>\r\n```\r\n\r\nThis change would remove the need to specify a plugin type at all, since a plugin name is already unique to a Meltano project. This is currently not possible without a breaking change however, since a plugin type is required as the first argument to `meltano install` when specifying plugin names. 
#8228 introduced the `--from-file` option for `meltano config <plugin> set`, which accepts a special character `-` to refer to stdin - `meltano install` could reuse this or a similar concept to skip the plugin type argument and leverage parallel install:\r\n\r\n```sh\r\nmeltano install - <extractor> <loader> <transformer> <utility>\r\n```\r\n\r\nOnce a convention is established, this feature could be extended to `meltano remove` as well.\r\n\r\nI have a POC of this working for `meltano install` locally, so happy to open a PR.\n", "before_files": [{"content": "\"\"\"CLI command `meltano install`.\"\"\"\n\nfrom __future__ import annotations\n\nimport typing as t\n\nimport click\nimport structlog\n\nfrom meltano.cli.params import pass_project\nfrom meltano.cli.utils import CliError, PartialInstrumentedCmd, install_plugins\nfrom meltano.core.block.parser import BlockParser\nfrom meltano.core.plugin import PluginType\nfrom meltano.core.schedule_service import ScheduleService\nfrom meltano.core.tracking.contexts import CliEvent, PluginsTrackingContext\n\nif t.TYPE_CHECKING:\n from meltano.core.project import Project\n from meltano.core.tracking import Tracker\n\nlogger = structlog.getLogger(__name__)\n\n\[email protected](cls=PartialInstrumentedCmd, short_help=\"Install project dependencies.\")\[email protected](\n \"plugin_type\",\n type=click.Choice(PluginType.cli_arguments()),\n required=False,\n)\[email protected](\"plugin_name\", nargs=-1, required=False)\[email protected](\n \"--clean\",\n is_flag=True,\n help=\"Completely reinstall a plugin rather than simply upgrading if necessary.\",\n)\[email protected](\n \"--parallelism\",\n \"-p\",\n type=click.INT,\n default=None,\n help=(\n \"Limit the number of plugins to install in parallel. \"\n \"Defaults to the number of cores.\"\n ),\n)\[email protected](\n \"--force\",\n \"-f\",\n is_flag=True,\n help=\"Ignore the required Python version declared by the plugins.\",\n)\[email protected](\n \"--schedule\",\n \"-s\",\n \"schedule_name\",\n help=\"Install all plugins from the given schedule.\",\n)\[email protected]_context\n@pass_project(migrate=True)\ndef install( # noqa: C901\n project: Project,\n ctx: click.Context,\n plugin_type: str,\n plugin_name: str,\n clean: bool,\n parallelism: int,\n force: bool,\n schedule_name: str,\n):\n \"\"\"\n Install all the dependencies of your project based on the meltano.yml file.\n\n \\b\\nRead more at https://docs.meltano.com/reference/command-line-interface#install\n \"\"\"\n tracker: Tracker = ctx.obj[\"tracker\"]\n try:\n if plugin_type:\n plugin_type = PluginType.from_cli_argument(plugin_type)\n plugins = project.plugins.get_plugins_of_type(plugin_type)\n if plugin_name:\n plugins = [plugin for plugin in plugins if plugin.name in plugin_name]\n else:\n plugins = list(project.plugins.plugins())\n\n if schedule_name:\n schedule_plugins = _get_schedule_plugins(\n ctx.obj[\"project\"],\n schedule_name,\n )\n plugins = list(set(plugins) & set(schedule_plugins))\n except Exception:\n tracker.track_command_event(CliEvent.aborted)\n raise\n\n click.echo(f\"Installing {len(plugins)} plugins...\")\n tracker.add_contexts(\n PluginsTrackingContext([(candidate, None) for candidate in plugins]),\n )\n tracker.track_command_event(CliEvent.inflight)\n\n success = install_plugins(\n project,\n plugins,\n parallelism=parallelism,\n clean=clean,\n force=force,\n )\n if not success:\n tracker.track_command_event(CliEvent.failed)\n raise CliError(\"Failed to install plugin(s)\") # noqa: EM101\n 
tracker.track_command_event(CliEvent.completed)\n\n\ndef _get_schedule_plugins(project: Project, schedule_name: str):\n schedule_service = ScheduleService(project)\n schedule_obj = schedule_service.find_schedule(schedule_name)\n schedule_plugins = set()\n if schedule_obj.elt_schedule:\n for plugin_name in (schedule_obj.extractor, schedule_obj.loader):\n schedule_plugins.add(project.plugins.find_plugin(plugin_name))\n else:\n task_sets = schedule_service.task_sets_service.get(schedule_obj.job)\n for blocks in task_sets.flat_args_per_set:\n parser = BlockParser(logger, project, blocks)\n for plugin in parser.plugins:\n schedule_plugins.add(\n project.plugins.find_plugin(plugin.info.get(\"name\"))\n if plugin.type == PluginType.MAPPERS\n else plugin,\n )\n return schedule_plugins\n", "path": "src/meltano/cli/install.py"}]} | 2,021 | 310 |
gh_patches_debug_3785 | rasdani/github-patches | git_diff | Mailu__Mailu-744 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Why a print statement in this code?
@kaiyou, according to git blame this is yours. I wonder if it was included for debugging purposes and never taken out?
https://github.com/Mailu/Mailu/blob/5c2439011413a114c470862f95206f8487fa4568/core/admin/mailu/internal/views/dovecot.py#L18
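
If the value is still worth seeing during debugging, one hedged alternative would be to route it through Flask's logger instead of `print` (a sketch that relies on the module's existing `current_app` import; the log level and wording are assumptions):

```python
app.logger.debug("passdb allow_nets for %s: %s", user_email, allow_nets)
```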
</issue>
<code>
[start of core/admin/mailu/internal/views/dovecot.py]
1 from mailu import models
2 from mailu.internal import internal
3 from flask import current_app as app
4
5 import flask
6 import socket
7 import os
8
9 @internal.route("/dovecot/passdb/<user_email>")
10 def dovecot_passdb_dict(user_email):
11 user = models.User.query.get(user_email) or flask.abort(404)
12 allow_nets = []
13 allow_nets.append(app.config["SUBNET"])
14 if app.config["POD_ADDRESS_RANGE"]:
15 allow_nets.append(app.config["POD_ADDRESS_RANGE"])
16 print(allow_nets)
17 return flask.jsonify({
18 "password": None,
19 "nopassword": "Y",
20 "allow_nets": ",".join(allow_nets)
21 })
22
23
24 @internal.route("/dovecot/userdb/<user_email>")
25 def dovecot_userdb_dict(user_email):
26 user = models.User.query.get(user_email) or flask.abort(404)
27 return flask.jsonify({
28 "quota_rule": "*:bytes={}".format(user.quota_bytes)
29 })
30
31
32 @internal.route("/dovecot/quota/<ns>/<user_email>", methods=["POST"])
33 def dovecot_quota(ns, user_email):
34 user = models.User.query.get(user_email) or flask.abort(404)
35 if ns == "storage":
36 user.quota_bytes_used = flask.request.get_json()
37 models.db.session.commit()
38 return flask.jsonify(None)
39
40
41 @internal.route("/dovecot/sieve/name/<script>/<user_email>")
42 def dovecot_sieve_name(script, user_email):
43 return flask.jsonify(script)
44
45
46 @internal.route("/dovecot/sieve/data/default/<user_email>")
47 def dovecot_sieve_data(user_email):
48 user = models.User.query.get(user_email) or flask.abort(404)
49 return flask.jsonify(flask.render_template("default.sieve", user=user))
50
[end of core/admin/mailu/internal/views/dovecot.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/admin/mailu/internal/views/dovecot.py b/core/admin/mailu/internal/views/dovecot.py
--- a/core/admin/mailu/internal/views/dovecot.py
+++ b/core/admin/mailu/internal/views/dovecot.py
@@ -13,7 +13,6 @@
allow_nets.append(app.config["SUBNET"])
if app.config["POD_ADDRESS_RANGE"]:
allow_nets.append(app.config["POD_ADDRESS_RANGE"])
- print(allow_nets)
return flask.jsonify({
"password": None,
"nopassword": "Y",
| {"golden_diff": "diff --git a/core/admin/mailu/internal/views/dovecot.py b/core/admin/mailu/internal/views/dovecot.py\n--- a/core/admin/mailu/internal/views/dovecot.py\n+++ b/core/admin/mailu/internal/views/dovecot.py\n@@ -13,7 +13,6 @@\n allow_nets.append(app.config[\"SUBNET\"])\n if app.config[\"POD_ADDRESS_RANGE\"]:\n allow_nets.append(app.config[\"POD_ADDRESS_RANGE\"])\n- print(allow_nets)\n return flask.jsonify({\n \"password\": None,\n \"nopassword\": \"Y\",\n", "issue": "Why a print statement in this code?\n@kaiyou, according git blame this is yours. I wonder if it was included for debugging purposes and not taken out anymore?\r\n\r\nhttps://github.com/Mailu/Mailu/blob/5c2439011413a114c470862f95206f8487fa4568/core/admin/mailu/internal/views/dovecot.py#L18\n", "before_files": [{"content": "from mailu import models\nfrom mailu.internal import internal\nfrom flask import current_app as app\n\nimport flask\nimport socket\nimport os\n\[email protected](\"/dovecot/passdb/<user_email>\")\ndef dovecot_passdb_dict(user_email):\n user = models.User.query.get(user_email) or flask.abort(404)\n allow_nets = []\n allow_nets.append(app.config[\"SUBNET\"])\n if app.config[\"POD_ADDRESS_RANGE\"]:\n allow_nets.append(app.config[\"POD_ADDRESS_RANGE\"])\n print(allow_nets)\n return flask.jsonify({\n \"password\": None,\n \"nopassword\": \"Y\",\n \"allow_nets\": \",\".join(allow_nets)\n })\n\n\[email protected](\"/dovecot/userdb/<user_email>\")\ndef dovecot_userdb_dict(user_email):\n user = models.User.query.get(user_email) or flask.abort(404)\n return flask.jsonify({\n \"quota_rule\": \"*:bytes={}\".format(user.quota_bytes)\n })\n\n\[email protected](\"/dovecot/quota/<ns>/<user_email>\", methods=[\"POST\"])\ndef dovecot_quota(ns, user_email):\n user = models.User.query.get(user_email) or flask.abort(404)\n if ns == \"storage\":\n user.quota_bytes_used = flask.request.get_json()\n models.db.session.commit()\n return flask.jsonify(None)\n\n\[email protected](\"/dovecot/sieve/name/<script>/<user_email>\")\ndef dovecot_sieve_name(script, user_email):\n return flask.jsonify(script)\n\n\[email protected](\"/dovecot/sieve/data/default/<user_email>\")\ndef dovecot_sieve_data(user_email):\n user = models.User.query.get(user_email) or flask.abort(404)\n return flask.jsonify(flask.render_template(\"default.sieve\", user=user))\n", "path": "core/admin/mailu/internal/views/dovecot.py"}]} | 1,142 | 127 |
gh_patches_debug_23963 | rasdani/github-patches | git_diff | optuna__optuna-3182 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve visualization tutorial
<!-- Please write a clear and concise description of what content in https://optuna.readthedocs.io/ is an issue. -->
I suggest updating the [visualization tutorial](https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/005_visualization.html) as follows
- Add the missing [`visualization.plot_pareto_front`](https://optuna.readthedocs.io/en/stable/reference/visualization/generated/optuna.visualization.plot_pareto_front.html#optuna.visualization.plot_pareto_front) example; since this function needs a multi-objective function, unlike the other visualization examples, we might need to define such an objective function after the other examples (a minimal sketch is given below). If adding such an example is not appropriate, at least we need to mention the existence of `visualization.plot_pareto_front`.
- Mention the availability of the Matplotlib version in the first paragraph. 
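
A minimal sketch of what the missing `plot_pareto_front` example could look like (the objective, search space, and trial count are arbitrary assumptions, chosen only to produce a two-objective study):

```python
import optuna
from optuna.visualization import plot_pareto_front


def objective(trial):
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 3)
    # Two objectives, both minimized.
    return x**2 + y, (x - 2) ** 2 + (y - 1) ** 2


study = optuna.create_study(directions=["minimize", "minimize"])
study.optimize(objective, n_trials=50)
plot_pareto_front(study)
```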
</issue>
<code>
[start of tutorial/10_key_features/005_visualization.py]
1 """
2 .. _visualization:
3
4 Quick Visualization for Hyperparameter Optimization Analysis
5 ============================================================
6
7 Optuna provides various visualization features in :mod:`optuna.visualization` to analyze optimization results visually.
8
9 This tutorial walks you through this module by visualizing the history of lightgbm model for breast cancer dataset.
10 """
11
12 ###################################################################################################
13 import lightgbm as lgb
14 import numpy as np
15 import sklearn.datasets
16 import sklearn.metrics
17 from sklearn.model_selection import train_test_split
18
19 import optuna
20 from optuna.visualization import plot_contour
21 from optuna.visualization import plot_edf
22 from optuna.visualization import plot_intermediate_values
23 from optuna.visualization import plot_optimization_history
24 from optuna.visualization import plot_parallel_coordinate
25 from optuna.visualization import plot_param_importances
26 from optuna.visualization import plot_slice
27
28 SEED = 42
29
30 np.random.seed(SEED)
31
32
33 ###################################################################################################
34 # Define the objective function.
35 def objective(trial):
36 data, target = sklearn.datasets.load_breast_cancer(return_X_y=True)
37 train_x, valid_x, train_y, valid_y = train_test_split(data, target, test_size=0.25)
38 dtrain = lgb.Dataset(train_x, label=train_y)
39 dvalid = lgb.Dataset(valid_x, label=valid_y)
40
41 param = {
42 "objective": "binary",
43 "metric": "auc",
44 "verbosity": -1,
45 "boosting_type": "gbdt",
46 "bagging_fraction": trial.suggest_float("bagging_fraction", 0.4, 1.0),
47 "bagging_freq": trial.suggest_int("bagging_freq", 1, 7),
48 "min_child_samples": trial.suggest_int("min_child_samples", 5, 100),
49 }
50
51 # Add a callback for pruning.
52 pruning_callback = optuna.integration.LightGBMPruningCallback(trial, "auc")
53 gbm = lgb.train(
54 param, dtrain, valid_sets=[dvalid], verbose_eval=False, callbacks=[pruning_callback]
55 )
56
57 preds = gbm.predict(valid_x)
58 pred_labels = np.rint(preds)
59 accuracy = sklearn.metrics.accuracy_score(valid_y, pred_labels)
60 return accuracy
61
62
63 ###################################################################################################
64 study = optuna.create_study(
65 direction="maximize",
66 sampler=optuna.samplers.TPESampler(seed=SEED),
67 pruner=optuna.pruners.MedianPruner(n_warmup_steps=10),
68 )
69 study.optimize(objective, n_trials=100, timeout=600)
70
71 ###################################################################################################
72 # Plot functions
73 # --------------
74 # Visualize the optimization history. See :func:`~optuna.visualization.plot_optimization_history` for the details.
75 plot_optimization_history(study)
76
77 ###################################################################################################
78 # Visualize the learning curves of the trials. See :func:`~optuna.visualization.plot_intermediate_values` for the details.
79 plot_intermediate_values(study)
80
81 ###################################################################################################
82 # Visualize high-dimensional parameter relationships. See :func:`~optuna.visualization.plot_parallel_coordinate` for the details.
83 plot_parallel_coordinate(study)
84
85 ###################################################################################################
86 # Select parameters to visualize.
87 plot_parallel_coordinate(study, params=["bagging_freq", "bagging_fraction"])
88
89 ###################################################################################################
90 # Visualize hyperparameter relationships. See :func:`~optuna.visualization.plot_contour` for the details.
91 plot_contour(study)
92
93 ###################################################################################################
94 # Select parameters to visualize.
95 plot_contour(study, params=["bagging_freq", "bagging_fraction"])
96
97 ###################################################################################################
98 # Visualize individual hyperparameters as slice plot. See :func:`~optuna.visualization.plot_slice` for the details.
99 plot_slice(study)
100
101 ###################################################################################################
102 # Select parameters to visualize.
103 plot_slice(study, params=["bagging_freq", "bagging_fraction"])
104
105 ###################################################################################################
106 # Visualize parameter importances. See :func:`~optuna.visualization.plot_param_importances` for the details.
107 plot_param_importances(study)
108
109 ###################################################################################################
110 # Learn which hyperparameters are affecting the trial duration with hyperparameter importance.
111 optuna.visualization.plot_param_importances(
112 study, target=lambda t: t.duration.total_seconds(), target_name="duration"
113 )
114
115 ###################################################################################################
116 # Visualize empirical distribution function. See :func:`~optuna.visualization.plot_edf` for the details.
117 plot_edf(study)
118
[end of tutorial/10_key_features/005_visualization.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tutorial/10_key_features/005_visualization.py b/tutorial/10_key_features/005_visualization.py
--- a/tutorial/10_key_features/005_visualization.py
+++ b/tutorial/10_key_features/005_visualization.py
@@ -7,6 +7,9 @@
Optuna provides various visualization features in :mod:`optuna.visualization` to analyze optimization results visually.
This tutorial walks you through this module by visualizing the history of lightgbm model for breast cancer dataset.
+
+For visualizing multi-objective optimization (i.e., the usage of :func:`optuna.visualization.plot_pareto_front`),
+please refer to the tutorial of :ref:`multi_objective`.
"""
###################################################################################################
@@ -17,6 +20,9 @@
from sklearn.model_selection import train_test_split
import optuna
+
+# You can use Matplotlib instead of Plotly for visualization by simply replacing `optuna.visualization` with
+# `optuna.visualization.matplotlib` in the following examples.
from optuna.visualization import plot_contour
from optuna.visualization import plot_edf
from optuna.visualization import plot_intermediate_values
| {"golden_diff": "diff --git a/tutorial/10_key_features/005_visualization.py b/tutorial/10_key_features/005_visualization.py\n--- a/tutorial/10_key_features/005_visualization.py\n+++ b/tutorial/10_key_features/005_visualization.py\n@@ -7,6 +7,9 @@\n Optuna provides various visualization features in :mod:`optuna.visualization` to analyze optimization results visually.\n \n This tutorial walks you through this module by visualizing the history of lightgbm model for breast cancer dataset.\n+\n+For visualizing multi-objective optimization (i.e., the usage of :func:`optuna.visualization.plot_pareto_front`),\n+please refer to the tutorial of :ref:`multi_objective`.\n \"\"\"\n \n ###################################################################################################\n@@ -17,6 +20,9 @@\n from sklearn.model_selection import train_test_split\n \n import optuna\n+\n+# You can use Matplotlib instead of Plotly for visualization by simply replacing `optuna.visualization` with\n+# `optuna.visualization.matplotlib` in the following examples.\n from optuna.visualization import plot_contour\n from optuna.visualization import plot_edf\n from optuna.visualization import plot_intermediate_values\n", "issue": "Improve visualization tutorial\n<!-- Please write a clear and concise description of what content in https://optuna.readthedocs.io/ is an issue. -->\r\n\r\nI suggest updating the [visualization tutorial](https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/005_visualization.html) as follows\r\n\r\n- Add missing [`visualization.plot_pareto_front`](https://optuna.readthedocs.io/en/stable/reference/visualization/generated/optuna.visualization.plot_pareto_front.html#optuna.visualization.plot_pareto_front) example; since this function needs multi-objective function unlike other visualization examples, we might need to define such an objective function after the other examples. If adding such an example is not appropriate, at least we need to mention the existence of `visualization.plot_pareto_front`.\r\n- Mention the availability of matplotlib version in the first paragraph. \r\n\n", "before_files": [{"content": "\"\"\"\n.. 
_visualization:\n\nQuick Visualization for Hyperparameter Optimization Analysis\n============================================================\n\nOptuna provides various visualization features in :mod:`optuna.visualization` to analyze optimization results visually.\n\nThis tutorial walks you through this module by visualizing the history of lightgbm model for breast cancer dataset.\n\"\"\"\n\n###################################################################################################\nimport lightgbm as lgb\nimport numpy as np\nimport sklearn.datasets\nimport sklearn.metrics\nfrom sklearn.model_selection import train_test_split\n\nimport optuna\nfrom optuna.visualization import plot_contour\nfrom optuna.visualization import plot_edf\nfrom optuna.visualization import plot_intermediate_values\nfrom optuna.visualization import plot_optimization_history\nfrom optuna.visualization import plot_parallel_coordinate\nfrom optuna.visualization import plot_param_importances\nfrom optuna.visualization import plot_slice\n\nSEED = 42\n\nnp.random.seed(SEED)\n\n\n###################################################################################################\n# Define the objective function.\ndef objective(trial):\n data, target = sklearn.datasets.load_breast_cancer(return_X_y=True)\n train_x, valid_x, train_y, valid_y = train_test_split(data, target, test_size=0.25)\n dtrain = lgb.Dataset(train_x, label=train_y)\n dvalid = lgb.Dataset(valid_x, label=valid_y)\n\n param = {\n \"objective\": \"binary\",\n \"metric\": \"auc\",\n \"verbosity\": -1,\n \"boosting_type\": \"gbdt\",\n \"bagging_fraction\": trial.suggest_float(\"bagging_fraction\", 0.4, 1.0),\n \"bagging_freq\": trial.suggest_int(\"bagging_freq\", 1, 7),\n \"min_child_samples\": trial.suggest_int(\"min_child_samples\", 5, 100),\n }\n\n # Add a callback for pruning.\n pruning_callback = optuna.integration.LightGBMPruningCallback(trial, \"auc\")\n gbm = lgb.train(\n param, dtrain, valid_sets=[dvalid], verbose_eval=False, callbacks=[pruning_callback]\n )\n\n preds = gbm.predict(valid_x)\n pred_labels = np.rint(preds)\n accuracy = sklearn.metrics.accuracy_score(valid_y, pred_labels)\n return accuracy\n\n\n###################################################################################################\nstudy = optuna.create_study(\n direction=\"maximize\",\n sampler=optuna.samplers.TPESampler(seed=SEED),\n pruner=optuna.pruners.MedianPruner(n_warmup_steps=10),\n)\nstudy.optimize(objective, n_trials=100, timeout=600)\n\n###################################################################################################\n# Plot functions\n# --------------\n# Visualize the optimization history. See :func:`~optuna.visualization.plot_optimization_history` for the details.\nplot_optimization_history(study)\n\n###################################################################################################\n# Visualize the learning curves of the trials. See :func:`~optuna.visualization.plot_intermediate_values` for the details.\nplot_intermediate_values(study)\n\n###################################################################################################\n# Visualize high-dimensional parameter relationships. 
See :func:`~optuna.visualization.plot_parallel_coordinate` for the details.\nplot_parallel_coordinate(study)\n\n###################################################################################################\n# Select parameters to visualize.\nplot_parallel_coordinate(study, params=[\"bagging_freq\", \"bagging_fraction\"])\n\n###################################################################################################\n# Visualize hyperparameter relationships. See :func:`~optuna.visualization.plot_contour` for the details.\nplot_contour(study)\n\n###################################################################################################\n# Select parameters to visualize.\nplot_contour(study, params=[\"bagging_freq\", \"bagging_fraction\"])\n\n###################################################################################################\n# Visualize individual hyperparameters as slice plot. See :func:`~optuna.visualization.plot_slice` for the details.\nplot_slice(study)\n\n###################################################################################################\n# Select parameters to visualize.\nplot_slice(study, params=[\"bagging_freq\", \"bagging_fraction\"])\n\n###################################################################################################\n# Visualize parameter importances. See :func:`~optuna.visualization.plot_param_importances` for the details.\nplot_param_importances(study)\n\n###################################################################################################\n# Learn which hyperparameters are affecting the trial duration with hyperparameter importance.\noptuna.visualization.plot_param_importances(\n study, target=lambda t: t.duration.total_seconds(), target_name=\"duration\"\n)\n\n###################################################################################################\n# Visualize empirical distribution function. See :func:`~optuna.visualization.plot_edf` for the details.\nplot_edf(study)\n", "path": "tutorial/10_key_features/005_visualization.py"}]} | 1,898 | 254 |
gh_patches_debug_58736 | rasdani/github-patches | git_diff | goauthentik__authentik-6081 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Not sure that OAuth2 client source should use authorization header
I've been testing authentik using an Auth0 OpenID Connect (OIDC) source as well as a Google source. I have gotten both to work, but Auth0 was not working before a fix to prevent adding the access token to the Authorization Bearer headers. 
https://auth0.com/blog/id-token-access-token-what-is-the-difference/ suggests that many endpoints should not be given the access token. Not sure this is relevant.
I think Auth0 is less permissive and prefers the `access_token` param rather than the Authorization Bearer token.
in sources/oauth/clients/oauth2.py
```
class OAuth2Client(BaseOAuthClient):
"""OAuth2 Client"""
...
def do_request(self, method: str, url: str, **kwargs) -> Response:
"""Build remote url request. Constructs necessary auth."""
if "token" in kwargs:
token = kwargs.pop("token")
params = kwargs.get("params", {})
params["access_token"] = token["access_token"]
kwargs["params"] = params
headers = kwargs.get("headers", {})
# Note this fix
# headers["Authorization"] = f"{token['token_type']} {token['access_token']}"
kwargs["headers"] = headers
return super().do_request(method, url, **kwargs)
```
</issue>
<code>
[start of authentik/sources/oauth/types/oidc.py]
1 """OpenID Connect OAuth Views"""
2 from typing import Any
3
4 from authentik.sources.oauth.clients.oauth2 import UserprofileHeaderAuthClient
5 from authentik.sources.oauth.models import OAuthSource
6 from authentik.sources.oauth.types.registry import SourceType, registry
7 from authentik.sources.oauth.views.callback import OAuthCallback
8 from authentik.sources.oauth.views.redirect import OAuthRedirect
9
10
11 class OpenIDConnectOAuthRedirect(OAuthRedirect):
12 """OpenIDConnect OAuth2 Redirect"""
13
14 def get_additional_parameters(self, source: OAuthSource): # pragma: no cover
15 return {
16 "scope": ["openid", "email", "profile"],
17 }
18
19
20 class OpenIDConnectOAuth2Callback(OAuthCallback):
21 """OpenIDConnect OAuth2 Callback"""
22
23 client_class: UserprofileHeaderAuthClient
24
25 def get_user_id(self, info: dict[str, str]) -> str:
26 return info.get("sub", "")
27
28 def get_user_enroll_context(
29 self,
30 info: dict[str, Any],
31 ) -> dict[str, Any]:
32 return {
33 "username": info.get("nickname", info.get("preferred_username")),
34 "email": info.get("email"),
35 "name": info.get("name"),
36 }
37
38
39 @registry.register()
40 class OpenIDConnectType(SourceType):
41 """OpenIDConnect Type definition"""
42
43 callback_view = OpenIDConnectOAuth2Callback
44 redirect_view = OpenIDConnectOAuthRedirect
45 name = "OpenID Connect"
46 slug = "openidconnect"
47
48 urls_customizable = True
49
[end of authentik/sources/oauth/types/oidc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/authentik/sources/oauth/types/oidc.py b/authentik/sources/oauth/types/oidc.py
--- a/authentik/sources/oauth/types/oidc.py
+++ b/authentik/sources/oauth/types/oidc.py
@@ -20,7 +20,7 @@
class OpenIDConnectOAuth2Callback(OAuthCallback):
"""OpenIDConnect OAuth2 Callback"""
- client_class: UserprofileHeaderAuthClient
+ client_class = UserprofileHeaderAuthClient
def get_user_id(self, info: dict[str, str]) -> str:
return info.get("sub", "")
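Note the shape of the fix above: with only a colon, `client_class: UserprofileHeaderAuthClient` is a bare annotation and never sets the attribute, so the value inherited from the base class is still used. A small self-contained illustration of that Python behaviour (class names and values are placeholders):

```python
class Base:
    client_class = None


class AnnotatedOnly(Base):
    client_class: object  # annotation only; no attribute is actually set


class Assigned(Base):
    client_class = "configured"  # genuinely overrides the inherited value


assert AnnotatedOnly.client_class is None   # falls back to Base
assert Assigned.client_class == "configured"
```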
| {"golden_diff": "diff --git a/authentik/sources/oauth/types/oidc.py b/authentik/sources/oauth/types/oidc.py\n--- a/authentik/sources/oauth/types/oidc.py\n+++ b/authentik/sources/oauth/types/oidc.py\n@@ -20,7 +20,7 @@\n class OpenIDConnectOAuth2Callback(OAuthCallback):\n \"\"\"OpenIDConnect OAuth2 Callback\"\"\"\n \n- client_class: UserprofileHeaderAuthClient\n+ client_class = UserprofileHeaderAuthClient\n \n def get_user_id(self, info: dict[str, str]) -> str:\n return info.get(\"sub\", \"\")\n", "issue": "Not sure that OAuth2 client source should use authorization header\nI've been testing authentik using an Auth0 openIDC source as well as a google source. I have gotten both to work, but Auth0 was not working before a fix to prevent adding the access token to the authorizaton Bearer headers. Google auth works fine with or without this fix. \r\n\r\nhttps://auth0.com/blog/id-token-access-token-what-is-the-difference/ suggests that many endpoints should not be given the access token. Not sure this is relevant.\r\n\r\nI think Auth0 is less permissive and prefers the access_token param rather than the Authorization Bearer token\r\n\r\nin sources/oauth/clients/oauth2.py\r\n```\r\nclass OAuth2Client(BaseOAuthClient):\r\n \"\"\"OAuth2 Client\"\"\"\r\n\r\n ...\r\n\r\n def do_request(self, method: str, url: str, **kwargs) -> Response:\r\n \"\"\"Build remote url request. Constructs necessary auth.\"\"\"\r\n if \"token\" in kwargs:\r\n token = kwargs.pop(\"token\")\r\n\r\n params = kwargs.get(\"params\", {})\r\n params[\"access_token\"] = token[\"access_token\"]\r\n kwargs[\"params\"] = params\r\n\r\n headers = kwargs.get(\"headers\", {})\r\n # Note this fix\r\n # headers[\"Authorization\"] = f\"{token['token_type']} {token['access_token']}\"\r\n kwargs[\"headers\"] = headers\r\n return super().do_request(method, url, **kwargs)\r\n```\r\n\r\n\n", "before_files": [{"content": "\"\"\"OpenID Connect OAuth Views\"\"\"\nfrom typing import Any\n\nfrom authentik.sources.oauth.clients.oauth2 import UserprofileHeaderAuthClient\nfrom authentik.sources.oauth.models import OAuthSource\nfrom authentik.sources.oauth.types.registry import SourceType, registry\nfrom authentik.sources.oauth.views.callback import OAuthCallback\nfrom authentik.sources.oauth.views.redirect import OAuthRedirect\n\n\nclass OpenIDConnectOAuthRedirect(OAuthRedirect):\n \"\"\"OpenIDConnect OAuth2 Redirect\"\"\"\n\n def get_additional_parameters(self, source: OAuthSource): # pragma: no cover\n return {\n \"scope\": [\"openid\", \"email\", \"profile\"],\n }\n\n\nclass OpenIDConnectOAuth2Callback(OAuthCallback):\n \"\"\"OpenIDConnect OAuth2 Callback\"\"\"\n\n client_class: UserprofileHeaderAuthClient\n\n def get_user_id(self, info: dict[str, str]) -> str:\n return info.get(\"sub\", \"\")\n\n def get_user_enroll_context(\n self,\n info: dict[str, Any],\n ) -> dict[str, Any]:\n return {\n \"username\": info.get(\"nickname\", info.get(\"preferred_username\")),\n \"email\": info.get(\"email\"),\n \"name\": info.get(\"name\"),\n }\n\n\[email protected]()\nclass OpenIDConnectType(SourceType):\n \"\"\"OpenIDConnect Type definition\"\"\"\n\n callback_view = OpenIDConnectOAuth2Callback\n redirect_view = OpenIDConnectOAuthRedirect\n name = \"OpenID Connect\"\n slug = \"openidconnect\"\n\n urls_customizable = True\n", "path": "authentik/sources/oauth/types/oidc.py"}]} | 1,265 | 133 |
gh_patches_debug_18320 | rasdani/github-patches | git_diff | mkdocs__mkdocs-1453 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tornado 5.0 raises error on install with older Python versions.
changed to `"tornado>=4.1,<5.0"` in setup.py
This broke installation via pip for me.
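
Whichever direction the pin goes, the constraint can be written so that older interpreters keep a working install. A sketch of two options for `setup.py` (the markers and bounds are assumptions used to illustrate the idea, not a tested requirement set):

```python
# Option 1: keep old interpreters on tornado < 5.0 via environment markers.
install_requires = [
    'tornado>=4.1,<5.0; python_version < "3.4"',
    'tornado>=4.1; python_version >= "3.4"',
]

# Option 2: declare the interpreters the project itself supports, e.g.
# python_requires='>=2.7.9,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*'
```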
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 from __future__ import print_function
5 from setuptools import setup
6 import re
7 import os
8 import sys
9
10
11 long_description = (
12 "MkDocs is a fast, simple and downright gorgeous static site generator "
13 "that's geared towards building project documentation. Documentation "
14 "source files are written in Markdown, and configured with a single YAML "
15 "configuration file."
16 )
17
18
19 def get_version(package):
20 """Return package version as listed in `__version__` in `init.py`."""
21 init_py = open(os.path.join(package, '__init__.py')).read()
22 return re.search("__version__ = ['\"]([^'\"]+)['\"]", init_py).group(1)
23
24
25 def get_packages(package):
26 """Return root package and all sub-packages."""
27 return [dirpath
28 for dirpath, dirnames, filenames in os.walk(package)
29 if os.path.exists(os.path.join(dirpath, '__init__.py'))]
30
31
32 if sys.argv[-1] == 'publish':
33 if os.system("pip freeze | grep wheel"):
34 print("wheel not installed.\nUse `pip install wheel`.\nExiting.")
35 sys.exit()
36 if os.system("pip freeze | grep twine"):
37 print("twine not installed.\nUse `pip install twine`.\nExiting.")
38 sys.exit()
39 os.system("python setup.py sdist bdist_wheel")
40 os.system("twine upload dist/*")
41 print("You probably want to also tag the version now:")
42 print(" git tag -a {0} -m 'version {0}'".format(get_version("mkdocs")))
43 print(" git push --tags")
44 sys.exit()
45
46
47 setup(
48 name="mkdocs",
49 version=get_version("mkdocs"),
50 url='http://www.mkdocs.org',
51 license='BSD',
52 description='Project documentation with Markdown.',
53 long_description=long_description,
54 author='Tom Christie',
55 author_email='[email protected]', # SEE NOTE BELOW (*)
56 packages=get_packages("mkdocs"),
57 include_package_data=True,
58 install_requires=[
59 'click>=3.3',
60 'Jinja2>=2.7.1',
61 'livereload>=2.5.1',
62 'Markdown>=2.3.1',
63 'PyYAML>=3.10',
64 'tornado>=4.1',
65 ],
66 entry_points={
67 'console_scripts': [
68 'mkdocs = mkdocs.__main__:cli',
69 ],
70 'mkdocs.themes': [
71 'mkdocs = mkdocs.themes.mkdocs',
72 'readthedocs = mkdocs.themes.readthedocs',
73 ],
74 'mkdocs.plugins': [
75 'search = mkdocs.contrib.search:SearchPlugin',
76 ],
77 },
78 classifiers=[
79 'Development Status :: 5 - Production/Stable',
80 'Environment :: Console',
81 'Environment :: Web Environment',
82 'Intended Audience :: Developers',
83 'License :: OSI Approved :: BSD License',
84 'Operating System :: OS Independent',
85 'Programming Language :: Python',
86 'Programming Language :: Python :: 2',
87 'Programming Language :: Python :: 2.7',
88 'Programming Language :: Python :: 3',
89 'Programming Language :: Python :: 3.3',
90 'Programming Language :: Python :: 3.4',
91 'Programming Language :: Python :: 3.5',
92 'Programming Language :: Python :: 3.6',
93 "Programming Language :: Python :: Implementation :: CPython",
94 "Programming Language :: Python :: Implementation :: PyPy",
95 'Topic :: Documentation',
96 'Topic :: Text Processing',
97 ],
98 zip_safe=False,
99 )
100
101 # (*) Please direct queries to the discussion group:
102 # https://groups.google.com/forum/#!forum/mkdocs
103
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -61,8 +61,9 @@
'livereload>=2.5.1',
'Markdown>=2.3.1',
'PyYAML>=3.10',
- 'tornado>=4.1',
+ 'tornado>=5.0',
],
+ python_requires='>=2.7.9,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*',
entry_points={
'console_scripts': [
'mkdocs = mkdocs.__main__:cli',
@@ -86,7 +87,6 @@
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
- 'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -61,8 +61,9 @@\n 'livereload>=2.5.1',\n 'Markdown>=2.3.1',\n 'PyYAML>=3.10',\n- 'tornado>=4.1',\n+ 'tornado>=5.0',\n ],\n+ python_requires='>=2.7.9,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*',\n entry_points={\n 'console_scripts': [\n 'mkdocs = mkdocs.__main__:cli',\n@@ -86,7 +87,6 @@\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n- 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n", "issue": "Tornado 5.0 raises error on install with older Python versions.\nchanged to `\"tornado>=4.1,<5.0\"` in setup.py\r\n\r\nThis broke installation via pip for me. \n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom __future__ import print_function\nfrom setuptools import setup\nimport re\nimport os\nimport sys\n\n\nlong_description = (\n \"MkDocs is a fast, simple and downright gorgeous static site generator \"\n \"that's geared towards building project documentation. Documentation \"\n \"source files are written in Markdown, and configured with a single YAML \"\n \"configuration file.\"\n)\n\n\ndef get_version(package):\n \"\"\"Return package version as listed in `__version__` in `init.py`.\"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_packages(package):\n \"\"\"Return root package and all sub-packages.\"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\nif sys.argv[-1] == 'publish':\n if os.system(\"pip freeze | grep wheel\"):\n print(\"wheel not installed.\\nUse `pip install wheel`.\\nExiting.\")\n sys.exit()\n if os.system(\"pip freeze | grep twine\"):\n print(\"twine not installed.\\nUse `pip install twine`.\\nExiting.\")\n sys.exit()\n os.system(\"python setup.py sdist bdist_wheel\")\n os.system(\"twine upload dist/*\")\n print(\"You probably want to also tag the version now:\")\n print(\" git tag -a {0} -m 'version {0}'\".format(get_version(\"mkdocs\")))\n print(\" git push --tags\")\n sys.exit()\n\n\nsetup(\n name=\"mkdocs\",\n version=get_version(\"mkdocs\"),\n url='http://www.mkdocs.org',\n license='BSD',\n description='Project documentation with Markdown.',\n long_description=long_description,\n author='Tom Christie',\n author_email='[email protected]', # SEE NOTE BELOW (*)\n packages=get_packages(\"mkdocs\"),\n include_package_data=True,\n install_requires=[\n 'click>=3.3',\n 'Jinja2>=2.7.1',\n 'livereload>=2.5.1',\n 'Markdown>=2.3.1',\n 'PyYAML>=3.10',\n 'tornado>=4.1',\n ],\n entry_points={\n 'console_scripts': [\n 'mkdocs = mkdocs.__main__:cli',\n ],\n 'mkdocs.themes': [\n 'mkdocs = mkdocs.themes.mkdocs',\n 'readthedocs = mkdocs.themes.readthedocs',\n ],\n 'mkdocs.plugins': [\n 'search = mkdocs.contrib.search:SearchPlugin',\n ],\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 
3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n 'Topic :: Documentation',\n 'Topic :: Text Processing',\n ],\n zip_safe=False,\n)\n\n# (*) Please direct queries to the discussion group:\n# https://groups.google.com/forum/#!forum/mkdocs\n", "path": "setup.py"}]} | 1,594 | 236 |
gh_patches_debug_38298 | rasdani/github-patches | git_diff | jupyterhub__jupyterhub-121 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
redirect loop on invalid single-user auth token
When the single-user server's API request fails with 403, it's handled as a failed login (302), causing a redirect loop, when it should be handled as "500: oh noes, I can't do anything!"
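
A sketch of the distinction the handler needs to make when checking the Hub's response (the function name and messages are placeholders; only the status-code handling is the point):

```python
from tornado.web import HTTPError


def interpret_hub_response(r):
    """Map a Hub authorization response to handler behaviour."""
    if r.status_code == 404:
        return {'user': ''}  # unknown cookie: behave as "not logged in"
    if r.status_code == 403:
        # The single-user server's own API token was rejected; redirecting
        # the browser back to the login page can never fix this, so fail loudly.
        raise HTTPError(500, "hub rejected our API token; server needs attention")
    if r.status_code >= 400:
        raise HTTPError(500, "failed to check authorization with the hub")
    return r.json()
```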
</issue>
<code>
[start of jupyterhub/singleuser.py]
1 #!/usr/bin/env python
2 """Extend regular notebook server to be aware of multiuser things."""
3
4 # Copyright (c) Jupyter Development Team.
5 # Distributed under the terms of the Modified BSD License.
6
7 import os
8
9 import requests
10
11 from tornado import ioloop
12
13 from IPython.utils.traitlets import Unicode
14
15 from IPython.html.notebookapp import NotebookApp
16 from IPython.html.auth.login import LoginHandler
17 from IPython.html.auth.logout import LogoutHandler
18
19 from IPython.html.utils import url_path_join
20
21
22 from distutils.version import LooseVersion as V
23
24 import IPython
25 if V(IPython.__version__) < V('3.0'):
26 raise ImportError("JupyterHub Requires IPython >= 3.0, found %s" % IPython.__version__)
27
28 # Define two methods to attach to AuthenticatedHandler,
29 # which authenticate via the central auth server.
30
31 class JupyterHubLoginHandler(LoginHandler):
32 @staticmethod
33 def login_available(settings):
34 return True
35
36 @staticmethod
37 def verify_token(self, cookie_name, encrypted_cookie):
38 """monkeypatch method for token verification"""
39 cookie_cache = self.settings['cookie_cache']
40 if encrypted_cookie in cookie_cache:
41 # we've seen this token before, don't ask upstream again
42 return cookie_cache[encrypted_cookie]
43
44 hub_api_url = self.settings['hub_api_url']
45 hub_api_key = self.settings['hub_api_key']
46 r = requests.get(url_path_join(
47 hub_api_url, "authorizations/cookie", cookie_name,
48 ),
49 headers = {'Authorization' : 'token %s' % hub_api_key},
50 data=encrypted_cookie,
51 )
52 if r.status_code == 404:
53 data = {'user' : ''}
54 elif r.status_code >= 400:
55 self.log.warn("Failed to check authorization: [%i] %s", r.status_code, r.reason)
56 data = None
57 else:
58 data = r.json()
59 cookie_cache[encrypted_cookie] = data
60 return data
61
62 @staticmethod
63 def get_user(self):
64 """alternative get_current_user to query the central server"""
65 my_user = self.settings['user']
66 encrypted_cookie = self.get_cookie(self.cookie_name)
67 if encrypted_cookie:
68 auth_data = JupyterHubLoginHandler.verify_token(self, self.cookie_name, encrypted_cookie)
69 if not auth_data:
70 # treat invalid token the same as no token
71 return None
72 user = auth_data['user']
73 if user == my_user:
74 return user
75 else:
76 return None
77 else:
78 self.log.debug("No token cookie")
79 return None
80
81
82 class JupyterHubLogoutHandler(LogoutHandler):
83 def get(self):
84 self.redirect(url_path_join(self.settings['hub_prefix'], 'logout'))
85
86
87 # register new hub related command-line aliases
88 aliases = NotebookApp.aliases.get_default_value()
89 aliases.update({
90 'user' : 'SingleUserNotebookApp.user',
91 'cookie-name': 'SingleUserNotebookApp.cookie_name',
92 'hub-prefix': 'SingleUserNotebookApp.hub_prefix',
93 'hub-api-url': 'SingleUserNotebookApp.hub_api_url',
94 'base-url': 'SingleUserNotebookApp.base_url',
95 })
96
97 class SingleUserNotebookApp(NotebookApp):
98 """A Subclass of the regular NotebookApp that is aware of the parent multiuser context."""
99 user = Unicode(config=True)
100 cookie_name = Unicode(config=True)
101 hub_prefix = Unicode(config=True)
102 hub_api_url = Unicode(config=True)
103 aliases = aliases
104 open_browser = False
105 login_handler_class = JupyterHubLoginHandler
106 logout_handler_class = JupyterHubLogoutHandler
107
108 def _confirm_exit(self):
109 # disable the exit confirmation for background notebook processes
110 ioloop.IOLoop.instance().stop()
111
112 def init_webapp(self):
113 # load the hub related settings into the tornado settings dict
114 env = os.environ
115 s = self.tornado_settings
116 s['cookie_cache'] = {}
117 s['user'] = self.user
118 s['hub_api_key'] = env.pop('JPY_API_TOKEN')
119 s['hub_prefix'] = self.hub_prefix
120 s['cookie_name'] = self.cookie_name
121 s['login_url'] = url_path_join(self.hub_prefix, 'login')
122 s['hub_api_url'] = self.hub_api_url
123 super(SingleUserNotebookApp, self).init_webapp()
124
125
126 def main():
127 return SingleUserNotebookApp.launch_instance()
128
129
130 if __name__ == "__main__":
131 main()
132
[end of jupyterhub/singleuser.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/jupyterhub/singleuser.py b/jupyterhub/singleuser.py
--- a/jupyterhub/singleuser.py
+++ b/jupyterhub/singleuser.py
@@ -9,6 +9,7 @@
import requests
from tornado import ioloop
+from tornado.web import HTTPError
from IPython.utils.traitlets import Unicode
@@ -35,7 +36,7 @@
@staticmethod
def verify_token(self, cookie_name, encrypted_cookie):
- """monkeypatch method for token verification"""
+ """method for token verification"""
cookie_cache = self.settings['cookie_cache']
if encrypted_cookie in cookie_cache:
# we've seen this token before, don't ask upstream again
@@ -51,9 +52,15 @@
)
if r.status_code == 404:
data = {'user' : ''}
+ if r.status_code == 403:
+ self.log.error("I don't have permission to verify cookies, my auth token may have expired: [%i] %s", r.status_code, r.reason)
+ raise HTTPError(500, "Permission failure checking authorization, I may need to be restarted")
+ elif r.status_code >= 500:
+ self.log.error("Upstream failure verifying auth token: [%i] %s", r.status_code, r.reason)
+ raise HTTPError(502, "Failed to check authorization (upstream problem)")
elif r.status_code >= 400:
self.log.warn("Failed to check authorization: [%i] %s", r.status_code, r.reason)
- data = None
+ raise HTTPError(500, "Failed to check authorization")
else:
data = r.json()
cookie_cache[encrypted_cookie] = data
@@ -62,6 +69,13 @@
@staticmethod
def get_user(self):
"""alternative get_current_user to query the central server"""
+ # only allow this to be called once per handler
+ # avoids issues if an error is raised,
+ # since this may be called again when trying to render the error page
+ if hasattr(self, '_cached_user'):
+ return self._cached_user
+
+ self._cached_user = None
my_user = self.settings['user']
encrypted_cookie = self.get_cookie(self.cookie_name)
if encrypted_cookie:
@@ -71,6 +85,7 @@
return None
user = auth_data['user']
if user == my_user:
+ self._cached_user = user
return user
else:
return None
| {"golden_diff": "diff --git a/jupyterhub/singleuser.py b/jupyterhub/singleuser.py\n--- a/jupyterhub/singleuser.py\n+++ b/jupyterhub/singleuser.py\n@@ -9,6 +9,7 @@\n import requests\n \n from tornado import ioloop\n+from tornado.web import HTTPError\n \n from IPython.utils.traitlets import Unicode\n \n@@ -35,7 +36,7 @@\n \n @staticmethod\n def verify_token(self, cookie_name, encrypted_cookie):\n- \"\"\"monkeypatch method for token verification\"\"\"\n+ \"\"\"method for token verification\"\"\"\n cookie_cache = self.settings['cookie_cache']\n if encrypted_cookie in cookie_cache:\n # we've seen this token before, don't ask upstream again\n@@ -51,9 +52,15 @@\n )\n if r.status_code == 404:\n data = {'user' : ''}\n+ if r.status_code == 403:\n+ self.log.error(\"I don't have permission to verify cookies, my auth token may have expired: [%i] %s\", r.status_code, r.reason)\n+ raise HTTPError(500, \"Permission failure checking authorization, I may need to be restarted\")\n+ elif r.status_code >= 500:\n+ self.log.error(\"Upstream failure verifying auth token: [%i] %s\", r.status_code, r.reason)\n+ raise HTTPError(502, \"Failed to check authorization (upstream problem)\")\n elif r.status_code >= 400:\n self.log.warn(\"Failed to check authorization: [%i] %s\", r.status_code, r.reason)\n- data = None\n+ raise HTTPError(500, \"Failed to check authorization\")\n else:\n data = r.json()\n cookie_cache[encrypted_cookie] = data\n@@ -62,6 +69,13 @@\n @staticmethod\n def get_user(self):\n \"\"\"alternative get_current_user to query the central server\"\"\"\n+ # only allow this to be called once per handler\n+ # avoids issues if an error is raised,\n+ # since this may be called again when trying to render the error page\n+ if hasattr(self, '_cached_user'):\n+ return self._cached_user\n+ \n+ self._cached_user = None\n my_user = self.settings['user']\n encrypted_cookie = self.get_cookie(self.cookie_name)\n if encrypted_cookie:\n@@ -71,6 +85,7 @@\n return None\n user = auth_data['user']\n if user == my_user:\n+ self._cached_user = user\n return user\n else:\n return None\n", "issue": "redirect loop on invalid single-user auth token\nwhen the single-user server's API request fails with 403, it's handled as failed login (302) causing a redirect loop, when it should be handled as \"500: oh noes, I can't do anything!\"\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"Extend regular notebook server to be aware of multiuser things.\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nimport os\n\nimport requests\n\nfrom tornado import ioloop\n\nfrom IPython.utils.traitlets import Unicode\n\nfrom IPython.html.notebookapp import NotebookApp\nfrom IPython.html.auth.login import LoginHandler\nfrom IPython.html.auth.logout import LogoutHandler\n\nfrom IPython.html.utils import url_path_join\n\n\nfrom distutils.version import LooseVersion as V\n\nimport IPython\nif V(IPython.__version__) < V('3.0'):\n raise ImportError(\"JupyterHub Requires IPython >= 3.0, found %s\" % IPython.__version__)\n\n# Define two methods to attach to AuthenticatedHandler,\n# which authenticate via the central auth server.\n\nclass JupyterHubLoginHandler(LoginHandler):\n @staticmethod\n def login_available(settings):\n return True\n \n @staticmethod\n def verify_token(self, cookie_name, encrypted_cookie):\n \"\"\"monkeypatch method for token verification\"\"\"\n cookie_cache = self.settings['cookie_cache']\n if encrypted_cookie in cookie_cache:\n # we've seen this token 
before, don't ask upstream again\n return cookie_cache[encrypted_cookie]\n \n hub_api_url = self.settings['hub_api_url']\n hub_api_key = self.settings['hub_api_key']\n r = requests.get(url_path_join(\n hub_api_url, \"authorizations/cookie\", cookie_name,\n ),\n headers = {'Authorization' : 'token %s' % hub_api_key},\n data=encrypted_cookie,\n )\n if r.status_code == 404:\n data = {'user' : ''}\n elif r.status_code >= 400:\n self.log.warn(\"Failed to check authorization: [%i] %s\", r.status_code, r.reason)\n data = None\n else:\n data = r.json()\n cookie_cache[encrypted_cookie] = data\n return data\n \n @staticmethod\n def get_user(self):\n \"\"\"alternative get_current_user to query the central server\"\"\"\n my_user = self.settings['user']\n encrypted_cookie = self.get_cookie(self.cookie_name)\n if encrypted_cookie:\n auth_data = JupyterHubLoginHandler.verify_token(self, self.cookie_name, encrypted_cookie)\n if not auth_data:\n # treat invalid token the same as no token\n return None\n user = auth_data['user']\n if user == my_user:\n return user\n else:\n return None\n else:\n self.log.debug(\"No token cookie\")\n return None\n\n\nclass JupyterHubLogoutHandler(LogoutHandler):\n def get(self):\n self.redirect(url_path_join(self.settings['hub_prefix'], 'logout'))\n\n\n# register new hub related command-line aliases\naliases = NotebookApp.aliases.get_default_value()\naliases.update({\n 'user' : 'SingleUserNotebookApp.user',\n 'cookie-name': 'SingleUserNotebookApp.cookie_name',\n 'hub-prefix': 'SingleUserNotebookApp.hub_prefix',\n 'hub-api-url': 'SingleUserNotebookApp.hub_api_url',\n 'base-url': 'SingleUserNotebookApp.base_url',\n})\n\nclass SingleUserNotebookApp(NotebookApp):\n \"\"\"A Subclass of the regular NotebookApp that is aware of the parent multiuser context.\"\"\"\n user = Unicode(config=True)\n cookie_name = Unicode(config=True)\n hub_prefix = Unicode(config=True)\n hub_api_url = Unicode(config=True)\n aliases = aliases\n open_browser = False\n login_handler_class = JupyterHubLoginHandler\n logout_handler_class = JupyterHubLogoutHandler\n \n def _confirm_exit(self):\n # disable the exit confirmation for background notebook processes\n ioloop.IOLoop.instance().stop()\n \n def init_webapp(self):\n # load the hub related settings into the tornado settings dict\n env = os.environ\n s = self.tornado_settings\n s['cookie_cache'] = {}\n s['user'] = self.user\n s['hub_api_key'] = env.pop('JPY_API_TOKEN')\n s['hub_prefix'] = self.hub_prefix\n s['cookie_name'] = self.cookie_name\n s['login_url'] = url_path_join(self.hub_prefix, 'login')\n s['hub_api_url'] = self.hub_api_url\n super(SingleUserNotebookApp, self).init_webapp()\n\n\ndef main():\n return SingleUserNotebookApp.launch_instance()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "jupyterhub/singleuser.py"}]} | 1,879 | 574 |
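The fix above stops treating every upstream failure as a missing login: a 403 from the Hub (the single-user server's own API token is bad) or a 5xx (the Hub is unreachable) now raises an HTTPError instead of returning None, which is what produced the redirect loop. Below is a minimal standalone sketch of that status-code triage; `AuthError` and `check_auth_response` are illustrative names, not JupyterHub or Tornado APIs.

```python
# Standalone sketch of the status-code triage in the fix above.
# AuthError and check_auth_response are illustrative names, not JupyterHub
# or Tornado APIs.

class AuthError(Exception):
    def __init__(self, http_status, message):
        super().__init__(message)
        self.http_status = http_status


def check_auth_response(status_code, payload=None):
    """Map an upstream auth-check response to a result or an error.

    404 means "unknown cookie" and is treated as an anonymous user; 403 and
    5xx are our problem or the Hub's problem, so raising is better than
    returning None, which the login handler would read as "please log in
    again" and redirect forever.
    """
    if status_code == 404:
        return {"user": ""}
    if status_code == 403:
        raise AuthError(500, "Permission failure checking authorization")
    if status_code >= 500:
        raise AuthError(502, "Failed to check authorization (upstream problem)")
    if status_code >= 400:
        raise AuthError(500, "Failed to check authorization")
    return payload


print(check_auth_response(404))          # {'user': ''}
try:
    check_auth_response(403)
except AuthError as err:
    print(err.http_status, err)          # 500 Permission failure checking authorization
```

The `_cached_user` attribute added in the diff serves a related purpose: if rendering the resulting error page calls `get_user()` again, the failing upstream lookup is not repeated.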
gh_patches_debug_40215 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-2869 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider regis_uk is broken
During the global build at 2021-05-26-14-42-23, spider **regis_uk** failed with **33 features** and **35 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/regis_uk.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/regis_uk.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/regis_uk.geojson))
</issue>
<code>
[start of locations/spiders/regis_salon_uk.py]
1 import scrapy
2 from locations.items import GeojsonPointItem
3 import re
4
5 regex_am = r"\s?([Aa][Mm])"
6 regex_pm = r"\s?([Pp][Mm])"
7
8
9 class RegisUKSpider(scrapy.Spider):
10 name = "regis_uk"
11 item_attributes = { 'brand': "Regis Salon" }
12 allowed_domains = ["www.regissalons.co.uk"]
13 start_urls = ['https://www.regissalons.co.uk/salon-locator?show-all=yes']
14
15 def convert_hours(self, hours):
16 hours = [x.strip() for x in hours]
17 hours = [x for x in hours if x]
18 for i in range(len(hours)):
19 converted_times = ''
20 if hours[i] != "Closed":
21 from_hr, to_hr = [hr.strip() for hr in hours[i].split('–')]
22 if re.search(regex_am, from_hr):
23 from_hr = re.sub(regex_am, '', from_hr)
24 hour_min = from_hr.split(':')
25 if len(hour_min[0]) < 2:
26 hour_min[0].zfill(2)
27 converted_times += (":".join(hour_min)) + ' - '
28 else:
29 from_hr = re.sub(regex_pm, '', from_hr)
30 hour_min = from_hr.split(':')
31 if int(hour_min[0]) < 12:
32 hour_min[0] = str(12 + int(hour_min[0]))
33 converted_times += (":".join(hour_min)) + ' - '
34
35 if re.search(regex_am, to_hr):
36 to_hr = re.sub(regex_am, '', to_hr)
37 hour_min = to_hr.split(':')
38 if len(hour_min[0]) < 2:
39 hour_min[0].zfill(2)
40 if int(hour_min[0]) == 12:
41 hour_min[0] = '00'
42 converted_times += (":".join(hour_min))
43 else:
44 to_hr = re.sub(regex_pm, '', to_hr)
45 hour_min = to_hr.split(':')
46 if int(hour_min[0]) < 12:
47 hour_min[0] = str(12 + int(hour_min[0]))
48 converted_times += (":".join(hour_min))
49 else:
50 converted_times += "off"
51 hours[i] = converted_times
52 days = ["Mo", "Tu", "We", "Th", "Fr", "Sa", "Su"]
53 hours = ''.join('{} {} '.format(*t) for t in zip(days, hours))
54 return hours
55
56 def parse_store(self, response):
57 phone = response.xpath(
58 '//a[@class="phone-tracked-link"]/text()').extract_first().strip()
59 lat = response.xpath(
60 '//div[@id="map-aside"]/@data-lat').extract_first()
61 lon = response.xpath(
62 '//div[@id="map-aside"]/@data-lng').extract_first()
63 hours = response.xpath(
64 '//div[@class="container"]//p[contains(., "am")'
65 ' or contains(., "Closed")]/text()').extract()
66 hours = self.convert_hours(hours)
67
68 yield GeojsonPointItem(
69 ref=response.url,
70 phone=phone,
71 lat=lat,
72 lon=lon,
73 opening_hours=hours,
74 website=response.url
75 )
76
77 def parse(self, response):
78 stores = response.xpath('//ul[@class="list"]//a/@href').extract()
79 for store in stores:
80 yield scrapy.Request(store, callback=self.parse_store)
81
[end of locations/spiders/regis_salon_uk.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/regis_salon_uk.py b/locations/spiders/regis_salon_uk.py
--- a/locations/spiders/regis_salon_uk.py
+++ b/locations/spiders/regis_salon_uk.py
@@ -11,6 +11,7 @@
item_attributes = { 'brand': "Regis Salon" }
allowed_domains = ["www.regissalons.co.uk"]
start_urls = ['https://www.regissalons.co.uk/salon-locator?show-all=yes']
+ download_delay = 4.0
def convert_hours(self, hours):
hours = [x.strip() for x in hours]
@@ -21,20 +22,20 @@
from_hr, to_hr = [hr.strip() for hr in hours[i].split('–')]
if re.search(regex_am, from_hr):
from_hr = re.sub(regex_am, '', from_hr)
- hour_min = from_hr.split(':')
+ hour_min = re.split('[:.]', from_hr)
if len(hour_min[0]) < 2:
hour_min[0].zfill(2)
converted_times += (":".join(hour_min)) + ' - '
else:
from_hr = re.sub(regex_pm, '', from_hr)
- hour_min = from_hr.split(':')
+ hour_min = re.split('[:.]', from_hr)
if int(hour_min[0]) < 12:
hour_min[0] = str(12 + int(hour_min[0]))
converted_times += (":".join(hour_min)) + ' - '
if re.search(regex_am, to_hr):
to_hr = re.sub(regex_am, '', to_hr)
- hour_min = to_hr.split(':')
+ hour_min = re.split('[:.]', to_hr)
if len(hour_min[0]) < 2:
hour_min[0].zfill(2)
if int(hour_min[0]) == 12:
@@ -42,7 +43,7 @@
converted_times += (":".join(hour_min))
else:
to_hr = re.sub(regex_pm, '', to_hr)
- hour_min = to_hr.split(':')
+ hour_min = re.split('[:.]', to_hr)
if int(hour_min[0]) < 12:
hour_min[0] = str(12 + int(hour_min[0]))
converted_times += (":".join(hour_min))
@@ -77,4 +78,6 @@
def parse(self, response):
stores = response.xpath('//ul[@class="list"]//a/@href').extract()
for store in stores:
+ if '/salon-region/' in store:
+ continue
yield scrapy.Request(store, callback=self.parse_store)
| {"golden_diff": "diff --git a/locations/spiders/regis_salon_uk.py b/locations/spiders/regis_salon_uk.py\n--- a/locations/spiders/regis_salon_uk.py\n+++ b/locations/spiders/regis_salon_uk.py\n@@ -11,6 +11,7 @@\n item_attributes = { 'brand': \"Regis Salon\" }\n allowed_domains = [\"www.regissalons.co.uk\"]\n start_urls = ['https://www.regissalons.co.uk/salon-locator?show-all=yes']\n+ download_delay = 4.0\n \n def convert_hours(self, hours):\n hours = [x.strip() for x in hours]\n@@ -21,20 +22,20 @@\n from_hr, to_hr = [hr.strip() for hr in hours[i].split('\u2013')]\n if re.search(regex_am, from_hr):\n from_hr = re.sub(regex_am, '', from_hr)\n- hour_min = from_hr.split(':')\n+ hour_min = re.split('[:.]', from_hr)\n if len(hour_min[0]) < 2:\n hour_min[0].zfill(2)\n converted_times += (\":\".join(hour_min)) + ' - '\n else:\n from_hr = re.sub(regex_pm, '', from_hr)\n- hour_min = from_hr.split(':')\n+ hour_min = re.split('[:.]', from_hr)\n if int(hour_min[0]) < 12:\n hour_min[0] = str(12 + int(hour_min[0]))\n converted_times += (\":\".join(hour_min)) + ' - '\n \n if re.search(regex_am, to_hr):\n to_hr = re.sub(regex_am, '', to_hr)\n- hour_min = to_hr.split(':')\n+ hour_min = re.split('[:.]', to_hr)\n if len(hour_min[0]) < 2:\n hour_min[0].zfill(2)\n if int(hour_min[0]) == 12:\n@@ -42,7 +43,7 @@\n converted_times += (\":\".join(hour_min))\n else:\n to_hr = re.sub(regex_pm, '', to_hr)\n- hour_min = to_hr.split(':')\n+ hour_min = re.split('[:.]', to_hr)\n if int(hour_min[0]) < 12:\n hour_min[0] = str(12 + int(hour_min[0]))\n converted_times += (\":\".join(hour_min))\n@@ -77,4 +78,6 @@\n def parse(self, response):\n stores = response.xpath('//ul[@class=\"list\"]//a/@href').extract()\n for store in stores:\n+ if '/salon-region/' in store:\n+ continue\n yield scrapy.Request(store, callback=self.parse_store)\n", "issue": "Spider regis_uk is broken\nDuring the global build at 2021-05-26-14-42-23, spider **regis_uk** failed with **33 features** and **35 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/regis_uk.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/regis_uk.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/regis_uk.geojson))\n", "before_files": [{"content": "import scrapy\nfrom locations.items import GeojsonPointItem\nimport re\n\nregex_am = r\"\\s?([Aa][Mm])\"\nregex_pm = r\"\\s?([Pp][Mm])\"\n\n\nclass RegisUKSpider(scrapy.Spider):\n name = \"regis_uk\"\n item_attributes = { 'brand': \"Regis Salon\" }\n allowed_domains = [\"www.regissalons.co.uk\"]\n start_urls = ['https://www.regissalons.co.uk/salon-locator?show-all=yes']\n\n def convert_hours(self, hours):\n hours = [x.strip() for x in hours]\n hours = [x for x in hours if x]\n for i in range(len(hours)):\n converted_times = ''\n if hours[i] != \"Closed\":\n from_hr, to_hr = [hr.strip() for hr in hours[i].split('\u2013')]\n if re.search(regex_am, from_hr):\n from_hr = re.sub(regex_am, '', from_hr)\n hour_min = from_hr.split(':')\n if len(hour_min[0]) < 2:\n hour_min[0].zfill(2)\n converted_times += (\":\".join(hour_min)) + ' - '\n else:\n from_hr = re.sub(regex_pm, '', from_hr)\n hour_min = from_hr.split(':')\n if int(hour_min[0]) < 12:\n hour_min[0] = str(12 + int(hour_min[0]))\n converted_times += (\":\".join(hour_min)) + ' - '\n\n if re.search(regex_am, to_hr):\n to_hr = re.sub(regex_am, '', to_hr)\n hour_min = to_hr.split(':')\n if len(hour_min[0]) < 2:\n hour_min[0].zfill(2)\n if 
int(hour_min[0]) == 12:\n hour_min[0] = '00'\n converted_times += (\":\".join(hour_min))\n else:\n to_hr = re.sub(regex_pm, '', to_hr)\n hour_min = to_hr.split(':')\n if int(hour_min[0]) < 12:\n hour_min[0] = str(12 + int(hour_min[0]))\n converted_times += (\":\".join(hour_min))\n else:\n converted_times += \"off\"\n hours[i] = converted_times\n days = [\"Mo\", \"Tu\", \"We\", \"Th\", \"Fr\", \"Sa\", \"Su\"]\n hours = ''.join('{} {} '.format(*t) for t in zip(days, hours))\n return hours\n\n def parse_store(self, response):\n phone = response.xpath(\n '//a[@class=\"phone-tracked-link\"]/text()').extract_first().strip()\n lat = response.xpath(\n '//div[@id=\"map-aside\"]/@data-lat').extract_first()\n lon = response.xpath(\n '//div[@id=\"map-aside\"]/@data-lng').extract_first()\n hours = response.xpath(\n '//div[@class=\"container\"]//p[contains(., \"am\")'\n ' or contains(., \"Closed\")]/text()').extract()\n hours = self.convert_hours(hours)\n\n yield GeojsonPointItem(\n ref=response.url,\n phone=phone,\n lat=lat,\n lon=lon,\n opening_hours=hours,\n website=response.url\n )\n\n def parse(self, response):\n stores = response.xpath('//ul[@class=\"list\"]//a/@href').extract()\n for store in stores:\n yield scrapy.Request(store, callback=self.parse_store)\n", "path": "locations/spiders/regis_salon_uk.py"}]} | 1,661 | 614 |
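The core of the fix above is swapping `split(':')` for `re.split('[:.]', ...)` so that opening hours written with a dot separator parse the same way as ones written with a colon; the other changes (a `download_delay` and skipping `/salon-region/` links) reduce request rate and avoid non-store pages. A tiny self-contained check of the separator handling follows; the sample strings are made up rather than scraped from the site.

```python
import re

def hour_minute(raw):
    """Split an hour string on either ':' or '.', e.g. '9:00' or '9.00'."""
    return re.split(r"[:.]", raw)

for sample in ("9:00", "9.00", "11.30"):
    print(sample, "->", hour_minute(sample))
# 9:00 -> ['9', '00']
# 9.00 -> ['9', '00']
# 11.30 -> ['11', '30']
```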
gh_patches_debug_27291 | rasdani/github-patches | git_diff | uccser__cs-unplugged-302 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Django Debug Toolbar doesn't display in local Docker development environment
Created from work in #193.
</issue>
<code>
[start of csunplugged/config/settings/local.py]
1 # -*- coding: utf-8 -*-
2 """
3 Django settings for local development environment.
4
5 - Run in Debug mode
6 - Add custom dev application
7 - Add Django Debug Toolbar
8 - Add django-extensions
9 - Use console backend for emails
10 """
11
12 import socket
13 import os
14 from .base import * # noqa: F403
15
16 # DATABASE CONFIGURATION
17 # ----------------------------------------------------------------------------
18 # See: https://docs.djangoproject.com/en/dev/ref/settings/#databases
19 DATABASES = {
20 'default': env.db('DATABASE_URL'), # noqa: F405
21 }
22 DATABASES['default']['ATOMIC_REQUESTS'] = True
23
24 # DEBUG
25 # ----------------------------------------------------------------------------
26 DEBUG = env.bool('DJANGO_DEBUG', default=True) # noqa: F405
27 TEMPLATES[0]['OPTIONS']['debug'] = DEBUG # noqa: F405
28
29 # SECRET CONFIGURATION
30 # ----------------------------------------------------------------------------
31 # See: https://docs.djangoproject.com/en/dev/ref/settings/#secret-key
32 # Note: This key only used for development and testing.
33 SECRET_KEY = env('DJANGO_SECRET_KEY', default='l@@)w&&%&u37+sjz^lsx^+29y_333oid3ygxzucar^8o(axo*f') # noqa: F405
34
35 # Mail settings
36 # ----------------------------------------------------------------------------
37
38 EMAIL_PORT = 1025
39
40 EMAIL_HOST = 'localhost'
41 EMAIL_BACKEND = env('DJANGO_EMAIL_BACKEND', default='django.core.mail.backends.console.EmailBackend') # noqa: F405
42
43
44 # CACHING
45 # ----------------------------------------------------------------------------
46 CACHES = {
47 'default': {
48 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
49 'LOCATION': ''
50 }
51 }
52
53 # django-debug-toolbar
54 # ----------------------------------------------------------------------------
55 MIDDLEWARE += ['debug_toolbar.middleware.DebugToolbarMiddleware', ] # noqa: F405
56 INSTALLED_APPS += ['debug_toolbar', ] # noqa: F405
57
58 INTERNAL_IPS = ['127.0.0.1', '10.0.2.2', ]
59 # tricks to have debug toolbar when developing with docker
60 if os.environ.get('USE_DOCKER') == 'yes':
61 ip = socket.gethostbyname(socket.gethostname())
62 INTERNAL_IPS += [ip[:-1] + '1']
63
64 DEBUG_TOOLBAR_CONFIG = {
65 'DISABLE_PANELS': [
66 'debug_toolbar.panels.redirects.RedirectsPanel',
67 ],
68 'SHOW_TEMPLATE_CONTEXT': True,
69 }
70
71 # django-extensions
72 # ----------------------------------------------------------------------------
73 INSTALLED_APPS += ['django_extensions', ]
74
75 # TESTING
76 # ----------------------------------------------------------------------------
77 TEST_RUNNER = 'django.test.runner.DiscoverRunner'
78
79
80 # Your local stuff: Below this line define 3rd party library settings
81 # ----------------------------------------------------------------------------
82 LOCAL_APPS + ['dev.apps.DevConfig'] # noqa: F405
83
[end of csunplugged/config/settings/local.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/csunplugged/config/settings/local.py b/csunplugged/config/settings/local.py
--- a/csunplugged/config/settings/local.py
+++ b/csunplugged/config/settings/local.py
@@ -9,8 +9,6 @@
- Use console backend for emails
"""
-import socket
-import os
from .base import * # noqa: F403
# DATABASE CONFIGURATION
@@ -56,16 +54,23 @@
INSTALLED_APPS += ['debug_toolbar', ] # noqa: F405
INTERNAL_IPS = ['127.0.0.1', '10.0.2.2', ]
-# tricks to have debug toolbar when developing with docker
-if os.environ.get('USE_DOCKER') == 'yes':
- ip = socket.gethostbyname(socket.gethostname())
- INTERNAL_IPS += [ip[:-1] + '1']
+
+
+def show_django_debug_toolbar(request):
+ """Show Django Debug Toolbar in every request when running locally.
+
+ Args:
+ request: The request object.
+ """
+ return True
+
DEBUG_TOOLBAR_CONFIG = {
'DISABLE_PANELS': [
'debug_toolbar.panels.redirects.RedirectsPanel',
],
'SHOW_TEMPLATE_CONTEXT': True,
+ "SHOW_TOOLBAR_CALLBACK": show_django_debug_toolbar,
}
# django-extensions
@@ -79,4 +84,4 @@
# Your local stuff: Below this line define 3rd party library settings
# ----------------------------------------------------------------------------
-LOCAL_APPS + ['dev.apps.DevConfig'] # noqa: F405
+INSTALLED_APPS += ['dev.apps.DevConfig'] # noqa: F405
| {"golden_diff": "diff --git a/csunplugged/config/settings/local.py b/csunplugged/config/settings/local.py\n--- a/csunplugged/config/settings/local.py\n+++ b/csunplugged/config/settings/local.py\n@@ -9,8 +9,6 @@\n - Use console backend for emails\n \"\"\"\n \n-import socket\n-import os\n from .base import * # noqa: F403\n \n # DATABASE CONFIGURATION\n@@ -56,16 +54,23 @@\n INSTALLED_APPS += ['debug_toolbar', ] # noqa: F405\n \n INTERNAL_IPS = ['127.0.0.1', '10.0.2.2', ]\n-# tricks to have debug toolbar when developing with docker\n-if os.environ.get('USE_DOCKER') == 'yes':\n- ip = socket.gethostbyname(socket.gethostname())\n- INTERNAL_IPS += [ip[:-1] + '1']\n+\n+\n+def show_django_debug_toolbar(request):\n+ \"\"\"Show Django Debug Toolbar in every request when running locally.\n+\n+ Args:\n+ request: The request object.\n+ \"\"\"\n+ return True\n+\n \n DEBUG_TOOLBAR_CONFIG = {\n 'DISABLE_PANELS': [\n 'debug_toolbar.panels.redirects.RedirectsPanel',\n ],\n 'SHOW_TEMPLATE_CONTEXT': True,\n+ \"SHOW_TOOLBAR_CALLBACK\": show_django_debug_toolbar,\n }\n \n # django-extensions\n@@ -79,4 +84,4 @@\n \n # Your local stuff: Below this line define 3rd party library settings\n # ----------------------------------------------------------------------------\n-LOCAL_APPS + ['dev.apps.DevConfig'] # noqa: F405\n+INSTALLED_APPS += ['dev.apps.DevConfig'] # noqa: F405\n", "issue": "Django Debug Toolbar doesn't display in local Docker development environment\nCreated from work in #193.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nDjango settings for local development environment.\n\n- Run in Debug mode\n- Add custom dev application\n- Add Django Debug Toolbar\n- Add django-extensions\n- Use console backend for emails\n\"\"\"\n\nimport socket\nimport os\nfrom .base import * # noqa: F403\n\n# DATABASE CONFIGURATION\n# ----------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#databases\nDATABASES = {\n 'default': env.db('DATABASE_URL'), # noqa: F405\n}\nDATABASES['default']['ATOMIC_REQUESTS'] = True\n\n# DEBUG\n# ----------------------------------------------------------------------------\nDEBUG = env.bool('DJANGO_DEBUG', default=True) # noqa: F405\nTEMPLATES[0]['OPTIONS']['debug'] = DEBUG # noqa: F405\n\n# SECRET CONFIGURATION\n# ----------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#secret-key\n# Note: This key only used for development and testing.\nSECRET_KEY = env('DJANGO_SECRET_KEY', default='l@@)w&&%&u37+sjz^lsx^+29y_333oid3ygxzucar^8o(axo*f') # noqa: F405\n\n# Mail settings\n# ----------------------------------------------------------------------------\n\nEMAIL_PORT = 1025\n\nEMAIL_HOST = 'localhost'\nEMAIL_BACKEND = env('DJANGO_EMAIL_BACKEND', default='django.core.mail.backends.console.EmailBackend') # noqa: F405\n\n\n# CACHING\n# ----------------------------------------------------------------------------\nCACHES = {\n 'default': {\n 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',\n 'LOCATION': ''\n }\n}\n\n# django-debug-toolbar\n# ----------------------------------------------------------------------------\nMIDDLEWARE += ['debug_toolbar.middleware.DebugToolbarMiddleware', ] # noqa: F405\nINSTALLED_APPS += ['debug_toolbar', ] # noqa: F405\n\nINTERNAL_IPS = ['127.0.0.1', '10.0.2.2', ]\n# tricks to have debug toolbar when developing with docker\nif os.environ.get('USE_DOCKER') == 'yes':\n ip = 
socket.gethostbyname(socket.gethostname())\n INTERNAL_IPS += [ip[:-1] + '1']\n\nDEBUG_TOOLBAR_CONFIG = {\n 'DISABLE_PANELS': [\n 'debug_toolbar.panels.redirects.RedirectsPanel',\n ],\n 'SHOW_TEMPLATE_CONTEXT': True,\n}\n\n# django-extensions\n# ----------------------------------------------------------------------------\nINSTALLED_APPS += ['django_extensions', ]\n\n# TESTING\n# ----------------------------------------------------------------------------\nTEST_RUNNER = 'django.test.runner.DiscoverRunner'\n\n\n# Your local stuff: Below this line define 3rd party library settings\n# ----------------------------------------------------------------------------\nLOCAL_APPS + ['dev.apps.DevConfig'] # noqa: F405\n", "path": "csunplugged/config/settings/local.py"}]} | 1,326 | 374 |
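Rather than guessing the Docker bridge address for `INTERNAL_IPS`, the fix points django-debug-toolbar's `SHOW_TOOLBAR_CALLBACK` setting at a callback that always opts in, which is safe because this settings module is only loaded for local development. A minimal local-settings fragment showing just that piece (not the project's full configuration):

```python
# Fragment of a local/dev settings module, not the full cs-unplugged config.

def show_django_debug_toolbar(request):
    """Always display the toolbar; this settings module is dev-only."""
    return True

DEBUG_TOOLBAR_CONFIG = {
    "DISABLE_PANELS": ["debug_toolbar.panels.redirects.RedirectsPanel"],
    "SHOW_TEMPLATE_CONTEXT": True,
    # Bypass the INTERNAL_IPS check entirely, so the toolbar shows up even
    # when the request arrives via the Docker bridge network.
    "SHOW_TOOLBAR_CALLBACK": show_django_debug_toolbar,
}
```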
gh_patches_debug_12025 | rasdani/github-patches | git_diff | Showndarya__Hacktoberfest-435 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Travis test ignore first letter of filename for some reason
I'll try and figure out why, thought about simply renaming every file in the travis script but that requires a lot of work and overhead for little gain, it is certainly doable, you have to configure the git on the travis instance and make a new commit etc.
Might as well have a cron job or something run it recursively and periodically over the entirety of the repo and make a single commit...
</issue>
<code>
[start of .travis.py]
1 import json
2 import os
3 import re
4 import subprocess
5
6 # Get a diff between master and current.
7 try:
8 commit_range = os.environ["TRAVIS_COMMIT_RANGE"]
9 changed_files = subprocess.check_output(["git", "diff", "--name-only", commit_range])
10 except KeyError:
11 print("🔥 This should be run on Travis. Otherwise make sure TRAVIS_BRANCH is set.")
12 exit(1)
13
14 # Filter JSON files only.
15 changed_files_json = []
16 if changed_files:
17 changed_files = changed_files.decode()
18 for changed_file in changed_files.split('\n'):
19 if re.search(r"\.json$", changed_file):
20 changed_files_json.append(changed_file)
21
22
23 # Iterate over list of changed JSON files.
24 for changed_file_json in changed_files_json:
25 print(f"Checking file {changed_file_json}...")
26 there_was_an_error = False
27
28 if not changed_file_json[0].isupper():
29 there_was_an_error = True
30 print("🔥 File name not capitalized.")
31
32 try:
33 with open(changed_file_json) as data_file:
34 file_content = json.loads(data_file.read())
35 except json.decoder.JSONDecodeError:
36 there_was_an_error = True
37 print("🔥 JSON could not be parsed.")
38
39 if 'word' not in file_content:
40 there_was_an_error = True
41 print("🔥 Key 'word' not found.")
42
43 if not file_content["word"]:
44 there_was_an_error = True
45 print("🔥 Value for 'word' appears to be empty.")
46
47 if 'definitions' not in file_content:
48 there_was_an_error = True
49 print("🔥 Key 'definitions' not found.")
50
51 if not file_content["definitions"]:
52 there_was_an_error = True
53 print("🔥 Value for 'definitions' appears to be empty.")
54
55 if 'parts-of-speech' not in file_content:
56 there_was_an_error = True
57 print("🔥 Key 'parts-of-speech' not found.")
58
59 if not file_content["parts-of-speech"]:
60 there_was_an_error = True
61 print("🔥 Value for 'parts-of-speech' appears to be empty.")
62
63 if there_was_an_error:
64 exit(1)
65
[end of .travis.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/.travis.py b/.travis.py
--- a/.travis.py
+++ b/.travis.py
@@ -19,13 +19,13 @@
if re.search(r"\.json$", changed_file):
changed_files_json.append(changed_file)
-
+
# Iterate over list of changed JSON files.
for changed_file_json in changed_files_json:
print(f"Checking file {changed_file_json}...")
there_was_an_error = False
- if not changed_file_json[0].isupper():
+ if not os.path.basename(changed_file_json)[0].isupper():
there_was_an_error = True
print("🔥 File name not capitalized.")
| {"golden_diff": "diff --git a/.travis.py b/.travis.py\n--- a/.travis.py\n+++ b/.travis.py\n@@ -19,13 +19,13 @@\n if re.search(r\"\\.json$\", changed_file):\n changed_files_json.append(changed_file)\n \n-\n+ \n # Iterate over list of changed JSON files.\n for changed_file_json in changed_files_json:\n print(f\"Checking file {changed_file_json}...\")\n there_was_an_error = False\n \n- if not changed_file_json[0].isupper():\n+ if not os.path.basename(changed_file_json)[0].isupper():\n there_was_an_error = True\n print(\"\ud83d\udd25 File name not capitalized.\")\n", "issue": "Travis test ignore first letter of filename for some reason\nI'll try and figure out why, thought about simply renaming every file in the travis script but that requires alot of work and overhead for little gain, it is certainly doable, you have to configure the git on the travis instance and make a new commit etc.\r\nMight as well have a cron job or something to it recursively and periodically over the entirety of the repo and make a single commit...\n", "before_files": [{"content": "import json\nimport os\nimport re\nimport subprocess\n\n# Get a diff between master and current.\ntry:\n commit_range = os.environ[\"TRAVIS_COMMIT_RANGE\"]\n changed_files = subprocess.check_output([\"git\", \"diff\", \"--name-only\", commit_range])\nexcept KeyError:\n print(\"\ud83d\udd25 This should be run on Travis. Otherwise make sure TRAVIS_BRANCH is set.\")\n exit(1)\n\n# Filter JSON files only.\nchanged_files_json = []\nif changed_files:\n changed_files = changed_files.decode()\n for changed_file in changed_files.split('\\n'):\n if re.search(r\"\\.json$\", changed_file):\n changed_files_json.append(changed_file)\n\n\n# Iterate over list of changed JSON files.\nfor changed_file_json in changed_files_json:\n print(f\"Checking file {changed_file_json}...\")\n there_was_an_error = False\n\n if not changed_file_json[0].isupper():\n there_was_an_error = True\n print(\"\ud83d\udd25 File name not capitalized.\")\n\n try:\n with open(changed_file_json) as data_file:\n file_content = json.loads(data_file.read())\n except json.decoder.JSONDecodeError:\n there_was_an_error = True\n print(\"\ud83d\udd25 JSON could not be parsed.\")\n\n if 'word' not in file_content:\n there_was_an_error = True\n print(\"\ud83d\udd25 Key 'word' not found.\")\n\n if not file_content[\"word\"]:\n there_was_an_error = True\n print(\"\ud83d\udd25 Value for 'word' appears to be empty.\")\n\n if 'definitions' not in file_content:\n there_was_an_error = True\n print(\"\ud83d\udd25 Key 'definitions' not found.\")\n\n if not file_content[\"definitions\"]:\n there_was_an_error = True\n print(\"\ud83d\udd25 Value for 'definitions' appears to be empty.\")\n\n if 'parts-of-speech' not in file_content:\n there_was_an_error = True\n print(\"\ud83d\udd25 Key 'parts-of-speech' not found.\")\n\n if not file_content[\"parts-of-speech\"]:\n there_was_an_error = True\n print(\"\ud83d\udd25 Value for 'parts-of-speech' appears to be empty.\")\n\n if there_was_an_error:\n exit(1)\n", "path": ".travis.py"}]} | 1,219 | 153 |
gh_patches_debug_1870 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-1743 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support for Snowflake Secure Views
### Adding support for Secure View in Snowflake
When using the materialization feature, where you set the type of materialization, adding secure-view via {{ config(materialized='secure-view') }} would be beneficial.
### Current Work-around
Currently the solution for Snowflake secure views is running post-hook events to set the targeted views as secure, example: `alter view sv_mySecureTest set secure;`
This works, and each view that needs to be secured will need to be added to the post-hook event.
### Affects only Snowflake
This feature is specific to the Snowflake Cloud Data warehouse.
[https://docs.snowflake.net/manuals/user-guide/views-secure.html](url)
### This will help DBT Snowflake Developer / Non Developers
When creating a secure view in Snowflake, a developer can use 2 syntax commands
1. CREATE OR REPLACE SECURE VIEW...
2. Alter view <view_name> Set Secure
The first method will allow non-dbt user to render the DDL with the secure declaration as part of the DDL, the second statement is added to the end of the generated DDL however it may be ignored by developers unfamiliar with Snowflake Syntax, causing possible security issues, allowing unauthorized access to the View DDL by Read-Only roles in Snowflake.
</issue>
<code>
[start of plugins/snowflake/dbt/adapters/snowflake/impl.py]
1 from dbt.adapters.sql import SQLAdapter
2 from dbt.adapters.snowflake import SnowflakeConnectionManager
3 from dbt.adapters.snowflake import SnowflakeRelation
4 from dbt.utils import filter_null_values
5
6
7 class SnowflakeAdapter(SQLAdapter):
8 Relation = SnowflakeRelation
9 ConnectionManager = SnowflakeConnectionManager
10
11 AdapterSpecificConfigs = frozenset(
12 {"transient", "cluster_by", "automatic_clustering"}
13 )
14
15 @classmethod
16 def date_function(cls):
17 return "CURRENT_TIMESTAMP()"
18
19 @classmethod
20 def _catalog_filter_table(cls, table, manifest):
21 # On snowflake, users can set QUOTED_IDENTIFIERS_IGNORE_CASE, so force
22 # the column names to their lowercased forms.
23 lowered = table.rename(
24 column_names=[c.lower() for c in table.column_names]
25 )
26 return super()._catalog_filter_table(lowered, manifest)
27
28 def _make_match_kwargs(self, database, schema, identifier):
29 quoting = self.config.quoting
30 if identifier is not None and quoting["identifier"] is False:
31 identifier = identifier.upper()
32
33 if schema is not None and quoting["schema"] is False:
34 schema = schema.upper()
35
36 if database is not None and quoting["database"] is False:
37 database = database.upper()
38
39 return filter_null_values(
40 {"identifier": identifier, "schema": schema, "database": database}
41 )
42
[end of plugins/snowflake/dbt/adapters/snowflake/impl.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugins/snowflake/dbt/adapters/snowflake/impl.py b/plugins/snowflake/dbt/adapters/snowflake/impl.py
--- a/plugins/snowflake/dbt/adapters/snowflake/impl.py
+++ b/plugins/snowflake/dbt/adapters/snowflake/impl.py
@@ -9,7 +9,7 @@
ConnectionManager = SnowflakeConnectionManager
AdapterSpecificConfigs = frozenset(
- {"transient", "cluster_by", "automatic_clustering"}
+ {"transient", "cluster_by", "automatic_clustering", "secure"}
)
@classmethod
| {"golden_diff": "diff --git a/plugins/snowflake/dbt/adapters/snowflake/impl.py b/plugins/snowflake/dbt/adapters/snowflake/impl.py\n--- a/plugins/snowflake/dbt/adapters/snowflake/impl.py\n+++ b/plugins/snowflake/dbt/adapters/snowflake/impl.py\n@@ -9,7 +9,7 @@\n ConnectionManager = SnowflakeConnectionManager\n \n AdapterSpecificConfigs = frozenset(\n- {\"transient\", \"cluster_by\", \"automatic_clustering\"}\n+ {\"transient\", \"cluster_by\", \"automatic_clustering\", \"secure\"}\n )\n \n @classmethod\n", "issue": "Support for Snowflake Secure Views\n### Adding support for Secure View in Snowflake\r\nWhen using the Materialize feature where setting the type of materialization, adding secure-view to the {{ config(materialized='secure-view') }} would be beneficial.\r\n\r\n### Current Work-around\r\nCurrently the solution for Snowflake secure views is running post-hook events to set the targeted views as secure, example: `alter view sv_mySecureTest set secure;`\r\nThis works, and each view that needs to be secured will need to be added to the post-hook event.\r\n\r\n### Affects only Snowflake\r\nThis feature is specific to the Snowflake Cloud Data warehouse.\r\n[https://docs.snowflake.net/manuals/user-guide/views-secure.html](url)\r\n\r\n### This will help DBT Snowflake Developer / Non Developers\r\nWhen creating a secure view in Snowflake, a developer can use 2 syntax commands\r\n\r\n1. CREATE OR REPLACE SECURE VIEW...\r\n2. Alter view <view_name> Set Secure\r\n\r\nThe first method will allow non-dbt user to render the DDL with the secure declaration as part of the DDL, the second statement is added to the end of the generated DDL however it may be ignored by developers unfamiliar with Snowflake Syntax, causing possible security issues, allowing unauthorized access to the View DDL by Read-Only roles in Snowflake.\n", "before_files": [{"content": "from dbt.adapters.sql import SQLAdapter\nfrom dbt.adapters.snowflake import SnowflakeConnectionManager\nfrom dbt.adapters.snowflake import SnowflakeRelation\nfrom dbt.utils import filter_null_values\n\n\nclass SnowflakeAdapter(SQLAdapter):\n Relation = SnowflakeRelation\n ConnectionManager = SnowflakeConnectionManager\n\n AdapterSpecificConfigs = frozenset(\n {\"transient\", \"cluster_by\", \"automatic_clustering\"}\n )\n\n @classmethod\n def date_function(cls):\n return \"CURRENT_TIMESTAMP()\"\n\n @classmethod\n def _catalog_filter_table(cls, table, manifest):\n # On snowflake, users can set QUOTED_IDENTIFIERS_IGNORE_CASE, so force\n # the column names to their lowercased forms.\n lowered = table.rename(\n column_names=[c.lower() for c in table.column_names]\n )\n return super()._catalog_filter_table(lowered, manifest)\n\n def _make_match_kwargs(self, database, schema, identifier):\n quoting = self.config.quoting\n if identifier is not None and quoting[\"identifier\"] is False:\n identifier = identifier.upper()\n\n if schema is not None and quoting[\"schema\"] is False:\n schema = schema.upper()\n\n if database is not None and quoting[\"database\"] is False:\n database = database.upper()\n\n return filter_null_values(\n {\"identifier\": identifier, \"schema\": schema, \"database\": database}\n )\n", "path": "plugins/snowflake/dbt/adapters/snowflake/impl.py"}]} | 1,207 | 137 |
gh_patches_debug_1805 | rasdani/github-patches | git_diff | Mailu__Mailu-840 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Document the new setup utility
Title says all
</issue>
<code>
[start of setup/server.py]
1 import flask
2 import flask_bootstrap
3 import redis
4 import json
5 import os
6 import jinja2
7 import uuid
8 import string
9 import random
10 import ipaddress
11 import hashlib
12
13
14 version = os.getenv("this_version")
15 static_url_path = "/" + version + "/static"
16 app = flask.Flask(__name__, static_url_path=static_url_path)
17 flask_bootstrap.Bootstrap(app)
18 db = redis.StrictRedis(host='redis', port=6379, db=0)
19
20
21 def render_flavor(flavor, template, data):
22 return flask.render_template(
23 os.path.join(flavor, template),
24 **data
25 )
26
27
28 @app.add_template_global
29 def secret(length=16):
30 charset = string.ascii_uppercase + string.digits
31 return ''.join(
32 random.SystemRandom().choice(charset)
33 for _ in range(length)
34 )
35
36
37 def build_app(path):
38
39 app.jinja_env.trim_blocks = True
40 app.jinja_env.lstrip_blocks = True
41
42 @app.context_processor
43 def app_context():
44 return dict(versions=os.getenv("VERSIONS","master").split(','))
45
46 prefix_bp = flask.Blueprint(version, __name__)
47 prefix_bp.jinja_loader = jinja2.ChoiceLoader([
48 jinja2.FileSystemLoader(os.path.join(path, "templates")),
49 jinja2.FileSystemLoader(os.path.join(path, "flavors"))
50 ])
51
52 root_bp = flask.Blueprint("root", __name__)
53 root_bp.jinja_loader = jinja2.ChoiceLoader([
54 jinja2.FileSystemLoader(os.path.join(path, "templates")),
55 jinja2.FileSystemLoader(os.path.join(path, "flavors"))
56 ])
57
58 @prefix_bp.context_processor
59 @root_bp.context_processor
60 def bp_context(version=version):
61 return dict(version=version)
62
63 @prefix_bp.route("/")
64 @root_bp.route("/")
65 def wizard():
66 return flask.render_template('wizard.html')
67
68 @prefix_bp.route("/submit_flavor", methods=["POST"])
69 @root_bp.route("/submit_flavor", methods=["POST"])
70 def submit_flavor():
71 data = flask.request.form.copy()
72 steps = sorted(os.listdir(os.path.join(path, "templates", "steps", data["flavor"])))
73 return flask.render_template('wizard.html', flavor=data["flavor"], steps=steps)
74
75 @prefix_bp.route("/submit", methods=["POST"])
76 @root_bp.route("/submit", methods=["POST"])
77 def submit():
78 data = flask.request.form.copy()
79 data['uid'] = str(uuid.uuid4())
80 data['dns'] = str(ipaddress.IPv4Network(data['subnet'])[-2])
81 db.set(data['uid'], json.dumps(data))
82 return flask.redirect(flask.url_for('.setup', uid=data['uid']))
83
84 @prefix_bp.route("/setup/<uid>", methods=["GET"])
85 @root_bp.route("/setup/<uid>", methods=["GET"])
86 def setup(uid):
87 data = json.loads(db.get(uid))
88 flavor = data.get("flavor", "compose")
89 rendered = render_flavor(flavor, "setup.html", data)
90 return flask.render_template("setup.html", contents=rendered)
91
92 @prefix_bp.route("/file/<uid>/<filepath>", methods=["GET"])
93 @root_bp.route("/file/<uid>/<filepath>", methods=["GET"])
94 def file(uid, filepath):
95 data = json.loads(db.get(uid))
96 flavor = data.get("flavor", "compose")
97 return flask.Response(
98 render_flavor(flavor, filepath, data),
99 mimetype="application/text"
100 )
101
102 app.register_blueprint(prefix_bp, url_prefix="/{}".format(version))
103 app.register_blueprint(root_bp)
104
105
106 if __name__ == "__main__":
107 build_app("/tmp/mailutest")
108 app.run(debug=True)
109
[end of setup/server.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup/server.py b/setup/server.py
--- a/setup/server.py
+++ b/setup/server.py
@@ -11,7 +11,7 @@
import hashlib
-version = os.getenv("this_version")
+version = os.getenv("this_version", "master")
static_url_path = "/" + version + "/static"
app = flask.Flask(__name__, static_url_path=static_url_path)
flask_bootstrap.Bootstrap(app)
| {"golden_diff": "diff --git a/setup/server.py b/setup/server.py\n--- a/setup/server.py\n+++ b/setup/server.py\n@@ -11,7 +11,7 @@\n import hashlib\n \n \n-version = os.getenv(\"this_version\")\n+version = os.getenv(\"this_version\", \"master\")\n static_url_path = \"/\" + version + \"/static\"\n app = flask.Flask(__name__, static_url_path=static_url_path)\n flask_bootstrap.Bootstrap(app)\n", "issue": "Document the new setup utility\nTitle says all\n", "before_files": [{"content": "import flask\nimport flask_bootstrap\nimport redis\nimport json\nimport os\nimport jinja2\nimport uuid\nimport string\nimport random\nimport ipaddress\nimport hashlib\n\n\nversion = os.getenv(\"this_version\")\nstatic_url_path = \"/\" + version + \"/static\"\napp = flask.Flask(__name__, static_url_path=static_url_path)\nflask_bootstrap.Bootstrap(app)\ndb = redis.StrictRedis(host='redis', port=6379, db=0)\n\n\ndef render_flavor(flavor, template, data):\n return flask.render_template(\n os.path.join(flavor, template),\n **data\n )\n\n\[email protected]_template_global\ndef secret(length=16):\n charset = string.ascii_uppercase + string.digits\n return ''.join(\n random.SystemRandom().choice(charset)\n for _ in range(length)\n )\n\n\ndef build_app(path):\n\n app.jinja_env.trim_blocks = True\n app.jinja_env.lstrip_blocks = True\n\n @app.context_processor\n def app_context():\n return dict(versions=os.getenv(\"VERSIONS\",\"master\").split(','))\n\n prefix_bp = flask.Blueprint(version, __name__)\n prefix_bp.jinja_loader = jinja2.ChoiceLoader([\n jinja2.FileSystemLoader(os.path.join(path, \"templates\")),\n jinja2.FileSystemLoader(os.path.join(path, \"flavors\"))\n ])\n\n root_bp = flask.Blueprint(\"root\", __name__)\n root_bp.jinja_loader = jinja2.ChoiceLoader([\n jinja2.FileSystemLoader(os.path.join(path, \"templates\")),\n jinja2.FileSystemLoader(os.path.join(path, \"flavors\"))\n ])\n\n @prefix_bp.context_processor\n @root_bp.context_processor\n def bp_context(version=version):\n return dict(version=version)\n\n @prefix_bp.route(\"/\")\n @root_bp.route(\"/\")\n def wizard():\n return flask.render_template('wizard.html')\n\n @prefix_bp.route(\"/submit_flavor\", methods=[\"POST\"])\n @root_bp.route(\"/submit_flavor\", methods=[\"POST\"])\n def submit_flavor():\n data = flask.request.form.copy()\n steps = sorted(os.listdir(os.path.join(path, \"templates\", \"steps\", data[\"flavor\"])))\n return flask.render_template('wizard.html', flavor=data[\"flavor\"], steps=steps)\n\n @prefix_bp.route(\"/submit\", methods=[\"POST\"])\n @root_bp.route(\"/submit\", methods=[\"POST\"])\n def submit():\n data = flask.request.form.copy()\n data['uid'] = str(uuid.uuid4())\n data['dns'] = str(ipaddress.IPv4Network(data['subnet'])[-2])\n db.set(data['uid'], json.dumps(data))\n return flask.redirect(flask.url_for('.setup', uid=data['uid']))\n\n @prefix_bp.route(\"/setup/<uid>\", methods=[\"GET\"])\n @root_bp.route(\"/setup/<uid>\", methods=[\"GET\"])\n def setup(uid):\n data = json.loads(db.get(uid))\n flavor = data.get(\"flavor\", \"compose\")\n rendered = render_flavor(flavor, \"setup.html\", data)\n return flask.render_template(\"setup.html\", contents=rendered)\n\n @prefix_bp.route(\"/file/<uid>/<filepath>\", methods=[\"GET\"])\n @root_bp.route(\"/file/<uid>/<filepath>\", methods=[\"GET\"])\n def file(uid, filepath):\n data = json.loads(db.get(uid))\n flavor = data.get(\"flavor\", \"compose\")\n return flask.Response(\n render_flavor(flavor, filepath, data),\n mimetype=\"application/text\"\n )\n\n app.register_blueprint(prefix_bp, 
url_prefix=\"/{}\".format(version))\n app.register_blueprint(root_bp)\n\n\nif __name__ == \"__main__\":\n build_app(\"/tmp/mailutest\")\n app.run(debug=True)\n", "path": "setup/server.py"}]} | 1,571 | 94 |
gh_patches_debug_1293 | rasdani/github-patches | git_diff | CTPUG__wafer-643 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add support for Django 4.0
Currently failing tests (See #632)
</issue>
<code>
[start of setup.py]
1 from glob import glob
2 import subprocess
3
4 from setuptools import find_packages, setup
5
6 REQUIRES = [
7 'Django>=2.2,<4',
8 'bleach',
9 'bleach-allowlist',
10 'diff-match-patch',
11 'django-bakery>=0.12.0',
12 'django-crispy-forms',
13 'django-markitup>=4.0.0',
14 'django-registration-redux',
15 'django-reversion',
16 'django-select2',
17 'djangorestframework',
18 'drf-extensions>=0.5.0',
19 'icalendar>=4.0',
20 'jsonfield',
21 'markdown>=2.5',
22 'pillow',
23 'py3dns',
24 'pyLibravatar',
25 'pytz',
26 'requests',
27 ]
28
29 SOURCES = []
30
31
32 with open('README.rst', 'r') as f:
33 long_description = f.read()
34
35
36 def compile_translations():
37 try:
38 subprocess.check_call(['./manage.py', 'compilemessages'])
39 except subprocess.CalledProcessError:
40 print("WARNING: cannot compile translations.")
41 return glob('wafer/locale/*/LC_MESSAGES/django.mo')
42
43
44 setup(
45 name="wafer",
46 version="0.13.1a",
47 url='http://github.com/CTPUG/wafer',
48 license='ISC',
49 description="A wafer-thin Django library for running small conferences.",
50 long_description=long_description,
51 long_description_content_type="text/x-rst",
52 author='CTPUG',
53 author_email='[email protected]',
54 packages=find_packages(),
55 include_package_data=True,
56 install_requires=REQUIRES,
57 dependency_links=SOURCES,
58 data_files=[
59 ('locale', compile_translations()),
60 ],
61 setup_requires=[
62 # Add setuptools-git, so we get correct behaviour for
63 # include_package_data
64 'setuptools_git >= 1.0',
65 ],
66 classifiers=[
67 'Development Status :: 4 - Beta',
68 'Intended Audience :: Developers',
69 'License :: OSI Approved :: ISC License (ISCL)',
70 'Operating System :: POSIX',
71 'Programming Language :: Python :: 3',
72 'Programming Language :: Python :: 3.6',
73 'Programming Language :: Python :: 3.7',
74 'Programming Language :: Python :: 3.8',
75 'Framework :: Django',
76 'Topic :: Software Development :: Libraries :: Python Modules',
77 'Topic :: Internet :: WWW/HTTP',
78 ],
79 )
80
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -8,7 +8,7 @@
'bleach',
'bleach-allowlist',
'diff-match-patch',
- 'django-bakery>=0.12.0',
+ 'django-bakery>=0.13.0',
'django-crispy-forms',
'django-markitup>=4.0.0',
'django-registration-redux',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -8,7 +8,7 @@\n 'bleach',\n 'bleach-allowlist',\n 'diff-match-patch',\n- 'django-bakery>=0.12.0',\n+ 'django-bakery>=0.13.0',\n 'django-crispy-forms',\n 'django-markitup>=4.0.0',\n 'django-registration-redux',\n", "issue": "Add support for Django 4.0\nCurrently failing tests (See #632)\n", "before_files": [{"content": "from glob import glob\nimport subprocess\n\nfrom setuptools import find_packages, setup\n\nREQUIRES = [\n 'Django>=2.2,<4',\n 'bleach',\n 'bleach-allowlist',\n 'diff-match-patch',\n 'django-bakery>=0.12.0',\n 'django-crispy-forms',\n 'django-markitup>=4.0.0',\n 'django-registration-redux',\n 'django-reversion',\n 'django-select2',\n 'djangorestframework',\n 'drf-extensions>=0.5.0',\n 'icalendar>=4.0',\n 'jsonfield',\n 'markdown>=2.5',\n 'pillow',\n 'py3dns',\n 'pyLibravatar',\n 'pytz',\n 'requests',\n]\n\nSOURCES = []\n\n\nwith open('README.rst', 'r') as f:\n long_description = f.read()\n\n\ndef compile_translations():\n try:\n subprocess.check_call(['./manage.py', 'compilemessages'])\n except subprocess.CalledProcessError:\n print(\"WARNING: cannot compile translations.\")\n return glob('wafer/locale/*/LC_MESSAGES/django.mo')\n\n\nsetup(\n name=\"wafer\",\n version=\"0.13.1a\",\n url='http://github.com/CTPUG/wafer',\n license='ISC',\n description=\"A wafer-thin Django library for running small conferences.\",\n long_description=long_description,\n long_description_content_type=\"text/x-rst\",\n author='CTPUG',\n author_email='[email protected]',\n packages=find_packages(),\n include_package_data=True,\n install_requires=REQUIRES,\n dependency_links=SOURCES,\n data_files=[\n ('locale', compile_translations()),\n ],\n setup_requires=[\n # Add setuptools-git, so we get correct behaviour for\n # include_package_data\n 'setuptools_git >= 1.0',\n ],\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: ISC License (ISCL)',\n 'Operating System :: POSIX',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Framework :: Django',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Internet :: WWW/HTTP',\n ],\n)\n", "path": "setup.py"}]} | 1,243 | 109 |
gh_patches_debug_23755 | rasdani/github-patches | git_diff | akvo__akvo-rsr-2111 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Budget without value makes IATI export crash
## Test plan
GIVEN the IATI export (bottom right on project page)
WHEN a budget of the project has no value, but does have a value date / currency / label
THEN the `value` tag of the budget item should not be generated
AND no internal server error should be shown
## Issue description
See http://sentry.support.akvo-ops.org/rsr/live/group/832/
</issue>
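The crash is consistent with `value_element` being referenced when it was never created: in the current code the `value-date`, `currency` and label handling sits outside the `if budget_item.amount ...` guard. A minimal sketch of the behaviour the test plan asks for, with the attribute handling nested under the amount check (`add_budget_value` is a hypothetical helper, not part of the RSR codebase):

```python
from lxml import etree


def add_budget_value(element, budget_item):
    # Only emit a <value> tag when the budget item actually has an amount;
    # value-date, currency and the Akvo label are attributes of that tag,
    # so they are skipped together with it.
    if budget_item.amount is None:
        return
    value_element = etree.SubElement(element, "value")
    value_element.text = str(budget_item.amount)
    if budget_item.value_date:
        value_element.attrib['value-date'] = str(budget_item.value_date)
    if budget_item.currency:
        value_element.attrib['currency'] = budget_item.currency
    akvo_label = '{http://akvo.org/iati-activities}label'
    if budget_item.other_extra:
        value_element.attrib[akvo_label] = budget_item.other_extra
    elif budget_item.label and budget_item.label.label:
        value_element.attrib[akvo_label] = budget_item.label.label
```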
<code>
[start of akvo/iati/exports/elements/budget.py]
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7 from lxml import etree
8
9
10 def budget(project):
11 """
12 Generate the budget elements.
13
14 :param project: Project object
15 :return: A list of Etree elements
16 """
17 budget_elements = []
18
19 for budget_item in project.budget_items.all():
20 if budget_item.amount or budget_item.period_start or budget_item.period_end or \
21 budget_item.type or budget_item.status or budget_item.value_date or \
22 budget_item.currency or budget_item.other_extra or budget_item.label:
23 element = etree.Element("budget")
24
25 if budget_item.type:
26 element.attrib['type'] = budget_item.type
27
28 if budget_item.status:
29 element.attrib['status'] = budget_item.status
30
31 if budget_item.period_start:
32 period_start_element = etree.SubElement(element, "period-start")
33 period_start_element.attrib['iso-date'] = str(budget_item.period_start)
34
35 if budget_item.period_end:
36 period_end_element = etree.SubElement(element, "period-end")
37 period_end_element.attrib['iso-date'] = str(budget_item.period_end)
38
39 if budget_item.amount == 0 or budget_item.amount:
40 value_element = etree.SubElement(element, "value")
41 value_element.text = str(budget_item.amount)
42
43 if budget_item.value_date:
44 value_element.attrib['value-date'] = str(budget_item.value_date)
45
46 if budget_item.currency:
47 value_element.attrib['currency'] = budget_item.currency
48
49 if budget_item.other_extra:
50 value_element.attrib['{http://akvo.org/iati-activities}label'] = budget_item.\
51 other_extra
52 elif budget_item.label and budget_item.label.label:
53 value_element.attrib['{http://akvo.org/iati-activities}label'] = budget_item.label.\
54 label
55
56 budget_elements.append(element)
57
58 return budget_elements
59
[end of akvo/iati/exports/elements/budget.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/akvo/iati/exports/elements/budget.py b/akvo/iati/exports/elements/budget.py
--- a/akvo/iati/exports/elements/budget.py
+++ b/akvo/iati/exports/elements/budget.py
@@ -40,18 +40,17 @@
value_element = etree.SubElement(element, "value")
value_element.text = str(budget_item.amount)
- if budget_item.value_date:
- value_element.attrib['value-date'] = str(budget_item.value_date)
-
- if budget_item.currency:
- value_element.attrib['currency'] = budget_item.currency
-
- if budget_item.other_extra:
- value_element.attrib['{http://akvo.org/iati-activities}label'] = budget_item.\
- other_extra
- elif budget_item.label and budget_item.label.label:
- value_element.attrib['{http://akvo.org/iati-activities}label'] = budget_item.label.\
- label
+ if budget_item.value_date:
+ value_element.attrib['value-date'] = str(budget_item.value_date)
+
+ if budget_item.currency:
+ value_element.attrib['currency'] = budget_item.currency
+
+ akvo_label = '{http://akvo.org/iati-activities}label'
+ if budget_item.other_extra:
+ value_element.attrib[akvo_label] = budget_item.other_extra
+ elif budget_item.label and budget_item.label.label:
+ value_element.attrib[akvo_label] = budget_item.label.label
budget_elements.append(element)
| {"golden_diff": "diff --git a/akvo/iati/exports/elements/budget.py b/akvo/iati/exports/elements/budget.py\n--- a/akvo/iati/exports/elements/budget.py\n+++ b/akvo/iati/exports/elements/budget.py\n@@ -40,18 +40,17 @@\n value_element = etree.SubElement(element, \"value\")\n value_element.text = str(budget_item.amount)\n \n- if budget_item.value_date:\n- value_element.attrib['value-date'] = str(budget_item.value_date)\n-\n- if budget_item.currency:\n- value_element.attrib['currency'] = budget_item.currency\n-\n- if budget_item.other_extra:\n- value_element.attrib['{http://akvo.org/iati-activities}label'] = budget_item.\\\n- other_extra\n- elif budget_item.label and budget_item.label.label:\n- value_element.attrib['{http://akvo.org/iati-activities}label'] = budget_item.label.\\\n- label\n+ if budget_item.value_date:\n+ value_element.attrib['value-date'] = str(budget_item.value_date)\n+\n+ if budget_item.currency:\n+ value_element.attrib['currency'] = budget_item.currency\n+\n+ akvo_label = '{http://akvo.org/iati-activities}label'\n+ if budget_item.other_extra:\n+ value_element.attrib[akvo_label] = budget_item.other_extra\n+ elif budget_item.label and budget_item.label.label:\n+ value_element.attrib[akvo_label] = budget_item.label.label\n \n budget_elements.append(element)\n", "issue": "Budget without value makes IATI export crash\n## Test plan\n\nGIVEN the IATI export (bottom right on project page)\nWHEN a budget of the project has no value, but does have a value date / currency / label\nTHEN the `value` tag of the budget item should not be generated\nAND no internal server error should be shown\n## Issue description\n\nSee http://sentry.support.akvo-ops.org/rsr/live/group/832/\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom lxml import etree\n\n\ndef budget(project):\n \"\"\"\n Generate the budget elements.\n\n :param project: Project object\n :return: A list of Etree elements\n \"\"\"\n budget_elements = []\n\n for budget_item in project.budget_items.all():\n if budget_item.amount or budget_item.period_start or budget_item.period_end or \\\n budget_item.type or budget_item.status or budget_item.value_date or \\\n budget_item.currency or budget_item.other_extra or budget_item.label:\n element = etree.Element(\"budget\")\n\n if budget_item.type:\n element.attrib['type'] = budget_item.type\n\n if budget_item.status:\n element.attrib['status'] = budget_item.status\n\n if budget_item.period_start:\n period_start_element = etree.SubElement(element, \"period-start\")\n period_start_element.attrib['iso-date'] = str(budget_item.period_start)\n\n if budget_item.period_end:\n period_end_element = etree.SubElement(element, \"period-end\")\n period_end_element.attrib['iso-date'] = str(budget_item.period_end)\n\n if budget_item.amount == 0 or budget_item.amount:\n value_element = etree.SubElement(element, \"value\")\n value_element.text = str(budget_item.amount)\n\n if budget_item.value_date:\n value_element.attrib['value-date'] = str(budget_item.value_date)\n\n if budget_item.currency:\n value_element.attrib['currency'] = budget_item.currency\n\n if budget_item.other_extra:\n value_element.attrib['{http://akvo.org/iati-activities}label'] = budget_item.\\\n other_extra\n elif budget_item.label and budget_item.label.label:\n 
value_element.attrib['{http://akvo.org/iati-activities}label'] = budget_item.label.\\\n label\n\n budget_elements.append(element)\n\n return budget_elements\n", "path": "akvo/iati/exports/elements/budget.py"}]} | 1,213 | 347 |
gh_patches_debug_21441 | rasdani/github-patches | git_diff | mozilla__bugbug-197 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
In the DevDocNeeded model, consider bugs for which the dev-doc-needed keyword was removed as negative examples
See also #79.
</issue>
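A sketch of the labelling rule being asked for, written against the Bugzilla history structure that the model's `get_labels` below already iterates over. `label_from_history` is a hypothetical helper, not part of bugbug's API.

```python
def label_from_history(bug_data):
    """Illustrative only: returns 0 for a negative example, 1 for a positive one."""
    label = 0
    for entry in bug_data['history']:
        for change in entry['changes']:
            if change['field_name'] != 'keywords':
                continue
            # dev-doc-needed was taken off again without the docs ever being
            # completed: treat the bug as a negative example.
            if 'dev-doc-needed' in change['removed'] and 'dev-doc-complete' not in change['added']:
                label = 0
            elif any(kw in change['added'] for kw in ('dev-doc-needed', 'dev-doc-complete')):
                label = 1
    return label
```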
<code>
[start of bugbug/models/devdocneeded.py]
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import xgboost
7 from imblearn.under_sampling import RandomUnderSampler
8 from sklearn.compose import ColumnTransformer
9 from sklearn.feature_extraction import DictVectorizer
10 from sklearn.pipeline import Pipeline
11
12 from bugbug import bug_features
13 from bugbug import bugzilla
14 from bugbug.model import Model
15
16
17 class DevDocNeededModel(Model):
18 def __init__(self, lemmatization=False):
19 Model.__init__(self, lemmatization)
20
21 self.sampler = RandomUnderSampler(random_state=0)
22
23 feature_extractors = [
24 bug_features.has_str(),
25 bug_features.has_regression_range(),
26 bug_features.severity(),
27 bug_features.keywords({'dev-doc-needed', 'dev-doc-complete'}),
28 bug_features.is_coverity_issue(),
29 bug_features.has_crash_signature(),
30 bug_features.has_url(),
31 bug_features.has_w3c_url(),
32 bug_features.has_github_url(),
33 bug_features.whiteboard(),
34 bug_features.patches(),
35 bug_features.landings(),
36 bug_features.title(),
37 bug_features.product(),
38 bug_features.component(),
39
40 bug_features.commit_added(),
41 bug_features.commit_deleted(),
42 bug_features.commit_types(),
43 ]
44
45 cleanup_functions = [
46 bug_features.cleanup_fileref,
47 bug_features.cleanup_url,
48 bug_features.cleanup_synonyms,
49 ]
50
51 self.extraction_pipeline = Pipeline([
52 ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions, rollback=True, rollback_when=self.rollback, commit_data=True)),
53 ('union', ColumnTransformer([
54 ('data', DictVectorizer(), 'data'),
55
56 ('title', self.text_vectorizer(), 'title'),
57
58 ('comments', self.text_vectorizer(), 'comments'),
59 ])),
60 ])
61
62 self.clf = xgboost.XGBClassifier(n_jobs=16)
63 self.clf.set_params(predictor='cpu_predictor')
64
65 def rollback(self, change):
66 return change['field_name'] == 'keywords' and any(keyword in change['added'] for keyword in ['dev-doc-needed', 'dev-doc-complete'])
67
68 def get_labels(self):
69 classes = {}
70
71 for bug_data in bugzilla.get_bugs():
72 bug_id = int(bug_data['id'])
73
74 for entry in bug_data['history']:
75 for change in entry['changes']:
76 if change['field_name'] == 'keywords' and any(keyword in change['added'] for keyword in ['dev-doc-needed', 'dev-doc-complete']):
77 classes[bug_id] = 1
78
79 if bug_id not in classes:
80 classes[bug_id] = 0
81
82 return classes
83
84 def get_feature_names(self):
85 return self.extraction_pipeline.named_steps['union'].get_feature_names()
86
[end of bugbug/models/devdocneeded.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bugbug/models/devdocneeded.py b/bugbug/models/devdocneeded.py
--- a/bugbug/models/devdocneeded.py
+++ b/bugbug/models/devdocneeded.py
@@ -73,7 +73,14 @@
for entry in bug_data['history']:
for change in entry['changes']:
- if change['field_name'] == 'keywords' and any(keyword in change['added'] for keyword in ['dev-doc-needed', 'dev-doc-complete']):
+ # Bugs that get dev-doc-needed removed from them at some point after it's been added (this suggests a false positive among human-analyzed bugs)
+ if change['field_name'] == 'keywords' and 'dev-doc-needed' in change['removed'] and 'dev-doc-complete' not in change['added']:
+ classes[bug_id] = 0
+ # Bugs that go from dev-doc-needed to dev-doc-complete are guaranteed to be good
+ # Bugs that go from not having dev-doc-needed to having dev-doc-complete are bugs
+ # that were missed by previous scans through content but someone realized it
+ # should have been flagged and updated the docs, found the docs already updated.
+ elif change['field_name'] == 'keywords' and any(keyword in change['added'] for keyword in ['dev-doc-needed', 'dev-doc-complete']):
classes[bug_id] = 1
if bug_id not in classes:
| {"golden_diff": "diff --git a/bugbug/models/devdocneeded.py b/bugbug/models/devdocneeded.py\n--- a/bugbug/models/devdocneeded.py\n+++ b/bugbug/models/devdocneeded.py\n@@ -73,7 +73,14 @@\n \n for entry in bug_data['history']:\n for change in entry['changes']:\n- if change['field_name'] == 'keywords' and any(keyword in change['added'] for keyword in ['dev-doc-needed', 'dev-doc-complete']):\n+ # Bugs that get dev-doc-needed removed from them at some point after it's been added (this suggests a false positive among human-analyzed bugs)\n+ if change['field_name'] == 'keywords' and 'dev-doc-needed' in change['removed'] and 'dev-doc-complete' not in change['added']:\n+ classes[bug_id] = 0\n+ # Bugs that go from dev-doc-needed to dev-doc-complete are guaranteed to be good\n+ # Bugs that go from not having dev-doc-needed to having dev-doc-complete are bugs\n+ # that were missed by previous scans through content but someone realized it\n+ # should have been flagged and updated the docs, found the docs already updated.\n+ elif change['field_name'] == 'keywords' and any(keyword in change['added'] for keyword in ['dev-doc-needed', 'dev-doc-complete']):\n classes[bug_id] = 1\n \n if bug_id not in classes:\n", "issue": "In the DevDocNeeded model, consider bugs for which the dev-doc-needed keyword was removed as negative examples\nSee also #79.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport xgboost\nfrom imblearn.under_sampling import RandomUnderSampler\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.pipeline import Pipeline\n\nfrom bugbug import bug_features\nfrom bugbug import bugzilla\nfrom bugbug.model import Model\n\n\nclass DevDocNeededModel(Model):\n def __init__(self, lemmatization=False):\n Model.__init__(self, lemmatization)\n\n self.sampler = RandomUnderSampler(random_state=0)\n\n feature_extractors = [\n bug_features.has_str(),\n bug_features.has_regression_range(),\n bug_features.severity(),\n bug_features.keywords({'dev-doc-needed', 'dev-doc-complete'}),\n bug_features.is_coverity_issue(),\n bug_features.has_crash_signature(),\n bug_features.has_url(),\n bug_features.has_w3c_url(),\n bug_features.has_github_url(),\n bug_features.whiteboard(),\n bug_features.patches(),\n bug_features.landings(),\n bug_features.title(),\n bug_features.product(),\n bug_features.component(),\n\n bug_features.commit_added(),\n bug_features.commit_deleted(),\n bug_features.commit_types(),\n ]\n\n cleanup_functions = [\n bug_features.cleanup_fileref,\n bug_features.cleanup_url,\n bug_features.cleanup_synonyms,\n ]\n\n self.extraction_pipeline = Pipeline([\n ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions, rollback=True, rollback_when=self.rollback, commit_data=True)),\n ('union', ColumnTransformer([\n ('data', DictVectorizer(), 'data'),\n\n ('title', self.text_vectorizer(), 'title'),\n\n ('comments', self.text_vectorizer(), 'comments'),\n ])),\n ])\n\n self.clf = xgboost.XGBClassifier(n_jobs=16)\n self.clf.set_params(predictor='cpu_predictor')\n\n def rollback(self, change):\n return change['field_name'] == 'keywords' and any(keyword in change['added'] for keyword in ['dev-doc-needed', 'dev-doc-complete'])\n\n def get_labels(self):\n classes = {}\n\n for bug_data in bugzilla.get_bugs():\n 
bug_id = int(bug_data['id'])\n\n for entry in bug_data['history']:\n for change in entry['changes']:\n if change['field_name'] == 'keywords' and any(keyword in change['added'] for keyword in ['dev-doc-needed', 'dev-doc-complete']):\n classes[bug_id] = 1\n\n if bug_id not in classes:\n classes[bug_id] = 0\n\n return classes\n\n def get_feature_names(self):\n return self.extraction_pipeline.named_steps['union'].get_feature_names()\n", "path": "bugbug/models/devdocneeded.py"}]} | 1,359 | 312 |
gh_patches_debug_34208 | rasdani/github-patches | git_diff | e-valuation__EvaP-353 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Grade preview
There must be an option for FSR members to see the grades for a course before it is published.
This should be available in the "in evaluation", "evaluated" and "reviewed" states.
</issue>
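A minimal sketch of the state filter this implies for the results views, assuming the course state names used in the code below. `visible_courses` is an illustrative helper, not EvaP's actual API.

```python
STAFF_PREVIEW_STATES = ['inEvaluation', 'evaluated', 'reviewed', 'published']


def visible_courses(semester, user):
    # Staff (FSR) members may preview unpublished results; everyone else
    # only ever sees published courses.
    if user.is_staff:
        return semester.course_set.filter(state__in=STAFF_PREVIEW_STATES)
    return semester.course_set.filter(state="published")
```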
<code>
[start of evap/results/views.py]
1 from django.conf import settings
2 from django.http import HttpResponse
3 from django.shortcuts import get_object_or_404, render_to_response
4 from django.template import RequestContext
5 from django.utils.translation import get_language
6
7 from evap.evaluation.auth import login_required, fsr_required
8 from evap.evaluation.models import Semester
9 from evap.evaluation.tools import calculate_results, calculate_average_and_medium_grades, TextResult, can_publish_grades
10
11 from evap.results.exporters import ExcelExporter
12
13
14 @login_required
15 def index(request):
16 semesters = Semester.get_all_with_published_courses()
17
18 return render_to_response(
19 "results_index.html",
20 dict(semesters=semesters),
21 context_instance=RequestContext(request))
22
23
24 @login_required
25 def semester_detail(request, semester_id):
26 semester = get_object_or_404(Semester, id=semester_id)
27 courses = list(semester.course_set.filter(state="published"))
28
29 # annotate each course object with its grades
30 for course in courses:
31 # first, make sure that there are no preexisting grade attributes
32 course.avg_grade, course.med_grade = calculate_average_and_medium_grades(course)
33 course.can_publish_grades = can_publish_grades(course, request.user.is_staff)
34
35 return render_to_response(
36 "results_semester_detail.html",
37 dict(
38 semester=semester,
39 courses=courses
40 ),
41 context_instance=RequestContext(request))
42
43
44 @fsr_required
45 def semester_export(request, semester_id):
46 semester = get_object_or_404(Semester, id=semester_id)
47
48 filename = "Evaluation-%s-%s.xls" % (semester.name, get_language())
49
50 response = HttpResponse(mimetype="application/vnd.ms-excel")
51 response["Content-Disposition"] = "attachment; filename=\"%s\"" % filename
52
53 exporter = ExcelExporter(semester)
54
55 if 'all' in request.GET:
56 exporter.export(response, True)
57 else:
58 exporter.export(response)
59
60 return response
61
62
63 @login_required
64 def course_detail(request, semester_id, course_id):
65 semester = get_object_or_404(Semester, id=semester_id)
66 course = get_object_or_404(semester.course_set.filter(state="published"), id=course_id)
67
68 sections = calculate_results(course, request.user.is_staff)
69
70 if (request.user.is_staff == False): # if user is not a student representative
71 # remove TextResults if user is neither the evaluated person (or a delegate) nor responsible for the course (or a delegate)
72 for section in sections:
73 if not user_can_see_textresults(request.user, course, section):
74 for index, result in list(enumerate(section.results))[::-1]:
75 if isinstance(section.results[index], TextResult):
76 del section.results[index]
77
78 # remove empty sections
79 sections = [section for section in sections if section.results]
80
81 # check whether results are published
82 published = course.num_voters >= settings.MIN_ANSWER_COUNT and float(course.num_voters) / course.num_participants >= settings.MIN_ANSWER_PERCENTAGE
83
84 # show a publishing warning to fsr members when the results are not publicly available
85 warning = (not published) and request.user.is_staff
86
87 return render_to_response(
88 "results_course_detail.html",
89 dict(
90 course=course,
91 sections=sections,
92 warning=warning
93 ),
94 context_instance=RequestContext(request))
95
96
97 def user_can_see_textresults(user, course, section):
98 if section.contributor == user:
99 return True
100 if course.is_user_responsible_or_delegate(user):
101 return True
102
103 represented_userprofiles = user.represented_users.all()
104 represented_users = [profile.user for profile in represented_userprofiles]
105 if section.contributor in represented_users:
106 return True
107
108 return False
109
[end of evap/results/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/evap/results/views.py b/evap/results/views.py
--- a/evap/results/views.py
+++ b/evap/results/views.py
@@ -63,7 +63,13 @@
@login_required
def course_detail(request, semester_id, course_id):
semester = get_object_or_404(Semester, id=semester_id)
- course = get_object_or_404(semester.course_set.filter(state="published"), id=course_id)
+
+ # staff can see preview of results even when course is not published
+ if request.user.is_staff:
+ course = get_object_or_404(semester.course_set.filter(state__in=['inEvaluation', 'evaluated', 'reviewed', 'published']), id=course_id)
+ # everyone else can only see published results
+ else:
+ course = get_object_or_404(semester.course_set.filter(state="published"), id=course_id)
sections = calculate_results(course, request.user.is_staff)
@@ -78,18 +84,24 @@
# remove empty sections
sections = [section for section in sections if section.results]
- # check whether results are published
- published = course.num_voters >= settings.MIN_ANSWER_COUNT and float(course.num_voters) / course.num_participants >= settings.MIN_ANSWER_PERCENTAGE
+ # show a warning if course is still in evaluation (for staff preview)
+ evaluation_warning = course.state != 'published'
+
+ # check whether course has a sufficient number of votes for publishing it
+ sufficient_votes = course.num_voters >= settings.MIN_ANSWER_COUNT and float(course.num_voters) / course.num_participants >= settings.MIN_ANSWER_PERCENTAGE
- # show a publishing warning to fsr members when the results are not publicly available
- warning = (not published) and request.user.is_staff
+ # results for a course might not be visible because there are not enough answers
+ # but it can still be "published" e.g. to show the comment results to lecturers
+ # the FSR can still see all results but gets a warning message
+ sufficient_votes_warning = (not sufficient_votes) and request.user.is_staff
return render_to_response(
"results_course_detail.html",
dict(
course=course,
sections=sections,
- warning=warning
+ evaluation_warning=evaluation_warning,
+ sufficient_votes_warning=sufficient_votes_warning
),
context_instance=RequestContext(request))
| {"golden_diff": "diff --git a/evap/results/views.py b/evap/results/views.py\n--- a/evap/results/views.py\n+++ b/evap/results/views.py\n@@ -63,7 +63,13 @@\n @login_required\n def course_detail(request, semester_id, course_id):\n semester = get_object_or_404(Semester, id=semester_id)\n- course = get_object_or_404(semester.course_set.filter(state=\"published\"), id=course_id)\n+ \n+ # staff can see preview of results even when course is not published\n+ if request.user.is_staff:\n+ course = get_object_or_404(semester.course_set.filter(state__in=['inEvaluation', 'evaluated', 'reviewed', 'published']), id=course_id)\n+ # everyone else can only see published results\n+ else:\n+ course = get_object_or_404(semester.course_set.filter(state=\"published\"), id=course_id)\n \n sections = calculate_results(course, request.user.is_staff)\n \n@@ -78,18 +84,24 @@\n # remove empty sections\n sections = [section for section in sections if section.results]\n \n- # check whether results are published\n- published = course.num_voters >= settings.MIN_ANSWER_COUNT and float(course.num_voters) / course.num_participants >= settings.MIN_ANSWER_PERCENTAGE\n+ # show a warning if course is still in evaluation (for staff preview)\n+ evaluation_warning = course.state != 'published'\n+\n+ # check whether course has a sufficient number of votes for publishing it\n+ sufficient_votes = course.num_voters >= settings.MIN_ANSWER_COUNT and float(course.num_voters) / course.num_participants >= settings.MIN_ANSWER_PERCENTAGE\n \n- # show a publishing warning to fsr members when the results are not publicly available\n- warning = (not published) and request.user.is_staff\n+ # results for a course might not be visible because there are not enough answers\n+ # but it can still be \"published\" e.g. 
to show the comment results to lecturers\n+ # the FSR can still see all results but gets a warning message\n+ sufficient_votes_warning = (not sufficient_votes) and request.user.is_staff\n \n return render_to_response(\n \"results_course_detail.html\",\n dict(\n course=course,\n sections=sections,\n- warning=warning\n+ evaluation_warning=evaluation_warning,\n+ sufficient_votes_warning=sufficient_votes_warning\n ),\n context_instance=RequestContext(request))\n", "issue": "Grade preview\nThere must be an option for FSR members to see the grades for a course before publishing it.\nThis should be available in \"in evaluation\", \"evaluated\" and \"reviewed\".\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.http import HttpResponse\nfrom django.shortcuts import get_object_or_404, render_to_response\nfrom django.template import RequestContext\nfrom django.utils.translation import get_language\n\nfrom evap.evaluation.auth import login_required, fsr_required\nfrom evap.evaluation.models import Semester\nfrom evap.evaluation.tools import calculate_results, calculate_average_and_medium_grades, TextResult, can_publish_grades\n\nfrom evap.results.exporters import ExcelExporter\n\n\n@login_required\ndef index(request):\n semesters = Semester.get_all_with_published_courses()\n\n return render_to_response(\n \"results_index.html\",\n dict(semesters=semesters),\n context_instance=RequestContext(request))\n\n\n@login_required\ndef semester_detail(request, semester_id):\n semester = get_object_or_404(Semester, id=semester_id)\n courses = list(semester.course_set.filter(state=\"published\"))\n\n # annotate each course object with its grades\n for course in courses:\n # first, make sure that there are no preexisting grade attributes\n course.avg_grade, course.med_grade = calculate_average_and_medium_grades(course)\n course.can_publish_grades = can_publish_grades(course, request.user.is_staff)\n\n return render_to_response(\n \"results_semester_detail.html\",\n dict(\n semester=semester,\n courses=courses\n ),\n context_instance=RequestContext(request))\n\n\n@fsr_required\ndef semester_export(request, semester_id):\n semester = get_object_or_404(Semester, id=semester_id)\n\n filename = \"Evaluation-%s-%s.xls\" % (semester.name, get_language())\n\n response = HttpResponse(mimetype=\"application/vnd.ms-excel\")\n response[\"Content-Disposition\"] = \"attachment; filename=\\\"%s\\\"\" % filename\n\n exporter = ExcelExporter(semester)\n\n if 'all' in request.GET:\n exporter.export(response, True)\n else:\n exporter.export(response)\n\n return response\n\n\n@login_required\ndef course_detail(request, semester_id, course_id):\n semester = get_object_or_404(Semester, id=semester_id)\n course = get_object_or_404(semester.course_set.filter(state=\"published\"), id=course_id)\n\n sections = calculate_results(course, request.user.is_staff)\n\n if (request.user.is_staff == False): # if user is not a student representative\n # remove TextResults if user is neither the evaluated person (or a delegate) nor responsible for the course (or a delegate)\n for section in sections:\n if not user_can_see_textresults(request.user, course, section):\n for index, result in list(enumerate(section.results))[::-1]:\n if isinstance(section.results[index], TextResult):\n del section.results[index]\n\n # remove empty sections\n sections = [section for section in sections if section.results]\n\n # check whether results are published\n published = course.num_voters >= settings.MIN_ANSWER_COUNT and 
float(course.num_voters) / course.num_participants >= settings.MIN_ANSWER_PERCENTAGE\n\n # show a publishing warning to fsr members when the results are not publicly available\n warning = (not published) and request.user.is_staff\n\n return render_to_response(\n \"results_course_detail.html\",\n dict(\n course=course,\n sections=sections,\n warning=warning\n ),\n context_instance=RequestContext(request))\n\n\ndef user_can_see_textresults(user, course, section):\n if section.contributor == user:\n return True\n if course.is_user_responsible_or_delegate(user):\n return True\n\n represented_userprofiles = user.represented_users.all()\n represented_users = [profile.user for profile in represented_userprofiles]\n if section.contributor in represented_users:\n return True\n\n return False\n", "path": "evap/results/views.py"}]} | 1,620 | 553 |
gh_patches_debug_58558 | rasdani/github-patches | git_diff | roboflow__supervision-219 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Development - version issue
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
I have not installed `supervision`; I am running it directly from the source tree for development purposes.
When I import the supervision code, I get the following error:
```
raise PackageNotFoundError(name)
importlib.metadata.PackageNotFoundError: supervision
```
After a quick investigation, I found that the version lookup in supervision's `__init__.py` is what raises this. If I comment that line out, the bug is gone.
@onuralpszr Can you take a look?
I think the failure should be ignorable: if no valid version can be found, fall back to a `development` version. Though I do not have a concrete idea of how best to tackle it.
### Environment
_No response_
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR!
</issue>
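One possible shape of the fallback the reporter suggests: catch the metadata lookup failure and report a placeholder version when running from an uninstalled checkout. This is a sketch, not necessarily how the maintainers will resolve it.

```python
import importlib.metadata as importlib_metadata

try:
    __version__ = importlib_metadata.version(__package__ or __name__)
except importlib_metadata.PackageNotFoundError:
    # Running straight from a source checkout that was never pip-installed.
    __version__ = "development"
```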
<code>
[start of supervision/__init__.py]
1 import importlib.metadata as importlib_metadata
2
3 __version__ = importlib_metadata.version(__package__)
4
5
6 from supervision.classification.core import Classifications
7 from supervision.dataset.core import (
8 BaseDataset,
9 ClassificationDataset,
10 DetectionDataset,
11 )
12 from supervision.detection.annotate import BoxAnnotator, MaskAnnotator
13 from supervision.detection.core import Detections
14 from supervision.detection.line_counter import LineZone, LineZoneAnnotator
15 from supervision.detection.tools.polygon_zone import PolygonZone, PolygonZoneAnnotator
16 from supervision.detection.utils import (
17 box_iou_batch,
18 filter_polygons_by_area,
19 mask_to_polygons,
20 mask_to_xyxy,
21 non_max_suppression,
22 polygon_to_mask,
23 polygon_to_xyxy,
24 )
25 from supervision.draw.color import Color, ColorPalette
26 from supervision.draw.utils import draw_filled_rectangle, draw_polygon, draw_text
27 from supervision.geometry.core import Point, Position, Rect
28 from supervision.geometry.utils import get_polygon_center
29 from supervision.metrics.detection import ConfusionMatrix
30 from supervision.utils.file import list_files_with_extensions
31 from supervision.utils.image import ImageSink, crop
32 from supervision.utils.notebook import plot_image, plot_images_grid
33 from supervision.utils.video import (
34 VideoInfo,
35 VideoSink,
36 get_video_frames_generator,
37 process_video,
38 )
39
[end of supervision/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/supervision/__init__.py b/supervision/__init__.py
--- a/supervision/__init__.py
+++ b/supervision/__init__.py
@@ -1,6 +1,10 @@
import importlib.metadata as importlib_metadata
-__version__ = importlib_metadata.version(__package__)
+try:
+ # This will read version from pyproject.toml
+ __version__ = importlib_metadata.version(__package__ or __name__)
+except importlib_metadata.PackageNotFoundError:
+ __version__ = "development"
from supervision.classification.core import Classifications
| {"golden_diff": "diff --git a/supervision/__init__.py b/supervision/__init__.py\n--- a/supervision/__init__.py\n+++ b/supervision/__init__.py\n@@ -1,6 +1,10 @@\n import importlib.metadata as importlib_metadata\n \n-__version__ = importlib_metadata.version(__package__)\n+try:\n+ # This will read version from pyproject.toml\n+ __version__ = importlib_metadata.version(__package__ or __name__)\n+except importlib_metadata.PackageNotFoundError:\n+ __version__ = \"development\"\n \n \n from supervision.classification.core import Classifications\n", "issue": "Developement - version issue\n### Search before asking\n\n- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.\n\n\n### Bug\n\nI have not installed `supervision` but running directly for developement purpose.\r\n\r\nBut I got an following error when I import supervision code:\r\n\r\n```\r\n raise PackageNotFoundError(name)\r\nimportlib.metadata.PackageNotFoundError: supervision\r\n```\r\n\r\nAfter a quick investigation, it is found that `__init__.py` of `supervision` where version information used. It is creating an issue. If I comment this line and the bug is gone.\r\n\r\n@onuralpszr Can you take a look?\r\n\r\n\r\nI think it should be ignorable, if valid version is not found then use `development` version. Though, I do not have concret idea, how to tackle it.\r\n\r\n\r\n\r\n\n\n### Environment\n\n_No response_\n\n### Minimal Reproducible Example\n\n_No response_\n\n### Additional\n\n_No response_\n\n### Are you willing to submit a PR?\n\n- [X] Yes I'd like to help by submitting a PR!\n", "before_files": [{"content": "import importlib.metadata as importlib_metadata\n\n__version__ = importlib_metadata.version(__package__)\n\n\nfrom supervision.classification.core import Classifications\nfrom supervision.dataset.core import (\n BaseDataset,\n ClassificationDataset,\n DetectionDataset,\n)\nfrom supervision.detection.annotate import BoxAnnotator, MaskAnnotator\nfrom supervision.detection.core import Detections\nfrom supervision.detection.line_counter import LineZone, LineZoneAnnotator\nfrom supervision.detection.tools.polygon_zone import PolygonZone, PolygonZoneAnnotator\nfrom supervision.detection.utils import (\n box_iou_batch,\n filter_polygons_by_area,\n mask_to_polygons,\n mask_to_xyxy,\n non_max_suppression,\n polygon_to_mask,\n polygon_to_xyxy,\n)\nfrom supervision.draw.color import Color, ColorPalette\nfrom supervision.draw.utils import draw_filled_rectangle, draw_polygon, draw_text\nfrom supervision.geometry.core import Point, Position, Rect\nfrom supervision.geometry.utils import get_polygon_center\nfrom supervision.metrics.detection import ConfusionMatrix\nfrom supervision.utils.file import list_files_with_extensions\nfrom supervision.utils.image import ImageSink, crop\nfrom supervision.utils.notebook import plot_image, plot_images_grid\nfrom supervision.utils.video import (\n VideoInfo,\n VideoSink,\n get_video_frames_generator,\n process_video,\n)\n", "path": "supervision/__init__.py"}]} | 1,114 | 137 |
gh_patches_debug_5710 | rasdani/github-patches | git_diff | getredash__redash-2062 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
More flexible widgets grid (allow finer control over height/width)
## User should be able to control widget's placement, width and height
### Current implementation
Currently, editing a dashboard works as follows:
1. When adding a widget you select its width: regular (half width) or double (full width). Height is set based on the content.
2. When opening the "Edit Dashboard" dialog, you can rearrange the order of the widgets.

Issues with the current approach:
* You can't change widget size after placing it.
* You can't change widget's height.
* You can't control widget's width beyond the current two options.
### Solution
To solve this we want to have a flexible (_but still responsive_) grid that the user can freely place and move widgets on.
* Each visualization will define minimum width and height sizes (derived from the visualization type). These will be the default sizes for the widget when it is placed on the grid, but the user will be able to resize it to be larger (though not smaller) afterwards.
* Once entering dashboard "edit mode", the user will be able to move around and resize the widgets in place instead of a dedicated UI.
* We should probably change other behaviors in the UI (make adding a widget part of the edit mode, make dashboard title editing happen in place, etc.), but we will leave that to a second iteration after the grid behavior is implemented.
### Technical Notes
* We currently use `angular-gridster` to allow moving the widgets around in the Edit Dashboard dialog. Maybe we can use it, or a similar library, for the grid.
* Currently a widget knows nothing about its placement, only its size (width). The dashboard has a layout property, which is an array of arrays of widget ids (each item in the array is a row/array of widgets). This is error-prone and makes layout changes harder. If possible, it would be better for each widget to retain its own placement information.
* The API to update a widget currently supports only updating a textbox widget. We will need to extend it to support all widget types.
</issue>
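For reference, a sketch of the two layout models contrasted in the technical notes: the current dashboard-level `layout` array of rows versus per-widget placement. The `position` keys shown are hypothetical, not Redash's existing schema.

```python
# Current approach: the dashboard stores rows of widget ids.
dashboard_layout = [
    [101, 102],  # first row: two half-width widgets side by side
    [103],       # second row: one full-width widget
]

# Proposed direction: each widget carries its own placement on the grid,
# e.g. inside its options blob (field names are illustrative).
widget_options = {
    "position": {"col": 0, "row": 2, "sizeX": 3, "sizeY": 4},
}
```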
<code>
[start of redash/handlers/widgets.py]
1 import json
2
3 from flask import request
4 from redash import models
5 from redash.handlers.base import BaseResource
6 from redash.permissions import (require_access,
7 require_object_modify_permission,
8 require_permission, view_only)
9
10
11 class WidgetListResource(BaseResource):
12 @require_permission('edit_dashboard')
13 def post(self):
14 """
15 Add a widget to a dashboard.
16
17 :<json number dashboard_id: The ID for the dashboard being added to
18 :<json visualization_id: The ID of the visualization to put in this widget
19 :<json object options: Widget options
20 :<json string text: Text box contents
21 :<json number width: Width for widget display
22
23 :>json object widget: The created widget
24 :>json array layout: The new layout of the dashboard this widget was added to
25 :>json boolean new_row: Whether this widget was added on a new row or not
26 :>json number version: The revision number of the dashboard
27 """
28 widget_properties = request.get_json(force=True)
29 dashboard = models.Dashboard.get_by_id_and_org(widget_properties.pop('dashboard_id'), self.current_org)
30 require_object_modify_permission(dashboard, self.current_user)
31
32 widget_properties['options'] = json.dumps(widget_properties['options'])
33 widget_properties.pop('id', None)
34 widget_properties['dashboard'] = dashboard
35
36 visualization_id = widget_properties.pop('visualization_id')
37 if visualization_id:
38 visualization = models.Visualization.get_by_id_and_org(visualization_id, self.current_org)
39 require_access(visualization.query_rel.groups, self.current_user, view_only)
40 else:
41 visualization = None
42
43 widget_properties['visualization'] = visualization
44
45 widget = models.Widget(**widget_properties)
46 models.db.session.add(widget)
47 models.db.session.commit()
48
49 layout = json.loads(widget.dashboard.layout)
50 new_row = True
51
52 if len(layout) == 0 or widget.width == 2:
53 layout.append([widget.id])
54 elif len(layout[-1]) == 1:
55 neighbour_widget = models.Widget.query.get(layout[-1][0])
56 if neighbour_widget.width == 1:
57 layout[-1].append(widget.id)
58 new_row = False
59 else:
60 layout.append([widget.id])
61 else:
62 layout.append([widget.id])
63
64 widget.dashboard.layout = json.dumps(layout)
65 models.db.session.add(widget.dashboard)
66 models.db.session.commit()
67 return {'widget': widget.to_dict(), 'layout': layout, 'new_row': new_row, 'version': dashboard.version}
68
69
70 class WidgetResource(BaseResource):
71 @require_permission('edit_dashboard')
72 def post(self, widget_id):
73 """
74 Updates a widget in a dashboard.
75 This method currently handles Text Box widgets only.
76
77 :param number widget_id: The ID of the widget to modify
78
79 :<json string text: The new contents of the text box
80 """
81 widget = models.Widget.get_by_id_and_org(widget_id, self.current_org)
82 require_object_modify_permission(widget.dashboard, self.current_user)
83 widget_properties = request.get_json(force=True)
84 widget.text = widget_properties['text']
85 models.db.session.commit()
86 return widget.to_dict()
87
88 @require_permission('edit_dashboard')
89 def delete(self, widget_id):
90 """
91 Remove a widget from a dashboard.
92
93 :param number widget_id: ID of widget to remove
94
95 :>json array layout: New layout of dashboard this widget was removed from
96 :>json number version: Revision number of dashboard
97 """
98 widget = models.Widget.get_by_id_and_org(widget_id, self.current_org)
99 require_object_modify_permission(widget.dashboard, self.current_user)
100 widget.delete()
101 models.db.session.commit()
102 return {'layout': widget.dashboard.layout, 'version': widget.dashboard.version}
103
[end of redash/handlers/widgets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/redash/handlers/widgets.py b/redash/handlers/widgets.py
--- a/redash/handlers/widgets.py
+++ b/redash/handlers/widgets.py
@@ -82,6 +82,7 @@
require_object_modify_permission(widget.dashboard, self.current_user)
widget_properties = request.get_json(force=True)
widget.text = widget_properties['text']
+ widget.options = json.dumps(widget_properties['options'])
models.db.session.commit()
return widget.to_dict()
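With that one-line change the update endpoint persists whatever `options` object the client sends alongside the text, which is what would let the frontend store per-widget placement. An illustrative request body follows; the keys inside `options` are chosen by the frontend, not fixed by this API.

```python
payload = {
    "text": "Quarterly KPIs",
    "options": {"position": {"col": 0, "row": 2, "sizeX": 3, "sizeY": 4}},
}
```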
| {"golden_diff": "diff --git a/redash/handlers/widgets.py b/redash/handlers/widgets.py\n--- a/redash/handlers/widgets.py\n+++ b/redash/handlers/widgets.py\n@@ -82,6 +82,7 @@\n require_object_modify_permission(widget.dashboard, self.current_user)\n widget_properties = request.get_json(force=True)\n widget.text = widget_properties['text']\n+ widget.options = json.dumps(widget_properties['options'])\n models.db.session.commit()\n return widget.to_dict()\n", "issue": "More flexible widgets grid (allow finer control over height/width)\n## User should be able to control widget's placement, width and height\r\n\r\n### Current implementation\r\n\r\nCurrently editing a dashboard works as follow:\r\n\r\n1. When adding a widget you select its width: regular (half width) or double (full width). Height is set based on the content.\r\n2. When opening the \"Edit Dashboard\" dialog, you can rearrange the order of the widgets.\r\n\r\n\r\n\r\nIssues with the current approach:\r\n\r\n* You can't change widget size after placing it.\r\n* You can't change widget's height.\r\n* You can't control widget's width beyond the current two options.\r\n\r\n### Solution\r\n\r\nTo solve this we want to have a flexible (_but still responsive_) grid that the user can freely place and move widgets on. \r\n\r\n* Each visualization will define minimum width and height sizes (derived from the visualization type). This will be the default sizes for the widget when placed on the grid, but the user will be able to resize it to be larger (but not smaller) afterwards.\r\n* Once entering dashboard \"edit mode\", the user will be able to move around and resize the widgets in place instead of a dedicated UI.\r\n* We should probably change other behaviors in the UI (move add widget to be part of the edit mode, change dashboard title editing to be in place, etc), but we will leave it to second iteration after the grid behavior is implemented.\r\n\r\n### Technical Notes\r\n\r\n* We currently use `angular-gridster` to allow moving around the widgets in the Edit Dashboard dialog. Maybe we can use it for the grid or a similar library.\r\n* Currently the widget knows nothing about its placement, but only its size (width). The dashboard has a layout property, which is an array of arrays of widgets ids (each item in the array is a row/array of widgets). This is error prone and makes layout changes harder. If possible, it will be better if each widget retains its own placement information.\r\n* The API to update a widget currently supports only updating a textbox widget. 
We will need to extend it to support all widget types.\n", "before_files": [{"content": "import json\n\nfrom flask import request\nfrom redash import models\nfrom redash.handlers.base import BaseResource\nfrom redash.permissions import (require_access,\n require_object_modify_permission,\n require_permission, view_only)\n\n\nclass WidgetListResource(BaseResource):\n @require_permission('edit_dashboard')\n def post(self):\n \"\"\"\n Add a widget to a dashboard.\n\n :<json number dashboard_id: The ID for the dashboard being added to\n :<json visualization_id: The ID of the visualization to put in this widget\n :<json object options: Widget options\n :<json string text: Text box contents\n :<json number width: Width for widget display\n\n :>json object widget: The created widget\n :>json array layout: The new layout of the dashboard this widget was added to\n :>json boolean new_row: Whether this widget was added on a new row or not\n :>json number version: The revision number of the dashboard\n \"\"\"\n widget_properties = request.get_json(force=True)\n dashboard = models.Dashboard.get_by_id_and_org(widget_properties.pop('dashboard_id'), self.current_org)\n require_object_modify_permission(dashboard, self.current_user)\n\n widget_properties['options'] = json.dumps(widget_properties['options'])\n widget_properties.pop('id', None)\n widget_properties['dashboard'] = dashboard\n\n visualization_id = widget_properties.pop('visualization_id')\n if visualization_id:\n visualization = models.Visualization.get_by_id_and_org(visualization_id, self.current_org)\n require_access(visualization.query_rel.groups, self.current_user, view_only)\n else:\n visualization = None\n\n widget_properties['visualization'] = visualization\n\n widget = models.Widget(**widget_properties)\n models.db.session.add(widget)\n models.db.session.commit()\n\n layout = json.loads(widget.dashboard.layout)\n new_row = True\n\n if len(layout) == 0 or widget.width == 2:\n layout.append([widget.id])\n elif len(layout[-1]) == 1:\n neighbour_widget = models.Widget.query.get(layout[-1][0])\n if neighbour_widget.width == 1:\n layout[-1].append(widget.id)\n new_row = False\n else:\n layout.append([widget.id])\n else:\n layout.append([widget.id])\n\n widget.dashboard.layout = json.dumps(layout)\n models.db.session.add(widget.dashboard)\n models.db.session.commit()\n return {'widget': widget.to_dict(), 'layout': layout, 'new_row': new_row, 'version': dashboard.version}\n\n\nclass WidgetResource(BaseResource):\n @require_permission('edit_dashboard')\n def post(self, widget_id):\n \"\"\"\n Updates a widget in a dashboard.\n This method currently handles Text Box widgets only.\n\n :param number widget_id: The ID of the widget to modify\n\n :<json string text: The new contents of the text box\n \"\"\"\n widget = models.Widget.get_by_id_and_org(widget_id, self.current_org)\n require_object_modify_permission(widget.dashboard, self.current_user)\n widget_properties = request.get_json(force=True)\n widget.text = widget_properties['text']\n models.db.session.commit()\n return widget.to_dict()\n\n @require_permission('edit_dashboard')\n def delete(self, widget_id):\n \"\"\"\n Remove a widget from a dashboard.\n\n :param number widget_id: ID of widget to remove\n\n :>json array layout: New layout of dashboard this widget was removed from\n :>json number version: Revision number of dashboard\n \"\"\"\n widget = models.Widget.get_by_id_and_org(widget_id, self.current_org)\n require_object_modify_permission(widget.dashboard, self.current_user)\n 
widget.delete()\n models.db.session.commit()\n return {'layout': widget.dashboard.layout, 'version': widget.dashboard.version}\n", "path": "redash/handlers/widgets.py"}]} | 2,037 | 108 |
gh_patches_debug_20122 | rasdani/github-patches | git_diff | Kinto__kinto-771 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[cliquet] Split tests from kinto package
Original: https://github.com/mozilla-services/cliquet/issues/267
Should we have tests within the kinto package or outside? That was discussed, but I don't think a decision was made.
Another related question is not whether the tests live in `kinto/` or outside, but rather: do we ship tests when we release kinto? Some people strip them when they build the release.
</issue>
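For the "do we ship tests in the release" half of the question, the usual setuptools lever is to keep tests in the repository but exclude them from the built distribution. A sketch of that option only, not the decision taken for Kinto:

```python
from setuptools import find_packages

# Tests still run from a checkout (e.g. with pytest), but are left out of
# the wheel/sdist that gets installed by end users.
packages = find_packages(exclude=["tests", "tests.*"])
```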
<code>
[start of setup.py]
1 import platform
2 import codecs
3 import os
4 from setuptools import setup, find_packages
5
6 here = os.path.abspath(os.path.dirname(__file__))
7
8
9 def read_file(filename):
10 """Open a related file and return its content."""
11 with codecs.open(os.path.join(here, filename), encoding='utf-8') as f:
12 content = f.read()
13 return content
14
15 README = read_file('README.rst')
16 CHANGELOG = read_file('CHANGELOG.rst')
17 CONTRIBUTORS = read_file('CONTRIBUTORS.rst')
18
19 installed_with_pypy = platform.python_implementation() == 'PyPy'
20
21 REQUIREMENTS = [
22 'colander',
23 'colorama',
24 'cornice >= 1.1', # Fix cache CORS
25 'jsonschema',
26 'python-dateutil',
27 'pyramid_multiauth >= 0.8', # User on policy selected event.
28 'pyramid_tm',
29 'requests',
30 'six',
31 'structlog >= 16.1.0',
32 'enum34',
33 'waitress',
34 ]
35
36 if installed_with_pypy:
37 # We install psycopg2cffi instead of psycopg2 when dealing with pypy
38 # Note: JSONB support landed after psycopg2cffi 2.7.0
39 POSTGRESQL_REQUIRES = [
40 'SQLAlchemy',
41 'psycopg2cffi>2.7.0',
42 'zope.sqlalchemy',
43 ]
44 else:
45 # ujson is not pypy compliant, as it uses the CPython C API
46 REQUIREMENTS.append('ujson >= 1.35')
47 POSTGRESQL_REQUIRES = [
48 'SQLAlchemy',
49 'psycopg2>2.5',
50 'zope.sqlalchemy',
51 ]
52
53 REDIS_REQUIRES = [
54 'kinto_redis'
55 ]
56
57 DEPENDENCY_LINKS = [
58 ]
59
60 MONITORING_REQUIRES = [
61 'raven',
62 'statsd',
63 'newrelic',
64 'werkzeug',
65 ]
66
67 ENTRY_POINTS = {
68 'paste.app_factory': [
69 'main = kinto:main',
70 ],
71 'console_scripts': [
72 'kinto = kinto.__main__:main'
73 ],
74 }
75
76
77 setup(name='kinto',
78 version='4.1.0.dev0',
79 description='Kinto Web Service - Store, Sync, Share, and Self-Host.',
80 long_description=README + "\n\n" + CHANGELOG + "\n\n" + CONTRIBUTORS,
81 license='Apache License (2.0)',
82 classifiers=[
83 "Programming Language :: Python",
84 "Programming Language :: Python :: 2",
85 "Programming Language :: Python :: 2.7",
86 "Programming Language :: Python :: 3",
87 "Programming Language :: Python :: 3.4",
88 "Programming Language :: Python :: 3.5",
89 "Programming Language :: Python :: Implementation :: CPython",
90 "Programming Language :: Python :: Implementation :: PyPy",
91 "Topic :: Internet :: WWW/HTTP",
92 "Topic :: Internet :: WWW/HTTP :: WSGI :: Application",
93 "License :: OSI Approved :: Apache Software License"
94 ],
95 keywords="web sync json storage services",
96 author='Mozilla Services',
97 author_email='[email protected]',
98 url='https://github.com/Kinto/kinto',
99 packages=find_packages(),
100 package_data={'': ['*.rst', '*.py']},
101 include_package_data=True,
102 zip_safe=False,
103 install_requires=REQUIREMENTS,
104 extras_require={
105 'redis': REDIS_REQUIRES,
106 'postgresql': POSTGRESQL_REQUIRES,
107 'monitoring': MONITORING_REQUIRES,
108 ":python_version=='2.7'": ["functools32", "futures"],
109 },
110 test_suite="kinto.tests",
111 dependency_links=DEPENDENCY_LINKS,
112 entry_points=ENTRY_POINTS)
113
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -54,6 +54,15 @@
'kinto_redis'
]
+SETUP_REQUIRES = [
+ 'pytest-runner'
+]
+
+TEST_REQUIREMENTS = [
+ 'pytest',
+ 'WebTest'
+]
+
DEPENDENCY_LINKS = [
]
@@ -100,6 +109,8 @@
package_data={'': ['*.rst', '*.py']},
include_package_data=True,
zip_safe=False,
+ setup_requires=SETUP_REQUIRES,
+ tests_require=TEST_REQUIREMENTS,
install_requires=REQUIREMENTS,
extras_require={
'redis': REDIS_REQUIRES,
@@ -107,6 +118,6 @@
'monitoring': MONITORING_REQUIRES,
":python_version=='2.7'": ["functools32", "futures"],
},
- test_suite="kinto.tests",
+ test_suite="tests",
dependency_links=DEPENDENCY_LINKS,
entry_points=ENTRY_POINTS)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -54,6 +54,15 @@\n 'kinto_redis'\n ]\n \n+SETUP_REQUIRES = [\n+ 'pytest-runner'\n+]\n+\n+TEST_REQUIREMENTS = [\n+ 'pytest',\n+ 'WebTest'\n+]\n+\n DEPENDENCY_LINKS = [\n ]\n \n@@ -100,6 +109,8 @@\n package_data={'': ['*.rst', '*.py']},\n include_package_data=True,\n zip_safe=False,\n+ setup_requires=SETUP_REQUIRES,\n+ tests_require=TEST_REQUIREMENTS,\n install_requires=REQUIREMENTS,\n extras_require={\n 'redis': REDIS_REQUIRES,\n@@ -107,6 +118,6 @@\n 'monitoring': MONITORING_REQUIRES,\n \":python_version=='2.7'\": [\"functools32\", \"futures\"],\n },\n- test_suite=\"kinto.tests\",\n+ test_suite=\"tests\",\n dependency_links=DEPENDENCY_LINKS,\n entry_points=ENTRY_POINTS)\n", "issue": "[cliquet] Split tests from kinto package\noriginal : https://github.com/mozilla-services/cliquet/issues/267\n\nShould we have tests within the kinto package or outside ? that was discussed but I don't think a decision was made.\n\nAnother related question is not wether tests is in kinto/ or outside, but rather : do we ship tests when we release kinto ? some people strip them when they build the release.\n\n[cliquet] Split tests from kinto package\noriginal : https://github.com/mozilla-services/cliquet/issues/267\n\nShould we have tests within the kinto package or outside ? that was discussed but I don't think a decision was made.\n\nAnother related question is not wether tests is in kinto/ or outside, but rather : do we ship tests when we release kinto ? some people strip them when they build the release.\n\n", "before_files": [{"content": "import platform\nimport codecs\nimport os\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read_file(filename):\n \"\"\"Open a related file and return its content.\"\"\"\n with codecs.open(os.path.join(here, filename), encoding='utf-8') as f:\n content = f.read()\n return content\n\nREADME = read_file('README.rst')\nCHANGELOG = read_file('CHANGELOG.rst')\nCONTRIBUTORS = read_file('CONTRIBUTORS.rst')\n\ninstalled_with_pypy = platform.python_implementation() == 'PyPy'\n\nREQUIREMENTS = [\n 'colander',\n 'colorama',\n 'cornice >= 1.1', # Fix cache CORS\n 'jsonschema',\n 'python-dateutil',\n 'pyramid_multiauth >= 0.8', # User on policy selected event.\n 'pyramid_tm',\n 'requests',\n 'six',\n 'structlog >= 16.1.0',\n 'enum34',\n 'waitress',\n]\n\nif installed_with_pypy:\n # We install psycopg2cffi instead of psycopg2 when dealing with pypy\n # Note: JSONB support landed after psycopg2cffi 2.7.0\n POSTGRESQL_REQUIRES = [\n 'SQLAlchemy',\n 'psycopg2cffi>2.7.0',\n 'zope.sqlalchemy',\n ]\nelse:\n # ujson is not pypy compliant, as it uses the CPython C API\n REQUIREMENTS.append('ujson >= 1.35')\n POSTGRESQL_REQUIRES = [\n 'SQLAlchemy',\n 'psycopg2>2.5',\n 'zope.sqlalchemy',\n ]\n\nREDIS_REQUIRES = [\n 'kinto_redis'\n]\n\nDEPENDENCY_LINKS = [\n]\n\nMONITORING_REQUIRES = [\n 'raven',\n 'statsd',\n 'newrelic',\n 'werkzeug',\n]\n\nENTRY_POINTS = {\n 'paste.app_factory': [\n 'main = kinto:main',\n ],\n 'console_scripts': [\n 'kinto = kinto.__main__:main'\n ],\n}\n\n\nsetup(name='kinto',\n version='4.1.0.dev0',\n description='Kinto Web Service - Store, Sync, Share, and Self-Host.',\n long_description=README + \"\\n\\n\" + CHANGELOG + \"\\n\\n\" + CONTRIBUTORS,\n license='Apache License (2.0)',\n classifiers=[\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming 
Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n \"License :: OSI Approved :: Apache Software License\"\n ],\n keywords=\"web sync json storage services\",\n author='Mozilla Services',\n author_email='[email protected]',\n url='https://github.com/Kinto/kinto',\n packages=find_packages(),\n package_data={'': ['*.rst', '*.py']},\n include_package_data=True,\n zip_safe=False,\n install_requires=REQUIREMENTS,\n extras_require={\n 'redis': REDIS_REQUIRES,\n 'postgresql': POSTGRESQL_REQUIRES,\n 'monitoring': MONITORING_REQUIRES,\n \":python_version=='2.7'\": [\"functools32\", \"futures\"],\n },\n test_suite=\"kinto.tests\",\n dependency_links=DEPENDENCY_LINKS,\n entry_points=ENTRY_POINTS)\n", "path": "setup.py"}]} | 1,775 | 244 |
gh_patches_debug_8204 | rasdani/github-patches | git_diff | vas3k__vas3k.club-381 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug with pagination in the «Коммьюнити» (Community) section
STR (steps to reproduce):
1. Open https://vas3k.club/people/
2. Filter by the first 4 filters
3. Go to page 2 of the results
ER (expected result): the filters are preserved
AR (actual result): the filters are partially lost
Video: https://youtu.be/08cz2TOq60A
</issue>
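A plausible root cause, sketched with made-up filter values (the parameter names below are illustrative, not taken from the report): request.GET in Django is a QueryDict that can hold several values under the same key — exactly what multi-select filters produce — whereas flattening it with dict(...items()) keeps only the last value per key, so part of the filter state drops out of the page-2 link. Re-encoding the original QueryDict keeps every value, which is essentially what the accepted patch further down does.

# Illustration only; requires Django.
from django.http import QueryDict

params = QueryDict('country=DE&expertise=python&expertise=go&page=1', encoding='utf-8')

flat = dict(params.items())
# -> {'country': 'DE', 'expertise': 'go', 'page': '1'}  (the first 'expertise' value is lost)

preserved = params.copy()        # mutable copy that keeps both 'expertise' values
preserved['page'] = '2'
print(preserved.urlencode())     # country=DE&expertise=python&expertise=go&page=2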
<code>
[start of posts/templatetags/query_params.py]
1 from urllib.parse import urlencode
2
3 from django import template
4
5 register = template.Library()
6
7
8 @register.simple_tag(takes_context=True)
9 def append_query_param(context, **kwargs):
10 query_params = dict(context.request.GET.items())
11 query_params.update(kwargs)
12 return "?" + urlencode(query_params)
13
[end of posts/templatetags/query_params.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/posts/templatetags/query_params.py b/posts/templatetags/query_params.py
--- a/posts/templatetags/query_params.py
+++ b/posts/templatetags/query_params.py
@@ -1,4 +1,4 @@
-from urllib.parse import urlencode
+from copy import deepcopy
from django import template
@@ -7,6 +7,6 @@
@register.simple_tag(takes_context=True)
def append_query_param(context, **kwargs):
- query_params = dict(context.request.GET.items())
+ query_params = deepcopy(context.request.GET)
query_params.update(kwargs)
- return "?" + urlencode(query_params)
+ return "?" + query_params.urlencode()
| {"golden_diff": "diff --git a/posts/templatetags/query_params.py b/posts/templatetags/query_params.py\n--- a/posts/templatetags/query_params.py\n+++ b/posts/templatetags/query_params.py\n@@ -1,4 +1,4 @@\n-from urllib.parse import urlencode\n+from copy import deepcopy\n \n from django import template\n \n@@ -7,6 +7,6 @@\n \n @register.simple_tag(takes_context=True)\n def append_query_param(context, **kwargs):\n- query_params = dict(context.request.GET.items())\n+ query_params = deepcopy(context.request.GET)\n query_params.update(kwargs)\n- return \"?\" + urlencode(query_params)\n+ return \"?\" + query_params.urlencode()\n", "issue": "\u0411\u0430\u0433 \u0441 \u043f\u0430\u0436\u0438\u043d\u0430\u0446\u0438\u0435\u0439 \u0432 \u0440\u0430\u0437\u0434\u0435\u043b\u0435 \u00ab\u041a\u043e\u043c\u043c\u044c\u044e\u043d\u0438\u0442\u0438\u00bb\nSTR:\r\n1. \u041e\u0442\u043a\u0440\u044b\u0442\u044c https://vas3k.club/people/\r\n2. \u041e\u0442\u0444\u0438\u043b\u044c\u0442\u0440\u043e\u0432\u0430\u0442\u044c\u00a0\u043f\u043e \u043f\u0435\u0440\u0432\u044b\u043c 4 \u0444\u0438\u043b\u044c\u0442\u0440\u0430\u043c\r\n3. \u041f\u0435\u0440\u0435\u0439\u0442\u0438 \u043d\u0430 2 \u0441\u0442\u0440\u0430\u043d\u0438\u0446\u0443 \u0432\u044b\u0434\u0430\u0447\u0438\r\n\r\nER: \u0424\u0438\u043b\u044c\u0442\u0440\u044b \u0441\u043e\u0445\u0440\u0430\u043d\u0435\u043d\u044b\r\nAR: \u0424\u0438\u043b\u044c\u0442\u0440\u044b \u0447\u0430\u0441\u0442\u0438\u0447\u043d\u043e \u043f\u043e\u0442\u0435\u0440\u044f\u043d\u044b\r\n\r\n\u0412\u0438\u0434\u0435\u043e\u00a0https://youtu.be/08cz2TOq60A\n", "before_files": [{"content": "from urllib.parse import urlencode\n\nfrom django import template\n\nregister = template.Library()\n\n\[email protected]_tag(takes_context=True)\ndef append_query_param(context, **kwargs):\n query_params = dict(context.request.GET.items())\n query_params.update(kwargs)\n return \"?\" + urlencode(query_params)\n", "path": "posts/templatetags/query_params.py"}]} | 727 | 151 |