problem_id
stringlengths 18
22
| source
stringclasses 1
value | task_type
stringclasses 1
value | in_source_id
stringlengths 13
58
| prompt
stringlengths 1.71k
9.01k
| golden_diff
stringlengths 151
4.94k
| verification_info
stringlengths 465
11.3k
| num_tokens_prompt
int64 557
2.05k
| num_tokens_diff
int64 48
1.02k
|
---|---|---|---|---|---|---|---|---|
gh_patches_debug_3028 | rasdani/github-patches | git_diff | modal-labs__modal-examples-556 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
apply #556 manually
I manually applied the patch from #556. Not sure what's up with that PR
</issue>
<code>
[start of 01_getting_started/hello_world.py]
1 # # Hello, world!
2 #
3 # This is a trivial example of a Modal function, but it illustrates a few features:
4 #
5 # * You can print things to stdout and stderr.
6 # * You can return data.
7 # * You can map over a function.
8 #
9 # ## Import Modal and define the app
10 #
11 # Let's start with the top level imports.
12 # You need to import Modal and define the app.
13 # A stub is an object that defines everything that will be run.
14
15 import sys
16
17 import modal
18
19 stub = modal.Stub("example-hello-world")
20
21 # ## Defining a function
22 #
23 # Here we define a Modal function using the `modal.function` decorator.
24 # The body of the function will automatically be run remotely.
25 # This particular function is pretty silly: it just prints "hello"
26 # and "world" alternatingly to standard out and standard error.
27
28
29 @stub.function()
30 def f(i):
31 if i % 2 == 0:
32 print("hello", i)
33 else:
34 print("world", i, file=sys.stderr)
35
36 return i * i
37
38
39 # ## Running it
40 #
41 # Finally, let's actually invoke it.
42 # We put this invocation code inside a `@stub.local_entrypoint()`.
43 # This is because this module will be imported in the cloud, and we don't want
44 # this code to be executed a second time in the cloud.
45 #
46 # Run `modal run hello_world.py` and the `@stub.local_entrypoint()` decorator will handle
47 # starting the Modal app and then executing the wrapped function body.
48 #
49 # Inside the `main()` function body, we are calling the function `f` in three ways:
50 #
51 # 1 As a simple local call, `f(1000)`
52 # 2. As a simple *remote* call `f.remote(1000)`
53 # 3. By mapping over the integers `0..19`
54
55
56 @stub.local_entrypoint()
57 def main():
58 # Call the function locally.
59 print(f.local(1000))
60
61 # Call the function remotely.
62 print(f.remote(1000))
63
64 # Parallel map.
65 total = 0
66 for ret in f.map(range(20)):
67 total += ret
68
69 print(total)
70
71
72 # ## What happens?
73 #
74 # When you do `.remote` on function `f`, Modal will execute `f` **in the cloud,**
75 # not locally on your computer. It will take the code, put it inside a
76 # container, run it, and stream all the output back to your local
77 # computer.
78 #
79 # Try doing one of these things next.
80 #
81 # ### Change the code and run again
82 #
83 # For instance, change the `print` statement in the function `f`.
84 # You can see that the latest code is always run.
85 #
86 # Modal's goal is to make running code in the cloud feel like you're
87 # running code locally. You don't need to run any commands to rebuild,
88 # push containers, or go to a web UI to download logs.
89 #
90 # ### Map over a larger dataset
91 #
92 # Change the map range from 20 to some large number. You can see that
93 # Modal will create and run more containers in parallel.
94 #
95 # The function `f` is obviously silly and doesn't do much, but you could
96 # imagine something more significant, like:
97 #
98 # * Training a machine learning model
99 # * Transcoding media
100 # * Backtesting a trading algorithm.
101 #
102 # Modal lets you parallelize that operation trivially by running hundreds or
103 # thousands of containers in the cloud.
104
[end of 01_getting_started/hello_world.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/01_getting_started/hello_world.py b/01_getting_started/hello_world.py
--- a/01_getting_started/hello_world.py
+++ b/01_getting_started/hello_world.py
@@ -48,7 +48,7 @@
#
# Inside the `main()` function body, we are calling the function `f` in three ways:
#
-# 1 As a simple local call, `f(1000)`
+# 1 As a simple local call, `f.local(1000)`
# 2. As a simple *remote* call `f.remote(1000)`
# 3. By mapping over the integers `0..19`
| {"golden_diff": "diff --git a/01_getting_started/hello_world.py b/01_getting_started/hello_world.py\n--- a/01_getting_started/hello_world.py\n+++ b/01_getting_started/hello_world.py\n@@ -48,7 +48,7 @@\n #\n # Inside the `main()` function body, we are calling the function `f` in three ways:\n #\n-# 1 As a simple local call, `f(1000)`\n+# 1 As a simple local call, `f.local(1000)`\n # 2. As a simple *remote* call `f.remote(1000)`\n # 3. By mapping over the integers `0..19`\n", "issue": "apply #556 manually\nI manually applied the patch from #556. Not sure what's up with that PR\n", "before_files": [{"content": "# # Hello, world!\n#\n# This is a trivial example of a Modal function, but it illustrates a few features:\n#\n# * You can print things to stdout and stderr.\n# * You can return data.\n# * You can map over a function.\n#\n# ## Import Modal and define the app\n#\n# Let's start with the top level imports.\n# You need to import Modal and define the app.\n# A stub is an object that defines everything that will be run.\n\nimport sys\n\nimport modal\n\nstub = modal.Stub(\"example-hello-world\")\n\n# ## Defining a function\n#\n# Here we define a Modal function using the `modal.function` decorator.\n# The body of the function will automatically be run remotely.\n# This particular function is pretty silly: it just prints \"hello\"\n# and \"world\" alternatingly to standard out and standard error.\n\n\[email protected]()\ndef f(i):\n if i % 2 == 0:\n print(\"hello\", i)\n else:\n print(\"world\", i, file=sys.stderr)\n\n return i * i\n\n\n# ## Running it\n#\n# Finally, let's actually invoke it.\n# We put this invocation code inside a `@stub.local_entrypoint()`.\n# This is because this module will be imported in the cloud, and we don't want\n# this code to be executed a second time in the cloud.\n#\n# Run `modal run hello_world.py` and the `@stub.local_entrypoint()` decorator will handle\n# starting the Modal app and then executing the wrapped function body.\n#\n# Inside the `main()` function body, we are calling the function `f` in three ways:\n#\n# 1 As a simple local call, `f(1000)`\n# 2. As a simple *remote* call `f.remote(1000)`\n# 3. By mapping over the integers `0..19`\n\n\[email protected]_entrypoint()\ndef main():\n # Call the function locally.\n print(f.local(1000))\n\n # Call the function remotely.\n print(f.remote(1000))\n\n # Parallel map.\n total = 0\n for ret in f.map(range(20)):\n total += ret\n\n print(total)\n\n\n# ## What happens?\n#\n# When you do `.remote` on function `f`, Modal will execute `f` **in the cloud,**\n# not locally on your computer. It will take the code, put it inside a\n# container, run it, and stream all the output back to your local\n# computer.\n#\n# Try doing one of these things next.\n#\n# ### Change the code and run again\n#\n# For instance, change the `print` statement in the function `f`.\n# You can see that the latest code is always run.\n#\n# Modal's goal is to make running code in the cloud feel like you're\n# running code locally. You don't need to run any commands to rebuild,\n# push containers, or go to a web UI to download logs.\n#\n# ### Map over a larger dataset\n#\n# Change the map range from 20 to some large number. You can see that\n# Modal will create and run more containers in parallel.\n#\n# The function `f` is obviously silly and doesn't do much, but you could\n# imagine something more significant, like:\n#\n# * Training a machine learning model\n# * Transcoding media\n# * Backtesting a trading algorithm.\n#\n# Modal lets you parallelize that operation trivially by running hundreds or\n# thousands of containers in the cloud.\n", "path": "01_getting_started/hello_world.py"}]} | 1,545 | 161 |
gh_patches_debug_33431 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-273 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[E2503] fails incorrectly when intrinsic function used in Protocol value
*cfn-lint version: 0.4.2*
*Description of issue.*
This is valid, and conforms to the spec, but rule throws an error:
```yaml
Parameters:
TestParam:
Type: String
Default: TCP
Conditions:
TestCond: !Equals ['a', 'a']
Resources:
OpenShiftMasterELB:
Type: AWS::ElasticLoadBalancing::LoadBalancer
Properties:
Subnets:
- subnet-1234abcd
SecurityGroups:
- sg-1234abcd
Listeners:
# Fails on Protocol
- InstancePort: '1'
InstanceProtocol: !Ref TestParam
LoadBalancerPort: '1'
Protocol: !Ref TestParam
# Also fails on Protocol
- InstancePort: '2'
InstanceProtocol: !If [TestCond, TCP, SSL]
LoadBalancerPort: '2'
Protocol: !If [TestCond, TCP, SSL]
# Works
- InstancePort: '3'
InstanceProtocol: !If [TestCond, TCP, SSL]
LoadBalancerPort: '3'
Protocol: TCP
```
</issue>
<code>
[start of src/cfnlint/rules/resources/elb/Elb.py]
1 """
2 Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 from cfnlint import CloudFormationLintRule
18 from cfnlint import RuleMatch
19
20
21 class Elb(CloudFormationLintRule):
22 """Check if Elb Resource Properties"""
23 id = 'E2503'
24 shortdesc = 'Resource ELB Properties'
25 description = 'See if Elb Resource Properties are set correctly \
26 HTTPS has certificate HTTP has no certificate'
27 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-elb-listener.html'
28 tags = ['properties', 'elb']
29
30 def match(self, cfn):
31 """Check ELB Resource Parameters"""
32
33 matches = list()
34
35 results = cfn.get_resource_properties(['AWS::ElasticLoadBalancingV2::Listener'])
36 for result in results:
37 protocol = result['Value'].get('Protocol')
38 if protocol:
39 if protocol not in ['HTTP', 'HTTPS', 'TCP']:
40 message = 'Protocol is invalid for {0}'
41 path = result['Path'] + ['Protocol']
42 matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))
43 elif protocol in ['HTTPS']:
44 certificate = result['Value'].get('Certificates')
45 if not certificate:
46 message = 'Certificates should be specified when using HTTPS for {0}'
47 path = result['Path'] + ['Protocol']
48 matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))
49
50 results = cfn.get_resource_properties(['AWS::ElasticLoadBalancing::LoadBalancer', 'Listeners'])
51 for result in results:
52 if isinstance(result['Value'], list):
53 for index, listener in enumerate(result['Value']):
54 protocol = listener.get('Protocol')
55 if protocol:
56 if protocol not in ['HTTP', 'HTTPS', 'TCP', 'SSL']:
57 message = 'Protocol is invalid for {0}'
58 path = result['Path'] + [index, 'Protocol']
59 matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))
60 elif protocol in ['HTTPS', 'SSL']:
61 certificate = listener.get('SSLCertificateId')
62 if not certificate:
63 message = 'Certificates should be specified when using HTTPS for {0}'
64 path = result['Path'] + [index, 'Protocol']
65 matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))
66
67 return matches
68
[end of src/cfnlint/rules/resources/elb/Elb.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cfnlint/rules/resources/elb/Elb.py b/src/cfnlint/rules/resources/elb/Elb.py
--- a/src/cfnlint/rules/resources/elb/Elb.py
+++ b/src/cfnlint/rules/resources/elb/Elb.py
@@ -30,13 +30,21 @@
def match(self, cfn):
"""Check ELB Resource Parameters"""
+ def is_intrinsic(input_obj):
+ """Checks if a given input looks like an intrinsic function"""
+
+ if isinstance(input_obj, dict) and len(input_obj) == 1:
+ if list(input_obj.keys())[0] == 'Ref' or list(input_obj.keys())[0].startswith('Fn::'):
+ return True
+ return False
+
matches = list()
results = cfn.get_resource_properties(['AWS::ElasticLoadBalancingV2::Listener'])
for result in results:
protocol = result['Value'].get('Protocol')
if protocol:
- if protocol not in ['HTTP', 'HTTPS', 'TCP']:
+ if protocol not in ['HTTP', 'HTTPS', 'TCP'] and not is_intrinsic(protocol):
message = 'Protocol is invalid for {0}'
path = result['Path'] + ['Protocol']
matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))
@@ -53,7 +61,7 @@
for index, listener in enumerate(result['Value']):
protocol = listener.get('Protocol')
if protocol:
- if protocol not in ['HTTP', 'HTTPS', 'TCP', 'SSL']:
+ if protocol not in ['HTTP', 'HTTPS', 'TCP', 'SSL'] and not is_intrinsic(protocol):
message = 'Protocol is invalid for {0}'
path = result['Path'] + [index, 'Protocol']
matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/elb/Elb.py b/src/cfnlint/rules/resources/elb/Elb.py\n--- a/src/cfnlint/rules/resources/elb/Elb.py\n+++ b/src/cfnlint/rules/resources/elb/Elb.py\n@@ -30,13 +30,21 @@\n def match(self, cfn):\n \"\"\"Check ELB Resource Parameters\"\"\"\n \n+ def is_intrinsic(input_obj):\n+ \"\"\"Checks if a given input looks like an intrinsic function\"\"\"\n+\n+ if isinstance(input_obj, dict) and len(input_obj) == 1:\n+ if list(input_obj.keys())[0] == 'Ref' or list(input_obj.keys())[0].startswith('Fn::'):\n+ return True\n+ return False\n+\n matches = list()\n \n results = cfn.get_resource_properties(['AWS::ElasticLoadBalancingV2::Listener'])\n for result in results:\n protocol = result['Value'].get('Protocol')\n if protocol:\n- if protocol not in ['HTTP', 'HTTPS', 'TCP']:\n+ if protocol not in ['HTTP', 'HTTPS', 'TCP'] and not is_intrinsic(protocol):\n message = 'Protocol is invalid for {0}'\n path = result['Path'] + ['Protocol']\n matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))\n@@ -53,7 +61,7 @@\n for index, listener in enumerate(result['Value']):\n protocol = listener.get('Protocol')\n if protocol:\n- if protocol not in ['HTTP', 'HTTPS', 'TCP', 'SSL']:\n+ if protocol not in ['HTTP', 'HTTPS', 'TCP', 'SSL'] and not is_intrinsic(protocol):\n message = 'Protocol is invalid for {0}'\n path = result['Path'] + [index, 'Protocol']\n matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))\n", "issue": "[E2503] fails incorrectly when intrinsic function used in Protocol value \n*cfn-lint version: 0.4.2*\r\n\r\n*Description of issue.*\r\n\r\nThis is valid, and conforms to the spec, but rule throws an error:\r\n\r\n```yaml\r\nParameters:\r\n TestParam:\r\n Type: String\r\n Default: TCP\r\nConditions:\r\n TestCond: !Equals ['a', 'a']\r\nResources:\r\n OpenShiftMasterELB:\r\n Type: AWS::ElasticLoadBalancing::LoadBalancer\r\n Properties:\r\n Subnets:\r\n - subnet-1234abcd\r\n SecurityGroups:\r\n - sg-1234abcd\r\n Listeners:\r\n # Fails on Protocol\r\n - InstancePort: '1'\r\n InstanceProtocol: !Ref TestParam\r\n LoadBalancerPort: '1'\r\n Protocol: !Ref TestParam\r\n # Also fails on Protocol\r\n - InstancePort: '2'\r\n InstanceProtocol: !If [TestCond, TCP, SSL]\r\n LoadBalancerPort: '2'\r\n Protocol: !If [TestCond, TCP, SSL]\r\n # Works\r\n - InstancePort: '3'\r\n InstanceProtocol: !If [TestCond, TCP, SSL]\r\n LoadBalancerPort: '3'\r\n Protocol: TCP\r\n```\n", "before_files": [{"content": "\"\"\"\n Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\n\nclass Elb(CloudFormationLintRule):\n \"\"\"Check if Elb Resource Properties\"\"\"\n id = 'E2503'\n shortdesc = 'Resource ELB Properties'\n description = 'See if Elb Resource Properties are set correctly \\\nHTTPS has certificate HTTP has no certificate'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-elb-listener.html'\n tags = ['properties', 'elb']\n\n def match(self, cfn):\n \"\"\"Check ELB Resource Parameters\"\"\"\n\n matches = list()\n\n results = cfn.get_resource_properties(['AWS::ElasticLoadBalancingV2::Listener'])\n for result in results:\n protocol = result['Value'].get('Protocol')\n if protocol:\n if protocol not in ['HTTP', 'HTTPS', 'TCP']:\n message = 'Protocol is invalid for {0}'\n path = result['Path'] + ['Protocol']\n matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))\n elif protocol in ['HTTPS']:\n certificate = result['Value'].get('Certificates')\n if not certificate:\n message = 'Certificates should be specified when using HTTPS for {0}'\n path = result['Path'] + ['Protocol']\n matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))\n\n results = cfn.get_resource_properties(['AWS::ElasticLoadBalancing::LoadBalancer', 'Listeners'])\n for result in results:\n if isinstance(result['Value'], list):\n for index, listener in enumerate(result['Value']):\n protocol = listener.get('Protocol')\n if protocol:\n if protocol not in ['HTTP', 'HTTPS', 'TCP', 'SSL']:\n message = 'Protocol is invalid for {0}'\n path = result['Path'] + [index, 'Protocol']\n matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))\n elif protocol in ['HTTPS', 'SSL']:\n certificate = listener.get('SSLCertificateId')\n if not certificate:\n message = 'Certificates should be specified when using HTTPS for {0}'\n path = result['Path'] + [index, 'Protocol']\n matches.append(RuleMatch(path, message.format(('/'.join(result['Path'])))))\n\n return matches\n", "path": "src/cfnlint/rules/resources/elb/Elb.py"}]} | 1,668 | 420 |
gh_patches_debug_4866 | rasdani/github-patches | git_diff | locustio__locust-528 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add Python 3.6 to build pipeline
</issue>
<code>
[start of setup.py]
1 # encoding: utf-8
2
3 from setuptools import setup, find_packages, Command
4 import sys, os, re, ast
5
6
7 # parse version from locust/__init__.py
8 _version_re = re.compile(r'__version__\s+=\s+(.*)')
9 _init_file = os.path.join(os.path.abspath(os.path.dirname(__file__)), "locust", "__init__.py")
10 with open(_init_file, 'rb') as f:
11 version = str(ast.literal_eval(_version_re.search(
12 f.read().decode('utf-8')).group(1)))
13
14 setup(
15 name='locustio',
16 version=version,
17 description="Website load testing framework",
18 long_description="""Locust is a python utility for doing easy, distributed load testing of a web site""",
19 classifiers=[
20 "Topic :: Software Development :: Testing :: Traffic Generation",
21 "Development Status :: 4 - Beta",
22 "License :: OSI Approved :: MIT License",
23 "Operating System :: OS Independent",
24 "Programming Language :: Python",
25 "Programming Language :: Python :: 2",
26 "Programming Language :: Python :: 2.7",
27 "Programming Language :: Python :: 3",
28 "Programming Language :: Python :: 3.3",
29 "Programming Language :: Python :: 3.4",
30 "Programming Language :: Python :: 3.5",
31 "Intended Audience :: Developers",
32 "Intended Audience :: System Administrators",
33 ],
34 keywords='',
35 author='Jonatan Heyman, Carl Bystrom, Joakim Hamrén, Hugo Heyman',
36 author_email='',
37 url='http://locust.io',
38 license='MIT',
39 packages=find_packages(exclude=['ez_setup', 'examples', 'tests']),
40 include_package_data=True,
41 zip_safe=False,
42 install_requires=["gevent>=1.1.2", "flask>=0.10.1", "requests>=2.9.1", "msgpack-python>=0.4.2", "six>=1.10.0", "pyzmq==15.2.0"],
43 tests_require=['unittest2', 'mock'],
44 entry_points={
45 'console_scripts': [
46 'locust = locust.main:main',
47 ]
48 },
49 )
50
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -28,6 +28,7 @@
"Programming Language :: Python :: 3.3",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
+ "Programming Language :: Python :: 3.6",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -28,6 +28,7 @@\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n+ \"Programming Language :: Python :: 3.6\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: System Administrators\",\n ],\n", "issue": "Add Python 3.6 to build pipeline\n\n", "before_files": [{"content": "# encoding: utf-8\n\nfrom setuptools import setup, find_packages, Command\nimport sys, os, re, ast\n\n\n# parse version from locust/__init__.py\n_version_re = re.compile(r'__version__\\s+=\\s+(.*)')\n_init_file = os.path.join(os.path.abspath(os.path.dirname(__file__)), \"locust\", \"__init__.py\")\nwith open(_init_file, 'rb') as f:\n version = str(ast.literal_eval(_version_re.search(\n f.read().decode('utf-8')).group(1)))\n\nsetup(\n name='locustio',\n version=version,\n description=\"Website load testing framework\",\n long_description=\"\"\"Locust is a python utility for doing easy, distributed load testing of a web site\"\"\",\n classifiers=[\n \"Topic :: Software Development :: Testing :: Traffic Generation\",\n \"Development Status :: 4 - Beta\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: System Administrators\",\n ],\n keywords='',\n author='Jonatan Heyman, Carl Bystrom, Joakim Hamr\u00e9n, Hugo Heyman',\n author_email='',\n url='http://locust.io',\n license='MIT',\n packages=find_packages(exclude=['ez_setup', 'examples', 'tests']),\n include_package_data=True,\n zip_safe=False,\n install_requires=[\"gevent>=1.1.2\", \"flask>=0.10.1\", \"requests>=2.9.1\", \"msgpack-python>=0.4.2\", \"six>=1.10.0\", \"pyzmq==15.2.0\"],\n tests_require=['unittest2', 'mock'],\n entry_points={\n 'console_scripts': [\n 'locust = locust.main:main',\n ]\n },\n)\n", "path": "setup.py"}]} | 1,111 | 102 |
gh_patches_debug_5987 | rasdani/github-patches | git_diff | arviz-devs__arviz-343 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docs are broken
Looks like one of the examples still uses `n_eff`. From travis:
```
Exception occurred:
File "/home/travis/build/arviz-devs/arviz/examples/plot_forest_ridge.py", line 20, in <module>
n_eff=False)
TypeError: plot_forest() got an unexpected keyword argument 'n_eff'
```
</issue>
<code>
[start of examples/plot_forest_ridge.py]
1 """
2 Ridgeplot
3 =========
4
5 _thumb: .8, .5
6 """
7 import arviz as az
8
9 az.style.use('arviz-darkgrid')
10
11 non_centered_data = az.load_arviz_data('non_centered_eight')
12 fig, axes = az.plot_forest(non_centered_data,
13 kind='ridgeplot',
14 var_names=['theta'],
15 combined=True,
16 textsize=11,
17 ridgeplot_overlap=3,
18 colors='white',
19 r_hat=False,
20 n_eff=False)
21 axes[0].set_title('Estimated theta for eight schools model', fontsize=11)
22
[end of examples/plot_forest_ridge.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/plot_forest_ridge.py b/examples/plot_forest_ridge.py
--- a/examples/plot_forest_ridge.py
+++ b/examples/plot_forest_ridge.py
@@ -15,7 +15,5 @@
combined=True,
textsize=11,
ridgeplot_overlap=3,
- colors='white',
- r_hat=False,
- n_eff=False)
+ colors='white')
axes[0].set_title('Estimated theta for eight schools model', fontsize=11)
| {"golden_diff": "diff --git a/examples/plot_forest_ridge.py b/examples/plot_forest_ridge.py\n--- a/examples/plot_forest_ridge.py\n+++ b/examples/plot_forest_ridge.py\n@@ -15,7 +15,5 @@\n combined=True,\n textsize=11,\n ridgeplot_overlap=3,\n- colors='white',\n- r_hat=False,\n- n_eff=False)\n+ colors='white')\n axes[0].set_title('Estimated theta for eight schools model', fontsize=11)\n", "issue": "Docs are broken\nLooks like one of the examples still uses `n_eff`. From travis: \r\n\r\n```\r\nException occurred:\r\n File \"/home/travis/build/arviz-devs/arviz/examples/plot_forest_ridge.py\", line 20, in <module>\r\n n_eff=False)\r\nTypeError: plot_forest() got an unexpected keyword argument 'n_eff'\r\n```\n", "before_files": [{"content": "\"\"\"\nRidgeplot\n=========\n\n_thumb: .8, .5\n\"\"\"\nimport arviz as az\n\naz.style.use('arviz-darkgrid')\n\nnon_centered_data = az.load_arviz_data('non_centered_eight')\nfig, axes = az.plot_forest(non_centered_data,\n kind='ridgeplot',\n var_names=['theta'],\n combined=True,\n textsize=11,\n ridgeplot_overlap=3,\n colors='white',\n r_hat=False,\n n_eff=False)\naxes[0].set_title('Estimated theta for eight schools model', fontsize=11)\n", "path": "examples/plot_forest_ridge.py"}]} | 784 | 118 |
gh_patches_debug_21777 | rasdani/github-patches | git_diff | zulip__zulip-19818 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
markdown: Document built-in preprocessor priorities.
As a follow-up to #19783, it would be good to document the priorities assigned to the built-in preprocessors that the Python-Markdown library has. A couple of notes:
- This involves a bit of grunt work, the quickest way to do this is to loop over and print `md_engine.preprocessors._priorities` in `zerver/lib/templates.py`.
- Note that in `templates.py`, there are different cases where different sets of preprocessors are added, so one has to do the additional work to figure out which preprocessors are running in which of those cases and then document all the priorities that are for built-in preprocessors.
- The file to put these priorities in is: `zerver/lib/markdown/preprocessor_priorities..py`.
Thanks!
</issue>
<code>
[start of zerver/lib/markdown/preprocessor_priorities.py]
1 # Note that in the Markdown preprocessor registry, the highest
2 # numeric value is considered the highest priority, so the dict
3 # below is ordered from highest-to-lowest priority.
4 PREPROCESSOR_PRIORITES = {
5 "generate_parameter_description": 535,
6 "generate_response_description": 531,
7 "generate_api_title": 531,
8 "generate_api_description": 530,
9 "generate_code_example": 525,
10 "generate_return_values": 510,
11 "generate_api_arguments": 505,
12 "include": 500,
13 "help_relative_links": 475,
14 "setting": 450,
15 "fenced_code_block": 25,
16 "tabbed_sections": -500,
17 "nested_code_blocks": -500,
18 "emoticon_translations": -505,
19 }
20
[end of zerver/lib/markdown/preprocessor_priorities.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/zerver/lib/markdown/preprocessor_priorities.py b/zerver/lib/markdown/preprocessor_priorities.py
--- a/zerver/lib/markdown/preprocessor_priorities.py
+++ b/zerver/lib/markdown/preprocessor_priorities.py
@@ -1,6 +1,7 @@
# Note that in the Markdown preprocessor registry, the highest
# numeric value is considered the highest priority, so the dict
# below is ordered from highest-to-lowest priority.
+# Priorities for the built-in preprocessors are commented out.
PREPROCESSOR_PRIORITES = {
"generate_parameter_description": 535,
"generate_response_description": 531,
@@ -10,9 +11,12 @@
"generate_return_values": 510,
"generate_api_arguments": 505,
"include": 500,
+ # "include_wrapper": 500,
"help_relative_links": 475,
"setting": 450,
+ # "normalize_whitespace": 30,
"fenced_code_block": 25,
+ # "html_block": 20,
"tabbed_sections": -500,
"nested_code_blocks": -500,
"emoticon_translations": -505,
| {"golden_diff": "diff --git a/zerver/lib/markdown/preprocessor_priorities.py b/zerver/lib/markdown/preprocessor_priorities.py\n--- a/zerver/lib/markdown/preprocessor_priorities.py\n+++ b/zerver/lib/markdown/preprocessor_priorities.py\n@@ -1,6 +1,7 @@\n # Note that in the Markdown preprocessor registry, the highest\n # numeric value is considered the highest priority, so the dict\n # below is ordered from highest-to-lowest priority.\n+# Priorities for the built-in preprocessors are commented out.\n PREPROCESSOR_PRIORITES = {\n \"generate_parameter_description\": 535,\n \"generate_response_description\": 531,\n@@ -10,9 +11,12 @@\n \"generate_return_values\": 510,\n \"generate_api_arguments\": 505,\n \"include\": 500,\n+ # \"include_wrapper\": 500,\n \"help_relative_links\": 475,\n \"setting\": 450,\n+ # \"normalize_whitespace\": 30,\n \"fenced_code_block\": 25,\n+ # \"html_block\": 20,\n \"tabbed_sections\": -500,\n \"nested_code_blocks\": -500,\n \"emoticon_translations\": -505,\n", "issue": "markdown: Document built-in preprocessor priorities.\nAs a follow-up to #19783, it would be good to document the priorities assigned to the built-in preprocessors that the Python-Markdown library has. A couple of notes:\r\n- This involves a bit of grunt work, the quickest way to do this is to loop over and print `md_engine.preprocessors._priorities` in `zerver/lib/templates.py`.\r\n- Note that in `templates.py`, there are different cases where different sets of preprocessors are added, so one has to do the additional work to figure out which preprocessors are running in which of those cases and then document all the priorities that are for built-in preprocessors.\r\n- The file to put these priorities in is: `zerver/lib/markdown/preprocessor_priorities..py`.\r\n\r\nThanks!\n", "before_files": [{"content": "# Note that in the Markdown preprocessor registry, the highest\n# numeric value is considered the highest priority, so the dict\n# below is ordered from highest-to-lowest priority.\nPREPROCESSOR_PRIORITES = {\n \"generate_parameter_description\": 535,\n \"generate_response_description\": 531,\n \"generate_api_title\": 531,\n \"generate_api_description\": 530,\n \"generate_code_example\": 525,\n \"generate_return_values\": 510,\n \"generate_api_arguments\": 505,\n \"include\": 500,\n \"help_relative_links\": 475,\n \"setting\": 450,\n \"fenced_code_block\": 25,\n \"tabbed_sections\": -500,\n \"nested_code_blocks\": -500,\n \"emoticon_translations\": -505,\n}\n", "path": "zerver/lib/markdown/preprocessor_priorities.py"}]} | 946 | 287 |
gh_patches_debug_33956 | rasdani/github-patches | git_diff | hylang__hy-2299 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Slow startup when Hy is installed from a wheel
Testing the new release of 0.16.0, I see that startup is much slower when installing from the wheel than from the source distribution or directly from the repository. Likewise for older Hy releases. Even when I make sure the `__pycache__`s are included in the wheel and I can see they're installed. Either there's something wonky with my system, or wheel installation doesn't play nicely with premade byte-compiled files.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 import os
4
5 import fastentrypoints # Monkey-patches setuptools.
6 from get_version import __version__
7 from setuptools import find_packages, setup
8
9 os.chdir(os.path.split(os.path.abspath(__file__))[0])
10
11 PKG = "hy"
12
13 long_description = """Hy is a Python <--> Lisp layer. It helps
14 make things work nicer, and lets Python and the Hy lisp variant play
15 nice together. """
16
17 setup(
18 name=PKG,
19 version=__version__,
20 install_requires=[
21 "funcparserlib ~= 1.0",
22 "colorama",
23 'astor>=0.8 ; python_version < "3.9"',
24 ],
25 python_requires=">= 3.7, < 3.11",
26 entry_points={
27 "console_scripts": [
28 "hy = hy.cmdline:hy_main",
29 "hy3 = hy.cmdline:hy_main",
30 "hyc = hy.cmdline:hyc_main",
31 "hyc3 = hy.cmdline:hyc_main",
32 "hy2py = hy.cmdline:hy2py_main",
33 "hy2py3 = hy.cmdline:hy2py_main",
34 ]
35 },
36 packages=find_packages(exclude=["tests*"]),
37 package_data={
38 "hy": ["*.hy", "__pycache__/*"],
39 "hy.contrib": ["*.hy", "__pycache__/*"],
40 "hy.core": ["*.hy", "__pycache__/*"],
41 "hy.extra": ["*.hy", "__pycache__/*"],
42 },
43 data_files=[("get_version", ["get_version.py"])],
44 author="Paul Tagliamonte",
45 author_email="[email protected]",
46 long_description=long_description,
47 description="Lisp and Python love each other.",
48 license="Expat",
49 url="http://hylang.org/",
50 platforms=["any"],
51 classifiers=[
52 "Development Status :: 4 - Beta",
53 "Intended Audience :: Developers",
54 "License :: DFSG approved",
55 "License :: OSI Approved :: MIT License", # Really "Expat". Ugh.
56 "Operating System :: OS Independent",
57 "Programming Language :: Lisp",
58 "Programming Language :: Python",
59 "Programming Language :: Python :: 3",
60 "Programming Language :: Python :: 3.7",
61 "Programming Language :: Python :: 3.8",
62 "Programming Language :: Python :: 3.9",
63 "Programming Language :: Python :: 3.10",
64 "Topic :: Software Development :: Code Generators",
65 "Topic :: Software Development :: Compilers",
66 "Topic :: Software Development :: Libraries",
67 ],
68 project_urls={
69 "Documentation": "https://docs.hylang.org/",
70 "Source": "https://github.com/hylang/hy",
71 },
72 )
73
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -5,6 +5,7 @@
import fastentrypoints # Monkey-patches setuptools.
from get_version import __version__
from setuptools import find_packages, setup
+from setuptools.command.install import install
os.chdir(os.path.split(os.path.abspath(__file__))[0])
@@ -14,14 +15,34 @@
make things work nicer, and lets Python and the Hy lisp variant play
nice together. """
+
+class install(install):
+ def run(self):
+ super().run()
+ import py_compile
+ from glob import glob
+
+ import hy # for compile hooks
+
+ for path in glob(os.path.join(self.install_lib, "**/*.hy"), recursive=True):
+ py_compile.compile(
+ path, invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH
+ )
+
+
+# both setup_requires and install_requires
+# since we need to compile .hy files during setup
+requires = [
+ "funcparserlib ~= 1.0",
+ "colorama",
+ 'astor>=0.8 ; python_version < "3.9"',
+]
+
setup(
name=PKG,
version=__version__,
- install_requires=[
- "funcparserlib ~= 1.0",
- "colorama",
- 'astor>=0.8 ; python_version < "3.9"',
- ],
+ setup_requires=requires,
+ install_requires=requires,
python_requires=">= 3.7, < 3.11",
entry_points={
"console_scripts": [
@@ -35,10 +56,7 @@
},
packages=find_packages(exclude=["tests*"]),
package_data={
- "hy": ["*.hy", "__pycache__/*"],
- "hy.contrib": ["*.hy", "__pycache__/*"],
- "hy.core": ["*.hy", "__pycache__/*"],
- "hy.extra": ["*.hy", "__pycache__/*"],
+ "": ["*.hy"],
},
data_files=[("get_version", ["get_version.py"])],
author="Paul Tagliamonte",
@@ -69,4 +87,7 @@
"Documentation": "https://docs.hylang.org/",
"Source": "https://github.com/hylang/hy",
},
+ cmdclass={
+ "install": install,
+ },
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -5,6 +5,7 @@\n import fastentrypoints # Monkey-patches setuptools.\n from get_version import __version__\n from setuptools import find_packages, setup\n+from setuptools.command.install import install\n \n os.chdir(os.path.split(os.path.abspath(__file__))[0])\n \n@@ -14,14 +15,34 @@\n make things work nicer, and lets Python and the Hy lisp variant play\n nice together. \"\"\"\n \n+\n+class install(install):\n+ def run(self):\n+ super().run()\n+ import py_compile\n+ from glob import glob\n+\n+ import hy # for compile hooks\n+\n+ for path in glob(os.path.join(self.install_lib, \"**/*.hy\"), recursive=True):\n+ py_compile.compile(\n+ path, invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH\n+ )\n+\n+\n+# both setup_requires and install_requires\n+# since we need to compile .hy files during setup\n+requires = [\n+ \"funcparserlib ~= 1.0\",\n+ \"colorama\",\n+ 'astor>=0.8 ; python_version < \"3.9\"',\n+]\n+\n setup(\n name=PKG,\n version=__version__,\n- install_requires=[\n- \"funcparserlib ~= 1.0\",\n- \"colorama\",\n- 'astor>=0.8 ; python_version < \"3.9\"',\n- ],\n+ setup_requires=requires,\n+ install_requires=requires,\n python_requires=\">= 3.7, < 3.11\",\n entry_points={\n \"console_scripts\": [\n@@ -35,10 +56,7 @@\n },\n packages=find_packages(exclude=[\"tests*\"]),\n package_data={\n- \"hy\": [\"*.hy\", \"__pycache__/*\"],\n- \"hy.contrib\": [\"*.hy\", \"__pycache__/*\"],\n- \"hy.core\": [\"*.hy\", \"__pycache__/*\"],\n- \"hy.extra\": [\"*.hy\", \"__pycache__/*\"],\n+ \"\": [\"*.hy\"],\n },\n data_files=[(\"get_version\", [\"get_version.py\"])],\n author=\"Paul Tagliamonte\",\n@@ -69,4 +87,7 @@\n \"Documentation\": \"https://docs.hylang.org/\",\n \"Source\": \"https://github.com/hylang/hy\",\n },\n+ cmdclass={\n+ \"install\": install,\n+ },\n )\n", "issue": "Slow startup when Hy is installed from a wheel\nTesting the new release of 0.16.0, I see that startup is much slower when installing from the wheel than from the source distribution or directly from the repository. Likewise for older Hy releases. Even when I make sure the `__pycache__`s are included in the wheel and I can see they're installed. Either there's something wonky with my system, or wheel installation doesn't play nicely with premade byte-compiled files.\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os\n\nimport fastentrypoints # Monkey-patches setuptools.\nfrom get_version import __version__\nfrom setuptools import find_packages, setup\n\nos.chdir(os.path.split(os.path.abspath(__file__))[0])\n\nPKG = \"hy\"\n\nlong_description = \"\"\"Hy is a Python <--> Lisp layer. It helps\nmake things work nicer, and lets Python and the Hy lisp variant play\nnice together. \"\"\"\n\nsetup(\n name=PKG,\n version=__version__,\n install_requires=[\n \"funcparserlib ~= 1.0\",\n \"colorama\",\n 'astor>=0.8 ; python_version < \"3.9\"',\n ],\n python_requires=\">= 3.7, < 3.11\",\n entry_points={\n \"console_scripts\": [\n \"hy = hy.cmdline:hy_main\",\n \"hy3 = hy.cmdline:hy_main\",\n \"hyc = hy.cmdline:hyc_main\",\n \"hyc3 = hy.cmdline:hyc_main\",\n \"hy2py = hy.cmdline:hy2py_main\",\n \"hy2py3 = hy.cmdline:hy2py_main\",\n ]\n },\n packages=find_packages(exclude=[\"tests*\"]),\n package_data={\n \"hy\": [\"*.hy\", \"__pycache__/*\"],\n \"hy.contrib\": [\"*.hy\", \"__pycache__/*\"],\n \"hy.core\": [\"*.hy\", \"__pycache__/*\"],\n \"hy.extra\": [\"*.hy\", \"__pycache__/*\"],\n },\n data_files=[(\"get_version\", [\"get_version.py\"])],\n author=\"Paul Tagliamonte\",\n author_email=\"[email protected]\",\n long_description=long_description,\n description=\"Lisp and Python love each other.\",\n license=\"Expat\",\n url=\"http://hylang.org/\",\n platforms=[\"any\"],\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: DFSG approved\",\n \"License :: OSI Approved :: MIT License\", # Really \"Expat\". Ugh.\n \"Operating System :: OS Independent\",\n \"Programming Language :: Lisp\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Topic :: Software Development :: Code Generators\",\n \"Topic :: Software Development :: Compilers\",\n \"Topic :: Software Development :: Libraries\",\n ],\n project_urls={\n \"Documentation\": \"https://docs.hylang.org/\",\n \"Source\": \"https://github.com/hylang/hy\",\n },\n)\n", "path": "setup.py"}]} | 1,373 | 553 |
gh_patches_debug_25931 | rasdani/github-patches | git_diff | joke2k__faker-1103 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Implementation of person id number for cs_CZ - Czech (rodné číslo)
Can you implement randomizer which will generate a proper person ID number (rodné číslo) for Czech local?
</issue>
<code>
[start of faker/providers/ssn/cs_CZ/__init__.py]
1 from .. import Provider as BaseProvider
2
3
4 class Provider(BaseProvider):
5 vat_id_formats = (
6 'CZ########',
7 'CZ#########',
8 'CZ##########',
9 )
10
11 def vat_id(self):
12 """
13 http://ec.europa.eu/taxation_customs/vies/faq.html#item_11
14 :return: A random Czech VAT ID
15 """
16
17 return self.bothify(self.random_element(self.vat_id_formats))
18
[end of faker/providers/ssn/cs_CZ/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/faker/providers/ssn/cs_CZ/__init__.py b/faker/providers/ssn/cs_CZ/__init__.py
--- a/faker/providers/ssn/cs_CZ/__init__.py
+++ b/faker/providers/ssn/cs_CZ/__init__.py
@@ -1,3 +1,5 @@
+from math import ceil
+
from .. import Provider as BaseProvider
@@ -8,6 +10,8 @@
'CZ##########',
)
+ national_id_months = ['%.2d' % i for i in range(1, 13)] + ['%.2d' % i for i in range(51, 63)]
+
def vat_id(self):
"""
http://ec.europa.eu/taxation_customs/vies/faq.html#item_11
@@ -15,3 +19,24 @@
"""
return self.bothify(self.random_element(self.vat_id_formats))
+
+ def birth_number(self):
+ """
+ Birth Number (Czech/Slovak: rodné číslo (RČ))
+ https://en.wikipedia.org/wiki/National_identification_number#Czech_Republic_and_Slovakia
+ """
+ birthdate = self.generator.date_of_birth()
+ year = '%.2d' % (birthdate.year % 100)
+ month = self.random_element(self.national_id_months)
+ day = '%.2d' % birthdate.day
+ if birthdate.year > 1953:
+ sn = self.random_number(4, True)
+ else:
+ sn = self.random_number(3, True)
+ number = int('{}{}{}{}'.format(year, month, day, sn))
+ birth_number = str(ceil(number / 11) * 11)
+ if year == '00':
+ birth_number = '00' + birth_number
+ elif year[0] == '0':
+ birth_number = '0' + birth_number
+ return '{}/{}'.format(birth_number[:6], birth_number[6::])
| {"golden_diff": "diff --git a/faker/providers/ssn/cs_CZ/__init__.py b/faker/providers/ssn/cs_CZ/__init__.py\n--- a/faker/providers/ssn/cs_CZ/__init__.py\n+++ b/faker/providers/ssn/cs_CZ/__init__.py\n@@ -1,3 +1,5 @@\n+from math import ceil\n+\n from .. import Provider as BaseProvider\n \n \n@@ -8,6 +10,8 @@\n 'CZ##########',\n )\n \n+ national_id_months = ['%.2d' % i for i in range(1, 13)] + ['%.2d' % i for i in range(51, 63)]\n+\n def vat_id(self):\n \"\"\"\n http://ec.europa.eu/taxation_customs/vies/faq.html#item_11\n@@ -15,3 +19,24 @@\n \"\"\"\n \n return self.bothify(self.random_element(self.vat_id_formats))\n+\n+ def birth_number(self):\n+ \"\"\"\n+ Birth Number (Czech/Slovak: rodn\u00e9 \u010d\u00edslo (R\u010c))\n+ https://en.wikipedia.org/wiki/National_identification_number#Czech_Republic_and_Slovakia\n+ \"\"\"\n+ birthdate = self.generator.date_of_birth()\n+ year = '%.2d' % (birthdate.year % 100)\n+ month = self.random_element(self.national_id_months)\n+ day = '%.2d' % birthdate.day\n+ if birthdate.year > 1953:\n+ sn = self.random_number(4, True)\n+ else:\n+ sn = self.random_number(3, True)\n+ number = int('{}{}{}{}'.format(year, month, day, sn))\n+ birth_number = str(ceil(number / 11) * 11)\n+ if year == '00':\n+ birth_number = '00' + birth_number\n+ elif year[0] == '0':\n+ birth_number = '0' + birth_number\n+ return '{}/{}'.format(birth_number[:6], birth_number[6::])\n", "issue": "Implementation of person id number for cs_CZ - Czech (rodn\u00e9 \u010d\u00edslo)\nCan you implement randomizer which will generate a proper person ID number (rodn\u00e9 \u010d\u00edslo) for Czech local?\n", "before_files": [{"content": "from .. import Provider as BaseProvider\n\n\nclass Provider(BaseProvider):\n vat_id_formats = (\n 'CZ########',\n 'CZ#########',\n 'CZ##########',\n )\n\n def vat_id(self):\n \"\"\"\n http://ec.europa.eu/taxation_customs/vies/faq.html#item_11\n :return: A random Czech VAT ID\n \"\"\"\n\n return self.bothify(self.random_element(self.vat_id_formats))\n", "path": "faker/providers/ssn/cs_CZ/__init__.py"}]} | 722 | 471 |
gh_patches_debug_17777 | rasdani/github-patches | git_diff | googleapis__google-cloud-python-5424 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Monitoring alias package is missing new service clients
https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/monitoring/google/cloud/monitoring.py is missing the new clients added to https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/monitoring/google/cloud/monitoring_v3/__init__.py
Should be a relatively easy fix.
</issue>
<code>
[start of monitoring/google/cloud/monitoring.py]
1 # Copyright 2017, Google LLC All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16
17 from google.cloud.monitoring_v3.query import Query
18 from google.cloud.monitoring_v3 import GroupServiceClient
19 from google.cloud.monitoring_v3 import MetricServiceClient
20 from google.cloud.monitoring_v3 import enums
21 from google.cloud.monitoring_v3 import types
22
23 __all__ = (
24 'enums',
25 'types',
26 'GroupServiceClient',
27 'Query',
28 'MetricServiceClient', )
29
[end of monitoring/google/cloud/monitoring.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/monitoring/google/cloud/monitoring.py b/monitoring/google/cloud/monitoring.py
--- a/monitoring/google/cloud/monitoring.py
+++ b/monitoring/google/cloud/monitoring.py
@@ -15,14 +15,21 @@
from __future__ import absolute_import
from google.cloud.monitoring_v3.query import Query
+from google.cloud.monitoring_v3 import AlertPolicyServiceClient
from google.cloud.monitoring_v3 import GroupServiceClient
from google.cloud.monitoring_v3 import MetricServiceClient
+from google.cloud.monitoring_v3 import NotificationChannelServiceClient
+from google.cloud.monitoring_v3 import UptimeCheckServiceClient
from google.cloud.monitoring_v3 import enums
from google.cloud.monitoring_v3 import types
__all__ = (
'enums',
'types',
+ 'AlertPolicyServiceClient',
'GroupServiceClient',
+ 'MetricServiceClient',
+ 'NotificationChannelServiceClient',
+ 'UptimeCheckServiceClient',
'Query',
- 'MetricServiceClient', )
+)
| {"golden_diff": "diff --git a/monitoring/google/cloud/monitoring.py b/monitoring/google/cloud/monitoring.py\n--- a/monitoring/google/cloud/monitoring.py\n+++ b/monitoring/google/cloud/monitoring.py\n@@ -15,14 +15,21 @@\n from __future__ import absolute_import\n \n from google.cloud.monitoring_v3.query import Query\n+from google.cloud.monitoring_v3 import AlertPolicyServiceClient\n from google.cloud.monitoring_v3 import GroupServiceClient\n from google.cloud.monitoring_v3 import MetricServiceClient\n+from google.cloud.monitoring_v3 import NotificationChannelServiceClient\n+from google.cloud.monitoring_v3 import UptimeCheckServiceClient\n from google.cloud.monitoring_v3 import enums\n from google.cloud.monitoring_v3 import types\n \n __all__ = (\n 'enums',\n 'types',\n+ 'AlertPolicyServiceClient',\n 'GroupServiceClient',\n+ 'MetricServiceClient',\n+ 'NotificationChannelServiceClient',\n+ 'UptimeCheckServiceClient',\n 'Query',\n- 'MetricServiceClient', )\n+)\n", "issue": "Monitoring alias package is missing new service clients\nhttps://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/monitoring/google/cloud/monitoring.py is missing the new clients added to https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/monitoring/google/cloud/monitoring_v3/__init__.py\r\n\r\nShould be a relatively easy fix.\n", "before_files": [{"content": "# Copyright 2017, Google LLC All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nfrom google.cloud.monitoring_v3.query import Query\nfrom google.cloud.monitoring_v3 import GroupServiceClient\nfrom google.cloud.monitoring_v3 import MetricServiceClient\nfrom google.cloud.monitoring_v3 import enums\nfrom google.cloud.monitoring_v3 import types\n\n__all__ = (\n 'enums',\n 'types',\n 'GroupServiceClient',\n 'Query',\n 'MetricServiceClient', )\n", "path": "monitoring/google/cloud/monitoring.py"}]} | 892 | 234 |
gh_patches_debug_35214 | rasdani/github-patches | git_diff | conan-io__conan-center-index-6951 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[request] stb/20210818
### Package Details
* Package Name/Version: **stb/20210818**
There has been +1800 commits added to stb since Feb 2 of 2020, I greatly suggest updating it.
The above mentioned version is newly released by the upstream project and not yet available as a recipe. Please add this version.
</issue>
<code>
[start of recipes/stb/all/conanfile.py]
1 from conans import ConanFile, tools
2 import os
3
4 class StbConan(ConanFile):
5 name = "stb"
6 description = "single-file public domain libraries for C/C++"
7 topics = ("conan", "stb", "single-file")
8 url = "https://github.com/conan-io/conan-center-index"
9 homepage = "https://github.com/nothings/stb"
10 license = ("Unlicense", "MIT")
11 no_copy_source = True
12 _source_subfolder = "source_subfolder"
13
14 def source(self):
15 commit = os.path.splitext(os.path.basename(self.conan_data["sources"][self.version]["url"]))[0]
16 tools.get(**self.conan_data["sources"][self.version])
17 extracted_dir = self.name + "-" + commit
18 os.rename(extracted_dir, self._source_subfolder)
19
20 def package(self):
21 self.copy("LICENSE", src=self._source_subfolder, dst="licenses")
22 self.copy("*.h", src=self._source_subfolder, dst="include")
23 self.copy("stb_vorbis.c", src=self._source_subfolder, dst="include")
24 tools.rmdir(os.path.join(self.package_folder, "include", "tests"))
25
26 def package_id(self):
27 self.info.header_only()
28
29 def package_info(self):
30 self.cpp_info.defines.append('STB_TEXTEDIT_KEYTYPE=unsigned')
31
[end of recipes/stb/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/stb/all/conanfile.py b/recipes/stb/all/conanfile.py
--- a/recipes/stb/all/conanfile.py
+++ b/recipes/stb/all/conanfile.py
@@ -1,27 +1,53 @@
from conans import ConanFile, tools
import os
+required_conan_version = ">=1.33.0"
+
+
class StbConan(ConanFile):
name = "stb"
description = "single-file public domain libraries for C/C++"
- topics = ("conan", "stb", "single-file")
+ topics = ("stb", "single-file")
url = "https://github.com/conan-io/conan-center-index"
homepage = "https://github.com/nothings/stb"
license = ("Unlicense", "MIT")
no_copy_source = True
- _source_subfolder = "source_subfolder"
+
+ options = {
+ "with_deprecated": [True, False]
+ }
+
+ default_options = {
+ "with_deprecated": True
+ }
+
+ @property
+ def _source_subfolder(self):
+ return "source_subfolder"
+
+ @property
+ def _version(self):
+ # HACK: Used to circumvent the incompatibility
+ # of the format cci.YYYYMMDD in tools.Version
+ return str(self.version)[4:]
+
+ def config_options(self):
+ if tools.Version(self._version) < "20210713":
+ del self.options.with_deprecated
def source(self):
- commit = os.path.splitext(os.path.basename(self.conan_data["sources"][self.version]["url"]))[0]
- tools.get(**self.conan_data["sources"][self.version])
- extracted_dir = self.name + "-" + commit
- os.rename(extracted_dir, self._source_subfolder)
+ tools.get(**self.conan_data["sources"][self.version], strip_root=True, destination=self._source_subfolder)
def package(self):
self.copy("LICENSE", src=self._source_subfolder, dst="licenses")
self.copy("*.h", src=self._source_subfolder, dst="include")
self.copy("stb_vorbis.c", src=self._source_subfolder, dst="include")
tools.rmdir(os.path.join(self.package_folder, "include", "tests"))
+ if tools.Version(self._version) >= "20210713":
+ tools.rmdir(os.path.join(self.package_folder, "include", "deprecated"))
+ if self.options.get_safe("with_deprecated", False):
+ self.copy("*.h", src=os.path.join(self._source_subfolder, "deprecated"), dst="include")
+ self.copy("stb_image.c", src=os.path.join(self._source_subfolder, "deprecated"), dst="include")
def package_id(self):
self.info.header_only()
| {"golden_diff": "diff --git a/recipes/stb/all/conanfile.py b/recipes/stb/all/conanfile.py\n--- a/recipes/stb/all/conanfile.py\n+++ b/recipes/stb/all/conanfile.py\n@@ -1,27 +1,53 @@\n from conans import ConanFile, tools\n import os\n \n+required_conan_version = \">=1.33.0\"\n+\n+\n class StbConan(ConanFile):\n name = \"stb\"\n description = \"single-file public domain libraries for C/C++\"\n- topics = (\"conan\", \"stb\", \"single-file\")\n+ topics = (\"stb\", \"single-file\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/nothings/stb\"\n license = (\"Unlicense\", \"MIT\")\n no_copy_source = True\n- _source_subfolder = \"source_subfolder\"\n+\n+ options = {\n+ \"with_deprecated\": [True, False]\n+ }\n+\n+ default_options = {\n+ \"with_deprecated\": True\n+ }\n+\n+ @property\n+ def _source_subfolder(self):\n+ return \"source_subfolder\"\n+\n+ @property\n+ def _version(self):\n+ # HACK: Used to circumvent the incompatibility\n+ # of the format cci.YYYYMMDD in tools.Version\n+ return str(self.version)[4:]\n+\n+ def config_options(self):\n+ if tools.Version(self._version) < \"20210713\":\n+ del self.options.with_deprecated\n \n def source(self):\n- commit = os.path.splitext(os.path.basename(self.conan_data[\"sources\"][self.version][\"url\"]))[0]\n- tools.get(**self.conan_data[\"sources\"][self.version])\n- extracted_dir = self.name + \"-\" + commit\n- os.rename(extracted_dir, self._source_subfolder)\n+ tools.get(**self.conan_data[\"sources\"][self.version], strip_root=True, destination=self._source_subfolder)\n \n def package(self):\n self.copy(\"LICENSE\", src=self._source_subfolder, dst=\"licenses\")\n self.copy(\"*.h\", src=self._source_subfolder, dst=\"include\")\n self.copy(\"stb_vorbis.c\", src=self._source_subfolder, dst=\"include\")\n tools.rmdir(os.path.join(self.package_folder, \"include\", \"tests\"))\n+ if tools.Version(self._version) >= \"20210713\":\n+ tools.rmdir(os.path.join(self.package_folder, \"include\", \"deprecated\"))\n+ if self.options.get_safe(\"with_deprecated\", False):\n+ self.copy(\"*.h\", src=os.path.join(self._source_subfolder, \"deprecated\"), dst=\"include\")\n+ self.copy(\"stb_image.c\", src=os.path.join(self._source_subfolder, \"deprecated\"), dst=\"include\")\n \n def package_id(self):\n self.info.header_only()\n", "issue": "[request] stb/20210818\n### Package Details\r\n * Package Name/Version: **stb/20210818**\r\n\r\nThere has been +1800 commits added to stb since Feb 2 of 2020, I greatly suggest updating it.\r\n\r\nThe above mentioned version is newly released by the upstream project and not yet available as a recipe. Please add this version.\r\n\n", "before_files": [{"content": "from conans import ConanFile, tools\nimport os\n\nclass StbConan(ConanFile):\n name = \"stb\"\n description = \"single-file public domain libraries for C/C++\"\n topics = (\"conan\", \"stb\", \"single-file\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/nothings/stb\"\n license = (\"Unlicense\", \"MIT\")\n no_copy_source = True\n _source_subfolder = \"source_subfolder\"\n\n def source(self):\n commit = os.path.splitext(os.path.basename(self.conan_data[\"sources\"][self.version][\"url\"]))[0]\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = self.name + \"-\" + commit\n os.rename(extracted_dir, self._source_subfolder)\n\n def package(self):\n self.copy(\"LICENSE\", src=self._source_subfolder, dst=\"licenses\")\n self.copy(\"*.h\", src=self._source_subfolder, dst=\"include\")\n self.copy(\"stb_vorbis.c\", src=self._source_subfolder, dst=\"include\")\n tools.rmdir(os.path.join(self.package_folder, \"include\", \"tests\"))\n\n def package_id(self):\n self.info.header_only()\n \n def package_info(self):\n self.cpp_info.defines.append('STB_TEXTEDIT_KEYTYPE=unsigned')\n", "path": "recipes/stb/all/conanfile.py"}]} | 992 | 654 |
gh_patches_debug_15650 | rasdani/github-patches | git_diff | evennia__evennia-1733 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Disabling webclient does not actually disable it
#### Steps to reproduce the issue / Reasons for adding feature:
1. Set WEBCLIENT_ENABLED to False
2. Link disappears from index page
3. Go to http://example.com/webclient
4. Webclient loads.
#### Error output / Expected result of feature
Setting WEBCLIENT_ENABLED to False should disable it, not hide it.
#### Extra information, such as Evennia revision/repo/branch, operating system and ideas for how to solve / implement:
The status check needs to happen in urls.py, not just on the navbar template. If disabled, the url for the webclient should not be added to the list of urlpatterns.
</issue>
<code>
[start of evennia/web/webclient/views.py]
1
2 """
3 This contains a simple view for rendering the webclient
4 page and serve it eventual static content.
5
6 """
7 from __future__ import print_function
8 from django.shortcuts import render
9 from django.contrib.auth import login, authenticate
10
11 from evennia.accounts.models import AccountDB
12 from evennia.utils import logger
13
14
15 def webclient(request):
16 """
17 Webclient page template loading.
18
19 """
20 # auto-login is now handled by evennia.web.utils.middleware
21
22 # make sure to store the browser session's hash so the webclient can get to it!
23 pagevars = {'browser_sessid': request.session.session_key}
24
25 return render(request, 'webclient.html', pagevars)
26
[end of evennia/web/webclient/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/evennia/web/webclient/views.py b/evennia/web/webclient/views.py
--- a/evennia/web/webclient/views.py
+++ b/evennia/web/webclient/views.py
@@ -5,6 +5,8 @@
"""
from __future__ import print_function
+from django.conf import settings
+from django.http import Http404
from django.shortcuts import render
from django.contrib.auth import login, authenticate
@@ -19,6 +21,10 @@
"""
# auto-login is now handled by evennia.web.utils.middleware
+ # check if webclient should be enabled
+ if not settings.WEBCLIENT_ENABLED:
+ raise Http404
+
# make sure to store the browser session's hash so the webclient can get to it!
pagevars = {'browser_sessid': request.session.session_key}
| {"golden_diff": "diff --git a/evennia/web/webclient/views.py b/evennia/web/webclient/views.py\n--- a/evennia/web/webclient/views.py\n+++ b/evennia/web/webclient/views.py\n@@ -5,6 +5,8 @@\n \n \"\"\"\n from __future__ import print_function\n+from django.conf import settings\n+from django.http import Http404\n from django.shortcuts import render\n from django.contrib.auth import login, authenticate\n \n@@ -19,6 +21,10 @@\n \"\"\"\n # auto-login is now handled by evennia.web.utils.middleware\n \n+ # check if webclient should be enabled\n+ if not settings.WEBCLIENT_ENABLED:\n+ raise Http404\n+ \n # make sure to store the browser session's hash so the webclient can get to it!\n pagevars = {'browser_sessid': request.session.session_key}\n", "issue": "Disabling webclient does not actually disable it\n#### Steps to reproduce the issue / Reasons for adding feature:\r\n\r\n1. Set WEBCLIENT_ENABLED to False\r\n2. Link disappears from index page\r\n3. Go to http://example.com/webclient\r\n4. Webclient loads.\r\n\r\n#### Error output / Expected result of feature\r\nSetting WEBCLIENT_ENABLED to False should disable it, not hide it.\r\n\r\n#### Extra information, such as Evennia revision/repo/branch, operating system and ideas for how to solve / implement:\r\nThe status check needs to happen in urls.py, not just on the navbar template. If disabled, the url for the webclient should not be added to the list of urlpatterns.\n", "before_files": [{"content": "\n\"\"\"\nThis contains a simple view for rendering the webclient\npage and serve it eventual static content.\n\n\"\"\"\nfrom __future__ import print_function\nfrom django.shortcuts import render\nfrom django.contrib.auth import login, authenticate\n\nfrom evennia.accounts.models import AccountDB\nfrom evennia.utils import logger\n\n\ndef webclient(request):\n \"\"\"\n Webclient page template loading.\n\n \"\"\"\n # auto-login is now handled by evennia.web.utils.middleware\n \n # make sure to store the browser session's hash so the webclient can get to it!\n pagevars = {'browser_sessid': request.session.session_key}\n\n return render(request, 'webclient.html', pagevars)\n", "path": "evennia/web/webclient/views.py"}]} | 868 | 191 |
gh_patches_debug_43232 | rasdani/github-patches | git_diff | chainer__chainer-2204 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove trigger option of snapshot and snapshot_object
They have the same functionality as the trigger argument of Trainer.extend and are redundant. I think they confuse users and they might misunderstand the trigger feature, and so they should be removed in the next major update.
</issue>
<code>
[start of chainer/training/extensions/_snapshot.py]
1 import os
2 import shutil
3 import tempfile
4
5 from chainer.serializers import npz
6 from chainer.training import extension
7
8
9 def snapshot_object(target, filename, savefun=npz.save_npz,
10 trigger=(1, 'epoch')):
11 """Returns a trainer extension to take snapshots of a given object.
12
13 This extension serializes the given object and saves it to the output
14 directory.
15
16 This extension is called once for each epoch by default. The default
17 priority is -100, which is lower than that of most built-in extensions.
18
19 Args:
20 target: Object to serialize.
21 filename (str): Name of the file into which the object is serialized.
22 It can be a format string, where the trainer object is passed to
23 the :meth:`str.format` method. For example,
24 ``'snapshot_{.updater.iteration}'`` is converted to
25 ``'snapshot_10000'`` at the 10,000th iteration.
26 savefun: Function to save the object. It takes two arguments: the
27 output file path and the object to serialize.
28 trigger: Trigger that decides when to take snapshot. It can be either
29 an already built trigger object (i.e., a callable object that
30 accepts a trainer object and returns a bool value), or a tuple in
31 the form ``<int>, 'epoch'`` or ``<int>, 'iteration'``. In latter
32 case, the tuple is passed to IntervalTrigger.
33
34 Returns:
35 An extension function.
36
37 """
38 @extension.make_extension(trigger=trigger, priority=-100)
39 def snapshot_object(trainer):
40 _snapshot_object(trainer, target, filename.format(trainer), savefun)
41
42 return snapshot_object
43
44
45 def snapshot(savefun=npz.save_npz,
46 filename='snapshot_iter_{.updater.iteration}',
47 trigger=(1, 'epoch')):
48 """Returns a trainer extension to take snapshots of the trainer.
49
50 This extension serializes the trainer object and saves it to the output
51 directory. It is used to support resuming the training loop from the saved
52 state.
53
54 This extension is called once for each epoch by default. The default
55 priority is -100, which is lower than that of most built-in extensions.
56
57 .. note::
58 This extension first writes the serialized object to a temporary file
59 and then rename it to the target file name. Thus, if the program stops
60 right before the renaming, the temporary file might be left in the
61 output directory.
62
63 Args:
64 savefun: Function to save the trainer. It takes two arguments: the
65 output file path and the trainer object.
66 filename (str): Name of the file into which the trainer is serialized.
67 It can be a format string, where the trainer object is passed to
68 the :meth:`str.format` method.
69 trigger: Trigger that decides when to take snapshot. It can be either
70 an already built trigger object (i.e., a callable object that
71 accepts a trainer object and returns a bool value), or a tuple in
72 the form ``<int>, 'epoch'`` or ``<int>, 'iteration'``. In latter
73 case, the tuple is passed to IntervalTrigger.
74
75 """
76 @extension.make_extension(trigger=trigger, priority=-100)
77 def snapshot(trainer):
78 _snapshot_object(trainer, trainer, filename.format(trainer), savefun)
79
80 return snapshot
81
82
83 def _snapshot_object(trainer, target, filename, savefun):
84 fn = filename.format(trainer)
85 prefix = 'tmp' + fn
86 fd, tmppath = tempfile.mkstemp(prefix=prefix, dir=trainer.out)
87 try:
88 savefun(tmppath, target)
89 except Exception:
90 os.close(fd)
91 os.remove(tmppath)
92 raise
93 os.close(fd)
94 shutil.move(tmppath, os.path.join(trainer.out, fn))
95
[end of chainer/training/extensions/_snapshot.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer/training/extensions/_snapshot.py b/chainer/training/extensions/_snapshot.py
--- a/chainer/training/extensions/_snapshot.py
+++ b/chainer/training/extensions/_snapshot.py
@@ -6,15 +6,19 @@
from chainer.training import extension
-def snapshot_object(target, filename, savefun=npz.save_npz,
- trigger=(1, 'epoch')):
+def snapshot_object(target, filename, savefun=npz.save_npz):
"""Returns a trainer extension to take snapshots of a given object.
This extension serializes the given object and saves it to the output
directory.
- This extension is called once for each epoch by default. The default
- priority is -100, which is lower than that of most built-in extensions.
+ This extension is called once per epoch by default. To take a
+ snapshot at a different interval, a trigger object specifying the
+ required interval can be passed along with this extension
+ to the `extend()` method of the trainer.
+
+ The default priority is -100, which is lower than that of most
+ built-in extensions.
Args:
target: Object to serialize.
@@ -25,17 +29,12 @@
``'snapshot_10000'`` at the 10,000th iteration.
savefun: Function to save the object. It takes two arguments: the
output file path and the object to serialize.
- trigger: Trigger that decides when to take snapshot. It can be either
- an already built trigger object (i.e., a callable object that
- accepts a trainer object and returns a bool value), or a tuple in
- the form ``<int>, 'epoch'`` or ``<int>, 'iteration'``. In latter
- case, the tuple is passed to IntervalTrigger.
Returns:
An extension function.
"""
- @extension.make_extension(trigger=trigger, priority=-100)
+ @extension.make_extension(trigger=(1, 'epoch'), priority=-100)
def snapshot_object(trainer):
_snapshot_object(trainer, target, filename.format(trainer), savefun)
@@ -43,16 +42,20 @@
def snapshot(savefun=npz.save_npz,
- filename='snapshot_iter_{.updater.iteration}',
- trigger=(1, 'epoch')):
+ filename='snapshot_iter_{.updater.iteration}'):
"""Returns a trainer extension to take snapshots of the trainer.
This extension serializes the trainer object and saves it to the output
directory. It is used to support resuming the training loop from the saved
state.
- This extension is called once for each epoch by default. The default
- priority is -100, which is lower than that of most built-in extensions.
+ This extension is called once per epoch by default. To take a
+ snapshot at a different interval, a trigger object specifying the
+ required interval can be passed along with this extension
+ to the `extend()` method of the trainer.
+
+ The default priority is -100, which is lower than that of most
+ built-in extensions.
.. note::
This extension first writes the serialized object to a temporary file
@@ -66,14 +69,9 @@
filename (str): Name of the file into which the trainer is serialized.
It can be a format string, where the trainer object is passed to
the :meth:`str.format` method.
- trigger: Trigger that decides when to take snapshot. It can be either
- an already built trigger object (i.e., a callable object that
- accepts a trainer object and returns a bool value), or a tuple in
- the form ``<int>, 'epoch'`` or ``<int>, 'iteration'``. In latter
- case, the tuple is passed to IntervalTrigger.
"""
- @extension.make_extension(trigger=trigger, priority=-100)
+ @extension.make_extension(trigger=(1, 'epoch'), priority=-100)
def snapshot(trainer):
_snapshot_object(trainer, trainer, filename.format(trainer), savefun)
| {"golden_diff": "diff --git a/chainer/training/extensions/_snapshot.py b/chainer/training/extensions/_snapshot.py\n--- a/chainer/training/extensions/_snapshot.py\n+++ b/chainer/training/extensions/_snapshot.py\n@@ -6,15 +6,19 @@\n from chainer.training import extension\n \n \n-def snapshot_object(target, filename, savefun=npz.save_npz,\n- trigger=(1, 'epoch')):\n+def snapshot_object(target, filename, savefun=npz.save_npz):\n \"\"\"Returns a trainer extension to take snapshots of a given object.\n \n This extension serializes the given object and saves it to the output\n directory.\n \n- This extension is called once for each epoch by default. The default\n- priority is -100, which is lower than that of most built-in extensions.\n+ This extension is called once per epoch by default. To take a\n+ snapshot at a different interval, a trigger object specifying the\n+ required interval can be passed along with this extension\n+ to the `extend()` method of the trainer.\n+\n+ The default priority is -100, which is lower than that of most\n+ built-in extensions.\n \n Args:\n target: Object to serialize.\n@@ -25,17 +29,12 @@\n ``'snapshot_10000'`` at the 10,000th iteration.\n savefun: Function to save the object. It takes two arguments: the\n output file path and the object to serialize.\n- trigger: Trigger that decides when to take snapshot. It can be either\n- an already built trigger object (i.e., a callable object that\n- accepts a trainer object and returns a bool value), or a tuple in\n- the form ``<int>, 'epoch'`` or ``<int>, 'iteration'``. In latter\n- case, the tuple is passed to IntervalTrigger.\n \n Returns:\n An extension function.\n \n \"\"\"\n- @extension.make_extension(trigger=trigger, priority=-100)\n+ @extension.make_extension(trigger=(1, 'epoch'), priority=-100)\n def snapshot_object(trainer):\n _snapshot_object(trainer, target, filename.format(trainer), savefun)\n \n@@ -43,16 +42,20 @@\n \n \n def snapshot(savefun=npz.save_npz,\n- filename='snapshot_iter_{.updater.iteration}',\n- trigger=(1, 'epoch')):\n+ filename='snapshot_iter_{.updater.iteration}'):\n \"\"\"Returns a trainer extension to take snapshots of the trainer.\n \n This extension serializes the trainer object and saves it to the output\n directory. It is used to support resuming the training loop from the saved\n state.\n \n- This extension is called once for each epoch by default. The default\n- priority is -100, which is lower than that of most built-in extensions.\n+ This extension is called once per epoch by default. To take a\n+ snapshot at a different interval, a trigger object specifying the\n+ required interval can be passed along with this extension\n+ to the `extend()` method of the trainer.\n+\n+ The default priority is -100, which is lower than that of most\n+ built-in extensions.\n \n .. note::\n This extension first writes the serialized object to a temporary file\n@@ -66,14 +69,9 @@\n filename (str): Name of the file into which the trainer is serialized.\n It can be a format string, where the trainer object is passed to\n the :meth:`str.format` method.\n- trigger: Trigger that decides when to take snapshot. It can be either\n- an already built trigger object (i.e., a callable object that\n- accepts a trainer object and returns a bool value), or a tuple in\n- the form ``<int>, 'epoch'`` or ``<int>, 'iteration'``. In latter\n- case, the tuple is passed to IntervalTrigger.\n \n \"\"\"\n- @extension.make_extension(trigger=trigger, priority=-100)\n+ @extension.make_extension(trigger=(1, 'epoch'), priority=-100)\n def snapshot(trainer):\n _snapshot_object(trainer, trainer, filename.format(trainer), savefun)\n", "issue": "Remove trigger option of snapshot and snapshot_object\nThey have the same functionality as the trigger argument of Trainer.extend and are redundant. I think they confuse users and they might misunderstand the trigger feature, and so they should be removed in the next major update.\n", "before_files": [{"content": "import os\nimport shutil\nimport tempfile\n\nfrom chainer.serializers import npz\nfrom chainer.training import extension\n\n\ndef snapshot_object(target, filename, savefun=npz.save_npz,\n trigger=(1, 'epoch')):\n \"\"\"Returns a trainer extension to take snapshots of a given object.\n\n This extension serializes the given object and saves it to the output\n directory.\n\n This extension is called once for each epoch by default. The default\n priority is -100, which is lower than that of most built-in extensions.\n\n Args:\n target: Object to serialize.\n filename (str): Name of the file into which the object is serialized.\n It can be a format string, where the trainer object is passed to\n the :meth:`str.format` method. For example,\n ``'snapshot_{.updater.iteration}'`` is converted to\n ``'snapshot_10000'`` at the 10,000th iteration.\n savefun: Function to save the object. It takes two arguments: the\n output file path and the object to serialize.\n trigger: Trigger that decides when to take snapshot. It can be either\n an already built trigger object (i.e., a callable object that\n accepts a trainer object and returns a bool value), or a tuple in\n the form ``<int>, 'epoch'`` or ``<int>, 'iteration'``. In latter\n case, the tuple is passed to IntervalTrigger.\n\n Returns:\n An extension function.\n\n \"\"\"\n @extension.make_extension(trigger=trigger, priority=-100)\n def snapshot_object(trainer):\n _snapshot_object(trainer, target, filename.format(trainer), savefun)\n\n return snapshot_object\n\n\ndef snapshot(savefun=npz.save_npz,\n filename='snapshot_iter_{.updater.iteration}',\n trigger=(1, 'epoch')):\n \"\"\"Returns a trainer extension to take snapshots of the trainer.\n\n This extension serializes the trainer object and saves it to the output\n directory. It is used to support resuming the training loop from the saved\n state.\n\n This extension is called once for each epoch by default. The default\n priority is -100, which is lower than that of most built-in extensions.\n\n .. note::\n This extension first writes the serialized object to a temporary file\n and then rename it to the target file name. Thus, if the program stops\n right before the renaming, the temporary file might be left in the\n output directory.\n\n Args:\n savefun: Function to save the trainer. It takes two arguments: the\n output file path and the trainer object.\n filename (str): Name of the file into which the trainer is serialized.\n It can be a format string, where the trainer object is passed to\n the :meth:`str.format` method.\n trigger: Trigger that decides when to take snapshot. It can be either\n an already built trigger object (i.e., a callable object that\n accepts a trainer object and returns a bool value), or a tuple in\n the form ``<int>, 'epoch'`` or ``<int>, 'iteration'``. In latter\n case, the tuple is passed to IntervalTrigger.\n\n \"\"\"\n @extension.make_extension(trigger=trigger, priority=-100)\n def snapshot(trainer):\n _snapshot_object(trainer, trainer, filename.format(trainer), savefun)\n\n return snapshot\n\n\ndef _snapshot_object(trainer, target, filename, savefun):\n fn = filename.format(trainer)\n prefix = 'tmp' + fn\n fd, tmppath = tempfile.mkstemp(prefix=prefix, dir=trainer.out)\n try:\n savefun(tmppath, target)\n except Exception:\n os.close(fd)\n os.remove(tmppath)\n raise\n os.close(fd)\n shutil.move(tmppath, os.path.join(trainer.out, fn))\n", "path": "chainer/training/extensions/_snapshot.py"}]} | 1,630 | 938 |
gh_patches_debug_27650 | rasdani/github-patches | git_diff | biolab__orange3-4217 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
2 x Transpose + Preprocess loses information
**Describe the bug**
Second transpose cannot retrieve the domain after Preprocess.
**To Reproduce**
Steps to reproduce the behavior:
1. File (brown-selected).
2. Transpose.
3. Preprocesss (say Normalize).
4. Transpose.
**Orange version:**
3.24.dev
**Expected behavior**
Second Transpose puts columns names into a string variable.
**Screenshots**
<img width="1232" alt="Screen Shot 2019-11-14 at 09 33 02" src="https://user-images.githubusercontent.com/12524972/68839832-c910d600-06c1-11ea-9286-5bf033a9802f.png">
</issue>
<code>
[start of Orange/preprocess/normalize.py]
1 import numpy as np
2
3 from Orange.data import ContinuousVariable, Domain
4 from Orange.statistics import distribution
5 from Orange.util import Reprable
6 from .preprocess import Normalize
7 from .transformation import Normalizer as Norm
8 __all__ = ["Normalizer"]
9
10
11 class Normalizer(Reprable):
12 def __init__(self,
13 zero_based=True,
14 norm_type=Normalize.NormalizeBySD,
15 transform_class=False,
16 center=True,
17 normalize_datetime=False):
18 self.zero_based = zero_based
19 self.norm_type = norm_type
20 self.transform_class = transform_class
21 self.center = center
22 self.normalize_datetime = normalize_datetime
23
24 def __call__(self, data):
25 dists = distribution.get_distributions(data)
26 new_attrs = [self.normalize(dists[i], var) for
27 (i, var) in enumerate(data.domain.attributes)]
28
29 new_class_vars = data.domain.class_vars
30 if self.transform_class:
31 attr_len = len(data.domain.attributes)
32 new_class_vars = [self.normalize(dists[i + attr_len], var) for
33 (i, var) in enumerate(data.domain.class_vars)]
34
35 domain = Domain(new_attrs, new_class_vars, data.domain.metas)
36 return data.transform(domain)
37
38 def normalize(self, dist, var):
39 if not var.is_continuous or (var.is_time and not self.normalize_datetime):
40 return var
41 elif self.norm_type == Normalize.NormalizeBySD:
42 return self.normalize_by_sd(dist, var)
43 elif self.norm_type == Normalize.NormalizeBySpan:
44 return self.normalize_by_span(dist, var)
45
46 def normalize_by_sd(self, dist, var):
47 avg, sd = (dist.mean(), dist.standard_deviation()) if dist.size else (0, 1)
48 if sd == 0:
49 sd = 1
50 if self.center:
51 compute_val = Norm(var, avg, 1 / sd)
52 else:
53 compute_val = Norm(var, 0, 1 / sd)
54
55 return ContinuousVariable(
56 var.name,
57 compute_value=compute_val,
58 sparse=var.sparse,
59 )
60
61 def normalize_by_span(self, dist, var):
62 dma, dmi = (dist.max(), dist.min()) if dist.shape[1] else (np.nan, np.nan)
63 diff = dma - dmi
64 if diff < 1e-15:
65 diff = 1
66 if self.zero_based:
67 return ContinuousVariable(
68 var.name,
69 compute_value=Norm(var, dmi, 1 / diff),
70 sparse=var.sparse)
71 else:
72 return ContinuousVariable(
73 var.name,
74 compute_value=Norm(var, (dma + dmi) / 2, 2 / diff),
75 sparse=var.sparse)
76
[end of Orange/preprocess/normalize.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Orange/preprocess/normalize.py b/Orange/preprocess/normalize.py
--- a/Orange/preprocess/normalize.py
+++ b/Orange/preprocess/normalize.py
@@ -1,6 +1,6 @@
import numpy as np
-from Orange.data import ContinuousVariable, Domain
+from Orange.data import Domain
from Orange.statistics import distribution
from Orange.util import Reprable
from .preprocess import Normalize
@@ -51,12 +51,7 @@
compute_val = Norm(var, avg, 1 / sd)
else:
compute_val = Norm(var, 0, 1 / sd)
-
- return ContinuousVariable(
- var.name,
- compute_value=compute_val,
- sparse=var.sparse,
- )
+ return var.copy(compute_value=compute_val)
def normalize_by_span(self, dist, var):
dma, dmi = (dist.max(), dist.min()) if dist.shape[1] else (np.nan, np.nan)
@@ -64,12 +59,7 @@
if diff < 1e-15:
diff = 1
if self.zero_based:
- return ContinuousVariable(
- var.name,
- compute_value=Norm(var, dmi, 1 / diff),
- sparse=var.sparse)
+ compute_val = Norm(var, dmi, 1 / diff)
else:
- return ContinuousVariable(
- var.name,
- compute_value=Norm(var, (dma + dmi) / 2, 2 / diff),
- sparse=var.sparse)
+ compute_val = Norm(var, (dma + dmi) / 2, 2 / diff)
+ return var.copy(compute_value=compute_val)
| {"golden_diff": "diff --git a/Orange/preprocess/normalize.py b/Orange/preprocess/normalize.py\n--- a/Orange/preprocess/normalize.py\n+++ b/Orange/preprocess/normalize.py\n@@ -1,6 +1,6 @@\n import numpy as np\n \n-from Orange.data import ContinuousVariable, Domain\n+from Orange.data import Domain\n from Orange.statistics import distribution\n from Orange.util import Reprable\n from .preprocess import Normalize\n@@ -51,12 +51,7 @@\n compute_val = Norm(var, avg, 1 / sd)\n else:\n compute_val = Norm(var, 0, 1 / sd)\n-\n- return ContinuousVariable(\n- var.name,\n- compute_value=compute_val,\n- sparse=var.sparse,\n- )\n+ return var.copy(compute_value=compute_val)\n \n def normalize_by_span(self, dist, var):\n dma, dmi = (dist.max(), dist.min()) if dist.shape[1] else (np.nan, np.nan)\n@@ -64,12 +59,7 @@\n if diff < 1e-15:\n diff = 1\n if self.zero_based:\n- return ContinuousVariable(\n- var.name,\n- compute_value=Norm(var, dmi, 1 / diff),\n- sparse=var.sparse)\n+ compute_val = Norm(var, dmi, 1 / diff)\n else:\n- return ContinuousVariable(\n- var.name,\n- compute_value=Norm(var, (dma + dmi) / 2, 2 / diff),\n- sparse=var.sparse)\n+ compute_val = Norm(var, (dma + dmi) / 2, 2 / diff)\n+ return var.copy(compute_value=compute_val)\n", "issue": "2 x Transpose + Preprocess loses information\n**Describe the bug**\r\nSecond transpose cannot retrieve the domain after Preprocess.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. File (brown-selected).\r\n2. Transpose.\r\n3. Preprocesss (say Normalize).\r\n4. Transpose.\r\n\r\n**Orange version:**\r\n3.24.dev\r\n\r\n**Expected behavior**\r\nSecond Transpose puts columns names into a string variable.\r\n\r\n**Screenshots**\r\n<img width=\"1232\" alt=\"Screen Shot 2019-11-14 at 09 33 02\" src=\"https://user-images.githubusercontent.com/12524972/68839832-c910d600-06c1-11ea-9286-5bf033a9802f.png\">\r\n\r\n\n", "before_files": [{"content": "import numpy as np\n\nfrom Orange.data import ContinuousVariable, Domain\nfrom Orange.statistics import distribution\nfrom Orange.util import Reprable\nfrom .preprocess import Normalize\nfrom .transformation import Normalizer as Norm\n__all__ = [\"Normalizer\"]\n\n\nclass Normalizer(Reprable):\n def __init__(self,\n zero_based=True,\n norm_type=Normalize.NormalizeBySD,\n transform_class=False,\n center=True,\n normalize_datetime=False):\n self.zero_based = zero_based\n self.norm_type = norm_type\n self.transform_class = transform_class\n self.center = center\n self.normalize_datetime = normalize_datetime\n\n def __call__(self, data):\n dists = distribution.get_distributions(data)\n new_attrs = [self.normalize(dists[i], var) for\n (i, var) in enumerate(data.domain.attributes)]\n\n new_class_vars = data.domain.class_vars\n if self.transform_class:\n attr_len = len(data.domain.attributes)\n new_class_vars = [self.normalize(dists[i + attr_len], var) for\n (i, var) in enumerate(data.domain.class_vars)]\n\n domain = Domain(new_attrs, new_class_vars, data.domain.metas)\n return data.transform(domain)\n\n def normalize(self, dist, var):\n if not var.is_continuous or (var.is_time and not self.normalize_datetime):\n return var\n elif self.norm_type == Normalize.NormalizeBySD:\n return self.normalize_by_sd(dist, var)\n elif self.norm_type == Normalize.NormalizeBySpan:\n return self.normalize_by_span(dist, var)\n\n def normalize_by_sd(self, dist, var):\n avg, sd = (dist.mean(), dist.standard_deviation()) if dist.size else (0, 1)\n if sd == 0:\n sd = 1\n if self.center:\n compute_val = Norm(var, avg, 1 / sd)\n else:\n compute_val = Norm(var, 0, 1 / sd)\n\n return ContinuousVariable(\n var.name,\n compute_value=compute_val,\n sparse=var.sparse,\n )\n\n def normalize_by_span(self, dist, var):\n dma, dmi = (dist.max(), dist.min()) if dist.shape[1] else (np.nan, np.nan)\n diff = dma - dmi\n if diff < 1e-15:\n diff = 1\n if self.zero_based:\n return ContinuousVariable(\n var.name,\n compute_value=Norm(var, dmi, 1 / diff),\n sparse=var.sparse)\n else:\n return ContinuousVariable(\n var.name,\n compute_value=Norm(var, (dma + dmi) / 2, 2 / diff),\n sparse=var.sparse)\n", "path": "Orange/preprocess/normalize.py"}]} | 1,460 | 384 |
gh_patches_debug_20258 | rasdani/github-patches | git_diff | kserve__kserve-1877 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix serving.kubeflow.org annotations in docs/samples
I've noticed that some `docs/samples` still use in `metadata.annotations` the `serving.kubeflow.org` instead of `serving.kserve.org`. See this [example](https://github.com/kserve/kserve/blob/master/docs/samples/kafka/s3_secret.yaml).
To save debugging time for others migrating from KFserving, I could create PR that fixes that.
</issue>
<code>
[start of docs/samples/kafka/setup.py]
1 #
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 #
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from setuptools import setup, find_packages
15
16 tests_require = [
17 'pytest',
18 'pytest-tornasync',
19 'mypy'
20 ]
21
22 setup(
23 name='transformer',
24 version='0.1.0',
25 author_email='[email protected]',
26 license='../../LICENSE.txt',
27 url='https://github.com/kserve/kserve/tree/master/docs/samples#deploy-inferenceservice-with-transformer',
28 description='Transformer',
29 long_description=open('README.md').read(),
30 python_requires='>=3.6',
31 packages=find_packages("transformer"),
32 install_requires=[
33 "kfserving>=0.2.1",
34 "argparse>=1.4.0",
35 "requests>=2.22.0",
36 "joblib>=0.13.2",
37 "pandas>=0.24.2",
38 "numpy>=1.16.3",
39 "kubernetes >= 9.0.0",
40 "opencv-python-headless==4.0.0.21",
41 "boto3==1.7.2"
42 ],
43 tests_require=tests_require,
44 extras_require={'test': tests_require}
45 )
46
[end of docs/samples/kafka/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/samples/kafka/setup.py b/docs/samples/kafka/setup.py
--- a/docs/samples/kafka/setup.py
+++ b/docs/samples/kafka/setup.py
@@ -24,21 +24,15 @@
version='0.1.0',
author_email='[email protected]',
license='../../LICENSE.txt',
- url='https://github.com/kserve/kserve/tree/master/docs/samples#deploy-inferenceservice-with-transformer',
+ url='https://github.com/kserve/kserve/tree/master/docs/samples/kafka',
description='Transformer',
long_description=open('README.md').read(),
- python_requires='>=3.6',
+ python_requires='>=3.7',
packages=find_packages("transformer"),
install_requires=[
- "kfserving>=0.2.1",
- "argparse>=1.4.0",
- "requests>=2.22.0",
- "joblib>=0.13.2",
+ "kserve>=0.7.0",
"pandas>=0.24.2",
- "numpy>=1.16.3",
- "kubernetes >= 9.0.0",
"opencv-python-headless==4.0.0.21",
- "boto3==1.7.2"
],
tests_require=tests_require,
extras_require={'test': tests_require}
| {"golden_diff": "diff --git a/docs/samples/kafka/setup.py b/docs/samples/kafka/setup.py\n--- a/docs/samples/kafka/setup.py\n+++ b/docs/samples/kafka/setup.py\n@@ -24,21 +24,15 @@\n version='0.1.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n- url='https://github.com/kserve/kserve/tree/master/docs/samples#deploy-inferenceservice-with-transformer',\n+ url='https://github.com/kserve/kserve/tree/master/docs/samples/kafka',\n description='Transformer',\n long_description=open('README.md').read(),\n- python_requires='>=3.6',\n+ python_requires='>=3.7',\n packages=find_packages(\"transformer\"),\n install_requires=[\n- \"kfserving>=0.2.1\",\n- \"argparse>=1.4.0\",\n- \"requests>=2.22.0\",\n- \"joblib>=0.13.2\",\n+ \"kserve>=0.7.0\",\n \"pandas>=0.24.2\",\n- \"numpy>=1.16.3\",\n- \"kubernetes >= 9.0.0\",\n \"opencv-python-headless==4.0.0.21\",\n- \"boto3==1.7.2\"\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n", "issue": "Fix serving.kubeflow.org annotations in docs/samples\nI've noticed that some `docs/samples` still use in `metadata.annotations` the `serving.kubeflow.org` instead of `serving.kserve.org`. See this [example](https://github.com/kserve/kserve/blob/master/docs/samples/kafka/s3_secret.yaml).\r\nTo save debugging time for others migrating from KFserving, I could create PR that fixes that.\n", "before_files": [{"content": "#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'pytest',\n 'pytest-tornasync',\n 'mypy'\n]\n\nsetup(\n name='transformer',\n version='0.1.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n url='https://github.com/kserve/kserve/tree/master/docs/samples#deploy-inferenceservice-with-transformer',\n description='Transformer',\n long_description=open('README.md').read(),\n python_requires='>=3.6',\n packages=find_packages(\"transformer\"),\n install_requires=[\n \"kfserving>=0.2.1\",\n \"argparse>=1.4.0\",\n \"requests>=2.22.0\",\n \"joblib>=0.13.2\",\n \"pandas>=0.24.2\",\n \"numpy>=1.16.3\",\n \"kubernetes >= 9.0.0\",\n \"opencv-python-headless==4.0.0.21\",\n \"boto3==1.7.2\"\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n)\n", "path": "docs/samples/kafka/setup.py"}]} | 1,095 | 323 |
gh_patches_debug_39161 | rasdani/github-patches | git_diff | PrefectHQ__prefect-5437 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Azure BlobStorageUpload doesn't allow for overwriting blobs
## Current behavior
You get an error if you try to upload the same file name
```
azure.core.exceptions.ResourceExistsError: The specified blob already exists.
RequestId:5bef0cf1-b01e-002e-6
```
## Proposed behavior
The task should take in an `overwrite` argument and pass it to [this line](https://github.com/PrefectHQ/prefect/blob/6cd24b023411980842fa77e6c0ca2ced47eeb83e/src/prefect/tasks/azure/blobstorage.py#L131).
</issue>
<code>
[start of src/prefect/tasks/azure/blobstorage.py]
1 import uuid
2
3 import azure.storage.blob
4
5 from prefect import Task
6 from prefect.client import Secret
7 from prefect.utilities.tasks import defaults_from_attrs
8
9
10 class BlobStorageDownload(Task):
11 """
12 Task for downloading data from an Blob Storage container and returning it as a string.
13 Note that all initialization arguments can optionally be provided or overwritten at runtime.
14
15 Args:
16 - azure_credentials_secret (str, optional): the name of the Prefect Secret
17 that stores your Azure credentials; this Secret must be an Azure connection string
18 - container (str, optional): the name of the Azure Blob Storage to download from
19 - **kwargs (dict, optional): additional keyword arguments to pass to the
20 Task constructor
21 """
22
23 def __init__(
24 self,
25 azure_credentials_secret: str = "AZ_CONNECTION_STRING",
26 container: str = None,
27 **kwargs
28 ) -> None:
29 self.azure_credentials_secret = azure_credentials_secret
30 self.container = container
31 super().__init__(**kwargs)
32
33 @defaults_from_attrs("azure_credentials_secret", "container")
34 def run(
35 self,
36 blob_name: str,
37 azure_credentials_secret: str = "AZ_CONNECTION_STRING",
38 container: str = None,
39 ) -> str:
40 """
41 Task run method.
42
43 Args:
44 - blob_name (str): the name of the blob within this container to retrieve
45 - azure_credentials_secret (str, optional): the name of the Prefect Secret
46 that stores your Azure credentials; this Secret must be an Azure connection string
47 - container (str, optional): the name of the Blob Storage container to download from
48
49 Returns:
50 - str: the contents of this blob_name / container, as a string
51 """
52
53 if container is None:
54 raise ValueError("A container name must be provided.")
55
56 # get Azure credentials
57 azure_credentials = Secret(azure_credentials_secret).get()
58
59 blob_service = azure.storage.blob.BlobServiceClient.from_connection_string(
60 conn_str=azure_credentials
61 )
62
63 client = blob_service.get_blob_client(container=container, blob=blob_name)
64 content_string = client.download_blob().content_as_text()
65
66 return content_string
67
68
69 class BlobStorageUpload(Task):
70 """
71 Task for uploading string data (e.g., a JSON string) to an Azure Blob Storage container.
72 Note that all initialization arguments can optionally be provided or overwritten at runtime.
73
74 Args:
75 - azure_credentials_secret (str, optional): the name of the Prefect Secret
76 that stores your Azure credentials; this Secret must be an Azure connection string
77 - container (str, optional): the name of the Azure Blob Storage to upload to
78 - **kwargs (dict, optional): additional keyword arguments to pass to the
79 Task constructor
80 """
81
82 def __init__(
83 self,
84 azure_credentials_secret: str = "AZ_CONNECTION_STRING",
85 container: str = None,
86 **kwargs
87 ) -> None:
88 self.azure_credentials_secret = azure_credentials_secret
89 self.container = container
90 super().__init__(**kwargs)
91
92 @defaults_from_attrs("azure_credentials_secret", "container")
93 def run(
94 self,
95 data: str,
96 blob_name: str = None,
97 azure_credentials_secret: str = "AZ_CONNECTION_STRING",
98 container: str = None,
99 ) -> str:
100 """
101 Task run method.
102
103 Args:
104 - data (str): the data payload to upload
105 - blob_name (str, optional): the name to upload the data under; if not
106 provided, a random `uuid` will be created
107 - azure_credentials_secret (str, optional): the name of the Prefect Secret
108 that stores your Azure credentials; this Secret must be an Azure connection string
109 - container (str, optional): the name of the Blob Storage container to upload to
110
111 Returns:
112 - str: the name of the blob the data payload was uploaded to
113 """
114
115 if container is None:
116 raise ValueError("A container name must be provided.")
117
118 # get Azure credentials
119 azure_credentials = Secret(azure_credentials_secret).get()
120
121 blob_service = azure.storage.blob.BlobServiceClient.from_connection_string(
122 conn_str=azure_credentials
123 )
124
125 # create key if not provided
126 if blob_name is None:
127 blob_name = str(uuid.uuid4())
128
129 client = blob_service.get_blob_client(container=container, blob=blob_name)
130
131 client.upload_blob(data)
132
133 return blob_name
134
[end of src/prefect/tasks/azure/blobstorage.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/prefect/tasks/azure/blobstorage.py b/src/prefect/tasks/azure/blobstorage.py
--- a/src/prefect/tasks/azure/blobstorage.py
+++ b/src/prefect/tasks/azure/blobstorage.py
@@ -75,6 +75,8 @@
- azure_credentials_secret (str, optional): the name of the Prefect Secret
that stores your Azure credentials; this Secret must be an Azure connection string
- container (str, optional): the name of the Azure Blob Storage to upload to
+ - overwrite (bool, optional): if `True`, an existing blob with the same name will be overwritten.
+ Defaults to `False` and an error will be thrown if the blob already exists.
- **kwargs (dict, optional): additional keyword arguments to pass to the
Task constructor
"""
@@ -83,19 +85,22 @@
self,
azure_credentials_secret: str = "AZ_CONNECTION_STRING",
container: str = None,
+ overwrite: bool = False,
**kwargs
) -> None:
self.azure_credentials_secret = azure_credentials_secret
self.container = container
+ self.overwrite = overwrite
super().__init__(**kwargs)
- @defaults_from_attrs("azure_credentials_secret", "container")
+ @defaults_from_attrs("azure_credentials_secret", "container", "overwrite")
def run(
self,
data: str,
blob_name: str = None,
azure_credentials_secret: str = "AZ_CONNECTION_STRING",
container: str = None,
+ overwrite: bool = False,
) -> str:
"""
Task run method.
@@ -107,6 +112,8 @@
- azure_credentials_secret (str, optional): the name of the Prefect Secret
that stores your Azure credentials; this Secret must be an Azure connection string
- container (str, optional): the name of the Blob Storage container to upload to
+ - overwrite (bool, optional): if `True`, an existing blob with the same name will be overwritten.
+ Defaults to `False` and an error will be thrown if the blob already exists.
Returns:
- str: the name of the blob the data payload was uploaded to
@@ -128,6 +135,6 @@
client = blob_service.get_blob_client(container=container, blob=blob_name)
- client.upload_blob(data)
+ client.upload_blob(data, overwrite=overwrite)
return blob_name
| {"golden_diff": "diff --git a/src/prefect/tasks/azure/blobstorage.py b/src/prefect/tasks/azure/blobstorage.py\n--- a/src/prefect/tasks/azure/blobstorage.py\n+++ b/src/prefect/tasks/azure/blobstorage.py\n@@ -75,6 +75,8 @@\n - azure_credentials_secret (str, optional): the name of the Prefect Secret\n that stores your Azure credentials; this Secret must be an Azure connection string\n - container (str, optional): the name of the Azure Blob Storage to upload to\n+ - overwrite (bool, optional): if `True`, an existing blob with the same name will be overwritten.\n+ Defaults to `False` and an error will be thrown if the blob already exists.\n - **kwargs (dict, optional): additional keyword arguments to pass to the\n Task constructor\n \"\"\"\n@@ -83,19 +85,22 @@\n self,\n azure_credentials_secret: str = \"AZ_CONNECTION_STRING\",\n container: str = None,\n+ overwrite: bool = False,\n **kwargs\n ) -> None:\n self.azure_credentials_secret = azure_credentials_secret\n self.container = container\n+ self.overwrite = overwrite\n super().__init__(**kwargs)\n \n- @defaults_from_attrs(\"azure_credentials_secret\", \"container\")\n+ @defaults_from_attrs(\"azure_credentials_secret\", \"container\", \"overwrite\")\n def run(\n self,\n data: str,\n blob_name: str = None,\n azure_credentials_secret: str = \"AZ_CONNECTION_STRING\",\n container: str = None,\n+ overwrite: bool = False,\n ) -> str:\n \"\"\"\n Task run method.\n@@ -107,6 +112,8 @@\n - azure_credentials_secret (str, optional): the name of the Prefect Secret\n that stores your Azure credentials; this Secret must be an Azure connection string\n - container (str, optional): the name of the Blob Storage container to upload to\n+ - overwrite (bool, optional): if `True`, an existing blob with the same name will be overwritten.\n+ Defaults to `False` and an error will be thrown if the blob already exists.\n \n Returns:\n - str: the name of the blob the data payload was uploaded to\n@@ -128,6 +135,6 @@\n \n client = blob_service.get_blob_client(container=container, blob=blob_name)\n \n- client.upload_blob(data)\n+ client.upload_blob(data, overwrite=overwrite)\n \n return blob_name\n", "issue": "Azure BlobStorageUpload doesn't allow for overwriting blobs\n## Current behavior\r\n\r\nYou get an error if you try to upload the same file name\r\n\r\n```\r\nazure.core.exceptions.ResourceExistsError: The specified blob already exists.\r\nRequestId:5bef0cf1-b01e-002e-6\r\n```\r\n\r\n## Proposed behavior\r\n\r\nThe task should take in an `overwrite` argument and pass it to [this line](https://github.com/PrefectHQ/prefect/blob/6cd24b023411980842fa77e6c0ca2ced47eeb83e/src/prefect/tasks/azure/blobstorage.py#L131).\r\n\r\n\n", "before_files": [{"content": "import uuid\n\nimport azure.storage.blob\n\nfrom prefect import Task\nfrom prefect.client import Secret\nfrom prefect.utilities.tasks import defaults_from_attrs\n\n\nclass BlobStorageDownload(Task):\n \"\"\"\n Task for downloading data from an Blob Storage container and returning it as a string.\n Note that all initialization arguments can optionally be provided or overwritten at runtime.\n\n Args:\n - azure_credentials_secret (str, optional): the name of the Prefect Secret\n that stores your Azure credentials; this Secret must be an Azure connection string\n - container (str, optional): the name of the Azure Blob Storage to download from\n - **kwargs (dict, optional): additional keyword arguments to pass to the\n Task constructor\n \"\"\"\n\n def __init__(\n self,\n azure_credentials_secret: str = \"AZ_CONNECTION_STRING\",\n container: str = None,\n **kwargs\n ) -> None:\n self.azure_credentials_secret = azure_credentials_secret\n self.container = container\n super().__init__(**kwargs)\n\n @defaults_from_attrs(\"azure_credentials_secret\", \"container\")\n def run(\n self,\n blob_name: str,\n azure_credentials_secret: str = \"AZ_CONNECTION_STRING\",\n container: str = None,\n ) -> str:\n \"\"\"\n Task run method.\n\n Args:\n - blob_name (str): the name of the blob within this container to retrieve\n - azure_credentials_secret (str, optional): the name of the Prefect Secret\n that stores your Azure credentials; this Secret must be an Azure connection string\n - container (str, optional): the name of the Blob Storage container to download from\n\n Returns:\n - str: the contents of this blob_name / container, as a string\n \"\"\"\n\n if container is None:\n raise ValueError(\"A container name must be provided.\")\n\n # get Azure credentials\n azure_credentials = Secret(azure_credentials_secret).get()\n\n blob_service = azure.storage.blob.BlobServiceClient.from_connection_string(\n conn_str=azure_credentials\n )\n\n client = blob_service.get_blob_client(container=container, blob=blob_name)\n content_string = client.download_blob().content_as_text()\n\n return content_string\n\n\nclass BlobStorageUpload(Task):\n \"\"\"\n Task for uploading string data (e.g., a JSON string) to an Azure Blob Storage container.\n Note that all initialization arguments can optionally be provided or overwritten at runtime.\n\n Args:\n - azure_credentials_secret (str, optional): the name of the Prefect Secret\n that stores your Azure credentials; this Secret must be an Azure connection string\n - container (str, optional): the name of the Azure Blob Storage to upload to\n - **kwargs (dict, optional): additional keyword arguments to pass to the\n Task constructor\n \"\"\"\n\n def __init__(\n self,\n azure_credentials_secret: str = \"AZ_CONNECTION_STRING\",\n container: str = None,\n **kwargs\n ) -> None:\n self.azure_credentials_secret = azure_credentials_secret\n self.container = container\n super().__init__(**kwargs)\n\n @defaults_from_attrs(\"azure_credentials_secret\", \"container\")\n def run(\n self,\n data: str,\n blob_name: str = None,\n azure_credentials_secret: str = \"AZ_CONNECTION_STRING\",\n container: str = None,\n ) -> str:\n \"\"\"\n Task run method.\n\n Args:\n - data (str): the data payload to upload\n - blob_name (str, optional): the name to upload the data under; if not\n provided, a random `uuid` will be created\n - azure_credentials_secret (str, optional): the name of the Prefect Secret\n that stores your Azure credentials; this Secret must be an Azure connection string\n - container (str, optional): the name of the Blob Storage container to upload to\n\n Returns:\n - str: the name of the blob the data payload was uploaded to\n \"\"\"\n\n if container is None:\n raise ValueError(\"A container name must be provided.\")\n\n # get Azure credentials\n azure_credentials = Secret(azure_credentials_secret).get()\n\n blob_service = azure.storage.blob.BlobServiceClient.from_connection_string(\n conn_str=azure_credentials\n )\n\n # create key if not provided\n if blob_name is None:\n blob_name = str(uuid.uuid4())\n\n client = blob_service.get_blob_client(container=container, blob=blob_name)\n\n client.upload_blob(data)\n\n return blob_name\n", "path": "src/prefect/tasks/azure/blobstorage.py"}]} | 1,953 | 547 |
gh_patches_debug_7233 | rasdani/github-patches | git_diff | graspologic-org__graspologic-431 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
remove * import in simulations
https://github.com/neurodata/graspy/blob/master/graspy/simulations/__init__.py
should not be using * import here
</issue>
<code>
[start of graspy/simulations/__init__.py]
1 # Copyright (c) Microsoft Corporation and contributors.
2 # Licensed under the MIT License.
3
4 from .simulations import *
5 from .simulations_corr import *
6 from .rdpg_corr import *
7
[end of graspy/simulations/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/graspy/simulations/__init__.py b/graspy/simulations/__init__.py
--- a/graspy/simulations/__init__.py
+++ b/graspy/simulations/__init__.py
@@ -1,6 +1,19 @@
# Copyright (c) Microsoft Corporation and contributors.
# Licensed under the MIT License.
-from .simulations import *
-from .simulations_corr import *
-from .rdpg_corr import *
+from .simulations import sample_edges, er_np, er_nm, sbm, rdpg, p_from_latent
+from .simulations_corr import sample_edges_corr, er_corr, sbm_corr
+from .rdpg_corr import rdpg_corr
+
+__all__ = [
+ "sample_edges",
+ "er_np",
+ "er_nm",
+ "sbm",
+ "rdpg",
+ "p_from_latent",
+ "sample_edges_corr",
+ "er_corr",
+ "sbm_corr",
+ "rdpg_corr",
+]
| {"golden_diff": "diff --git a/graspy/simulations/__init__.py b/graspy/simulations/__init__.py\n--- a/graspy/simulations/__init__.py\n+++ b/graspy/simulations/__init__.py\n@@ -1,6 +1,19 @@\n # Copyright (c) Microsoft Corporation and contributors.\n # Licensed under the MIT License.\n \n-from .simulations import *\n-from .simulations_corr import *\n-from .rdpg_corr import *\n+from .simulations import sample_edges, er_np, er_nm, sbm, rdpg, p_from_latent\n+from .simulations_corr import sample_edges_corr, er_corr, sbm_corr\n+from .rdpg_corr import rdpg_corr\n+\n+__all__ = [\n+ \"sample_edges\",\n+ \"er_np\",\n+ \"er_nm\",\n+ \"sbm\",\n+ \"rdpg\",\n+ \"p_from_latent\",\n+ \"sample_edges_corr\",\n+ \"er_corr\",\n+ \"sbm_corr\",\n+ \"rdpg_corr\",\n+]\n", "issue": "remove * import in simulations\nhttps://github.com/neurodata/graspy/blob/master/graspy/simulations/__init__.py\r\n\r\nshould not be using * import here\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation and contributors.\n# Licensed under the MIT License.\n\nfrom .simulations import *\nfrom .simulations_corr import *\nfrom .rdpg_corr import *\n", "path": "graspy/simulations/__init__.py"}]} | 626 | 231 |
gh_patches_debug_62674 | rasdani/github-patches | git_diff | oppia__oppia-1713 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add an OutputContains rule to the CodeRepl interaction.
We've had a request to add an OutputContains rule to the CodeRepl interaction.
The use case is as follows: the student will type in the body of a function, and their code will be checked by calling the function on several inputs and printing the results. We don't want to stop the student from printing their own stuff from the function first, though, hence the idea of checking to see whether a substring of the student's output matches the expected output.
Note that this is a straightforward starter project. The files to modify are extensions/interactions/CodeRepl/CodeRepl.js (see codeReplRulesService) and the corresponding test suite in extensions/interactions/CodeRepl/CodeReplRulesServiceSpec.js.
/cc @anuzis
</issue>
<code>
[start of extensions/rules/code_evaluation.py]
1 # coding: utf-8
2 #
3 # Copyright 2014 The Oppia Authors. All Rights Reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, softwar
12 # distributed under the License is distributed on an "AS-IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 """Rules for CodeEvaluation objects."""
18
19 from extensions.rules import base
20
21
22 class CodeEquals(base.CodeEvaluationRule):
23 description = 'has code equal to {{x|CodeString}}'
24
25
26 class CodeContains(base.CodeEvaluationRule):
27 description = 'has code that contains {{x|CodeString}}'
28
29
30 class CodeDoesNotContain(base.CodeEvaluationRule):
31 description = 'has code that does not contain {{x|CodeString}}'
32
33
34 class OutputEquals(base.CodeEvaluationRule):
35 description = 'has output equal to {{x|CodeString}}'
36
37
38 class ResultsInError(base.CodeEvaluationRule):
39 description = 'results in an error when run'
40
41
42 class ErrorContains(base.CodeEvaluationRule):
43 description = (
44 'has error message that contains {{x|UnicodeString}}')
45
[end of extensions/rules/code_evaluation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/extensions/rules/code_evaluation.py b/extensions/rules/code_evaluation.py
--- a/extensions/rules/code_evaluation.py
+++ b/extensions/rules/code_evaluation.py
@@ -30,6 +30,8 @@
class CodeDoesNotContain(base.CodeEvaluationRule):
description = 'has code that does not contain {{x|CodeString}}'
+class OutputContains(base.CodeEvaluationRule):
+ description = 'has output that contains {{x|CodeString}}'
class OutputEquals(base.CodeEvaluationRule):
description = 'has output equal to {{x|CodeString}}'
| {"golden_diff": "diff --git a/extensions/rules/code_evaluation.py b/extensions/rules/code_evaluation.py\n--- a/extensions/rules/code_evaluation.py\n+++ b/extensions/rules/code_evaluation.py\n@@ -30,6 +30,8 @@\n class CodeDoesNotContain(base.CodeEvaluationRule):\n description = 'has code that does not contain {{x|CodeString}}'\n \n+class OutputContains(base.CodeEvaluationRule):\n+ description = 'has output that contains {{x|CodeString}}'\n \n class OutputEquals(base.CodeEvaluationRule):\n description = 'has output equal to {{x|CodeString}}'\n", "issue": "Add an OutputContains rule to the CodeRepl interaction.\nWe've had a request to add an OutputContains rule to the CodeRepl interaction.\n\nThe use case is as follows: the student will type in the body of a function, and their code will be checked by calling the function on several inputs and printing the results. We don't want to stop the student from printing their own stuff from the function first, though, hence the idea of checking to see whether a substring of the student's output matches the expected output.\n\nNote that this is a straightforward starter project. The files to modify are extensions/interactions/CodeRepl/CodeRepl.js (see codeReplRulesService) and the corresponding test suite in extensions/interactions/CodeRepl/CodeReplRulesServiceSpec.js.\n\n/cc @anuzis \n\n", "before_files": [{"content": "# coding: utf-8\n#\n# Copyright 2014 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, softwar\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Rules for CodeEvaluation objects.\"\"\"\n\nfrom extensions.rules import base\n\n\nclass CodeEquals(base.CodeEvaluationRule):\n description = 'has code equal to {{x|CodeString}}'\n\n\nclass CodeContains(base.CodeEvaluationRule):\n description = 'has code that contains {{x|CodeString}}'\n\n\nclass CodeDoesNotContain(base.CodeEvaluationRule):\n description = 'has code that does not contain {{x|CodeString}}'\n\n\nclass OutputEquals(base.CodeEvaluationRule):\n description = 'has output equal to {{x|CodeString}}'\n\n\nclass ResultsInError(base.CodeEvaluationRule):\n description = 'results in an error when run'\n\n\nclass ErrorContains(base.CodeEvaluationRule):\n description = (\n 'has error message that contains {{x|UnicodeString}}')\n", "path": "extensions/rules/code_evaluation.py"}]} | 1,101 | 122 |
gh_patches_debug_22767 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-4224 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pizza Hut Spider returns some closed outlets
It looks like the GB Pizza Hut spider "pizza_hut_gb" is returning a number of outlets that have closed. These are evident when the website either redirects to https://www.pizzahut.co.uk/restaurants/find or https://www.pizzahut.co.uk/restaurants/error/filenotfound . It seems that Pizza Hut are leaving up the https://www.pizzahut.co.uk/huts/uk-2/... web page after the outlet has closed, presumably for SEO reasons. These pages still contain the old location and web address, which the spider then picks up.
Examples include https://www.pizzahut.co.uk/huts/uk-2/437-ayr/ and https://www.pizzahut.co.uk/huts/uk-2/390-barrow/ .
I think these closed outlets can probably be removed from the dataset returned by looking at the openingHours LD field on the /huts/uk-2/ pages. The closed outlets seem to always have "openingHours":[]. The open branches have some sensible content there.
</issue>
<code>
[start of locations/spiders/pizza_hut_gb.py]
1 from scrapy.spiders import SitemapSpider
2
3 from locations.spiders.vapestore_gb import clean_address
4 from locations.structured_data_spider import StructuredDataSpider
5
6
7 class PizzaHutGB(SitemapSpider, StructuredDataSpider):
8 name = "pizza_hut_gb"
9 item_attributes = {"brand": "Pizza Hut", "brand_wikidata": "Q191615"}
10 sitemap_urls = ["https://www.pizzahut.co.uk/sitemap.xml"]
11 sitemap_rules = [
12 (r"https:\/\/www\.pizzahut\.co\.uk\/huts\/[-\w]+\/([-.\w]+)\/$", "parse_sd")
13 ]
14 wanted_types = ["FastFoodRestaurant"]
15
16 def inspect_item(self, item, response):
17 item["street_address"] = clean_address(item["street_address"])
18
19 if item["website"].startswith("https://www.pizzahut.co.uk/huts/"):
20 item["brand"] = "Pizza Hut Delivery"
21 item["brand_wikidata"] = "Q107293079"
22
23 yield item
24
[end of locations/spiders/pizza_hut_gb.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/pizza_hut_gb.py b/locations/spiders/pizza_hut_gb.py
--- a/locations/spiders/pizza_hut_gb.py
+++ b/locations/spiders/pizza_hut_gb.py
@@ -7,17 +7,19 @@
class PizzaHutGB(SitemapSpider, StructuredDataSpider):
name = "pizza_hut_gb"
item_attributes = {"brand": "Pizza Hut", "brand_wikidata": "Q191615"}
+ PIZZA_HUT_DELIVERY = {"brand": "Pizza Hut Delivery", "brand_wikidata": "Q107293079"}
sitemap_urls = ["https://www.pizzahut.co.uk/sitemap.xml"]
sitemap_rules = [
(r"https:\/\/www\.pizzahut\.co\.uk\/huts\/[-\w]+\/([-.\w]+)\/$", "parse_sd")
]
- wanted_types = ["FastFoodRestaurant"]
- def inspect_item(self, item, response):
+ def post_process_item(self, item, response, ld_data, **kwargs):
item["street_address"] = clean_address(item["street_address"])
if item["website"].startswith("https://www.pizzahut.co.uk/huts/"):
- item["brand"] = "Pizza Hut Delivery"
- item["brand_wikidata"] = "Q107293079"
+ item.update(self.PIZZA_HUT_DELIVERY)
+
+ if not item["opening_hours"]:
+ return
yield item
| {"golden_diff": "diff --git a/locations/spiders/pizza_hut_gb.py b/locations/spiders/pizza_hut_gb.py\n--- a/locations/spiders/pizza_hut_gb.py\n+++ b/locations/spiders/pizza_hut_gb.py\n@@ -7,17 +7,19 @@\n class PizzaHutGB(SitemapSpider, StructuredDataSpider):\n name = \"pizza_hut_gb\"\n item_attributes = {\"brand\": \"Pizza Hut\", \"brand_wikidata\": \"Q191615\"}\n+ PIZZA_HUT_DELIVERY = {\"brand\": \"Pizza Hut Delivery\", \"brand_wikidata\": \"Q107293079\"}\n sitemap_urls = [\"https://www.pizzahut.co.uk/sitemap.xml\"]\n sitemap_rules = [\n (r\"https:\\/\\/www\\.pizzahut\\.co\\.uk\\/huts\\/[-\\w]+\\/([-.\\w]+)\\/$\", \"parse_sd\")\n ]\n- wanted_types = [\"FastFoodRestaurant\"]\n \n- def inspect_item(self, item, response):\n+ def post_process_item(self, item, response, ld_data, **kwargs):\n item[\"street_address\"] = clean_address(item[\"street_address\"])\n \n if item[\"website\"].startswith(\"https://www.pizzahut.co.uk/huts/\"):\n- item[\"brand\"] = \"Pizza Hut Delivery\"\n- item[\"brand_wikidata\"] = \"Q107293079\"\n+ item.update(self.PIZZA_HUT_DELIVERY)\n+\n+ if not item[\"opening_hours\"]:\n+ return\n \n yield item\n", "issue": "Pizza Hut Spider returns some closed outlets\nIt looks like the GB Pizza Hut spider \"pizza_hut_gb\" is returning a number of outlets that have closed. These are evident when the website either redirects to https://www.pizzahut.co.uk/restaurants/find or https://www.pizzahut.co.uk/restaurants/error/filenotfound . It seems that Pizza Hut are leaving up the https://www.pizzahut.co.uk/huts/uk-2/... web page after the outlet has closed, presumably for SEO reasons. These pages still contain the old location and web address, which the spider then picks up.\r\n\r\nExamples include https://www.pizzahut.co.uk/huts/uk-2/437-ayr/ and https://www.pizzahut.co.uk/huts/uk-2/390-barrow/ .\r\n\r\nI think these closed outlets can probably be removed from the dataset returned by looking at the openingHours LD field on the /huts/uk-2/ pages. The closed outlets seem to always have \"openingHours\":[]. The open branches have some sensible content there.\n", "before_files": [{"content": "from scrapy.spiders import SitemapSpider\n\nfrom locations.spiders.vapestore_gb import clean_address\nfrom locations.structured_data_spider import StructuredDataSpider\n\n\nclass PizzaHutGB(SitemapSpider, StructuredDataSpider):\n name = \"pizza_hut_gb\"\n item_attributes = {\"brand\": \"Pizza Hut\", \"brand_wikidata\": \"Q191615\"}\n sitemap_urls = [\"https://www.pizzahut.co.uk/sitemap.xml\"]\n sitemap_rules = [\n (r\"https:\\/\\/www\\.pizzahut\\.co\\.uk\\/huts\\/[-\\w]+\\/([-.\\w]+)\\/$\", \"parse_sd\")\n ]\n wanted_types = [\"FastFoodRestaurant\"]\n\n def inspect_item(self, item, response):\n item[\"street_address\"] = clean_address(item[\"street_address\"])\n\n if item[\"website\"].startswith(\"https://www.pizzahut.co.uk/huts/\"):\n item[\"brand\"] = \"Pizza Hut Delivery\"\n item[\"brand_wikidata\"] = \"Q107293079\"\n\n yield item\n", "path": "locations/spiders/pizza_hut_gb.py"}]} | 1,056 | 353 |
gh_patches_debug_15192 | rasdani/github-patches | git_diff | SeldonIO__MLServer-339 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
mlserver --version fails (0.5.0)
```
mlserver --version
Traceback (most recent call last):
File "/home/clive/anaconda3/envs/mlserver/bin/mlserver", line 8, in <module>
sys.exit(main())
File "/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/mlserver/cli/main.py", line 45, in main
root()
File "/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py", line 781, in main
with self.make_context(prog_name, args, **extra) as ctx:
File "/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py", line 700, in make_context
self.parse_args(ctx, args)
File "/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py", line 1212, in parse_args
rest = Command.parse_args(self, ctx, args)
File "/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py", line 1048, in parse_args
value, args = param.handle_parse_result(ctx, opts, args)
File "/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py", line 1630, in handle_parse_result
value = invoke_param_callback(self.callback, ctx, self, value)
File "/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py", line 123, in invoke_param_callback
return callback(ctx, param, value)
File "/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/decorators.py", line 295, in callback
raise RuntimeError("Could not determine version")
RuntimeError: Could not determine version
(mlserver) /home/clive $ pip freeze | grep mlserver
mlserver==0.5.0
```
</issue>
<code>
[start of setup.py]
1 import os
2
3 from typing import Dict
4 from setuptools import setup, find_packages
5
6 ROOT_PATH = os.path.dirname(__file__)
7 PKG_NAME = "mlserver"
8 PKG_PATH = os.path.join(ROOT_PATH, PKG_NAME)
9
10
11 def _load_version() -> str:
12 version = ""
13 version_path = os.path.join(PKG_PATH, "version.py")
14 with open(version_path) as fp:
15 version_module: Dict[str, str] = {}
16 exec(fp.read(), version_module)
17 version = version_module["__version__"]
18
19 return version
20
21
22 def _load_description() -> str:
23 readme_path = os.path.join(ROOT_PATH, "README.md")
24 with open(readme_path) as fp:
25 return fp.read()
26
27
28 setup(
29 name=PKG_NAME,
30 version=_load_version(),
31 url="https://github.com/SeldonIO/MLServer.git",
32 author="Seldon Technologies Ltd.",
33 author_email="[email protected]",
34 description="ML server",
35 packages=find_packages(exclude=["tests", "tests.*"]),
36 install_requires=[
37 "grpcio",
38 "protobuf",
39 # We pin version of fastapi
40 # check https://github.com/SeldonIO/MLServer/issues/340
41 "fastapi==0.68.2",
42 "uvicorn",
43 "click",
44 "numpy",
45 "pandas",
46 ],
47 extras_require={"all": ["orjson"]},
48 entry_points={"console_scripts": ["mlserver=mlserver.cli:main"]},
49 long_description=_load_description(),
50 long_description_content_type="text/markdown",
51 license="Apache 2.0",
52 )
53
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -34,15 +34,16 @@
description="ML server",
packages=find_packages(exclude=["tests", "tests.*"]),
install_requires=[
- "grpcio",
- "protobuf",
+ "click",
# We pin version of fastapi
# check https://github.com/SeldonIO/MLServer/issues/340
"fastapi==0.68.2",
- "uvicorn",
- "click",
+ "grpcio",
+ "importlib-metadata;python_version<'3.8'",
"numpy",
"pandas",
+ "protobuf",
+ "uvicorn",
],
extras_require={"all": ["orjson"]},
entry_points={"console_scripts": ["mlserver=mlserver.cli:main"]},
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -34,15 +34,16 @@\n description=\"ML server\",\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n install_requires=[\n- \"grpcio\",\n- \"protobuf\",\n+ \"click\",\n # We pin version of fastapi\n # check https://github.com/SeldonIO/MLServer/issues/340\n \"fastapi==0.68.2\",\n- \"uvicorn\",\n- \"click\",\n+ \"grpcio\",\n+ \"importlib-metadata;python_version<'3.8'\",\n \"numpy\",\n \"pandas\",\n+ \"protobuf\",\n+ \"uvicorn\",\n ],\n extras_require={\"all\": [\"orjson\"]},\n entry_points={\"console_scripts\": [\"mlserver=mlserver.cli:main\"]},\n", "issue": "mlserver --version fails (0.5.0)\n```\r\nmlserver --version\r\nTraceback (most recent call last):\r\n File \"/home/clive/anaconda3/envs/mlserver/bin/mlserver\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/mlserver/cli/main.py\", line 45, in main\r\n root()\r\n File \"/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py\", line 781, in main\r\n with self.make_context(prog_name, args, **extra) as ctx:\r\n File \"/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py\", line 700, in make_context\r\n self.parse_args(ctx, args)\r\n File \"/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py\", line 1212, in parse_args\r\n rest = Command.parse_args(self, ctx, args)\r\n File \"/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py\", line 1048, in parse_args\r\n value, args = param.handle_parse_result(ctx, opts, args)\r\n File \"/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py\", line 1630, in handle_parse_result\r\n value = invoke_param_callback(self.callback, ctx, self, value)\r\n File \"/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/core.py\", line 123, in invoke_param_callback\r\n return callback(ctx, param, value)\r\n File \"/home/clive/anaconda3/envs/mlserver/lib/python3.8/site-packages/click/decorators.py\", line 295, in callback\r\n raise RuntimeError(\"Could not determine version\")\r\nRuntimeError: Could not determine version\r\n(mlserver) /home/clive $ pip freeze | grep mlserver\r\nmlserver==0.5.0\r\n```\n", "before_files": [{"content": "import os\n\nfrom typing import Dict\nfrom setuptools import setup, find_packages\n\nROOT_PATH = os.path.dirname(__file__)\nPKG_NAME = \"mlserver\"\nPKG_PATH = os.path.join(ROOT_PATH, PKG_NAME)\n\n\ndef _load_version() -> str:\n version = \"\"\n version_path = os.path.join(PKG_PATH, \"version.py\")\n with open(version_path) as fp:\n version_module: Dict[str, str] = {}\n exec(fp.read(), version_module)\n version = version_module[\"__version__\"]\n\n return version\n\n\ndef _load_description() -> str:\n readme_path = os.path.join(ROOT_PATH, \"README.md\")\n with open(readme_path) as fp:\n return fp.read()\n\n\nsetup(\n name=PKG_NAME,\n version=_load_version(),\n url=\"https://github.com/SeldonIO/MLServer.git\",\n author=\"Seldon Technologies Ltd.\",\n author_email=\"[email protected]\",\n description=\"ML server\",\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n install_requires=[\n \"grpcio\",\n \"protobuf\",\n # We pin version of fastapi\n # check https://github.com/SeldonIO/MLServer/issues/340\n \"fastapi==0.68.2\",\n \"uvicorn\",\n \"click\",\n \"numpy\",\n \"pandas\",\n ],\n extras_require={\"all\": [\"orjson\"]},\n entry_points={\"console_scripts\": [\"mlserver=mlserver.cli:main\"]},\n long_description=_load_description(),\n long_description_content_type=\"text/markdown\",\n license=\"Apache 2.0\",\n)\n", "path": "setup.py"}]} | 1,500 | 196 |
gh_patches_debug_1887 | rasdani/github-patches | git_diff | spotify__luigi-2679 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Is there a reason python-dateutil is pinned to v2.7.5?
In this [commit](https://github.com/spotify/luigi/commit/ca0aa9afedecda539339e51974ef38cecf180d4b), I can see that python-dateutil has been pinned to version 2.7.5 - is this strictly necessary? Version 2.8.0 was released a couple of weeks ago and It's causing `ContextualVersionConflict` errors for us.
</issue>
<code>
[start of setup.py]
1 # Copyright (c) 2012 Spotify AB
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may not
4 # use this file except in compliance with the License. You may obtain a copy of
5 # the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations under
13 # the License.
14
15 import os
16 import sys
17
18 from setuptools import setup
19
20
21 def get_static_files(path):
22 return [os.path.join(dirpath.replace("luigi/", ""), ext)
23 for (dirpath, dirnames, filenames) in os.walk(path)
24 for ext in ["*.html", "*.js", "*.css", "*.png",
25 "*.eot", "*.svg", "*.ttf", "*.woff", "*.woff2"]]
26
27
28 luigi_package_data = sum(map(get_static_files, ["luigi/static", "luigi/templates"]), [])
29
30 readme_note = """\
31 .. note::
32
33 For the latest source, discussion, etc, please visit the
34 `GitHub repository <https://github.com/spotify/luigi>`_\n\n
35 """
36
37 with open('README.rst') as fobj:
38 long_description = readme_note + fobj.read()
39
40 install_requires = [
41 'tornado>=4.0,<5',
42 # https://pagure.io/python-daemon/issue/18
43 'python-daemon<2.2.0',
44 'python-dateutil==2.7.5',
45 ]
46
47 # Note: To support older versions of setuptools, we're explicitly not
48 # using conditional syntax (i.e. 'enum34>1.1.0;python_version<"3.4"').
49 # This syntax is a problem for setuptools as recent as `20.1.1`,
50 # published Feb 16, 2016.
51 if sys.version_info[:2] < (3, 4):
52 install_requires.append('enum34>1.1.0')
53
54 if os.environ.get('READTHEDOCS', None) == 'True':
55 # So that we can build documentation for luigi.db_task_history and luigi.contrib.sqla
56 install_requires.append('sqlalchemy')
57 # readthedocs don't like python-daemon, see #1342
58 install_requires.remove('python-daemon<2.2.0')
59 install_requires.append('sphinx>=1.4.4') # Value mirrored in doc/conf.py
60
61 setup(
62 name='luigi',
63 version='2.8.3',
64 description='Workflow mgmgt + task scheduling + dependency resolution',
65 long_description=long_description,
66 author='The Luigi Authors',
67 url='https://github.com/spotify/luigi',
68 license='Apache License 2.0',
69 packages=[
70 'luigi',
71 'luigi.configuration',
72 'luigi.contrib',
73 'luigi.contrib.hdfs',
74 'luigi.tools'
75 ],
76 package_data={
77 'luigi': luigi_package_data
78 },
79 entry_points={
80 'console_scripts': [
81 'luigi = luigi.cmdline:luigi_run',
82 'luigid = luigi.cmdline:luigid',
83 'luigi-grep = luigi.tools.luigi_grep:main',
84 'luigi-deps = luigi.tools.deps:main',
85 'luigi-deps-tree = luigi.tools.deps_tree:main'
86 ]
87 },
88 install_requires=install_requires,
89 extras_require={
90 'toml': ['toml<2.0.0'],
91 },
92 classifiers=[
93 'Development Status :: 5 - Production/Stable',
94 'Environment :: Console',
95 'Environment :: Web Environment',
96 'Intended Audience :: Developers',
97 'Intended Audience :: System Administrators',
98 'License :: OSI Approved :: Apache Software License',
99 'Programming Language :: Python :: 2.7',
100 'Programming Language :: Python :: 3.3',
101 'Programming Language :: Python :: 3.4',
102 'Programming Language :: Python :: 3.5',
103 'Programming Language :: Python :: 3.6',
104 'Programming Language :: Python :: 3.7',
105 'Topic :: System :: Monitoring',
106 ],
107 )
108
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -41,7 +41,7 @@
'tornado>=4.0,<5',
# https://pagure.io/python-daemon/issue/18
'python-daemon<2.2.0',
- 'python-dateutil==2.7.5',
+ 'python-dateutil>=2.7.5,<3',
]
# Note: To support older versions of setuptools, we're explicitly not
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -41,7 +41,7 @@\n 'tornado>=4.0,<5',\n # https://pagure.io/python-daemon/issue/18\n 'python-daemon<2.2.0',\n- 'python-dateutil==2.7.5',\n+ 'python-dateutil>=2.7.5,<3',\n ]\n \n # Note: To support older versions of setuptools, we're explicitly not\n", "issue": "Is there a reason python-dateutil is pinned to v2.7.5?\nIn this [commit](https://github.com/spotify/luigi/commit/ca0aa9afedecda539339e51974ef38cecf180d4b), I can see that python-dateutil has been pinned to version 2.7.5 - is this strictly necessary? Version 2.8.0 was released a couple of weeks ago and It's causing `ContextualVersionConflict` errors for us.\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) 2012 Spotify AB\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may not\n# use this file except in compliance with the License. You may obtain a copy of\n# the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations under\n# the License.\n\nimport os\nimport sys\n\nfrom setuptools import setup\n\n\ndef get_static_files(path):\n return [os.path.join(dirpath.replace(\"luigi/\", \"\"), ext)\n for (dirpath, dirnames, filenames) in os.walk(path)\n for ext in [\"*.html\", \"*.js\", \"*.css\", \"*.png\",\n \"*.eot\", \"*.svg\", \"*.ttf\", \"*.woff\", \"*.woff2\"]]\n\n\nluigi_package_data = sum(map(get_static_files, [\"luigi/static\", \"luigi/templates\"]), [])\n\nreadme_note = \"\"\"\\\n.. note::\n\n For the latest source, discussion, etc, please visit the\n `GitHub repository <https://github.com/spotify/luigi>`_\\n\\n\n\"\"\"\n\nwith open('README.rst') as fobj:\n long_description = readme_note + fobj.read()\n\ninstall_requires = [\n 'tornado>=4.0,<5',\n # https://pagure.io/python-daemon/issue/18\n 'python-daemon<2.2.0',\n 'python-dateutil==2.7.5',\n]\n\n# Note: To support older versions of setuptools, we're explicitly not\n# using conditional syntax (i.e. 'enum34>1.1.0;python_version<\"3.4\"').\n# This syntax is a problem for setuptools as recent as `20.1.1`,\n# published Feb 16, 2016.\nif sys.version_info[:2] < (3, 4):\n install_requires.append('enum34>1.1.0')\n\nif os.environ.get('READTHEDOCS', None) == 'True':\n # So that we can build documentation for luigi.db_task_history and luigi.contrib.sqla\n install_requires.append('sqlalchemy')\n # readthedocs don't like python-daemon, see #1342\n install_requires.remove('python-daemon<2.2.0')\n install_requires.append('sphinx>=1.4.4') # Value mirrored in doc/conf.py\n\nsetup(\n name='luigi',\n version='2.8.3',\n description='Workflow mgmgt + task scheduling + dependency resolution',\n long_description=long_description,\n author='The Luigi Authors',\n url='https://github.com/spotify/luigi',\n license='Apache License 2.0',\n packages=[\n 'luigi',\n 'luigi.configuration',\n 'luigi.contrib',\n 'luigi.contrib.hdfs',\n 'luigi.tools'\n ],\n package_data={\n 'luigi': luigi_package_data\n },\n entry_points={\n 'console_scripts': [\n 'luigi = luigi.cmdline:luigi_run',\n 'luigid = luigi.cmdline:luigid',\n 'luigi-grep = luigi.tools.luigi_grep:main',\n 'luigi-deps = luigi.tools.deps:main',\n 'luigi-deps-tree = luigi.tools.deps_tree:main'\n ]\n },\n install_requires=install_requires,\n extras_require={\n 'toml': ['toml<2.0.0'],\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: System :: Monitoring',\n ],\n)\n", "path": "setup.py"}]} | 1,829 | 117 |
gh_patches_debug_19906 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-4246 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
v4 --replacements vs v5 --modify-headers
I'm trying to replace the `User-Agent` request header if it contains a certain string.
This works with "mitmproxy-4.0.4-linux":
```
./mitmproxy --replacements ":~hq User-Agent:Mozilla(.+):CUSTOMAGENT"
```
With "mitmproxy-5.2-linux", this at least replaces the `User-Agent`, but is missing my "certain string condition":
```
./mitmproxy --modify-headers "|~hq .+|User-Agent|CUSTOMAGENT"
```
How do I add my `Mozilla` condition in v5?
None of these work:
```
./mitmproxy --modify-headers "|~hq ^(.*?)Mozilla(.*?)$|User-Agent|CUSTOMAGENT"
./mitmproxy --modify-headers "/~hq .*?Mozilla.*?/User-Agent/CUSTOMAGENT"
./mitmproxy --modify-headers "|~hq Mozilla|User-Agent|CUSTOMAGENT"
./mitmproxy --modify-headers "|~hq User-Agent: Mozilla|User-Agent|CUSTOMAGENT"
./mitmproxy --modify-headers "|~hq \"^(.*?)Mozilla(.*?)$\"|User-Agent|CUSTOMAGENT"
```
I've been trying for hours, and I feel like I've tried every variation under the sun. There's a very small chance it's a bug, but most likely I'm just doing it wrong. If it matters, this system is Ubuntu 16.04.
</issue>
<code>
[start of mitmproxy/addons/modifyheaders.py]
1 import re
2 import typing
3 from pathlib import Path
4
5 from mitmproxy import ctx, exceptions, flowfilter, http
6 from mitmproxy.net.http import Headers
7 from mitmproxy.utils import strutils
8 from mitmproxy.utils.spec import parse_spec
9
10
11 class ModifySpec(typing.NamedTuple):
12 matches: flowfilter.TFilter
13 subject: bytes
14 replacement_str: str
15
16 def read_replacement(self) -> bytes:
17 """
18 Process the replacement str. This usually just involves converting it to bytes.
19 However, if it starts with `@`, we interpret the rest as a file path to read from.
20
21 Raises:
22 - IOError if the file cannot be read.
23 """
24 if self.replacement_str.startswith("@"):
25 return Path(self.replacement_str[1:]).expanduser().read_bytes()
26 else:
27 # We could cache this at some point, but unlikely to be a problem.
28 return strutils.escaped_str_to_bytes(self.replacement_str)
29
30
31 def parse_modify_spec(option: str, subject_is_regex: bool) -> ModifySpec:
32 flow_filter, subject_str, replacement = parse_spec(option)
33
34 subject = strutils.escaped_str_to_bytes(subject_str)
35 if subject_is_regex:
36 try:
37 re.compile(subject)
38 except re.error as e:
39 raise ValueError(f"Invalid regular expression {subject!r} ({e})")
40
41 spec = ModifySpec(flow_filter, subject, replacement)
42
43 try:
44 spec.read_replacement()
45 except OSError as e:
46 raise ValueError(f"Invalid file path: {replacement[1:]} ({e})")
47
48 return spec
49
50
51 class ModifyHeaders:
52 def __init__(self):
53 self.replacements: typing.List[ModifySpec] = []
54
55 def load(self, loader):
56 loader.add_option(
57 "modify_headers", typing.Sequence[str], [],
58 """
59 Header modify pattern of the form "[/flow-filter]/header-name/[@]header-value", where the
60 separator can be any character. The @ allows to provide a file path that is used to read
61 the header value string. An empty header-value removes existing header-name headers.
62 """
63 )
64
65 def configure(self, updated):
66 if "modify_headers" in updated:
67 self.replacements = []
68 for option in ctx.options.modify_headers:
69 try:
70 spec = parse_modify_spec(option, False)
71 except ValueError as e:
72 raise exceptions.OptionsError(f"Cannot parse modify_headers option {option}: {e}") from e
73 self.replacements.append(spec)
74
75 def request(self, flow):
76 if flow.response or flow.error or flow.reply.state == "taken":
77 return
78 self.run(flow, flow.request.headers)
79
80 def response(self, flow):
81 if flow.error or flow.reply.state == "taken":
82 return
83 self.run(flow, flow.response.headers)
84
85 def run(self, flow: http.HTTPFlow, hdrs: Headers) -> None:
86 # unset all specified headers
87 for spec in self.replacements:
88 if spec.matches(flow):
89 hdrs.pop(spec.subject, None)
90
91 # set all specified headers if the replacement string is not empty
92 for spec in self.replacements:
93 if spec.matches(flow):
94 try:
95 replacement = spec.read_replacement()
96 except OSError as e:
97 ctx.log.warn(f"Could not read replacement file: {e}")
98 continue
99 else:
100 if replacement:
101 hdrs.add(spec.subject, replacement)
102
[end of mitmproxy/addons/modifyheaders.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mitmproxy/addons/modifyheaders.py b/mitmproxy/addons/modifyheaders.py
--- a/mitmproxy/addons/modifyheaders.py
+++ b/mitmproxy/addons/modifyheaders.py
@@ -83,14 +83,21 @@
self.run(flow, flow.response.headers)
def run(self, flow: http.HTTPFlow, hdrs: Headers) -> None:
- # unset all specified headers
+ matches = []
+
+ # first check all the filters against the original, unmodified flow
for spec in self.replacements:
- if spec.matches(flow):
+ matches.append(spec.matches(flow))
+
+ # unset all specified headers
+ for i, spec in enumerate(self.replacements):
+ if matches[i]:
hdrs.pop(spec.subject, None)
# set all specified headers if the replacement string is not empty
- for spec in self.replacements:
- if spec.matches(flow):
+
+ for i, spec in enumerate(self.replacements):
+ if matches[i]:
try:
replacement = spec.read_replacement()
except OSError as e:
| {"golden_diff": "diff --git a/mitmproxy/addons/modifyheaders.py b/mitmproxy/addons/modifyheaders.py\n--- a/mitmproxy/addons/modifyheaders.py\n+++ b/mitmproxy/addons/modifyheaders.py\n@@ -83,14 +83,21 @@\n self.run(flow, flow.response.headers)\n \n def run(self, flow: http.HTTPFlow, hdrs: Headers) -> None:\n- # unset all specified headers\n+ matches = []\n+\n+ # first check all the filters against the original, unmodified flow\n for spec in self.replacements:\n- if spec.matches(flow):\n+ matches.append(spec.matches(flow))\n+\n+ # unset all specified headers\n+ for i, spec in enumerate(self.replacements):\n+ if matches[i]:\n hdrs.pop(spec.subject, None)\n \n # set all specified headers if the replacement string is not empty\n- for spec in self.replacements:\n- if spec.matches(flow):\n+\n+ for i, spec in enumerate(self.replacements):\n+ if matches[i]:\n try:\n replacement = spec.read_replacement()\n except OSError as e:\n", "issue": "v4 --replacements vs v5 --modify-headers\nI'm trying to replace the `User-Agent` request header if it contains a certain string.\r\n\r\nThis works with \"mitmproxy-4.0.4-linux\":\r\n\r\n```\r\n./mitmproxy --replacements \":~hq User-Agent:Mozilla(.+):CUSTOMAGENT\"\r\n```\r\n\r\nWith \"mitmproxy-5.2-linux\", this at least replaces the `User-Agent`, but is missing my \"certain string condition\":\r\n\r\n```\r\n./mitmproxy --modify-headers \"|~hq .+|User-Agent|CUSTOMAGENT\"\r\n```\r\n\r\nHow do I add my `Mozilla` condition in v5?\r\n\r\nNone of these work:\r\n\r\n```\r\n./mitmproxy --modify-headers \"|~hq ^(.*?)Mozilla(.*?)$|User-Agent|CUSTOMAGENT\"\r\n\r\n./mitmproxy --modify-headers \"/~hq .*?Mozilla.*?/User-Agent/CUSTOMAGENT\"\r\n\r\n./mitmproxy --modify-headers \"|~hq Mozilla|User-Agent|CUSTOMAGENT\"\r\n\r\n./mitmproxy --modify-headers \"|~hq User-Agent: Mozilla|User-Agent|CUSTOMAGENT\"\r\n\r\n./mitmproxy --modify-headers \"|~hq \\\"^(.*?)Mozilla(.*?)$\\\"|User-Agent|CUSTOMAGENT\"\r\n```\r\n\r\nI've been trying for hours, and I feel like I've tried every variation under the sun. There's a very small chance it's a bug, but most likely I'm just doing it wrong. If it matters, this system is Ubuntu 16.04.\r\n\r\n\r\n\n", "before_files": [{"content": "import re\nimport typing\nfrom pathlib import Path\n\nfrom mitmproxy import ctx, exceptions, flowfilter, http\nfrom mitmproxy.net.http import Headers\nfrom mitmproxy.utils import strutils\nfrom mitmproxy.utils.spec import parse_spec\n\n\nclass ModifySpec(typing.NamedTuple):\n matches: flowfilter.TFilter\n subject: bytes\n replacement_str: str\n\n def read_replacement(self) -> bytes:\n \"\"\"\n Process the replacement str. This usually just involves converting it to bytes.\n However, if it starts with `@`, we interpret the rest as a file path to read from.\n\n Raises:\n - IOError if the file cannot be read.\n \"\"\"\n if self.replacement_str.startswith(\"@\"):\n return Path(self.replacement_str[1:]).expanduser().read_bytes()\n else:\n # We could cache this at some point, but unlikely to be a problem.\n return strutils.escaped_str_to_bytes(self.replacement_str)\n\n\ndef parse_modify_spec(option: str, subject_is_regex: bool) -> ModifySpec:\n flow_filter, subject_str, replacement = parse_spec(option)\n\n subject = strutils.escaped_str_to_bytes(subject_str)\n if subject_is_regex:\n try:\n re.compile(subject)\n except re.error as e:\n raise ValueError(f\"Invalid regular expression {subject!r} ({e})\")\n\n spec = ModifySpec(flow_filter, subject, replacement)\n\n try:\n spec.read_replacement()\n except OSError as e:\n raise ValueError(f\"Invalid file path: {replacement[1:]} ({e})\")\n\n return spec\n\n\nclass ModifyHeaders:\n def __init__(self):\n self.replacements: typing.List[ModifySpec] = []\n\n def load(self, loader):\n loader.add_option(\n \"modify_headers\", typing.Sequence[str], [],\n \"\"\"\n Header modify pattern of the form \"[/flow-filter]/header-name/[@]header-value\", where the\n separator can be any character. The @ allows to provide a file path that is used to read\n the header value string. An empty header-value removes existing header-name headers.\n \"\"\"\n )\n\n def configure(self, updated):\n if \"modify_headers\" in updated:\n self.replacements = []\n for option in ctx.options.modify_headers:\n try:\n spec = parse_modify_spec(option, False)\n except ValueError as e:\n raise exceptions.OptionsError(f\"Cannot parse modify_headers option {option}: {e}\") from e\n self.replacements.append(spec)\n\n def request(self, flow):\n if flow.response or flow.error or flow.reply.state == \"taken\":\n return\n self.run(flow, flow.request.headers)\n\n def response(self, flow):\n if flow.error or flow.reply.state == \"taken\":\n return\n self.run(flow, flow.response.headers)\n\n def run(self, flow: http.HTTPFlow, hdrs: Headers) -> None:\n # unset all specified headers\n for spec in self.replacements:\n if spec.matches(flow):\n hdrs.pop(spec.subject, None)\n\n # set all specified headers if the replacement string is not empty\n for spec in self.replacements:\n if spec.matches(flow):\n try:\n replacement = spec.read_replacement()\n except OSError as e:\n ctx.log.warn(f\"Could not read replacement file: {e}\")\n continue\n else:\n if replacement:\n hdrs.add(spec.subject, replacement)\n", "path": "mitmproxy/addons/modifyheaders.py"}]} | 1,802 | 248 |
gh_patches_debug_23291 | rasdani/github-patches | git_diff | scikit-hep__awkward-3115 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
typing ak.Array for numba.cuda.jit signature
### Version of Awkward Array
2.6.2
### Description and code to reproduce
Hey guys, I followed a hint from the discussion in [#696](https://github.com/scikit-hep/awkward/discussions/696#discussion-2571850) to type `ak.Array` for numba signatures. So I tried something like
```python
import awkward as ak
import numba as nb
from numba import types
cpu_arr_type = ak.Array([[[0, 1], [2, 3]], [[4, 5]]], backend='cpu').numba_type
@nb.njit(types.void(cpu_arr_type))
def cpu_kernel(arr):
do_something_with_arr
```
and this works like a charm.
However, I'm interested in the same case but with a cuda kernel. So I tried what appeared more natural to do:
```python
gpu_arr_type = ak.Array([[[0, 1], [2, 3]], [[4, 5]]], backend='cuda').numba_type
@nb.cuda.jit(types.void(gpu_arr_type), extensions=[ak.numba.cuda])
def cuda_kernel(arr):
do_something_with_arr
```
This time, I get the error:
```python
self = <awkward._connect.numba.arrayview_cuda.ArrayViewArgHandler object at 0x784afbc13fa0>
ty = ak.ArrayView(ak.ListArrayType(array(int64, 1d, C), ak.ListArrayType(array(int64, 1d, C), ak.NumpyArrayType(array(int64, 1d, C), {}), {}), {}), None, ())
val = <Array [[[4, 1], [2, -1]], [...], [[4, 0]]] type='3 * var * var * int64'>
stream = 0, retr = []
def prepare_args(self, ty, val, stream, retr):
if isinstance(val, ak.Array):
if isinstance(val.layout.backend, CupyBackend):
# Use uint64 for pos, start, stop, the array pointers values, and the pylookup value
tys = numba.types.UniTuple(numba.types.uint64, 5)
> start = val._numbaview.start
E AttributeError: 'NoneType' object has no attribute 'start'
.../site-packages/awkward/_connect/numba/arrayview_cuda.py:21: AttributeError
```
How should this latter case be correctly treated? Note that, without typing, the thing works as expected:
```python
@nb.cuda.jit(extensions=[ak.numba.cuda])
def cuda_kernel_no_typing(arr):
do_something_with_arr
```
However, I'm interested in `ak.Array`s with the 3D layout of integers (as above) and would like to take advantage of numba's eager compilation. I'm passing the `arr` for testing as
```python
backend = 'cpu' # or 'cuda'
arr = ak.to_backend(
ak.Array([
[[4, 1], [2, -1]],
[[0, -1], [1, 1], [3, -1]],
[[4, 0]]
]),
backend
)
```
Any help is appreciated!
</issue>
<code>
[start of src/awkward/_connect/numba/arrayview_cuda.py]
1 # BSD 3-Clause License; see https://github.com/scikit-hep/awkward/blob/main/LICENSE
2
3 from __future__ import annotations
4
5 import numba
6 from numba.core.errors import NumbaTypeError
7
8 import awkward as ak
9 from awkward._backends.cupy import CupyBackend
10
11 ########## ArrayView Arguments Handler for CUDA JIT
12
13
14 class ArrayViewArgHandler:
15 def prepare_args(self, ty, val, stream, retr):
16 if isinstance(val, ak.Array):
17 if isinstance(val.layout.backend, CupyBackend):
18 # Use uint64 for pos, start, stop, the array pointers values, and the pylookup value
19 tys = numba.types.UniTuple(numba.types.uint64, 5)
20
21 start = val._numbaview.start
22 stop = val._numbaview.stop
23 pos = val._numbaview.pos
24 arrayptrs = val._numbaview.lookup.arrayptrs.data.ptr
25 pylookup = 0
26
27 return tys, (pos, start, stop, arrayptrs, pylookup)
28 else:
29 raise NumbaTypeError(
30 '`ak.to_backend` should be called with `backend="cuda"` to put '
31 "the array on the GPU before using it: "
32 'ak.to_backend(array, backend="cuda")'
33 )
34
35 else:
36 return ty, val
37
[end of src/awkward/_connect/numba/arrayview_cuda.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/awkward/_connect/numba/arrayview_cuda.py b/src/awkward/_connect/numba/arrayview_cuda.py
--- a/src/awkward/_connect/numba/arrayview_cuda.py
+++ b/src/awkward/_connect/numba/arrayview_cuda.py
@@ -15,13 +15,22 @@
def prepare_args(self, ty, val, stream, retr):
if isinstance(val, ak.Array):
if isinstance(val.layout.backend, CupyBackend):
+ if ty is not val.numba_type:
+ raise NumbaTypeError(
+ f"the array type: {val.numba_type} does not match "
+ f"the kernel signature type: {ty}"
+ )
+
# Use uint64 for pos, start, stop, the array pointers values, and the pylookup value
tys = numba.types.UniTuple(numba.types.uint64, 5)
- start = val._numbaview.start
- stop = val._numbaview.stop
- pos = val._numbaview.pos
- arrayptrs = val._numbaview.lookup.arrayptrs.data.ptr
+ view = val._numbaview
+ assert view is not None
+
+ start = view.start
+ stop = view.stop
+ pos = view.pos
+ arrayptrs = view.lookup.arrayptrs.data.ptr
pylookup = 0
return tys, (pos, start, stop, arrayptrs, pylookup)
| {"golden_diff": "diff --git a/src/awkward/_connect/numba/arrayview_cuda.py b/src/awkward/_connect/numba/arrayview_cuda.py\n--- a/src/awkward/_connect/numba/arrayview_cuda.py\n+++ b/src/awkward/_connect/numba/arrayview_cuda.py\n@@ -15,13 +15,22 @@\n def prepare_args(self, ty, val, stream, retr):\n if isinstance(val, ak.Array):\n if isinstance(val.layout.backend, CupyBackend):\n+ if ty is not val.numba_type:\n+ raise NumbaTypeError(\n+ f\"the array type: {val.numba_type} does not match \"\n+ f\"the kernel signature type: {ty}\"\n+ )\n+\n # Use uint64 for pos, start, stop, the array pointers values, and the pylookup value\n tys = numba.types.UniTuple(numba.types.uint64, 5)\n \n- start = val._numbaview.start\n- stop = val._numbaview.stop\n- pos = val._numbaview.pos\n- arrayptrs = val._numbaview.lookup.arrayptrs.data.ptr\n+ view = val._numbaview\n+ assert view is not None\n+\n+ start = view.start\n+ stop = view.stop\n+ pos = view.pos\n+ arrayptrs = view.lookup.arrayptrs.data.ptr\n pylookup = 0\n \n return tys, (pos, start, stop, arrayptrs, pylookup)\n", "issue": "typing ak.Array for numba.cuda.jit signature\n### Version of Awkward Array\n\n2.6.2\n\n### Description and code to reproduce\n\nHey guys, I followed a hint from the discussion in [#696](https://github.com/scikit-hep/awkward/discussions/696#discussion-2571850) to type `ak.Array` for numba signatures. So I tried something like\r\n\r\n```python\r\nimport awkward as ak\r\nimport numba as nb\r\nfrom numba import types\r\n\r\ncpu_arr_type = ak.Array([[[0, 1], [2, 3]], [[4, 5]]], backend='cpu').numba_type\r\n\r\[email protected](types.void(cpu_arr_type))\r\ndef cpu_kernel(arr):\r\n do_something_with_arr\r\n```\r\nand this works like a charm.\r\n\r\nHowever, I'm interested in the same case but with a cuda kernel. So I tried what appeared more natural to do:\r\n```python\r\ngpu_arr_type = ak.Array([[[0, 1], [2, 3]], [[4, 5]]], backend='cuda').numba_type\r\n\r\[email protected](types.void(gpu_arr_type), extensions=[ak.numba.cuda])\r\ndef cuda_kernel(arr):\r\n do_something_with_arr\r\n```\r\nThis time, I get the error:\r\n```python\r\nself = <awkward._connect.numba.arrayview_cuda.ArrayViewArgHandler object at 0x784afbc13fa0>\r\nty = ak.ArrayView(ak.ListArrayType(array(int64, 1d, C), ak.ListArrayType(array(int64, 1d, C), ak.NumpyArrayType(array(int64, 1d, C), {}), {}), {}), None, ())\r\nval = <Array [[[4, 1], [2, -1]], [...], [[4, 0]]] type='3 * var * var * int64'>\r\nstream = 0, retr = []\r\n\r\n def prepare_args(self, ty, val, stream, retr):\r\n if isinstance(val, ak.Array):\r\n if isinstance(val.layout.backend, CupyBackend):\r\n # Use uint64 for pos, start, stop, the array pointers values, and the pylookup value\r\n tys = numba.types.UniTuple(numba.types.uint64, 5)\r\n \r\n> start = val._numbaview.start\r\nE AttributeError: 'NoneType' object has no attribute 'start'\r\n\r\n.../site-packages/awkward/_connect/numba/arrayview_cuda.py:21: AttributeError\r\n```\r\nHow should this latter case be correctly treated? Note that, without typing, the thing works as expected:\r\n```python\r\[email protected](extensions=[ak.numba.cuda])\r\ndef cuda_kernel_no_typing(arr):\r\n do_something_with_arr\r\n```\r\nHowever, I'm interested in `ak.Array`s with the 3D layout of integers (as above) and would like to take advantage of numba's eager compilation. I'm passing the `arr` for testing as\r\n```python\r\nbackend = 'cpu' # or 'cuda'\r\narr = ak.to_backend(\r\n ak.Array([\r\n [[4, 1], [2, -1]],\r\n [[0, -1], [1, 1], [3, -1]],\r\n [[4, 0]]\r\n ]),\r\n backend\r\n)\r\n```\r\nAny help is appreciated!\r\n\n", "before_files": [{"content": "# BSD 3-Clause License; see https://github.com/scikit-hep/awkward/blob/main/LICENSE\n\nfrom __future__ import annotations\n\nimport numba\nfrom numba.core.errors import NumbaTypeError\n\nimport awkward as ak\nfrom awkward._backends.cupy import CupyBackend\n\n########## ArrayView Arguments Handler for CUDA JIT\n\n\nclass ArrayViewArgHandler:\n def prepare_args(self, ty, val, stream, retr):\n if isinstance(val, ak.Array):\n if isinstance(val.layout.backend, CupyBackend):\n # Use uint64 for pos, start, stop, the array pointers values, and the pylookup value\n tys = numba.types.UniTuple(numba.types.uint64, 5)\n\n start = val._numbaview.start\n stop = val._numbaview.stop\n pos = val._numbaview.pos\n arrayptrs = val._numbaview.lookup.arrayptrs.data.ptr\n pylookup = 0\n\n return tys, (pos, start, stop, arrayptrs, pylookup)\n else:\n raise NumbaTypeError(\n '`ak.to_backend` should be called with `backend=\"cuda\"` to put '\n \"the array on the GPU before using it: \"\n 'ak.to_backend(array, backend=\"cuda\")'\n )\n\n else:\n return ty, val\n", "path": "src/awkward/_connect/numba/arrayview_cuda.py"}]} | 1,647 | 343 |
gh_patches_debug_21885 | rasdani/github-patches | git_diff | numba__numba-3578 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
simulate bug func_or_sig vs fn_or_sig named parameter
There seems to be a difference in the named parameter func_or_sig/fn_or_sig between the cuda.jit() in the simulator vs gpu code.
</issue>
<code>
[start of numba/cuda/simulator/api.py]
1 '''
2 Contains CUDA API functions
3 '''
4 from __future__ import absolute_import
5
6 from contextlib import contextmanager
7 from .cudadrv.devices import require_context, reset, gpus
8 from .kernel import FakeCUDAKernel
9 from numba.typing import Signature
10 from warnings import warn
11 from ..args import In, Out, InOut
12
13
14 def select_device(dev=0):
15 assert dev == 0, 'Only a single device supported by the simulator'
16
17
18 class stream(object):
19 '''
20 The stream API is supported in the simulator - however, all execution
21 occurs synchronously, so synchronization requires no operation.
22 '''
23 @contextmanager
24 def auto_synchronize(self):
25 yield
26
27 def synchronize(self):
28 pass
29
30
31 def synchronize():
32 pass
33
34 def close():
35 gpus.closed = True
36
37
38 def declare_device(*args, **kwargs):
39 pass
40
41
42 def detect():
43 print('Found 1 CUDA devices')
44 print('id %d %20s %40s' % (0, 'SIMULATOR', '[SUPPORTED]'))
45 print('%40s: 5.2' % 'compute capability')
46
47
48 def list_devices():
49 return gpus
50
51
52 # Events
53
54 class Event(object):
55 '''
56 The simulator supports the event API, but they do not record timing info,
57 and all simulation is synchronous. Execution time is not recorded.
58 '''
59 def record(self, stream=0):
60 pass
61
62 def wait(self, stream=0):
63 pass
64
65 def synchronize(self):
66 pass
67
68 def elapsed_time(self, event):
69 warn('Simulator timings are bogus')
70 return 0.0
71
72 event = Event
73
74
75 def jit(fn_or_sig=None, device=False, debug=False, argtypes=None, inline=False, restype=None,
76 fastmath=False, link=None):
77 if link is not None:
78 raise NotImplementedError('Cannot link PTX in the simulator')
79 # Check for first argument specifying types - in that case the
80 # decorator is not being passed a function
81 if fn_or_sig is None or isinstance(fn_or_sig, (str, tuple, Signature)):
82 def jitwrapper(fn):
83 return FakeCUDAKernel(fn,
84 device=device,
85 fastmath=fastmath)
86 return jitwrapper
87 return FakeCUDAKernel(fn_or_sig, device=device)
88
89 autojit = jit
90
91
92 @contextmanager
93 def defer_cleanup():
94 # No effect for simulator
95 yield
96
[end of numba/cuda/simulator/api.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/numba/cuda/simulator/api.py b/numba/cuda/simulator/api.py
--- a/numba/cuda/simulator/api.py
+++ b/numba/cuda/simulator/api.py
@@ -72,19 +72,19 @@
event = Event
-def jit(fn_or_sig=None, device=False, debug=False, argtypes=None, inline=False, restype=None,
- fastmath=False, link=None):
+def jit(func_or_sig=None, device=False, debug=False, argtypes=None,
+ inline=False, restype=None, fastmath=False, link=None):
if link is not None:
raise NotImplementedError('Cannot link PTX in the simulator')
# Check for first argument specifying types - in that case the
# decorator is not being passed a function
- if fn_or_sig is None or isinstance(fn_or_sig, (str, tuple, Signature)):
+ if func_or_sig is None or isinstance(func_or_sig, (str, tuple, Signature)):
def jitwrapper(fn):
return FakeCUDAKernel(fn,
device=device,
fastmath=fastmath)
return jitwrapper
- return FakeCUDAKernel(fn_or_sig, device=device)
+ return FakeCUDAKernel(func_or_sig, device=device)
autojit = jit
| {"golden_diff": "diff --git a/numba/cuda/simulator/api.py b/numba/cuda/simulator/api.py\n--- a/numba/cuda/simulator/api.py\n+++ b/numba/cuda/simulator/api.py\n@@ -72,19 +72,19 @@\n event = Event\n \n \n-def jit(fn_or_sig=None, device=False, debug=False, argtypes=None, inline=False, restype=None,\n- fastmath=False, link=None):\n+def jit(func_or_sig=None, device=False, debug=False, argtypes=None,\n+ inline=False, restype=None, fastmath=False, link=None):\n if link is not None:\n raise NotImplementedError('Cannot link PTX in the simulator')\n # Check for first argument specifying types - in that case the\n # decorator is not being passed a function\n- if fn_or_sig is None or isinstance(fn_or_sig, (str, tuple, Signature)):\n+ if func_or_sig is None or isinstance(func_or_sig, (str, tuple, Signature)):\n def jitwrapper(fn):\n return FakeCUDAKernel(fn,\n device=device,\n fastmath=fastmath)\n return jitwrapper\n- return FakeCUDAKernel(fn_or_sig, device=device)\n+ return FakeCUDAKernel(func_or_sig, device=device)\n \n autojit = jit\n", "issue": "simulate bug func_or_sig vs fn_or_sig named parameter\nThere seems to be a difference in the named parameter func_or_sig/fn_or_sig between the cuda.jit() in the simulator vs gpu code. \n", "before_files": [{"content": "'''\nContains CUDA API functions\n'''\nfrom __future__ import absolute_import\n\nfrom contextlib import contextmanager\nfrom .cudadrv.devices import require_context, reset, gpus\nfrom .kernel import FakeCUDAKernel\nfrom numba.typing import Signature\nfrom warnings import warn\nfrom ..args import In, Out, InOut\n\n\ndef select_device(dev=0):\n assert dev == 0, 'Only a single device supported by the simulator'\n\n\nclass stream(object):\n '''\n The stream API is supported in the simulator - however, all execution\n occurs synchronously, so synchronization requires no operation.\n '''\n @contextmanager\n def auto_synchronize(self):\n yield\n\n def synchronize(self):\n pass\n\n\ndef synchronize():\n pass\n\ndef close():\n gpus.closed = True\n\n\ndef declare_device(*args, **kwargs):\n pass\n\n\ndef detect():\n print('Found 1 CUDA devices')\n print('id %d %20s %40s' % (0, 'SIMULATOR', '[SUPPORTED]'))\n print('%40s: 5.2' % 'compute capability')\n\n\ndef list_devices():\n return gpus\n\n\n# Events\n\nclass Event(object):\n '''\n The simulator supports the event API, but they do not record timing info,\n and all simulation is synchronous. Execution time is not recorded.\n '''\n def record(self, stream=0):\n pass\n\n def wait(self, stream=0):\n pass\n\n def synchronize(self):\n pass\n\n def elapsed_time(self, event):\n warn('Simulator timings are bogus')\n return 0.0\n\nevent = Event\n\n\ndef jit(fn_or_sig=None, device=False, debug=False, argtypes=None, inline=False, restype=None,\n fastmath=False, link=None):\n if link is not None:\n raise NotImplementedError('Cannot link PTX in the simulator')\n # Check for first argument specifying types - in that case the\n # decorator is not being passed a function\n if fn_or_sig is None or isinstance(fn_or_sig, (str, tuple, Signature)):\n def jitwrapper(fn):\n return FakeCUDAKernel(fn,\n device=device,\n fastmath=fastmath)\n return jitwrapper\n return FakeCUDAKernel(fn_or_sig, device=device)\n\nautojit = jit\n\n\n@contextmanager\ndef defer_cleanup():\n # No effect for simulator\n yield\n", "path": "numba/cuda/simulator/api.py"}]} | 1,305 | 287 |
gh_patches_debug_25919 | rasdani/github-patches | git_diff | archlinux__archinstall-823 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
mkinitcpio.conf generated incorrectly for AMDGPU.
As the archwiki installation guide states [https://wiki.archlinux.org/title/AMDGPU#Specify_the_correct_module_order](https://wiki.archlinux.org/title/AMDGPU#Specify_the_correct_module_order), you must ensure that the amdgpu module is loaded before the radeon one: `MODULES=(amdgpu radeon)`
Otherwise the DM will fail to start at boot.
</issue>
<code>
[start of profiles/xorg.py]
1 # A system with "xorg" installed
2
3 import archinstall
4 import logging
5
6 is_top_level_profile = True
7
8 __description__ = 'Installs a minimal system as well as xorg and graphics drivers.'
9
10 __packages__ = [
11 'dkms',
12 'xorg-server',
13 'xorg-xinit',
14 'nvidia-dkms',
15 *archinstall.lib.hardware.__packages__,
16 ]
17
18
19 def _prep_function(*args, **kwargs):
20 """
21 Magic function called by the importing installer
22 before continuing any further. It also avoids executing any
23 other code in this stage. So it's a safe way to ask the user
24 for more input before any other installer steps start.
25 """
26
27 archinstall.storage["gfx_driver_packages"] = archinstall.select_driver()
28
29 # TODO: Add language section and/or merge it with the locale selected
30 # earlier in for instance guided.py installer.
31
32 return True
33
34
35 # Ensures that this code only gets executed if executed
36 # through importlib.util.spec_from_file_location("xorg", "/somewhere/xorg.py")
37 # or through conventional import xorg
38 if __name__ == 'xorg':
39 try:
40 if "nvidia" in archinstall.storage.get("gfx_driver_packages", []):
41 if "linux-zen" in archinstall.storage['installation_session'].base_packages or "linux-lts" in archinstall.storage['installation_session'].base_packages:
42 for kernel in archinstall.storage['installation_session'].kernels:
43 archinstall.storage['installation_session'].add_additional_packages(f"{kernel}-headers") # Fixes https://github.com/archlinux/archinstall/issues/585
44 archinstall.storage['installation_session'].add_additional_packages("dkms") # I've had kernel regen fail if it wasn't installed before nvidia-dkms
45 archinstall.storage['installation_session'].add_additional_packages("xorg-server xorg-xinit nvidia-dkms")
46 else:
47 archinstall.storage['installation_session'].add_additional_packages(f"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}")
48 else:
49 archinstall.storage['installation_session'].add_additional_packages(f"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}")
50 except Exception as err:
51 archinstall.log(f"Could not handle nvidia and linuz-zen specific situations during xorg installation: {err}", level=logging.WARNING, fg="yellow")
52 archinstall.storage['installation_session'].add_additional_packages("xorg-server xorg-xinit") # Prep didn't run, so there's no driver to install
53
[end of profiles/xorg.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/profiles/xorg.py b/profiles/xorg.py
--- a/profiles/xorg.py
+++ b/profiles/xorg.py
@@ -45,6 +45,17 @@
archinstall.storage['installation_session'].add_additional_packages("xorg-server xorg-xinit nvidia-dkms")
else:
archinstall.storage['installation_session'].add_additional_packages(f"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}")
+ elif 'amdgpu' in archinstall.storage.get("gfx_driver_packages", []):
+ # The order of these two are important if amdgpu is installed #808
+ if 'amdgpu' in archinstall.storage['installation_session'].MODULES:
+ archinstall.storage['installation_session'].MODULES.remove('amdgpu')
+ archinstall.storage['installation_session'].MODULES.append('amdgpu')
+
+ if 'radeon' in archinstall.storage['installation_session'].MODULES:
+ archinstall.storage['installation_session'].MODULES.remove('radeon')
+ archinstall.storage['installation_session'].MODULES.append('radeon')
+
+ archinstall.storage['installation_session'].add_additional_packages(f"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}")
else:
archinstall.storage['installation_session'].add_additional_packages(f"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}")
except Exception as err:
| {"golden_diff": "diff --git a/profiles/xorg.py b/profiles/xorg.py\n--- a/profiles/xorg.py\n+++ b/profiles/xorg.py\n@@ -45,6 +45,17 @@\n \t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(\"xorg-server xorg-xinit nvidia-dkms\")\n \t\t\telse:\n \t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}\")\n+\t\telif 'amdgpu' in archinstall.storage.get(\"gfx_driver_packages\", []):\n+\t\t\t# The order of these two are important if amdgpu is installed #808\n+\t\t\tif 'amdgpu' in archinstall.storage['installation_session'].MODULES:\n+\t\t\t\tarchinstall.storage['installation_session'].MODULES.remove('amdgpu')\n+\t\t\tarchinstall.storage['installation_session'].MODULES.append('amdgpu')\n+\n+\t\t\tif 'radeon' in archinstall.storage['installation_session'].MODULES:\n+\t\t\t\tarchinstall.storage['installation_session'].MODULES.remove('radeon')\n+\t\t\tarchinstall.storage['installation_session'].MODULES.append('radeon')\n+\n+\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}\")\n \t\telse:\n \t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}\")\n \texcept Exception as err:\n", "issue": "mkinitcpio.conf generated incorrectly for AMDGPU.\nAs the archwiki installation guide states [https://wiki.archlinux.org/title/AMDGPU#Specify_the_correct_module_order](https://wiki.archlinux.org/title/AMDGPU#Specify_the_correct_module_order), you must ensure that the amdgpu module is loaded before the radeon one: `MODULES=(amdgpu radeon)`\r\nOtherwise the DM will fail to start at boot.\n", "before_files": [{"content": "# A system with \"xorg\" installed\n\nimport archinstall\nimport logging\n\nis_top_level_profile = True\n\n__description__ = 'Installs a minimal system as well as xorg and graphics drivers.'\n\n__packages__ = [\n\t'dkms',\n\t'xorg-server',\n\t'xorg-xinit',\n\t'nvidia-dkms',\n\t*archinstall.lib.hardware.__packages__,\n]\n\n\ndef _prep_function(*args, **kwargs):\n\t\"\"\"\n\tMagic function called by the importing installer\n\tbefore continuing any further. It also avoids executing any\n\tother code in this stage. So it's a safe way to ask the user\n\tfor more input before any other installer steps start.\n\t\"\"\"\n\n\tarchinstall.storage[\"gfx_driver_packages\"] = archinstall.select_driver()\n\n\t# TODO: Add language section and/or merge it with the locale selected\n\t# earlier in for instance guided.py installer.\n\n\treturn True\n\n\n# Ensures that this code only gets executed if executed\n# through importlib.util.spec_from_file_location(\"xorg\", \"/somewhere/xorg.py\")\n# or through conventional import xorg\nif __name__ == 'xorg':\n\ttry:\n\t\tif \"nvidia\" in archinstall.storage.get(\"gfx_driver_packages\", []):\n\t\t\tif \"linux-zen\" in archinstall.storage['installation_session'].base_packages or \"linux-lts\" in archinstall.storage['installation_session'].base_packages:\n\t\t\t\tfor kernel in archinstall.storage['installation_session'].kernels:\n\t\t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"{kernel}-headers\") # Fixes https://github.com/archlinux/archinstall/issues/585\n\t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(\"dkms\") # I've had kernel regen fail if it wasn't installed before nvidia-dkms\n\t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(\"xorg-server xorg-xinit nvidia-dkms\")\n\t\t\telse:\n\t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}\")\n\t\telse:\n\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}\")\n\texcept Exception as err:\n\t\tarchinstall.log(f\"Could not handle nvidia and linuz-zen specific situations during xorg installation: {err}\", level=logging.WARNING, fg=\"yellow\")\n\t\tarchinstall.storage['installation_session'].add_additional_packages(\"xorg-server xorg-xinit\") # Prep didn't run, so there's no driver to install\n", "path": "profiles/xorg.py"}]} | 1,308 | 344 |
gh_patches_debug_2112 | rasdani/github-patches | git_diff | Qiskit__qiskit-1940 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
rzz gate
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: 0.7.2
- **Python version**: 3.6.6
- **Operating system**: Windows 10
### What is the current behavior?
rzz gate appears to give incorrect results
### Steps to reproduce the problem
rzz gate rule defined in https://github.com/Qiskit/qiskit-terra/blob/master/qiskit/extensions/standard/rzz.py
```
CnotGate(q[0], q[1]),
U1Gate(self.params[0], q[0]),
CnotGate(q[0], q[1])
```
### What is the expected behavior?
I think it should be
```
CnotGate(q[0], q[1]),
U1Gate(self.params[0], q[1]),
CnotGate(q[0], q[1])
```
the u1 phase should be on the target instead of control
### Suggested solutions
modify rzz gate definition to give the right behavior.
</issue>
<code>
[start of qiskit/extensions/standard/rzz.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright 2017, IBM.
4 #
5 # This source code is licensed under the Apache License, Version 2.0 found in
6 # the LICENSE.txt file in the root directory of this source tree.
7
8 """
9 two-qubit ZZ-rotation gate.
10 """
11 from qiskit.circuit import CompositeGate
12 from qiskit.circuit import Gate
13 from qiskit.circuit import QuantumCircuit
14 from qiskit.circuit import QuantumRegister
15 from qiskit.circuit.decorators import _op_expand
16 from qiskit.dagcircuit import DAGCircuit
17 from qiskit.extensions.standard.u1 import U1Gate
18 from qiskit.extensions.standard.cx import CnotGate
19
20
21 class RZZGate(Gate):
22 """Two-qubit ZZ-rotation gate."""
23
24 def __init__(self, theta, ctl, tgt, circ=None):
25 """Create new rzz gate."""
26 super().__init__("rzz", [theta], [ctl, tgt], circ)
27
28 def _define_decompositions(self):
29 """
30 gate rzz(theta) a, b { cx a, b; u1(theta) b; cx a, b; }
31 """
32 decomposition = DAGCircuit()
33 q = QuantumRegister(2, "q")
34 decomposition.add_qreg(q)
35 rule = [
36 CnotGate(q[0], q[1]),
37 U1Gate(self.params[0], q[0]),
38 CnotGate(q[0], q[1])
39 ]
40 for inst in rule:
41 decomposition.apply_operation_back(inst)
42 self._decompositions = [decomposition]
43
44 def inverse(self):
45 """Invert this gate."""
46 self.params[0] = -self.params[0]
47 self._decompositions = None
48 return self
49
50 def reapply(self, circ):
51 """Reapply this gate to corresponding qubits in circ."""
52 self._modifiers(circ.rzz(self.params[0], self.qargs[0], self.qargs[1]))
53
54
55 @_op_expand(2, broadcastable=[False, False])
56 def rzz(self, theta, qubit1, qubit2):
57 """Apply RZZ to circuit."""
58 self._check_qubit(qubit1)
59 self._check_qubit(qubit2)
60 self._check_dups([qubit1, qubit2])
61 return self._attach(RZZGate(theta, qubit1, qubit2, self))
62
63
64 # Add to QuantumCircuit and CompositeGate classes
65 QuantumCircuit.rzz = rzz
66 CompositeGate.rzz = rzz
67
[end of qiskit/extensions/standard/rzz.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qiskit/extensions/standard/rzz.py b/qiskit/extensions/standard/rzz.py
--- a/qiskit/extensions/standard/rzz.py
+++ b/qiskit/extensions/standard/rzz.py
@@ -34,7 +34,7 @@
decomposition.add_qreg(q)
rule = [
CnotGate(q[0], q[1]),
- U1Gate(self.params[0], q[0]),
+ U1Gate(self.params[0], q[1]),
CnotGate(q[0], q[1])
]
for inst in rule:
| {"golden_diff": "diff --git a/qiskit/extensions/standard/rzz.py b/qiskit/extensions/standard/rzz.py\n--- a/qiskit/extensions/standard/rzz.py\n+++ b/qiskit/extensions/standard/rzz.py\n@@ -34,7 +34,7 @@\n decomposition.add_qreg(q)\n rule = [\n CnotGate(q[0], q[1]),\n- U1Gate(self.params[0], q[0]),\n+ U1Gate(self.params[0], q[1]),\n CnotGate(q[0], q[1])\n ]\n for inst in rule:\n", "issue": "rzz gate\n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed -->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues -->\r\n\r\n### Information\r\n\r\n- **Qiskit Terra version**: 0.7.2\r\n- **Python version**: 3.6.6\r\n- **Operating system**: Windows 10\r\n\r\n### What is the current behavior?\r\n\r\nrzz gate appears to give incorrect results\r\n\r\n### Steps to reproduce the problem\r\n\r\nrzz gate rule defined in https://github.com/Qiskit/qiskit-terra/blob/master/qiskit/extensions/standard/rzz.py\r\n\r\n```\r\n CnotGate(q[0], q[1]),\r\n U1Gate(self.params[0], q[0]),\r\n CnotGate(q[0], q[1])\r\n```\r\n\r\n### What is the expected behavior?\r\n\r\nI think it should be\r\n```\r\n CnotGate(q[0], q[1]),\r\n U1Gate(self.params[0], q[1]),\r\n CnotGate(q[0], q[1])\r\n```\r\nthe u1 phase should be on the target instead of control\r\n\r\n### Suggested solutions\r\n\r\nmodify rzz gate definition to give the right behavior.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright 2017, IBM.\n#\n# This source code is licensed under the Apache License, Version 2.0 found in\n# the LICENSE.txt file in the root directory of this source tree.\n\n\"\"\"\ntwo-qubit ZZ-rotation gate.\n\"\"\"\nfrom qiskit.circuit import CompositeGate\nfrom qiskit.circuit import Gate\nfrom qiskit.circuit import QuantumCircuit\nfrom qiskit.circuit import QuantumRegister\nfrom qiskit.circuit.decorators import _op_expand\nfrom qiskit.dagcircuit import DAGCircuit\nfrom qiskit.extensions.standard.u1 import U1Gate\nfrom qiskit.extensions.standard.cx import CnotGate\n\n\nclass RZZGate(Gate):\n \"\"\"Two-qubit ZZ-rotation gate.\"\"\"\n\n def __init__(self, theta, ctl, tgt, circ=None):\n \"\"\"Create new rzz gate.\"\"\"\n super().__init__(\"rzz\", [theta], [ctl, tgt], circ)\n\n def _define_decompositions(self):\n \"\"\"\n gate rzz(theta) a, b { cx a, b; u1(theta) b; cx a, b; }\n \"\"\"\n decomposition = DAGCircuit()\n q = QuantumRegister(2, \"q\")\n decomposition.add_qreg(q)\n rule = [\n CnotGate(q[0], q[1]),\n U1Gate(self.params[0], q[0]),\n CnotGate(q[0], q[1])\n ]\n for inst in rule:\n decomposition.apply_operation_back(inst)\n self._decompositions = [decomposition]\n\n def inverse(self):\n \"\"\"Invert this gate.\"\"\"\n self.params[0] = -self.params[0]\n self._decompositions = None\n return self\n\n def reapply(self, circ):\n \"\"\"Reapply this gate to corresponding qubits in circ.\"\"\"\n self._modifiers(circ.rzz(self.params[0], self.qargs[0], self.qargs[1]))\n\n\n@_op_expand(2, broadcastable=[False, False])\ndef rzz(self, theta, qubit1, qubit2):\n \"\"\"Apply RZZ to circuit.\"\"\"\n self._check_qubit(qubit1)\n self._check_qubit(qubit2)\n self._check_dups([qubit1, qubit2])\n return self._attach(RZZGate(theta, qubit1, qubit2, self))\n\n\n# Add to QuantumCircuit and CompositeGate classes\nQuantumCircuit.rzz = rzz\nCompositeGate.rzz = rzz\n", "path": "qiskit/extensions/standard/rzz.py"}]} | 1,490 | 132 |
gh_patches_debug_31020 | rasdani/github-patches | git_diff | OpenMined__PySyft-3150 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove ZSTD
**Is your feature request related to a problem? Please describe.**
ZSTD is used for compression in our serde process. However we don't need extra compression as we move to Protobuf.
ZSTD is usually a source of problems when installing PySyft with different hacks to solve it.
**Describe the solution you'd like**
Remove ZSTD dependency.
This will require removing the tests and its use in serde.
**Describe alternatives you've considered**
Protobuf covers compression.
**Additional context**
</issue>
<code>
[start of syft/serde/compression.py]
1 """
2 This file exists to provide one common place for all compression methods used in
3 simplifying and serializing PySyft objects.
4 """
5
6 import lz4
7 from lz4 import ( # noqa: F401
8 frame,
9 ) # needed as otherwise we will get: module 'lz4' has no attribute 'frame'
10 import zstd
11
12 from syft.exceptions import CompressionNotFoundException
13
14 # COMPRESSION SCHEME INT CODES
15 NO_COMPRESSION = 40
16 LZ4 = 41
17 ZSTD = 42
18 scheme_to_bytes = {
19 NO_COMPRESSION: NO_COMPRESSION.to_bytes(1, byteorder="big"),
20 LZ4: LZ4.to_bytes(1, byteorder="big"),
21 ZSTD: ZSTD.to_bytes(1, byteorder="big"),
22 }
23
24 ## SECTION: chosen Compression Algorithm
25
26
27 def _apply_compress_scheme(decompressed_input_bin) -> tuple:
28 """
29 Apply the selected compression scheme.
30 By default is used LZ4
31
32 Args:
33 decompressed_input_bin: the binary to be compressed
34 """
35 return apply_lz4_compression(decompressed_input_bin)
36
37
38 def apply_lz4_compression(decompressed_input_bin) -> tuple:
39 """
40 Apply LZ4 compression to the input
41
42 Args:
43 decompressed_input_bin: the binary to be compressed
44
45 Returns:
46 a tuple (compressed_result, LZ4)
47 """
48 return lz4.frame.compress(decompressed_input_bin), LZ4
49
50
51 def apply_zstd_compression(decompressed_input_bin) -> tuple:
52 """
53 Apply ZSTD compression to the input
54
55 Args:
56 decompressed_input_bin: the binary to be compressed
57
58 Returns:
59 a tuple (compressed_result, ZSTD)
60 """
61
62 return zstd.compress(decompressed_input_bin), ZSTD
63
64
65 def apply_no_compression(decompressed_input_bin) -> tuple:
66 """
67 No compression is applied to the input
68
69 Args:
70 decompressed_input_bin: the binary
71
72 Returns:
73 a tuple (the binary, LZ4)
74 """
75
76 return decompressed_input_bin, NO_COMPRESSION
77
78
79 def _compress(decompressed_input_bin: bin) -> bin:
80 """
81 This function compresses a binary using the function _apply_compress_scheme
82 if the input has been already compressed in some step, it will return it as it is
83
84 Args:
85 decompressed_input_bin (bin): binary to be compressed
86
87 Returns:
88 bin: a compressed binary
89
90 """
91 compress_stream, compress_scheme = _apply_compress_scheme(decompressed_input_bin)
92 try:
93 z = scheme_to_bytes[compress_scheme] + compress_stream
94 return z
95 except KeyError:
96 raise CompressionNotFoundException(
97 f"Compression scheme not found for compression code: {str(compress_scheme)}"
98 )
99
100
101 def _decompress(binary: bin) -> bin:
102 """
103 This function decompresses a binary using the scheme defined in the first byte of the input
104
105 Args:
106 binary (bin): a compressed binary
107
108 Returns:
109 bin: decompressed binary
110
111 """
112
113 # check the 1-byte header to check the compression scheme used
114 compress_scheme = binary[0]
115
116 # remove the 1-byte header from the input stream
117 binary = binary[1:]
118 # 1) Decompress or return the original stream
119 if compress_scheme == LZ4:
120 return lz4.frame.decompress(binary)
121 elif compress_scheme == ZSTD:
122 return zstd.decompress(binary)
123 elif compress_scheme == NO_COMPRESSION:
124 return binary
125 else:
126 raise CompressionNotFoundException(
127 f"Compression scheme not found for compression code: {str(compress_scheme)}"
128 )
129
[end of syft/serde/compression.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/syft/serde/compression.py b/syft/serde/compression.py
--- a/syft/serde/compression.py
+++ b/syft/serde/compression.py
@@ -7,18 +7,15 @@
from lz4 import ( # noqa: F401
frame,
) # needed as otherwise we will get: module 'lz4' has no attribute 'frame'
-import zstd
from syft.exceptions import CompressionNotFoundException
# COMPRESSION SCHEME INT CODES
NO_COMPRESSION = 40
LZ4 = 41
-ZSTD = 42
scheme_to_bytes = {
NO_COMPRESSION: NO_COMPRESSION.to_bytes(1, byteorder="big"),
LZ4: LZ4.to_bytes(1, byteorder="big"),
- ZSTD: ZSTD.to_bytes(1, byteorder="big"),
}
## SECTION: chosen Compression Algorithm
@@ -48,20 +45,6 @@
return lz4.frame.compress(decompressed_input_bin), LZ4
-def apply_zstd_compression(decompressed_input_bin) -> tuple:
- """
- Apply ZSTD compression to the input
-
- Args:
- decompressed_input_bin: the binary to be compressed
-
- Returns:
- a tuple (compressed_result, ZSTD)
- """
-
- return zstd.compress(decompressed_input_bin), ZSTD
-
-
def apply_no_compression(decompressed_input_bin) -> tuple:
"""
No compression is applied to the input
@@ -118,8 +101,6 @@
# 1) Decompress or return the original stream
if compress_scheme == LZ4:
return lz4.frame.decompress(binary)
- elif compress_scheme == ZSTD:
- return zstd.decompress(binary)
elif compress_scheme == NO_COMPRESSION:
return binary
else:
| {"golden_diff": "diff --git a/syft/serde/compression.py b/syft/serde/compression.py\n--- a/syft/serde/compression.py\n+++ b/syft/serde/compression.py\n@@ -7,18 +7,15 @@\n from lz4 import ( # noqa: F401\n frame,\n ) # needed as otherwise we will get: module 'lz4' has no attribute 'frame'\n-import zstd\n \n from syft.exceptions import CompressionNotFoundException\n \n # COMPRESSION SCHEME INT CODES\n NO_COMPRESSION = 40\n LZ4 = 41\n-ZSTD = 42\n scheme_to_bytes = {\n NO_COMPRESSION: NO_COMPRESSION.to_bytes(1, byteorder=\"big\"),\n LZ4: LZ4.to_bytes(1, byteorder=\"big\"),\n- ZSTD: ZSTD.to_bytes(1, byteorder=\"big\"),\n }\n \n ## SECTION: chosen Compression Algorithm\n@@ -48,20 +45,6 @@\n return lz4.frame.compress(decompressed_input_bin), LZ4\n \n \n-def apply_zstd_compression(decompressed_input_bin) -> tuple:\n- \"\"\"\n- Apply ZSTD compression to the input\n-\n- Args:\n- decompressed_input_bin: the binary to be compressed\n-\n- Returns:\n- a tuple (compressed_result, ZSTD)\n- \"\"\"\n-\n- return zstd.compress(decompressed_input_bin), ZSTD\n-\n-\n def apply_no_compression(decompressed_input_bin) -> tuple:\n \"\"\"\n No compression is applied to the input\n@@ -118,8 +101,6 @@\n # 1) Decompress or return the original stream\n if compress_scheme == LZ4:\n return lz4.frame.decompress(binary)\n- elif compress_scheme == ZSTD:\n- return zstd.decompress(binary)\n elif compress_scheme == NO_COMPRESSION:\n return binary\n else:\n", "issue": "Remove ZSTD\n**Is your feature request related to a problem? Please describe.**\r\nZSTD is used for compression in our serde process. However we don't need extra compression as we move to Protobuf.\r\nZSTD is usually a source of problems when installing PySyft with different hacks to solve it.\r\n\r\n**Describe the solution you'd like**\r\nRemove ZSTD dependency.\r\nThis will require removing the tests and its use in serde.\r\n\r\n**Describe alternatives you've considered**\r\nProtobuf covers compression.\r\n\r\n**Additional context**\r\n\n", "before_files": [{"content": "\"\"\"\nThis file exists to provide one common place for all compression methods used in\nsimplifying and serializing PySyft objects.\n\"\"\"\n\nimport lz4\nfrom lz4 import ( # noqa: F401\n frame,\n) # needed as otherwise we will get: module 'lz4' has no attribute 'frame'\nimport zstd\n\nfrom syft.exceptions import CompressionNotFoundException\n\n# COMPRESSION SCHEME INT CODES\nNO_COMPRESSION = 40\nLZ4 = 41\nZSTD = 42\nscheme_to_bytes = {\n NO_COMPRESSION: NO_COMPRESSION.to_bytes(1, byteorder=\"big\"),\n LZ4: LZ4.to_bytes(1, byteorder=\"big\"),\n ZSTD: ZSTD.to_bytes(1, byteorder=\"big\"),\n}\n\n## SECTION: chosen Compression Algorithm\n\n\ndef _apply_compress_scheme(decompressed_input_bin) -> tuple:\n \"\"\"\n Apply the selected compression scheme.\n By default is used LZ4\n\n Args:\n decompressed_input_bin: the binary to be compressed\n \"\"\"\n return apply_lz4_compression(decompressed_input_bin)\n\n\ndef apply_lz4_compression(decompressed_input_bin) -> tuple:\n \"\"\"\n Apply LZ4 compression to the input\n\n Args:\n decompressed_input_bin: the binary to be compressed\n\n Returns:\n a tuple (compressed_result, LZ4)\n \"\"\"\n return lz4.frame.compress(decompressed_input_bin), LZ4\n\n\ndef apply_zstd_compression(decompressed_input_bin) -> tuple:\n \"\"\"\n Apply ZSTD compression to the input\n\n Args:\n decompressed_input_bin: the binary to be compressed\n\n Returns:\n a tuple (compressed_result, ZSTD)\n \"\"\"\n\n return zstd.compress(decompressed_input_bin), ZSTD\n\n\ndef apply_no_compression(decompressed_input_bin) -> tuple:\n \"\"\"\n No compression is applied to the input\n\n Args:\n decompressed_input_bin: the binary\n\n Returns:\n a tuple (the binary, LZ4)\n \"\"\"\n\n return decompressed_input_bin, NO_COMPRESSION\n\n\ndef _compress(decompressed_input_bin: bin) -> bin:\n \"\"\"\n This function compresses a binary using the function _apply_compress_scheme\n if the input has been already compressed in some step, it will return it as it is\n\n Args:\n decompressed_input_bin (bin): binary to be compressed\n\n Returns:\n bin: a compressed binary\n\n \"\"\"\n compress_stream, compress_scheme = _apply_compress_scheme(decompressed_input_bin)\n try:\n z = scheme_to_bytes[compress_scheme] + compress_stream\n return z\n except KeyError:\n raise CompressionNotFoundException(\n f\"Compression scheme not found for compression code: {str(compress_scheme)}\"\n )\n\n\ndef _decompress(binary: bin) -> bin:\n \"\"\"\n This function decompresses a binary using the scheme defined in the first byte of the input\n\n Args:\n binary (bin): a compressed binary\n\n Returns:\n bin: decompressed binary\n\n \"\"\"\n\n # check the 1-byte header to check the compression scheme used\n compress_scheme = binary[0]\n\n # remove the 1-byte header from the input stream\n binary = binary[1:]\n # 1) Decompress or return the original stream\n if compress_scheme == LZ4:\n return lz4.frame.decompress(binary)\n elif compress_scheme == ZSTD:\n return zstd.decompress(binary)\n elif compress_scheme == NO_COMPRESSION:\n return binary\n else:\n raise CompressionNotFoundException(\n f\"Compression scheme not found for compression code: {str(compress_scheme)}\"\n )\n", "path": "syft/serde/compression.py"}]} | 1,723 | 415 |
gh_patches_debug_10830 | rasdani/github-patches | git_diff | Mailu__Mailu-2177 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Manage user authentication and permissions
Currently no authentication is implemented. Multiple issues will have to be tackled:
- complete permission scheme or simple admin role plus admins per domain?
- how to store user passwords (shared format between Flask-admin and dovecot)?
- how should the initial use be created?
</issue>
<code>
[start of core/admin/start.py]
1 #!/usr/bin/python3
2
3 import os
4 import logging as log
5 import sys
6
7 log.basicConfig(stream=sys.stderr, level=os.environ.get("LOG_LEVEL", "INFO"))
8
9 os.system("flask mailu advertise")
10 os.system("flask db upgrade")
11
12 account = os.environ.get("INITIAL_ADMIN_ACCOUNT")
13 domain = os.environ.get("INITIAL_ADMIN_DOMAIN")
14 password = os.environ.get("INITIAL_ADMIN_PW")
15
16 if account is not None and domain is not None and password is not None:
17 mode = os.environ.get("INITIAL_ADMIN_MODE", default="ifmissing")
18 log.info("Creating initial admin accout %s@%s with mode %s",account,domain,mode)
19 os.system("flask mailu admin %s %s '%s' --mode %s" % (account, domain, password, mode))
20
21 def test_DNS():
22 import dns.resolver
23 import dns.exception
24 import dns.flags
25 import dns.rdtypes
26 import dns.rdatatype
27 import dns.rdataclass
28 import time
29 # DNS stub configured to do DNSSEC enabled queries
30 resolver = dns.resolver.Resolver()
31 resolver.use_edns(0, 0, 1232)
32 resolver.flags = dns.flags.AD | dns.flags.RD
33 nameservers = resolver.nameservers
34 for ns in nameservers:
35 resolver.nameservers=[ns]
36 while True:
37 try:
38 result = resolver.query('example.org', dns.rdatatype.A, dns.rdataclass.IN, lifetime=10)
39 except Exception as e:
40 log.critical("Your DNS resolver at %s is not working (%s). Please use another resolver or enable unbound via https://setup.mailu.io.", ns, e);
41 else:
42 if result.response.flags & dns.flags.AD:
43 break
44 log.critical("Your DNS resolver at %s isn't doing DNSSEC validation; Please use another resolver or enable unbound via https://setup.mailu.io.", ns)
45 time.sleep(5)
46
47 test_DNS()
48
49 start_command="".join([
50 "gunicorn --threads ", str(os.cpu_count()),
51 " -b :80 ",
52 "--access-logfile - " if (log.root.level<=log.INFO) else "",
53 "--error-logfile - ",
54 "--preload ",
55 "'mailu:create_app()'"])
56
57 os.system(start_command)
58
[end of core/admin/start.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/admin/start.py b/core/admin/start.py
--- a/core/admin/start.py
+++ b/core/admin/start.py
@@ -35,7 +35,7 @@
resolver.nameservers=[ns]
while True:
try:
- result = resolver.query('example.org', dns.rdatatype.A, dns.rdataclass.IN, lifetime=10)
+ result = resolver.resolve('example.org', dns.rdatatype.A, dns.rdataclass.IN, lifetime=10)
except Exception as e:
log.critical("Your DNS resolver at %s is not working (%s). Please use another resolver or enable unbound via https://setup.mailu.io.", ns, e);
else:
| {"golden_diff": "diff --git a/core/admin/start.py b/core/admin/start.py\n--- a/core/admin/start.py\n+++ b/core/admin/start.py\n@@ -35,7 +35,7 @@\n resolver.nameservers=[ns]\n while True:\n try:\n- result = resolver.query('example.org', dns.rdatatype.A, dns.rdataclass.IN, lifetime=10)\n+ result = resolver.resolve('example.org', dns.rdatatype.A, dns.rdataclass.IN, lifetime=10)\n except Exception as e:\n log.critical(\"Your DNS resolver at %s is not working (%s). Please use another resolver or enable unbound via https://setup.mailu.io.\", ns, e);\n else:\n", "issue": "Manage user authentication and permissions\nCurrently no authentication is implemented. Multiple issues will have to be tackled:\n- complete permission scheme or simple admin role plus admins per domain?\n- how to store user passwords (shared format between Flask-admin and dovecot)?\n- how should the initial use be created?\n\n", "before_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport logging as log\nimport sys\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"INFO\"))\n\nos.system(\"flask mailu advertise\")\nos.system(\"flask db upgrade\")\n\naccount = os.environ.get(\"INITIAL_ADMIN_ACCOUNT\")\ndomain = os.environ.get(\"INITIAL_ADMIN_DOMAIN\")\npassword = os.environ.get(\"INITIAL_ADMIN_PW\")\n\nif account is not None and domain is not None and password is not None:\n mode = os.environ.get(\"INITIAL_ADMIN_MODE\", default=\"ifmissing\")\n log.info(\"Creating initial admin accout %s@%s with mode %s\",account,domain,mode)\n os.system(\"flask mailu admin %s %s '%s' --mode %s\" % (account, domain, password, mode))\n\ndef test_DNS():\n import dns.resolver\n import dns.exception\n import dns.flags\n import dns.rdtypes\n import dns.rdatatype\n import dns.rdataclass\n import time\n # DNS stub configured to do DNSSEC enabled queries\n resolver = dns.resolver.Resolver()\n resolver.use_edns(0, 0, 1232)\n resolver.flags = dns.flags.AD | dns.flags.RD\n nameservers = resolver.nameservers\n for ns in nameservers:\n resolver.nameservers=[ns]\n while True:\n try:\n result = resolver.query('example.org', dns.rdatatype.A, dns.rdataclass.IN, lifetime=10)\n except Exception as e:\n log.critical(\"Your DNS resolver at %s is not working (%s). Please use another resolver or enable unbound via https://setup.mailu.io.\", ns, e);\n else:\n if result.response.flags & dns.flags.AD:\n break\n log.critical(\"Your DNS resolver at %s isn't doing DNSSEC validation; Please use another resolver or enable unbound via https://setup.mailu.io.\", ns)\n time.sleep(5)\n\ntest_DNS()\n\nstart_command=\"\".join([\n \"gunicorn --threads \", str(os.cpu_count()),\n \" -b :80 \",\n \"--access-logfile - \" if (log.root.level<=log.INFO) else \"\",\n \"--error-logfile - \",\n \"--preload \",\n \"'mailu:create_app()'\"])\n\nos.system(start_command)\n", "path": "core/admin/start.py"}]} | 1,206 | 153 |
gh_patches_debug_20282 | rasdani/github-patches | git_diff | PaddlePaddle__models-449 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Need to set the version of CTC decoders formally
</issue>
<code>
[start of deep_speech_2/decoders/swig/setup.py]
1 """Script to build and install decoder package."""
2 from __future__ import absolute_import
3 from __future__ import division
4 from __future__ import print_function
5
6 from setuptools import setup, Extension, distutils
7 import glob
8 import platform
9 import os, sys
10 import multiprocessing.pool
11 import argparse
12
13 parser = argparse.ArgumentParser(description=__doc__)
14 parser.add_argument(
15 "--num_processes",
16 default=1,
17 type=int,
18 help="Number of cpu processes to build package. (default: %(default)d)")
19 args = parser.parse_known_args()
20
21 # reconstruct sys.argv to pass to setup below
22 sys.argv = [sys.argv[0]] + args[1]
23
24
25 # monkey-patch for parallel compilation
26 # See: https://stackoverflow.com/a/13176803
27 def parallelCCompile(self,
28 sources,
29 output_dir=None,
30 macros=None,
31 include_dirs=None,
32 debug=0,
33 extra_preargs=None,
34 extra_postargs=None,
35 depends=None):
36 # those lines are copied from distutils.ccompiler.CCompiler directly
37 macros, objects, extra_postargs, pp_opts, build = self._setup_compile(
38 output_dir, macros, include_dirs, sources, depends, extra_postargs)
39 cc_args = self._get_cc_args(pp_opts, debug, extra_preargs)
40
41 # parallel code
42 def _single_compile(obj):
43 try:
44 src, ext = build[obj]
45 except KeyError:
46 return
47 self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
48
49 # convert to list, imap is evaluated on-demand
50 thread_pool = multiprocessing.pool.ThreadPool(args[0].num_processes)
51 list(thread_pool.imap(_single_compile, objects))
52 return objects
53
54
55 def compile_test(header, library):
56 dummy_path = os.path.join(os.path.dirname(__file__), "dummy")
57 command = "bash -c \"g++ -include " + header \
58 + " -l" + library + " -x c++ - <<<'int main() {}' -o " \
59 + dummy_path + " >/dev/null 2>/dev/null && rm " \
60 + dummy_path + " 2>/dev/null\""
61 return os.system(command) == 0
62
63
64 # hack compile to support parallel compiling
65 distutils.ccompiler.CCompiler.compile = parallelCCompile
66
67 FILES = glob.glob('kenlm/util/*.cc') \
68 + glob.glob('kenlm/lm/*.cc') \
69 + glob.glob('kenlm/util/double-conversion/*.cc')
70
71 FILES += glob.glob('openfst-1.6.3/src/lib/*.cc')
72
73 # FILES + glob.glob('glog/src/*.cc')
74 FILES = [
75 fn for fn in FILES
76 if not (fn.endswith('main.cc') or fn.endswith('test.cc') or fn.endswith(
77 'unittest.cc'))
78 ]
79
80 LIBS = ['stdc++']
81 if platform.system() != 'Darwin':
82 LIBS.append('rt')
83
84 ARGS = ['-O3', '-DNDEBUG', '-DKENLM_MAX_ORDER=6', '-std=c++11']
85
86 if compile_test('zlib.h', 'z'):
87 ARGS.append('-DHAVE_ZLIB')
88 LIBS.append('z')
89
90 if compile_test('bzlib.h', 'bz2'):
91 ARGS.append('-DHAVE_BZLIB')
92 LIBS.append('bz2')
93
94 if compile_test('lzma.h', 'lzma'):
95 ARGS.append('-DHAVE_XZLIB')
96 LIBS.append('lzma')
97
98 os.system('swig -python -c++ ./decoders.i')
99
100 decoders_module = [
101 Extension(
102 name='_swig_decoders',
103 sources=FILES + glob.glob('*.cxx') + glob.glob('*.cpp'),
104 language='c++',
105 include_dirs=[
106 '.',
107 'kenlm',
108 'openfst-1.6.3/src/include',
109 'ThreadPool',
110 #'glog/src'
111 ],
112 libraries=LIBS,
113 extra_compile_args=ARGS)
114 ]
115
116 setup(
117 name='swig_decoders',
118 version='0.1',
119 description="""CTC decoders""",
120 ext_modules=decoders_module,
121 py_modules=['swig_decoders'], )
122
[end of deep_speech_2/decoders/swig/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/deep_speech_2/decoders/swig/setup.py b/deep_speech_2/decoders/swig/setup.py
--- a/deep_speech_2/decoders/swig/setup.py
+++ b/deep_speech_2/decoders/swig/setup.py
@@ -70,7 +70,6 @@
FILES += glob.glob('openfst-1.6.3/src/lib/*.cc')
-# FILES + glob.glob('glog/src/*.cc')
FILES = [
fn for fn in FILES
if not (fn.endswith('main.cc') or fn.endswith('test.cc') or fn.endswith(
@@ -107,7 +106,6 @@
'kenlm',
'openfst-1.6.3/src/include',
'ThreadPool',
- #'glog/src'
],
libraries=LIBS,
extra_compile_args=ARGS)
@@ -115,7 +113,7 @@
setup(
name='swig_decoders',
- version='0.1',
+ version='1.0',
description="""CTC decoders""",
ext_modules=decoders_module,
py_modules=['swig_decoders'], )
| {"golden_diff": "diff --git a/deep_speech_2/decoders/swig/setup.py b/deep_speech_2/decoders/swig/setup.py\n--- a/deep_speech_2/decoders/swig/setup.py\n+++ b/deep_speech_2/decoders/swig/setup.py\n@@ -70,7 +70,6 @@\n \n FILES += glob.glob('openfst-1.6.3/src/lib/*.cc')\n \n-# FILES + glob.glob('glog/src/*.cc')\n FILES = [\n fn for fn in FILES\n if not (fn.endswith('main.cc') or fn.endswith('test.cc') or fn.endswith(\n@@ -107,7 +106,6 @@\n 'kenlm',\n 'openfst-1.6.3/src/include',\n 'ThreadPool',\n- #'glog/src'\n ],\n libraries=LIBS,\n extra_compile_args=ARGS)\n@@ -115,7 +113,7 @@\n \n setup(\n name='swig_decoders',\n- version='0.1',\n+ version='1.0',\n description=\"\"\"CTC decoders\"\"\",\n ext_modules=decoders_module,\n py_modules=['swig_decoders'], )\n", "issue": "Need to set the version of CTC decoders formally\n\n", "before_files": [{"content": "\"\"\"Script to build and install decoder package.\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nfrom setuptools import setup, Extension, distutils\nimport glob\nimport platform\nimport os, sys\nimport multiprocessing.pool\nimport argparse\n\nparser = argparse.ArgumentParser(description=__doc__)\nparser.add_argument(\n \"--num_processes\",\n default=1,\n type=int,\n help=\"Number of cpu processes to build package. (default: %(default)d)\")\nargs = parser.parse_known_args()\n\n# reconstruct sys.argv to pass to setup below\nsys.argv = [sys.argv[0]] + args[1]\n\n\n# monkey-patch for parallel compilation\n# See: https://stackoverflow.com/a/13176803\ndef parallelCCompile(self,\n sources,\n output_dir=None,\n macros=None,\n include_dirs=None,\n debug=0,\n extra_preargs=None,\n extra_postargs=None,\n depends=None):\n # those lines are copied from distutils.ccompiler.CCompiler directly\n macros, objects, extra_postargs, pp_opts, build = self._setup_compile(\n output_dir, macros, include_dirs, sources, depends, extra_postargs)\n cc_args = self._get_cc_args(pp_opts, debug, extra_preargs)\n\n # parallel code\n def _single_compile(obj):\n try:\n src, ext = build[obj]\n except KeyError:\n return\n self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)\n\n # convert to list, imap is evaluated on-demand\n thread_pool = multiprocessing.pool.ThreadPool(args[0].num_processes)\n list(thread_pool.imap(_single_compile, objects))\n return objects\n\n\ndef compile_test(header, library):\n dummy_path = os.path.join(os.path.dirname(__file__), \"dummy\")\n command = \"bash -c \\\"g++ -include \" + header \\\n + \" -l\" + library + \" -x c++ - <<<'int main() {}' -o \" \\\n + dummy_path + \" >/dev/null 2>/dev/null && rm \" \\\n + dummy_path + \" 2>/dev/null\\\"\"\n return os.system(command) == 0\n\n\n# hack compile to support parallel compiling\ndistutils.ccompiler.CCompiler.compile = parallelCCompile\n\nFILES = glob.glob('kenlm/util/*.cc') \\\n + glob.glob('kenlm/lm/*.cc') \\\n + glob.glob('kenlm/util/double-conversion/*.cc')\n\nFILES += glob.glob('openfst-1.6.3/src/lib/*.cc')\n\n# FILES + glob.glob('glog/src/*.cc')\nFILES = [\n fn for fn in FILES\n if not (fn.endswith('main.cc') or fn.endswith('test.cc') or fn.endswith(\n 'unittest.cc'))\n]\n\nLIBS = ['stdc++']\nif platform.system() != 'Darwin':\n LIBS.append('rt')\n\nARGS = ['-O3', '-DNDEBUG', '-DKENLM_MAX_ORDER=6', '-std=c++11']\n\nif compile_test('zlib.h', 'z'):\n ARGS.append('-DHAVE_ZLIB')\n LIBS.append('z')\n\nif compile_test('bzlib.h', 'bz2'):\n ARGS.append('-DHAVE_BZLIB')\n LIBS.append('bz2')\n\nif compile_test('lzma.h', 'lzma'):\n ARGS.append('-DHAVE_XZLIB')\n LIBS.append('lzma')\n\nos.system('swig -python -c++ ./decoders.i')\n\ndecoders_module = [\n Extension(\n name='_swig_decoders',\n sources=FILES + glob.glob('*.cxx') + glob.glob('*.cpp'),\n language='c++',\n include_dirs=[\n '.',\n 'kenlm',\n 'openfst-1.6.3/src/include',\n 'ThreadPool',\n #'glog/src'\n ],\n libraries=LIBS,\n extra_compile_args=ARGS)\n]\n\nsetup(\n name='swig_decoders',\n version='0.1',\n description=\"\"\"CTC decoders\"\"\",\n ext_modules=decoders_module,\n py_modules=['swig_decoders'], )\n", "path": "deep_speech_2/decoders/swig/setup.py"}]} | 1,728 | 266 |
gh_patches_debug_23631 | rasdani/github-patches | git_diff | e-valuation__EvaP-762 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Test management commands
Because in three years, run_tasks will silently fail on the production system and nobody will notice.
- [x] **run_tasks** - shouldn't be too hard and is rather important
- [x] **anonymize** - might be a bit of work to cover it properly, but should be straightforward.
- [x] **refresh_results_cache** - should be easy
- [x] **dump_testdata** - don't know how not to overwrite the file during testing, but should be possible
the other commands are already tested or rather unsuitable for testing
- [x] **merge_users** - already has a test (#703) and is shown to be pretty broken.
- [x] **run** - don't know how to test this and there isn't really anything that could break. still, somehow running it to check that it doesn't crash right away on e.g. imports would be cool
- [x] **reload_testdata** - don't know whether it's possible at all to test that, i mean it drops the whole database...
- [ ] **import_ad** - we never used it and i don't know whether it's feasible to mock ldap
use `self.stdout.write` instead of `print` and `call_command("command_name", stdout=StringIO())` to avoid console output during tests. don't know what to do about calls to `input`.
</issue>
<code>
[start of evap/evaluation/management/commands/import_ad.py]
1 import getpass
2 import ldap
3 import sys
4
5 from django.core.management.base import BaseCommand
6
7 from evap.evaluation.models import UserProfile
8
9
10 class Command(BaseCommand):
11 args = '<ldap server> <username>'
12 help = 'Imports user data from Active Directory. The username should be specified with realm.'
13
14 def handle(self, *args, **options):
15 try:
16 # connect
17 l = ldap.initialize(args[0])
18
19 # bind
20 l.bind_s(args[1], getpass.getpass("AD Password: "))
21
22 # find all users
23 result = l.search_s("OU=INSTITUT,DC=hpi,DC=uni-potsdam,DC=de", ldap.SCOPE_SUBTREE, filterstr="(&(&(objectClass=user)(!(objectClass=computer)))(givenName=*)(sn=*)(mail=*))")
24 for _, attrs in result:
25 try:
26 user = UserProfile.objects.get(username__iexact=attrs['sAMAccountName'][0])
27 user.first_name = attrs['givenName'][0]
28 user.last_name = attrs['sn'][0]
29 user.email = attrs['mail'][0]
30 user.save()
31
32 print("Successfully updated: '{0}'".format(user.username))
33 except UserProfile.DoesNotExist:
34 pass
35 except Exception as e:
36 print(e)
37
38 l.unbind_s()
39
40 except KeyboardInterrupt:
41 sys.stderr.write("\nOperation cancelled.\n")
42 sys.exit(1)
43
[end of evap/evaluation/management/commands/import_ad.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/evap/evaluation/management/commands/import_ad.py b/evap/evaluation/management/commands/import_ad.py
deleted file mode 100644
--- a/evap/evaluation/management/commands/import_ad.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import getpass
-import ldap
-import sys
-
-from django.core.management.base import BaseCommand
-
-from evap.evaluation.models import UserProfile
-
-
-class Command(BaseCommand):
- args = '<ldap server> <username>'
- help = 'Imports user data from Active Directory. The username should be specified with realm.'
-
- def handle(self, *args, **options):
- try:
- # connect
- l = ldap.initialize(args[0])
-
- # bind
- l.bind_s(args[1], getpass.getpass("AD Password: "))
-
- # find all users
- result = l.search_s("OU=INSTITUT,DC=hpi,DC=uni-potsdam,DC=de", ldap.SCOPE_SUBTREE, filterstr="(&(&(objectClass=user)(!(objectClass=computer)))(givenName=*)(sn=*)(mail=*))")
- for _, attrs in result:
- try:
- user = UserProfile.objects.get(username__iexact=attrs['sAMAccountName'][0])
- user.first_name = attrs['givenName'][0]
- user.last_name = attrs['sn'][0]
- user.email = attrs['mail'][0]
- user.save()
-
- print("Successfully updated: '{0}'".format(user.username))
- except UserProfile.DoesNotExist:
- pass
- except Exception as e:
- print(e)
-
- l.unbind_s()
-
- except KeyboardInterrupt:
- sys.stderr.write("\nOperation cancelled.\n")
- sys.exit(1)
| {"golden_diff": "diff --git a/evap/evaluation/management/commands/import_ad.py b/evap/evaluation/management/commands/import_ad.py\ndeleted file mode 100644\n--- a/evap/evaluation/management/commands/import_ad.py\n+++ /dev/null\n@@ -1,42 +0,0 @@\n-import getpass\n-import ldap\n-import sys\n-\n-from django.core.management.base import BaseCommand\n-\n-from evap.evaluation.models import UserProfile\n-\n-\n-class Command(BaseCommand):\n- args = '<ldap server> <username>'\n- help = 'Imports user data from Active Directory. The username should be specified with realm.'\n-\n- def handle(self, *args, **options):\n- try:\n- # connect\n- l = ldap.initialize(args[0])\n-\n- # bind\n- l.bind_s(args[1], getpass.getpass(\"AD Password: \"))\n-\n- # find all users\n- result = l.search_s(\"OU=INSTITUT,DC=hpi,DC=uni-potsdam,DC=de\", ldap.SCOPE_SUBTREE, filterstr=\"(&(&(objectClass=user)(!(objectClass=computer)))(givenName=*)(sn=*)(mail=*))\")\n- for _, attrs in result:\n- try:\n- user = UserProfile.objects.get(username__iexact=attrs['sAMAccountName'][0])\n- user.first_name = attrs['givenName'][0]\n- user.last_name = attrs['sn'][0]\n- user.email = attrs['mail'][0]\n- user.save()\n-\n- print(\"Successfully updated: '{0}'\".format(user.username))\n- except UserProfile.DoesNotExist:\n- pass\n- except Exception as e:\n- print(e)\n-\n- l.unbind_s()\n-\n- except KeyboardInterrupt:\n- sys.stderr.write(\"\\nOperation cancelled.\\n\")\n- sys.exit(1)\n", "issue": "Test management commands\nBecause in three years, run_tasks will silently fail on the production system and nobody will notice.\n- [x] **run_tasks** - shouldn't be too hard and is rather important\n- [x] **anonymize** - might be a bit of work to cover it properly, but should be straightforward.\n- [x] **refresh_results_cache** - should be easy\n- [x] **dump_testdata** - don't know how not to overwrite the file during testing, but should be possible\n\nthe other commands are already tested or rather unsuitable for testing\n- [x] **merge_users** - already has a test (#703) and is shown to be pretty broken.\n- [x] **run** - don't know how to test this and there isn't really anything that could break. still, somehow running it to check that it doesn't crash right away on e.g. imports would be cool\n- [x] **reload_testdata** - don't know whether it's possible at all to test that, i mean it drops the whole database...\n- [ ] **import_ad** - we never used it and i don't know whether it's feasible to mock ldap\n\nuse `self.stdout.write` instead of `print` and `call_command(\"command_name\", stdout=StringIO())` to avoid console output during tests. don't know what to do about calls to `input`.\n\n", "before_files": [{"content": "import getpass\nimport ldap\nimport sys\n\nfrom django.core.management.base import BaseCommand\n\nfrom evap.evaluation.models import UserProfile\n\n\nclass Command(BaseCommand):\n args = '<ldap server> <username>'\n help = 'Imports user data from Active Directory. The username should be specified with realm.'\n\n def handle(self, *args, **options):\n try:\n # connect\n l = ldap.initialize(args[0])\n\n # bind\n l.bind_s(args[1], getpass.getpass(\"AD Password: \"))\n\n # find all users\n result = l.search_s(\"OU=INSTITUT,DC=hpi,DC=uni-potsdam,DC=de\", ldap.SCOPE_SUBTREE, filterstr=\"(&(&(objectClass=user)(!(objectClass=computer)))(givenName=*)(sn=*)(mail=*))\")\n for _, attrs in result:\n try:\n user = UserProfile.objects.get(username__iexact=attrs['sAMAccountName'][0])\n user.first_name = attrs['givenName'][0]\n user.last_name = attrs['sn'][0]\n user.email = attrs['mail'][0]\n user.save()\n\n print(\"Successfully updated: '{0}'\".format(user.username))\n except UserProfile.DoesNotExist:\n pass\n except Exception as e:\n print(e)\n\n l.unbind_s()\n\n except KeyboardInterrupt:\n sys.stderr.write(\"\\nOperation cancelled.\\n\")\n sys.exit(1)\n", "path": "evap/evaluation/management/commands/import_ad.py"}]} | 1,234 | 411 |
gh_patches_debug_19401 | rasdani/github-patches | git_diff | geopandas__geopandas-643 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
GeoDataFrame.to_file fail on bool column
When converting GeoDataFrame with bool column to shp file, got following error
```sh
ValueError: 'bool' is not in list
```
</issue>
<code>
[start of geopandas/io/file.py]
1 import os
2
3 import fiona
4 import numpy as np
5 import six
6
7 from geopandas import GeoDataFrame
8
9 # Adapted from pandas.io.common
10 if six.PY3:
11 from urllib.request import urlopen as _urlopen
12 from urllib.parse import urlparse as parse_url
13 from urllib.parse import uses_relative, uses_netloc, uses_params
14 else:
15 from urllib2 import urlopen as _urlopen
16 from urlparse import urlparse as parse_url
17 from urlparse import uses_relative, uses_netloc, uses_params
18
19 _VALID_URLS = set(uses_relative + uses_netloc + uses_params)
20 _VALID_URLS.discard('')
21
22
23 def _is_url(url):
24 """Check to see if *url* has a valid protocol."""
25 try:
26 return parse_url(url).scheme in _VALID_URLS
27 except:
28 return False
29
30
31 def read_file(filename, **kwargs):
32 """
33 Returns a GeoDataFrame from a file or URL.
34
35 Parameters
36 ----------
37 filename: str
38 Either the absolute or relative path to the file or URL to
39 be opened.
40 **kwargs:
41 Keyword args to be passed to the `open` or `BytesCollection` method
42 in the fiona library when opening the file. For more information on
43 possible keywords, type:
44 ``import fiona; help(fiona.open)``
45
46 Examples
47 --------
48 >>> df = geopandas.read_file("nybb.shp")
49
50 Returns
51 -------
52 geodataframe : GeoDataFrame
53 """
54 bbox = kwargs.pop('bbox', None)
55 if _is_url(filename):
56 req = _urlopen(filename)
57 path_or_bytes = req.read()
58 reader = fiona.BytesCollection
59 else:
60 path_or_bytes = filename
61 reader = fiona.open
62 with reader(path_or_bytes, **kwargs) as f:
63 crs = f.crs
64 if bbox is not None:
65 assert len(bbox) == 4
66 f_filt = f.filter(bbox=bbox)
67 else:
68 f_filt = f
69 gdf = GeoDataFrame.from_features(f_filt, crs=crs)
70 # re-order with column order from metadata, with geometry last
71 columns = list(f.meta["schema"]["properties"]) + ["geometry"]
72 gdf = gdf[columns]
73
74 return gdf
75
76
77 def to_file(df, filename, driver="ESRI Shapefile", schema=None,
78 **kwargs):
79 """
80 Write this GeoDataFrame to an OGR data source
81
82 A dictionary of supported OGR providers is available via:
83 >>> import fiona
84 >>> fiona.supported_drivers
85
86 Parameters
87 ----------
88 df : GeoDataFrame to be written
89 filename : string
90 File path or file handle to write to.
91 driver : string, default 'ESRI Shapefile'
92 The OGR format driver used to write the vector file.
93 schema : dict, default None
94 If specified, the schema dictionary is passed to Fiona to
95 better control how the file is written. If None, GeoPandas
96 will determine the schema based on each column's dtype
97
98 The *kwargs* are passed to fiona.open and can be used to write
99 to multi-layer data, store data within archives (zip files), etc.
100 """
101 if schema is None:
102 schema = infer_schema(df)
103 filename = os.path.abspath(os.path.expanduser(filename))
104 with fiona.drivers():
105 with fiona.open(filename, 'w', driver=driver, crs=df.crs,
106 schema=schema, **kwargs) as colxn:
107 colxn.writerecords(df.iterfeatures())
108
109
110 def infer_schema(df):
111 try:
112 from collections import OrderedDict
113 except ImportError:
114 from ordereddict import OrderedDict
115
116 def convert_type(in_type):
117 if in_type == object:
118 return 'str'
119 out_type = type(np.asscalar(np.zeros(1, in_type))).__name__
120 if out_type == 'long':
121 out_type = 'int'
122 return out_type
123
124 properties = OrderedDict([
125 (col, convert_type(_type)) for col, _type in
126 zip(df.columns, df.dtypes) if col != df._geometry_column_name
127 ])
128
129 geom_type = _common_geom_type(df)
130 if not geom_type:
131 raise ValueError("Geometry column cannot contain mutiple "
132 "geometry types when writing to file.")
133
134 schema = {'geometry': geom_type, 'properties': properties}
135
136 return schema
137
138
139 def _common_geom_type(df):
140 # Need to check geom_types before we write to file...
141 # Some (most?) providers expect a single geometry type:
142 # Point, LineString, or Polygon
143 geom_types = df.geometry.geom_type.unique()
144
145 from os.path import commonprefix # To find longest common prefix
146 geom_type = commonprefix([g[::-1] for g in geom_types if g])[::-1] # Reverse
147 if not geom_type:
148 geom_type = None
149
150 return geom_type
151
[end of geopandas/io/file.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/geopandas/io/file.py b/geopandas/io/file.py
--- a/geopandas/io/file.py
+++ b/geopandas/io/file.py
@@ -113,16 +113,20 @@
except ImportError:
from ordereddict import OrderedDict
- def convert_type(in_type):
+ def convert_type(column, in_type):
if in_type == object:
return 'str'
out_type = type(np.asscalar(np.zeros(1, in_type))).__name__
if out_type == 'long':
out_type = 'int'
+ if out_type == 'bool':
+ raise ValueError('column "{}" is boolean type, '.format(column) +
+ 'which is unsupported in file writing. '
+ 'Consider casting the column to int type.')
return out_type
properties = OrderedDict([
- (col, convert_type(_type)) for col, _type in
+ (col, convert_type(col, _type)) for col, _type in
zip(df.columns, df.dtypes) if col != df._geometry_column_name
])
| {"golden_diff": "diff --git a/geopandas/io/file.py b/geopandas/io/file.py\n--- a/geopandas/io/file.py\n+++ b/geopandas/io/file.py\n@@ -113,16 +113,20 @@\n except ImportError:\n from ordereddict import OrderedDict\n \n- def convert_type(in_type):\n+ def convert_type(column, in_type):\n if in_type == object:\n return 'str'\n out_type = type(np.asscalar(np.zeros(1, in_type))).__name__\n if out_type == 'long':\n out_type = 'int'\n+ if out_type == 'bool':\n+ raise ValueError('column \"{}\" is boolean type, '.format(column) +\n+ 'which is unsupported in file writing. '\n+ 'Consider casting the column to int type.')\n return out_type\n \n properties = OrderedDict([\n- (col, convert_type(_type)) for col, _type in\n+ (col, convert_type(col, _type)) for col, _type in\n zip(df.columns, df.dtypes) if col != df._geometry_column_name\n ])\n", "issue": "GeoDataFrame.to_file fail on bool column\nWhen converting GeoDataFrame with bool column to shp file, got following error\r\n```sh\r\nValueError: 'bool' is not in list\r\n```\n", "before_files": [{"content": "import os\n\nimport fiona\nimport numpy as np\nimport six\n\nfrom geopandas import GeoDataFrame\n\n# Adapted from pandas.io.common\nif six.PY3:\n from urllib.request import urlopen as _urlopen\n from urllib.parse import urlparse as parse_url\n from urllib.parse import uses_relative, uses_netloc, uses_params\nelse:\n from urllib2 import urlopen as _urlopen\n from urlparse import urlparse as parse_url\n from urlparse import uses_relative, uses_netloc, uses_params\n\n_VALID_URLS = set(uses_relative + uses_netloc + uses_params)\n_VALID_URLS.discard('')\n\n\ndef _is_url(url):\n \"\"\"Check to see if *url* has a valid protocol.\"\"\"\n try:\n return parse_url(url).scheme in _VALID_URLS\n except:\n return False\n\n\ndef read_file(filename, **kwargs):\n \"\"\"\n Returns a GeoDataFrame from a file or URL.\n\n Parameters\n ----------\n filename: str\n Either the absolute or relative path to the file or URL to\n be opened.\n **kwargs:\n Keyword args to be passed to the `open` or `BytesCollection` method\n in the fiona library when opening the file. For more information on\n possible keywords, type:\n ``import fiona; help(fiona.open)``\n\n Examples\n --------\n >>> df = geopandas.read_file(\"nybb.shp\")\n\n Returns\n -------\n geodataframe : GeoDataFrame\n \"\"\"\n bbox = kwargs.pop('bbox', None)\n if _is_url(filename):\n req = _urlopen(filename)\n path_or_bytes = req.read()\n reader = fiona.BytesCollection\n else:\n path_or_bytes = filename\n reader = fiona.open\n with reader(path_or_bytes, **kwargs) as f:\n crs = f.crs\n if bbox is not None:\n assert len(bbox) == 4\n f_filt = f.filter(bbox=bbox)\n else:\n f_filt = f\n gdf = GeoDataFrame.from_features(f_filt, crs=crs)\n # re-order with column order from metadata, with geometry last\n columns = list(f.meta[\"schema\"][\"properties\"]) + [\"geometry\"]\n gdf = gdf[columns]\n\n return gdf\n\n\ndef to_file(df, filename, driver=\"ESRI Shapefile\", schema=None,\n **kwargs):\n \"\"\"\n Write this GeoDataFrame to an OGR data source\n\n A dictionary of supported OGR providers is available via:\n >>> import fiona\n >>> fiona.supported_drivers\n\n Parameters\n ----------\n df : GeoDataFrame to be written\n filename : string\n File path or file handle to write to.\n driver : string, default 'ESRI Shapefile'\n The OGR format driver used to write the vector file.\n schema : dict, default None\n If specified, the schema dictionary is passed to Fiona to\n better control how the file is written. If None, GeoPandas\n will determine the schema based on each column's dtype\n\n The *kwargs* are passed to fiona.open and can be used to write\n to multi-layer data, store data within archives (zip files), etc.\n \"\"\"\n if schema is None:\n schema = infer_schema(df)\n filename = os.path.abspath(os.path.expanduser(filename))\n with fiona.drivers():\n with fiona.open(filename, 'w', driver=driver, crs=df.crs,\n schema=schema, **kwargs) as colxn:\n colxn.writerecords(df.iterfeatures())\n\n\ndef infer_schema(df):\n try:\n from collections import OrderedDict\n except ImportError:\n from ordereddict import OrderedDict\n\n def convert_type(in_type):\n if in_type == object:\n return 'str'\n out_type = type(np.asscalar(np.zeros(1, in_type))).__name__\n if out_type == 'long':\n out_type = 'int'\n return out_type\n\n properties = OrderedDict([\n (col, convert_type(_type)) for col, _type in\n zip(df.columns, df.dtypes) if col != df._geometry_column_name\n ])\n\n geom_type = _common_geom_type(df)\n if not geom_type:\n raise ValueError(\"Geometry column cannot contain mutiple \"\n \"geometry types when writing to file.\")\n\n schema = {'geometry': geom_type, 'properties': properties}\n\n return schema\n\n\ndef _common_geom_type(df):\n # Need to check geom_types before we write to file...\n # Some (most?) providers expect a single geometry type:\n # Point, LineString, or Polygon\n geom_types = df.geometry.geom_type.unique()\n\n from os.path import commonprefix # To find longest common prefix\n geom_type = commonprefix([g[::-1] for g in geom_types if g])[::-1] # Reverse\n if not geom_type:\n geom_type = None\n\n return geom_type\n", "path": "geopandas/io/file.py"}]} | 2,010 | 243 |
gh_patches_debug_39731 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-1835 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Redirect a non-new user to Newsfeed instead of My Organisations
</issue>
<code>
[start of ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py]
1 import datetime
2 import dateutil
3
4 import ckan.controllers.user as ckan_user
5 import ckan.lib.helpers as h
6 import ckan.lib.base as base
7 from ckan.common import _, c, g, request
8 import ckan.logic as logic
9 from pylons import config
10
11 get_action = logic.get_action
12
13 class LoginController(ckan_user.UserController):
14 def logged_in(self):
15 # redirect if needed
16 came_from = request.params.get('came_from', '')
17 if self._sane_came_from(came_from):
18 return h.redirect_to(str(came_from))
19
20 if c.user:
21 context = None
22 data_dict = {'id': c.user}
23
24 user_dict = get_action('user_show')(context, data_dict)
25
26 if 'created' in user_dict:
27 time_passed = datetime.datetime.now() - dateutil.parser.parse( user_dict['created'] )
28 else:
29 time_passed = None
30
31 if not user_dict['activity'] and time_passed and time_passed.days < 3:
32 #/dataset/new
33 contribute_url = h.url_for(controller='package', action='new')
34 # message = ''' Now that you've registered an account , you can <a href="%s">start adding datasets</a>.
35 # If you want to associate this dataset with an organization, either click on "My Organizations" below
36 # to create a new organization or ask the admin of an existing organization to add you as a member.''' % contribute_url
37 #h.flash_success(_(message), True)
38 else:
39 h.flash_success(_("%s is now logged in") %
40 user_dict['display_name'])
41 #return self.me()
42 # Instead redirect to My orgs page
43 return h.redirect_to(controller='user',
44 action='dashboard_organizations')
45 else:
46 err = _('Login failed. Bad username or password.')
47 if g.openid_enabled:
48 err += _(' (Or if using OpenID, it hasn\'t been associated '
49 'with a user account.)')
50 if h.asbool(config.get('ckan.legacy_templates', 'false')):
51 h.flash_error(err)
52 h.redirect_to(controller='user',
53 action='login', came_from=came_from)
54 else:
55 return self.login(error=err)
56
57 def contribute(self, error=None):
58 self.login(error)
59 vars = {'contribute':True}
60 return base.render('user/login.html', extra_vars=vars)
[end of ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py b/ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py
--- a/ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py
+++ b/ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py
@@ -10,7 +10,9 @@
get_action = logic.get_action
+
class LoginController(ckan_user.UserController):
+
def logged_in(self):
# redirect if needed
came_from = request.params.get('came_from', '')
@@ -24,24 +26,22 @@
user_dict = get_action('user_show')(context, data_dict)
if 'created' in user_dict:
- time_passed = datetime.datetime.now() - dateutil.parser.parse( user_dict['created'] )
+ time_passed = datetime.datetime.now(
+ ) - dateutil.parser.parse(user_dict['created'])
else:
- time_passed = None
-
+ time_passed = None
if not user_dict['activity'] and time_passed and time_passed.days < 3:
- #/dataset/new
- contribute_url = h.url_for(controller='package', action='new')
- # message = ''' Now that you've registered an account , you can <a href="%s">start adding datasets</a>.
- # If you want to associate this dataset with an organization, either click on "My Organizations" below
+ #/dataset/new
+ contribute_url = h.url_for(controller='package', action='new')
+ # message = ''' Now that you've registered an account , you can <a href="%s">start adding datasets</a>.
+ # If you want to associate this dataset with an organization, either click on "My Organizations" below
# to create a new organization or ask the admin of an existing organization to add you as a member.''' % contribute_url
#h.flash_success(_(message), True)
+ return h.redirect_to(controller='user', action='dashboard_organizations')
else:
h.flash_success(_("%s is now logged in") %
- user_dict['display_name'])
- #return self.me()
- # Instead redirect to My orgs page
- return h.redirect_to(controller='user',
- action='dashboard_organizations')
+ user_dict['display_name'])
+ return self.me()
else:
err = _('Login failed. Bad username or password.')
if g.openid_enabled:
@@ -53,8 +53,8 @@
action='login', came_from=came_from)
else:
return self.login(error=err)
-
+
def contribute(self, error=None):
self.login(error)
- vars = {'contribute':True}
- return base.render('user/login.html', extra_vars=vars)
\ No newline at end of file
+ vars = {'contribute': True}
+ return base.render('user/login.html', extra_vars=vars)
| {"golden_diff": "diff --git a/ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py b/ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py\n--- a/ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py\n+++ b/ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py\n@@ -10,7 +10,9 @@\n \n get_action = logic.get_action\n \n+\n class LoginController(ckan_user.UserController):\n+\n def logged_in(self):\n # redirect if needed\n came_from = request.params.get('came_from', '')\n@@ -24,24 +26,22 @@\n user_dict = get_action('user_show')(context, data_dict)\n \n if 'created' in user_dict:\n- time_passed = datetime.datetime.now() - dateutil.parser.parse( user_dict['created'] )\n+ time_passed = datetime.datetime.now(\n+ ) - dateutil.parser.parse(user_dict['created'])\n else:\n- time_passed = None \n- \n+ time_passed = None\n if not user_dict['activity'] and time_passed and time_passed.days < 3:\n- #/dataset/new \n- contribute_url = h.url_for(controller='package', action='new')\n- # message = ''' Now that you've registered an account , you can <a href=\"%s\">start adding datasets</a>. \n- # If you want to associate this dataset with an organization, either click on \"My Organizations\" below \n+ #/dataset/new\n+ contribute_url = h.url_for(controller='package', action='new')\n+ # message = ''' Now that you've registered an account , you can <a href=\"%s\">start adding datasets</a>.\n+ # If you want to associate this dataset with an organization, either click on \"My Organizations\" below\n # to create a new organization or ask the admin of an existing organization to add you as a member.''' % contribute_url\n #h.flash_success(_(message), True)\n+ return h.redirect_to(controller='user', action='dashboard_organizations')\n else:\n h.flash_success(_(\"%s is now logged in\") %\n- user_dict['display_name'])\n- #return self.me()\n- # Instead redirect to My orgs page\n- return h.redirect_to(controller='user',\n- action='dashboard_organizations')\n+ user_dict['display_name'])\n+ return self.me()\n else:\n err = _('Login failed. Bad username or password.')\n if g.openid_enabled:\n@@ -53,8 +53,8 @@\n action='login', came_from=came_from)\n else:\n return self.login(error=err)\n- \n+\n def contribute(self, error=None):\n self.login(error)\n- vars = {'contribute':True}\n- return base.render('user/login.html', extra_vars=vars)\n\\ No newline at end of file\n+ vars = {'contribute': True}\n+ return base.render('user/login.html', extra_vars=vars)\n", "issue": "Redirect a non-new user to Newsfeed instead of My Organisations\n\n", "before_files": [{"content": "import datetime\nimport dateutil\n\nimport ckan.controllers.user as ckan_user\nimport ckan.lib.helpers as h\nimport ckan.lib.base as base\nfrom ckan.common import _, c, g, request\nimport ckan.logic as logic\nfrom pylons import config\n\nget_action = logic.get_action\n\nclass LoginController(ckan_user.UserController):\n def logged_in(self):\n # redirect if needed\n came_from = request.params.get('came_from', '')\n if self._sane_came_from(came_from):\n return h.redirect_to(str(came_from))\n\n if c.user:\n context = None\n data_dict = {'id': c.user}\n\n user_dict = get_action('user_show')(context, data_dict)\n\n if 'created' in user_dict:\n time_passed = datetime.datetime.now() - dateutil.parser.parse( user_dict['created'] )\n else:\n time_passed = None \n \n if not user_dict['activity'] and time_passed and time_passed.days < 3:\n #/dataset/new \n contribute_url = h.url_for(controller='package', action='new')\n # message = ''' Now that you've registered an account , you can <a href=\"%s\">start adding datasets</a>. \n # If you want to associate this dataset with an organization, either click on \"My Organizations\" below \n # to create a new organization or ask the admin of an existing organization to add you as a member.''' % contribute_url\n #h.flash_success(_(message), True)\n else:\n h.flash_success(_(\"%s is now logged in\") %\n user_dict['display_name'])\n #return self.me()\n # Instead redirect to My orgs page\n return h.redirect_to(controller='user',\n action='dashboard_organizations')\n else:\n err = _('Login failed. Bad username or password.')\n if g.openid_enabled:\n err += _(' (Or if using OpenID, it hasn\\'t been associated '\n 'with a user account.)')\n if h.asbool(config.get('ckan.legacy_templates', 'false')):\n h.flash_error(err)\n h.redirect_to(controller='user',\n action='login', came_from=came_from)\n else:\n return self.login(error=err)\n \n def contribute(self, error=None):\n self.login(error)\n vars = {'contribute':True}\n return base.render('user/login.html', extra_vars=vars)", "path": "ckanext-hdx_users/ckanext/hdx_users/controllers/login_controller.py"}]} | 1,210 | 672 |
gh_patches_debug_53600 | rasdani/github-patches | git_diff | aws__aws-cli-577 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
typo in s3api list-objects documentation
The documentation for the s3api list-objects --max-items parameter says that a `NextMarker` will be provided, while the --starting-token parameter refers to this as `NextToken` which is the actual name of the returned token in JSON.
So in short I think that the `NextMarker` should really say `NextToken` to prevent any confusion.
</issue>
<code>
[start of awscli/customizations/paginate.py]
1 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 """This module has customizations to unify paging paramters.
14
15 For any operation that can be paginated, we will:
16
17 * Remove the service specific pagination params. This can vary across
18 services and we're going to replace them with a consistent set of
19 arguments.
20 * Add a ``--starting-token`` and a ``--max-items`` argument.
21
22 """
23 import logging
24
25 from awscli.arguments import BaseCLIArgument
26 from botocore.parameters import StringParameter
27
28 logger = logging.getLogger(__name__)
29
30
31 STARTING_TOKEN_HELP = """
32 <p>A token to specify where to start paginating. This is the
33 <code>NextToken</code> from a previously truncated response.</p>
34 """
35
36 MAX_ITEMS_HELP = """
37 <p>The total number of items to return. If the total number
38 of items available is more than the value specified in
39 max-items then a <code>NextMarker</code> will
40 be provided in the output that you can use to resume pagination.
41 """
42
43
44 def unify_paging_params(argument_table, operation, **kwargs):
45 if not operation.can_paginate:
46 # We only apply these customizations to paginated responses.
47 return
48 logger.debug("Modifying paging parameters for operation: %s", operation)
49 _remove_existing_paging_arguments(argument_table, operation)
50 argument_table['starting-token'] = PageArgument('starting-token',
51 STARTING_TOKEN_HELP,
52 operation,
53 parse_type='string')
54 argument_table['max-items'] = PageArgument('max-items', MAX_ITEMS_HELP,
55 operation, parse_type='integer')
56
57
58 def _remove_existing_paging_arguments(argument_table, operation):
59 tokens = _get_input_tokens(operation)
60 for token_name in tokens:
61 cli_name = _get_cli_name(operation.params, token_name)
62 del argument_table[cli_name]
63 if 'limit_key' in operation.pagination:
64 key_name = operation.pagination['limit_key']
65 cli_name = _get_cli_name(operation.params, key_name)
66 del argument_table[cli_name]
67
68
69 def _get_input_tokens(operation):
70 config = operation.pagination
71 tokens = config['input_token']
72 if not isinstance(tokens, list):
73 return [tokens]
74 return tokens
75
76
77 def _get_cli_name(param_objects, token_name):
78 for param in param_objects:
79 if param.name == token_name:
80 return param.cli_name.lstrip('-')
81
82
83 class PageArgument(BaseCLIArgument):
84 type_map = {
85 'string': str,
86 'integer': int,
87 }
88
89 def __init__(self, name, documentation, operation, parse_type):
90 param = StringParameter(operation, name=name, type=parse_type)
91 self._name = name
92 self.argument_object = param
93 self._name = name
94 self._documentation = documentation
95 self._parse_type = parse_type
96
97 @property
98 def cli_name(self):
99 return '--' + self._name
100
101 @property
102 def cli_type_name(self):
103 return self._parse_type
104
105 @property
106 def required(self):
107 return False
108
109 @property
110 def documentation(self):
111 return self._documentation
112
113 def add_to_parser(self, parser):
114 parser.add_argument(self.cli_name, dest=self.py_name,
115 type=self.type_map[self._parse_type])
116
117 def add_to_params(self, parameters, value):
118 if value is not None:
119 parameters[self.py_name] = value
120
[end of awscli/customizations/paginate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/awscli/customizations/paginate.py b/awscli/customizations/paginate.py
--- a/awscli/customizations/paginate.py
+++ b/awscli/customizations/paginate.py
@@ -36,7 +36,7 @@
MAX_ITEMS_HELP = """
<p>The total number of items to return. If the total number
of items available is more than the value specified in
-max-items then a <code>NextMarker</code> will
+max-items then a <code>NextToken</code> will
be provided in the output that you can use to resume pagination.
"""
| {"golden_diff": "diff --git a/awscli/customizations/paginate.py b/awscli/customizations/paginate.py\n--- a/awscli/customizations/paginate.py\n+++ b/awscli/customizations/paginate.py\n@@ -36,7 +36,7 @@\n MAX_ITEMS_HELP = \"\"\"\n <p>The total number of items to return. If the total number\n of items available is more than the value specified in\n-max-items then a <code>NextMarker</code> will\n+max-items then a <code>NextToken</code> will\n be provided in the output that you can use to resume pagination.\n \"\"\"\n", "issue": "typo in s3api list-objects documentation\nThe documentation for the s3api list-objects --max-items parameter says that a `NextMarker` will be provided, while the --starting-token parameter refers to this as `NextToken` which is the actual name of the returned token in JSON.\n\nSo in short I think that the `NextMarker` should really say `NextToken` to prevent any confusion.\n\n", "before_files": [{"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\"\"\"This module has customizations to unify paging paramters.\n\nFor any operation that can be paginated, we will:\n\n * Remove the service specific pagination params. This can vary across\n services and we're going to replace them with a consistent set of\n arguments.\n * Add a ``--starting-token`` and a ``--max-items`` argument.\n\n\"\"\"\nimport logging\n\nfrom awscli.arguments import BaseCLIArgument\nfrom botocore.parameters import StringParameter\n\nlogger = logging.getLogger(__name__)\n\n\nSTARTING_TOKEN_HELP = \"\"\"\n<p>A token to specify where to start paginating. This is the\n<code>NextToken</code> from a previously truncated response.</p>\n\"\"\"\n\nMAX_ITEMS_HELP = \"\"\"\n<p>The total number of items to return. If the total number\nof items available is more than the value specified in\nmax-items then a <code>NextMarker</code> will\nbe provided in the output that you can use to resume pagination.\n\"\"\"\n\n\ndef unify_paging_params(argument_table, operation, **kwargs):\n if not operation.can_paginate:\n # We only apply these customizations to paginated responses.\n return\n logger.debug(\"Modifying paging parameters for operation: %s\", operation)\n _remove_existing_paging_arguments(argument_table, operation)\n argument_table['starting-token'] = PageArgument('starting-token',\n STARTING_TOKEN_HELP,\n operation,\n parse_type='string')\n argument_table['max-items'] = PageArgument('max-items', MAX_ITEMS_HELP,\n operation, parse_type='integer')\n\n\ndef _remove_existing_paging_arguments(argument_table, operation):\n tokens = _get_input_tokens(operation)\n for token_name in tokens:\n cli_name = _get_cli_name(operation.params, token_name)\n del argument_table[cli_name]\n if 'limit_key' in operation.pagination:\n key_name = operation.pagination['limit_key']\n cli_name = _get_cli_name(operation.params, key_name)\n del argument_table[cli_name]\n\n\ndef _get_input_tokens(operation):\n config = operation.pagination\n tokens = config['input_token']\n if not isinstance(tokens, list):\n return [tokens]\n return tokens\n\n\ndef _get_cli_name(param_objects, token_name):\n for param in param_objects:\n if param.name == token_name:\n return param.cli_name.lstrip('-')\n\n\nclass PageArgument(BaseCLIArgument):\n type_map = {\n 'string': str,\n 'integer': int,\n }\n\n def __init__(self, name, documentation, operation, parse_type):\n param = StringParameter(operation, name=name, type=parse_type)\n self._name = name\n self.argument_object = param\n self._name = name\n self._documentation = documentation\n self._parse_type = parse_type\n\n @property\n def cli_name(self):\n return '--' + self._name\n\n @property\n def cli_type_name(self):\n return self._parse_type\n\n @property\n def required(self):\n return False\n\n @property\n def documentation(self):\n return self._documentation\n\n def add_to_parser(self, parser):\n parser.add_argument(self.cli_name, dest=self.py_name,\n type=self.type_map[self._parse_type])\n\n def add_to_params(self, parameters, value):\n if value is not None:\n parameters[self.py_name] = value\n", "path": "awscli/customizations/paginate.py"}]} | 1,747 | 132 |
gh_patches_debug_7438 | rasdani/github-patches | git_diff | ranaroussi__yfinance-1237 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
fix(sec): upgrade lxml to 4.9.1
### What happened?
There are 1 security vulnerabilities found in lxml 4.5.1
- [CVE-2022-2309](https://www.oscs1024.com/hd/CVE-2022-2309)
### What did I do?
Upgrade lxml from 4.5.1 to 4.9.1 for vulnerability fix
### What did you expect to happen?
Ideally, no insecure libs should be used.
### The specification of the pull request
[PR Specification](https://www.oscs1024.com/docs/pr-specification/) from OSCS
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: UTF-8 -*-
3 #
4 # yfinance - market data downloader
5 # https://github.com/ranaroussi/yfinance
6
7 """yfinance - market data downloader"""
8
9 from setuptools import setup, find_packages
10 # from codecs import open
11 import io
12 from os import path
13
14 # --- get version ---
15 version = "unknown"
16 with open("yfinance/version.py") as f:
17 line = f.read().strip()
18 version = line.replace("version = ", "").replace('"', '')
19 # --- /get version ---
20
21
22 here = path.abspath(path.dirname(__file__))
23
24 # Get the long description from the README file
25 with io.open(path.join(here, 'README.md'), encoding='utf-8') as f:
26 long_description = f.read()
27
28 setup(
29 name='yfinance',
30 version=version,
31 description='Download market data from Yahoo! Finance API',
32 long_description=long_description,
33 long_description_content_type='text/markdown',
34 url='https://github.com/ranaroussi/yfinance',
35 author='Ran Aroussi',
36 author_email='[email protected]',
37 license='Apache',
38 classifiers=[
39 'License :: OSI Approved :: Apache Software License',
40 # 'Development Status :: 3 - Alpha',
41 # 'Development Status :: 4 - Beta',
42 'Development Status :: 5 - Production/Stable',
43
44
45 'Operating System :: OS Independent',
46 'Intended Audience :: Developers',
47 'Topic :: Office/Business :: Financial',
48 'Topic :: Office/Business :: Financial :: Investment',
49 'Topic :: Scientific/Engineering :: Interface Engine/Protocol Translator',
50 'Topic :: Software Development :: Libraries',
51 'Topic :: Software Development :: Libraries :: Python Modules',
52
53 'Programming Language :: Python :: 2.7',
54 'Programming Language :: Python :: 3.4',
55 'Programming Language :: Python :: 3.5',
56 # 'Programming Language :: Python :: 3.6',
57 'Programming Language :: Python :: 3.7',
58 'Programming Language :: Python :: 3.8',
59 'Programming Language :: Python :: 3.9',
60 ],
61 platforms=['any'],
62 keywords='pandas, yahoo finance, pandas datareader',
63 packages=find_packages(exclude=['contrib', 'docs', 'tests', 'examples']),
64 install_requires=['pandas>=1.3.0', 'numpy>=1.16.5',
65 'requests>=2.26', 'multitasking>=0.0.7',
66 'appdirs>=1.4.4'],
67 entry_points={
68 'console_scripts': [
69 'sample=sample:main',
70 ],
71 },
72 )
73
74 print("""
75 NOTE: yfinance is not affiliated, endorsed, or vetted by Yahoo, Inc.
76
77 You should refer to Yahoo!'s terms of use for details on your rights
78 to use the actual data downloaded.""")
79
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -63,7 +63,7 @@
packages=find_packages(exclude=['contrib', 'docs', 'tests', 'examples']),
install_requires=['pandas>=1.3.0', 'numpy>=1.16.5',
'requests>=2.26', 'multitasking>=0.0.7',
- 'appdirs>=1.4.4'],
+ 'lxml>=4.9.1', 'appdirs>=1.4.4'],
entry_points={
'console_scripts': [
'sample=sample:main',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -63,7 +63,7 @@\n packages=find_packages(exclude=['contrib', 'docs', 'tests', 'examples']),\n install_requires=['pandas>=1.3.0', 'numpy>=1.16.5',\n 'requests>=2.26', 'multitasking>=0.0.7',\n- 'appdirs>=1.4.4'],\n+ 'lxml>=4.9.1', 'appdirs>=1.4.4'],\n entry_points={\n 'console_scripts': [\n 'sample=sample:main',\n", "issue": "fix(sec): upgrade lxml to 4.9.1\n### What happened\uff1f\nThere are 1 security vulnerabilities found in lxml 4.5.1\n- [CVE-2022-2309](https://www.oscs1024.com/hd/CVE-2022-2309)\n\n\n### What did I do\uff1f\nUpgrade lxml from 4.5.1 to 4.9.1 for vulnerability fix\n\n### What did you expect to happen\uff1f\nIdeally, no insecure libs should be used.\n\n### The specification of the pull request\n[PR Specification](https://www.oscs1024.com/docs/pr-specification/) from OSCS\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: UTF-8 -*-\n#\n# yfinance - market data downloader\n# https://github.com/ranaroussi/yfinance\n\n\"\"\"yfinance - market data downloader\"\"\"\n\nfrom setuptools import setup, find_packages\n# from codecs import open\nimport io\nfrom os import path\n\n# --- get version ---\nversion = \"unknown\"\nwith open(\"yfinance/version.py\") as f:\n line = f.read().strip()\n version = line.replace(\"version = \", \"\").replace('\"', '')\n# --- /get version ---\n\n\nhere = path.abspath(path.dirname(__file__))\n\n# Get the long description from the README file\nwith io.open(path.join(here, 'README.md'), encoding='utf-8') as f:\n long_description = f.read()\n\nsetup(\n name='yfinance',\n version=version,\n description='Download market data from Yahoo! Finance API',\n long_description=long_description,\n long_description_content_type='text/markdown',\n url='https://github.com/ranaroussi/yfinance',\n author='Ran Aroussi',\n author_email='[email protected]',\n license='Apache',\n classifiers=[\n 'License :: OSI Approved :: Apache Software License',\n # 'Development Status :: 3 - Alpha',\n # 'Development Status :: 4 - Beta',\n 'Development Status :: 5 - Production/Stable',\n\n\n 'Operating System :: OS Independent',\n 'Intended Audience :: Developers',\n 'Topic :: Office/Business :: Financial',\n 'Topic :: Office/Business :: Financial :: Investment',\n 'Topic :: Scientific/Engineering :: Interface Engine/Protocol Translator',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n # 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n platforms=['any'],\n keywords='pandas, yahoo finance, pandas datareader',\n packages=find_packages(exclude=['contrib', 'docs', 'tests', 'examples']),\n install_requires=['pandas>=1.3.0', 'numpy>=1.16.5',\n 'requests>=2.26', 'multitasking>=0.0.7',\n 'appdirs>=1.4.4'],\n entry_points={\n 'console_scripts': [\n 'sample=sample:main',\n ],\n },\n)\n\nprint(\"\"\"\nNOTE: yfinance is not affiliated, endorsed, or vetted by Yahoo, Inc.\n\nYou should refer to Yahoo!'s terms of use for details on your rights\nto use the actual data downloaded.\"\"\")\n", "path": "setup.py"}]} | 1,448 | 146 |
gh_patches_debug_12394 | rasdani/github-patches | git_diff | aws__aws-cli-341 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
argparse dependency is only needed for Python 2.6
We currently have a dependency on argparse because it's not in stdlib for Python 2.6. We should make this dependency specific to 2.6 and not install it for other Python versions.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 from setuptools import setup, find_packages
6
7 import awscli
8
9
10 requires = ['botocore>=0.16.0,<0.17.0',
11 'bcdoc>=0.9.0,<0.10.0',
12 'six>=1.1.0',
13 'colorama==0.2.5',
14 'argparse>=1.1',
15 'docutils>=0.10',
16 'rsa==3.1.1']
17
18
19 setup_options = dict(
20 name='awscli',
21 version=awscli.__version__,
22 description='Universal Command Line Environment for AWS.',
23 long_description=open('README.rst').read(),
24 author='Mitch Garnaat',
25 author_email='[email protected]',
26 url='http://aws.amazon.com/cli/',
27 scripts=['bin/aws', 'bin/aws.cmd',
28 'bin/aws_completer', 'bin/aws_zsh_completer.sh'],
29 packages=find_packages('.', exclude=['tests*']),
30 package_dir={'awscli': 'awscli'},
31 package_data={'awscli': ['data/*.json', 'examples/*/*']},
32 install_requires=requires,
33 license=open("LICENSE.txt").read(),
34 classifiers=(
35 'Development Status :: 5 - Production/Stable',
36 'Intended Audience :: Developers',
37 'Intended Audience :: System Administrators',
38 'Natural Language :: English',
39 'License :: OSI Approved :: Apache Software License',
40 'Programming Language :: Python',
41 'Programming Language :: Python :: 2.6',
42 'Programming Language :: Python :: 2.7',
43 'Programming Language :: Python :: 3',
44 'Programming Language :: Python :: 3.3',
45 ),
46 )
47
48 if 'py2exe' in sys.argv:
49 # This will actually give us a py2exe command.
50 import py2exe
51 # And we have some py2exe specific options.
52 setup_options['options'] = {
53 'py2exe': {
54 'optimize': 0,
55 'skip_archive': True,
56 'includes': ['ConfigParser', 'urllib', 'httplib',
57 'docutils.readers.standalone',
58 'docutils.parsers.rst',
59 'docutils.languages.en',
60 'xml.etree.ElementTree', 'HTMLParser',
61 'awscli.handlers'],
62 }
63 }
64 setup_options['console'] = ['bin/aws']
65
66
67 setup(**setup_options)
68
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,5 +1,4 @@
#!/usr/bin/env python
-import os
import sys
from setuptools import setup, find_packages
@@ -11,10 +10,14 @@
'bcdoc>=0.9.0,<0.10.0',
'six>=1.1.0',
'colorama==0.2.5',
- 'argparse>=1.1',
'docutils>=0.10',
'rsa==3.1.1']
+if sys.version_info[:2] == (2, 6):
+ # For python2.6 we have to require argparse since it
+ # was not in stdlib until 2.7.
+ requires.append('argparse>=1.1')
+
setup_options = dict(
name='awscli',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,5 +1,4 @@\n #!/usr/bin/env python\n-import os\n import sys\n \n from setuptools import setup, find_packages\n@@ -11,10 +10,14 @@\n 'bcdoc>=0.9.0,<0.10.0',\n 'six>=1.1.0',\n 'colorama==0.2.5',\n- 'argparse>=1.1',\n 'docutils>=0.10',\n 'rsa==3.1.1']\n \n+if sys.version_info[:2] == (2, 6):\n+ # For python2.6 we have to require argparse since it\n+ # was not in stdlib until 2.7.\n+ requires.append('argparse>=1.1')\n+\n \n setup_options = dict(\n name='awscli',\n", "issue": "argparse dependency is only needed for Python 2.6\nWe currently have a dependency on argparse because it's not in stdlib for Python 2.6. We should make this dependency specific to 2.6 and not install it for other Python versions.\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\n\nimport awscli\n\n\nrequires = ['botocore>=0.16.0,<0.17.0',\n 'bcdoc>=0.9.0,<0.10.0',\n 'six>=1.1.0',\n 'colorama==0.2.5',\n 'argparse>=1.1',\n 'docutils>=0.10',\n 'rsa==3.1.1']\n\n\nsetup_options = dict(\n name='awscli',\n version=awscli.__version__,\n description='Universal Command Line Environment for AWS.',\n long_description=open('README.rst').read(),\n author='Mitch Garnaat',\n author_email='[email protected]',\n url='http://aws.amazon.com/cli/',\n scripts=['bin/aws', 'bin/aws.cmd',\n 'bin/aws_completer', 'bin/aws_zsh_completer.sh'],\n packages=find_packages('.', exclude=['tests*']),\n package_dir={'awscli': 'awscli'},\n package_data={'awscli': ['data/*.json', 'examples/*/*']},\n install_requires=requires,\n license=open(\"LICENSE.txt\").read(),\n classifiers=(\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n ),\n)\n\nif 'py2exe' in sys.argv:\n # This will actually give us a py2exe command.\n import py2exe\n # And we have some py2exe specific options.\n setup_options['options'] = {\n 'py2exe': {\n 'optimize': 0,\n 'skip_archive': True,\n 'includes': ['ConfigParser', 'urllib', 'httplib',\n 'docutils.readers.standalone',\n 'docutils.parsers.rst',\n 'docutils.languages.en',\n 'xml.etree.ElementTree', 'HTMLParser',\n 'awscli.handlers'],\n }\n }\n setup_options['console'] = ['bin/aws']\n\n\nsetup(**setup_options)\n", "path": "setup.py"}]} | 1,240 | 206 |
gh_patches_debug_11235 | rasdani/github-patches | git_diff | saleor__saleor-5311 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Broken multiple interface notation in schema
### What I'm trying to achieve
To use Apollo tooling to generate TS types for the application queries. However, it fails because Saleor's schema uses comma as a separator instead of ampersand. More: https://github.com/apollographql/apollo-tooling/issues/434
### Steps to reproduce the problem
1. Go to mirumee/saleor-dashboard repository and clone it
2. Copy schema from core to dashboard
3. `npm run build-types`
4. Notice that it fails at multiple interface implementation.
</issue>
<code>
[start of saleor/graphql/management/commands/get_graphql_schema.py]
1 from django.core.management.base import BaseCommand
2 from graphql import print_schema
3
4 from ...api import schema
5
6
7 class Command(BaseCommand):
8 help = "Writes SDL for GraphQL API schema to stdout"
9
10 def handle(self, *args, **options):
11 self.stdout.write(print_schema(schema))
12
[end of saleor/graphql/management/commands/get_graphql_schema.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/saleor/graphql/management/commands/get_graphql_schema.py b/saleor/graphql/management/commands/get_graphql_schema.py
--- a/saleor/graphql/management/commands/get_graphql_schema.py
+++ b/saleor/graphql/management/commands/get_graphql_schema.py
@@ -8,4 +8,14 @@
help = "Writes SDL for GraphQL API schema to stdout"
def handle(self, *args, **options):
- self.stdout.write(print_schema(schema))
+ """Support multiple interface notation in schema for Apollo tooling.
+
+ In `graphql-core` V2 separator for interaces is `,`.
+ Apollo tooling to generate TypeScript types using `&` as interfaces separator.
+ https://github.com/graphql-python/graphql-core/pull/258
+ """
+ printed_schema = print_schema(schema)
+ for line in printed_schema.splitlines():
+ if "implements" in line:
+ line = line.replace(",", " &")
+ self.stdout.write(f"{line}\n")
| {"golden_diff": "diff --git a/saleor/graphql/management/commands/get_graphql_schema.py b/saleor/graphql/management/commands/get_graphql_schema.py\n--- a/saleor/graphql/management/commands/get_graphql_schema.py\n+++ b/saleor/graphql/management/commands/get_graphql_schema.py\n@@ -8,4 +8,14 @@\n help = \"Writes SDL for GraphQL API schema to stdout\"\n \n def handle(self, *args, **options):\n- self.stdout.write(print_schema(schema))\n+ \"\"\"Support multiple interface notation in schema for Apollo tooling.\n+\n+ In `graphql-core` V2 separator for interaces is `,`.\n+ Apollo tooling to generate TypeScript types using `&` as interfaces separator.\n+ https://github.com/graphql-python/graphql-core/pull/258\n+ \"\"\"\n+ printed_schema = print_schema(schema)\n+ for line in printed_schema.splitlines():\n+ if \"implements\" in line:\n+ line = line.replace(\",\", \" &\")\n+ self.stdout.write(f\"{line}\\n\")\n", "issue": "Broken multiple interface notation in schema\n### What I'm trying to achieve\r\nTo use Apollo tooling to generate TS types for the application queries. However, it fails because Saleor's schema uses comma as a separator instead of ampersand. More: https://github.com/apollographql/apollo-tooling/issues/434 \r\n\r\n### Steps to reproduce the problem\r\n1. Go to mirumee/saleor-dashboard repository and clone it\r\n2. Copy schema from core to dashboard\r\n3. `npm run build-types`\r\n4. Notice that it fails at multiple interface implementation.\n", "before_files": [{"content": "from django.core.management.base import BaseCommand\nfrom graphql import print_schema\n\nfrom ...api import schema\n\n\nclass Command(BaseCommand):\n help = \"Writes SDL for GraphQL API schema to stdout\"\n\n def handle(self, *args, **options):\n self.stdout.write(print_schema(schema))\n", "path": "saleor/graphql/management/commands/get_graphql_schema.py"}]} | 748 | 227 |
gh_patches_debug_14761 | rasdani/github-patches | git_diff | iterative__dvc-7965 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add TOML support for metrics
Right now, there is only TOML file support for params files. We need to add TOML support for metrics as well.
Here's a [link to the Discord question](https://discord.com/channels/485586884165107732/485596304961962003/865974923079319563) that brought this up.
</issue>
<code>
[start of dvc/repo/metrics/show.py]
1 import logging
2 import os
3 from typing import List
4
5 from scmrepo.exceptions import SCMError
6
7 from dvc.fs.dvc import DvcFileSystem
8 from dvc.output import Output
9 from dvc.repo import locked
10 from dvc.repo.collect import StrPaths, collect
11 from dvc.repo.live import summary_fs_path
12 from dvc.scm import NoSCMError
13 from dvc.utils import error_handler, errored_revisions, onerror_collect
14 from dvc.utils.collections import ensure_list
15 from dvc.utils.serialize import load_yaml
16
17 logger = logging.getLogger(__name__)
18
19
20 def _is_metric(out: Output) -> bool:
21 return bool(out.metric) or bool(out.live)
22
23
24 def _to_fs_paths(metrics: List[Output]) -> StrPaths:
25 result = []
26 for out in metrics:
27 if out.metric:
28 result.append(out.repo.dvcfs.from_os_path(out.fs_path))
29 elif out.live:
30 fs_path = summary_fs_path(out)
31 if fs_path:
32 result.append(out.repo.dvcfs.from_os_path(fs_path))
33 return result
34
35
36 def _collect_metrics(repo, targets, revision, recursive):
37 metrics, fs_paths = collect(
38 repo,
39 targets=targets,
40 output_filter=_is_metric,
41 recursive=recursive,
42 rev=revision,
43 )
44 return _to_fs_paths(metrics) + list(fs_paths)
45
46
47 def _extract_metrics(metrics, path, rev):
48 if isinstance(metrics, (int, float)):
49 return metrics
50
51 if not isinstance(metrics, dict):
52 return None
53
54 ret = {}
55 for key, val in metrics.items():
56 m = _extract_metrics(val, path, rev)
57 if m not in (None, {}):
58 ret[key] = m
59 else:
60 logger.debug(
61 "Could not parse '%s' metric from '%s' at '%s' "
62 "due to its unsupported type: '%s'",
63 key,
64 path,
65 rev,
66 type(val).__name__,
67 )
68
69 return ret
70
71
72 @error_handler
73 def _read_metric(path, fs, rev, **kwargs):
74 val = load_yaml(path, fs=fs)
75 val = _extract_metrics(val, path, rev)
76 return val or {}
77
78
79 def _read_metrics(repo, metrics, rev, onerror=None):
80 fs = DvcFileSystem(repo=repo)
81
82 relpath = ""
83 if repo.root_dir != repo.fs.path.getcwd():
84 relpath = repo.fs.path.relpath(repo.root_dir, repo.fs.path.getcwd())
85
86 res = {}
87 for metric in metrics:
88 if not fs.isfile(metric):
89 continue
90
91 res[os.path.join(relpath, *fs.path.parts(metric))] = _read_metric(
92 metric, fs, rev, onerror=onerror
93 )
94
95 return res
96
97
98 def _gather_metrics(repo, targets, rev, recursive, onerror=None):
99 metrics = _collect_metrics(repo, targets, rev, recursive)
100 return _read_metrics(repo, metrics, rev, onerror=onerror)
101
102
103 @locked
104 def show(
105 repo,
106 targets=None,
107 all_branches=False,
108 all_tags=False,
109 recursive=False,
110 revs=None,
111 all_commits=False,
112 onerror=None,
113 ):
114 if onerror is None:
115 onerror = onerror_collect
116
117 targets = ensure_list(targets)
118 targets = [repo.dvcfs.from_os_path(target) for target in targets]
119
120 res = {}
121 for rev in repo.brancher(
122 revs=revs,
123 all_branches=all_branches,
124 all_tags=all_tags,
125 all_commits=all_commits,
126 ):
127 res[rev] = error_handler(_gather_metrics)(
128 repo, targets, rev, recursive, onerror=onerror
129 )
130
131 # Hide workspace metrics if they are the same as in the active branch
132 try:
133 active_branch = repo.scm.active_branch()
134 except (SCMError, NoSCMError):
135 # SCMError - detached head
136 # NoSCMError - no repo case
137 pass
138 else:
139 if res.get("workspace") == res.get(active_branch):
140 res.pop("workspace", None)
141
142 errored = errored_revisions(res)
143 if errored:
144 from dvc.ui import ui
145
146 ui.error_write(
147 "DVC failed to load some metrics for following revisions:"
148 f" '{', '.join(errored)}'."
149 )
150
151 return res
152
[end of dvc/repo/metrics/show.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dvc/repo/metrics/show.py b/dvc/repo/metrics/show.py
--- a/dvc/repo/metrics/show.py
+++ b/dvc/repo/metrics/show.py
@@ -12,7 +12,7 @@
from dvc.scm import NoSCMError
from dvc.utils import error_handler, errored_revisions, onerror_collect
from dvc.utils.collections import ensure_list
-from dvc.utils.serialize import load_yaml
+from dvc.utils.serialize import LOADERS
logger = logging.getLogger(__name__)
@@ -71,7 +71,9 @@
@error_handler
def _read_metric(path, fs, rev, **kwargs):
- val = load_yaml(path, fs=fs)
+ suffix = fs.path.suffix(path).lower()
+ loader = LOADERS[suffix]
+ val = loader(path, fs=fs)
val = _extract_metrics(val, path, rev)
return val or {}
| {"golden_diff": "diff --git a/dvc/repo/metrics/show.py b/dvc/repo/metrics/show.py\n--- a/dvc/repo/metrics/show.py\n+++ b/dvc/repo/metrics/show.py\n@@ -12,7 +12,7 @@\n from dvc.scm import NoSCMError\n from dvc.utils import error_handler, errored_revisions, onerror_collect\n from dvc.utils.collections import ensure_list\n-from dvc.utils.serialize import load_yaml\n+from dvc.utils.serialize import LOADERS\n \n logger = logging.getLogger(__name__)\n \n@@ -71,7 +71,9 @@\n \n @error_handler\n def _read_metric(path, fs, rev, **kwargs):\n- val = load_yaml(path, fs=fs)\n+ suffix = fs.path.suffix(path).lower()\n+ loader = LOADERS[suffix]\n+ val = loader(path, fs=fs)\n val = _extract_metrics(val, path, rev)\n return val or {}\n", "issue": "Add TOML support for metrics\nRight now, there is only TOML file support for params files. We need to add TOML support for metrics as well.\r\n\r\nHere's a [link to the Discord question](https://discord.com/channels/485586884165107732/485596304961962003/865974923079319563) that brought this up.\n", "before_files": [{"content": "import logging\nimport os\nfrom typing import List\n\nfrom scmrepo.exceptions import SCMError\n\nfrom dvc.fs.dvc import DvcFileSystem\nfrom dvc.output import Output\nfrom dvc.repo import locked\nfrom dvc.repo.collect import StrPaths, collect\nfrom dvc.repo.live import summary_fs_path\nfrom dvc.scm import NoSCMError\nfrom dvc.utils import error_handler, errored_revisions, onerror_collect\nfrom dvc.utils.collections import ensure_list\nfrom dvc.utils.serialize import load_yaml\n\nlogger = logging.getLogger(__name__)\n\n\ndef _is_metric(out: Output) -> bool:\n return bool(out.metric) or bool(out.live)\n\n\ndef _to_fs_paths(metrics: List[Output]) -> StrPaths:\n result = []\n for out in metrics:\n if out.metric:\n result.append(out.repo.dvcfs.from_os_path(out.fs_path))\n elif out.live:\n fs_path = summary_fs_path(out)\n if fs_path:\n result.append(out.repo.dvcfs.from_os_path(fs_path))\n return result\n\n\ndef _collect_metrics(repo, targets, revision, recursive):\n metrics, fs_paths = collect(\n repo,\n targets=targets,\n output_filter=_is_metric,\n recursive=recursive,\n rev=revision,\n )\n return _to_fs_paths(metrics) + list(fs_paths)\n\n\ndef _extract_metrics(metrics, path, rev):\n if isinstance(metrics, (int, float)):\n return metrics\n\n if not isinstance(metrics, dict):\n return None\n\n ret = {}\n for key, val in metrics.items():\n m = _extract_metrics(val, path, rev)\n if m not in (None, {}):\n ret[key] = m\n else:\n logger.debug(\n \"Could not parse '%s' metric from '%s' at '%s' \"\n \"due to its unsupported type: '%s'\",\n key,\n path,\n rev,\n type(val).__name__,\n )\n\n return ret\n\n\n@error_handler\ndef _read_metric(path, fs, rev, **kwargs):\n val = load_yaml(path, fs=fs)\n val = _extract_metrics(val, path, rev)\n return val or {}\n\n\ndef _read_metrics(repo, metrics, rev, onerror=None):\n fs = DvcFileSystem(repo=repo)\n\n relpath = \"\"\n if repo.root_dir != repo.fs.path.getcwd():\n relpath = repo.fs.path.relpath(repo.root_dir, repo.fs.path.getcwd())\n\n res = {}\n for metric in metrics:\n if not fs.isfile(metric):\n continue\n\n res[os.path.join(relpath, *fs.path.parts(metric))] = _read_metric(\n metric, fs, rev, onerror=onerror\n )\n\n return res\n\n\ndef _gather_metrics(repo, targets, rev, recursive, onerror=None):\n metrics = _collect_metrics(repo, targets, rev, recursive)\n return _read_metrics(repo, metrics, rev, onerror=onerror)\n\n\n@locked\ndef show(\n repo,\n targets=None,\n all_branches=False,\n all_tags=False,\n recursive=False,\n revs=None,\n all_commits=False,\n onerror=None,\n):\n if onerror is None:\n onerror = onerror_collect\n\n targets = ensure_list(targets)\n targets = [repo.dvcfs.from_os_path(target) for target in targets]\n\n res = {}\n for rev in repo.brancher(\n revs=revs,\n all_branches=all_branches,\n all_tags=all_tags,\n all_commits=all_commits,\n ):\n res[rev] = error_handler(_gather_metrics)(\n repo, targets, rev, recursive, onerror=onerror\n )\n\n # Hide workspace metrics if they are the same as in the active branch\n try:\n active_branch = repo.scm.active_branch()\n except (SCMError, NoSCMError):\n # SCMError - detached head\n # NoSCMError - no repo case\n pass\n else:\n if res.get(\"workspace\") == res.get(active_branch):\n res.pop(\"workspace\", None)\n\n errored = errored_revisions(res)\n if errored:\n from dvc.ui import ui\n\n ui.error_write(\n \"DVC failed to load some metrics for following revisions:\"\n f\" '{', '.join(errored)}'.\"\n )\n\n return res\n", "path": "dvc/repo/metrics/show.py"}]} | 1,964 | 212 |
gh_patches_debug_31073 | rasdani/github-patches | git_diff | fossasia__open-event-server-4162 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ticket-tag: remove GET for /ticket-tags
Parent issue #4101.
Related issue: #4119.
Make `/ticket-tags` POST only.
</issue>
<code>
[start of app/api/ticket_tags.py]
1 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
2 from marshmallow_jsonapi.flask import Schema, Relationship
3 from marshmallow_jsonapi import fields
4 from sqlalchemy.orm.exc import NoResultFound
5 from flask_rest_jsonapi.exceptions import ObjectNotFound
6
7 from app.api.helpers.utilities import dasherize
8 from app.api.helpers.permissions import jwt_required
9 from app.models import db
10 from app.models.ticket import Ticket, TicketTag, ticket_tags_table
11 from app.models.event import Event
12 from app.api.helpers.db import safe_query
13 from app.api.helpers.utilities import require_relationship
14 from app.api.helpers.exceptions import ForbiddenException
15 from app.api.helpers.permission_manager import has_access
16
17
18 class TicketTagSchema(Schema):
19 """
20 Api schema for TicketTag Model
21 """
22
23 class Meta:
24 """
25 Meta class for TicketTag Api Schema
26 """
27 type_ = 'ticket-tag'
28 self_view = 'v1.ticket_tag_detail'
29 self_view_kwargs = {'id': '<id>'}
30 inflect = dasherize
31
32 id = fields.Str(dump_only=True)
33 name = fields.Str(allow_none=True)
34 tickets = Relationship(attribute='tickets',
35 self_view='v1.ticket_tag_ticket',
36 self_view_kwargs={'id': '<id>'},
37 related_view='v1.ticket_list',
38 related_view_kwargs={'ticket_tag_id': '<id>'},
39 schema='TicketSchema',
40 many=True,
41 type_='ticket')
42 event = Relationship(attribute='event',
43 self_view='v1.ticket_tag_event',
44 self_view_kwargs={'id': '<id>'},
45 related_view='v1.event_detail',
46 related_view_kwargs={'ticket_tag_id': '<id>'},
47 schema='EventSchema',
48 type_='event')
49
50
51 class TicketTagListPost(ResourceList):
52 """
53 List and create TicketTag
54 """
55 def before_post(self, args, kwargs, data):
56 """
57 before post method for checking required relationship
58 :param args:
59 :param kwargs:
60 :param data:
61 :return:
62 """
63 require_relationship(['event'], data)
64
65 if not has_access('is_coorganizer', event_id=data['event']):
66 raise ForbiddenException({'source': ''}, 'Co-organizer access is required.')
67
68 def after_create_object(self, obj, data, view_kwargs):
69 """
70 method to add ticket tags and ticket in association table
71 :param obj:
72 :param data:
73 :param view_kwargs:
74 :return:
75 """
76 if 'tickets' in data:
77 ticket_ids = data['tickets']
78 for ticket_id in ticket_ids:
79 try:
80 ticket = Ticket.query.filter_by(id=ticket_id).one()
81 except NoResultFound:
82 raise ObjectNotFound({'parameter': 'ticket_id'},
83 "Ticket: {} not found".format(ticket_id))
84 else:
85 ticket.tags.append(obj)
86 self.session.commit()
87
88 schema = TicketTagSchema
89 data_layer = {'session': db.session,
90 'model': TicketTag,
91 'methods': {
92 'after_create_object': after_create_object
93 }}
94
95
96 class TicketTagList(ResourceList):
97 """
98 List TicketTags based on event_id or ticket_id
99 """
100 def query(self, view_kwargs):
101 """
102 method to query Ticket tags based on different params
103 :param view_kwargs:
104 :return:
105 """
106 query_ = self.session.query(TicketTag)
107 if view_kwargs.get('ticket_id'):
108 ticket = safe_query(self, Ticket, 'id', view_kwargs['ticket_id'], 'ticket_id')
109 query_ = query_.join(ticket_tags_table).filter_by(ticket_id=ticket.id)
110 if view_kwargs.get('event_id'):
111 event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')
112 query_ = query_.join(Event).filter(Event.id == event.id)
113 elif view_kwargs.get('event_identifier'):
114 event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')
115 query_ = query_.join(Event).filter(Event.id == event.id)
116 return query_
117
118 view_kwargs = True
119 schema = TicketTagSchema
120 methods = ['GET', ]
121 data_layer = {'session': db.session,
122 'model': TicketTag,
123 'methods': {
124 'query': query
125 }}
126
127
128 class TicketTagDetail(ResourceDetail):
129 """
130 TicketTag detail by id
131 """
132 decorators = (jwt_required,)
133 schema = TicketTagSchema
134 data_layer = {'session': db.session,
135 'model': TicketTag}
136
137
138 class TicketTagRelationshipRequired(ResourceRelationship):
139 """
140 TicketTag Relationship
141 """
142 decorators = (jwt_required,)
143 methods = ['GET', 'PATCH']
144 schema = TicketTagSchema
145 data_layer = {'session': db.session,
146 'model': TicketTag}
147
148
149 class TicketTagRelationshipOptional(ResourceRelationship):
150 """
151 TicketTag Relationship
152 """
153 decorators = (jwt_required,)
154 schema = TicketTagSchema
155 data_layer = {'session': db.session,
156 'model': TicketTag}
157
[end of app/api/ticket_tags.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/api/ticket_tags.py b/app/api/ticket_tags.py
--- a/app/api/ticket_tags.py
+++ b/app/api/ticket_tags.py
@@ -1,8 +1,6 @@
from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
from marshmallow_jsonapi.flask import Schema, Relationship
from marshmallow_jsonapi import fields
-from sqlalchemy.orm.exc import NoResultFound
-from flask_rest_jsonapi.exceptions import ObjectNotFound
from app.api.helpers.utilities import dasherize
from app.api.helpers.permissions import jwt_required
@@ -65,32 +63,10 @@
if not has_access('is_coorganizer', event_id=data['event']):
raise ForbiddenException({'source': ''}, 'Co-organizer access is required.')
- def after_create_object(self, obj, data, view_kwargs):
- """
- method to add ticket tags and ticket in association table
- :param obj:
- :param data:
- :param view_kwargs:
- :return:
- """
- if 'tickets' in data:
- ticket_ids = data['tickets']
- for ticket_id in ticket_ids:
- try:
- ticket = Ticket.query.filter_by(id=ticket_id).one()
- except NoResultFound:
- raise ObjectNotFound({'parameter': 'ticket_id'},
- "Ticket: {} not found".format(ticket_id))
- else:
- ticket.tags.append(obj)
- self.session.commit()
-
schema = TicketTagSchema
+ methods = ['POST', ]
data_layer = {'session': db.session,
- 'model': TicketTag,
- 'methods': {
- 'after_create_object': after_create_object
- }}
+ 'model': TicketTag}
class TicketTagList(ResourceList):
| {"golden_diff": "diff --git a/app/api/ticket_tags.py b/app/api/ticket_tags.py\n--- a/app/api/ticket_tags.py\n+++ b/app/api/ticket_tags.py\n@@ -1,8 +1,6 @@\n from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\n from marshmallow_jsonapi.flask import Schema, Relationship\n from marshmallow_jsonapi import fields\n-from sqlalchemy.orm.exc import NoResultFound\n-from flask_rest_jsonapi.exceptions import ObjectNotFound\n \n from app.api.helpers.utilities import dasherize\n from app.api.helpers.permissions import jwt_required\n@@ -65,32 +63,10 @@\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ForbiddenException({'source': ''}, 'Co-organizer access is required.')\n \n- def after_create_object(self, obj, data, view_kwargs):\n- \"\"\"\n- method to add ticket tags and ticket in association table\n- :param obj:\n- :param data:\n- :param view_kwargs:\n- :return:\n- \"\"\"\n- if 'tickets' in data:\n- ticket_ids = data['tickets']\n- for ticket_id in ticket_ids:\n- try:\n- ticket = Ticket.query.filter_by(id=ticket_id).one()\n- except NoResultFound:\n- raise ObjectNotFound({'parameter': 'ticket_id'},\n- \"Ticket: {} not found\".format(ticket_id))\n- else:\n- ticket.tags.append(obj)\n- self.session.commit()\n-\n schema = TicketTagSchema\n+ methods = ['POST', ]\n data_layer = {'session': db.session,\n- 'model': TicketTag,\n- 'methods': {\n- 'after_create_object': after_create_object\n- }}\n+ 'model': TicketTag}\n \n \n class TicketTagList(ResourceList):\n", "issue": "Ticket-tag: remove GET for /ticket-tags \nParent issue #4101.\r\nRelated issue: #4119.\r\n\r\nMake `/ticket-tags` POST only.\n", "before_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom marshmallow_jsonapi.flask import Schema, Relationship\nfrom marshmallow_jsonapi import fields\nfrom sqlalchemy.orm.exc import NoResultFound\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.helpers.utilities import dasherize\nfrom app.api.helpers.permissions import jwt_required\nfrom app.models import db\nfrom app.models.ticket import Ticket, TicketTag, ticket_tags_table\nfrom app.models.event import Event\nfrom app.api.helpers.db import safe_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.helpers.exceptions import ForbiddenException\nfrom app.api.helpers.permission_manager import has_access\n\n\nclass TicketTagSchema(Schema):\n \"\"\"\n Api schema for TicketTag Model\n \"\"\"\n\n class Meta:\n \"\"\"\n Meta class for TicketTag Api Schema\n \"\"\"\n type_ = 'ticket-tag'\n self_view = 'v1.ticket_tag_detail'\n self_view_kwargs = {'id': '<id>'}\n inflect = dasherize\n\n id = fields.Str(dump_only=True)\n name = fields.Str(allow_none=True)\n tickets = Relationship(attribute='tickets',\n self_view='v1.ticket_tag_ticket',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.ticket_list',\n related_view_kwargs={'ticket_tag_id': '<id>'},\n schema='TicketSchema',\n many=True,\n type_='ticket')\n event = Relationship(attribute='event',\n self_view='v1.ticket_tag_event',\n self_view_kwargs={'id': '<id>'},\n related_view='v1.event_detail',\n related_view_kwargs={'ticket_tag_id': '<id>'},\n schema='EventSchema',\n type_='event')\n\n\nclass TicketTagListPost(ResourceList):\n \"\"\"\n List and create TicketTag\n \"\"\"\n def before_post(self, args, kwargs, data):\n \"\"\"\n before post method for checking required relationship\n :param args:\n :param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event'], data)\n\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ForbiddenException({'source': ''}, 'Co-organizer access is required.')\n\n def after_create_object(self, obj, data, view_kwargs):\n \"\"\"\n method to add ticket tags and ticket in association table\n :param obj:\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n if 'tickets' in data:\n ticket_ids = data['tickets']\n for ticket_id in ticket_ids:\n try:\n ticket = Ticket.query.filter_by(id=ticket_id).one()\n except NoResultFound:\n raise ObjectNotFound({'parameter': 'ticket_id'},\n \"Ticket: {} not found\".format(ticket_id))\n else:\n ticket.tags.append(obj)\n self.session.commit()\n\n schema = TicketTagSchema\n data_layer = {'session': db.session,\n 'model': TicketTag,\n 'methods': {\n 'after_create_object': after_create_object\n }}\n\n\nclass TicketTagList(ResourceList):\n \"\"\"\n List TicketTags based on event_id or ticket_id\n \"\"\"\n def query(self, view_kwargs):\n \"\"\"\n method to query Ticket tags based on different params\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(TicketTag)\n if view_kwargs.get('ticket_id'):\n ticket = safe_query(self, Ticket, 'id', view_kwargs['ticket_id'], 'ticket_id')\n query_ = query_.join(ticket_tags_table).filter_by(ticket_id=ticket.id)\n if view_kwargs.get('event_id'):\n event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')\n query_ = query_.join(Event).filter(Event.id == event.id)\n elif view_kwargs.get('event_identifier'):\n event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')\n query_ = query_.join(Event).filter(Event.id == event.id)\n return query_\n\n view_kwargs = True\n schema = TicketTagSchema\n methods = ['GET', ]\n data_layer = {'session': db.session,\n 'model': TicketTag,\n 'methods': {\n 'query': query\n }}\n\n\nclass TicketTagDetail(ResourceDetail):\n \"\"\"\n TicketTag detail by id\n \"\"\"\n decorators = (jwt_required,)\n schema = TicketTagSchema\n data_layer = {'session': db.session,\n 'model': TicketTag}\n\n\nclass TicketTagRelationshipRequired(ResourceRelationship):\n \"\"\"\n TicketTag Relationship\n \"\"\"\n decorators = (jwt_required,)\n methods = ['GET', 'PATCH']\n schema = TicketTagSchema\n data_layer = {'session': db.session,\n 'model': TicketTag}\n\n\nclass TicketTagRelationshipOptional(ResourceRelationship):\n \"\"\"\n TicketTag Relationship\n \"\"\"\n decorators = (jwt_required,)\n schema = TicketTagSchema\n data_layer = {'session': db.session,\n 'model': TicketTag}\n", "path": "app/api/ticket_tags.py"}]} | 2,015 | 393 |
gh_patches_debug_63274 | rasdani/github-patches | git_diff | Mailu__Mailu-2603 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Maximum number of connections from user+IP exceeded
Hi, we have a problem... :-)
We have changed the original value of "AUTH_RATELIMIT" to "AUTH_RATELIMIT=100/minute;6000/hour", but logs continue to say " Maximum number of connections from user+IP exceeded (mail_max_userip_connections=20)" while reading response from upstream..."
We have made docker-compose dow and docker-compose up -d, but without result.
How can we change the default limit set during the installation?
Thanks in advance.
</issue>
<code>
[start of core/admin/mailu/internal/views/dovecot.py]
1 from mailu import models
2 from mailu.internal import internal
3 from flask import current_app as app
4
5 import flask
6 import socket
7 import os
8 import sqlalchemy.exc
9
10 @internal.route("/dovecot/passdb/<path:user_email>")
11 def dovecot_passdb_dict(user_email):
12 user = models.User.query.get(user_email) or flask.abort(404)
13 allow_nets = []
14 allow_nets.append(app.config["SUBNET"])
15 if app.config["SUBNET6"]:
16 allow_nets.append(app.config["SUBNET6"])
17 return flask.jsonify({
18 "password": None,
19 "nopassword": "Y",
20 "allow_nets": ",".join(allow_nets)
21 })
22
23 @internal.route("/dovecot/userdb/")
24 def dovecot_userdb_dict_list():
25 return flask.jsonify([
26 user[0] for user in models.User.query.filter(models.User.enabled.is_(True)).with_entities(models.User.email).all()
27 ])
28
29 @internal.route("/dovecot/userdb/<path:user_email>")
30 def dovecot_userdb_dict(user_email):
31 try:
32 quota = models.User.query.filter(models.User.email==user_email).with_entities(models.User.quota_bytes).one_or_none() or flask.abort(404)
33 except sqlalchemy.exc.StatementError as exc:
34 flask.abort(404)
35 return flask.jsonify({
36 "quota_rule": f"*:bytes={quota[0]}"
37 })
38
39
40 @internal.route("/dovecot/quota/<ns>/<path:user_email>", methods=["POST"])
41 def dovecot_quota(ns, user_email):
42 user = models.User.query.get(user_email) or flask.abort(404)
43 if ns == "storage":
44 user.quota_bytes_used = flask.request.get_json()
45 user.dont_change_updated_at()
46 models.db.session.commit()
47 return flask.jsonify(None)
48
49
50 @internal.route("/dovecot/sieve/name/<script>/<path:user_email>")
51 def dovecot_sieve_name(script, user_email):
52 return flask.jsonify(script)
53
54
55 @internal.route("/dovecot/sieve/data/default/<path:user_email>")
56 def dovecot_sieve_data(user_email):
57 user = models.User.query.get(user_email) or flask.abort(404)
58 return flask.jsonify(flask.render_template("default.sieve", user=user))
59
[end of core/admin/mailu/internal/views/dovecot.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/admin/mailu/internal/views/dovecot.py b/core/admin/mailu/internal/views/dovecot.py
--- a/core/admin/mailu/internal/views/dovecot.py
+++ b/core/admin/mailu/internal/views/dovecot.py
@@ -17,7 +17,7 @@
return flask.jsonify({
"password": None,
"nopassword": "Y",
- "allow_nets": ",".join(allow_nets)
+ "allow_real_nets": ",".join(allow_nets)
})
@internal.route("/dovecot/userdb/")
| {"golden_diff": "diff --git a/core/admin/mailu/internal/views/dovecot.py b/core/admin/mailu/internal/views/dovecot.py\n--- a/core/admin/mailu/internal/views/dovecot.py\n+++ b/core/admin/mailu/internal/views/dovecot.py\n@@ -17,7 +17,7 @@\n return flask.jsonify({\n \"password\": None,\n \"nopassword\": \"Y\",\n- \"allow_nets\": \",\".join(allow_nets)\n+ \"allow_real_nets\": \",\".join(allow_nets)\n })\n \n @internal.route(\"/dovecot/userdb/\")\n", "issue": "Maximum number of connections from user+IP exceeded \nHi, we have a problem... :-)\r\nWe have changed the original value of \"AUTH_RATELIMIT\" to \"AUTH_RATELIMIT=100/minute;6000/hour\", but logs continue to say \" Maximum number of connections from user+IP exceeded (mail_max_userip_connections=20)\" while reading response from upstream...\"\r\nWe have made docker-compose dow and docker-compose up -d, but without result.\r\nHow can we change the default limit set during the installation?\r\nThanks in advance.\n", "before_files": [{"content": "from mailu import models\nfrom mailu.internal import internal\nfrom flask import current_app as app\n\nimport flask\nimport socket\nimport os\nimport sqlalchemy.exc\n\[email protected](\"/dovecot/passdb/<path:user_email>\")\ndef dovecot_passdb_dict(user_email):\n user = models.User.query.get(user_email) or flask.abort(404)\n allow_nets = []\n allow_nets.append(app.config[\"SUBNET\"])\n if app.config[\"SUBNET6\"]:\n allow_nets.append(app.config[\"SUBNET6\"])\n return flask.jsonify({\n \"password\": None,\n \"nopassword\": \"Y\",\n \"allow_nets\": \",\".join(allow_nets)\n })\n\[email protected](\"/dovecot/userdb/\")\ndef dovecot_userdb_dict_list():\n return flask.jsonify([\n user[0] for user in models.User.query.filter(models.User.enabled.is_(True)).with_entities(models.User.email).all()\n ])\n\[email protected](\"/dovecot/userdb/<path:user_email>\")\ndef dovecot_userdb_dict(user_email):\n try:\n quota = models.User.query.filter(models.User.email==user_email).with_entities(models.User.quota_bytes).one_or_none() or flask.abort(404)\n except sqlalchemy.exc.StatementError as exc:\n flask.abort(404)\n return flask.jsonify({\n \"quota_rule\": f\"*:bytes={quota[0]}\"\n })\n\n\[email protected](\"/dovecot/quota/<ns>/<path:user_email>\", methods=[\"POST\"])\ndef dovecot_quota(ns, user_email):\n user = models.User.query.get(user_email) or flask.abort(404)\n if ns == \"storage\":\n user.quota_bytes_used = flask.request.get_json()\n user.dont_change_updated_at()\n models.db.session.commit()\n return flask.jsonify(None)\n\n\[email protected](\"/dovecot/sieve/name/<script>/<path:user_email>\")\ndef dovecot_sieve_name(script, user_email):\n return flask.jsonify(script)\n\n\[email protected](\"/dovecot/sieve/data/default/<path:user_email>\")\ndef dovecot_sieve_data(user_email):\n user = models.User.query.get(user_email) or flask.abort(404)\n return flask.jsonify(flask.render_template(\"default.sieve\", user=user))\n", "path": "core/admin/mailu/internal/views/dovecot.py"}]} | 1,269 | 129 |
gh_patches_debug_20106 | rasdani/github-patches | git_diff | microsoft__torchgeo-93 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Jupyter Notebook tutorials
We need to figure out how to render Jupyter Notebooks in our documentation so that we can provide easy-to-use tutorials for new users. This should work similarly to https://pytorch.org/tutorials/.
Ideally I would like to be able to test these tutorials so that they stay up-to-date.
</issue>
<code>
[start of docs/conf.py]
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # https://www.sphinx-doc.org/en/master/usage/configuration.html
6
7 # -- Path setup --------------------------------------------------------------
8
9 import os
10 import sys
11
12 import pytorch_sphinx_theme
13
14 # If extensions (or modules to document with autodoc) are in another directory,
15 # add these directories to sys.path here. If the directory is relative to the
16 # documentation root, use os.path.abspath to make it absolute, like shown here.
17 sys.path.insert(0, os.path.abspath(".."))
18
19 import torchgeo # noqa: E402
20
21 # -- Project information -----------------------------------------------------
22
23 project = "torchgeo"
24 copyright = "2021, Microsoft Corporation"
25 author = "Adam J. Stewart"
26 version = ".".join(torchgeo.__version__.split(".")[:2])
27 release = torchgeo.__version__
28
29
30 # -- General configuration ---------------------------------------------------
31
32 # Add any Sphinx extension module names here, as strings. They can be
33 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
34 # ones.
35 extensions = [
36 "sphinx.ext.autodoc",
37 "sphinx.ext.autosectionlabel",
38 "sphinx.ext.intersphinx",
39 "sphinx.ext.napoleon",
40 "sphinx.ext.todo",
41 "sphinx.ext.viewcode",
42 ]
43
44 # List of patterns, relative to source directory, that match files and
45 # directories to ignore when looking for source files.
46 # This pattern also affects html_static_path and html_extra_path.
47 exclude_patterns = ["_build"]
48
49 # Sphinx 3.0+ required for:
50 # autodoc_typehints = "description"
51 needs_sphinx = "3.0"
52
53 nitpicky = True
54 nitpick_ignore = [
55 # https://github.com/sphinx-doc/sphinx/issues/8127
56 ("py:class", ".."),
57 # TODO: can't figure out why this isn't found
58 ("py:class", "LightningDataModule"),
59 ]
60
61
62 # -- Options for HTML output -------------------------------------------------
63
64 # The theme to use for HTML and HTML Help pages. See the documentation for
65 # a list of builtin themes.
66 html_theme = "pytorch_sphinx_theme"
67 html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
68
69 # Theme options are theme-specific and customize the look and feel of a theme
70 # further. For a list of options available for each theme, see the
71 # documentation.
72 html_theme_options = {
73 "collapse_navigation": False,
74 "display_version": True,
75 "logo_only": True,
76 "pytorch_project": "docs",
77 "navigation_with_keys": True,
78 "analytics_id": "UA-117752657-2",
79 }
80
81 # -- Extension configuration -------------------------------------------------
82
83 # sphinx.ext.autodoc
84 autodoc_default_options = {
85 "members": True,
86 "special-members": True,
87 "show-inheritance": True,
88 }
89 autodoc_member_order = "bysource"
90 autodoc_typehints = "description"
91
92 # sphinx.ext.intersphinx
93 intersphinx_mapping = {
94 "python": ("https://docs.python.org/3", None),
95 "pytorch-lightning": ("https://pytorch-lightning.readthedocs.io/en/latest/", None),
96 "rasterio": ("https://rasterio.readthedocs.io/en/latest/", None),
97 "rtree": ("https://rtree.readthedocs.io/en/latest/", None),
98 "torch": ("https://pytorch.org/docs/stable", None),
99 }
100
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -34,11 +34,11 @@
# ones.
extensions = [
"sphinx.ext.autodoc",
- "sphinx.ext.autosectionlabel",
"sphinx.ext.intersphinx",
"sphinx.ext.napoleon",
"sphinx.ext.todo",
"sphinx.ext.viewcode",
+ "nbsphinx",
]
# List of patterns, relative to source directory, that match files and
@@ -97,3 +97,17 @@
"rtree": ("https://rtree.readthedocs.io/en/latest/", None),
"torch": ("https://pytorch.org/docs/stable", None),
}
+
+# nbsphinx
+nbsphinx_execute = "never"
+# TODO: branch/tag should change depending on which version of docs you look at
+# TODO: :width: may be broken
+nbsphinx_prolog = """
+{% set colab = "https://colab.research.google.com" %}
+{% set repo = "microsoft/torchgeo" %}
+{% set branch = "main" %}
+
+.. image:: {{ colab }}/assets/colab-badge.svg
+ :alt: Open in Colab
+ :target: {{ colab }}/github/{{ repo }}/blob/{{ branch }}/docs/{{ env.docname }}
+"""
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -34,11 +34,11 @@\n # ones.\n extensions = [\n \"sphinx.ext.autodoc\",\n- \"sphinx.ext.autosectionlabel\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.viewcode\",\n+ \"nbsphinx\",\n ]\n \n # List of patterns, relative to source directory, that match files and\n@@ -97,3 +97,17 @@\n \"rtree\": (\"https://rtree.readthedocs.io/en/latest/\", None),\n \"torch\": (\"https://pytorch.org/docs/stable\", None),\n }\n+\n+# nbsphinx\n+nbsphinx_execute = \"never\"\n+# TODO: branch/tag should change depending on which version of docs you look at\n+# TODO: :width: may be broken\n+nbsphinx_prolog = \"\"\"\n+{% set colab = \"https://colab.research.google.com\" %}\n+{% set repo = \"microsoft/torchgeo\" %}\n+{% set branch = \"main\" %}\n+\n+.. image:: {{ colab }}/assets/colab-badge.svg\n+ :alt: Open in Colab\n+ :target: {{ colab }}/github/{{ repo }}/blob/{{ branch }}/docs/{{ env.docname }}\n+\"\"\"\n", "issue": "Jupyter Notebook tutorials\nWe need to figure out how to render Jupyter Notebooks in our documentation so that we can provide easy-to-use tutorials for new users. This should work similarly to https://pytorch.org/tutorials/.\r\n\r\nIdeally I would like to be able to test these tutorials so that they stay up-to-date.\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\nimport os\nimport sys\n\nimport pytorch_sphinx_theme\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath(\"..\"))\n\nimport torchgeo # noqa: E402\n\n# -- Project information -----------------------------------------------------\n\nproject = \"torchgeo\"\ncopyright = \"2021, Microsoft Corporation\"\nauthor = \"Adam J. Stewart\"\nversion = \".\".join(torchgeo.__version__.split(\".\")[:2])\nrelease = torchgeo.__version__\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosectionlabel\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.viewcode\",\n]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\"_build\"]\n\n# Sphinx 3.0+ required for:\n# autodoc_typehints = \"description\"\nneeds_sphinx = \"3.0\"\n\nnitpicky = True\nnitpick_ignore = [\n # https://github.com/sphinx-doc/sphinx/issues/8127\n (\"py:class\", \"..\"),\n # TODO: can't figure out why this isn't found\n (\"py:class\", \"LightningDataModule\"),\n]\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = \"pytorch_sphinx_theme\"\nhtml_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n \"collapse_navigation\": False,\n \"display_version\": True,\n \"logo_only\": True,\n \"pytorch_project\": \"docs\",\n \"navigation_with_keys\": True,\n \"analytics_id\": \"UA-117752657-2\",\n}\n\n# -- Extension configuration -------------------------------------------------\n\n# sphinx.ext.autodoc\nautodoc_default_options = {\n \"members\": True,\n \"special-members\": True,\n \"show-inheritance\": True,\n}\nautodoc_member_order = \"bysource\"\nautodoc_typehints = \"description\"\n\n# sphinx.ext.intersphinx\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3\", None),\n \"pytorch-lightning\": (\"https://pytorch-lightning.readthedocs.io/en/latest/\", None),\n \"rasterio\": (\"https://rasterio.readthedocs.io/en/latest/\", None),\n \"rtree\": (\"https://rtree.readthedocs.io/en/latest/\", None),\n \"torch\": (\"https://pytorch.org/docs/stable\", None),\n}\n", "path": "docs/conf.py"}]} | 1,569 | 309 |
gh_patches_debug_32695 | rasdani/github-patches | git_diff | conan-io__conan-center-index-3023 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[package] jbig/20160605: Fails to build on iOS
<!--
Please don't forget to update the issue title.
Include all applicable information to help us reproduce your problem.
-->
### Package and Environment Details (include every applicable attribute)
* Package Name/Version: **jbig/20160605**
* Operating System+version: **iOS 11.0**
* Compiler+version: **apple-clang 11.0**
* Conan version: **conan 1.29.2**
* Python version: **Python 3.8.5**
### Conan profile
```
[settings]
arch=x86_64
arch_build=x86_64
build_type=Debug
compiler=apple-clang
compiler.cppstd=17
compiler.libcxx=libc++
compiler.version=11.0
os=iOS
os.version=11.0
os_build=Macos
[options]
[build_requires]
*: darwin-toolchain/1.0.8@theodelrieu/stable
[env]
```
### Steps to reproduce (Include if Applicable)
`conan install jbig/20160605@ --profile ios --build=missing`
### Logs (Include/Attach if Applicable)
<details><summary>Click to expand log</summary>
```
CMake Error at CMakeLists.txt:31 (install):
install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable
target "jbgtopbm".
```
</details>
I would suggest adding an option that disables the `pbmtojbg` and `jbgtopbm` targets from being generated. The recipe could define individual `build_` options for each, which other packages do, or go with a more generically named option that enables/disables both. For reference, `sqlite3`, `bzip2`, and `spirv-cross` have a `build_executable` option, while `glslang` has a `build_executables` option.
</issue>
<code>
[start of recipes/jbig/all/conanfile.py]
1 import os
2 import glob
3 from conans import ConanFile, CMake, tools
4
5
6 class ConanJBig(ConanFile):
7 name = "jbig"
8 url = "https://github.com/conan-io/conan-center-index"
9 homepage = "https://github.com/ImageMagick/jbig"
10 description = "jbig for the Windows build of ImageMagick"
11 topics = ("conan", "jbig", "imagemagick", "window", "graphic")
12 license = "GPL-2.0"
13 exports_sources = ['CMakeLists.txt', "*.patch"]
14 generators = 'cmake'
15 settings = "os", "arch", "compiler", "build_type"
16 options = {"shared": [True, False], "fPIC": [True, False]}
17 default_options = {"shared": False, "fPIC": True}
18
19 @property
20 def _source_subfolder(self):
21 return "source_subfolder"
22
23 @property
24 def _build_subfolder(self):
25 return "build_subfolder"
26
27 def source(self):
28 tools.get(**self.conan_data["sources"][self.version])
29 for extracted_dir in glob.glob("jbig-*"):
30 os.rename(extracted_dir, self._source_subfolder)
31
32 def config_options(self):
33 if self.settings.os == "Windows":
34 del self.options.fPIC
35
36 def configure(self):
37 del self.settings.compiler.libcxx
38 del self.settings.compiler.cppstd
39
40 def _configure_cmake(self):
41 cmake = CMake(self)
42 cmake.configure(build_folder=self._build_subfolder)
43 return cmake
44
45 def build(self):
46 for patch in self.conan_data["patches"][self.version]:
47 tools.patch(**patch)
48 cmake = self._configure_cmake()
49 cmake.build()
50
51 def package(self):
52 self.copy(pattern="COPYING", dst="licenses", src=self._source_subfolder)
53 cmake = self._configure_cmake()
54 cmake.install()
55
56 def package_info(self):
57 bindir = os.path.join(self.package_folder, "bin")
58 self.output.info("Appending PATH environment variable: {}".format(bindir))
59 self.env_info.PATH.append(bindir)
60 self.cpp_info.libs = [self.name]
61 if self.options.shared and self.settings.compiler == "Visual Studio":
62 self.cpp_info.defines = ["_JBIGDLL_"]
63
[end of recipes/jbig/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/jbig/all/conanfile.py b/recipes/jbig/all/conanfile.py
--- a/recipes/jbig/all/conanfile.py
+++ b/recipes/jbig/all/conanfile.py
@@ -13,8 +13,18 @@
exports_sources = ['CMakeLists.txt', "*.patch"]
generators = 'cmake'
settings = "os", "arch", "compiler", "build_type"
- options = {"shared": [True, False], "fPIC": [True, False]}
- default_options = {"shared": False, "fPIC": True}
+ options = {
+ "shared": [True, False],
+ "fPIC": [True, False],
+ "build_executables": [True, False]
+ }
+ default_options = {
+ "shared": False,
+ "fPIC": True,
+ "build_executables": True
+ }
+
+ _cmake = None
@property
def _source_subfolder(self):
@@ -38,9 +48,13 @@
del self.settings.compiler.cppstd
def _configure_cmake(self):
- cmake = CMake(self)
- cmake.configure(build_folder=self._build_subfolder)
- return cmake
+ if self._cmake:
+ return self._cmake
+
+ self._cmake = CMake(self)
+ self._cmake.definitions["BUILD_EXECUTABLES"] = self.options.build_executables
+ self._cmake.configure(build_folder=self._build_subfolder)
+ return self._cmake
def build(self):
for patch in self.conan_data["patches"][self.version]:
@@ -54,9 +68,11 @@
cmake.install()
def package_info(self):
- bindir = os.path.join(self.package_folder, "bin")
- self.output.info("Appending PATH environment variable: {}".format(bindir))
- self.env_info.PATH.append(bindir)
self.cpp_info.libs = [self.name]
if self.options.shared and self.settings.compiler == "Visual Studio":
self.cpp_info.defines = ["_JBIGDLL_"]
+
+ if self.options.build_executables:
+ bin_path = os.path.join(self.package_folder, "bin")
+ self.output.info("Appending PATH environment variable: {}".format(bin_path))
+ self.env_info.PATH.append(bin_path)
| {"golden_diff": "diff --git a/recipes/jbig/all/conanfile.py b/recipes/jbig/all/conanfile.py\n--- a/recipes/jbig/all/conanfile.py\n+++ b/recipes/jbig/all/conanfile.py\n@@ -13,8 +13,18 @@\n exports_sources = ['CMakeLists.txt', \"*.patch\"]\n generators = 'cmake'\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n- options = {\"shared\": [True, False], \"fPIC\": [True, False]}\n- default_options = {\"shared\": False, \"fPIC\": True}\n+ options = {\n+ \"shared\": [True, False],\n+ \"fPIC\": [True, False],\n+ \"build_executables\": [True, False]\n+ }\n+ default_options = {\n+ \"shared\": False,\n+ \"fPIC\": True,\n+ \"build_executables\": True\n+ }\n+\n+ _cmake = None\n \n @property\n def _source_subfolder(self):\n@@ -38,9 +48,13 @@\n del self.settings.compiler.cppstd\n \n def _configure_cmake(self):\n- cmake = CMake(self)\n- cmake.configure(build_folder=self._build_subfolder)\n- return cmake\n+ if self._cmake:\n+ return self._cmake\n+\n+ self._cmake = CMake(self)\n+ self._cmake.definitions[\"BUILD_EXECUTABLES\"] = self.options.build_executables\n+ self._cmake.configure(build_folder=self._build_subfolder)\n+ return self._cmake\n \n def build(self):\n for patch in self.conan_data[\"patches\"][self.version]:\n@@ -54,9 +68,11 @@\n cmake.install()\n \n def package_info(self):\n- bindir = os.path.join(self.package_folder, \"bin\")\n- self.output.info(\"Appending PATH environment variable: {}\".format(bindir))\n- self.env_info.PATH.append(bindir)\n self.cpp_info.libs = [self.name]\n if self.options.shared and self.settings.compiler == \"Visual Studio\":\n self.cpp_info.defines = [\"_JBIGDLL_\"]\n+\n+ if self.options.build_executables:\n+ bin_path = os.path.join(self.package_folder, \"bin\")\n+ self.output.info(\"Appending PATH environment variable: {}\".format(bin_path))\n+ self.env_info.PATH.append(bin_path)\n", "issue": "[package] jbig/20160605: Fails to build on iOS\n<!-- \r\n Please don't forget to update the issue title.\r\n Include all applicable information to help us reproduce your problem.\r\n-->\r\n\r\n### Package and Environment Details (include every applicable attribute)\r\n * Package Name/Version: **jbig/20160605**\r\n * Operating System+version: **iOS 11.0**\r\n * Compiler+version: **apple-clang 11.0**\r\n * Conan version: **conan 1.29.2**\r\n * Python version: **Python 3.8.5**\r\n\r\n### Conan profile\r\n```\r\n[settings]\r\narch=x86_64\r\narch_build=x86_64\r\nbuild_type=Debug\r\ncompiler=apple-clang\r\ncompiler.cppstd=17\r\ncompiler.libcxx=libc++\r\ncompiler.version=11.0\r\nos=iOS\r\nos.version=11.0\r\nos_build=Macos\r\n[options]\r\n[build_requires]\r\n*: darwin-toolchain/1.0.8@theodelrieu/stable\r\n[env]\r\n```\r\n\r\n\r\n### Steps to reproduce (Include if Applicable)\r\n\r\n`conan install jbig/20160605@ --profile ios --build=missing`\r\n\r\n### Logs (Include/Attach if Applicable)\r\n<details><summary>Click to expand log</summary>\r\n\r\n```\r\nCMake Error at CMakeLists.txt:31 (install):\r\n install TARGETS given no BUNDLE DESTINATION for MACOSX_BUNDLE executable\r\n target \"jbgtopbm\".\r\n```\r\n\r\n</details>\r\n\r\nI would suggest adding an option that disables the `pbmtojbg` and `jbgtopbm` targets from being generated. The recipe could define individual `build_` options for each, which other packages do, or go with a more generically named option that enables/disables both. For reference, `sqlite3`, `bzip2`, and `spirv-cross` have a `build_executable` option, while `glslang` has a `build_executables` option. \n", "before_files": [{"content": "import os\nimport glob\nfrom conans import ConanFile, CMake, tools\n\n\nclass ConanJBig(ConanFile):\n name = \"jbig\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/ImageMagick/jbig\"\n description = \"jbig for the Windows build of ImageMagick\"\n topics = (\"conan\", \"jbig\", \"imagemagick\", \"window\", \"graphic\")\n license = \"GPL-2.0\"\n exports_sources = ['CMakeLists.txt', \"*.patch\"]\n generators = 'cmake'\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\"shared\": [True, False], \"fPIC\": [True, False]}\n default_options = {\"shared\": False, \"fPIC\": True}\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n @property\n def _build_subfolder(self):\n return \"build_subfolder\"\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n for extracted_dir in glob.glob(\"jbig-*\"):\n os.rename(extracted_dir, self._source_subfolder)\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def _configure_cmake(self):\n cmake = CMake(self)\n cmake.configure(build_folder=self._build_subfolder)\n return cmake\n\n def build(self):\n for patch in self.conan_data[\"patches\"][self.version]:\n tools.patch(**patch)\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(pattern=\"COPYING\", dst=\"licenses\", src=self._source_subfolder)\n cmake = self._configure_cmake()\n cmake.install()\n\n def package_info(self):\n bindir = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH environment variable: {}\".format(bindir))\n self.env_info.PATH.append(bindir)\n self.cpp_info.libs = [self.name]\n if self.options.shared and self.settings.compiler == \"Visual Studio\":\n self.cpp_info.defines = [\"_JBIGDLL_\"]\n", "path": "recipes/jbig/all/conanfile.py"}]} | 1,625 | 536 |
gh_patches_debug_33865 | rasdani/github-patches | git_diff | cowrie__cowrie-1022 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cowrie not set up for py.test framework
So I tried running the test in both python2 and python3. For python2 all the tests were passing but for python3 there was some error.
```
py.test --cov=cowrie
===================================================================================== test session starts =====================================================================================
platform linux -- Python 3.7.2, pytest-4.2.0, py-1.7.0, pluggy-0.8.1
rootdir: /home/mzfr/dev/cowrie, inifile:
plugins: cov-2.6.1
collected 3 items / 3 errors
=========================================================================================== ERRORS ============================================================================================
___________________________________________________________________ ERROR collecting src/cowrie/test/test_base_commands.py ____________________________________________________________________
../shell/fs.py:26: in <module>
PICKLE = pickle.load(open(CONFIG.get('shell', 'filesystem'), 'rb'))
../core/config.py:29: in get
return super(EnvironmentConfigParser, self).get(section, option, **kwargs)
/usr/lib/python3.7/configparser.py:780: in get
d = self._unify_values(section, vars)
/usr/lib/python3.7/configparser.py:1146: in _unify_values
raise NoSectionError(section) from None
E configparser.NoSectionError: No section: 'shell'
During handling of the above exception, another exception occurred:
test_base_commands.py:12: in <module>
from cowrie.shell import protocol
../shell/protocol.py:21: in <module>
from cowrie.shell import command
../shell/command.py:20: in <module>
from cowrie.shell import fs
../shell/fs.py:29: in <module>
exit(2)
/usr/lib/python3.7/_sitebuiltins.py:26: in __call__
raise SystemExit(code)
E SystemExit: 2
--------------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------------
ERROR: Config file not found: etc/cowrie.cfg.dist
________________________________________________________________________ ERROR collecting src/cowrie/test/test_echo.py ________________________________________________________________________
../shell/fs.py:26: in <module>
PICKLE = pickle.load(open(CONFIG.get('shell', 'filesystem'), 'rb'))
../core/config.py:29: in get
return super(EnvironmentConfigParser, self).get(section, option, **kwargs)
/usr/lib/python3.7/configparser.py:780: in get
d = self._unify_values(section, vars)
/usr/lib/python3.7/configparser.py:1146: in _unify_values
raise NoSectionError(section) from None
E configparser.NoSectionError: No section: 'shell'
During handling of the above exception, another exception occurred:
test_echo.py:16: in <module>
from cowrie.shell import protocol
../shell/protocol.py:21: in <module>
from cowrie.shell import command
../shell/command.py:20: in <module>
from cowrie.shell import fs
../shell/fs.py:29: in <module>
exit(2)
/usr/lib/python3.7/_sitebuiltins.py:26: in __call__
raise SystemExit(code)
E SystemExit: 2
--------------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------------
ERROR: Config file not found: etc/cowrie.cfg.dist
________________________________________________________________________ ERROR collecting src/cowrie/test/test_tftp.py ________________________________________________________________________
../shell/fs.py:26: in <module>
PICKLE = pickle.load(open(CONFIG.get('shell', 'filesystem'), 'rb'))
../core/config.py:29: in get
return super(EnvironmentConfigParser, self).get(section, option, **kwargs)
/usr/lib/python3.7/configparser.py:780: in get
d = self._unify_values(section, vars)
/usr/lib/python3.7/configparser.py:1146: in _unify_values
raise NoSectionError(section) from None
E configparser.NoSectionError: No section: 'shell'
During handling of the above exception, another exception occurred:
test_tftp.py:16: in <module>
from cowrie.shell import protocol
../shell/protocol.py:21: in <module>
from cowrie.shell import command
../shell/command.py:20: in <module>
from cowrie.shell import fs
../shell/fs.py:29: in <module>
exit(2)
/usr/lib/python3.7/_sitebuiltins.py:26: in __call__
raise SystemExit(code)
E SystemExit: 2
--------------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------------
ERROR: Config file not found: etc/cowrie.cfg.dist
```
</issue>
<code>
[start of src/cowrie/core/config.py]
1 # Copyright (c) 2009-2014 Upi Tamminen <[email protected]>
2 # See the COPYRIGHT file for more information
3
4 """
5 This module contains ...
6 """
7
8 from __future__ import absolute_import, division
9
10 import configparser
11 import os
12
13
14 def to_environ_key(key):
15 return key.upper()
16
17
18 class EnvironmentConfigParser(configparser.ConfigParser):
19
20 def has_option(self, section, option):
21 if to_environ_key('_'.join((section, option))) in os.environ:
22 return True
23 return super(EnvironmentConfigParser, self).has_option(section, option)
24
25 def get(self, section, option, **kwargs):
26 key = to_environ_key('_'.join((section, option)))
27 if key in os.environ:
28 return os.environ[key]
29 return super(EnvironmentConfigParser, self).get(section, option, **kwargs)
30
31
32 def readConfigFile(cfgfile):
33 """
34 Read config files and return ConfigParser object
35
36 @param cfgfile: filename or array of filenames
37 @return: ConfigParser object
38 """
39 parser = EnvironmentConfigParser(interpolation=configparser.ExtendedInterpolation())
40 parser.read(cfgfile)
41 return parser
42
43
44 CONFIG = readConfigFile(("etc/cowrie.cfg.dist", "/etc/cowrie/cowrie.cfg", "etc/cowrie.cfg", "cowrie.cfg"))
45
[end of src/cowrie/core/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cowrie/core/config.py b/src/cowrie/core/config.py
--- a/src/cowrie/core/config.py
+++ b/src/cowrie/core/config.py
@@ -8,7 +8,8 @@
from __future__ import absolute_import, division
import configparser
-import os
+from os import environ
+from os.path import abspath, dirname, exists, join
def to_environ_key(key):
@@ -18,15 +19,16 @@
class EnvironmentConfigParser(configparser.ConfigParser):
def has_option(self, section, option):
- if to_environ_key('_'.join((section, option))) in os.environ:
+ if to_environ_key('_'.join((section, option))) in environ:
return True
return super(EnvironmentConfigParser, self).has_option(section, option)
def get(self, section, option, **kwargs):
key = to_environ_key('_'.join((section, option)))
- if key in os.environ:
- return os.environ[key]
- return super(EnvironmentConfigParser, self).get(section, option, **kwargs)
+ if key in environ:
+ return environ[key]
+ return super(EnvironmentConfigParser, self).get(
+ section, option, **kwargs)
def readConfigFile(cfgfile):
@@ -36,9 +38,26 @@
@param cfgfile: filename or array of filenames
@return: ConfigParser object
"""
- parser = EnvironmentConfigParser(interpolation=configparser.ExtendedInterpolation())
+ parser = EnvironmentConfigParser(
+ interpolation=configparser.ExtendedInterpolation())
parser.read(cfgfile)
return parser
-CONFIG = readConfigFile(("etc/cowrie.cfg.dist", "/etc/cowrie/cowrie.cfg", "etc/cowrie.cfg", "cowrie.cfg"))
+def get_config_path():
+ """Get absolute path to the config file
+ """
+ config_files = ["etc/cowrie/cowrie.cfg", "etc/cowrie.cfg",
+ "cowrie.cfg", "etc/cowrie.cfg.dist"]
+ current_path = abspath(dirname(__file__))
+ root = "/".join(current_path.split("/")[:-3])
+
+ for file in config_files:
+ absolute_path = join(root, file)
+ if exists(absolute_path):
+ return absolute_path
+
+ print("Config file not found")
+
+
+CONFIG = readConfigFile(get_config_path())
| {"golden_diff": "diff --git a/src/cowrie/core/config.py b/src/cowrie/core/config.py\n--- a/src/cowrie/core/config.py\n+++ b/src/cowrie/core/config.py\n@@ -8,7 +8,8 @@\n from __future__ import absolute_import, division\n \n import configparser\n-import os\n+from os import environ\n+from os.path import abspath, dirname, exists, join\n \n \n def to_environ_key(key):\n@@ -18,15 +19,16 @@\n class EnvironmentConfigParser(configparser.ConfigParser):\n \n def has_option(self, section, option):\n- if to_environ_key('_'.join((section, option))) in os.environ:\n+ if to_environ_key('_'.join((section, option))) in environ:\n return True\n return super(EnvironmentConfigParser, self).has_option(section, option)\n \n def get(self, section, option, **kwargs):\n key = to_environ_key('_'.join((section, option)))\n- if key in os.environ:\n- return os.environ[key]\n- return super(EnvironmentConfigParser, self).get(section, option, **kwargs)\n+ if key in environ:\n+ return environ[key]\n+ return super(EnvironmentConfigParser, self).get(\n+ section, option, **kwargs)\n \n \n def readConfigFile(cfgfile):\n@@ -36,9 +38,26 @@\n @param cfgfile: filename or array of filenames\n @return: ConfigParser object\n \"\"\"\n- parser = EnvironmentConfigParser(interpolation=configparser.ExtendedInterpolation())\n+ parser = EnvironmentConfigParser(\n+ interpolation=configparser.ExtendedInterpolation())\n parser.read(cfgfile)\n return parser\n \n \n-CONFIG = readConfigFile((\"etc/cowrie.cfg.dist\", \"/etc/cowrie/cowrie.cfg\", \"etc/cowrie.cfg\", \"cowrie.cfg\"))\n+def get_config_path():\n+ \"\"\"Get absolute path to the config file\n+ \"\"\"\n+ config_files = [\"etc/cowrie/cowrie.cfg\", \"etc/cowrie.cfg\",\n+ \"cowrie.cfg\", \"etc/cowrie.cfg.dist\"]\n+ current_path = abspath(dirname(__file__))\n+ root = \"/\".join(current_path.split(\"/\")[:-3])\n+\n+ for file in config_files:\n+ absolute_path = join(root, file)\n+ if exists(absolute_path):\n+ return absolute_path\n+\n+ print(\"Config file not found\")\n+\n+\n+CONFIG = readConfigFile(get_config_path())\n", "issue": "Cowrie not set up for py.test framework\nSo I tried running the test in both python2 and python3. For python2 all the tests were passing but for python3 there was some error.\r\n\r\n```\r\n py.test --cov=cowrie \r\n===================================================================================== test session starts =====================================================================================\r\nplatform linux -- Python 3.7.2, pytest-4.2.0, py-1.7.0, pluggy-0.8.1\r\nrootdir: /home/mzfr/dev/cowrie, inifile:\r\nplugins: cov-2.6.1\r\ncollected 3 items / 3 errors \r\n\r\n=========================================================================================== ERRORS ============================================================================================\r\n___________________________________________________________________ ERROR collecting src/cowrie/test/test_base_commands.py ____________________________________________________________________\r\n../shell/fs.py:26: in <module>\r\n PICKLE = pickle.load(open(CONFIG.get('shell', 'filesystem'), 'rb'))\r\n../core/config.py:29: in get\r\n return super(EnvironmentConfigParser, self).get(section, option, **kwargs)\r\n/usr/lib/python3.7/configparser.py:780: in get\r\n d = self._unify_values(section, vars)\r\n/usr/lib/python3.7/configparser.py:1146: in _unify_values\r\n raise NoSectionError(section) from None\r\nE configparser.NoSectionError: No section: 'shell'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\ntest_base_commands.py:12: in <module>\r\n from cowrie.shell import protocol\r\n../shell/protocol.py:21: in <module>\r\n from cowrie.shell import command\r\n../shell/command.py:20: in <module>\r\n from cowrie.shell import fs\r\n../shell/fs.py:29: in <module>\r\n exit(2)\r\n/usr/lib/python3.7/_sitebuiltins.py:26: in __call__\r\n raise SystemExit(code)\r\nE SystemExit: 2\r\n--------------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------------\r\nERROR: Config file not found: etc/cowrie.cfg.dist\r\n________________________________________________________________________ ERROR collecting src/cowrie/test/test_echo.py ________________________________________________________________________\r\n../shell/fs.py:26: in <module>\r\n PICKLE = pickle.load(open(CONFIG.get('shell', 'filesystem'), 'rb'))\r\n../core/config.py:29: in get\r\n return super(EnvironmentConfigParser, self).get(section, option, **kwargs)\r\n/usr/lib/python3.7/configparser.py:780: in get\r\n d = self._unify_values(section, vars)\r\n/usr/lib/python3.7/configparser.py:1146: in _unify_values\r\n raise NoSectionError(section) from None\r\nE configparser.NoSectionError: No section: 'shell'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\ntest_echo.py:16: in <module>\r\n from cowrie.shell import protocol\r\n../shell/protocol.py:21: in <module>\r\n from cowrie.shell import command\r\n../shell/command.py:20: in <module>\r\n from cowrie.shell import fs\r\n../shell/fs.py:29: in <module>\r\n exit(2)\r\n/usr/lib/python3.7/_sitebuiltins.py:26: in __call__\r\n raise SystemExit(code)\r\nE SystemExit: 2\r\n--------------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------------\r\nERROR: Config file not found: etc/cowrie.cfg.dist\r\n________________________________________________________________________ ERROR collecting src/cowrie/test/test_tftp.py ________________________________________________________________________\r\n../shell/fs.py:26: in <module>\r\n PICKLE = pickle.load(open(CONFIG.get('shell', 'filesystem'), 'rb'))\r\n../core/config.py:29: in get\r\n return super(EnvironmentConfigParser, self).get(section, option, **kwargs)\r\n/usr/lib/python3.7/configparser.py:780: in get\r\n d = self._unify_values(section, vars)\r\n/usr/lib/python3.7/configparser.py:1146: in _unify_values\r\n raise NoSectionError(section) from None\r\nE configparser.NoSectionError: No section: 'shell'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\ntest_tftp.py:16: in <module>\r\n from cowrie.shell import protocol\r\n../shell/protocol.py:21: in <module>\r\n from cowrie.shell import command\r\n../shell/command.py:20: in <module>\r\n from cowrie.shell import fs\r\n../shell/fs.py:29: in <module>\r\n exit(2)\r\n/usr/lib/python3.7/_sitebuiltins.py:26: in __call__\r\n raise SystemExit(code)\r\nE SystemExit: 2\r\n--------------------------------------------------------------------------------------- Captured stdout ---------------------------------------------------------------------------------------\r\nERROR: Config file not found: etc/cowrie.cfg.dist\r\n```\n", "before_files": [{"content": "# Copyright (c) 2009-2014 Upi Tamminen <[email protected]>\n# See the COPYRIGHT file for more information\n\n\"\"\"\nThis module contains ...\n\"\"\"\n\nfrom __future__ import absolute_import, division\n\nimport configparser\nimport os\n\n\ndef to_environ_key(key):\n return key.upper()\n\n\nclass EnvironmentConfigParser(configparser.ConfigParser):\n\n def has_option(self, section, option):\n if to_environ_key('_'.join((section, option))) in os.environ:\n return True\n return super(EnvironmentConfigParser, self).has_option(section, option)\n\n def get(self, section, option, **kwargs):\n key = to_environ_key('_'.join((section, option)))\n if key in os.environ:\n return os.environ[key]\n return super(EnvironmentConfigParser, self).get(section, option, **kwargs)\n\n\ndef readConfigFile(cfgfile):\n \"\"\"\n Read config files and return ConfigParser object\n\n @param cfgfile: filename or array of filenames\n @return: ConfigParser object\n \"\"\"\n parser = EnvironmentConfigParser(interpolation=configparser.ExtendedInterpolation())\n parser.read(cfgfile)\n return parser\n\n\nCONFIG = readConfigFile((\"etc/cowrie.cfg.dist\", \"/etc/cowrie/cowrie.cfg\", \"etc/cowrie.cfg\", \"cowrie.cfg\"))\n", "path": "src/cowrie/core/config.py"}]} | 1,935 | 541 |
gh_patches_debug_15272 | rasdani/github-patches | git_diff | chainer__chainer-1539 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Invalid CuPy cache problem with different version of CUDAs
When a user update CUDA, caches of CuPy for old CUDA is sometimes incompatible with new one. We need to check CUDA version and to store kernel cache with its information.
@cosmo__ reported this problem on Twitter. Thank you!
</issue>
<code>
[start of cupy/cuda/compiler.py]
1 import hashlib
2 import os
3 import re
4 import subprocess
5 import sys
6 import tempfile
7
8 import filelock
9 import six
10
11 from cupy.cuda import device
12 from cupy.cuda import function
13
14
15 def _get_arch():
16 cc = device.Device().compute_capability
17 return 'sm_%s' % cc
18
19
20 class TemporaryDirectory(object):
21
22 def __enter__(self):
23 self.path = tempfile.mkdtemp()
24 return self.path
25
26 def __exit__(self, exc_type, exc_value, traceback):
27 if exc_value is not None:
28 return
29
30 for name in os.listdir(self.path):
31 os.unlink(os.path.join(self.path, name))
32 os.rmdir(self.path)
33
34
35 def _run_nvcc(cmd, cwd):
36 try:
37 return subprocess.check_output(cmd, cwd=cwd, stderr=subprocess.STDOUT)
38 except subprocess.CalledProcessError as e:
39 msg = ('`nvcc` command returns non-zero exit status. \n'
40 'command: {0}\n'
41 'return-code: {1}\n'
42 'stdout/stderr: \n'
43 '{2}'.format(e.cmd, e.returncode, e.output))
44 raise RuntimeError(msg)
45 except OSError as e:
46 msg = 'Failed to run `nvcc` command. ' \
47 'Check PATH environment variable: ' \
48 + str(e)
49 raise OSError(msg)
50
51
52 def nvcc(source, options=(), arch=None):
53 if not arch:
54 arch = _get_arch()
55 cmd = ['nvcc', '--cubin', '-arch', arch] + list(options)
56
57 with TemporaryDirectory() as root_dir:
58 path = os.path.join(root_dir, 'kern')
59 cu_path = '%s.cu' % path
60 cubin_path = '%s.cubin' % path
61
62 with open(cu_path, 'w') as cu_file:
63 cu_file.write(source)
64
65 cmd.append(cu_path)
66 _run_nvcc(cmd, root_dir)
67
68 with open(cubin_path, 'rb') as bin_file:
69 return bin_file.read()
70
71
72 def preprocess(source, options=()):
73 cmd = ['nvcc', '--preprocess'] + list(options)
74 with TemporaryDirectory() as root_dir:
75 path = os.path.join(root_dir, 'kern')
76 cu_path = '%s.cu' % path
77
78 with open(cu_path, 'w') as cu_file:
79 cu_file.write(source)
80
81 cmd.append(cu_path)
82 pp_src = _run_nvcc(cmd, root_dir)
83
84 if isinstance(pp_src, six.binary_type):
85 pp_src = pp_src.decode('utf-8')
86 return re.sub('(?m)^#.*$', '', pp_src)
87
88
89 _default_cache_dir = os.path.expanduser('~/.cupy/kernel_cache')
90
91
92 def get_cache_dir():
93 return os.environ.get('CUPY_CACHE_DIR', _default_cache_dir)
94
95
96 _empty_file_preprocess_cache = {}
97
98
99 def compile_with_cache(source, options=(), arch=None, cache_dir=None):
100 global _empty_file_preprocess_cache
101 if cache_dir is None:
102 cache_dir = get_cache_dir()
103 if arch is None:
104 arch = _get_arch()
105
106 if 'win32' == sys.platform:
107 options += ('-Xcompiler', '/wd 4819')
108 if sys.maxsize == 9223372036854775807:
109 options += '-m64',
110 elif sys.maxsize == 2147483647:
111 options += '-m32',
112
113 env = (arch, options)
114 if '#include' in source:
115 pp_src = '%s %s' % (env, preprocess(source, options))
116 else:
117 base = _empty_file_preprocess_cache.get(env, None)
118 if base is None:
119 base = _empty_file_preprocess_cache[env] = preprocess('', options)
120 pp_src = '%s %s %s' % (env, base, source)
121
122 if isinstance(pp_src, six.text_type):
123 pp_src = pp_src.encode('utf-8')
124 name = '%s.cubin' % hashlib.md5(pp_src).hexdigest()
125
126 mod = function.Module()
127
128 if not os.path.exists(cache_dir):
129 os.makedirs(cache_dir)
130
131 lock_path = os.path.join(cache_dir, 'lock_file.lock')
132
133 path = os.path.join(cache_dir, name)
134 with filelock.FileLock(lock_path) as lock:
135 if os.path.exists(path):
136 with open(path, 'rb') as file:
137 cubin = file.read()
138 mod.load(cubin)
139 else:
140 lock.release()
141 cubin = nvcc(source, options, arch)
142 mod.load(cubin)
143 lock.acquire()
144 with open(path, 'wb') as cubin_file:
145 cubin_file.write(cubin)
146
147 return mod
148
[end of cupy/cuda/compiler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cupy/cuda/compiler.py b/cupy/cuda/compiler.py
--- a/cupy/cuda/compiler.py
+++ b/cupy/cuda/compiler.py
@@ -12,6 +12,18 @@
from cupy.cuda import function
+_nvcc_version = None
+
+
+def _get_nvcc_version():
+ global _nvcc_version
+ if _nvcc_version is None:
+ cmd = ['nvcc', '--version']
+ _nvcc_version = _run_nvcc(cmd, '.')
+
+ return _nvcc_version
+
+
def _get_arch():
cc = device.Device().compute_capability
return 'sm_%s' % cc
@@ -110,7 +122,7 @@
elif sys.maxsize == 2147483647:
options += '-m32',
- env = (arch, options)
+ env = (arch, options, _get_nvcc_version())
if '#include' in source:
pp_src = '%s %s' % (env, preprocess(source, options))
else:
| {"golden_diff": "diff --git a/cupy/cuda/compiler.py b/cupy/cuda/compiler.py\n--- a/cupy/cuda/compiler.py\n+++ b/cupy/cuda/compiler.py\n@@ -12,6 +12,18 @@\n from cupy.cuda import function\n \n \n+_nvcc_version = None\n+\n+\n+def _get_nvcc_version():\n+ global _nvcc_version\n+ if _nvcc_version is None:\n+ cmd = ['nvcc', '--version']\n+ _nvcc_version = _run_nvcc(cmd, '.')\n+\n+ return _nvcc_version\n+\n+\n def _get_arch():\n cc = device.Device().compute_capability\n return 'sm_%s' % cc\n@@ -110,7 +122,7 @@\n elif sys.maxsize == 2147483647:\n options += '-m32',\n \n- env = (arch, options)\n+ env = (arch, options, _get_nvcc_version())\n if '#include' in source:\n pp_src = '%s %s' % (env, preprocess(source, options))\n else:\n", "issue": "Invalid CuPy cache problem with different version of CUDAs\nWhen a user update CUDA, caches of CuPy for old CUDA is sometimes incompatible with new one. We need to check CUDA version and to store kernel cache with its information.\n\n@cosmo__ reported this problem on Twitter. Thank you!\n\n", "before_files": [{"content": "import hashlib\nimport os\nimport re\nimport subprocess\nimport sys\nimport tempfile\n\nimport filelock\nimport six\n\nfrom cupy.cuda import device\nfrom cupy.cuda import function\n\n\ndef _get_arch():\n cc = device.Device().compute_capability\n return 'sm_%s' % cc\n\n\nclass TemporaryDirectory(object):\n\n def __enter__(self):\n self.path = tempfile.mkdtemp()\n return self.path\n\n def __exit__(self, exc_type, exc_value, traceback):\n if exc_value is not None:\n return\n\n for name in os.listdir(self.path):\n os.unlink(os.path.join(self.path, name))\n os.rmdir(self.path)\n\n\ndef _run_nvcc(cmd, cwd):\n try:\n return subprocess.check_output(cmd, cwd=cwd, stderr=subprocess.STDOUT)\n except subprocess.CalledProcessError as e:\n msg = ('`nvcc` command returns non-zero exit status. \\n'\n 'command: {0}\\n'\n 'return-code: {1}\\n'\n 'stdout/stderr: \\n'\n '{2}'.format(e.cmd, e.returncode, e.output))\n raise RuntimeError(msg)\n except OSError as e:\n msg = 'Failed to run `nvcc` command. ' \\\n 'Check PATH environment variable: ' \\\n + str(e)\n raise OSError(msg)\n\n\ndef nvcc(source, options=(), arch=None):\n if not arch:\n arch = _get_arch()\n cmd = ['nvcc', '--cubin', '-arch', arch] + list(options)\n\n with TemporaryDirectory() as root_dir:\n path = os.path.join(root_dir, 'kern')\n cu_path = '%s.cu' % path\n cubin_path = '%s.cubin' % path\n\n with open(cu_path, 'w') as cu_file:\n cu_file.write(source)\n\n cmd.append(cu_path)\n _run_nvcc(cmd, root_dir)\n\n with open(cubin_path, 'rb') as bin_file:\n return bin_file.read()\n\n\ndef preprocess(source, options=()):\n cmd = ['nvcc', '--preprocess'] + list(options)\n with TemporaryDirectory() as root_dir:\n path = os.path.join(root_dir, 'kern')\n cu_path = '%s.cu' % path\n\n with open(cu_path, 'w') as cu_file:\n cu_file.write(source)\n\n cmd.append(cu_path)\n pp_src = _run_nvcc(cmd, root_dir)\n\n if isinstance(pp_src, six.binary_type):\n pp_src = pp_src.decode('utf-8')\n return re.sub('(?m)^#.*$', '', pp_src)\n\n\n_default_cache_dir = os.path.expanduser('~/.cupy/kernel_cache')\n\n\ndef get_cache_dir():\n return os.environ.get('CUPY_CACHE_DIR', _default_cache_dir)\n\n\n_empty_file_preprocess_cache = {}\n\n\ndef compile_with_cache(source, options=(), arch=None, cache_dir=None):\n global _empty_file_preprocess_cache\n if cache_dir is None:\n cache_dir = get_cache_dir()\n if arch is None:\n arch = _get_arch()\n\n if 'win32' == sys.platform:\n options += ('-Xcompiler', '/wd 4819')\n if sys.maxsize == 9223372036854775807:\n options += '-m64',\n elif sys.maxsize == 2147483647:\n options += '-m32',\n\n env = (arch, options)\n if '#include' in source:\n pp_src = '%s %s' % (env, preprocess(source, options))\n else:\n base = _empty_file_preprocess_cache.get(env, None)\n if base is None:\n base = _empty_file_preprocess_cache[env] = preprocess('', options)\n pp_src = '%s %s %s' % (env, base, source)\n\n if isinstance(pp_src, six.text_type):\n pp_src = pp_src.encode('utf-8')\n name = '%s.cubin' % hashlib.md5(pp_src).hexdigest()\n\n mod = function.Module()\n\n if not os.path.exists(cache_dir):\n os.makedirs(cache_dir)\n\n lock_path = os.path.join(cache_dir, 'lock_file.lock')\n\n path = os.path.join(cache_dir, name)\n with filelock.FileLock(lock_path) as lock:\n if os.path.exists(path):\n with open(path, 'rb') as file:\n cubin = file.read()\n mod.load(cubin)\n else:\n lock.release()\n cubin = nvcc(source, options, arch)\n mod.load(cubin)\n lock.acquire()\n with open(path, 'wb') as cubin_file:\n cubin_file.write(cubin)\n\n return mod\n", "path": "cupy/cuda/compiler.py"}]} | 2,010 | 248 |
gh_patches_debug_63916 | rasdani/github-patches | git_diff | tensorflow__addons-897 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Nightly build break
**System information**
- TensorFlow version and how it was installed (source or binary): tf-nightly-**2.2.0.dev20200115**
- TensorFlow-Addons version and how it was installed (source or binary): tfa-nightly-**0.8.0.dev20200115**
**Describe the bug**
Hi, it looks like [this commit](https://github.com/tensorflow/addons/commit/3aae7732998cb233234a2948010b9aaafc24e920) causes the latest nightly build to fail on import
```
----> 1 import tensorflow_addons
/usr/local/lib/python3.6/dist-packages/tensorflow_addons/__init__.py in <module>()
30
31 # Cleanup symbols to avoid polluting namespace.
---> 32 del absolute_import
33 del division
34 del print_function
NameError: name 'absolute_import' is not defined
```
@seanpmorgan
**Code to reproduce the issue**
[colab](https://colab.research.google.com/drive/1fxRshVv0FPJNHdOqWC4GySjPJ_TdJTJU#scrollTo=TTC3gzRLRAvY)
</issue>
<code>
[start of tensorflow_addons/__init__.py]
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Useful extra functionality for TensorFlow maintained by SIG-addons."""
16
17 # Local project imports
18 from tensorflow_addons import activations
19 from tensorflow_addons import callbacks
20 from tensorflow_addons import image
21 from tensorflow_addons import layers
22 from tensorflow_addons import losses
23 from tensorflow_addons import metrics
24 from tensorflow_addons import optimizers
25 from tensorflow_addons import rnn
26 from tensorflow_addons import seq2seq
27 from tensorflow_addons import text
28
29 from tensorflow_addons.version import __version__
30
31 # Cleanup symbols to avoid polluting namespace.
32 del absolute_import
33 del division
34 del print_function
35
[end of tensorflow_addons/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tensorflow_addons/__init__.py b/tensorflow_addons/__init__.py
--- a/tensorflow_addons/__init__.py
+++ b/tensorflow_addons/__init__.py
@@ -27,8 +27,3 @@
from tensorflow_addons import text
from tensorflow_addons.version import __version__
-
-# Cleanup symbols to avoid polluting namespace.
-del absolute_import
-del division
-del print_function
| {"golden_diff": "diff --git a/tensorflow_addons/__init__.py b/tensorflow_addons/__init__.py\n--- a/tensorflow_addons/__init__.py\n+++ b/tensorflow_addons/__init__.py\n@@ -27,8 +27,3 @@\n from tensorflow_addons import text\n \n from tensorflow_addons.version import __version__\n-\n-# Cleanup symbols to avoid polluting namespace.\n-del absolute_import\n-del division\n-del print_function\n", "issue": "Nightly build break\n**System information**\r\n- TensorFlow version and how it was installed (source or binary): tf-nightly-**2.2.0.dev20200115** \r\n- TensorFlow-Addons version and how it was installed (source or binary): tfa-nightly-**0.8.0.dev20200115**\r\n\r\n**Describe the bug**\r\nHi, it looks like [this commit](https://github.com/tensorflow/addons/commit/3aae7732998cb233234a2948010b9aaafc24e920) causes the latest nightly build to fail on import\r\n\r\n```\r\n----> 1 import tensorflow_addons\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_addons/__init__.py in <module>()\r\n 30 \r\n 31 # Cleanup symbols to avoid polluting namespace.\r\n---> 32 del absolute_import\r\n 33 del division\r\n 34 del print_function\r\n\r\nNameError: name 'absolute_import' is not defined\r\n```\r\n@seanpmorgan \r\n\r\n**Code to reproduce the issue**\r\n[colab](https://colab.research.google.com/drive/1fxRshVv0FPJNHdOqWC4GySjPJ_TdJTJU#scrollTo=TTC3gzRLRAvY)\r\n\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Useful extra functionality for TensorFlow maintained by SIG-addons.\"\"\"\n\n# Local project imports\nfrom tensorflow_addons import activations\nfrom tensorflow_addons import callbacks\nfrom tensorflow_addons import image\nfrom tensorflow_addons import layers\nfrom tensorflow_addons import losses\nfrom tensorflow_addons import metrics\nfrom tensorflow_addons import optimizers\nfrom tensorflow_addons import rnn\nfrom tensorflow_addons import seq2seq\nfrom tensorflow_addons import text\n\nfrom tensorflow_addons.version import __version__\n\n# Cleanup symbols to avoid polluting namespace.\ndel absolute_import\ndel division\ndel print_function\n", "path": "tensorflow_addons/__init__.py"}]} | 1,170 | 99 |
gh_patches_debug_989 | rasdani/github-patches | git_diff | hydroshare__hydroshare-5098 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Haystack rest endpoint response serializer does not include short_id
**Description of the bug**
The Haystack REST endpoint for complex solr searches does not include the short_id into the response serializer. This is a critical piece of information for users of this endpoint.
Steps to reproduce the bug:
https://github.com/hydroshare/hydroshare/blob/d3bd1737a0179eac74cd68926b3b79b80894410e/hs_rest_api/discovery.py#L12
**Expected behavior**
I expect resource ids to be included with search results so I can retrieve resources.
</issue>
<code>
[start of hs_rest_api/discovery.py]
1 from drf_haystack.serializers import HaystackSerializer
2 from drf_haystack.viewsets import HaystackViewSet
3 from hs_core.search_indexes import BaseResourceIndex
4 from hs_core.models import BaseResource
5 from drf_haystack.fields import HaystackCharField, HaystackDateField, HaystackMultiValueField, \
6 HaystackFloatField
7 from drf_yasg.utils import swagger_auto_schema
8 from rest_framework.decorators import action
9 from rest_framework import serializers
10
11
12 class DiscoveryResourceSerializer(HaystackSerializer):
13 class Meta:
14 index_classes = [BaseResourceIndex]
15 fields = [
16 "title",
17 "author",
18 "contributor",
19 "subject",
20 "abstract",
21 "resource_type",
22 "content_type",
23 "coverage_type",
24 "availability",
25 "created",
26 "modified",
27 "start_date",
28 "end_date",
29 "east",
30 "north",
31 "eastlimit",
32 "westlimit",
33 "northlimit",
34 "southlimit"
35 ]
36
37
38 class DiscoverResourceValidator(serializers.Serializer):
39 text = HaystackCharField(required=False,
40 help_text='Search across all Resource Fields')
41 author = HaystackCharField(required=False,
42 help_text='Search by author')
43 contributor = HaystackMultiValueField(required=False,
44 help_text='Search by contributor')
45 subject = HaystackMultiValueField(required=False,
46 help_text='Search within subject keywords')
47 abstract = HaystackCharField(required=False,
48 help_text='Search within the abstract')
49 resource_type = HaystackCharField(required=False,
50 help_text='Search by resource type')
51 content_type = HaystackMultiValueField(required=False,
52 help_text='Search by content type')
53 coverage_type = HaystackMultiValueField(required=False,
54 help_text='Search by coverage type '
55 '(point, box, period)')
56 availability = HaystackMultiValueField(required=False,
57 help_text='Search by availability '
58 '(discoverable, public, published)')
59 created = HaystackDateField(required=False,
60 help_text='Search by created date')
61 modified = HaystackDateField(required=False,
62 help_text='Search by modified date')
63 start_date = HaystackDateField(required=False,
64 help_text='Search by start date')
65 end_date = HaystackDateField(required=False,
66 help_text='Search by end date')
67 east = HaystackFloatField(required=False,
68 help_text='Search by location or box center east longitude')
69 north = HaystackFloatField(required=False,
70 help_text='Search by location or box center north latitude')
71 eastlimit = HaystackFloatField(required=False,
72 help_text='Search by east limit longitude')
73 westlimit = HaystackFloatField(required=False,
74 help_text='Search by west limit longitude')
75 northlimit = HaystackFloatField(required=False,
76 help_text='Search by north limit latitude')
77 southlimit = HaystackFloatField(required=False,
78 help_text='Search by south limit latitude')
79
80
81 class DiscoverSearchView(HaystackViewSet):
82 index_models = [BaseResource]
83 serializer_class = DiscoveryResourceSerializer
84
85 @action(detail=True, methods=['get'])
86 @swagger_auto_schema(operation_description="Search HydroShare Resources using solr conventions."
87 "We use haystack for queries so you can use all of "
88 "the parameters described here in combination with "
89 "field lookups "
90 "https://django-haystack.readthedocs.io/en/latest/"
91 "searchqueryset_api.html?highlight=lookups#id1",
92 query_serializer=DiscoverResourceValidator)
93 def list(self, request):
94 return super(DiscoverSearchView, self).list(request)
95
[end of hs_rest_api/discovery.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/hs_rest_api/discovery.py b/hs_rest_api/discovery.py
--- a/hs_rest_api/discovery.py
+++ b/hs_rest_api/discovery.py
@@ -13,6 +13,7 @@
class Meta:
index_classes = [BaseResourceIndex]
fields = [
+ "short_id",
"title",
"author",
"contributor",
| {"golden_diff": "diff --git a/hs_rest_api/discovery.py b/hs_rest_api/discovery.py\n--- a/hs_rest_api/discovery.py\n+++ b/hs_rest_api/discovery.py\n@@ -13,6 +13,7 @@\n class Meta:\n index_classes = [BaseResourceIndex]\n fields = [\n+ \"short_id\",\n \"title\",\n \"author\",\n \"contributor\",\n", "issue": "Haystack rest endpoint response serializer does not include short_id\n**Description of the bug**\r\nThe Haystack REST endpoint for complex solr searches does not include the short_id into the response serializer. This is a critical piece of information for users of this endpoint. \r\n\r\nSteps to reproduce the bug:\r\nhttps://github.com/hydroshare/hydroshare/blob/d3bd1737a0179eac74cd68926b3b79b80894410e/hs_rest_api/discovery.py#L12\r\n\r\n**Expected behavior**\r\nI expect resource ids to be included with search results so I can retrieve resources.\r\n\n", "before_files": [{"content": "from drf_haystack.serializers import HaystackSerializer\nfrom drf_haystack.viewsets import HaystackViewSet\nfrom hs_core.search_indexes import BaseResourceIndex\nfrom hs_core.models import BaseResource\nfrom drf_haystack.fields import HaystackCharField, HaystackDateField, HaystackMultiValueField, \\\n HaystackFloatField\nfrom drf_yasg.utils import swagger_auto_schema\nfrom rest_framework.decorators import action\nfrom rest_framework import serializers\n\n\nclass DiscoveryResourceSerializer(HaystackSerializer):\n class Meta:\n index_classes = [BaseResourceIndex]\n fields = [\n \"title\",\n \"author\",\n \"contributor\",\n \"subject\",\n \"abstract\",\n \"resource_type\",\n \"content_type\",\n \"coverage_type\",\n \"availability\",\n \"created\",\n \"modified\",\n \"start_date\",\n \"end_date\",\n \"east\",\n \"north\",\n \"eastlimit\",\n \"westlimit\",\n \"northlimit\",\n \"southlimit\"\n ]\n\n\nclass DiscoverResourceValidator(serializers.Serializer):\n text = HaystackCharField(required=False,\n help_text='Search across all Resource Fields')\n author = HaystackCharField(required=False,\n help_text='Search by author')\n contributor = HaystackMultiValueField(required=False,\n help_text='Search by contributor')\n subject = HaystackMultiValueField(required=False,\n help_text='Search within subject keywords')\n abstract = HaystackCharField(required=False,\n help_text='Search within the abstract')\n resource_type = HaystackCharField(required=False,\n help_text='Search by resource type')\n content_type = HaystackMultiValueField(required=False,\n help_text='Search by content type')\n coverage_type = HaystackMultiValueField(required=False,\n help_text='Search by coverage type '\n '(point, box, period)')\n availability = HaystackMultiValueField(required=False,\n help_text='Search by availability '\n '(discoverable, public, published)')\n created = HaystackDateField(required=False,\n help_text='Search by created date')\n modified = HaystackDateField(required=False,\n help_text='Search by modified date')\n start_date = HaystackDateField(required=False,\n help_text='Search by start date')\n end_date = HaystackDateField(required=False,\n help_text='Search by end date')\n east = HaystackFloatField(required=False,\n help_text='Search by location or box center east longitude')\n north = HaystackFloatField(required=False,\n help_text='Search by location or box center north latitude')\n eastlimit = HaystackFloatField(required=False,\n help_text='Search by east limit longitude')\n westlimit = HaystackFloatField(required=False,\n help_text='Search by west limit longitude')\n northlimit = HaystackFloatField(required=False,\n help_text='Search by north limit latitude')\n southlimit = HaystackFloatField(required=False,\n help_text='Search by south limit latitude')\n\n\nclass DiscoverSearchView(HaystackViewSet):\n index_models = [BaseResource]\n serializer_class = DiscoveryResourceSerializer\n\n @action(detail=True, methods=['get'])\n @swagger_auto_schema(operation_description=\"Search HydroShare Resources using solr conventions.\"\n \"We use haystack for queries so you can use all of \"\n \"the parameters described here in combination with \"\n \"field lookups \"\n \"https://django-haystack.readthedocs.io/en/latest/\"\n \"searchqueryset_api.html?highlight=lookups#id1\",\n query_serializer=DiscoverResourceValidator)\n def list(self, request):\n return super(DiscoverSearchView, self).list(request)\n", "path": "hs_rest_api/discovery.py"}]} | 1,632 | 89 |
gh_patches_debug_20873 | rasdani/github-patches | git_diff | GeotrekCE__Geotrek-admin-2223 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix translations in package
The compilemessages step for geotrek and mapentity is missing somewhere
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/python3
2 import os
3 import distutils.command.build
4 from setuptools import setup, find_packages
5
6 here = os.path.abspath(os.path.dirname(__file__))
7
8
9 class BuildCommand(distutils.command.build.build):
10 def run(self):
11 print("before")
12 distutils.command.build.build.run(self)
13 print("after")
14 from django.core.management import call_command
15 curdir = os.getcwd()
16 os.chdir(os.path.join(curdir, 'geotrek'))
17 call_command('compilemessages')
18 os.chdir(os.path.join(curdir, 'mapentity'))
19 call_command('compilemessages')
20 os.chdir(curdir)
21
22
23 setup(
24 name='geotrek',
25 version=open(os.path.join(here, 'VERSION')).read().strip(),
26 author='Makina Corpus',
27 author_email='[email protected]',
28 url='http://makina-corpus.com',
29 description="Geotrek",
30 long_description=(open(os.path.join(here, 'README.rst')).read() + '\n\n'
31 + open(os.path.join(here, 'docs', 'changelog.rst')).read()),
32 scripts=['manage.py'],
33 install_requires=[
34 # pinned by requirements.txt
35 'psycopg2',
36 'docutils',
37 'GDAL',
38 'Pillow',
39 'easy-thumbnails',
40 'simplekml',
41 'pygal',
42 'django-extended-choices',
43 'django-multiselectfield',
44 'geojson',
45 'tif2geojson',
46 'pytz',
47 'djangorestframework-gis',
48 'drf-dynamic-fields',
49 'django-rest-swagger',
50 'django-embed-video',
51 'xlrd',
52 'landez',
53 'redis',
54 'celery',
55 'django-celery-results',
56 'requests[security]',
57 'drf-extensions',
58 'django-colorfield',
59 'factory_boy',
60 ],
61 cmdclass={"build": BuildCommand},
62 include_package_data=True,
63 license='BSD, see LICENSE file.',
64 packages=find_packages(),
65 classifiers=['Natural Language :: English',
66 'Environment :: Web Environment',
67 'Framework :: Django',
68 'Development Status :: 5 - Production/Stable',
69 'Programming Language :: Python :: 2.7'],
70 )
71
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,23 +1,24 @@
#!/usr/bin/python3
import os
import distutils.command.build
+from pathlib import Path
from setuptools import setup, find_packages
+from shutil import copy
here = os.path.abspath(os.path.dirname(__file__))
class BuildCommand(distutils.command.build.build):
def run(self):
- print("before")
distutils.command.build.build.run(self)
- print("after")
from django.core.management import call_command
curdir = os.getcwd()
- os.chdir(os.path.join(curdir, 'geotrek'))
- call_command('compilemessages')
- os.chdir(os.path.join(curdir, 'mapentity'))
- call_command('compilemessages')
- os.chdir(curdir)
+ for subdir in ('geotrek', 'mapentity'):
+ os.chdir(subdir)
+ call_command('compilemessages')
+ for path in Path('.').rglob('*.mo'):
+ copy(path, os.path.join(curdir, self.build_lib, subdir, path))
+ os.chdir(curdir)
setup(
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,23 +1,24 @@\n #!/usr/bin/python3\n import os\n import distutils.command.build\n+from pathlib import Path\n from setuptools import setup, find_packages\n+from shutil import copy\n \n here = os.path.abspath(os.path.dirname(__file__))\n \n \n class BuildCommand(distutils.command.build.build):\n def run(self):\n- print(\"before\")\n distutils.command.build.build.run(self)\n- print(\"after\")\n from django.core.management import call_command\n curdir = os.getcwd()\n- os.chdir(os.path.join(curdir, 'geotrek'))\n- call_command('compilemessages')\n- os.chdir(os.path.join(curdir, 'mapentity'))\n- call_command('compilemessages')\n- os.chdir(curdir)\n+ for subdir in ('geotrek', 'mapentity'):\n+ os.chdir(subdir)\n+ call_command('compilemessages')\n+ for path in Path('.').rglob('*.mo'):\n+ copy(path, os.path.join(curdir, self.build_lib, subdir, path))\n+ os.chdir(curdir)\n \n \n setup(\n", "issue": "Fix translations in package\nThe compilemessages step for geotrek and mapentity is missing somewhere\n", "before_files": [{"content": "#!/usr/bin/python3\nimport os\nimport distutils.command.build\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\nclass BuildCommand(distutils.command.build.build):\n def run(self):\n print(\"before\")\n distutils.command.build.build.run(self)\n print(\"after\")\n from django.core.management import call_command\n curdir = os.getcwd()\n os.chdir(os.path.join(curdir, 'geotrek'))\n call_command('compilemessages')\n os.chdir(os.path.join(curdir, 'mapentity'))\n call_command('compilemessages')\n os.chdir(curdir)\n\n\nsetup(\n name='geotrek',\n version=open(os.path.join(here, 'VERSION')).read().strip(),\n author='Makina Corpus',\n author_email='[email protected]',\n url='http://makina-corpus.com',\n description=\"Geotrek\",\n long_description=(open(os.path.join(here, 'README.rst')).read() + '\\n\\n'\n + open(os.path.join(here, 'docs', 'changelog.rst')).read()),\n scripts=['manage.py'],\n install_requires=[\n # pinned by requirements.txt\n 'psycopg2',\n 'docutils',\n 'GDAL',\n 'Pillow',\n 'easy-thumbnails',\n 'simplekml',\n 'pygal',\n 'django-extended-choices',\n 'django-multiselectfield',\n 'geojson',\n 'tif2geojson',\n 'pytz',\n 'djangorestframework-gis',\n 'drf-dynamic-fields',\n 'django-rest-swagger',\n 'django-embed-video',\n 'xlrd',\n 'landez',\n 'redis',\n 'celery',\n 'django-celery-results',\n 'requests[security]',\n 'drf-extensions',\n 'django-colorfield',\n 'factory_boy',\n ],\n cmdclass={\"build\": BuildCommand},\n include_package_data=True,\n license='BSD, see LICENSE file.',\n packages=find_packages(),\n classifiers=['Natural Language :: English',\n 'Environment :: Web Environment',\n 'Framework :: Django',\n 'Development Status :: 5 - Production/Stable',\n 'Programming Language :: Python :: 2.7'],\n)\n", "path": "setup.py"}]} | 1,172 | 256 |
gh_patches_debug_26330 | rasdani/github-patches | git_diff | streamlink__streamlink-1583 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Vaughnlive changed IP's to break Streamlink
This will be a very brief bug report... As of tonight the head vaughnlive.py references IPs which were disconnected by vaughn to thwart streamlinking. I've observed vaughn serving video now from "66.90.93.44","66.90.93.35" and have personally gotten it to work overwriting the IP's in rtmp_server_map with those two alternating. I would submit the commit but I think some more testing is needed as I only use streamlink with one occasional stream and don't know how far those IPs will get more frequent SL users.
#1187 contains lengthy discussion on the history of the war vaughn has waged against streamlink, this is probably not the last time the IPs will change.
</issue>
<code>
[start of src/streamlink/plugins/vaughnlive.py]
1 import random
2 import re
3 import itertools
4 import ssl
5 import websocket
6
7 from streamlink.plugin import Plugin
8 from streamlink.plugin.api import useragents, http
9 from streamlink.stream import RTMPStream
10
11 _url_re = re.compile(r"""
12 http(s)?://(\w+\.)?
13 (?P<domain>vaughnlive|breakers|instagib|vapers|pearltime).tv
14 (/embed/video)?
15 /(?P<channel>[^/&?]+)
16 """, re.VERBOSE)
17
18
19 class VLWebSocket(websocket.WebSocket):
20 def __init__(self, **_):
21 self.session = _.pop("session")
22 self.logger = self.session.logger.new_module("plugins.vaughnlive.websocket")
23 sslopt = _.pop("sslopt", {})
24 sslopt["cert_reqs"] = ssl.CERT_NONE
25 super(VLWebSocket, self).__init__(sslopt=sslopt, **_)
26
27 def send(self, payload, opcode=websocket.ABNF.OPCODE_TEXT):
28 self.logger.debug("Sending message: {0}", payload)
29 return super(VLWebSocket, self).send(payload + "\n\x00", opcode)
30
31 def recv(self):
32 d = super(VLWebSocket, self).recv().replace("\n", "").replace("\x00", "")
33 return d.split(" ", 1)
34
35
36 class VaughnLive(Plugin):
37 servers = ["wss://sapi-ws-{0}x{1:02}.vaughnlive.tv".format(x, y) for x, y in itertools.product(range(1, 3),
38 range(1, 6))]
39 origin = "https://vaughnlive.tv"
40 rtmp_server_map = {
41 "594140c69edad": "66.90.93.42",
42 "585c4cab1bef1": "66.90.93.34",
43 "5940d648b3929": "66.90.93.42",
44 "5941854b39bc4": "198.255.0.10"
45 }
46 name_remap = {"#vl": "live", "#btv": "btv", "#pt": "pt", "#igb": "instagib", "#vtv": "vtv"}
47 domain_map = {"vaughnlive": "#vl", "breakers": "#btv", "instagib": "#igb", "vapers": "#vtv", "pearltime": "#pt"}
48
49 @classmethod
50 def can_handle_url(cls, url):
51 return _url_re.match(url)
52
53 def api_url(self):
54 return random.choice(self.servers)
55
56 def parse_ack(self, action, message):
57 if action.endswith("3"):
58 channel, _, viewers, token, server, choked, is_live, chls, trns, ingest = message.split(";")
59 is_live = is_live == "1"
60 viewers = int(viewers)
61 self.logger.debug("Viewers: {0}, isLive={1}", viewers, is_live)
62 domain, channel = channel.split("-", 1)
63 return is_live, server, domain, channel, token, ingest
64 else:
65 self.logger.error("Unhandled action format: {0}", action)
66
67 def _get_info(self, stream_name):
68 server = self.api_url()
69 self.logger.debug("Connecting to API: {0}", server)
70 ws = websocket.create_connection(server,
71 header=["User-Agent: {0}".format(useragents.CHROME)],
72 origin=self.origin,
73 class_=VLWebSocket,
74 session=self.session)
75 ws.send("MVN LOAD3 {0}".format(stream_name))
76 action, message = ws.recv()
77 return self.parse_ack(action, message)
78
79 def _get_rtmp_streams(self, server, domain, channel, token):
80 rtmp_server = self.rtmp_server_map.get(server, server)
81
82 url = "rtmp://{0}/live?{1}".format(rtmp_server, token)
83
84 yield "live", RTMPStream(self.session, params={
85 "rtmp": url,
86 "pageUrl": self.url,
87 "playpath": "{0}_{1}".format(self.name_remap.get(domain, "live"), channel),
88 "live": True
89 })
90
91 def _get_streams(self):
92 m = _url_re.match(self.url)
93 if m:
94 stream_name = "{0}-{1}".format(self.domain_map[(m.group("domain").lower())],
95 m.group("channel"))
96
97 is_live, server, domain, channel, token, ingest = self._get_info(stream_name)
98
99 if not is_live:
100 self.logger.info("Stream is currently off air")
101 else:
102 for s in self._get_rtmp_streams(server, domain, channel, token):
103 yield s
104
105
106 __plugin__ = VaughnLive
107
[end of src/streamlink/plugins/vaughnlive.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/vaughnlive.py b/src/streamlink/plugins/vaughnlive.py
--- a/src/streamlink/plugins/vaughnlive.py
+++ b/src/streamlink/plugins/vaughnlive.py
@@ -38,10 +38,10 @@
range(1, 6))]
origin = "https://vaughnlive.tv"
rtmp_server_map = {
- "594140c69edad": "66.90.93.42",
- "585c4cab1bef1": "66.90.93.34",
- "5940d648b3929": "66.90.93.42",
- "5941854b39bc4": "198.255.0.10"
+ "594140c69edad": "192.240.105.171:1935",
+ "585c4cab1bef1": "192.240.105.171:1935",
+ "5940d648b3929": "192.240.105.171:1935",
+ "5941854b39bc4": "192.240.105.171:1935"
}
name_remap = {"#vl": "live", "#btv": "btv", "#pt": "pt", "#igb": "instagib", "#vtv": "vtv"}
domain_map = {"vaughnlive": "#vl", "breakers": "#btv", "instagib": "#igb", "vapers": "#vtv", "pearltime": "#pt"}
@@ -99,6 +99,7 @@
if not is_live:
self.logger.info("Stream is currently off air")
else:
+ self.logger.info("Stream powered by VaughnSoft - remember to support them.")
for s in self._get_rtmp_streams(server, domain, channel, token):
yield s
| {"golden_diff": "diff --git a/src/streamlink/plugins/vaughnlive.py b/src/streamlink/plugins/vaughnlive.py\n--- a/src/streamlink/plugins/vaughnlive.py\n+++ b/src/streamlink/plugins/vaughnlive.py\n@@ -38,10 +38,10 @@\n range(1, 6))]\n origin = \"https://vaughnlive.tv\"\n rtmp_server_map = {\n- \"594140c69edad\": \"66.90.93.42\",\n- \"585c4cab1bef1\": \"66.90.93.34\",\n- \"5940d648b3929\": \"66.90.93.42\",\n- \"5941854b39bc4\": \"198.255.0.10\"\n+ \"594140c69edad\": \"192.240.105.171:1935\",\n+ \"585c4cab1bef1\": \"192.240.105.171:1935\",\n+ \"5940d648b3929\": \"192.240.105.171:1935\",\n+ \"5941854b39bc4\": \"192.240.105.171:1935\"\n }\n name_remap = {\"#vl\": \"live\", \"#btv\": \"btv\", \"#pt\": \"pt\", \"#igb\": \"instagib\", \"#vtv\": \"vtv\"}\n domain_map = {\"vaughnlive\": \"#vl\", \"breakers\": \"#btv\", \"instagib\": \"#igb\", \"vapers\": \"#vtv\", \"pearltime\": \"#pt\"}\n@@ -99,6 +99,7 @@\n if not is_live:\n self.logger.info(\"Stream is currently off air\")\n else:\n+ self.logger.info(\"Stream powered by VaughnSoft - remember to support them.\")\n for s in self._get_rtmp_streams(server, domain, channel, token):\n yield s\n", "issue": "Vaughnlive changed IP's to break Streamlink\nThis will be a very brief bug report... As of tonight the head vaughnlive.py references IPs which were disconnected by vaughn to thwart streamlinking. I've observed vaughn serving video now from \"66.90.93.44\",\"66.90.93.35\" and have personally gotten it to work overwriting the IP's in rtmp_server_map with those two alternating. I would submit the commit but I think some more testing is needed as I only use streamlink with one occasional stream and don't know how far those IPs will get more frequent SL users.\r\n\r\n #1187 contains lengthy discussion on the history of the war vaughn has waged against streamlink, this is probably not the last time the IPs will change.\n", "before_files": [{"content": "import random\nimport re\nimport itertools\nimport ssl\nimport websocket\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import useragents, http\nfrom streamlink.stream import RTMPStream\n\n_url_re = re.compile(r\"\"\"\n http(s)?://(\\w+\\.)?\n (?P<domain>vaughnlive|breakers|instagib|vapers|pearltime).tv\n (/embed/video)?\n /(?P<channel>[^/&?]+)\n\"\"\", re.VERBOSE)\n\n\nclass VLWebSocket(websocket.WebSocket):\n def __init__(self, **_):\n self.session = _.pop(\"session\")\n self.logger = self.session.logger.new_module(\"plugins.vaughnlive.websocket\")\n sslopt = _.pop(\"sslopt\", {})\n sslopt[\"cert_reqs\"] = ssl.CERT_NONE\n super(VLWebSocket, self).__init__(sslopt=sslopt, **_)\n\n def send(self, payload, opcode=websocket.ABNF.OPCODE_TEXT):\n self.logger.debug(\"Sending message: {0}\", payload)\n return super(VLWebSocket, self).send(payload + \"\\n\\x00\", opcode)\n\n def recv(self):\n d = super(VLWebSocket, self).recv().replace(\"\\n\", \"\").replace(\"\\x00\", \"\")\n return d.split(\" \", 1)\n\n\nclass VaughnLive(Plugin):\n servers = [\"wss://sapi-ws-{0}x{1:02}.vaughnlive.tv\".format(x, y) for x, y in itertools.product(range(1, 3),\n range(1, 6))]\n origin = \"https://vaughnlive.tv\"\n rtmp_server_map = {\n \"594140c69edad\": \"66.90.93.42\",\n \"585c4cab1bef1\": \"66.90.93.34\",\n \"5940d648b3929\": \"66.90.93.42\",\n \"5941854b39bc4\": \"198.255.0.10\"\n }\n name_remap = {\"#vl\": \"live\", \"#btv\": \"btv\", \"#pt\": \"pt\", \"#igb\": \"instagib\", \"#vtv\": \"vtv\"}\n domain_map = {\"vaughnlive\": \"#vl\", \"breakers\": \"#btv\", \"instagib\": \"#igb\", \"vapers\": \"#vtv\", \"pearltime\": \"#pt\"}\n\n @classmethod\n def can_handle_url(cls, url):\n return _url_re.match(url)\n\n def api_url(self):\n return random.choice(self.servers)\n\n def parse_ack(self, action, message):\n if action.endswith(\"3\"):\n channel, _, viewers, token, server, choked, is_live, chls, trns, ingest = message.split(\";\")\n is_live = is_live == \"1\"\n viewers = int(viewers)\n self.logger.debug(\"Viewers: {0}, isLive={1}\", viewers, is_live)\n domain, channel = channel.split(\"-\", 1)\n return is_live, server, domain, channel, token, ingest\n else:\n self.logger.error(\"Unhandled action format: {0}\", action)\n\n def _get_info(self, stream_name):\n server = self.api_url()\n self.logger.debug(\"Connecting to API: {0}\", server)\n ws = websocket.create_connection(server,\n header=[\"User-Agent: {0}\".format(useragents.CHROME)],\n origin=self.origin,\n class_=VLWebSocket,\n session=self.session)\n ws.send(\"MVN LOAD3 {0}\".format(stream_name))\n action, message = ws.recv()\n return self.parse_ack(action, message)\n\n def _get_rtmp_streams(self, server, domain, channel, token):\n rtmp_server = self.rtmp_server_map.get(server, server)\n\n url = \"rtmp://{0}/live?{1}\".format(rtmp_server, token)\n\n yield \"live\", RTMPStream(self.session, params={\n \"rtmp\": url,\n \"pageUrl\": self.url,\n \"playpath\": \"{0}_{1}\".format(self.name_remap.get(domain, \"live\"), channel),\n \"live\": True\n })\n\n def _get_streams(self):\n m = _url_re.match(self.url)\n if m:\n stream_name = \"{0}-{1}\".format(self.domain_map[(m.group(\"domain\").lower())],\n m.group(\"channel\"))\n\n is_live, server, domain, channel, token, ingest = self._get_info(stream_name)\n\n if not is_live:\n self.logger.info(\"Stream is currently off air\")\n else:\n for s in self._get_rtmp_streams(server, domain, channel, token):\n yield s\n\n\n__plugin__ = VaughnLive\n", "path": "src/streamlink/plugins/vaughnlive.py"}]} | 2,019 | 519 |
gh_patches_debug_26363 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-786 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Implement showing and changing a column's type
## Problem
<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->
Users might want to change the data type of an existing column on their table.
## Proposed solution
<!-- A clear and concise description of your proposed solution or feature. -->
The ["Working with Columns" design spec](https://wiki.mathesar.org/en/design/specs/working-with-columns) has a solution for showing and changing column types, which we need to implement on the frontend.
Please note that we're only implementing changing the Mathesar data type in this milestone. Options specific to individual data types will be implemented in the next milestone.
Number data types should save as `NUMERIC`.
Text data types should save as `VARCHAR`.
Date/time data types can be disabled for now since they're not fully implemented on the backend.
## Additional context
<!-- Add any other context or screenshots about the feature request here.-->
- Backend work:
- #532 to get the list of types
- #199 to get valid target types and change types
- Design issue: #324
- Design discussion: #436
- #269
</issue>
<code>
[start of mathesar/views.py]
1 from django.shortcuts import render, redirect, get_object_or_404
2
3 from mathesar.models import Database, Schema, Table
4 from mathesar.api.serializers.databases import DatabaseSerializer
5 from mathesar.api.serializers.schemas import SchemaSerializer
6 from mathesar.api.serializers.tables import TableSerializer
7
8
9 def get_schema_list(request, database):
10 schema_serializer = SchemaSerializer(
11 Schema.objects.filter(database=database),
12 many=True,
13 context={'request': request}
14 )
15 return schema_serializer.data
16
17
18 def get_database_list(request):
19 database_serializer = DatabaseSerializer(
20 Database.objects.all(),
21 many=True,
22 context={'request': request}
23 )
24 return database_serializer.data
25
26
27 def get_table_list(request, schema):
28 if schema is None:
29 return []
30 table_serializer = TableSerializer(
31 Table.objects.filter(schema=schema),
32 many=True,
33 context={'request': request}
34 )
35 return table_serializer.data
36
37
38 def get_common_data(request, database, schema=None):
39 return {
40 'current_db': database.name if database else None,
41 'current_schema': schema.id if schema else None,
42 'schemas': get_schema_list(request, database),
43 'databases': get_database_list(request),
44 'tables': get_table_list(request, schema)
45 }
46
47
48 def get_current_database(request, db_name):
49 # if there's a DB name passed in, try to retrieve the database, or return a 404 error.
50 if db_name is not None:
51 return get_object_or_404(Database, name=db_name)
52 else:
53 try:
54 # Try to get the first database available
55 return Database.objects.order_by('id').first()
56 except Database.DoesNotExist:
57 return None
58
59
60 def get_current_schema(request, schema_id, database):
61 # if there's a schema ID passed in, try to retrieve the schema, or return a 404 error.
62 if schema_id is not None:
63 return get_object_or_404(Schema, id=schema_id)
64 else:
65 try:
66 # Try to get the first schema in the DB
67 return Schema.objects.filter(database=database).order_by('id').first()
68 except Schema.DoesNotExist:
69 return None
70
71
72 def render_schema(request, database, schema):
73 # if there's no schema available, redirect to the schemas page.
74 if not schema:
75 return redirect('schemas', db_name=database.name)
76 else:
77 # We are redirecting so that the correct URL is passed to the frontend.
78 return redirect('schema_home', db_name=database.name, schema_id=schema.id)
79
80
81 def home(request):
82 database = get_current_database(request, None)
83 schema = get_current_schema(request, None, database)
84 return render_schema(request, database, schema)
85
86
87 def db_home(request, db_name):
88 database = get_current_database(request, db_name)
89 schema = get_current_schema(request, None, database)
90 return render_schema(request, database, schema)
91
92
93 def schema_home(request, db_name, schema_id):
94 database = get_current_database(request, db_name)
95 schema = get_current_schema(request, schema_id, database)
96 return render(request, 'mathesar/index.html', {
97 'common_data': get_common_data(request, database, schema)
98 })
99
100
101 def schemas(request, db_name):
102 database = get_current_database(request, db_name)
103 schema = get_current_schema(request, None, database)
104 return render(request, 'mathesar/index.html', {
105 'common_data': get_common_data(request, database, schema)
106 })
107
[end of mathesar/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mathesar/views.py b/mathesar/views.py
--- a/mathesar/views.py
+++ b/mathesar/views.py
@@ -1,7 +1,7 @@
from django.shortcuts import render, redirect, get_object_or_404
from mathesar.models import Database, Schema, Table
-from mathesar.api.serializers.databases import DatabaseSerializer
+from mathesar.api.serializers.databases import DatabaseSerializer, TypeSerializer
from mathesar.api.serializers.schemas import SchemaSerializer
from mathesar.api.serializers.tables import TableSerializer
@@ -35,13 +35,25 @@
return table_serializer.data
+def get_type_list(request, database):
+ if database is None:
+ return []
+ type_serializer = TypeSerializer(
+ database.supported_types,
+ many=True,
+ context={'request': request}
+ )
+ return type_serializer.data
+
+
def get_common_data(request, database, schema=None):
return {
'current_db': database.name if database else None,
'current_schema': schema.id if schema else None,
'schemas': get_schema_list(request, database),
'databases': get_database_list(request),
- 'tables': get_table_list(request, schema)
+ 'tables': get_table_list(request, schema),
+ 'abstract_types': get_type_list(request, database)
}
| {"golden_diff": "diff --git a/mathesar/views.py b/mathesar/views.py\n--- a/mathesar/views.py\n+++ b/mathesar/views.py\n@@ -1,7 +1,7 @@\n from django.shortcuts import render, redirect, get_object_or_404\n \n from mathesar.models import Database, Schema, Table\n-from mathesar.api.serializers.databases import DatabaseSerializer\n+from mathesar.api.serializers.databases import DatabaseSerializer, TypeSerializer\n from mathesar.api.serializers.schemas import SchemaSerializer\n from mathesar.api.serializers.tables import TableSerializer\n \n@@ -35,13 +35,25 @@\n return table_serializer.data\n \n \n+def get_type_list(request, database):\n+ if database is None:\n+ return []\n+ type_serializer = TypeSerializer(\n+ database.supported_types,\n+ many=True,\n+ context={'request': request}\n+ )\n+ return type_serializer.data\n+\n+\n def get_common_data(request, database, schema=None):\n return {\n 'current_db': database.name if database else None,\n 'current_schema': schema.id if schema else None,\n 'schemas': get_schema_list(request, database),\n 'databases': get_database_list(request),\n- 'tables': get_table_list(request, schema)\n+ 'tables': get_table_list(request, schema),\n+ 'abstract_types': get_type_list(request, database)\n }\n", "issue": "Implement showing and changing a column's type\n## Problem\r\n<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->\r\nUsers might want to change the data type of an existing column on their table.\r\n\r\n## Proposed solution\r\n<!-- A clear and concise description of your proposed solution or feature. -->\r\nThe [\"Working with Columns\" design spec](https://wiki.mathesar.org/en/design/specs/working-with-columns) has a solution for showing and changing column types, which we need to implement on the frontend.\r\n\r\nPlease note that we're only implementing changing the Mathesar data type in this milestone. Options specific to individual data types will be implemented in the next milestone.\r\n\r\nNumber data types should save as `NUMERIC`.\r\nText data types should save as `VARCHAR`.\r\nDate/time data types can be disabled for now since they're not fully implemented on the backend.\r\n\r\n## Additional context\r\n<!-- Add any other context or screenshots about the feature request here.-->\r\n- Backend work:\r\n - #532 to get the list of types \r\n - #199 to get valid target types and change types \r\n- Design issue: #324 \r\n- Design discussion: #436\r\n- #269 \n", "before_files": [{"content": "from django.shortcuts import render, redirect, get_object_or_404\n\nfrom mathesar.models import Database, Schema, Table\nfrom mathesar.api.serializers.databases import DatabaseSerializer\nfrom mathesar.api.serializers.schemas import SchemaSerializer\nfrom mathesar.api.serializers.tables import TableSerializer\n\n\ndef get_schema_list(request, database):\n schema_serializer = SchemaSerializer(\n Schema.objects.filter(database=database),\n many=True,\n context={'request': request}\n )\n return schema_serializer.data\n\n\ndef get_database_list(request):\n database_serializer = DatabaseSerializer(\n Database.objects.all(),\n many=True,\n context={'request': request}\n )\n return database_serializer.data\n\n\ndef get_table_list(request, schema):\n if schema is None:\n return []\n table_serializer = TableSerializer(\n Table.objects.filter(schema=schema),\n many=True,\n context={'request': request}\n )\n return table_serializer.data\n\n\ndef get_common_data(request, database, schema=None):\n return {\n 'current_db': database.name if database else None,\n 'current_schema': schema.id if schema else None,\n 'schemas': get_schema_list(request, database),\n 'databases': get_database_list(request),\n 'tables': get_table_list(request, schema)\n }\n\n\ndef get_current_database(request, db_name):\n # if there's a DB name passed in, try to retrieve the database, or return a 404 error.\n if db_name is not None:\n return get_object_or_404(Database, name=db_name)\n else:\n try:\n # Try to get the first database available\n return Database.objects.order_by('id').first()\n except Database.DoesNotExist:\n return None\n\n\ndef get_current_schema(request, schema_id, database):\n # if there's a schema ID passed in, try to retrieve the schema, or return a 404 error.\n if schema_id is not None:\n return get_object_or_404(Schema, id=schema_id)\n else:\n try:\n # Try to get the first schema in the DB\n return Schema.objects.filter(database=database).order_by('id').first()\n except Schema.DoesNotExist:\n return None\n\n\ndef render_schema(request, database, schema):\n # if there's no schema available, redirect to the schemas page.\n if not schema:\n return redirect('schemas', db_name=database.name)\n else:\n # We are redirecting so that the correct URL is passed to the frontend.\n return redirect('schema_home', db_name=database.name, schema_id=schema.id)\n\n\ndef home(request):\n database = get_current_database(request, None)\n schema = get_current_schema(request, None, database)\n return render_schema(request, database, schema)\n\n\ndef db_home(request, db_name):\n database = get_current_database(request, db_name)\n schema = get_current_schema(request, None, database)\n return render_schema(request, database, schema)\n\n\ndef schema_home(request, db_name, schema_id):\n database = get_current_database(request, db_name)\n schema = get_current_schema(request, schema_id, database)\n return render(request, 'mathesar/index.html', {\n 'common_data': get_common_data(request, database, schema)\n })\n\n\ndef schemas(request, db_name):\n database = get_current_database(request, db_name)\n schema = get_current_schema(request, None, database)\n return render(request, 'mathesar/index.html', {\n 'common_data': get_common_data(request, database, schema)\n })\n", "path": "mathesar/views.py"}]} | 1,766 | 296 |
gh_patches_debug_34062 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1871 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug]: Chichester District Council is not working
### I Have A Problem With:
A specific source
### What's Your Problem
The source has stopped working since Tuesday 13th February 2024. All the collection days no longer show on the calendar at all. The Chichester District Council website still shows me the days.
### Source (if relevant)
chichester_gov_uk
### Logs
```Shell
This error originated from a custom integration.
Logger: waste_collection_schedule.source_shell
Source: custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py:136
Integration: waste_collection_schedule (documentation)
First occurred: 11:36:47 (1 occurrences)
Last logged: 11:36:47
fetch failed for source Chichester District Council: Traceback (most recent call last): File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py", line 134, in fetch entries = self._source.fetch() ^^^^^^^^^^^^^^^^^^^^ File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py", line 37, in fetch form_url = form["action"] ~~~~^^^^^^^^^^ TypeError: 'NoneType' object is not subscriptable
```
### Relevant Configuration
```YAML
waste_collection_schedule:
sources:
- name: chichester_gov_uk
args:
uprn: 10002466648
```
### Checklist Source Error
- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [X] Checked that the website of your service provider is still working
- [X] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
</issue>
<code>
[start of custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py]
1 from datetime import datetime
2
3 import requests
4 from bs4 import BeautifulSoup
5 from waste_collection_schedule import Collection
6
7 TITLE = "Chichester District Council"
8 DESCRIPTION = "Source for chichester.gov.uk services for Chichester"
9 URL = "chichester.gov.uk"
10
11 TEST_CASES = {
12 "Test_001": {"uprn": "010002476348"},
13 "Test_002": {"uprn": "100062612654"},
14 "Test_003": {"uprn": "100061745708"},
15 }
16
17 ICON_MAP = {
18 "General Waste": "mdi:trash-can",
19 "Recycling": "mdi:recycle",
20 "Garden Recycling": "mdi:leaf",
21 }
22
23
24 class Source:
25 def __init__(self, uprn):
26 self._uprn = uprn
27
28 def fetch(self):
29 session = requests.Session()
30 # Start a session
31 r = session.get("https://www.chichester.gov.uk/checkyourbinday")
32 r.raise_for_status()
33 soup = BeautifulSoup(r.text, features="html.parser")
34
35 # Extract form submission url
36 form = soup.find("form", attrs={"id": "WASTECOLLECTIONCALENDARV2_FORM"})
37 form_url = form["action"]
38
39 # Submit form
40 form_data = {
41 "WASTECOLLECTIONCALENDARV2_FORMACTION_NEXT": "Submit",
42 "WASTECOLLECTIONCALENDARV2_CALENDAR_UPRN": self._uprn,
43 }
44 r = session.post(form_url, data=form_data)
45 r.raise_for_status()
46
47 # Extract collection dates
48 soup = BeautifulSoup(r.text, features="html.parser")
49 entries = []
50 data = soup.find_all("div", attrs={"class": "bin-days"})
51 for bin in data:
52 if "print-only" in bin["class"]:
53 continue
54
55 type = bin.find("span").contents[0].replace("bin", "").strip().title()
56 list_items = bin.find_all("li")
57 if list_items:
58 for item in list_items:
59 date = datetime.strptime(item.text, "%d %B %Y").date()
60 entries.append(
61 Collection(
62 date=date,
63 t=type,
64 icon=ICON_MAP.get(type),
65 )
66 )
67
68 return entries
69
[end of custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py
@@ -33,13 +33,13 @@
soup = BeautifulSoup(r.text, features="html.parser")
# Extract form submission url
- form = soup.find("form", attrs={"id": "WASTECOLLECTIONCALENDARV2_FORM"})
+ form = soup.find("form", attrs={"id": "WASTECOLLECTIONCALENDARV5_FORM"})
form_url = form["action"]
# Submit form
form_data = {
- "WASTECOLLECTIONCALENDARV2_FORMACTION_NEXT": "Submit",
- "WASTECOLLECTIONCALENDARV2_CALENDAR_UPRN": self._uprn,
+ "WASTECOLLECTIONCALENDARV5_FORMACTION_NEXT": "Submit",
+ "WASTECOLLECTIONCALENDARV5_CALENDAR_UPRN": self._uprn,
}
r = session.post(form_url, data=form_data)
r.raise_for_status()
@@ -47,16 +47,18 @@
# Extract collection dates
soup = BeautifulSoup(r.text, features="html.parser")
entries = []
- data = soup.find_all("div", attrs={"class": "bin-days"})
- for bin in data:
- if "print-only" in bin["class"]:
- continue
-
- type = bin.find("span").contents[0].replace("bin", "").strip().title()
- list_items = bin.find_all("li")
- if list_items:
- for item in list_items:
- date = datetime.strptime(item.text, "%d %B %Y").date()
+ tables = soup.find_all("table", attrs={"class": "bin-collection-dates"})
+ # Data is presented in two tables side-by-side
+ for table in tables:
+ # Each collection is a table row
+ data = table.find_all("tr")
+ for bin in data:
+ cells = bin.find_all("td")
+ # Ignore the header row
+ if len(cells) == 2:
+ date = datetime.strptime(cells[0].text, "%d %B %Y").date()
+ # Maintain backwards compatibility - it used to be General Waste and now it is General waste
+ type = cells[1].text.title()
entries.append(
Collection(
date=date,
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py\n@@ -33,13 +33,13 @@\n soup = BeautifulSoup(r.text, features=\"html.parser\")\n \n # Extract form submission url\n- form = soup.find(\"form\", attrs={\"id\": \"WASTECOLLECTIONCALENDARV2_FORM\"})\n+ form = soup.find(\"form\", attrs={\"id\": \"WASTECOLLECTIONCALENDARV5_FORM\"})\n form_url = form[\"action\"]\n \n # Submit form\n form_data = {\n- \"WASTECOLLECTIONCALENDARV2_FORMACTION_NEXT\": \"Submit\",\n- \"WASTECOLLECTIONCALENDARV2_CALENDAR_UPRN\": self._uprn,\n+ \"WASTECOLLECTIONCALENDARV5_FORMACTION_NEXT\": \"Submit\",\n+ \"WASTECOLLECTIONCALENDARV5_CALENDAR_UPRN\": self._uprn,\n }\n r = session.post(form_url, data=form_data)\n r.raise_for_status()\n@@ -47,16 +47,18 @@\n # Extract collection dates\n soup = BeautifulSoup(r.text, features=\"html.parser\")\n entries = []\n- data = soup.find_all(\"div\", attrs={\"class\": \"bin-days\"})\n- for bin in data:\n- if \"print-only\" in bin[\"class\"]:\n- continue\n-\n- type = bin.find(\"span\").contents[0].replace(\"bin\", \"\").strip().title()\n- list_items = bin.find_all(\"li\")\n- if list_items:\n- for item in list_items:\n- date = datetime.strptime(item.text, \"%d %B %Y\").date()\n+ tables = soup.find_all(\"table\", attrs={\"class\": \"bin-collection-dates\"})\n+ # Data is presented in two tables side-by-side\n+ for table in tables:\n+ # Each collection is a table row\n+ data = table.find_all(\"tr\")\n+ for bin in data:\n+ cells = bin.find_all(\"td\")\n+ # Ignore the header row\n+ if len(cells) == 2:\n+ date = datetime.strptime(cells[0].text, \"%d %B %Y\").date()\n+ # Maintain backwards compatibility - it used to be General Waste and now it is General waste\n+ type = cells[1].text.title()\n entries.append(\n Collection(\n date=date,\n", "issue": "[Bug]: Chichester District Council is not working\n### I Have A Problem With:\n\nA specific source\n\n### What's Your Problem\n\nThe source has stopped working since Tuesday 13th February 2024. All the collection days no longer show on the calendar at all. The Chichester District Council website still shows me the days.\n\n### Source (if relevant)\n\nchichester_gov_uk\n\n### Logs\n\n```Shell\nThis error originated from a custom integration.\r\n\r\nLogger: waste_collection_schedule.source_shell\r\nSource: custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py:136\r\nIntegration: waste_collection_schedule (documentation)\r\nFirst occurred: 11:36:47 (1 occurrences)\r\nLast logged: 11:36:47\r\n\r\nfetch failed for source Chichester District Council: Traceback (most recent call last): File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py\", line 134, in fetch entries = self._source.fetch() ^^^^^^^^^^^^^^^^^^^^ File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py\", line 37, in fetch form_url = form[\"action\"] ~~~~^^^^^^^^^^ TypeError: 'NoneType' object is not subscriptable\n```\n\n\n### Relevant Configuration\n\n```YAML\nwaste_collection_schedule:\r\n sources:\r\n - name: chichester_gov_uk\r\n args:\r\n uprn: 10002466648\n```\n\n\n### Checklist Source Error\n\n- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\n- [X] Checked that the website of your service provider is still working\n- [X] Tested my attributes on the service provider website (if possible)\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\n\n### Checklist Sensor Error\n\n- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\n\n### Required\n\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "from datetime import datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection\n\nTITLE = \"Chichester District Council\"\nDESCRIPTION = \"Source for chichester.gov.uk services for Chichester\"\nURL = \"chichester.gov.uk\"\n\nTEST_CASES = {\n \"Test_001\": {\"uprn\": \"010002476348\"},\n \"Test_002\": {\"uprn\": \"100062612654\"},\n \"Test_003\": {\"uprn\": \"100061745708\"},\n}\n\nICON_MAP = {\n \"General Waste\": \"mdi:trash-can\",\n \"Recycling\": \"mdi:recycle\",\n \"Garden Recycling\": \"mdi:leaf\",\n}\n\n\nclass Source:\n def __init__(self, uprn):\n self._uprn = uprn\n\n def fetch(self):\n session = requests.Session()\n # Start a session\n r = session.get(\"https://www.chichester.gov.uk/checkyourbinday\")\n r.raise_for_status()\n soup = BeautifulSoup(r.text, features=\"html.parser\")\n\n # Extract form submission url\n form = soup.find(\"form\", attrs={\"id\": \"WASTECOLLECTIONCALENDARV2_FORM\"})\n form_url = form[\"action\"]\n\n # Submit form\n form_data = {\n \"WASTECOLLECTIONCALENDARV2_FORMACTION_NEXT\": \"Submit\",\n \"WASTECOLLECTIONCALENDARV2_CALENDAR_UPRN\": self._uprn,\n }\n r = session.post(form_url, data=form_data)\n r.raise_for_status()\n\n # Extract collection dates\n soup = BeautifulSoup(r.text, features=\"html.parser\")\n entries = []\n data = soup.find_all(\"div\", attrs={\"class\": \"bin-days\"})\n for bin in data:\n if \"print-only\" in bin[\"class\"]:\n continue\n\n type = bin.find(\"span\").contents[0].replace(\"bin\", \"\").strip().title()\n list_items = bin.find_all(\"li\")\n if list_items:\n for item in list_items:\n date = datetime.strptime(item.text, \"%d %B %Y\").date()\n entries.append(\n Collection(\n date=date,\n t=type,\n icon=ICON_MAP.get(type),\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/chichester_gov_uk.py"}]} | 1,756 | 597 |
gh_patches_debug_2252 | rasdani/github-patches | git_diff | fonttools__fonttools-337 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
I find the font's line height is bigger than original font.
I have tried pyftsubset with command line option --no-recalc-bounds ,
but the generated subfont's line height is still bigger than original font.
I used html font-face render font.
@font-face {
font-family: 'freetype';
src: url('font.ttf') format('truetype');
}
the font file is microsoft chinese liti.ttf.
</issue>
<code>
[start of Lib/fontTools/ttLib/tables/_v_h_e_a.py]
1 from __future__ import print_function, division, absolute_import
2 from fontTools.misc.py23 import *
3 from fontTools.misc import sstruct
4 from fontTools.misc.textTools import safeEval
5 from . import DefaultTable
6
7 vheaFormat = """
8 > # big endian
9 tableVersion: 16.16F
10 ascent: h
11 descent: h
12 lineGap: h
13 advanceHeightMax: H
14 minTopSideBearing: h
15 minBottomSideBearing: h
16 yMaxExtent: h
17 caretSlopeRise: h
18 caretSlopeRun: h
19 reserved0: h
20 reserved1: h
21 reserved2: h
22 reserved3: h
23 reserved4: h
24 metricDataFormat: h
25 numberOfVMetrics: H
26 """
27
28 class table__v_h_e_a(DefaultTable.DefaultTable):
29
30 # Note: Keep in sync with table__h_h_e_a
31
32 dependencies = ['vmtx', 'glyf']
33
34 def decompile(self, data, ttFont):
35 sstruct.unpack(vheaFormat, data, self)
36
37 def compile(self, ttFont):
38 self.recalc(ttFont)
39 return sstruct.pack(vheaFormat, self)
40
41 def recalc(self, ttFont):
42 vtmxTable = ttFont['vmtx']
43 if 'glyf' in ttFont:
44 glyfTable = ttFont['glyf']
45 INFINITY = 100000
46 advanceHeightMax = 0
47 minTopSideBearing = +INFINITY # arbitrary big number
48 minBottomSideBearing = +INFINITY # arbitrary big number
49 yMaxExtent = -INFINITY # arbitrary big negative number
50
51 for name in ttFont.getGlyphOrder():
52 height, tsb = vtmxTable[name]
53 advanceHeightMax = max(advanceHeightMax, height)
54 g = glyfTable[name]
55 if g.numberOfContours == 0:
56 continue
57 if g.numberOfContours < 0 and not hasattr(g, "yMax"):
58 # Composite glyph without extents set.
59 # Calculate those.
60 g.recalcBounds(glyfTable)
61 minTopSideBearing = min(minTopSideBearing, tsb)
62 bsb = height - tsb - (g.yMax - g.yMin)
63 minBottomSideBearing = min(minBottomSideBearing, bsb)
64 extent = tsb + (g.yMax - g.yMin)
65 yMaxExtent = max(yMaxExtent, extent)
66
67 if yMaxExtent == -INFINITY:
68 # No glyph has outlines.
69 minTopSideBearing = 0
70 minBottomSideBearing = 0
71 yMaxExtent = 0
72
73 self.advanceHeightMax = advanceHeightMax
74 self.minTopSideBearing = minTopSideBearing
75 self.minBottomSideBearing = minBottomSideBearing
76 self.yMaxExtent = yMaxExtent
77 else:
78 # XXX CFF recalc...
79 pass
80
81 def toXML(self, writer, ttFont):
82 formatstring, names, fixes = sstruct.getformat(vheaFormat)
83 for name in names:
84 value = getattr(self, name)
85 writer.simpletag(name, value=value)
86 writer.newline()
87
88 def fromXML(self, name, attrs, content, ttFont):
89 setattr(self, name, safeEval(attrs["value"]))
90
[end of Lib/fontTools/ttLib/tables/_v_h_e_a.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Lib/fontTools/ttLib/tables/_v_h_e_a.py b/Lib/fontTools/ttLib/tables/_v_h_e_a.py
--- a/Lib/fontTools/ttLib/tables/_v_h_e_a.py
+++ b/Lib/fontTools/ttLib/tables/_v_h_e_a.py
@@ -35,7 +35,8 @@
sstruct.unpack(vheaFormat, data, self)
def compile(self, ttFont):
- self.recalc(ttFont)
+ if ttFont.isLoaded('glyf') and ttFont.recalcBBoxes:
+ self.recalc(ttFont)
return sstruct.pack(vheaFormat, self)
def recalc(self, ttFont):
| {"golden_diff": "diff --git a/Lib/fontTools/ttLib/tables/_v_h_e_a.py b/Lib/fontTools/ttLib/tables/_v_h_e_a.py\n--- a/Lib/fontTools/ttLib/tables/_v_h_e_a.py\n+++ b/Lib/fontTools/ttLib/tables/_v_h_e_a.py\n@@ -35,7 +35,8 @@\n \t\tsstruct.unpack(vheaFormat, data, self)\n \n \tdef compile(self, ttFont):\n-\t\tself.recalc(ttFont)\n+\t\tif ttFont.isLoaded('glyf') and ttFont.recalcBBoxes:\n+\t\t\tself.recalc(ttFont)\n \t\treturn sstruct.pack(vheaFormat, self)\n \n \tdef recalc(self, ttFont):\n", "issue": "I find the font's line height is bigger than original font.\n I have tried pyftsubset with command line option --no-recalc-bounds ,\nbut the generated subfont's line height is still bigger than original font.\n\nI used html font-face render font.\n@font-face {\n font-family: 'freetype';\n src: url('font.ttf') format('truetype');\n }\n\nthe font file is microsoft chinese liti.ttf.\n\n", "before_files": [{"content": "from __future__ import print_function, division, absolute_import\nfrom fontTools.misc.py23 import *\nfrom fontTools.misc import sstruct\nfrom fontTools.misc.textTools import safeEval\nfrom . import DefaultTable\n\nvheaFormat = \"\"\"\n\t\t>\t# big endian\n\t\ttableVersion:\t\t16.16F\n\t\tascent:\t\t\th\n\t\tdescent:\t\th\n\t\tlineGap:\t\th\n\t\tadvanceHeightMax:\tH\n\t\tminTopSideBearing:\th\n\t\tminBottomSideBearing:\th\n\t\tyMaxExtent:\t\th\n\t\tcaretSlopeRise:\t\th\n\t\tcaretSlopeRun:\t\th\n\t\treserved0:\t\th\n\t\treserved1:\t\th\n\t\treserved2:\t\th\n\t\treserved3:\t\th\n\t\treserved4:\t\th\n\t\tmetricDataFormat:\th\n\t\tnumberOfVMetrics:\tH\n\"\"\"\n\nclass table__v_h_e_a(DefaultTable.DefaultTable):\n\n\t# Note: Keep in sync with table__h_h_e_a\n\n\tdependencies = ['vmtx', 'glyf']\n\n\tdef decompile(self, data, ttFont):\n\t\tsstruct.unpack(vheaFormat, data, self)\n\n\tdef compile(self, ttFont):\n\t\tself.recalc(ttFont)\n\t\treturn sstruct.pack(vheaFormat, self)\n\n\tdef recalc(self, ttFont):\n\t\tvtmxTable = ttFont['vmtx']\n\t\tif 'glyf' in ttFont:\n\t\t\tglyfTable = ttFont['glyf']\n\t\t\tINFINITY = 100000\n\t\t\tadvanceHeightMax = 0\n\t\t\tminTopSideBearing = +INFINITY # arbitrary big number\n\t\t\tminBottomSideBearing = +INFINITY # arbitrary big number\n\t\t\tyMaxExtent = -INFINITY # arbitrary big negative number\n\n\t\t\tfor name in ttFont.getGlyphOrder():\n\t\t\t\theight, tsb = vtmxTable[name]\n\t\t\t\tadvanceHeightMax = max(advanceHeightMax, height)\n\t\t\t\tg = glyfTable[name]\n\t\t\t\tif g.numberOfContours == 0:\n\t\t\t\t\tcontinue\n\t\t\t\tif g.numberOfContours < 0 and not hasattr(g, \"yMax\"):\n\t\t\t\t\t# Composite glyph without extents set.\n\t\t\t\t\t# Calculate those.\n\t\t\t\t\tg.recalcBounds(glyfTable)\n\t\t\t\tminTopSideBearing = min(minTopSideBearing, tsb)\n\t\t\t\tbsb = height - tsb - (g.yMax - g.yMin)\n\t\t\t\tminBottomSideBearing = min(minBottomSideBearing, bsb)\n\t\t\t\textent = tsb + (g.yMax - g.yMin)\n\t\t\t\tyMaxExtent = max(yMaxExtent, extent)\n\n\t\t\tif yMaxExtent == -INFINITY:\n\t\t\t\t# No glyph has outlines.\n\t\t\t\tminTopSideBearing = 0\n\t\t\t\tminBottomSideBearing = 0\n\t\t\t\tyMaxExtent = 0\n\n\t\t\tself.advanceHeightMax = advanceHeightMax\n\t\t\tself.minTopSideBearing = minTopSideBearing\n\t\t\tself.minBottomSideBearing = minBottomSideBearing\n\t\t\tself.yMaxExtent = yMaxExtent\n\t\telse:\n\t\t\t# XXX CFF recalc...\n\t\t\tpass\n\n\tdef toXML(self, writer, ttFont):\n\t\tformatstring, names, fixes = sstruct.getformat(vheaFormat)\n\t\tfor name in names:\n\t\t\tvalue = getattr(self, name)\n\t\t\twriter.simpletag(name, value=value)\n\t\t\twriter.newline()\n\n\tdef fromXML(self, name, attrs, content, ttFont):\n\t\tsetattr(self, name, safeEval(attrs[\"value\"]))\n", "path": "Lib/fontTools/ttLib/tables/_v_h_e_a.py"}]} | 1,602 | 162 |
gh_patches_debug_22699 | rasdani/github-patches | git_diff | svthalia__concrexit-3592 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Prevent full disk
### Describe the bug
Sometimes the server's storage gets full, because for some reason filepond uploads aren't being deleted. Today this caused the server to crash (because the full server disk broke redis). We should prevent this from happening in multiple ways:
- Make old uploads be deleted. Would be nice to find out why the uploads aren't being deleted already. But we should also (additionally) periodically remove old files from the media volume.
- Maybe limit the volume size such that it getting full does not influence the rest of the server. But docker doesn't really support that nicely. We could make a separate volume for it on the host and bind-mount it I guess.
### How to reproduce
<!-- Steps to reproduce the behaviour -->
1. Upload lots of albums to a docker deployment
2. See the media volume get larger.
### Expected behaviour
Stuff is cleaned up once it's processed and periodically.
</issue>
<code>
[start of website/photos/tasks.py]
1 from django.db import transaction
2 from django.dispatch import Signal
3
4 from celery import shared_task
5 from django_drf_filepond.models import TemporaryUpload
6 from django_filepond_widget.fields import FilePondFile
7
8 from photos.models import Album
9
10 from .services import extract_archive
11
12 album_uploaded = Signal()
13
14
15 @shared_task
16 def process_album_upload(archive_upload_id: str, album_id: int):
17 try:
18 album = Album.objects.get(id=album_id)
19 except Album.DoesNotExist:
20 return
21
22 archive = TemporaryUpload.objects.get(upload_id=archive_upload_id).file
23 try:
24 with transaction.atomic():
25 # We make the upload atomic separately, so we can keep using the db if it fails.
26 # See https://docs.djangoproject.com/en/4.2/topics/db/transactions/#handling-exceptions-within-postgresql-transactions.
27 extract_archive(album, archive)
28 album.is_processing = False
29 album.save()
30
31 # Send signal to notify that an album has been uploaded. This is used
32 # by facedetection, and possibly in the future to notify the uploader.
33 album_uploaded.send(sender=None, album=album)
34 finally:
35 if isinstance(archive, FilePondFile):
36 archive.remove()
37
[end of website/photos/tasks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/photos/tasks.py b/website/photos/tasks.py
--- a/website/photos/tasks.py
+++ b/website/photos/tasks.py
@@ -3,7 +3,6 @@
from celery import shared_task
from django_drf_filepond.models import TemporaryUpload
-from django_filepond_widget.fields import FilePondFile
from photos.models import Album
@@ -19,7 +18,8 @@
except Album.DoesNotExist:
return
- archive = TemporaryUpload.objects.get(upload_id=archive_upload_id).file
+ upload = TemporaryUpload.objects.get(upload_id=archive_upload_id)
+ archive = upload.file
try:
with transaction.atomic():
# We make the upload atomic separately, so we can keep using the db if it fails.
@@ -32,5 +32,5 @@
# by facedetection, and possibly in the future to notify the uploader.
album_uploaded.send(sender=None, album=album)
finally:
- if isinstance(archive, FilePondFile):
- archive.remove()
+ archive.delete()
+ upload.delete()
| {"golden_diff": "diff --git a/website/photos/tasks.py b/website/photos/tasks.py\n--- a/website/photos/tasks.py\n+++ b/website/photos/tasks.py\n@@ -3,7 +3,6 @@\n \n from celery import shared_task\n from django_drf_filepond.models import TemporaryUpload\n-from django_filepond_widget.fields import FilePondFile\n \n from photos.models import Album\n \n@@ -19,7 +18,8 @@\n except Album.DoesNotExist:\n return\n \n- archive = TemporaryUpload.objects.get(upload_id=archive_upload_id).file\n+ upload = TemporaryUpload.objects.get(upload_id=archive_upload_id)\n+ archive = upload.file\n try:\n with transaction.atomic():\n # We make the upload atomic separately, so we can keep using the db if it fails.\n@@ -32,5 +32,5 @@\n # by facedetection, and possibly in the future to notify the uploader.\n album_uploaded.send(sender=None, album=album)\n finally:\n- if isinstance(archive, FilePondFile):\n- archive.remove()\n+ archive.delete()\n+ upload.delete()\n", "issue": "Prevent full disk\n### Describe the bug\r\nSometimes the server's storage gets full, because for some reason filepond uploads aren't being deleted. Today this caused the server to crash (because the full server disk broke redis). We should prevent this from happening in multiple ways:\r\n\r\n- Make old uploads be deleted. Would be nice to find out why the uploads aren't being deleted already. But we should also (additionally) periodically remove old files from the media volume.\r\n- Maybe limit the volume size such that it getting full does not influence the rest of the server. But docker doesn't really support that nicely. We could make a separate volume for it on the host and bind-mount it I guess.\r\n\r\n### How to reproduce\r\n<!-- Steps to reproduce the behaviour -->\r\n1. Upload lots of albums to a docker deployment\r\n2. See the media volume get larger.\r\n\r\n### Expected behaviour\r\nStuff is cleaned up once it's processed and periodically.\r\n\r\n\r\n\n", "before_files": [{"content": "from django.db import transaction\nfrom django.dispatch import Signal\n\nfrom celery import shared_task\nfrom django_drf_filepond.models import TemporaryUpload\nfrom django_filepond_widget.fields import FilePondFile\n\nfrom photos.models import Album\n\nfrom .services import extract_archive\n\nalbum_uploaded = Signal()\n\n\n@shared_task\ndef process_album_upload(archive_upload_id: str, album_id: int):\n try:\n album = Album.objects.get(id=album_id)\n except Album.DoesNotExist:\n return\n\n archive = TemporaryUpload.objects.get(upload_id=archive_upload_id).file\n try:\n with transaction.atomic():\n # We make the upload atomic separately, so we can keep using the db if it fails.\n # See https://docs.djangoproject.com/en/4.2/topics/db/transactions/#handling-exceptions-within-postgresql-transactions.\n extract_archive(album, archive)\n album.is_processing = False\n album.save()\n\n # Send signal to notify that an album has been uploaded. This is used\n # by facedetection, and possibly in the future to notify the uploader.\n album_uploaded.send(sender=None, album=album)\n finally:\n if isinstance(archive, FilePondFile):\n archive.remove()\n", "path": "website/photos/tasks.py"}]} | 1,051 | 235 |
gh_patches_debug_16164 | rasdani/github-patches | git_diff | mozilla__bugbug-1631 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make spawn_pipeline not depend on the order of tasks in the yaml file
Currently, if a task is defined in the yaml file before its dependencies, the spawn_pipeline script fails with:
```
Traceback (most recent call last):
File "/code/spawn_pipeline.py", line 132, in <module>
main()
File "/code/spawn_pipeline.py", line 110, in main
new_dependencies.append(id_mapping[dependency])
KeyError: 'regressor-finder'
```
So things like https://github.com/mozilla/bugbug/commit/aaa67b3b0a1db7530cbf88df644aff076fcd2e4e are needed.
We should make the spawn_pipeline script not depend on the order of definition of tasks in the yaml file.
</issue>
<code>
[start of infra/spawn_pipeline.py]
1 #!/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright 2019 Mozilla
5 #
6 # Licensed under the Apache License, Version 2.0 (the "License");
7 # you may not use this file except in compliance with the License.
8 # You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing, software
13 # distributed under the License is distributed on an "AS IS" BASIS,
14 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
17
18 """
19 This script triggers the data pipeline for the bugbug project
20 """
21
22 import argparse
23 import os
24 import sys
25
26 import jsone
27 import requests.packages.urllib3
28 import taskcluster
29 import yaml
30
31 requests.packages.urllib3.disable_warnings()
32
33 TASKCLUSTER_DEFAULT_URL = "https://community-tc.services.mozilla.com"
34
35
36 def get_taskcluster_options():
37 """
38 Helper to get the Taskcluster setup options
39 according to current environment (local or Taskcluster)
40 """
41 options = taskcluster.optionsFromEnvironment()
42 proxy_url = os.environ.get("TASKCLUSTER_PROXY_URL")
43
44 if proxy_url is not None:
45 # Always use proxy url when available
46 options["rootUrl"] = proxy_url
47
48 if "rootUrl" not in options:
49 # Always have a value in root url
50 options["rootUrl"] = TASKCLUSTER_DEFAULT_URL
51
52 return options
53
54
55 def main():
56 parser = argparse.ArgumentParser(description="Spawn tasks for bugbug data pipeline")
57 parser.add_argument("data_pipeline_json")
58
59 args = parser.parse_args()
60 decision_task_id = os.environ.get("TASK_ID")
61 options = get_taskcluster_options()
62 add_self = False
63 if decision_task_id:
64 add_self = True
65 task_group_id = decision_task_id
66 else:
67 task_group_id = taskcluster.utils.slugId()
68 keys = {"taskGroupId": task_group_id}
69
70 id_mapping = {}
71
72 # First pass, do the template rendering and dependencies resolution
73 tasks = []
74
75 with open(args.data_pipeline_json) as pipeline_file:
76 raw_tasks = yaml.safe_load(pipeline_file.read())
77
78 version = os.getenv("TAG", "latest")
79 context = {"version": version}
80 rendered = jsone.render(raw_tasks, context)
81
82 for task in rendered["tasks"]:
83 # We need to generate new unique task ids for taskcluster to be happy
84 # but need to identify dependencies across tasks. So we create a
85 # mapping between an internal ID and the generate ID
86
87 task_id = taskcluster.utils.slugId()
88 task_internal_id = task.pop("ID")
89
90 if task_internal_id in id_mapping:
91 raise ValueError(f"Conflicting IDs {task_internal_id}")
92
93 id_mapping[task_internal_id] = task_id
94
95 for key, value in keys.items():
96 task[key] = value
97
98 task_payload = task["payload"]
99
100 if "env" in task_payload and task_payload["env"]:
101 task_payload["env"]["TAG"] = version
102 else:
103 task_payload["env"] = {
104 "TAG": version,
105 }
106
107 # Process the dependencies
108 new_dependencies = []
109 for dependency in task.get("dependencies", []):
110 new_dependencies.append(id_mapping[dependency])
111
112 if add_self:
113 new_dependencies.append(decision_task_id)
114
115 task["dependencies"] = new_dependencies
116
117 tasks.append((task_id, task))
118
119 # Now sends them
120 queue = taskcluster.Queue(options)
121 try:
122 for task_id, task_payload in tasks:
123 queue.createTask(task_id, task_payload)
124
125 print(f"https://community-tc.services.mozilla.com/tasks/groups/{task_group_id}")
126 except taskcluster.exceptions.TaskclusterAuthFailure as e:
127 print(f"TaskclusterAuthFailure: {e.body}", file=sys.stderr)
128 raise
129
130
131 if __name__ == "__main__":
132 main()
133
[end of infra/spawn_pipeline.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/infra/spawn_pipeline.py b/infra/spawn_pipeline.py
--- a/infra/spawn_pipeline.py
+++ b/infra/spawn_pipeline.py
@@ -85,13 +85,19 @@
# mapping between an internal ID and the generate ID
task_id = taskcluster.utils.slugId()
- task_internal_id = task.pop("ID")
+ task_internal_id = task["ID"]
if task_internal_id in id_mapping:
raise ValueError(f"Conflicting IDs {task_internal_id}")
+ # Store each task ID in the id_mapping dictionary before processing dependencies.
+ # This way, tasks can be defined in any order.
id_mapping[task_internal_id] = task_id
+ for task in rendered["tasks"]:
+ task_internal_id = task.pop("ID")
+ task_id = id_mapping[task_internal_id]
+
for key, value in keys.items():
task[key] = value
| {"golden_diff": "diff --git a/infra/spawn_pipeline.py b/infra/spawn_pipeline.py\n--- a/infra/spawn_pipeline.py\n+++ b/infra/spawn_pipeline.py\n@@ -85,13 +85,19 @@\n # mapping between an internal ID and the generate ID\n \n task_id = taskcluster.utils.slugId()\n- task_internal_id = task.pop(\"ID\")\n+ task_internal_id = task[\"ID\"]\n \n if task_internal_id in id_mapping:\n raise ValueError(f\"Conflicting IDs {task_internal_id}\")\n \n+ # Store each task ID in the id_mapping dictionary before processing dependencies.\n+ # This way, tasks can be defined in any order.\n id_mapping[task_internal_id] = task_id\n \n+ for task in rendered[\"tasks\"]:\n+ task_internal_id = task.pop(\"ID\")\n+ task_id = id_mapping[task_internal_id]\n+\n for key, value in keys.items():\n task[key] = value\n", "issue": "Make spawn_pipeline not depend on the order of tasks in the yaml file\nCurrently, if a task is defined in the yaml file before its dependencies, the spawn_pipeline script fails with:\r\n```\r\nTraceback (most recent call last):\r\n File \"/code/spawn_pipeline.py\", line 132, in <module>\r\n main()\r\n File \"/code/spawn_pipeline.py\", line 110, in main\r\n new_dependencies.append(id_mapping[dependency])\r\nKeyError: 'regressor-finder'\r\n```\r\n\r\nSo things like https://github.com/mozilla/bugbug/commit/aaa67b3b0a1db7530cbf88df644aff076fcd2e4e are needed.\r\n\r\nWe should make the spawn_pipeline script not depend on the order of definition of tasks in the yaml file.\n", "before_files": [{"content": "#!/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright 2019 Mozilla\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThis script triggers the data pipeline for the bugbug project\n\"\"\"\n\nimport argparse\nimport os\nimport sys\n\nimport jsone\nimport requests.packages.urllib3\nimport taskcluster\nimport yaml\n\nrequests.packages.urllib3.disable_warnings()\n\nTASKCLUSTER_DEFAULT_URL = \"https://community-tc.services.mozilla.com\"\n\n\ndef get_taskcluster_options():\n \"\"\"\n Helper to get the Taskcluster setup options\n according to current environment (local or Taskcluster)\n \"\"\"\n options = taskcluster.optionsFromEnvironment()\n proxy_url = os.environ.get(\"TASKCLUSTER_PROXY_URL\")\n\n if proxy_url is not None:\n # Always use proxy url when available\n options[\"rootUrl\"] = proxy_url\n\n if \"rootUrl\" not in options:\n # Always have a value in root url\n options[\"rootUrl\"] = TASKCLUSTER_DEFAULT_URL\n\n return options\n\n\ndef main():\n parser = argparse.ArgumentParser(description=\"Spawn tasks for bugbug data pipeline\")\n parser.add_argument(\"data_pipeline_json\")\n\n args = parser.parse_args()\n decision_task_id = os.environ.get(\"TASK_ID\")\n options = get_taskcluster_options()\n add_self = False\n if decision_task_id:\n add_self = True\n task_group_id = decision_task_id\n else:\n task_group_id = taskcluster.utils.slugId()\n keys = {\"taskGroupId\": task_group_id}\n\n id_mapping = {}\n\n # First pass, do the template rendering and dependencies resolution\n tasks = []\n\n with open(args.data_pipeline_json) as pipeline_file:\n raw_tasks = yaml.safe_load(pipeline_file.read())\n\n version = os.getenv(\"TAG\", \"latest\")\n context = {\"version\": version}\n rendered = jsone.render(raw_tasks, context)\n\n for task in rendered[\"tasks\"]:\n # We need to generate new unique task ids for taskcluster to be happy\n # but need to identify dependencies across tasks. So we create a\n # mapping between an internal ID and the generate ID\n\n task_id = taskcluster.utils.slugId()\n task_internal_id = task.pop(\"ID\")\n\n if task_internal_id in id_mapping:\n raise ValueError(f\"Conflicting IDs {task_internal_id}\")\n\n id_mapping[task_internal_id] = task_id\n\n for key, value in keys.items():\n task[key] = value\n\n task_payload = task[\"payload\"]\n\n if \"env\" in task_payload and task_payload[\"env\"]:\n task_payload[\"env\"][\"TAG\"] = version\n else:\n task_payload[\"env\"] = {\n \"TAG\": version,\n }\n\n # Process the dependencies\n new_dependencies = []\n for dependency in task.get(\"dependencies\", []):\n new_dependencies.append(id_mapping[dependency])\n\n if add_self:\n new_dependencies.append(decision_task_id)\n\n task[\"dependencies\"] = new_dependencies\n\n tasks.append((task_id, task))\n\n # Now sends them\n queue = taskcluster.Queue(options)\n try:\n for task_id, task_payload in tasks:\n queue.createTask(task_id, task_payload)\n\n print(f\"https://community-tc.services.mozilla.com/tasks/groups/{task_group_id}\")\n except taskcluster.exceptions.TaskclusterAuthFailure as e:\n print(f\"TaskclusterAuthFailure: {e.body}\", file=sys.stderr)\n raise\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "infra/spawn_pipeline.py"}]} | 1,884 | 209 |
gh_patches_debug_11927 | rasdani/github-patches | git_diff | pytorch__text-280 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TypeError in Python 2.7
https://github.com/pytorch/text/blob/a2795e5731d1b7c0298a1b5087bb8142e1c39d0b/torchtext/datasets/imdb.py#L32
In python 2.7, it will report that `TypeError: 'encoding' is an invalid keyword argument for this function`.
I replace `open` with `io.open` to fix it.
</issue>
<code>
[start of torchtext/datasets/imdb.py]
1 import os
2 import glob
3
4 from .. import data
5
6
7 class IMDB(data.Dataset):
8
9 urls = ['http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz']
10 name = 'imdb'
11 dirname = 'aclImdb'
12
13 @staticmethod
14 def sort_key(ex):
15 return len(ex.text)
16
17 def __init__(self, path, text_field, label_field, **kwargs):
18 """Create an IMDB dataset instance given a path and fields.
19
20 Arguments:
21 path: Path to the dataset's highest level directory
22 text_field: The field that will be used for text data.
23 label_field: The field that will be used for label data.
24 Remaining keyword arguments: Passed to the constructor of
25 data.Dataset.
26 """
27 fields = [('text', text_field), ('label', label_field)]
28 examples = []
29
30 for label in ['pos', 'neg']:
31 for fname in glob.iglob(os.path.join(path, label, '*.txt')):
32 with open(fname, 'r', encoding="utf-8") as f:
33 text = f.readline()
34 examples.append(data.Example.fromlist([text, label], fields))
35
36 super(IMDB, self).__init__(examples, fields, **kwargs)
37
38 @classmethod
39 def splits(cls, text_field, label_field, root='.data',
40 train='train', test='test', **kwargs):
41 """Create dataset objects for splits of the IMDB dataset.
42
43 Arguments:
44 text_field: The field that will be used for the sentence.
45 label_field: The field that will be used for label data.
46 root: Root dataset storage directory. Default is '.data'.
47 train: The directory that contains the training examples
48 test: The directory that contains the test examples
49 Remaining keyword arguments: Passed to the splits method of
50 Dataset.
51 """
52 return super(IMDB, cls).splits(
53 root=root, text_field=text_field, label_field=label_field,
54 train=train, validation=None, test=test, **kwargs)
55
56 @classmethod
57 def iters(cls, batch_size=32, device=0, root='.data', vectors=None, **kwargs):
58 """Creater iterator objects for splits of the IMDB dataset.
59
60 Arguments:
61 batch_size: Batch_size
62 device: Device to create batches on. Use - 1 for CPU and None for
63 the currently active GPU device.
64 root: The root directory that contains the imdb dataset subdirectory
65 vectors: one of the available pretrained vectors or a list with each
66 element one of the available pretrained vectors (see Vocab.load_vectors)
67
68 Remaining keyword arguments: Passed to the splits method.
69 """
70 TEXT = data.Field()
71 LABEL = data.Field(sequential=False)
72
73 train, test = cls.splits(TEXT, LABEL, root=root, **kwargs)
74
75 TEXT.build_vocab(train, vectors=vectors)
76 LABEL.build_vocab(train)
77
78 return data.BucketIterator.splits(
79 (train, test), batch_size=batch_size, device=device)
80
[end of torchtext/datasets/imdb.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchtext/datasets/imdb.py b/torchtext/datasets/imdb.py
--- a/torchtext/datasets/imdb.py
+++ b/torchtext/datasets/imdb.py
@@ -1,5 +1,6 @@
import os
import glob
+import io
from .. import data
@@ -29,7 +30,7 @@
for label in ['pos', 'neg']:
for fname in glob.iglob(os.path.join(path, label, '*.txt')):
- with open(fname, 'r', encoding="utf-8") as f:
+ with io.open(fname, 'r', encoding="utf-8") as f:
text = f.readline()
examples.append(data.Example.fromlist([text, label], fields))
| {"golden_diff": "diff --git a/torchtext/datasets/imdb.py b/torchtext/datasets/imdb.py\n--- a/torchtext/datasets/imdb.py\n+++ b/torchtext/datasets/imdb.py\n@@ -1,5 +1,6 @@\n import os\n import glob\n+import io\n \n from .. import data\n \n@@ -29,7 +30,7 @@\n \n for label in ['pos', 'neg']:\n for fname in glob.iglob(os.path.join(path, label, '*.txt')):\n- with open(fname, 'r', encoding=\"utf-8\") as f:\n+ with io.open(fname, 'r', encoding=\"utf-8\") as f:\n text = f.readline()\n examples.append(data.Example.fromlist([text, label], fields))\n", "issue": "TypeError in Python 2.7\nhttps://github.com/pytorch/text/blob/a2795e5731d1b7c0298a1b5087bb8142e1c39d0b/torchtext/datasets/imdb.py#L32\r\n\r\nIn python 2.7, it will report that `TypeError: 'encoding' is an invalid keyword argument for this function`.\r\n\r\nI replace `open` with `io.open` to fix it.\n", "before_files": [{"content": "import os\nimport glob\n\nfrom .. import data\n\n\nclass IMDB(data.Dataset):\n\n urls = ['http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz']\n name = 'imdb'\n dirname = 'aclImdb'\n\n @staticmethod\n def sort_key(ex):\n return len(ex.text)\n\n def __init__(self, path, text_field, label_field, **kwargs):\n \"\"\"Create an IMDB dataset instance given a path and fields.\n\n Arguments:\n path: Path to the dataset's highest level directory\n text_field: The field that will be used for text data.\n label_field: The field that will be used for label data.\n Remaining keyword arguments: Passed to the constructor of\n data.Dataset.\n \"\"\"\n fields = [('text', text_field), ('label', label_field)]\n examples = []\n\n for label in ['pos', 'neg']:\n for fname in glob.iglob(os.path.join(path, label, '*.txt')):\n with open(fname, 'r', encoding=\"utf-8\") as f:\n text = f.readline()\n examples.append(data.Example.fromlist([text, label], fields))\n\n super(IMDB, self).__init__(examples, fields, **kwargs)\n\n @classmethod\n def splits(cls, text_field, label_field, root='.data',\n train='train', test='test', **kwargs):\n \"\"\"Create dataset objects for splits of the IMDB dataset.\n\n Arguments:\n text_field: The field that will be used for the sentence.\n label_field: The field that will be used for label data.\n root: Root dataset storage directory. Default is '.data'.\n train: The directory that contains the training examples\n test: The directory that contains the test examples\n Remaining keyword arguments: Passed to the splits method of\n Dataset.\n \"\"\"\n return super(IMDB, cls).splits(\n root=root, text_field=text_field, label_field=label_field,\n train=train, validation=None, test=test, **kwargs)\n\n @classmethod\n def iters(cls, batch_size=32, device=0, root='.data', vectors=None, **kwargs):\n \"\"\"Creater iterator objects for splits of the IMDB dataset.\n\n Arguments:\n batch_size: Batch_size\n device: Device to create batches on. Use - 1 for CPU and None for\n the currently active GPU device.\n root: The root directory that contains the imdb dataset subdirectory\n vectors: one of the available pretrained vectors or a list with each\n element one of the available pretrained vectors (see Vocab.load_vectors)\n\n Remaining keyword arguments: Passed to the splits method.\n \"\"\"\n TEXT = data.Field()\n LABEL = data.Field(sequential=False)\n\n train, test = cls.splits(TEXT, LABEL, root=root, **kwargs)\n\n TEXT.build_vocab(train, vectors=vectors)\n LABEL.build_vocab(train)\n\n return data.BucketIterator.splits(\n (train, test), batch_size=batch_size, device=device)\n", "path": "torchtext/datasets/imdb.py"}]} | 1,458 | 169 |
gh_patches_debug_12119 | rasdani/github-patches | git_diff | sanic-org__sanic-647 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
what have done to static.py?
On last Friday,everything is ok,my static file test works fine.
Today,when I pip install sanic==0.5.1
It raise 404 error.
when I pip install sanic==0.5.0
everything is ok again.
seems like the code blow has some problem?
if not file_path.startswith(root_path):
raise FileNotFound('File not found',
path=file_or_directory,
relative_url=file_uri)
</issue>
<code>
[start of sanic/static.py]
1 from mimetypes import guess_type
2 from os import path
3 from re import sub
4 from time import strftime, gmtime
5 from urllib.parse import unquote
6
7 from aiofiles.os import stat
8
9 from sanic.exceptions import (
10 ContentRangeError,
11 FileNotFound,
12 HeaderNotFound,
13 InvalidUsage,
14 )
15 from sanic.handlers import ContentRangeHandler
16 from sanic.response import file, HTTPResponse
17
18
19 def register(app, uri, file_or_directory, pattern,
20 use_modified_since, use_content_range):
21 # TODO: Though sanic is not a file server, I feel like we should at least
22 # make a good effort here. Modified-since is nice, but we could
23 # also look into etags, expires, and caching
24 """
25 Register a static directory handler with Sanic by adding a route to the
26 router and registering a handler.
27
28 :param app: Sanic
29 :param file_or_directory: File or directory path to serve from
30 :param uri: URL to serve from
31 :param pattern: regular expression used to match files in the URL
32 :param use_modified_since: If true, send file modified time, and return
33 not modified if the browser's matches the
34 server's
35 :param use_content_range: If true, process header for range requests
36 and sends the file part that is requested
37 """
38 # If we're not trying to match a file directly,
39 # serve from the folder
40 if not path.isfile(file_or_directory):
41 uri += '<file_uri:' + pattern + '>'
42
43 async def _handler(request, file_uri=None):
44 # Using this to determine if the URL is trying to break out of the path
45 # served. os.path.realpath seems to be very slow
46 if file_uri and '../' in file_uri:
47 raise InvalidUsage("Invalid URL")
48 # Merge served directory and requested file if provided
49 # Strip all / that in the beginning of the URL to help prevent python
50 # from herping a derp and treating the uri as an absolute path
51 root_path = file_path = file_or_directory
52 if file_uri:
53 file_path = path.join(
54 file_or_directory, sub('^[/]*', '', file_uri))
55
56 # URL decode the path sent by the browser otherwise we won't be able to
57 # match filenames which got encoded (filenames with spaces etc)
58 file_path = path.abspath(unquote(file_path))
59 if not file_path.startswith(root_path):
60 raise FileNotFound('File not found',
61 path=file_or_directory,
62 relative_url=file_uri)
63 try:
64 headers = {}
65 # Check if the client has been sent this file before
66 # and it has not been modified since
67 stats = None
68 if use_modified_since:
69 stats = await stat(file_path)
70 modified_since = strftime(
71 '%a, %d %b %Y %H:%M:%S GMT', gmtime(stats.st_mtime))
72 if request.headers.get('If-Modified-Since') == modified_since:
73 return HTTPResponse(status=304)
74 headers['Last-Modified'] = modified_since
75 _range = None
76 if use_content_range:
77 _range = None
78 if not stats:
79 stats = await stat(file_path)
80 headers['Accept-Ranges'] = 'bytes'
81 headers['Content-Length'] = str(stats.st_size)
82 if request.method != 'HEAD':
83 try:
84 _range = ContentRangeHandler(request, stats)
85 except HeaderNotFound:
86 pass
87 else:
88 del headers['Content-Length']
89 for key, value in _range.headers.items():
90 headers[key] = value
91 if request.method == 'HEAD':
92 return HTTPResponse(
93 headers=headers,
94 content_type=guess_type(file_path)[0] or 'text/plain')
95 else:
96 return await file(file_path, headers=headers, _range=_range)
97 except ContentRangeError:
98 raise
99 except Exception:
100 raise FileNotFound('File not found',
101 path=file_or_directory,
102 relative_url=file_uri)
103
104 app.route(uri, methods=['GET', 'HEAD'])(_handler)
105
[end of sanic/static.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sanic/static.py b/sanic/static.py
--- a/sanic/static.py
+++ b/sanic/static.py
@@ -56,7 +56,7 @@
# URL decode the path sent by the browser otherwise we won't be able to
# match filenames which got encoded (filenames with spaces etc)
file_path = path.abspath(unquote(file_path))
- if not file_path.startswith(root_path):
+ if not file_path.startswith(path.abspath(unquote(root_path))):
raise FileNotFound('File not found',
path=file_or_directory,
relative_url=file_uri)
| {"golden_diff": "diff --git a/sanic/static.py b/sanic/static.py\n--- a/sanic/static.py\n+++ b/sanic/static.py\n@@ -56,7 +56,7 @@\n # URL decode the path sent by the browser otherwise we won't be able to\n # match filenames which got encoded (filenames with spaces etc)\n file_path = path.abspath(unquote(file_path))\n- if not file_path.startswith(root_path):\n+ if not file_path.startswith(path.abspath(unquote(root_path))):\n raise FileNotFound('File not found',\n path=file_or_directory,\n relative_url=file_uri)\n", "issue": "what have done to static.py?\nOn last Friday,everything is ok,my static file test works fine.\r\n\r\nToday,when I pip install sanic==0.5.1\r\nIt raise 404 error.\r\n\r\nwhen I pip install sanic==0.5.0\r\neverything is ok again.\r\n\r\nseems like the code blow has some problem?\r\nif not file_path.startswith(root_path):\r\n raise FileNotFound('File not found',\r\n path=file_or_directory,\r\n relative_url=file_uri)\n", "before_files": [{"content": "from mimetypes import guess_type\nfrom os import path\nfrom re import sub\nfrom time import strftime, gmtime\nfrom urllib.parse import unquote\n\nfrom aiofiles.os import stat\n\nfrom sanic.exceptions import (\n ContentRangeError,\n FileNotFound,\n HeaderNotFound,\n InvalidUsage,\n)\nfrom sanic.handlers import ContentRangeHandler\nfrom sanic.response import file, HTTPResponse\n\n\ndef register(app, uri, file_or_directory, pattern,\n use_modified_since, use_content_range):\n # TODO: Though sanic is not a file server, I feel like we should at least\n # make a good effort here. Modified-since is nice, but we could\n # also look into etags, expires, and caching\n \"\"\"\n Register a static directory handler with Sanic by adding a route to the\n router and registering a handler.\n\n :param app: Sanic\n :param file_or_directory: File or directory path to serve from\n :param uri: URL to serve from\n :param pattern: regular expression used to match files in the URL\n :param use_modified_since: If true, send file modified time, and return\n not modified if the browser's matches the\n server's\n :param use_content_range: If true, process header for range requests\n and sends the file part that is requested\n \"\"\"\n # If we're not trying to match a file directly,\n # serve from the folder\n if not path.isfile(file_or_directory):\n uri += '<file_uri:' + pattern + '>'\n\n async def _handler(request, file_uri=None):\n # Using this to determine if the URL is trying to break out of the path\n # served. os.path.realpath seems to be very slow\n if file_uri and '../' in file_uri:\n raise InvalidUsage(\"Invalid URL\")\n # Merge served directory and requested file if provided\n # Strip all / that in the beginning of the URL to help prevent python\n # from herping a derp and treating the uri as an absolute path\n root_path = file_path = file_or_directory\n if file_uri:\n file_path = path.join(\n file_or_directory, sub('^[/]*', '', file_uri))\n\n # URL decode the path sent by the browser otherwise we won't be able to\n # match filenames which got encoded (filenames with spaces etc)\n file_path = path.abspath(unquote(file_path))\n if not file_path.startswith(root_path):\n raise FileNotFound('File not found',\n path=file_or_directory,\n relative_url=file_uri)\n try:\n headers = {}\n # Check if the client has been sent this file before\n # and it has not been modified since\n stats = None\n if use_modified_since:\n stats = await stat(file_path)\n modified_since = strftime(\n '%a, %d %b %Y %H:%M:%S GMT', gmtime(stats.st_mtime))\n if request.headers.get('If-Modified-Since') == modified_since:\n return HTTPResponse(status=304)\n headers['Last-Modified'] = modified_since\n _range = None\n if use_content_range:\n _range = None\n if not stats:\n stats = await stat(file_path)\n headers['Accept-Ranges'] = 'bytes'\n headers['Content-Length'] = str(stats.st_size)\n if request.method != 'HEAD':\n try:\n _range = ContentRangeHandler(request, stats)\n except HeaderNotFound:\n pass\n else:\n del headers['Content-Length']\n for key, value in _range.headers.items():\n headers[key] = value\n if request.method == 'HEAD':\n return HTTPResponse(\n headers=headers,\n content_type=guess_type(file_path)[0] or 'text/plain')\n else:\n return await file(file_path, headers=headers, _range=_range)\n except ContentRangeError:\n raise\n except Exception:\n raise FileNotFound('File not found',\n path=file_or_directory,\n relative_url=file_uri)\n\n app.route(uri, methods=['GET', 'HEAD'])(_handler)\n", "path": "sanic/static.py"}]} | 1,728 | 129 |
gh_patches_debug_38158 | rasdani/github-patches | git_diff | Flexget__Flexget-171 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Input plugin "imdb_list" currently failing to fetch lists behind authentication
Message: `There was an error during imdb_list input (Unable to get imdb list: 404 Client Error: Not Found), using cache instead."`
Same issue as, http://flexget.com/ticket/2313 but even with the most recent fix applied it still fails.
</issue>
<code>
[start of flexget/plugins/input/imdb_list.py]
1 from __future__ import unicode_literals, division, absolute_import
2 import logging
3 import csv
4 import re
5 from cgi import parse_header
6
7 from flexget import plugin
8 from flexget.event import event
9 from flexget.utils import requests
10 from flexget.utils.imdb import make_url
11 from flexget.utils.cached_input import cached
12 from flexget.utils.tools import decode_html
13 from flexget.entry import Entry
14 from flexget.utils.soup import get_soup
15
16 log = logging.getLogger('imdb_list')
17
18 USER_ID_RE = r'^ur\d{7,8}$'
19
20
21 class ImdbList(object):
22 """"Creates an entry for each movie in your imdb list."""
23
24 schema = {
25 'type': 'object',
26 'properties': {
27 'user_id': {
28 'type': 'string',
29 'pattern': USER_ID_RE,
30 'error_pattern': 'user_id must be in the form urXXXXXXX'
31 },
32 'username': {'type': 'string'},
33 'password': {'type': 'string'},
34 'list': {'type': 'string'}
35 },
36 'required': ['list'],
37 'additionalProperties': False
38 }
39
40 @cached('imdb_list', persist='2 hours')
41 def on_task_input(self, task, config):
42 sess = requests.Session()
43 if config.get('username') and config.get('password'):
44
45 log.verbose('Logging in ...')
46
47 # Log in to imdb with our handler
48 params = {'login': config['username'], 'password': config['password']}
49 try:
50 # First get the login page so we can get the hidden input value
51 soup = get_soup(sess.get('https://secure.imdb.com/register-imdb/login').content)
52
53 # Fix for bs4 bug. see #2313 and github#118
54 auxsoup = soup.find('div', id='nb20').next_sibling.next_sibling
55 tag = auxsoup.find('input', attrs={'name': '49e6c'})
56 if tag:
57 params['49e6c'] = tag['value']
58 else:
59 log.warning('Unable to find required info for imdb login, maybe their login method has changed.')
60 # Now we do the actual login with appropriate parameters
61 r = sess.post('https://secure.imdb.com/register-imdb/login', data=params, raise_status=False)
62 except requests.RequestException as e:
63 raise plugin.PluginError('Unable to login to imdb: %s' % e.message)
64
65 # IMDb redirects us upon a successful login.
66 # removed - doesn't happen always?
67 # if r.status_code != 302:
68 # log.warning('It appears logging in to IMDb was unsuccessful.')
69
70 # try to automatically figure out user_id from watchlist redirect url
71 if not 'user_id' in config:
72 log.verbose('Getting user_id ...')
73 try:
74 response = sess.get('http://www.imdb.com/list/watchlist')
75 except requests.RequestException as e:
76 log.error('Error retrieving user ID from imdb: %s' % e.message)
77 user_id = ''
78 else:
79 log.debug('redirected to %s' % response.url)
80 user_id = response.url.split('/')[-2]
81 if re.match(USER_ID_RE, user_id):
82 config['user_id'] = user_id
83 else:
84 raise plugin.PluginError('Couldn\'t figure out user_id, please configure it manually.')
85
86 if not 'user_id' in config:
87 raise plugin.PluginError('Configuration option `user_id` required.')
88
89 log.verbose('Retrieving list %s ...' % config['list'])
90
91 # Get the imdb list in csv format
92 try:
93 url = 'http://www.imdb.com/list/export'
94 params = {'list_id': config['list'], 'author_id': config['user_id']}
95 log.debug('Requesting %s' % url)
96 opener = sess.get(url, params=params)
97 mime_type = parse_header(opener.headers['content-type'])[0]
98 log.debug('mime_type: %s' % mime_type)
99 if mime_type != 'text/csv':
100 raise plugin.PluginError('Didn\'t get CSV export as response. Probably specified list `%s` '
101 'does not exist.' % config['list'])
102 csv_rows = csv.reader(opener.iter_lines())
103 except requests.RequestException as e:
104 raise plugin.PluginError('Unable to get imdb list: %s' % e.message)
105
106 # Create an Entry for each movie in the list
107 entries = []
108 for row in csv_rows:
109 if not row or row[0] == 'position':
110 # Don't use blank rows or the headings row
111 continue
112 try:
113 title = decode_html(row[5]).decode('utf-8')
114 entries.append(Entry(title=title, url=make_url(row[1]), imdb_id=row[1], imdb_name=title))
115 except IndexError:
116 log.critical('IndexError! Unable to handle row: %s' % row)
117 return entries
118
119
120 @event('plugin.register')
121 def register_plugin():
122 plugin.register(ImdbList, 'imdb_list', api_ver=2)
123
[end of flexget/plugins/input/imdb_list.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/flexget/plugins/input/imdb_list.py b/flexget/plugins/input/imdb_list.py
--- a/flexget/plugins/input/imdb_list.py
+++ b/flexget/plugins/input/imdb_list.py
@@ -1,13 +1,13 @@
from __future__ import unicode_literals, division, absolute_import
import logging
-import csv
+import feedparser
import re
from cgi import parse_header
from flexget import plugin
from flexget.event import event
from flexget.utils import requests
-from flexget.utils.imdb import make_url
+from flexget.utils.imdb import make_url, extract_id
from flexget.utils.cached_input import cached
from flexget.utils.tools import decode_html
from flexget.entry import Entry
@@ -88,32 +88,27 @@
log.verbose('Retrieving list %s ...' % config['list'])
- # Get the imdb list in csv format
+ # Get the imdb list in RSS format
try:
- url = 'http://www.imdb.com/list/export'
- params = {'list_id': config['list'], 'author_id': config['user_id']}
+ if config['list'] in ['watchlist', 'ratings', 'checkins']:
+ url = 'http://rss.imdb.com/user/%s/%s' % (config['user_id'], config['list'])
+ else:
+ url = 'http://rss.imdb.com/list/%s' % config['list']
log.debug('Requesting %s' % url)
- opener = sess.get(url, params=params)
- mime_type = parse_header(opener.headers['content-type'])[0]
- log.debug('mime_type: %s' % mime_type)
- if mime_type != 'text/csv':
- raise plugin.PluginError('Didn\'t get CSV export as response. Probably specified list `%s` '
- 'does not exist.' % config['list'])
- csv_rows = csv.reader(opener.iter_lines())
+ try:
+ rss = feedparser.parse(url)
+ except LookupError as e:
+ raise plugin.PluginError('Failed to parse RSS feed for list `%s` correctly: %s' % (config['list'], e))
except requests.RequestException as e:
raise plugin.PluginError('Unable to get imdb list: %s' % e.message)
# Create an Entry for each movie in the list
entries = []
- for row in csv_rows:
- if not row or row[0] == 'position':
- # Don't use blank rows or the headings row
- continue
+ for entry in rss.entries:
try:
- title = decode_html(row[5]).decode('utf-8')
- entries.append(Entry(title=title, url=make_url(row[1]), imdb_id=row[1], imdb_name=title))
+ entries.append(Entry(title=entry.title, url=entry.link, imdb_id=extract_id(entry.link), imdb_name=entry.title))
except IndexError:
- log.critical('IndexError! Unable to handle row: %s' % row)
+ log.critical('IndexError! Unable to handle RSS entry: %s' % entry)
return entries
| {"golden_diff": "diff --git a/flexget/plugins/input/imdb_list.py b/flexget/plugins/input/imdb_list.py\n--- a/flexget/plugins/input/imdb_list.py\n+++ b/flexget/plugins/input/imdb_list.py\n@@ -1,13 +1,13 @@\n from __future__ import unicode_literals, division, absolute_import\n import logging\n-import csv\n+import feedparser\n import re\n from cgi import parse_header\n \n from flexget import plugin\n from flexget.event import event\n from flexget.utils import requests\n-from flexget.utils.imdb import make_url\n+from flexget.utils.imdb import make_url, extract_id\n from flexget.utils.cached_input import cached\n from flexget.utils.tools import decode_html\n from flexget.entry import Entry\n@@ -88,32 +88,27 @@\n \n log.verbose('Retrieving list %s ...' % config['list'])\n \n- # Get the imdb list in csv format\n+ # Get the imdb list in RSS format\n try:\n- url = 'http://www.imdb.com/list/export'\n- params = {'list_id': config['list'], 'author_id': config['user_id']}\n+ if config['list'] in ['watchlist', 'ratings', 'checkins']:\n+ url = 'http://rss.imdb.com/user/%s/%s' % (config['user_id'], config['list'])\n+ else:\n+ url = 'http://rss.imdb.com/list/%s' % config['list']\n log.debug('Requesting %s' % url)\n- opener = sess.get(url, params=params)\n- mime_type = parse_header(opener.headers['content-type'])[0]\n- log.debug('mime_type: %s' % mime_type)\n- if mime_type != 'text/csv':\n- raise plugin.PluginError('Didn\\'t get CSV export as response. Probably specified list `%s` '\n- 'does not exist.' % config['list'])\n- csv_rows = csv.reader(opener.iter_lines())\n+ try:\n+ rss = feedparser.parse(url)\n+ except LookupError as e:\n+ raise plugin.PluginError('Failed to parse RSS feed for list `%s` correctly: %s' % (config['list'], e))\n except requests.RequestException as e:\n raise plugin.PluginError('Unable to get imdb list: %s' % e.message)\n \n # Create an Entry for each movie in the list\n entries = []\n- for row in csv_rows:\n- if not row or row[0] == 'position':\n- # Don't use blank rows or the headings row\n- continue\n+ for entry in rss.entries:\n try:\n- title = decode_html(row[5]).decode('utf-8')\n- entries.append(Entry(title=title, url=make_url(row[1]), imdb_id=row[1], imdb_name=title))\n+ entries.append(Entry(title=entry.title, url=entry.link, imdb_id=extract_id(entry.link), imdb_name=entry.title))\n except IndexError:\n- log.critical('IndexError! Unable to handle row: %s' % row)\n+ log.critical('IndexError! Unable to handle RSS entry: %s' % entry)\n return entries\n", "issue": "Input plugin \"imdb_list\" currently failing to fetch lists behind authentication\nMessage: `There was an error during imdb_list input (Unable to get imdb list: 404 Client Error: Not Found), using cache instead.\"`\n\nSame issue as, http://flexget.com/ticket/2313 but even with the most recent fix applied it still fails.\n\n", "before_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nimport logging\nimport csv\nimport re\nfrom cgi import parse_header\n\nfrom flexget import plugin\nfrom flexget.event import event\nfrom flexget.utils import requests\nfrom flexget.utils.imdb import make_url\nfrom flexget.utils.cached_input import cached\nfrom flexget.utils.tools import decode_html\nfrom flexget.entry import Entry\nfrom flexget.utils.soup import get_soup\n\nlog = logging.getLogger('imdb_list')\n\nUSER_ID_RE = r'^ur\\d{7,8}$'\n\n\nclass ImdbList(object):\n \"\"\"\"Creates an entry for each movie in your imdb list.\"\"\"\n\n schema = {\n 'type': 'object',\n 'properties': {\n 'user_id': {\n 'type': 'string',\n 'pattern': USER_ID_RE,\n 'error_pattern': 'user_id must be in the form urXXXXXXX'\n },\n 'username': {'type': 'string'},\n 'password': {'type': 'string'},\n 'list': {'type': 'string'}\n },\n 'required': ['list'],\n 'additionalProperties': False\n }\n\n @cached('imdb_list', persist='2 hours')\n def on_task_input(self, task, config):\n sess = requests.Session()\n if config.get('username') and config.get('password'):\n\n log.verbose('Logging in ...')\n\n # Log in to imdb with our handler\n params = {'login': config['username'], 'password': config['password']}\n try:\n # First get the login page so we can get the hidden input value\n soup = get_soup(sess.get('https://secure.imdb.com/register-imdb/login').content)\n\n # Fix for bs4 bug. see #2313 and github#118\n auxsoup = soup.find('div', id='nb20').next_sibling.next_sibling\n tag = auxsoup.find('input', attrs={'name': '49e6c'})\n if tag:\n params['49e6c'] = tag['value']\n else:\n log.warning('Unable to find required info for imdb login, maybe their login method has changed.')\n # Now we do the actual login with appropriate parameters\n r = sess.post('https://secure.imdb.com/register-imdb/login', data=params, raise_status=False)\n except requests.RequestException as e:\n raise plugin.PluginError('Unable to login to imdb: %s' % e.message)\n\n # IMDb redirects us upon a successful login.\n # removed - doesn't happen always?\n # if r.status_code != 302:\n # log.warning('It appears logging in to IMDb was unsuccessful.')\n\n # try to automatically figure out user_id from watchlist redirect url\n if not 'user_id' in config:\n log.verbose('Getting user_id ...')\n try:\n response = sess.get('http://www.imdb.com/list/watchlist')\n except requests.RequestException as e:\n log.error('Error retrieving user ID from imdb: %s' % e.message)\n user_id = ''\n else:\n log.debug('redirected to %s' % response.url)\n user_id = response.url.split('/')[-2]\n if re.match(USER_ID_RE, user_id):\n config['user_id'] = user_id\n else:\n raise plugin.PluginError('Couldn\\'t figure out user_id, please configure it manually.')\n\n if not 'user_id' in config:\n raise plugin.PluginError('Configuration option `user_id` required.')\n\n log.verbose('Retrieving list %s ...' % config['list'])\n\n # Get the imdb list in csv format\n try:\n url = 'http://www.imdb.com/list/export'\n params = {'list_id': config['list'], 'author_id': config['user_id']}\n log.debug('Requesting %s' % url)\n opener = sess.get(url, params=params)\n mime_type = parse_header(opener.headers['content-type'])[0]\n log.debug('mime_type: %s' % mime_type)\n if mime_type != 'text/csv':\n raise plugin.PluginError('Didn\\'t get CSV export as response. Probably specified list `%s` '\n 'does not exist.' % config['list'])\n csv_rows = csv.reader(opener.iter_lines())\n except requests.RequestException as e:\n raise plugin.PluginError('Unable to get imdb list: %s' % e.message)\n\n # Create an Entry for each movie in the list\n entries = []\n for row in csv_rows:\n if not row or row[0] == 'position':\n # Don't use blank rows or the headings row\n continue\n try:\n title = decode_html(row[5]).decode('utf-8')\n entries.append(Entry(title=title, url=make_url(row[1]), imdb_id=row[1], imdb_name=title))\n except IndexError:\n log.critical('IndexError! Unable to handle row: %s' % row)\n return entries\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(ImdbList, 'imdb_list', api_ver=2)\n", "path": "flexget/plugins/input/imdb_list.py"}]} | 1,988 | 705 |
gh_patches_debug_36150 | rasdani/github-patches | git_diff | prowler-cloud__prowler-2736 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ecr_repositories_scan_vulnerabilities_in_latest_image: Configure level
### New feature motivation
Hi, is it possible to configure the level from which the test shall fail?
AWS tags some findings as medium which I might want to ignore, but of course I don't want to mute critical findings for the image.
### Solution Proposed
none
### Describe alternatives you've considered
none
### Additional context
_No response_
</issue>
<code>
[start of prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py]
1 from prowler.lib.check.models import Check, Check_Report_AWS
2 from prowler.providers.aws.services.ecr.ecr_client import ecr_client
3
4
5 class ecr_repositories_scan_vulnerabilities_in_latest_image(Check):
6 def execute(self):
7 findings = []
8 for registry in ecr_client.registries.values():
9 for repository in registry.repositories:
10 # First check if the repository has images
11 if len(repository.images_details) > 0:
12 # We only want to check the latest image pushed
13 image = repository.images_details[-1]
14
15 report = Check_Report_AWS(self.metadata())
16 report.region = repository.region
17 report.resource_id = repository.name
18 report.resource_arn = repository.arn
19 report.resource_tags = repository.tags
20 report.status = "PASS"
21 report.status_extended = f"ECR repository {repository.name} has imageTag {image.latest_tag} scanned without findings."
22 if not image.scan_findings_status:
23 report.status = "FAIL"
24 report.status_extended = f"ECR repository {repository.name} has imageTag {image.latest_tag} without a scan."
25 elif image.scan_findings_status == "FAILED":
26 report.status = "FAIL"
27 report.status_extended = (
28 f"ECR repository {repository.name} with scan status FAILED."
29 )
30 elif image.scan_findings_status != "FAILED":
31 if image.scan_findings_severity_count and (
32 image.scan_findings_severity_count.critical
33 or image.scan_findings_severity_count.high
34 or image.scan_findings_severity_count.medium
35 ):
36 report.status = "FAIL"
37 report.status_extended = f"ECR repository {repository.name} has imageTag {image.latest_tag} scanned with findings: CRITICAL->{image.scan_findings_severity_count.critical}, HIGH->{image.scan_findings_severity_count.high}, MEDIUM->{image.scan_findings_severity_count.medium}."
38
39 findings.append(report)
40
41 return findings
42
[end of prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py b/prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py
--- a/prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py
+++ b/prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py
@@ -5,6 +5,12 @@
class ecr_repositories_scan_vulnerabilities_in_latest_image(Check):
def execute(self):
findings = []
+
+ # Get minimun severity to report
+ minimum_severity = ecr_client.audit_config.get(
+ "ecr_repository_vulnerability_minimum_severity", "MEDIUM"
+ )
+
for registry in ecr_client.registries.values():
for repository in registry.repositories:
# First check if the repository has images
@@ -27,8 +33,23 @@
report.status_extended = (
f"ECR repository {repository.name} with scan status FAILED."
)
- elif image.scan_findings_status != "FAILED":
- if image.scan_findings_severity_count and (
+ elif (
+ image.scan_findings_status != "FAILED"
+ and image.scan_findings_severity_count
+ ):
+ if (
+ minimum_severity == "CRITICAL"
+ and image.scan_findings_severity_count.critical
+ ):
+ report.status = "FAIL"
+ report.status_extended = f"ECR repository {repository.name} has imageTag {image.latest_tag} scanned with findings: CRITICAL->{image.scan_findings_severity_count.critical}."
+ elif minimum_severity == "HIGH" and (
+ image.scan_findings_severity_count.critical
+ or image.scan_findings_severity_count.high
+ ):
+ report.status = "FAIL"
+ report.status_extended = f"ECR repository {repository.name} has imageTag {image.latest_tag} scanned with findings: CRITICAL->{image.scan_findings_severity_count.critical}, HIGH->{image.scan_findings_severity_count.high}."
+ elif minimum_severity == "MEDIUM" and (
image.scan_findings_severity_count.critical
or image.scan_findings_severity_count.high
or image.scan_findings_severity_count.medium
| {"golden_diff": "diff --git a/prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py b/prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py\n--- a/prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py\n+++ b/prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py\n@@ -5,6 +5,12 @@\n class ecr_repositories_scan_vulnerabilities_in_latest_image(Check):\n def execute(self):\n findings = []\n+\n+ # Get minimun severity to report\n+ minimum_severity = ecr_client.audit_config.get(\n+ \"ecr_repository_vulnerability_minimum_severity\", \"MEDIUM\"\n+ )\n+\n for registry in ecr_client.registries.values():\n for repository in registry.repositories:\n # First check if the repository has images\n@@ -27,8 +33,23 @@\n report.status_extended = (\n f\"ECR repository {repository.name} with scan status FAILED.\"\n )\n- elif image.scan_findings_status != \"FAILED\":\n- if image.scan_findings_severity_count and (\n+ elif (\n+ image.scan_findings_status != \"FAILED\"\n+ and image.scan_findings_severity_count\n+ ):\n+ if (\n+ minimum_severity == \"CRITICAL\"\n+ and image.scan_findings_severity_count.critical\n+ ):\n+ report.status = \"FAIL\"\n+ report.status_extended = f\"ECR repository {repository.name} has imageTag {image.latest_tag} scanned with findings: CRITICAL->{image.scan_findings_severity_count.critical}.\"\n+ elif minimum_severity == \"HIGH\" and (\n+ image.scan_findings_severity_count.critical\n+ or image.scan_findings_severity_count.high\n+ ):\n+ report.status = \"FAIL\"\n+ report.status_extended = f\"ECR repository {repository.name} has imageTag {image.latest_tag} scanned with findings: CRITICAL->{image.scan_findings_severity_count.critical}, HIGH->{image.scan_findings_severity_count.high}.\"\n+ elif minimum_severity == \"MEDIUM\" and (\n image.scan_findings_severity_count.critical\n or image.scan_findings_severity_count.high\n or image.scan_findings_severity_count.medium\n", "issue": "ecr_repositories_scan_vulnerabilities_in_latest_image: Configure level\n### New feature motivation\n\nHi, is it possible to configure the level from which the test shall fail?\r\nAWS tags some findings as medium which I might want to ignore, but of course I don't want to mute critical findings for the image.\n\n### Solution Proposed\n\nnone\n\n### Describe alternatives you've considered\n\nnone\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "from prowler.lib.check.models import Check, Check_Report_AWS\nfrom prowler.providers.aws.services.ecr.ecr_client import ecr_client\n\n\nclass ecr_repositories_scan_vulnerabilities_in_latest_image(Check):\n def execute(self):\n findings = []\n for registry in ecr_client.registries.values():\n for repository in registry.repositories:\n # First check if the repository has images\n if len(repository.images_details) > 0:\n # We only want to check the latest image pushed\n image = repository.images_details[-1]\n\n report = Check_Report_AWS(self.metadata())\n report.region = repository.region\n report.resource_id = repository.name\n report.resource_arn = repository.arn\n report.resource_tags = repository.tags\n report.status = \"PASS\"\n report.status_extended = f\"ECR repository {repository.name} has imageTag {image.latest_tag} scanned without findings.\"\n if not image.scan_findings_status:\n report.status = \"FAIL\"\n report.status_extended = f\"ECR repository {repository.name} has imageTag {image.latest_tag} without a scan.\"\n elif image.scan_findings_status == \"FAILED\":\n report.status = \"FAIL\"\n report.status_extended = (\n f\"ECR repository {repository.name} with scan status FAILED.\"\n )\n elif image.scan_findings_status != \"FAILED\":\n if image.scan_findings_severity_count and (\n image.scan_findings_severity_count.critical\n or image.scan_findings_severity_count.high\n or image.scan_findings_severity_count.medium\n ):\n report.status = \"FAIL\"\n report.status_extended = f\"ECR repository {repository.name} has imageTag {image.latest_tag} scanned with findings: CRITICAL->{image.scan_findings_severity_count.critical}, HIGH->{image.scan_findings_severity_count.high}, MEDIUM->{image.scan_findings_severity_count.medium}.\"\n\n findings.append(report)\n\n return findings\n", "path": "prowler/providers/aws/services/ecr/ecr_repositories_scan_vulnerabilities_in_latest_image/ecr_repositories_scan_vulnerabilities_in_latest_image.py"}]} | 1,162 | 560 |
gh_patches_debug_13964 | rasdani/github-patches | git_diff | azavea__raster-vision-701 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Predict zero for nodata pixels on semantic segmentation
</issue>
<code>
[start of rastervision/task/semantic_segmentation.py]
1 from typing import List
2 import logging
3
4 import numpy as np
5
6 from .task import Task
7 from rastervision.core.box import Box
8 from rastervision.data.scene import Scene
9 from rastervision.data.label import SemanticSegmentationLabels
10
11 log = logging.getLogger(__name__)
12
13
14 def get_random_sample_train_windows(label_store, chip_size, class_map, extent,
15 chip_options, filter_windows):
16 prob = chip_options.negative_survival_probability
17 target_count_threshold = chip_options.target_count_threshold
18 target_classes = chip_options.target_classes
19 chips_per_scene = chip_options.chips_per_scene
20
21 if not target_classes:
22 all_class_ids = [item.id for item in class_map.get_items()]
23 target_classes = all_class_ids
24
25 windows = []
26 attempts = 0
27 while (attempts < chips_per_scene):
28 candidate_window = extent.make_random_square(chip_size)
29 if not filter_windows([candidate_window]):
30 continue
31 attempts = attempts + 1
32
33 if (prob >= 1.0):
34 windows.append(candidate_window)
35 elif attempts == chips_per_scene and len(windows) == 0:
36 windows.append(candidate_window)
37 else:
38 good = label_store.enough_target_pixels(
39 candidate_window, target_count_threshold, target_classes)
40 if good or (np.random.rand() < prob):
41 windows.append(candidate_window)
42
43 return windows
44
45
46 class SemanticSegmentation(Task):
47 """Task-derived type that implements the semantic segmentation task."""
48
49 def get_train_windows(self, scene: Scene) -> List[Box]:
50 """Get training windows covering a scene.
51
52 Args:
53 scene: The scene over-which windows are to be generated.
54
55 Returns:
56 A list of windows, list(Box)
57
58 """
59
60 def filter_windows(windows):
61 if scene.aoi_polygons:
62 windows = Box.filter_by_aoi(windows, scene.aoi_polygons)
63 return windows
64
65 raster_source = scene.raster_source
66 extent = raster_source.get_extent()
67 label_store = scene.ground_truth_label_source
68 chip_size = self.config.chip_size
69
70 chip_options = self.config.chip_options
71
72 if chip_options.window_method == 'random_sample':
73 return get_random_sample_train_windows(
74 label_store, chip_size, self.config.class_map, extent,
75 chip_options, filter_windows)
76 elif chip_options.window_method == 'sliding':
77 stride = chip_options.stride
78 if stride is None:
79 stride = chip_size / 2
80
81 return list(
82 filter_windows((extent.get_windows(chip_size, stride))))
83
84 def get_train_labels(self, window: Box, scene: Scene) -> np.ndarray:
85 """Get the training labels for the given window in the given scene.
86
87 Args:
88 window: The window over-which the labels are to be
89 retrieved.
90 scene: The scene from-which the window of labels is to be
91 extracted.
92
93 Returns:
94 An appropriately-shaped 2d np.ndarray with the labels
95 encoded as packed pixels.
96
97 """
98 label_store = scene.ground_truth_label_source
99 return label_store.get_labels(window)
100
101 def get_predict_windows(self, extent: Box) -> List[Box]:
102 """Get windows over-which predictions will be calculated.
103
104 Args:
105 extent: The overall extent of the area.
106
107 Returns:
108 An sequence of windows.
109
110 """
111 chip_size = self.config.chip_size
112 return extent.get_windows(chip_size, chip_size)
113
114 def post_process_predictions(self, labels, scene):
115 return labels
116
117 def save_debug_predict_image(self, scene, debug_dir_uri):
118 # TODO implement this
119 pass
120
121 def predict_scene(self, scene, tmp_dir):
122 """Predict on a single scene, and return the labels."""
123 log.info('Making predictions for scene')
124 raster_source = scene.raster_source
125 windows = self.get_predict_windows(raster_source.get_extent())
126
127 def label_fn(window):
128 chip = raster_source.get_chip(window)
129 if np.any(chip):
130 chip = raster_source.get_chip(window)
131 labels = self.backend.predict([chip], [window], tmp_dir)
132 label_arr = labels.get_label_arr(window)
133 else:
134 label_arr = np.zeros((window.get_height(), window.get_width()))
135 print('.', end='', flush=True)
136 return label_arr
137
138 return SemanticSegmentationLabels(windows, label_fn)
139
[end of rastervision/task/semantic_segmentation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rastervision/task/semantic_segmentation.py b/rastervision/task/semantic_segmentation.py
--- a/rastervision/task/semantic_segmentation.py
+++ b/rastervision/task/semantic_segmentation.py
@@ -126,12 +126,12 @@
def label_fn(window):
chip = raster_source.get_chip(window)
- if np.any(chip):
- chip = raster_source.get_chip(window)
- labels = self.backend.predict([chip], [window], tmp_dir)
- label_arr = labels.get_label_arr(window)
- else:
- label_arr = np.zeros((window.get_height(), window.get_width()))
+ labels = self.backend.predict([chip], [window], tmp_dir)
+ label_arr = labels.get_label_arr(window)
+
+ # Set NODATA pixels in imagery to predicted value of 0 (ie. ignore)
+ label_arr[np.sum(chip, axis=2) == 0] = 0
+
print('.', end='', flush=True)
return label_arr
| {"golden_diff": "diff --git a/rastervision/task/semantic_segmentation.py b/rastervision/task/semantic_segmentation.py\n--- a/rastervision/task/semantic_segmentation.py\n+++ b/rastervision/task/semantic_segmentation.py\n@@ -126,12 +126,12 @@\n \n def label_fn(window):\n chip = raster_source.get_chip(window)\n- if np.any(chip):\n- chip = raster_source.get_chip(window)\n- labels = self.backend.predict([chip], [window], tmp_dir)\n- label_arr = labels.get_label_arr(window)\n- else:\n- label_arr = np.zeros((window.get_height(), window.get_width()))\n+ labels = self.backend.predict([chip], [window], tmp_dir)\n+ label_arr = labels.get_label_arr(window)\n+\n+ # Set NODATA pixels in imagery to predicted value of 0 (ie. ignore)\n+ label_arr[np.sum(chip, axis=2) == 0] = 0\n+\n print('.', end='', flush=True)\n return label_arr\n", "issue": "Predict zero for nodata pixels on semantic segmentation\n\n", "before_files": [{"content": "from typing import List\nimport logging\n\nimport numpy as np\n\nfrom .task import Task\nfrom rastervision.core.box import Box\nfrom rastervision.data.scene import Scene\nfrom rastervision.data.label import SemanticSegmentationLabels\n\nlog = logging.getLogger(__name__)\n\n\ndef get_random_sample_train_windows(label_store, chip_size, class_map, extent,\n chip_options, filter_windows):\n prob = chip_options.negative_survival_probability\n target_count_threshold = chip_options.target_count_threshold\n target_classes = chip_options.target_classes\n chips_per_scene = chip_options.chips_per_scene\n\n if not target_classes:\n all_class_ids = [item.id for item in class_map.get_items()]\n target_classes = all_class_ids\n\n windows = []\n attempts = 0\n while (attempts < chips_per_scene):\n candidate_window = extent.make_random_square(chip_size)\n if not filter_windows([candidate_window]):\n continue\n attempts = attempts + 1\n\n if (prob >= 1.0):\n windows.append(candidate_window)\n elif attempts == chips_per_scene and len(windows) == 0:\n windows.append(candidate_window)\n else:\n good = label_store.enough_target_pixels(\n candidate_window, target_count_threshold, target_classes)\n if good or (np.random.rand() < prob):\n windows.append(candidate_window)\n\n return windows\n\n\nclass SemanticSegmentation(Task):\n \"\"\"Task-derived type that implements the semantic segmentation task.\"\"\"\n\n def get_train_windows(self, scene: Scene) -> List[Box]:\n \"\"\"Get training windows covering a scene.\n\n Args:\n scene: The scene over-which windows are to be generated.\n\n Returns:\n A list of windows, list(Box)\n\n \"\"\"\n\n def filter_windows(windows):\n if scene.aoi_polygons:\n windows = Box.filter_by_aoi(windows, scene.aoi_polygons)\n return windows\n\n raster_source = scene.raster_source\n extent = raster_source.get_extent()\n label_store = scene.ground_truth_label_source\n chip_size = self.config.chip_size\n\n chip_options = self.config.chip_options\n\n if chip_options.window_method == 'random_sample':\n return get_random_sample_train_windows(\n label_store, chip_size, self.config.class_map, extent,\n chip_options, filter_windows)\n elif chip_options.window_method == 'sliding':\n stride = chip_options.stride\n if stride is None:\n stride = chip_size / 2\n\n return list(\n filter_windows((extent.get_windows(chip_size, stride))))\n\n def get_train_labels(self, window: Box, scene: Scene) -> np.ndarray:\n \"\"\"Get the training labels for the given window in the given scene.\n\n Args:\n window: The window over-which the labels are to be\n retrieved.\n scene: The scene from-which the window of labels is to be\n extracted.\n\n Returns:\n An appropriately-shaped 2d np.ndarray with the labels\n encoded as packed pixels.\n\n \"\"\"\n label_store = scene.ground_truth_label_source\n return label_store.get_labels(window)\n\n def get_predict_windows(self, extent: Box) -> List[Box]:\n \"\"\"Get windows over-which predictions will be calculated.\n\n Args:\n extent: The overall extent of the area.\n\n Returns:\n An sequence of windows.\n\n \"\"\"\n chip_size = self.config.chip_size\n return extent.get_windows(chip_size, chip_size)\n\n def post_process_predictions(self, labels, scene):\n return labels\n\n def save_debug_predict_image(self, scene, debug_dir_uri):\n # TODO implement this\n pass\n\n def predict_scene(self, scene, tmp_dir):\n \"\"\"Predict on a single scene, and return the labels.\"\"\"\n log.info('Making predictions for scene')\n raster_source = scene.raster_source\n windows = self.get_predict_windows(raster_source.get_extent())\n\n def label_fn(window):\n chip = raster_source.get_chip(window)\n if np.any(chip):\n chip = raster_source.get_chip(window)\n labels = self.backend.predict([chip], [window], tmp_dir)\n label_arr = labels.get_label_arr(window)\n else:\n label_arr = np.zeros((window.get_height(), window.get_width()))\n print('.', end='', flush=True)\n return label_arr\n\n return SemanticSegmentationLabels(windows, label_fn)\n", "path": "rastervision/task/semantic_segmentation.py"}]} | 1,820 | 232 |
gh_patches_debug_24535 | rasdani/github-patches | git_diff | nvaccess__nvda-14588 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
in tabbed notepad when switching between tabs nvda should announce some way to differentiate between tabs
### Steps to reproduce:
download the new tabbed notepad.
now using the menu create a new tab
now switch between tabs with ctrl+tabe
### Actual behavior:
nvda announces blank edition text editor
### Expected behavior:
Before writing what I want, I would like to talk about my discoveries, sorry if it doesn't make sense.
I typed a different word into the first line of text on each tab.
guide example 1
Fernando
guide 2
silva
using object navigation I found the list of tabs and within this list there was each tab named with what was written in the first line of text.
Now I left the first line of text empty in tab 1
in the list of tabs tab 1 appears with the name of untitled
from what i understand if the first line of text is characters this text will be the title of the tab.
If the first line of text is empty, the tab will have an untitled title.
so my suggestion is:
when switching between tabs in notepad in this example by pressing ctrl+tab nvda should announce the title of the tab which will be what is typed in the first line.
But this doesn't work if the first line of the tabs is empty, so I suggest that nvda also announce the position of the tab within the list.
example
guide 1
first line
Fernando
guide 2
first line
empty
guide 3
first line
silva
when switching between tabs nvda would announce:
guide 1 of 3 fernando
guide 2 of 3 untitled
guide 3 of 3 silva
Tab name and tab count could also be announced by command nvda + t to read window name.
### NVDA logs, crash dumps and other attachments:
### System configuration
#### NVDA installed/portable/running from source:
instaled
#### NVDA version:
nvda.exe, NVDA alpha-27590,180c9f2b
#### Windows version:
11 22.623.1095
#### Name and version of other software in use when reproducing the issue:
Notepad.exe, Microsoft.WindowsNotepad 11.2212.33.0
#### Other information about your system:
### Other questions
#### Does the issue still occur after restarting your computer?
yes
#### Have you tried any other versions of NVDA? If so, please report their behaviors.
no
#### If NVDA add-ons are disabled, is your problem still occurring?
yes
#### Does the issue still occur after you run the COM Registration Fixing Tool in NVDA's tools menu?
yes
</issue>
<code>
[start of source/appModules/notepad.py]
1 # A part of NonVisual Desktop Access (NVDA)
2 # Copyright (C) 2022-2023 NV Access Limited, Joseph Lee
3 # This file is covered by the GNU General Public License.
4 # See the file COPYING for more details.
5
6 """App module for Windows Notepad.
7 While this app module also covers older Notepad releases,
8 this module provides workarounds for Windows 11 Notepad."""
9
10 from comtypes import COMError
11 import appModuleHandler
12 import api
13 import UIAHandler
14 from NVDAObjects.UIA import UIA
15 from NVDAObjects import NVDAObject
16
17
18 class AppModule(appModuleHandler.AppModule):
19
20 def _get_statusBar(self) -> NVDAObject:
21 """Retrieves Windows 11 Notepad status bar.
22 In Windows 10 and earlier, status bar can be obtained by looking at the bottom of the screen.
23 Windows 11 Notepad uses Windows 11 UI design (top-level window is labeled "DesktopWindowXamlSource",
24 therefore status bar cannot be obtained by position alone.
25 If visible, a child of the foreground window hosts the status bar elements.
26 Status bar child position must be checked whenever Notepad is updated on stable Windows 11 releases
27 as Notepad is updated through Microsoft Store as opposed to tied to specific Windows releases.
28 L{api.getStatusBar} will resort to position lookup if C{NotImplementedError} is raised.
29 """
30 # #13688: Notepad 11 uses Windows 11 user interface, therefore status bar is harder to obtain.
31 # This does not affect earlier versions.
32 notepadVersion = int(self.productVersion.split(".")[0])
33 if notepadVersion < 11:
34 raise NotImplementedError()
35 # And no, status bar is shown when editing documents.
36 # Thankfully, of all the UIA objects encountered, document window has a unique window class name.
37 if api.getFocusObject().windowClassName != "RichEditD2DPT":
38 raise NotImplementedError()
39 # Obtain status bar text across Notepad 11 releases.
40 clientObject = UIAHandler.handler.clientObject
41 condition = clientObject.createPropertyCondition(UIAHandler.UIA_AutomationIdPropertyId, "ContentTextBlock")
42 walker = clientObject.createTreeWalker(condition)
43 notepadWindow = clientObject.elementFromHandle(api.getForegroundObject().windowHandle)
44 try:
45 element = walker.getFirstChildElement(notepadWindow)
46 # Is status bar even showing?
47 element = element.buildUpdatedCache(UIAHandler.handler.baseCacheRequest)
48 except (ValueError, COMError):
49 raise NotImplementedError
50 statusBar = UIA(UIAElement=element).parent
51 return statusBar
52
[end of source/appModules/notepad.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/source/appModules/notepad.py b/source/appModules/notepad.py
--- a/source/appModules/notepad.py
+++ b/source/appModules/notepad.py
@@ -10,13 +10,38 @@
from comtypes import COMError
import appModuleHandler
import api
+import braille
+import controlTypes
+import eventHandler
+import speech
import UIAHandler
from NVDAObjects.UIA import UIA
from NVDAObjects import NVDAObject
+from typing import Callable
class AppModule(appModuleHandler.AppModule):
+ def event_UIA_elementSelected(self, obj: NVDAObject, nextHandler: Callable[[], None]):
+ # Announce currently selected tab when it changes.
+ if (
+ obj.role == controlTypes.Role.TAB
+ # this is done because 2 selection events are sent for the same object, so to prevent double speaking.
+ and not eventHandler.isPendingEvents("UIA_elementSelected")
+ and controlTypes.State.SELECTED in obj.states
+ ):
+ speech.cancelSpeech()
+ speech.speakObject(obj, reason=controlTypes.OutputReason.FOCUS)
+ braille.handler.message(
+ braille.getPropertiesBraille(
+ name=obj.name,
+ role=obj.role,
+ states=obj.states,
+ positionInfo=obj.positionInfo
+ )
+ )
+ nextHandler()
+
def _get_statusBar(self) -> NVDAObject:
"""Retrieves Windows 11 Notepad status bar.
In Windows 10 and earlier, status bar can be obtained by looking at the bottom of the screen.
| {"golden_diff": "diff --git a/source/appModules/notepad.py b/source/appModules/notepad.py\n--- a/source/appModules/notepad.py\n+++ b/source/appModules/notepad.py\n@@ -10,13 +10,38 @@\n from comtypes import COMError\n import appModuleHandler\n import api\n+import braille\n+import controlTypes\n+import eventHandler\n+import speech\n import UIAHandler\n from NVDAObjects.UIA import UIA\n from NVDAObjects import NVDAObject\n+from typing import Callable\n \n \n class AppModule(appModuleHandler.AppModule):\n \n+\tdef event_UIA_elementSelected(self, obj: NVDAObject, nextHandler: Callable[[], None]):\n+\t\t# Announce currently selected tab when it changes.\n+\t\tif (\n+\t\t\tobj.role == controlTypes.Role.TAB\n+\t\t\t# this is done because 2 selection events are sent for the same object, so to prevent double speaking.\n+\t\t\tand not eventHandler.isPendingEvents(\"UIA_elementSelected\")\n+\t\t\tand controlTypes.State.SELECTED in obj.states\n+\t\t):\n+\t\t\tspeech.cancelSpeech()\n+\t\t\tspeech.speakObject(obj, reason=controlTypes.OutputReason.FOCUS)\n+\t\t\tbraille.handler.message(\n+\t\t\t\tbraille.getPropertiesBraille(\n+\t\t\t\t\tname=obj.name,\n+\t\t\t\t\trole=obj.role,\n+\t\t\t\t\tstates=obj.states,\n+\t\t\t\t\tpositionInfo=obj.positionInfo\n+\t\t\t\t)\n+\t\t\t)\n+\t\tnextHandler()\n+\n \tdef _get_statusBar(self) -> NVDAObject:\n \t\t\"\"\"Retrieves Windows 11 Notepad status bar.\n \t\tIn Windows 10 and earlier, status bar can be obtained by looking at the bottom of the screen.\n", "issue": "in tabbed notepad when switching between tabs nvda should announce some way to differentiate between tabs\n\r\n### Steps to reproduce:\r\ndownload the new tabbed notepad.\r\nnow using the menu create a new tab\r\nnow switch between tabs with ctrl+tabe\r\n### Actual behavior:\r\nnvda announces blank edition text editor\r\n### Expected behavior:\r\nBefore writing what I want, I would like to talk about my discoveries, sorry if it doesn't make sense.\r\nI typed a different word into the first line of text on each tab.\r\nguide example 1\r\nFernando\r\nguide 2\r\nsilva\r\nusing object navigation I found the list of tabs and within this list there was each tab named with what was written in the first line of text.\r\nNow I left the first line of text empty in tab 1\r\nin the list of tabs tab 1 appears with the name of untitled\r\nfrom what i understand if the first line of text is characters this text will be the title of the tab.\r\nIf the first line of text is empty, the tab will have an untitled title.\r\nso my suggestion is:\r\nwhen switching between tabs in notepad in this example by pressing ctrl+tab nvda should announce the title of the tab which will be what is typed in the first line.\r\nBut this doesn't work if the first line of the tabs is empty, so I suggest that nvda also announce the position of the tab within the list.\r\nexample\r\nguide 1\r\nfirst line\r\nFernando\r\nguide 2\r\nfirst line\r\nempty\r\nguide 3\r\nfirst line\r\nsilva\r\nwhen switching between tabs nvda would announce:\r\nguide 1 of 3 fernando\r\nguide 2 of 3 untitled\r\nguide 3 of 3 silva\r\nTab name and tab count could also be announced by command nvda + t to read window name.\r\n### NVDA logs, crash dumps and other attachments:\r\n\r\n### System configuration\r\n#### NVDA installed/portable/running from source:\r\ninstaled\r\n#### NVDA version:\r\nnvda.exe, NVDA alpha-27590,180c9f2b\r\n#### Windows version:\r\n11 22.623.1095\r\n#### Name and version of other software in use when reproducing the issue:\r\nNotepad.exe, Microsoft.WindowsNotepad 11.2212.33.0\r\n\r\n#### Other information about your system:\r\n\r\n### Other questions\r\n#### Does the issue still occur after restarting your computer?\r\nyes\r\n#### Have you tried any other versions of NVDA? If so, please report their behaviors.\r\nno\r\n#### If NVDA add-ons are disabled, is your problem still occurring?\r\nyes\r\n#### Does the issue still occur after you run the COM Registration Fixing Tool in NVDA's tools menu?\r\nyes\n", "before_files": [{"content": "# A part of NonVisual Desktop Access (NVDA)\n# Copyright (C) 2022-2023 NV Access Limited, Joseph Lee\n# This file is covered by the GNU General Public License.\n# See the file COPYING for more details.\n\n\"\"\"App module for Windows Notepad.\nWhile this app module also covers older Notepad releases,\nthis module provides workarounds for Windows 11 Notepad.\"\"\"\n\nfrom comtypes import COMError\nimport appModuleHandler\nimport api\nimport UIAHandler\nfrom NVDAObjects.UIA import UIA\nfrom NVDAObjects import NVDAObject\n\n\nclass AppModule(appModuleHandler.AppModule):\n\n\tdef _get_statusBar(self) -> NVDAObject:\n\t\t\"\"\"Retrieves Windows 11 Notepad status bar.\n\t\tIn Windows 10 and earlier, status bar can be obtained by looking at the bottom of the screen.\n\t\tWindows 11 Notepad uses Windows 11 UI design (top-level window is labeled \"DesktopWindowXamlSource\",\n\t\ttherefore status bar cannot be obtained by position alone.\n\t\tIf visible, a child of the foreground window hosts the status bar elements.\n\t\tStatus bar child position must be checked whenever Notepad is updated on stable Windows 11 releases\n\t\tas Notepad is updated through Microsoft Store as opposed to tied to specific Windows releases.\n\t\tL{api.getStatusBar} will resort to position lookup if C{NotImplementedError} is raised.\n\t\t\"\"\"\n\t\t# #13688: Notepad 11 uses Windows 11 user interface, therefore status bar is harder to obtain.\n\t\t# This does not affect earlier versions.\n\t\tnotepadVersion = int(self.productVersion.split(\".\")[0])\n\t\tif notepadVersion < 11:\n\t\t\traise NotImplementedError()\n\t\t# And no, status bar is shown when editing documents.\n\t\t# Thankfully, of all the UIA objects encountered, document window has a unique window class name.\n\t\tif api.getFocusObject().windowClassName != \"RichEditD2DPT\":\n\t\t\traise NotImplementedError()\n\t\t# Obtain status bar text across Notepad 11 releases.\n\t\tclientObject = UIAHandler.handler.clientObject\n\t\tcondition = clientObject.createPropertyCondition(UIAHandler.UIA_AutomationIdPropertyId, \"ContentTextBlock\")\n\t\twalker = clientObject.createTreeWalker(condition)\n\t\tnotepadWindow = clientObject.elementFromHandle(api.getForegroundObject().windowHandle)\n\t\ttry:\n\t\t\telement = walker.getFirstChildElement(notepadWindow)\n\t\t\t# Is status bar even showing?\n\t\t\telement = element.buildUpdatedCache(UIAHandler.handler.baseCacheRequest)\n\t\texcept (ValueError, COMError):\n\t\t\traise NotImplementedError\n\t\tstatusBar = UIA(UIAElement=element).parent\n\t\treturn statusBar\n", "path": "source/appModules/notepad.py"}]} | 1,802 | 365 |
gh_patches_debug_11669 | rasdani/github-patches | git_diff | scikit-hep__pyhf-960 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Documentation: meaning of value for return_fitted_val=True
# Description
In this code snippet from the documentation
```python
>>> pyhf.infer.mle.fixed_poi_fit(test_poi, data, model, return_fitted_val=True)
(array([1. , 0.97224597, 0.87553894]), array([28.92218013]))
```
it isn't clear what the meaning of `array([28.92218013])` is. Is it likelihood, log likelihood, -log likelihood, -2 log likelihood?
It is the latter, but that is not clear.
Applies to
https://scikit-hep.org/pyhf/_generated/pyhf.infer.mle.fixed_poi_fit.html
or
https://scikit-hep.org/pyhf/_generated/pyhf.infer.mle.fit.html
## Is your feature request related to a problem? Please describe.
I wasn't sure, so I had to try a few things to figure it out.
### Describe the solution you'd like
Add a note to the documentation for the convention.
### Describe alternatives you've considered
banging my head against the wall.
# Relevant Issues and Pull Requests
</issue>
<code>
[start of src/pyhf/infer/mle.py]
1 """Module for Maximum Likelihood Estimation."""
2 from .. import get_backend
3 from ..exceptions import UnspecifiedPOI
4
5
6 def twice_nll(pars, data, pdf):
7 """
8 Twice the negative Log-Likelihood.
9
10 Args:
11 data (`tensor`): The data
12 pdf (~pyhf.pdf.Model): The statistical model adhering to the schema model.json
13
14 Returns:
15 Twice the negative log likelihood.
16
17 """
18 return -2 * pdf.logpdf(pars, data)
19
20
21 def fit(data, pdf, init_pars=None, par_bounds=None, **kwargs):
22 """
23 Run a unconstrained maximum likelihood fit.
24
25 Example:
26 >>> import pyhf
27 >>> pyhf.set_backend("numpy")
28 >>> model = pyhf.simplemodels.hepdata_like(
29 ... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]
30 ... )
31 >>> observations = [51, 48]
32 >>> data = pyhf.tensorlib.astensor(observations + model.config.auxdata)
33 >>> pyhf.infer.mle.fit(data, model, return_fitted_val=True)
34 (array([0. , 1.0030512 , 0.96266961]), array([24.98393521]))
35
36 Args:
37 data (`tensor`): The data
38 pdf (~pyhf.pdf.Model): The statistical model adhering to the schema model.json
39 init_pars (`list`): Values to initialize the model parameters at for the fit
40 par_bounds (`list` of `list`\s or `tuple`\s): The extrema of values the model parameters are allowed to reach in the fit
41 kwargs: Keyword arguments passed through to the optimizer API
42
43 Returns:
44 See optimizer API
45
46 """
47 _, opt = get_backend()
48 init_pars = init_pars or pdf.config.suggested_init()
49 par_bounds = par_bounds or pdf.config.suggested_bounds()
50 return opt.minimize(twice_nll, data, pdf, init_pars, par_bounds, **kwargs)
51
52
53 def fixed_poi_fit(poi_val, data, pdf, init_pars=None, par_bounds=None, **kwargs):
54 """
55 Run a maximum likelihood fit with the POI value fixed.
56
57 Example:
58 >>> import pyhf
59 >>> pyhf.set_backend("numpy")
60 >>> model = pyhf.simplemodels.hepdata_like(
61 ... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]
62 ... )
63 >>> observations = [51, 48]
64 >>> data = pyhf.tensorlib.astensor(observations + model.config.auxdata)
65 >>> test_poi = 1.0
66 >>> pyhf.infer.mle.fixed_poi_fit(test_poi, data, model, return_fitted_val=True)
67 (array([1. , 0.97224597, 0.87553894]), array([28.92218013]))
68
69 Args:
70 data: The data
71 pdf (~pyhf.pdf.Model): The statistical model adhering to the schema model.json
72 init_pars (`list`): Values to initialize the model parameters at for the fit
73 par_bounds (`list` of `list`\s or `tuple`\s): The extrema of values the model parameters are allowed to reach in the fit
74 kwargs: Keyword arguments passed through to the optimizer API
75
76 Returns:
77 See optimizer API
78
79 """
80 if pdf.config.poi_index is None:
81 raise UnspecifiedPOI(
82 'No POI is defined. A POI is required to fit with a fixed POI.'
83 )
84 _, opt = get_backend()
85 init_pars = init_pars or pdf.config.suggested_init()
86 par_bounds = par_bounds or pdf.config.suggested_bounds()
87 return opt.minimize(
88 twice_nll,
89 data,
90 pdf,
91 init_pars,
92 par_bounds,
93 [(pdf.config.poi_index, poi_val)],
94 **kwargs,
95 )
96
[end of src/pyhf/infer/mle.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/pyhf/infer/mle.py b/src/pyhf/infer/mle.py
--- a/src/pyhf/infer/mle.py
+++ b/src/pyhf/infer/mle.py
@@ -22,6 +22,10 @@
"""
Run a unconstrained maximum likelihood fit.
+ .. note::
+
+ :func:`twice_nll` is the objective function.
+
Example:
>>> import pyhf
>>> pyhf.set_backend("numpy")
@@ -54,6 +58,10 @@
"""
Run a maximum likelihood fit with the POI value fixed.
+ .. note::
+
+ :func:`twice_nll` is the objective function.
+
Example:
>>> import pyhf
>>> pyhf.set_backend("numpy")
| {"golden_diff": "diff --git a/src/pyhf/infer/mle.py b/src/pyhf/infer/mle.py\n--- a/src/pyhf/infer/mle.py\n+++ b/src/pyhf/infer/mle.py\n@@ -22,6 +22,10 @@\n \"\"\"\n Run a unconstrained maximum likelihood fit.\n \n+ .. note::\n+\n+ :func:`twice_nll` is the objective function.\n+\n Example:\n >>> import pyhf\n >>> pyhf.set_backend(\"numpy\")\n@@ -54,6 +58,10 @@\n \"\"\"\n Run a maximum likelihood fit with the POI value fixed.\n \n+ .. note::\n+\n+ :func:`twice_nll` is the objective function.\n+\n Example:\n >>> import pyhf\n >>> pyhf.set_backend(\"numpy\")\n", "issue": "Documentation: meaning of value for return_fitted_val=True\n# Description\r\n\r\nIn this code snippet from the documentation \r\n\r\n```python\r\n>>> pyhf.infer.mle.fixed_poi_fit(test_poi, data, model, return_fitted_val=True)\r\n(array([1. , 0.97224597, 0.87553894]), array([28.92218013]))\r\n```\r\n\r\nit isn't clear what the meaning of `array([28.92218013])` is. Is it likelihood, log likelihood, -log likelihood, -2 log likelihood?\r\nIt is the latter, but that is not clear.\r\n\r\nApplies to \r\n\r\nhttps://scikit-hep.org/pyhf/_generated/pyhf.infer.mle.fixed_poi_fit.html\r\nor\r\nhttps://scikit-hep.org/pyhf/_generated/pyhf.infer.mle.fit.html\r\n\r\n## Is your feature request related to a problem? Please describe.\r\n\r\nI wasn't sure, so I had to try a few things to figure it out.\r\n\r\n### Describe the solution you'd like\r\n\r\nAdd a note to the documentation for the convention.\r\n\r\n### Describe alternatives you've considered\r\n\r\nbanging my head against the wall.\r\n\r\n# Relevant Issues and Pull Requests\r\n\r\n\n", "before_files": [{"content": "\"\"\"Module for Maximum Likelihood Estimation.\"\"\"\nfrom .. import get_backend\nfrom ..exceptions import UnspecifiedPOI\n\n\ndef twice_nll(pars, data, pdf):\n \"\"\"\n Twice the negative Log-Likelihood.\n\n Args:\n data (`tensor`): The data\n pdf (~pyhf.pdf.Model): The statistical model adhering to the schema model.json\n\n Returns:\n Twice the negative log likelihood.\n\n \"\"\"\n return -2 * pdf.logpdf(pars, data)\n\n\ndef fit(data, pdf, init_pars=None, par_bounds=None, **kwargs):\n \"\"\"\n Run a unconstrained maximum likelihood fit.\n\n Example:\n >>> import pyhf\n >>> pyhf.set_backend(\"numpy\")\n >>> model = pyhf.simplemodels.hepdata_like(\n ... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]\n ... )\n >>> observations = [51, 48]\n >>> data = pyhf.tensorlib.astensor(observations + model.config.auxdata)\n >>> pyhf.infer.mle.fit(data, model, return_fitted_val=True)\n (array([0. , 1.0030512 , 0.96266961]), array([24.98393521]))\n\n Args:\n data (`tensor`): The data\n pdf (~pyhf.pdf.Model): The statistical model adhering to the schema model.json\n init_pars (`list`): Values to initialize the model parameters at for the fit\n par_bounds (`list` of `list`\\s or `tuple`\\s): The extrema of values the model parameters are allowed to reach in the fit\n kwargs: Keyword arguments passed through to the optimizer API\n\n Returns:\n See optimizer API\n\n \"\"\"\n _, opt = get_backend()\n init_pars = init_pars or pdf.config.suggested_init()\n par_bounds = par_bounds or pdf.config.suggested_bounds()\n return opt.minimize(twice_nll, data, pdf, init_pars, par_bounds, **kwargs)\n\n\ndef fixed_poi_fit(poi_val, data, pdf, init_pars=None, par_bounds=None, **kwargs):\n \"\"\"\n Run a maximum likelihood fit with the POI value fixed.\n\n Example:\n >>> import pyhf\n >>> pyhf.set_backend(\"numpy\")\n >>> model = pyhf.simplemodels.hepdata_like(\n ... signal_data=[12.0, 11.0], bkg_data=[50.0, 52.0], bkg_uncerts=[3.0, 7.0]\n ... )\n >>> observations = [51, 48]\n >>> data = pyhf.tensorlib.astensor(observations + model.config.auxdata)\n >>> test_poi = 1.0\n >>> pyhf.infer.mle.fixed_poi_fit(test_poi, data, model, return_fitted_val=True)\n (array([1. , 0.97224597, 0.87553894]), array([28.92218013]))\n\n Args:\n data: The data\n pdf (~pyhf.pdf.Model): The statistical model adhering to the schema model.json\n init_pars (`list`): Values to initialize the model parameters at for the fit\n par_bounds (`list` of `list`\\s or `tuple`\\s): The extrema of values the model parameters are allowed to reach in the fit\n kwargs: Keyword arguments passed through to the optimizer API\n\n Returns:\n See optimizer API\n\n \"\"\"\n if pdf.config.poi_index is None:\n raise UnspecifiedPOI(\n 'No POI is defined. A POI is required to fit with a fixed POI.'\n )\n _, opt = get_backend()\n init_pars = init_pars or pdf.config.suggested_init()\n par_bounds = par_bounds or pdf.config.suggested_bounds()\n return opt.minimize(\n twice_nll,\n data,\n pdf,\n init_pars,\n par_bounds,\n [(pdf.config.poi_index, poi_val)],\n **kwargs,\n )\n", "path": "src/pyhf/infer/mle.py"}]} | 1,925 | 179 |
gh_patches_debug_33534 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1673 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Feature]: paper dates are not in the complete date-file
### I propose a feature for:
Sources
### Describe your wanted feature
Hi,
right now I saw that the homepage "https://www.geoport-nwm.de/de/abfuhrtermine-geoportal.html" describes 3 ics files for paper-dates:
Please can you add them to the integrsation, because I need to add them manually now.
Kalenderdatei AltpapiertonneGER Umweltschutz GmbH | downloaden (ICS)
Kalenderdatei AltpapiertonneGollan Recycling GmbH | downloaden (ICS)
Kalenderdatei AltpapiertonneVeolia Umweltservice Nord GmbH | downloaden (ICS)
</issue>
<code>
[start of custom_components/waste_collection_schedule/waste_collection_schedule/source/geoport_nwm_de.py]
1 import datetime
2 import urllib
3
4 import requests
5 from waste_collection_schedule import Collection # type: ignore[attr-defined]
6 from waste_collection_schedule.service.ICS import ICS
7
8 TITLE = "Landkreis Nordwestmecklenburg"
9 DESCRIPTION = "Source for Landkreis Nordwestmecklenburg"
10 URL = "https://www.geoport-nwm.de"
11 TEST_CASES = {
12 "Rüting": {"district": "Rüting"},
13 "Grevenstein u. ...": {"district": "Grevenstein u. Ausbau"},
14 "Seefeld": {"district": "Seefeld/ Testorf- Steinfort"},
15 "1100l": {"district": "Groß Stieten (1.100 l Behälter)"},
16 "kl. Bünsdorf": {"district": "Klein Bünsdorf"},
17 }
18
19
20 class Source:
21 def __init__(self, district):
22 self._district = district
23 self._ics = ICS()
24
25 def fetch(self):
26 today = datetime.date.today()
27 dates = []
28 if today.month == 12:
29 # On Dec 27 2022, the 2022 schedule was no longer available for test case "Seefeld", all others worked
30 try:
31 dates = self.fetch_year(today.year)
32 except Exception:
33 pass
34 try:
35 dates.extend(self.fetch_year(today.year + 1))
36 except Exception:
37 pass
38 else:
39 dates = self.fetch_year(today.year)
40
41 entries = []
42 for d in dates:
43 entries.append(Collection(d[0], d[1]))
44 return entries
45
46 def fetch_year(self, year):
47 arg = convert_to_arg(self._district)
48 r = requests.get(
49 f"https://www.geoport-nwm.de/nwm-download/Abfuhrtermine/ICS/{year}/{arg}.ics"
50 )
51 r.raise_for_status()
52 return self._ics.convert(r.text)
53
54
55 def convert_to_arg(district):
56 district = district.replace("(1.100 l Behälter)", "1100_l")
57 district = district.replace("ü", "ue")
58 district = district.replace("ö", "oe")
59 district = district.replace("ä", "ae")
60 district = district.replace("ß", "ss")
61 district = district.replace("/", "")
62 district = district.replace("- ", "-")
63 district = district.replace(".", "")
64 district = district.replace(" ", "_")
65 arg = urllib.parse.quote("Ortsteil_" + district)
66 return arg
67
[end of custom_components/waste_collection_schedule/waste_collection_schedule/source/geoport_nwm_de.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/geoport_nwm_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/geoport_nwm_de.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/geoport_nwm_de.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/geoport_nwm_de.py
@@ -16,6 +16,8 @@
"kl. Bünsdorf": {"district": "Klein Bünsdorf"},
}
+API_URL = "https://www.geoport-nwm.de/nwm-download/Abfuhrtermine/ICS/{year}/{arg}.ics"
+
class Source:
def __init__(self, district):
@@ -45,22 +47,35 @@
def fetch_year(self, year):
arg = convert_to_arg(self._district)
- r = requests.get(
- f"https://www.geoport-nwm.de/nwm-download/Abfuhrtermine/ICS/{year}/{arg}.ics"
- )
+ r = requests.get(API_URL.format(year=year, arg=arg))
r.raise_for_status()
- return self._ics.convert(r.text)
+ entries = self._ics.convert(r.text)
+ for prefix in (
+ "Schadstoffmobil",
+ "Papiertonne_GER",
+ "Papiertonne_Gollan",
+ "Papiertonne_Veolia",
+ ):
+ try:
+ r = requests.get(API_URL.format(year=year, arg=f"{prefix}_{arg}"))
+ r.raise_for_status()
+ new_entries = self._ics.convert(r.text)
+ entries.extend(new_entries)
+ except (ValueError, requests.exceptions.HTTPError):
+ pass
+ return entries
-def convert_to_arg(district):
+def convert_to_arg(district, prefix=""):
district = district.replace("(1.100 l Behälter)", "1100_l")
district = district.replace("ü", "ue")
district = district.replace("ö", "oe")
district = district.replace("ä", "ae")
district = district.replace("ß", "ss")
district = district.replace("/", "")
- district = district.replace("- ", "-")
+ # district = district.replace("- ", "-") failed with Seefeld/ Testorf- Steinfort
district = district.replace(".", "")
district = district.replace(" ", "_")
- arg = urllib.parse.quote("Ortsteil_" + district)
+ prefix = prefix + "_" if prefix else ""
+ arg = urllib.parse.quote(f"{prefix}Ortsteil_{district}")
return arg
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/geoport_nwm_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/geoport_nwm_de.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/geoport_nwm_de.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/geoport_nwm_de.py\n@@ -16,6 +16,8 @@\n \"kl. B\u00fcnsdorf\": {\"district\": \"Klein B\u00fcnsdorf\"},\n }\n \n+API_URL = \"https://www.geoport-nwm.de/nwm-download/Abfuhrtermine/ICS/{year}/{arg}.ics\"\n+\n \n class Source:\n def __init__(self, district):\n@@ -45,22 +47,35 @@\n \n def fetch_year(self, year):\n arg = convert_to_arg(self._district)\n- r = requests.get(\n- f\"https://www.geoport-nwm.de/nwm-download/Abfuhrtermine/ICS/{year}/{arg}.ics\"\n- )\n+ r = requests.get(API_URL.format(year=year, arg=arg))\n r.raise_for_status()\n- return self._ics.convert(r.text)\n+ entries = self._ics.convert(r.text)\n+ for prefix in (\n+ \"Schadstoffmobil\",\n+ \"Papiertonne_GER\",\n+ \"Papiertonne_Gollan\",\n+ \"Papiertonne_Veolia\",\n+ ):\n+ try:\n+ r = requests.get(API_URL.format(year=year, arg=f\"{prefix}_{arg}\"))\n+ r.raise_for_status()\n+ new_entries = self._ics.convert(r.text)\n+ entries.extend(new_entries)\n+ except (ValueError, requests.exceptions.HTTPError):\n+ pass\n+ return entries\n \n \n-def convert_to_arg(district):\n+def convert_to_arg(district, prefix=\"\"):\n district = district.replace(\"(1.100 l Beh\u00e4lter)\", \"1100_l\")\n district = district.replace(\"\u00fc\", \"ue\")\n district = district.replace(\"\u00f6\", \"oe\")\n district = district.replace(\"\u00e4\", \"ae\")\n district = district.replace(\"\u00df\", \"ss\")\n district = district.replace(\"/\", \"\")\n- district = district.replace(\"- \", \"-\")\n+ # district = district.replace(\"- \", \"-\") failed with Seefeld/ Testorf- Steinfort\n district = district.replace(\".\", \"\")\n district = district.replace(\" \", \"_\")\n- arg = urllib.parse.quote(\"Ortsteil_\" + district)\n+ prefix = prefix + \"_\" if prefix else \"\"\n+ arg = urllib.parse.quote(f\"{prefix}Ortsteil_{district}\")\n return arg\n", "issue": "[Feature]: paper dates are not in the complete date-file\n### I propose a feature for:\r\n\r\nSources\r\n\r\n### Describe your wanted feature\r\n\r\nHi,\r\nright now I saw that the homepage \"https://www.geoport-nwm.de/de/abfuhrtermine-geoportal.html\" describes 3 ics files for paper-dates: \r\nPlease can you add them to the integrsation, because I need to add them manually now.\r\n\r\nKalenderdatei AltpapiertonneGER Umweltschutz GmbH | downloaden (ICS)\r\nKalenderdatei AltpapiertonneGollan Recycling GmbH | downloaden (ICS)\r\nKalenderdatei AltpapiertonneVeolia Umweltservice Nord GmbH | downloaden (ICS)\r\n\n", "before_files": [{"content": "import datetime\nimport urllib\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\n\nTITLE = \"Landkreis Nordwestmecklenburg\"\nDESCRIPTION = \"Source for Landkreis Nordwestmecklenburg\"\nURL = \"https://www.geoport-nwm.de\"\nTEST_CASES = {\n \"R\u00fcting\": {\"district\": \"R\u00fcting\"},\n \"Grevenstein u. ...\": {\"district\": \"Grevenstein u. Ausbau\"},\n \"Seefeld\": {\"district\": \"Seefeld/ Testorf- Steinfort\"},\n \"1100l\": {\"district\": \"Gro\u00df Stieten (1.100 l Beh\u00e4lter)\"},\n \"kl. B\u00fcnsdorf\": {\"district\": \"Klein B\u00fcnsdorf\"},\n}\n\n\nclass Source:\n def __init__(self, district):\n self._district = district\n self._ics = ICS()\n\n def fetch(self):\n today = datetime.date.today()\n dates = []\n if today.month == 12:\n # On Dec 27 2022, the 2022 schedule was no longer available for test case \"Seefeld\", all others worked\n try:\n dates = self.fetch_year(today.year)\n except Exception:\n pass\n try:\n dates.extend(self.fetch_year(today.year + 1))\n except Exception:\n pass\n else:\n dates = self.fetch_year(today.year)\n\n entries = []\n for d in dates:\n entries.append(Collection(d[0], d[1]))\n return entries\n\n def fetch_year(self, year):\n arg = convert_to_arg(self._district)\n r = requests.get(\n f\"https://www.geoport-nwm.de/nwm-download/Abfuhrtermine/ICS/{year}/{arg}.ics\"\n )\n r.raise_for_status()\n return self._ics.convert(r.text)\n\n\ndef convert_to_arg(district):\n district = district.replace(\"(1.100 l Beh\u00e4lter)\", \"1100_l\")\n district = district.replace(\"\u00fc\", \"ue\")\n district = district.replace(\"\u00f6\", \"oe\")\n district = district.replace(\"\u00e4\", \"ae\")\n district = district.replace(\"\u00df\", \"ss\")\n district = district.replace(\"/\", \"\")\n district = district.replace(\"- \", \"-\")\n district = district.replace(\".\", \"\")\n district = district.replace(\" \", \"_\")\n arg = urllib.parse.quote(\"Ortsteil_\" + district)\n return arg\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/geoport_nwm_de.py"}]} | 1,394 | 591 |
gh_patches_debug_19205 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-2205 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Move non-core dependencies to dedicated groups
@la4de has made a very useful playground for Strawberry, available (for now) here -> https://la4de.github.io/strawberry-playground/
Unfortunately some of the default dependencies aren't uploaded as wheels (see https://github.com/la4de/strawberry-playground/issues/1).
Maybe it could time to move some of these deps to specific groups, we definitely don't need python-multipart installed by default :)
Here's a list of proposed groups based on dependencies installed when doing `pip install strawberry-graphql`:
**Default**:
- cached-property
- sentinel
- typing-extensions
- graphql-core
- python-dateutil (I think we need this because of compatibility with python 3.7)
**CLI**:
- click
- pygments
**All web frameworks**:
- python-multipart
</issue>
<code>
[start of strawberry/utils/debug.py]
1 import datetime
2 import json
3 from json import JSONEncoder
4 from typing import Any, Dict, Optional
5
6 from pygments import highlight, lexers
7 from pygments.formatters import Terminal256Formatter
8
9 from .graphql_lexer import GraphQLLexer
10
11
12 class StrawberryJSONEncoder(JSONEncoder):
13 def default(self, o: Any) -> Any:
14 return repr(o)
15
16
17 def pretty_print_graphql_operation(
18 operation_name: Optional[str], query: str, variables: Optional[Dict["str", Any]]
19 ):
20 """Pretty print a GraphQL operation using pygments.
21
22 Won't print introspection operation to prevent noise in the output."""
23
24 if operation_name == "IntrospectionQuery":
25 return
26
27 now = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
28
29 print(f"[{now}]: {operation_name or 'No operation name'}")
30 print(highlight(query, GraphQLLexer(), Terminal256Formatter()))
31
32 if variables:
33 variables_json = json.dumps(variables, indent=4, cls=StrawberryJSONEncoder)
34
35 print(highlight(variables_json, lexers.JsonLexer(), Terminal256Formatter()))
36
[end of strawberry/utils/debug.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/strawberry/utils/debug.py b/strawberry/utils/debug.py
--- a/strawberry/utils/debug.py
+++ b/strawberry/utils/debug.py
@@ -3,11 +3,6 @@
from json import JSONEncoder
from typing import Any, Dict, Optional
-from pygments import highlight, lexers
-from pygments.formatters import Terminal256Formatter
-
-from .graphql_lexer import GraphQLLexer
-
class StrawberryJSONEncoder(JSONEncoder):
def default(self, o: Any) -> Any:
@@ -21,6 +16,17 @@
Won't print introspection operation to prevent noise in the output."""
+ try:
+ from pygments import highlight, lexers
+ from pygments.formatters import Terminal256Formatter
+ except ImportError as e:
+ raise ImportError(
+ "pygments is not installed but is required for debug output, install it "
+ "directly or run `pip install strawberry-graphql[debug-server]`"
+ ) from e
+
+ from .graphql_lexer import GraphQLLexer
+
if operation_name == "IntrospectionQuery":
return
| {"golden_diff": "diff --git a/strawberry/utils/debug.py b/strawberry/utils/debug.py\n--- a/strawberry/utils/debug.py\n+++ b/strawberry/utils/debug.py\n@@ -3,11 +3,6 @@\n from json import JSONEncoder\n from typing import Any, Dict, Optional\n \n-from pygments import highlight, lexers\n-from pygments.formatters import Terminal256Formatter\n-\n-from .graphql_lexer import GraphQLLexer\n-\n \n class StrawberryJSONEncoder(JSONEncoder):\n def default(self, o: Any) -> Any:\n@@ -21,6 +16,17 @@\n \n Won't print introspection operation to prevent noise in the output.\"\"\"\n \n+ try:\n+ from pygments import highlight, lexers\n+ from pygments.formatters import Terminal256Formatter\n+ except ImportError as e:\n+ raise ImportError(\n+ \"pygments is not installed but is required for debug output, install it \"\n+ \"directly or run `pip install strawberry-graphql[debug-server]`\"\n+ ) from e\n+\n+ from .graphql_lexer import GraphQLLexer\n+\n if operation_name == \"IntrospectionQuery\":\n return\n", "issue": "Move non-core dependencies to dedicated groups\n@la4de has made a very useful playground for Strawberry, available (for now) here -> https://la4de.github.io/strawberry-playground/\r\n\r\nUnfortunately some of the default dependencies aren't uploaded as wheels (see https://github.com/la4de/strawberry-playground/issues/1).\r\n\r\nMaybe it could time to move some of these deps to specific groups, we definitely don't need python-multipart installed by default :)\r\n\r\nHere's a list of proposed groups based on dependencies installed when doing `pip install strawberry-graphql`:\r\n\r\n**Default**:\r\n \r\n- cached-property\r\n- sentinel\r\n- typing-extensions\r\n- graphql-core\r\n- python-dateutil (I think we need this because of compatibility with python 3.7)\r\n\r\n**CLI**:\r\n\r\n- click\r\n- pygments\r\n\r\n**All web frameworks**:\r\n\r\n- python-multipart\r\n\r\n\r\n\n", "before_files": [{"content": "import datetime\nimport json\nfrom json import JSONEncoder\nfrom typing import Any, Dict, Optional\n\nfrom pygments import highlight, lexers\nfrom pygments.formatters import Terminal256Formatter\n\nfrom .graphql_lexer import GraphQLLexer\n\n\nclass StrawberryJSONEncoder(JSONEncoder):\n def default(self, o: Any) -> Any:\n return repr(o)\n\n\ndef pretty_print_graphql_operation(\n operation_name: Optional[str], query: str, variables: Optional[Dict[\"str\", Any]]\n):\n \"\"\"Pretty print a GraphQL operation using pygments.\n\n Won't print introspection operation to prevent noise in the output.\"\"\"\n\n if operation_name == \"IntrospectionQuery\":\n return\n\n now = datetime.datetime.now().strftime(\"%Y-%m-%d %H:%M:%S\")\n\n print(f\"[{now}]: {operation_name or 'No operation name'}\")\n print(highlight(query, GraphQLLexer(), Terminal256Formatter()))\n\n if variables:\n variables_json = json.dumps(variables, indent=4, cls=StrawberryJSONEncoder)\n\n print(highlight(variables_json, lexers.JsonLexer(), Terminal256Formatter()))\n", "path": "strawberry/utils/debug.py"}]} | 1,037 | 259 |
gh_patches_debug_478 | rasdani/github-patches | git_diff | fossasia__open-event-server-7579 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
test_multiple_heads is not raising the expected error
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
On having multiple heads, the travis build fails with error -
```
error: Hooks handler process 'dredd-hooks-python ./tests/hook_main.py' exited with status: 1
warn: Hook handling timed out.
error: Hooks handler process 'dredd-hooks-python ./tests/hook_main.py' exited with status: 1
info: Backend server process exited
The command "dredd" failed and exited with 1 during .
```
It should raise error as expected in - https://github.com/fossasia/open-event-server/blob/development/scripts/test_multiple_heads.sh
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
Expected error should be raised - `Error: Multiple Migration Heads`
<!-- If applicable, add stacktrace to help explain your problem. -->
**Additional context**
<!-- Add any other context about the problem here. -->
On it
</issue>
<code>
[start of migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py]
1 """empty message
2
3 Revision ID: 3b29ea38f0cb
4 Revises: 2d0760003a8a
5 Create Date: 2021-01-07 05:19:49.749923
6
7 """
8
9 from alembic import op
10 import sqlalchemy as sa
11 import sqlalchemy_utils
12
13
14 # revision identifiers, used by Alembic.
15 revision = '3b29ea38f0cb'
16 down_revision = '2d0760003a8a'
17
18
19 def upgrade():
20 # ### commands auto generated by Alembic - please adjust! ###
21 op.add_column('speaker', sa.Column('rank', sa.Integer(), nullable=False, server_default='0'))
22 # ### end Alembic commands ###
23
24
25 def downgrade():
26 # ### commands auto generated by Alembic - please adjust! ###
27 op.drop_column('speaker', 'rank')
28 # ### end Alembic commands ###
29
[end of migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py b/migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py
--- a/migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py
+++ b/migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py
@@ -13,7 +13,7 @@
# revision identifiers, used by Alembic.
revision = '3b29ea38f0cb'
-down_revision = '2d0760003a8a'
+down_revision = '4e61d4df3516'
def upgrade():
| {"golden_diff": "diff --git a/migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py b/migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py\n--- a/migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py\n+++ b/migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py\n@@ -13,7 +13,7 @@\n \n # revision identifiers, used by Alembic.\n revision = '3b29ea38f0cb'\n-down_revision = '2d0760003a8a'\n+down_revision = '4e61d4df3516'\n \n \n def upgrade():\n", "issue": "test_multiple_heads is not raising the expected error\n**Describe the bug**\r\n<!-- A clear and concise description of what the bug is. -->\r\nOn having multiple heads, the travis build fails with error -\r\n```\r\nerror: Hooks handler process 'dredd-hooks-python ./tests/hook_main.py' exited with status: 1\r\nwarn: Hook handling timed out.\r\nerror: Hooks handler process 'dredd-hooks-python ./tests/hook_main.py' exited with status: 1\r\ninfo: Backend server process exited\r\nThe command \"dredd\" failed and exited with 1 during .\r\n```\r\nIt should raise error as expected in - https://github.com/fossasia/open-event-server/blob/development/scripts/test_multiple_heads.sh\r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nExpected error should be raised - `Error: Multiple Migration Heads`\r\n\r\n<!-- If applicable, add stacktrace to help explain your problem. -->\r\n\r\n\r\n**Additional context**\r\n<!-- Add any other context about the problem here. -->\r\nOn it\n", "before_files": [{"content": "\"\"\"empty message\n\nRevision ID: 3b29ea38f0cb\nRevises: 2d0760003a8a\nCreate Date: 2021-01-07 05:19:49.749923\n\n\"\"\"\n\nfrom alembic import op\nimport sqlalchemy as sa\nimport sqlalchemy_utils\n\n\n# revision identifiers, used by Alembic.\nrevision = '3b29ea38f0cb'\ndown_revision = '2d0760003a8a'\n\n\ndef upgrade():\n # ### commands auto generated by Alembic - please adjust! ###\n op.add_column('speaker', sa.Column('rank', sa.Integer(), nullable=False, server_default='0'))\n # ### end Alembic commands ###\n\n\ndef downgrade():\n # ### commands auto generated by Alembic - please adjust! ###\n op.drop_column('speaker', 'rank')\n # ### end Alembic commands ###\n", "path": "migrations/versions/rev-2021-01-07-05:19:49-3b29ea38f0cb_.py"}]} | 1,087 | 244 |
gh_patches_debug_6175 | rasdani/github-patches | git_diff | google__fuzzbench-148 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[reports] Data.csv.gz don't need to contain id column
It has these columns because data.csv.gz contains data from a join query of snapshots on trials.
time_started and time_ended are from trials but they are probably not useful for the analysis people want to do so they just take up space at this point.
</issue>
<code>
[start of analysis/queries.py]
1 # Copyright 2020 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Database queries for acquiring experiment data."""
15
16 import pandas as pd
17 import sqlalchemy
18
19 from database import models
20 from database import utils as db_utils
21
22
23 def get_experiment_data(experiment_names):
24 """Get measurements (such as coverage) on experiments from the database."""
25 snapshots_query = db_utils.query(models.Snapshot).options(
26 sqlalchemy.orm.joinedload('trial')).filter(
27 models.Snapshot.trial.has(
28 models.Trial.experiment.in_(experiment_names)))
29 return pd.read_sql_query(snapshots_query.statement, db_utils.engine)
30
[end of analysis/queries.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/analysis/queries.py b/analysis/queries.py
--- a/analysis/queries.py
+++ b/analysis/queries.py
@@ -26,4 +26,8 @@
sqlalchemy.orm.joinedload('trial')).filter(
models.Snapshot.trial.has(
models.Trial.experiment.in_(experiment_names)))
- return pd.read_sql_query(snapshots_query.statement, db_utils.engine)
+
+ # id must be loaded to do the join but get rid of it now since
+ # trial_id provides the same info.
+ data = pd.read_sql_query(snapshots_query.statement, db_utils.engine)
+ return data.drop(columns=['id'])
| {"golden_diff": "diff --git a/analysis/queries.py b/analysis/queries.py\n--- a/analysis/queries.py\n+++ b/analysis/queries.py\n@@ -26,4 +26,8 @@\n sqlalchemy.orm.joinedload('trial')).filter(\n models.Snapshot.trial.has(\n models.Trial.experiment.in_(experiment_names)))\n- return pd.read_sql_query(snapshots_query.statement, db_utils.engine)\n+\n+ # id must be loaded to do the join but get rid of it now since\n+ # trial_id provides the same info.\n+ data = pd.read_sql_query(snapshots_query.statement, db_utils.engine)\n+ return data.drop(columns=['id'])\n", "issue": "[reports] Data.csv.gz don't need to contain id column\nIt has these columns because data.csv.gz contains data from a join query of snapshots on trials.\r\ntime_started and time_ended are from trials but they are probably not useful for the analysis people want to do so they just take up space at this point.\n", "before_files": [{"content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Database queries for acquiring experiment data.\"\"\"\n\nimport pandas as pd\nimport sqlalchemy\n\nfrom database import models\nfrom database import utils as db_utils\n\n\ndef get_experiment_data(experiment_names):\n \"\"\"Get measurements (such as coverage) on experiments from the database.\"\"\"\n snapshots_query = db_utils.query(models.Snapshot).options(\n sqlalchemy.orm.joinedload('trial')).filter(\n models.Snapshot.trial.has(\n models.Trial.experiment.in_(experiment_names)))\n return pd.read_sql_query(snapshots_query.statement, db_utils.engine)\n", "path": "analysis/queries.py"}]} | 896 | 151 |
gh_patches_debug_24520 | rasdani/github-patches | git_diff | scikit-image__scikit-image-4945 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
measure.label speed
This is triggered by [this Stackoverflow question](https://stackoverflow.com/questions/62804953/performance-differences-between-bwlabeln-on-matlab-and-skimage-measure-label-on/62842582#62842582). When I have large binary arrays to label and performance issues, I usually resort to calling the ndimage version. Could we imagine having a `fast_binary` flag which would call the ndimage function? A factor of 3-4 (from a few tests I just ran) is not bad...
</issue>
<code>
[start of skimage/measure/_label.py]
1 from ._ccomp import label_cython as clabel
2
3
4 def label(input, background=None, return_num=False, connectivity=None):
5 r"""Label connected regions of an integer array.
6
7 Two pixels are connected when they are neighbors and have the same value.
8 In 2D, they can be neighbors either in a 1- or 2-connected sense.
9 The value refers to the maximum number of orthogonal hops to consider a
10 pixel/voxel a neighbor::
11
12 1-connectivity 2-connectivity diagonal connection close-up
13
14 [ ] [ ] [ ] [ ] [ ]
15 | \ | / | <- hop 2
16 [ ]--[x]--[ ] [ ]--[x]--[ ] [x]--[ ]
17 | / | \ hop 1
18 [ ] [ ] [ ] [ ]
19
20 Parameters
21 ----------
22 input : ndarray of dtype int
23 Image to label.
24 background : int, optional
25 Consider all pixels with this value as background pixels, and label
26 them as 0. By default, 0-valued pixels are considered as background
27 pixels.
28 return_num : bool, optional
29 Whether to return the number of assigned labels.
30 connectivity : int, optional
31 Maximum number of orthogonal hops to consider a pixel/voxel
32 as a neighbor.
33 Accepted values are ranging from 1 to input.ndim. If ``None``, a full
34 connectivity of ``input.ndim`` is used.
35
36 Returns
37 -------
38 labels : ndarray of dtype int
39 Labeled array, where all connected regions are assigned the
40 same integer value.
41 num : int, optional
42 Number of labels, which equals the maximum label index and is only
43 returned if return_num is `True`.
44
45 See Also
46 --------
47 regionprops
48
49 References
50 ----------
51 .. [1] Christophe Fiorio and Jens Gustedt, "Two linear time Union-Find
52 strategies for image processing", Theoretical Computer Science
53 154 (1996), pp. 165-181.
54 .. [2] Kensheng Wu, Ekow Otoo and Arie Shoshani, "Optimizing connected
55 component labeling algorithms", Paper LBNL-56864, 2005,
56 Lawrence Berkeley National Laboratory (University of California),
57 http://repositories.cdlib.org/lbnl/LBNL-56864
58
59 Examples
60 --------
61 >>> import numpy as np
62 >>> x = np.eye(3).astype(int)
63 >>> print(x)
64 [[1 0 0]
65 [0 1 0]
66 [0 0 1]]
67 >>> print(label(x, connectivity=1))
68 [[1 0 0]
69 [0 2 0]
70 [0 0 3]]
71 >>> print(label(x, connectivity=2))
72 [[1 0 0]
73 [0 1 0]
74 [0 0 1]]
75 >>> print(label(x, background=-1))
76 [[1 2 2]
77 [2 1 2]
78 [2 2 1]]
79 >>> x = np.array([[1, 0, 0],
80 ... [1, 1, 5],
81 ... [0, 0, 0]])
82 >>> print(label(x))
83 [[1 0 0]
84 [1 1 2]
85 [0 0 0]]
86 """
87 return clabel(input, background, return_num, connectivity)
88
[end of skimage/measure/_label.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/skimage/measure/_label.py b/skimage/measure/_label.py
--- a/skimage/measure/_label.py
+++ b/skimage/measure/_label.py
@@ -1,6 +1,34 @@
+from scipy import ndimage
from ._ccomp import label_cython as clabel
+def _label_bool(image, background=None, return_num=False, connectivity=None):
+ """Faster implementation of clabel for boolean input.
+
+ See context: https://github.com/scikit-image/scikit-image/issues/4833
+ """
+ from ..morphology._util import _resolve_neighborhood
+ if background == 1:
+ image = ~image
+
+ if connectivity is None:
+ connectivity = image.ndim
+
+ if not 1 <= connectivity <= image.ndim:
+ raise ValueError(
+ f'Connectivity for {image.ndim}D image should '
+ f'be in [1, ..., {image.ndim}]. Got {connectivity}.'
+ )
+
+ selem = _resolve_neighborhood(None, connectivity, image.ndim)
+ result = ndimage.label(image, structure=selem)
+
+ if return_num:
+ return result
+ else:
+ return result[0]
+
+
def label(input, background=None, return_num=False, connectivity=None):
r"""Label connected regions of an integer array.
@@ -84,4 +112,8 @@
[1 1 2]
[0 0 0]]
"""
- return clabel(input, background, return_num, connectivity)
+ if input.dtype == bool:
+ return _label_bool(input, background=background,
+ return_num=return_num, connectivity=connectivity)
+ else:
+ return clabel(input, background, return_num, connectivity)
| {"golden_diff": "diff --git a/skimage/measure/_label.py b/skimage/measure/_label.py\n--- a/skimage/measure/_label.py\n+++ b/skimage/measure/_label.py\n@@ -1,6 +1,34 @@\n+from scipy import ndimage\n from ._ccomp import label_cython as clabel\n \n \n+def _label_bool(image, background=None, return_num=False, connectivity=None):\n+ \"\"\"Faster implementation of clabel for boolean input.\n+\n+ See context: https://github.com/scikit-image/scikit-image/issues/4833\n+ \"\"\"\n+ from ..morphology._util import _resolve_neighborhood\n+ if background == 1:\n+ image = ~image\n+\n+ if connectivity is None:\n+ connectivity = image.ndim\n+\n+ if not 1 <= connectivity <= image.ndim:\n+ raise ValueError(\n+ f'Connectivity for {image.ndim}D image should '\n+ f'be in [1, ..., {image.ndim}]. Got {connectivity}.'\n+ )\n+\n+ selem = _resolve_neighborhood(None, connectivity, image.ndim)\n+ result = ndimage.label(image, structure=selem)\n+\n+ if return_num:\n+ return result\n+ else:\n+ return result[0]\n+\n+\n def label(input, background=None, return_num=False, connectivity=None):\n r\"\"\"Label connected regions of an integer array.\n \n@@ -84,4 +112,8 @@\n [1 1 2]\n [0 0 0]]\n \"\"\"\n- return clabel(input, background, return_num, connectivity)\n+ if input.dtype == bool:\n+ return _label_bool(input, background=background,\n+ return_num=return_num, connectivity=connectivity)\n+ else:\n+ return clabel(input, background, return_num, connectivity)\n", "issue": "measure.label speed\nThis is triggered by [this Stackoverflow question](https://stackoverflow.com/questions/62804953/performance-differences-between-bwlabeln-on-matlab-and-skimage-measure-label-on/62842582#62842582). When I have large binary arrays to label and performance issues, I usually resort to calling the ndimage version. Could we imagine having a `fast_binary` flag which would call the ndimage function? A factor of 3-4 (from a few tests I just ran) is not bad...\n", "before_files": [{"content": "from ._ccomp import label_cython as clabel\n\n\ndef label(input, background=None, return_num=False, connectivity=None):\n r\"\"\"Label connected regions of an integer array.\n\n Two pixels are connected when they are neighbors and have the same value.\n In 2D, they can be neighbors either in a 1- or 2-connected sense.\n The value refers to the maximum number of orthogonal hops to consider a\n pixel/voxel a neighbor::\n\n 1-connectivity 2-connectivity diagonal connection close-up\n\n [ ] [ ] [ ] [ ] [ ]\n | \\ | / | <- hop 2\n [ ]--[x]--[ ] [ ]--[x]--[ ] [x]--[ ]\n | / | \\ hop 1\n [ ] [ ] [ ] [ ]\n\n Parameters\n ----------\n input : ndarray of dtype int\n Image to label.\n background : int, optional\n Consider all pixels with this value as background pixels, and label\n them as 0. By default, 0-valued pixels are considered as background\n pixels.\n return_num : bool, optional\n Whether to return the number of assigned labels.\n connectivity : int, optional\n Maximum number of orthogonal hops to consider a pixel/voxel\n as a neighbor.\n Accepted values are ranging from 1 to input.ndim. If ``None``, a full\n connectivity of ``input.ndim`` is used.\n\n Returns\n -------\n labels : ndarray of dtype int\n Labeled array, where all connected regions are assigned the\n same integer value.\n num : int, optional\n Number of labels, which equals the maximum label index and is only\n returned if return_num is `True`.\n\n See Also\n --------\n regionprops\n\n References\n ----------\n .. [1] Christophe Fiorio and Jens Gustedt, \"Two linear time Union-Find\n strategies for image processing\", Theoretical Computer Science\n 154 (1996), pp. 165-181.\n .. [2] Kensheng Wu, Ekow Otoo and Arie Shoshani, \"Optimizing connected\n component labeling algorithms\", Paper LBNL-56864, 2005,\n Lawrence Berkeley National Laboratory (University of California),\n http://repositories.cdlib.org/lbnl/LBNL-56864\n\n Examples\n --------\n >>> import numpy as np\n >>> x = np.eye(3).astype(int)\n >>> print(x)\n [[1 0 0]\n [0 1 0]\n [0 0 1]]\n >>> print(label(x, connectivity=1))\n [[1 0 0]\n [0 2 0]\n [0 0 3]]\n >>> print(label(x, connectivity=2))\n [[1 0 0]\n [0 1 0]\n [0 0 1]]\n >>> print(label(x, background=-1))\n [[1 2 2]\n [2 1 2]\n [2 2 1]]\n >>> x = np.array([[1, 0, 0],\n ... [1, 1, 5],\n ... [0, 0, 0]])\n >>> print(label(x))\n [[1 0 0]\n [1 1 2]\n [0 0 0]]\n \"\"\"\n return clabel(input, background, return_num, connectivity)\n", "path": "skimage/measure/_label.py"}]} | 1,619 | 411 |
gh_patches_debug_26316 | rasdani/github-patches | git_diff | scikit-hep__pyhf-424 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pin optional dependencies at the minor release level
# Description
To avoid having our prior releases break like `v0.0.15` did in Issue #396 it would be good to pin our optional dependencies at the minor release level for each release. This should safeguard us from old releases getting broken by API changes in the dependencies that we use as applications.
To be clear, I don't think we should limit the dependencies in `install_requires` beyond placing _lower_ bounds, but I do think that we should now be placing upper bounds on all of the optional dependencies as we are really more using those as **applications** in our library.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 from setuptools import setup, find_packages
4 from os import path
5 import sys
6
7 this_directory = path.abspath(path.dirname(__file__))
8 if sys.version_info.major < 3:
9 from io import open
10 with open(path.join(this_directory, 'README.md'), encoding='utf-8') as readme_md:
11 long_description = readme_md.read()
12
13 extras_require = {
14 'tensorflow': [
15 'tensorflow>=1.12.0',
16 'tensorflow-probability>=0.5.0',
17 'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass
18 'setuptools<=39.1.0',
19 ],
20 'torch': ['torch>=1.0.0'],
21 'mxnet': [
22 'mxnet>=1.0.0',
23 'requests<2.19.0,>=2.18.4',
24 'numpy<1.15.0,>=1.8.2',
25 'requests<2.19.0,>=2.18.4',
26 ],
27 # 'dask': [
28 # 'dask[array]'
29 # ],
30 'xmlimport': ['uproot'],
31 'minuit': ['iminuit'],
32 'develop': [
33 'pyflakes',
34 'pytest<4.0.0,>=3.5.1',
35 'pytest-cov>=2.5.1',
36 'pytest-mock',
37 'pytest-benchmark[histogram]',
38 'pytest-console-scripts',
39 'python-coveralls',
40 'coverage>=4.0', # coveralls
41 'matplotlib',
42 'jupyter',
43 'nbdime',
44 'uproot>=3.3.0',
45 'papermill>=0.16.0',
46 'graphviz',
47 'bumpversion',
48 'sphinx',
49 'sphinxcontrib-bibtex',
50 'sphinxcontrib-napoleon',
51 'sphinx_rtd_theme',
52 'nbsphinx',
53 'sphinx-issues',
54 'm2r',
55 'jsonpatch',
56 'ipython<7', # jupyter_console and ipython clash in dependency requirement -- downgrade ipython for now
57 'pre-commit',
58 'black;python_version>="3.6"', # Black is Python3 only
59 'twine',
60 ],
61 }
62 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
63
64 setup(
65 name='pyhf',
66 version='0.0.16',
67 description='(partial) pure python histfactory implementation',
68 long_description=long_description,
69 long_description_content_type='text/markdown',
70 url='https://github.com/diana-hep/pyhf',
71 author='Lukas Heinrich',
72 author_email='[email protected]',
73 license='Apache',
74 keywords='physics fitting numpy scipy tensorflow pytorch mxnet dask',
75 classifiers=[
76 "Programming Language :: Python :: 2",
77 "Programming Language :: Python :: 2.7",
78 "Programming Language :: Python :: 3",
79 "Programming Language :: Python :: 3.6",
80 "Programming Language :: Python :: 3.7",
81 ],
82 packages=find_packages(),
83 include_package_data=True,
84 python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*",
85 install_requires=[
86 'scipy', # requires numpy, which is required by pyhf, tensorflow, and mxnet
87 'click>=6.0', # for console scripts,
88 'tqdm', # for readxml
89 'six', # for modifiers
90 'jsonschema>=v3.0.0a2', # for utils, alpha-release for draft 6
91 'jsonpatch',
92 ],
93 extras_require=extras_require,
94 entry_points={'console_scripts': ['pyhf=pyhf.commandline:pyhf']},
95 dependency_links=[],
96 )
97
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -12,18 +12,13 @@
extras_require = {
'tensorflow': [
- 'tensorflow>=1.12.0',
- 'tensorflow-probability>=0.5.0',
+ 'tensorflow~=1.13',
+ 'tensorflow-probability~=0.5',
'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass
'setuptools<=39.1.0',
],
- 'torch': ['torch>=1.0.0'],
- 'mxnet': [
- 'mxnet>=1.0.0',
- 'requests<2.19.0,>=2.18.4',
- 'numpy<1.15.0,>=1.8.2',
- 'requests<2.19.0,>=2.18.4',
- ],
+ 'torch': ['torch~=1.0'],
+ 'mxnet': ['mxnet~=1.0', 'requests~=2.18.4', 'numpy<1.15.0,>=1.8.2'],
# 'dask': [
# 'dask[array]'
# ],
@@ -31,7 +26,7 @@
'minuit': ['iminuit'],
'develop': [
'pyflakes',
- 'pytest<4.0.0,>=3.5.1',
+ 'pytest~=3.5',
'pytest-cov>=2.5.1',
'pytest-mock',
'pytest-benchmark[histogram]',
@@ -41,8 +36,8 @@
'matplotlib',
'jupyter',
'nbdime',
- 'uproot>=3.3.0',
- 'papermill>=0.16.0',
+ 'uproot~=3.3',
+ 'papermill~=0.16',
'graphviz',
'bumpversion',
'sphinx',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -12,18 +12,13 @@\n \n extras_require = {\n 'tensorflow': [\n- 'tensorflow>=1.12.0',\n- 'tensorflow-probability>=0.5.0',\n+ 'tensorflow~=1.13',\n+ 'tensorflow-probability~=0.5',\n 'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass\n 'setuptools<=39.1.0',\n ],\n- 'torch': ['torch>=1.0.0'],\n- 'mxnet': [\n- 'mxnet>=1.0.0',\n- 'requests<2.19.0,>=2.18.4',\n- 'numpy<1.15.0,>=1.8.2',\n- 'requests<2.19.0,>=2.18.4',\n- ],\n+ 'torch': ['torch~=1.0'],\n+ 'mxnet': ['mxnet~=1.0', 'requests~=2.18.4', 'numpy<1.15.0,>=1.8.2'],\n # 'dask': [\n # 'dask[array]'\n # ],\n@@ -31,7 +26,7 @@\n 'minuit': ['iminuit'],\n 'develop': [\n 'pyflakes',\n- 'pytest<4.0.0,>=3.5.1',\n+ 'pytest~=3.5',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n@@ -41,8 +36,8 @@\n 'matplotlib',\n 'jupyter',\n 'nbdime',\n- 'uproot>=3.3.0',\n- 'papermill>=0.16.0',\n+ 'uproot~=3.3',\n+ 'papermill~=0.16',\n 'graphviz',\n 'bumpversion',\n 'sphinx',\n", "issue": "Pin optional dependencies at the minor release level\n# Description\r\n\r\nTo avoid having our prior releases break like `v0.0.15` did in Issue #396 it would be good to pin our optional dependencies at the minor release level for each release. This should safeguard us from old releases getting broken by API changes in the dependencies that we use as applications.\r\n\r\nTo be clear, I don't think we should limit the dependencies in `install_requires` beyond placing _lower_ bounds, but I do think that we should now be placing upper bounds on all of the optional dependencies as we are really more using those as **applications** in our library.\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom setuptools import setup, find_packages\nfrom os import path\nimport sys\n\nthis_directory = path.abspath(path.dirname(__file__))\nif sys.version_info.major < 3:\n from io import open\nwith open(path.join(this_directory, 'README.md'), encoding='utf-8') as readme_md:\n long_description = readme_md.read()\n\nextras_require = {\n 'tensorflow': [\n 'tensorflow>=1.12.0',\n 'tensorflow-probability>=0.5.0',\n 'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass\n 'setuptools<=39.1.0',\n ],\n 'torch': ['torch>=1.0.0'],\n 'mxnet': [\n 'mxnet>=1.0.0',\n 'requests<2.19.0,>=2.18.4',\n 'numpy<1.15.0,>=1.8.2',\n 'requests<2.19.0,>=2.18.4',\n ],\n # 'dask': [\n # 'dask[array]'\n # ],\n 'xmlimport': ['uproot'],\n 'minuit': ['iminuit'],\n 'develop': [\n 'pyflakes',\n 'pytest<4.0.0,>=3.5.1',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'python-coveralls',\n 'coverage>=4.0', # coveralls\n 'matplotlib',\n 'jupyter',\n 'nbdime',\n 'uproot>=3.3.0',\n 'papermill>=0.16.0',\n 'graphviz',\n 'bumpversion',\n 'sphinx',\n 'sphinxcontrib-bibtex',\n 'sphinxcontrib-napoleon',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'sphinx-issues',\n 'm2r',\n 'jsonpatch',\n 'ipython<7', # jupyter_console and ipython clash in dependency requirement -- downgrade ipython for now\n 'pre-commit',\n 'black;python_version>=\"3.6\"', # Black is Python3 only\n 'twine',\n ],\n}\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\nsetup(\n name='pyhf',\n version='0.0.16',\n description='(partial) pure python histfactory implementation',\n long_description=long_description,\n long_description_content_type='text/markdown',\n url='https://github.com/diana-hep/pyhf',\n author='Lukas Heinrich',\n author_email='[email protected]',\n license='Apache',\n keywords='physics fitting numpy scipy tensorflow pytorch mxnet dask',\n classifiers=[\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n packages=find_packages(),\n include_package_data=True,\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*\",\n install_requires=[\n 'scipy', # requires numpy, which is required by pyhf, tensorflow, and mxnet\n 'click>=6.0', # for console scripts,\n 'tqdm', # for readxml\n 'six', # for modifiers\n 'jsonschema>=v3.0.0a2', # for utils, alpha-release for draft 6\n 'jsonpatch',\n ],\n extras_require=extras_require,\n entry_points={'console_scripts': ['pyhf=pyhf.commandline:pyhf']},\n dependency_links=[],\n)\n", "path": "setup.py"}]} | 1,747 | 493 |
gh_patches_debug_17442 | rasdani/github-patches | git_diff | spacetelescope__jwql-857 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add code to manage.py to create necessesary symlinks to run web app locally
In order to run the JWQL web app locally, one must create symbolic links to the `outputs`, `thumbnails`, `preview_images`, and `filesystem` directories. We can add some code in `website.manage.py` in order to do this automatically. Something like this:
```python
from jwql.utils.utils import get_config()
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "jwql_proj.settings")
# Create symbolic links here (if they don't already exist)
for directory in ['filesystem', 'outputs', 'preview_image_filesystem', 'thumbnails_filesystem']:
path = get_config()[directory]
# code to create symlink
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
) from exc
execute_from_command_line(sys.argv)
```
Credit @york-stsci for the suggestion!
</issue>
<code>
[start of jwql/website/manage.py]
1 #! /usr/bin/env python
2
3 """Utility module for administrative tasks.
4
5 A python script version of Django's command-line utility for
6 administrative tasks (``django-admin``). Additionally, puts the project
7 package on ``sys.path`` and defines the ``DJANGO_SETTINGS_MODULE``
8 variable to point to the jwql ``settings.py`` file.
9
10 Generated by ``django-admin startproject`` using Django 2.0.1.
11
12 Use
13 ---
14
15 To run the web app server:
16 ::
17
18 python manage.py runserver
19
20 To start the interactive shellL:
21 ::
22
23 python manage.py shell
24
25 To run tests for all installed apps:
26 ::
27
28 python manage.py test
29
30 References
31 ----------
32 For more information please see:
33 ``https://docs.djangoproject.com/en/2.0/ref/django-admin/``
34 """
35
36 import os
37 import sys
38
39 if __name__ == "__main__":
40
41 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "jwql_proj.settings")
42
43 try:
44 from django.core.management import execute_from_command_line
45 except ImportError as exc:
46 raise ImportError(
47 "Couldn't import Django. Are you sure it's installed and "
48 "available on your PYTHONPATH environment variable? Did you "
49 "forget to activate a virtual environment?"
50 ) from exc
51 execute_from_command_line(sys.argv)
52
[end of jwql/website/manage.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/jwql/website/manage.py b/jwql/website/manage.py
--- a/jwql/website/manage.py
+++ b/jwql/website/manage.py
@@ -36,10 +36,25 @@
import os
import sys
+from jwql.utils.utils import get_config
+
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "jwql_proj.settings")
+ directory_mapping = {
+ 'filesystem': 'filesystem',
+ 'outputs': 'outputs',
+ 'preview_image_filesystem': 'preview_images',
+ 'thumbnail_filesystem': 'thumbnails'
+ }
+
+ for directory in ['filesystem', 'outputs', 'preview_image_filesystem', 'thumbnail_filesystem']:
+ symlink_location = os.path.join(os.path.dirname(__file__), 'apps', 'jwql', 'static', directory_mapping[directory])
+ if not os.path.exists(symlink_location):
+ symlink_path = get_config()[directory]
+ os.symlink(symlink_path, symlink_location)
+
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
| {"golden_diff": "diff --git a/jwql/website/manage.py b/jwql/website/manage.py\n--- a/jwql/website/manage.py\n+++ b/jwql/website/manage.py\n@@ -36,10 +36,25 @@\n import os\n import sys\n \n+from jwql.utils.utils import get_config\n+\n if __name__ == \"__main__\":\n \n os.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"jwql_proj.settings\")\n \n+ directory_mapping = {\n+ 'filesystem': 'filesystem',\n+ 'outputs': 'outputs',\n+ 'preview_image_filesystem': 'preview_images',\n+ 'thumbnail_filesystem': 'thumbnails'\n+ }\n+\n+ for directory in ['filesystem', 'outputs', 'preview_image_filesystem', 'thumbnail_filesystem']:\n+ symlink_location = os.path.join(os.path.dirname(__file__), 'apps', 'jwql', 'static', directory_mapping[directory])\n+ if not os.path.exists(symlink_location):\n+ symlink_path = get_config()[directory]\n+ os.symlink(symlink_path, symlink_location)\n+\n try:\n from django.core.management import execute_from_command_line\n except ImportError as exc:\n", "issue": "Add code to manage.py to create necessesary symlinks to run web app locally \nIn order to run the JWQL web app locally, one must create symbolic links to the `outputs`, `thumbnails`, `preview_images`, and `filesystem` directories. We can add some code in `website.manage.py` in order to do this automatically. Something like this:\r\n\r\n\r\n```python\r\nfrom jwql.utils.utils import get_config()\r\n\r\nif __name__ == \"__main__\":\r\n\r\n os.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"jwql_proj.settings\")\r\n\r\n # Create symbolic links here (if they don't already exist)\r\n for directory in ['filesystem', 'outputs', 'preview_image_filesystem', 'thumbnails_filesystem']:\r\n path = get_config()[directory]\r\n # code to create symlink\r\n\r\n try:\r\n from django.core.management import execute_from_command_line\r\n except ImportError as exc:\r\n raise ImportError(\r\n \"Couldn't import Django. Are you sure it's installed and \"\r\n \"available on your PYTHONPATH environment variable? Did you \"\r\n \"forget to activate a virtual environment?\"\r\n ) from exc\r\n execute_from_command_line(sys.argv)\r\n```\r\n\r\nCredit @york-stsci for the suggestion!\n", "before_files": [{"content": "#! /usr/bin/env python\n\n\"\"\"Utility module for administrative tasks.\n\nA python script version of Django's command-line utility for\nadministrative tasks (``django-admin``). Additionally, puts the project\npackage on ``sys.path`` and defines the ``DJANGO_SETTINGS_MODULE``\nvariable to point to the jwql ``settings.py`` file.\n\nGenerated by ``django-admin startproject`` using Django 2.0.1.\n\nUse\n---\n\n To run the web app server:\n ::\n\n python manage.py runserver\n\n To start the interactive shellL:\n ::\n\n python manage.py shell\n\n To run tests for all installed apps:\n ::\n\n python manage.py test\n\nReferences\n----------\nFor more information please see:\n ``https://docs.djangoproject.com/en/2.0/ref/django-admin/``\n\"\"\"\n\nimport os\nimport sys\n\nif __name__ == \"__main__\":\n\n os.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"jwql_proj.settings\")\n\n try:\n from django.core.management import execute_from_command_line\n except ImportError as exc:\n raise ImportError(\n \"Couldn't import Django. Are you sure it's installed and \"\n \"available on your PYTHONPATH environment variable? Did you \"\n \"forget to activate a virtual environment?\"\n ) from exc\n execute_from_command_line(sys.argv)\n", "path": "jwql/website/manage.py"}]} | 1,179 | 256 |
gh_patches_debug_20691 | rasdani/github-patches | git_diff | ephios-dev__ephios-525 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Select2 on disposition view shows error alert
Closing the select2 field in the disposition view without selecting an entry (e.g. by typing something into the field an then clicking somewhere outside the field) also triggers the form submission. This fails because no valid user has been selected and consequently shows an ugly alert to the user.
</issue>
<code>
[start of ephios/core/context.py]
1 import importlib
2
3 from django.conf import settings
4 from django.templatetags.static import static
5 from django.utils.translation import get_language
6
7 from ephios.core.models import AbstractParticipation
8 from ephios.core.signals import footer_link
9
10 # suggested in https://github.com/python-poetry/poetry/issues/273
11 EPHIOS_VERSION = "v" + importlib.metadata.version("ephios")
12
13
14 def ephios_base_context(request):
15 footer = {}
16 for _, result in footer_link.send(None, request=request):
17 for label, url in result.items():
18 footer[label] = url
19
20 datatables_translation_url = None
21 if get_language() == "de-de":
22 datatables_translation_url = static("datatables/german.json")
23
24 return {
25 "ParticipationStates": AbstractParticipation.States,
26 "footer": footer,
27 "datatables_translation_url": datatables_translation_url,
28 "ephios_version": EPHIOS_VERSION,
29 "SITE_URL": settings.SITE_URL,
30 "PWA_APP_ICONS": settings.PWA_APP_ICONS,
31 }
32
[end of ephios/core/context.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ephios/core/context.py b/ephios/core/context.py
--- a/ephios/core/context.py
+++ b/ephios/core/context.py
@@ -1,7 +1,6 @@
import importlib
from django.conf import settings
-from django.templatetags.static import static
from django.utils.translation import get_language
from ephios.core.models import AbstractParticipation
@@ -17,14 +16,10 @@
for label, url in result.items():
footer[label] = url
- datatables_translation_url = None
- if get_language() == "de-de":
- datatables_translation_url = static("datatables/german.json")
-
return {
"ParticipationStates": AbstractParticipation.States,
"footer": footer,
- "datatables_translation_url": datatables_translation_url,
+ "LANGUAGE_CODE": get_language(),
"ephios_version": EPHIOS_VERSION,
"SITE_URL": settings.SITE_URL,
"PWA_APP_ICONS": settings.PWA_APP_ICONS,
| {"golden_diff": "diff --git a/ephios/core/context.py b/ephios/core/context.py\n--- a/ephios/core/context.py\n+++ b/ephios/core/context.py\n@@ -1,7 +1,6 @@\n import importlib\n \n from django.conf import settings\n-from django.templatetags.static import static\n from django.utils.translation import get_language\n \n from ephios.core.models import AbstractParticipation\n@@ -17,14 +16,10 @@\n for label, url in result.items():\n footer[label] = url\n \n- datatables_translation_url = None\n- if get_language() == \"de-de\":\n- datatables_translation_url = static(\"datatables/german.json\")\n-\n return {\n \"ParticipationStates\": AbstractParticipation.States,\n \"footer\": footer,\n- \"datatables_translation_url\": datatables_translation_url,\n+ \"LANGUAGE_CODE\": get_language(),\n \"ephios_version\": EPHIOS_VERSION,\n \"SITE_URL\": settings.SITE_URL,\n \"PWA_APP_ICONS\": settings.PWA_APP_ICONS,\n", "issue": "Select2 on disposition view shows error alert\nClosing the select2 field in the disposition view without selecting an entry (e.g. by typing something into the field an then clicking somewhere outside the field) also triggers the form submission. This fails because no valid user has been selected and consequently shows an ugly alert to the user.\n", "before_files": [{"content": "import importlib\n\nfrom django.conf import settings\nfrom django.templatetags.static import static\nfrom django.utils.translation import get_language\n\nfrom ephios.core.models import AbstractParticipation\nfrom ephios.core.signals import footer_link\n\n# suggested in https://github.com/python-poetry/poetry/issues/273\nEPHIOS_VERSION = \"v\" + importlib.metadata.version(\"ephios\")\n\n\ndef ephios_base_context(request):\n footer = {}\n for _, result in footer_link.send(None, request=request):\n for label, url in result.items():\n footer[label] = url\n\n datatables_translation_url = None\n if get_language() == \"de-de\":\n datatables_translation_url = static(\"datatables/german.json\")\n\n return {\n \"ParticipationStates\": AbstractParticipation.States,\n \"footer\": footer,\n \"datatables_translation_url\": datatables_translation_url,\n \"ephios_version\": EPHIOS_VERSION,\n \"SITE_URL\": settings.SITE_URL,\n \"PWA_APP_ICONS\": settings.PWA_APP_ICONS,\n }\n", "path": "ephios/core/context.py"}]} | 896 | 233 |
gh_patches_debug_26734 | rasdani/github-patches | git_diff | kivy__kivy-4268 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
kivy/examples/android/takepicture/ fails on Android
Example cloned form GIT build with:
buildozer android debug
deployed to Android 4.4.4 crashes, from adb logcat output teh following lines seem to be relevant:
I/python (25790): /data/data/org.test.takepicture/files/lib/python2.7/site-packages/kivy/core/image/img_pygame.py:13: RuntimeWarning: import cdrom: No module named cdrom
I/python (25790): Traceback (most recent call last):
I/python (25790): File "/home/jb/python/mread/.buildozer/android/app/main.py", line 32, in <module>
I/python (25790): ImportError: No module named PIL
I/python (25790): Python for android ended.
Second line indicates problem with image library, unfortunately I have no clue how to fix it.
</issue>
<code>
[start of examples/android/takepicture/main.py]
1 '''
2 Take picture
3 ============
4
5 .. author:: Mathieu Virbel <[email protected]>
6
7 Little example to demonstrate how to start an Intent, and get the result.
8 When you use the Android.startActivityForResult(), the result will be dispatched
9 into onActivityResult. You can catch the event with the android.activity API
10 from python-for-android project.
11
12 If you want to compile it, don't forget to add the CAMERA permission::
13
14 ./build.py --name 'TakePicture' --package org.test.takepicture \
15 --permission CAMERA --version 1 \
16 --private ~/code/kivy/examples/android/takepicture \
17 debug installd
18
19 '''
20
21 __version__ = '0.1'
22
23 from kivy.app import App
24 from os.path import exists
25 from jnius import autoclass, cast
26 from android import activity
27 from functools import partial
28 from kivy.clock import Clock
29 from kivy.uix.scatter import Scatter
30 from kivy.properties import StringProperty
31
32 from PIL import Image
33
34 Intent = autoclass('android.content.Intent')
35 PythonActivity = autoclass('org.renpy.android.PythonActivity')
36 MediaStore = autoclass('android.provider.MediaStore')
37 Uri = autoclass('android.net.Uri')
38 Environment = autoclass('android.os.Environment')
39
40
41 class Picture(Scatter):
42 source = StringProperty(None)
43
44
45 class TakePictureApp(App):
46 def build(self):
47 self.index = 0
48 activity.bind(on_activity_result=self.on_activity_result)
49
50 def get_filename(self):
51 while True:
52 self.index += 1
53 fn = (Environment.getExternalStorageDirectory().getPath() +
54 '/takepicture{}.jpg'.format(self.index))
55 if not exists(fn):
56 return fn
57
58 def take_picture(self):
59 intent = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
60 self.last_fn = self.get_filename()
61 self.uri = Uri.parse('file://' + self.last_fn)
62 self.uri = cast('android.os.Parcelable', self.uri)
63 intent.putExtra(MediaStore.EXTRA_OUTPUT, self.uri)
64 PythonActivity.mActivity.startActivityForResult(intent, 0x123)
65
66 def on_activity_result(self, requestCode, resultCode, intent):
67 if requestCode == 0x123:
68 Clock.schedule_once(partial(self.add_picture, self.last_fn), 0)
69
70 def add_picture(self, fn, *args):
71 im = Image.open(fn)
72 width, height = im.size
73 im.thumbnail((width / 4, height / 4), Image.ANTIALIAS)
74 im.save(fn, quality=95)
75 self.root.add_widget(Picture(source=fn, center=self.root.center))
76
77 def on_pause(self):
78 return True
79
80 TakePictureApp().run()
81
[end of examples/android/takepicture/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/android/takepicture/main.py b/examples/android/takepicture/main.py
--- a/examples/android/takepicture/main.py
+++ b/examples/android/takepicture/main.py
@@ -23,7 +23,7 @@
from kivy.app import App
from os.path import exists
from jnius import autoclass, cast
-from android import activity
+from android import activity, mActivity
from functools import partial
from kivy.clock import Clock
from kivy.uix.scatter import Scatter
@@ -32,7 +32,6 @@
from PIL import Image
Intent = autoclass('android.content.Intent')
-PythonActivity = autoclass('org.renpy.android.PythonActivity')
MediaStore = autoclass('android.provider.MediaStore')
Uri = autoclass('android.net.Uri')
Environment = autoclass('android.os.Environment')
@@ -61,7 +60,7 @@
self.uri = Uri.parse('file://' + self.last_fn)
self.uri = cast('android.os.Parcelable', self.uri)
intent.putExtra(MediaStore.EXTRA_OUTPUT, self.uri)
- PythonActivity.mActivity.startActivityForResult(intent, 0x123)
+ mActivity.startActivityForResult(intent, 0x123)
def on_activity_result(self, requestCode, resultCode, intent):
if requestCode == 0x123:
| {"golden_diff": "diff --git a/examples/android/takepicture/main.py b/examples/android/takepicture/main.py\n--- a/examples/android/takepicture/main.py\n+++ b/examples/android/takepicture/main.py\n@@ -23,7 +23,7 @@\n from kivy.app import App\n from os.path import exists\n from jnius import autoclass, cast\n-from android import activity\n+from android import activity, mActivity\n from functools import partial\n from kivy.clock import Clock\n from kivy.uix.scatter import Scatter\n@@ -32,7 +32,6 @@\n from PIL import Image\n \n Intent = autoclass('android.content.Intent')\n-PythonActivity = autoclass('org.renpy.android.PythonActivity')\n MediaStore = autoclass('android.provider.MediaStore')\n Uri = autoclass('android.net.Uri')\n Environment = autoclass('android.os.Environment')\n@@ -61,7 +60,7 @@\n self.uri = Uri.parse('file://' + self.last_fn)\n self.uri = cast('android.os.Parcelable', self.uri)\n intent.putExtra(MediaStore.EXTRA_OUTPUT, self.uri)\n- PythonActivity.mActivity.startActivityForResult(intent, 0x123)\n+ mActivity.startActivityForResult(intent, 0x123)\n \n def on_activity_result(self, requestCode, resultCode, intent):\n if requestCode == 0x123:\n", "issue": "kivy/examples/android/takepicture/ fails on Android\nExample cloned form GIT build with:\nbuildozer android debug\ndeployed to Android 4.4.4 crashes, from adb logcat output teh following lines seem to be relevant:\n\nI/python (25790): /data/data/org.test.takepicture/files/lib/python2.7/site-packages/kivy/core/image/img_pygame.py:13: RuntimeWarning: import cdrom: No module named cdrom\n\nI/python (25790): Traceback (most recent call last):\nI/python (25790): File \"/home/jb/python/mread/.buildozer/android/app/main.py\", line 32, in <module>\nI/python (25790): ImportError: No module named PIL\nI/python (25790): Python for android ended.\n\nSecond line indicates problem with image library, unfortunately I have no clue how to fix it.\n\n", "before_files": [{"content": "'''\nTake picture\n============\n\n.. author:: Mathieu Virbel <[email protected]>\n\nLittle example to demonstrate how to start an Intent, and get the result.\nWhen you use the Android.startActivityForResult(), the result will be dispatched\ninto onActivityResult. You can catch the event with the android.activity API\nfrom python-for-android project.\n\nIf you want to compile it, don't forget to add the CAMERA permission::\n\n ./build.py --name 'TakePicture' --package org.test.takepicture \\\n --permission CAMERA --version 1 \\\n --private ~/code/kivy/examples/android/takepicture \\\n debug installd\n\n'''\n\n__version__ = '0.1'\n\nfrom kivy.app import App\nfrom os.path import exists\nfrom jnius import autoclass, cast\nfrom android import activity\nfrom functools import partial\nfrom kivy.clock import Clock\nfrom kivy.uix.scatter import Scatter\nfrom kivy.properties import StringProperty\n\nfrom PIL import Image\n\nIntent = autoclass('android.content.Intent')\nPythonActivity = autoclass('org.renpy.android.PythonActivity')\nMediaStore = autoclass('android.provider.MediaStore')\nUri = autoclass('android.net.Uri')\nEnvironment = autoclass('android.os.Environment')\n\n\nclass Picture(Scatter):\n source = StringProperty(None)\n\n\nclass TakePictureApp(App):\n def build(self):\n self.index = 0\n activity.bind(on_activity_result=self.on_activity_result)\n\n def get_filename(self):\n while True:\n self.index += 1\n fn = (Environment.getExternalStorageDirectory().getPath() +\n '/takepicture{}.jpg'.format(self.index))\n if not exists(fn):\n return fn\n\n def take_picture(self):\n intent = Intent(MediaStore.ACTION_IMAGE_CAPTURE)\n self.last_fn = self.get_filename()\n self.uri = Uri.parse('file://' + self.last_fn)\n self.uri = cast('android.os.Parcelable', self.uri)\n intent.putExtra(MediaStore.EXTRA_OUTPUT, self.uri)\n PythonActivity.mActivity.startActivityForResult(intent, 0x123)\n\n def on_activity_result(self, requestCode, resultCode, intent):\n if requestCode == 0x123:\n Clock.schedule_once(partial(self.add_picture, self.last_fn), 0)\n\n def add_picture(self, fn, *args):\n im = Image.open(fn)\n width, height = im.size\n im.thumbnail((width / 4, height / 4), Image.ANTIALIAS)\n im.save(fn, quality=95)\n self.root.add_widget(Picture(source=fn, center=self.root.center))\n\n def on_pause(self):\n return True\n\nTakePictureApp().run()\n", "path": "examples/android/takepicture/main.py"}]} | 1,473 | 287 |
gh_patches_debug_15821 | rasdani/github-patches | git_diff | crytic__slither-387 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ABIEncoderV2 flagged as solidity version
The following Solidity code is flagged as being different solidity versions:
```sol
pragma solidity 0.5.12;
pragma experimental ABIEncoderV2;
```
Outputs:
```
INFO:Detectors:
Different versions of Solidity is used in :
- Version used: ['0.5.12', 'ABIEncoderV2']
- 0.5.12 (Contract.sol#1)
- ABIEncoderV2 (Contract.sol#2)
```
</issue>
<code>
[start of slither/core/declarations/pragma_directive.py]
1 from slither.core.source_mapping.source_mapping import SourceMapping
2
3 class Pragma(SourceMapping):
4
5 def __init__(self, directive):
6 super(Pragma, self).__init__()
7 self._directive = directive
8
9 @property
10 def directive(self):
11 '''
12 list(str)
13 '''
14 return self._directive
15
16 @property
17 def version(self):
18 return ''.join(self.directive[1:])
19
20 @property
21 def name(self):
22 return self.version
23
24 def __str__(self):
25 return 'pragma '+''.join(self.directive)
26
[end of slither/core/declarations/pragma_directive.py]
[start of slither/detectors/attributes/constant_pragma.py]
1 """
2 Check that the same pragma is used in all the files
3 """
4
5 from slither.detectors.abstract_detector import AbstractDetector, DetectorClassification
6 from slither.formatters.attributes.constant_pragma import format
7
8
9 class ConstantPragma(AbstractDetector):
10 """
11 Check that the same pragma is used in all the files
12 """
13
14 ARGUMENT = 'pragma'
15 HELP = 'If different pragma directives are used'
16 IMPACT = DetectorClassification.INFORMATIONAL
17 CONFIDENCE = DetectorClassification.HIGH
18
19 WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#different-pragma-directives-are-used'
20
21
22 WIKI_TITLE = 'Different pragma directives are used'
23 WIKI_DESCRIPTION = 'Detect if different Solidity versions are used.'
24 WIKI_RECOMMENDATION = 'Use one Solidity version.'
25
26 def _detect(self):
27 results = []
28 pragma = self.slither.pragma_directives
29 versions = [p.version for p in pragma]
30 versions = sorted(list(set(versions)))
31
32 if len(versions) > 1:
33 info = [f"Different versions of Solidity is used in {self.filename}:\n"]
34 info += [f"\t- Version used: {[str(v) for v in versions]}\n"]
35
36 for p in pragma:
37 info += ["\t- ", p, "\n"]
38
39 res = self.generate_result(info)
40
41 results.append(res)
42
43 return results
44
45 @staticmethod
46 def _format(slither, result):
47 format(slither, result)
48
[end of slither/detectors/attributes/constant_pragma.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/slither/core/declarations/pragma_directive.py b/slither/core/declarations/pragma_directive.py
--- a/slither/core/declarations/pragma_directive.py
+++ b/slither/core/declarations/pragma_directive.py
@@ -21,5 +21,11 @@
def name(self):
return self.version
+ @property
+ def is_solidity_version(self):
+ if len(self._directive) > 0:
+ return self._directive[0].lower() == 'solidity'
+ return False
+
def __str__(self):
return 'pragma '+''.join(self.directive)
diff --git a/slither/detectors/attributes/constant_pragma.py b/slither/detectors/attributes/constant_pragma.py
--- a/slither/detectors/attributes/constant_pragma.py
+++ b/slither/detectors/attributes/constant_pragma.py
@@ -26,7 +26,7 @@
def _detect(self):
results = []
pragma = self.slither.pragma_directives
- versions = [p.version for p in pragma]
+ versions = [p.version for p in pragma if p.is_solidity_version]
versions = sorted(list(set(versions)))
if len(versions) > 1:
| {"golden_diff": "diff --git a/slither/core/declarations/pragma_directive.py b/slither/core/declarations/pragma_directive.py\n--- a/slither/core/declarations/pragma_directive.py\n+++ b/slither/core/declarations/pragma_directive.py\n@@ -21,5 +21,11 @@\n def name(self):\n return self.version\n \n+ @property\n+ def is_solidity_version(self):\n+ if len(self._directive) > 0:\n+ return self._directive[0].lower() == 'solidity'\n+ return False\n+\n def __str__(self):\n return 'pragma '+''.join(self.directive)\ndiff --git a/slither/detectors/attributes/constant_pragma.py b/slither/detectors/attributes/constant_pragma.py\n--- a/slither/detectors/attributes/constant_pragma.py\n+++ b/slither/detectors/attributes/constant_pragma.py\n@@ -26,7 +26,7 @@\n def _detect(self):\n results = []\n pragma = self.slither.pragma_directives\n- versions = [p.version for p in pragma]\n+ versions = [p.version for p in pragma if p.is_solidity_version]\n versions = sorted(list(set(versions)))\n \n if len(versions) > 1:\n", "issue": "ABIEncoderV2 flagged as solidity version\nThe following Solidity code is flagged as being different solidity versions:\r\n\r\n```sol\r\npragma solidity 0.5.12;\r\npragma experimental ABIEncoderV2;\r\n```\r\n\r\nOutputs:\r\n\r\n```\r\nINFO:Detectors:\r\nDifferent versions of Solidity is used in :\r\n\t- Version used: ['0.5.12', 'ABIEncoderV2']\r\n\t- 0.5.12 (Contract.sol#1)\r\n\t- ABIEncoderV2 (Contract.sol#2)\r\n```\n", "before_files": [{"content": "from slither.core.source_mapping.source_mapping import SourceMapping\n\nclass Pragma(SourceMapping):\n\n def __init__(self, directive):\n super(Pragma, self).__init__()\n self._directive = directive\n\n @property\n def directive(self):\n '''\n list(str)\n '''\n return self._directive\n\n @property\n def version(self):\n return ''.join(self.directive[1:])\n\n @property\n def name(self):\n return self.version\n\n def __str__(self):\n return 'pragma '+''.join(self.directive)\n", "path": "slither/core/declarations/pragma_directive.py"}, {"content": "\"\"\"\n Check that the same pragma is used in all the files\n\"\"\"\n\nfrom slither.detectors.abstract_detector import AbstractDetector, DetectorClassification\nfrom slither.formatters.attributes.constant_pragma import format\n\n\nclass ConstantPragma(AbstractDetector):\n \"\"\"\n Check that the same pragma is used in all the files\n \"\"\"\n\n ARGUMENT = 'pragma'\n HELP = 'If different pragma directives are used'\n IMPACT = DetectorClassification.INFORMATIONAL\n CONFIDENCE = DetectorClassification.HIGH\n\n WIKI = 'https://github.com/crytic/slither/wiki/Detector-Documentation#different-pragma-directives-are-used'\n\n\n WIKI_TITLE = 'Different pragma directives are used'\n WIKI_DESCRIPTION = 'Detect if different Solidity versions are used.'\n WIKI_RECOMMENDATION = 'Use one Solidity version.'\n\n def _detect(self):\n results = []\n pragma = self.slither.pragma_directives\n versions = [p.version for p in pragma]\n versions = sorted(list(set(versions)))\n\n if len(versions) > 1:\n info = [f\"Different versions of Solidity is used in {self.filename}:\\n\"]\n info += [f\"\\t- Version used: {[str(v) for v in versions]}\\n\"]\n\n for p in pragma:\n info += [\"\\t- \", p, \"\\n\"]\n\n res = self.generate_result(info)\n\n results.append(res)\n\n return results\n\n @staticmethod\n def _format(slither, result):\n format(slither, result)\n", "path": "slither/detectors/attributes/constant_pragma.py"}]} | 1,291 | 295 |
gh_patches_debug_6765 | rasdani/github-patches | git_diff | conda-forge__conda-smithy-971 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix linter error on missing target_platform
Invoking `conda smithy recipe-lint` on the [conda-forge/go1.4-feedstock/meta.yaml](
https://github.com/conda-forge/go1.4-bootstrap-feedstock/blob/master/recipe/meta.yaml) file yields the following exception:
```
± conda smithy recipe-lint
Traceback (most recent call last):
File "/opt/conda/bin/conda-smithy", line 10, in <module>
sys.exit(main())
File "/opt/conda/lib/python3.6/site-packages/conda_smithy/cli.py", line 279, in main
args.subcommand_func(args)
File "/opt/conda/lib/python3.6/site-packages/conda_smithy/cli.py", line 203, in __call__
return_hints=True)
File "/opt/conda/lib/python3.6/site-packages/conda_smithy/lint_recipe.py", line 428, in main
content = render_meta_yaml(''.join(fh))
File "/opt/conda/lib/python3.6/site-packages/conda_smithy/utils.py", line 49, in render_meta_yaml
content = env.from_string(text).render(os=mockos, environ=mockos.environ)
File "/opt/conda/lib/python3.6/site-packages/jinja2/asyncsupport.py", line 76, in render
return original_render(self, *args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/jinja2/environment.py", line 1008, in render
return self.environment.handle_exception(exc_info, True)
File "/opt/conda/lib/python3.6/site-packages/jinja2/environment.py", line 780, in handle_exception
reraise(exc_type, exc_value, tb)
File "/opt/conda/lib/python3.6/site-packages/jinja2/_compat.py", line 37, in reraise
raise value.with_traceback(tb)
File "<template>", line 29, in top-level template code
jinja2.exceptions.UndefinedError: 'target_platform' is undefined
```
</issue>
<code>
[start of conda_smithy/utils.py]
1 import shutil
2 import tempfile
3 import jinja2
4 import six
5 import datetime
6 import time
7 from collections import defaultdict
8 from contextlib import contextmanager
9
10 @contextmanager
11 def tmp_directory():
12 tmp_dir = tempfile.mkdtemp('_recipe')
13 yield tmp_dir
14 shutil.rmtree(tmp_dir)
15
16
17 class NullUndefined(jinja2.Undefined):
18 def __unicode__(self):
19 return self._undefined_name
20
21 def __getattr__(self, name):
22 return '{}.{}'.format(self, name)
23
24 def __getitem__(self, name):
25 return '{}["{}"]'.format(self, name)
26
27
28 class MockOS(dict):
29 def __init__(self):
30 self.environ = defaultdict(lambda: '')
31
32
33 def render_meta_yaml(text):
34 env = jinja2.Environment(undefined=NullUndefined)
35
36 # stub out cb3 jinja2 functions - they are not important for linting
37 # if we don't stub them out, the ruamel.yaml load fails to interpret them
38 # we can't just use conda-build's api.render functionality, because it would apply selectors
39 env.globals.update(dict(compiler=lambda x: x + '_compiler_stub',
40 pin_subpackage=lambda *args, **kwargs: 'subpackage_stub',
41 pin_compatible=lambda *args, **kwargs: 'compatible_pin_stub',
42 cdt=lambda *args, **kwargs: 'cdt_stub',
43 load_file_regex=lambda *args, **kwargs: \
44 defaultdict(lambda : ''),
45 datetime=datetime,
46 time=time,
47 ))
48 mockos = MockOS()
49 content = env.from_string(text).render(os=mockos, environ=mockos.environ)
50 return content
51
[end of conda_smithy/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conda_smithy/utils.py b/conda_smithy/utils.py
--- a/conda_smithy/utils.py
+++ b/conda_smithy/utils.py
@@ -44,6 +44,7 @@
defaultdict(lambda : ''),
datetime=datetime,
time=time,
+ target_platform="linux-64",
))
mockos = MockOS()
content = env.from_string(text).render(os=mockos, environ=mockos.environ)
| {"golden_diff": "diff --git a/conda_smithy/utils.py b/conda_smithy/utils.py\n--- a/conda_smithy/utils.py\n+++ b/conda_smithy/utils.py\n@@ -44,6 +44,7 @@\n defaultdict(lambda : ''),\n datetime=datetime,\n time=time,\n+ target_platform=\"linux-64\",\n ))\n mockos = MockOS()\n content = env.from_string(text).render(os=mockos, environ=mockos.environ)\n", "issue": "Fix linter error on missing target_platform\nInvoking `conda smithy recipe-lint` on the [conda-forge/go1.4-feedstock/meta.yaml](\r\nhttps://github.com/conda-forge/go1.4-bootstrap-feedstock/blob/master/recipe/meta.yaml) file yields the following exception:\r\n\r\n```\r\n\u00b1 conda smithy recipe-lint\r\nTraceback (most recent call last):\r\n File \"/opt/conda/bin/conda-smithy\", line 10, in <module>\r\n sys.exit(main())\r\n File \"/opt/conda/lib/python3.6/site-packages/conda_smithy/cli.py\", line 279, in main\r\n args.subcommand_func(args)\r\n File \"/opt/conda/lib/python3.6/site-packages/conda_smithy/cli.py\", line 203, in __call__\r\n return_hints=True)\r\n File \"/opt/conda/lib/python3.6/site-packages/conda_smithy/lint_recipe.py\", line 428, in main\r\n content = render_meta_yaml(''.join(fh))\r\n File \"/opt/conda/lib/python3.6/site-packages/conda_smithy/utils.py\", line 49, in render_meta_yaml\r\n content = env.from_string(text).render(os=mockos, environ=mockos.environ)\r\n File \"/opt/conda/lib/python3.6/site-packages/jinja2/asyncsupport.py\", line 76, in render\r\n return original_render(self, *args, **kwargs)\r\n File \"/opt/conda/lib/python3.6/site-packages/jinja2/environment.py\", line 1008, in render\r\n return self.environment.handle_exception(exc_info, True)\r\n File \"/opt/conda/lib/python3.6/site-packages/jinja2/environment.py\", line 780, in handle_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"/opt/conda/lib/python3.6/site-packages/jinja2/_compat.py\", line 37, in reraise\r\n raise value.with_traceback(tb)\r\n File \"<template>\", line 29, in top-level template code\r\njinja2.exceptions.UndefinedError: 'target_platform' is undefined\r\n```\n", "before_files": [{"content": "import shutil\nimport tempfile\nimport jinja2\nimport six\nimport datetime\nimport time\nfrom collections import defaultdict\nfrom contextlib import contextmanager\n\n@contextmanager\ndef tmp_directory():\n tmp_dir = tempfile.mkdtemp('_recipe')\n yield tmp_dir\n shutil.rmtree(tmp_dir)\n\n\nclass NullUndefined(jinja2.Undefined):\n def __unicode__(self):\n return self._undefined_name\n\n def __getattr__(self, name):\n return '{}.{}'.format(self, name)\n\n def __getitem__(self, name):\n return '{}[\"{}\"]'.format(self, name)\n\n\nclass MockOS(dict):\n def __init__(self):\n self.environ = defaultdict(lambda: '')\n\n\ndef render_meta_yaml(text):\n env = jinja2.Environment(undefined=NullUndefined)\n\n # stub out cb3 jinja2 functions - they are not important for linting\n # if we don't stub them out, the ruamel.yaml load fails to interpret them\n # we can't just use conda-build's api.render functionality, because it would apply selectors\n env.globals.update(dict(compiler=lambda x: x + '_compiler_stub',\n pin_subpackage=lambda *args, **kwargs: 'subpackage_stub',\n pin_compatible=lambda *args, **kwargs: 'compatible_pin_stub',\n cdt=lambda *args, **kwargs: 'cdt_stub',\n load_file_regex=lambda *args, **kwargs: \\\n defaultdict(lambda : ''),\n datetime=datetime,\n time=time,\n ))\n mockos = MockOS()\n content = env.from_string(text).render(os=mockos, environ=mockos.environ)\n return content\n", "path": "conda_smithy/utils.py"}]} | 1,461 | 105 |
gh_patches_debug_2928 | rasdani/github-patches | git_diff | ray-project__ray-3621 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[modin] Importing Modin before Ray can sometimes cause ImportError
### Describe the problem
<!-- Describe the problem clearly here. -->
When running Modin with Ray installed from source, I am sometimes running into `ImportError` and `ModuleNotFoundError` which is occurring when I am running a modified version of Modin. This forces me to modify Ray's source such that it does not try to use the Modin that is bundled with Ray.
I will work on a solution for this.
### Source code / logs
`import modin.pandas as pd`
```
Traceback (most recent call last):
File "/home/ubuntu/ray/python/ray/function_manager.py", line 165, in fetch_and_register_remote_function
function = pickle.loads(serialized_function)
ModuleNotFoundError: No module named 'modin.data_management.utils'
```
</issue>
<code>
[start of python/ray/__init__.py]
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4
5 import os
6 import sys
7
8 if "pyarrow" in sys.modules:
9 raise ImportError("Ray must be imported before pyarrow because Ray "
10 "requires a specific version of pyarrow (which is "
11 "packaged along with Ray).")
12
13 # Add the directory containing pyarrow to the Python path so that we find the
14 # pyarrow version packaged with ray and not a pre-existing pyarrow.
15 pyarrow_path = os.path.join(
16 os.path.abspath(os.path.dirname(__file__)), "pyarrow_files")
17 sys.path.insert(0, pyarrow_path)
18
19 # See https://github.com/ray-project/ray/issues/131.
20 helpful_message = """
21
22 If you are using Anaconda, try fixing this problem by running:
23
24 conda install libgcc
25 """
26
27 try:
28 import pyarrow # noqa: F401
29 except ImportError as e:
30 if ((hasattr(e, "msg") and isinstance(e.msg, str)
31 and ("libstdc++" in e.msg or "CXX" in e.msg))):
32 # This code path should be taken with Python 3.
33 e.msg += helpful_message
34 elif (hasattr(e, "message") and isinstance(e.message, str)
35 and ("libstdc++" in e.message or "CXX" in e.message)):
36 # This code path should be taken with Python 2.
37 condition = (hasattr(e, "args") and isinstance(e.args, tuple)
38 and len(e.args) == 1 and isinstance(e.args[0], str))
39 if condition:
40 e.args = (e.args[0] + helpful_message, )
41 else:
42 if not hasattr(e, "args"):
43 e.args = ()
44 elif not isinstance(e.args, tuple):
45 e.args = (e.args, )
46 e.args += (helpful_message, )
47 raise
48
49 modin_path = os.path.join(os.path.abspath(os.path.dirname(__file__)), "modin")
50 sys.path.insert(0, modin_path)
51
52 from ray.raylet import ObjectID, _config # noqa: E402
53 from ray.profiling import profile # noqa: E402
54 from ray.worker import (error_info, init, connect, disconnect, get, put, wait,
55 remote, get_gpu_ids, get_resource_ids, get_webui_url,
56 register_custom_serializer, shutdown,
57 is_initialized) # noqa: E402
58 from ray.worker import (SCRIPT_MODE, WORKER_MODE, LOCAL_MODE,
59 PYTHON_MODE) # noqa: E402
60 from ray.worker import global_state # noqa: E402
61 import ray.internal # noqa: E402
62 # We import ray.actor because some code is run in actor.py which initializes
63 # some functions in the worker.
64 import ray.actor # noqa: F401
65 from ray.actor import method # noqa: E402
66
67 # Ray version string.
68 __version__ = "0.6.0"
69
70 __all__ = [
71 "error_info", "init", "connect", "disconnect", "get", "put", "wait",
72 "remote", "profile", "actor", "method", "get_gpu_ids", "get_resource_ids",
73 "get_webui_url", "register_custom_serializer", "shutdown",
74 "is_initialized", "SCRIPT_MODE", "WORKER_MODE", "LOCAL_MODE",
75 "PYTHON_MODE", "global_state", "ObjectID", "_config", "__version__",
76 "internal"
77 ]
78
79 import ctypes # noqa: E402
80 # Windows only
81 if hasattr(ctypes, "windll"):
82 # Makes sure that all child processes die when we die. Also makes sure that
83 # fatal crashes result in process termination rather than an error dialog
84 # (the latter is annoying since we have a lot of processes). This is done
85 # by associating all child processes with a "job" object that imposes this
86 # behavior.
87 (lambda kernel32: (lambda job: (lambda n: kernel32.SetInformationJobObject(job, 9, "\0" * 17 + chr(0x8 | 0x4 | 0x20) + "\0" * (n - 18), n))(0x90 if ctypes.sizeof(ctypes.c_void_p) > ctypes.sizeof(ctypes.c_int) else 0x70) and kernel32.AssignProcessToJobObject(job, ctypes.c_void_p(kernel32.GetCurrentProcess())))(ctypes.c_void_p(kernel32.CreateJobObjectW(None, None))) if kernel32 is not None else None)(ctypes.windll.kernel32) # noqa: E501
88
[end of python/ray/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/ray/__init__.py b/python/ray/__init__.py
--- a/python/ray/__init__.py
+++ b/python/ray/__init__.py
@@ -47,7 +47,7 @@
raise
modin_path = os.path.join(os.path.abspath(os.path.dirname(__file__)), "modin")
-sys.path.insert(0, modin_path)
+sys.path.append(modin_path)
from ray.raylet import ObjectID, _config # noqa: E402
from ray.profiling import profile # noqa: E402
| {"golden_diff": "diff --git a/python/ray/__init__.py b/python/ray/__init__.py\n--- a/python/ray/__init__.py\n+++ b/python/ray/__init__.py\n@@ -47,7 +47,7 @@\n raise\n \n modin_path = os.path.join(os.path.abspath(os.path.dirname(__file__)), \"modin\")\n-sys.path.insert(0, modin_path)\n+sys.path.append(modin_path)\n \n from ray.raylet import ObjectID, _config # noqa: E402\n from ray.profiling import profile # noqa: E402\n", "issue": "[modin] Importing Modin before Ray can sometimes cause ImportError\n\r\n### Describe the problem\r\n<!-- Describe the problem clearly here. -->\r\nWhen running Modin with Ray installed from source, I am sometimes running into `ImportError` and `ModuleNotFoundError` which is occurring when I am running a modified version of Modin. This forces me to modify Ray's source such that it does not try to use the Modin that is bundled with Ray.\r\n\r\nI will work on a solution for this.\r\n\r\n### Source code / logs\r\n\r\n`import modin.pandas as pd`\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/ray/python/ray/function_manager.py\", line 165, in fetch_and_register_remote_function\r\n function = pickle.loads(serialized_function)\r\nModuleNotFoundError: No module named 'modin.data_management.utils'\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport sys\n\nif \"pyarrow\" in sys.modules:\n raise ImportError(\"Ray must be imported before pyarrow because Ray \"\n \"requires a specific version of pyarrow (which is \"\n \"packaged along with Ray).\")\n\n# Add the directory containing pyarrow to the Python path so that we find the\n# pyarrow version packaged with ray and not a pre-existing pyarrow.\npyarrow_path = os.path.join(\n os.path.abspath(os.path.dirname(__file__)), \"pyarrow_files\")\nsys.path.insert(0, pyarrow_path)\n\n# See https://github.com/ray-project/ray/issues/131.\nhelpful_message = \"\"\"\n\nIf you are using Anaconda, try fixing this problem by running:\n\n conda install libgcc\n\"\"\"\n\ntry:\n import pyarrow # noqa: F401\nexcept ImportError as e:\n if ((hasattr(e, \"msg\") and isinstance(e.msg, str)\n and (\"libstdc++\" in e.msg or \"CXX\" in e.msg))):\n # This code path should be taken with Python 3.\n e.msg += helpful_message\n elif (hasattr(e, \"message\") and isinstance(e.message, str)\n and (\"libstdc++\" in e.message or \"CXX\" in e.message)):\n # This code path should be taken with Python 2.\n condition = (hasattr(e, \"args\") and isinstance(e.args, tuple)\n and len(e.args) == 1 and isinstance(e.args[0], str))\n if condition:\n e.args = (e.args[0] + helpful_message, )\n else:\n if not hasattr(e, \"args\"):\n e.args = ()\n elif not isinstance(e.args, tuple):\n e.args = (e.args, )\n e.args += (helpful_message, )\n raise\n\nmodin_path = os.path.join(os.path.abspath(os.path.dirname(__file__)), \"modin\")\nsys.path.insert(0, modin_path)\n\nfrom ray.raylet import ObjectID, _config # noqa: E402\nfrom ray.profiling import profile # noqa: E402\nfrom ray.worker import (error_info, init, connect, disconnect, get, put, wait,\n remote, get_gpu_ids, get_resource_ids, get_webui_url,\n register_custom_serializer, shutdown,\n is_initialized) # noqa: E402\nfrom ray.worker import (SCRIPT_MODE, WORKER_MODE, LOCAL_MODE,\n PYTHON_MODE) # noqa: E402\nfrom ray.worker import global_state # noqa: E402\nimport ray.internal # noqa: E402\n# We import ray.actor because some code is run in actor.py which initializes\n# some functions in the worker.\nimport ray.actor # noqa: F401\nfrom ray.actor import method # noqa: E402\n\n# Ray version string.\n__version__ = \"0.6.0\"\n\n__all__ = [\n \"error_info\", \"init\", \"connect\", \"disconnect\", \"get\", \"put\", \"wait\",\n \"remote\", \"profile\", \"actor\", \"method\", \"get_gpu_ids\", \"get_resource_ids\",\n \"get_webui_url\", \"register_custom_serializer\", \"shutdown\",\n \"is_initialized\", \"SCRIPT_MODE\", \"WORKER_MODE\", \"LOCAL_MODE\",\n \"PYTHON_MODE\", \"global_state\", \"ObjectID\", \"_config\", \"__version__\",\n \"internal\"\n]\n\nimport ctypes # noqa: E402\n# Windows only\nif hasattr(ctypes, \"windll\"):\n # Makes sure that all child processes die when we die. Also makes sure that\n # fatal crashes result in process termination rather than an error dialog\n # (the latter is annoying since we have a lot of processes). This is done\n # by associating all child processes with a \"job\" object that imposes this\n # behavior.\n (lambda kernel32: (lambda job: (lambda n: kernel32.SetInformationJobObject(job, 9, \"\\0\" * 17 + chr(0x8 | 0x4 | 0x20) + \"\\0\" * (n - 18), n))(0x90 if ctypes.sizeof(ctypes.c_void_p) > ctypes.sizeof(ctypes.c_int) else 0x70) and kernel32.AssignProcessToJobObject(job, ctypes.c_void_p(kernel32.GetCurrentProcess())))(ctypes.c_void_p(kernel32.CreateJobObjectW(None, None))) if kernel32 is not None else None)(ctypes.windll.kernel32) # noqa: E501\n", "path": "python/ray/__init__.py"}]} | 1,918 | 132 |
gh_patches_debug_35781 | rasdani/github-patches | git_diff | sunpy__sunpy-5114 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update "Masking out the solar disk" example to use maputils function
Example: https://docs.sunpy.org/en/stable/generated/gallery/computer_vision_techniques/mask_disk.html
Update to use `sunpy.map.coordinate_is_on_solar_disk()`
</issue>
<code>
[start of examples/computer_vision_techniques/mask_disk.py]
1 """
2 ==========================
3 Masking out the solar disk
4 ==========================
5
6 How to mask out all emission from the solar disk.
7 """
8 import matplotlib.pyplot as plt
9 import numpy as np
10 import numpy.ma as ma
11
12 import sunpy.map
13 from sunpy.data.sample import AIA_171_IMAGE
14 from sunpy.map.maputils import all_coordinates_from_map
15
16 ###############################################################################
17 # We start with the sample data
18 aia = sunpy.map.Map(AIA_171_IMAGE)
19
20 ###############################################################################
21 # A utility function gives us access to the helioprojective coordinate of each
22 # pixels. We can use that to create a new array which
23 # contains the normalized radial position for each pixel.
24 hpc_coords = all_coordinates_from_map(aia)
25 r = np.sqrt(hpc_coords.Tx ** 2 + hpc_coords.Ty ** 2) / aia.rsun_obs
26
27 ###############################################################################
28 # With this information, we create a mask where all values which are less then
29 # the solar radius are masked. We also make a slight change to the colormap
30 # so that masked values are shown as black instead of the default white.
31 mask = ma.masked_less_equal(r, 1)
32 palette = aia.cmap
33 palette.set_bad('black')
34
35 ###############################################################################
36 # Finally we create a new map with our new mask.
37 scaled_map = sunpy.map.Map(aia.data, aia.meta, mask=mask.mask)
38
39 ###############################################################################
40 # Let's plot the results using our modified colormap
41 fig = plt.figure()
42 plt.subplot(projection=scaled_map)
43 scaled_map.plot(cmap=palette)
44 scaled_map.draw_limb()
45 plt.show()
46
[end of examples/computer_vision_techniques/mask_disk.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/computer_vision_techniques/mask_disk.py b/examples/computer_vision_techniques/mask_disk.py
--- a/examples/computer_vision_techniques/mask_disk.py
+++ b/examples/computer_vision_techniques/mask_disk.py
@@ -6,12 +6,10 @@
How to mask out all emission from the solar disk.
"""
import matplotlib.pyplot as plt
-import numpy as np
-import numpy.ma as ma
import sunpy.map
from sunpy.data.sample import AIA_171_IMAGE
-from sunpy.map.maputils import all_coordinates_from_map
+from sunpy.map.maputils import all_coordinates_from_map, coordinate_is_on_solar_disk
###############################################################################
# We start with the sample data
@@ -19,22 +17,22 @@
###############################################################################
# A utility function gives us access to the helioprojective coordinate of each
-# pixels. We can use that to create a new array which
-# contains the normalized radial position for each pixel.
+# pixels. We can use that to create a new array of all the coordinates
+# that are on the solar disk.
hpc_coords = all_coordinates_from_map(aia)
-r = np.sqrt(hpc_coords.Tx ** 2 + hpc_coords.Ty ** 2) / aia.rsun_obs
###############################################################################
-# With this information, we create a mask where all values which are less then
-# the solar radius are masked. We also make a slight change to the colormap
-# so that masked values are shown as black instead of the default white.
-mask = ma.masked_less_equal(r, 1)
+# Now, we can create a mask from the coordinates by using another utility
+# function that gives us a mask that has `True` for those coordinates that are
+# on the solar disk. We also make a slight change to the colormap so that
+# masked values are shown as black instead of the default white.
+mask = coordinate_is_on_solar_disk(hpc_coords)
palette = aia.cmap
palette.set_bad('black')
###############################################################################
# Finally we create a new map with our new mask.
-scaled_map = sunpy.map.Map(aia.data, aia.meta, mask=mask.mask)
+scaled_map = sunpy.map.Map(aia.data, aia.meta, mask=mask)
###############################################################################
# Let's plot the results using our modified colormap
| {"golden_diff": "diff --git a/examples/computer_vision_techniques/mask_disk.py b/examples/computer_vision_techniques/mask_disk.py\n--- a/examples/computer_vision_techniques/mask_disk.py\n+++ b/examples/computer_vision_techniques/mask_disk.py\n@@ -6,12 +6,10 @@\n How to mask out all emission from the solar disk.\n \"\"\"\n import matplotlib.pyplot as plt\n-import numpy as np\n-import numpy.ma as ma\n \n import sunpy.map\n from sunpy.data.sample import AIA_171_IMAGE\n-from sunpy.map.maputils import all_coordinates_from_map\n+from sunpy.map.maputils import all_coordinates_from_map, coordinate_is_on_solar_disk\n \n ###############################################################################\n # We start with the sample data\n@@ -19,22 +17,22 @@\n \n ###############################################################################\n # A utility function gives us access to the helioprojective coordinate of each\n-# pixels. We can use that to create a new array which\n-# contains the normalized radial position for each pixel.\n+# pixels. We can use that to create a new array of all the coordinates\n+# that are on the solar disk.\n hpc_coords = all_coordinates_from_map(aia)\n-r = np.sqrt(hpc_coords.Tx ** 2 + hpc_coords.Ty ** 2) / aia.rsun_obs\n \n ###############################################################################\n-# With this information, we create a mask where all values which are less then\n-# the solar radius are masked. We also make a slight change to the colormap\n-# so that masked values are shown as black instead of the default white.\n-mask = ma.masked_less_equal(r, 1)\n+# Now, we can create a mask from the coordinates by using another utility\n+# function that gives us a mask that has `True` for those coordinates that are\n+# on the solar disk. We also make a slight change to the colormap so that\n+# masked values are shown as black instead of the default white.\n+mask = coordinate_is_on_solar_disk(hpc_coords)\n palette = aia.cmap\n palette.set_bad('black')\n \n ###############################################################################\n # Finally we create a new map with our new mask.\n-scaled_map = sunpy.map.Map(aia.data, aia.meta, mask=mask.mask)\n+scaled_map = sunpy.map.Map(aia.data, aia.meta, mask=mask)\n \n ###############################################################################\n # Let's plot the results using our modified colormap\n", "issue": "Update \"Masking out the solar disk\" example to use maputils function\nExample: https://docs.sunpy.org/en/stable/generated/gallery/computer_vision_techniques/mask_disk.html\r\n\r\nUpdate to use `sunpy.map.coordinate_is_on_solar_disk()`\n", "before_files": [{"content": "\"\"\"\n==========================\nMasking out the solar disk\n==========================\n\nHow to mask out all emission from the solar disk.\n\"\"\"\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport numpy.ma as ma\n\nimport sunpy.map\nfrom sunpy.data.sample import AIA_171_IMAGE\nfrom sunpy.map.maputils import all_coordinates_from_map\n\n###############################################################################\n# We start with the sample data\naia = sunpy.map.Map(AIA_171_IMAGE)\n\n###############################################################################\n# A utility function gives us access to the helioprojective coordinate of each\n# pixels. We can use that to create a new array which\n# contains the normalized radial position for each pixel.\nhpc_coords = all_coordinates_from_map(aia)\nr = np.sqrt(hpc_coords.Tx ** 2 + hpc_coords.Ty ** 2) / aia.rsun_obs\n\n###############################################################################\n# With this information, we create a mask where all values which are less then\n# the solar radius are masked. We also make a slight change to the colormap\n# so that masked values are shown as black instead of the default white.\nmask = ma.masked_less_equal(r, 1)\npalette = aia.cmap\npalette.set_bad('black')\n\n###############################################################################\n# Finally we create a new map with our new mask.\nscaled_map = sunpy.map.Map(aia.data, aia.meta, mask=mask.mask)\n\n###############################################################################\n# Let's plot the results using our modified colormap\nfig = plt.figure()\nplt.subplot(projection=scaled_map)\nscaled_map.plot(cmap=palette)\nscaled_map.draw_limb()\nplt.show()\n", "path": "examples/computer_vision_techniques/mask_disk.py"}]} | 1,028 | 514 |
gh_patches_debug_7803 | rasdani/github-patches | git_diff | weecology__retriever-712 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add Python 3 to setup.py
We need to note in the setup.py that Python 3 is supported.
</issue>
<code>
[start of setup.py]
1 """Use the following command to install retriever: python setup.py install"""
2 from __future__ import absolute_import
3
4 from setuptools import setup
5 from pkg_resources import parse_version
6 import platform
7
8
9 current_platform = platform.system().lower()
10 extra_includes = []
11 if current_platform == "darwin":
12 try:
13 import py2app
14 except ImportError:
15 pass
16 extra_includes = []
17 elif current_platform == "windows":
18 try:
19 import py2exe
20 except ImportError:
21 pass
22 import sys
23 extra_includes = ['pyodbc', 'inspect']
24 sys.path.append(
25 "C:\\Windows\\winsxs\\x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_bcb86ed6ac711f91")
26
27 __version__ = 'v2.0.dev'
28 with open("_version.py", "w") as version_file:
29 version_file.write("__version__ = " + "'" + __version__ + "'\n")
30 version_file.close()
31
32
33 def clean_version(v):
34 return parse_version(v).__repr__().lstrip("<Version('").rstrip("')>")
35
36 packages = [
37 'retriever.lib',
38 'retriever.engines',
39 'retriever',
40 ]
41
42 includes = [
43 'xlrd',
44 'future'
45 'pymysql',
46 'psycopg2',
47 'sqlite3',
48 ] + extra_includes
49
50 excludes = [
51 'pyreadline',
52 'doctest',
53 'optparse',
54 'getopt',
55 'pickle',
56 'calendar',
57 'pdb',
58 'inspect',
59 'email',
60 'pywin', 'pywin.debugger',
61 'pywin.debugger.dbgcon',
62 'pywin.dialogs', 'pywin.dialogs.list',
63 'Tkconstants', 'Tkinter', 'tcl',
64 ]
65
66 setup(name='retriever',
67 version=clean_version(__version__),
68 description='Data Retriever',
69 author='Ben Morris, Ethan White, Henry Senyondo',
70 author_email='[email protected]',
71 url='https://github.com/weecology/retriever',
72 classifiers=['Intended Audience :: Science/Research',
73 'License :: OSI Approved :: MIT License',
74 'Programming Language :: Python',
75 'Programming Language :: Python :: 2', ],
76 packages=packages,
77 package_dir={
78 'retriever': ''
79 },
80 entry_points={
81 'console_scripts': [
82 'retriever = retriever.__main__:main',
83 ],
84 },
85 install_requires=[
86 'xlrd',
87 'future'
88 ],
89
90 # py2exe flags
91 console=[{'script': "__main__.py",
92 'dest_base': "retriever",
93 'icon_resources': [(1, 'icon.ico')]
94 }],
95 zipfile=None,
96
97 # py2app flags
98 app=['__main__.py'],
99 data_files=[('', ['CITATION'])],
100 setup_requires=['py2app'] if current_platform == 'darwin' else [],
101
102 # options
103 # optimize is set to 1 of py2app to avoid errors with pymysql
104 # bundle_files = 1 or 2 was causing failed builds so we moved
105 # to bundle_files = 3 and Inno Setup
106 options={'py2exe': {'bundle_files': 3,
107 'compressed': 2,
108 'optimize': 1,
109 'packages': packages,
110 'includes': includes,
111 'excludes': excludes,
112 },
113 'py2app': {'packages': ['retriever'],
114 'includes': includes,
115 'site_packages': True,
116 'resources': [],
117 'optimize': 1,
118 'argv_emulation': True,
119 'no_chdir': True,
120 'iconfile': 'osx_icon.icns',
121 },
122 },
123 )
124
125
126 try:
127 from retriever.compile import compile
128 from retriever.lib.repository import check_for_updates
129 compile()
130 check_for_updates()
131 except:
132 pass
133
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -72,7 +72,8 @@
classifiers=['Intended Audience :: Science/Research',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python',
- 'Programming Language :: Python :: 2', ],
+ 'Programming Language :: Python :: 2',
+ 'Programming Language :: Python :: 3',],
packages=packages,
package_dir={
'retriever': ''
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -72,7 +72,8 @@\n classifiers=['Intended Audience :: Science/Research',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python',\n- 'Programming Language :: Python :: 2', ],\n+ 'Programming Language :: Python :: 2',\n+ 'Programming Language :: Python :: 3',],\n packages=packages,\n package_dir={\n 'retriever': ''\n", "issue": "Add Python 3 to setup.py\nWe need to note in the setup.py that Python 3 is supported.\n", "before_files": [{"content": "\"\"\"Use the following command to install retriever: python setup.py install\"\"\"\nfrom __future__ import absolute_import\n\nfrom setuptools import setup\nfrom pkg_resources import parse_version\nimport platform\n\n\ncurrent_platform = platform.system().lower()\nextra_includes = []\nif current_platform == \"darwin\":\n try:\n import py2app\n except ImportError:\n pass\n extra_includes = []\nelif current_platform == \"windows\":\n try:\n import py2exe\n except ImportError:\n pass\n import sys\n extra_includes = ['pyodbc', 'inspect']\n sys.path.append(\n \"C:\\\\Windows\\\\winsxs\\\\x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_bcb86ed6ac711f91\")\n\n__version__ = 'v2.0.dev'\nwith open(\"_version.py\", \"w\") as version_file:\n version_file.write(\"__version__ = \" + \"'\" + __version__ + \"'\\n\")\n version_file.close()\n\n\ndef clean_version(v):\n return parse_version(v).__repr__().lstrip(\"<Version('\").rstrip(\"')>\")\n\npackages = [\n 'retriever.lib',\n 'retriever.engines',\n 'retriever',\n]\n\nincludes = [\n 'xlrd',\n 'future'\n 'pymysql',\n 'psycopg2',\n 'sqlite3',\n] + extra_includes\n\nexcludes = [\n 'pyreadline',\n 'doctest',\n 'optparse',\n 'getopt',\n 'pickle',\n 'calendar',\n 'pdb',\n 'inspect',\n 'email',\n 'pywin', 'pywin.debugger',\n 'pywin.debugger.dbgcon',\n 'pywin.dialogs', 'pywin.dialogs.list',\n 'Tkconstants', 'Tkinter', 'tcl',\n]\n\nsetup(name='retriever',\n version=clean_version(__version__),\n description='Data Retriever',\n author='Ben Morris, Ethan White, Henry Senyondo',\n author_email='[email protected]',\n url='https://github.com/weecology/retriever',\n classifiers=['Intended Audience :: Science/Research',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2', ],\n packages=packages,\n package_dir={\n 'retriever': ''\n },\n entry_points={\n 'console_scripts': [\n 'retriever = retriever.__main__:main',\n ],\n },\n install_requires=[\n 'xlrd',\n 'future'\n ],\n\n # py2exe flags\n console=[{'script': \"__main__.py\",\n 'dest_base': \"retriever\",\n 'icon_resources': [(1, 'icon.ico')]\n }],\n zipfile=None,\n\n # py2app flags\n app=['__main__.py'],\n data_files=[('', ['CITATION'])],\n setup_requires=['py2app'] if current_platform == 'darwin' else [],\n\n # options\n # optimize is set to 1 of py2app to avoid errors with pymysql\n # bundle_files = 1 or 2 was causing failed builds so we moved\n # to bundle_files = 3 and Inno Setup\n options={'py2exe': {'bundle_files': 3,\n 'compressed': 2,\n 'optimize': 1,\n 'packages': packages,\n 'includes': includes,\n 'excludes': excludes,\n },\n 'py2app': {'packages': ['retriever'],\n 'includes': includes,\n 'site_packages': True,\n 'resources': [],\n 'optimize': 1,\n 'argv_emulation': True,\n 'no_chdir': True,\n 'iconfile': 'osx_icon.icns',\n },\n },\n )\n\n\ntry:\n from retriever.compile import compile\n from retriever.lib.repository import check_for_updates\n compile()\n check_for_updates()\nexcept:\n pass\n", "path": "setup.py"}]} | 1,733 | 112 |
gh_patches_debug_38826 | rasdani/github-patches | git_diff | sparcs-kaist__otlplus-979 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[CHORE] 졸업플래너 신규 model의 serialize결과 캐싱
## 동기
**Is your feature request related to a problem? Please describe.**
OTL의 주요 모델에는 캐시가 구현되어 있으나 졸업플래너에서 새로 생성된 model은 아직 캐싱이 적용되어 있지 않습니다.
베타 출시 때는 우선 임시로 그대로 출시하였지만 캐시 도입이 필요합니다.
특히 트랙 부분은 페이지 접속 시에 로딩되고 한번에 많은 양이 로드되기 때문에 성능이 상당히 저하될 여지가 있습니다.
## 설명
**Describe the solution you'd like.**
A clear and concise description of what you want to happen.
## 스크린샷
(OPTIONAL) If applicable, add screenshots to help explain your feature request.
## 개발 환경
- OS: [e.g. macOS]
- ```python --version```:
- ```node --version```:
## 테스트 환경
(OPTIONAL)
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Web Version: [e.g. 1.1.0]
## 추가 정보
(OPTIONAL) Add any other context or screenshots about the feature request here.
</issue>
<code>
[start of apps/graduation/models.py]
1 from django.db import models
2
3 from apps.subject.models import Department
4
5
6 UNBOUND_START_YEAR = 2000
7 UNBOUND_END_YEAR = 2100
8
9
10 class GeneralTrack(models.Model):
11 start_year = models.IntegerField(db_index=True)
12 end_year = models.IntegerField(db_index=True)
13 is_foreign = models.BooleanField(db_index=True)
14
15 total_credit = models.IntegerField()
16 total_au = models.IntegerField()
17 basic_required = models.IntegerField()
18 basic_elective = models.IntegerField()
19 thesis_study = models.IntegerField()
20 thesis_study_doublemajor = models.IntegerField()
21 general_required_credit = models.IntegerField()
22 general_required_au = models.IntegerField()
23 humanities = models.IntegerField()
24 humanities_doublemajor = models.IntegerField()
25
26 class Meta:
27 unique_together = [["start_year", "is_foreign"], ["end_year", "is_foreign"]]
28
29 def to_json(self):
30 result = {
31 "id": self.id,
32 "start_year": self.start_year,
33 "end_year": self.end_year,
34 "is_foreign": self.is_foreign,
35 "total_credit": self.total_credit,
36 "total_au": self.total_au,
37 "basic_required": self.basic_required,
38 "basic_elective": self.basic_elective,
39 "thesis_study": self.thesis_study,
40 "thesis_study_doublemajor": self.thesis_study_doublemajor,
41 "general_required_credit": self.general_required_credit,
42 "general_required_au": self.general_required_au,
43 "humanities": self.humanities,
44 "humanities_doublemajor": self.humanities_doublemajor,
45 }
46
47 return result
48
49
50 class MajorTrack(models.Model):
51 start_year = models.IntegerField(db_index=True)
52 end_year = models.IntegerField(db_index=True)
53 department = models.ForeignKey(Department,
54 on_delete=models.CASCADE, db_index=True)
55
56 basic_elective_doublemajor = models.IntegerField()
57 major_required = models.IntegerField()
58 major_elective = models.IntegerField()
59
60 class Meta:
61 unique_together = [["start_year", "department"], ["end_year", "department"]]
62
63 def to_json(self):
64 result = {
65 "id": self.id,
66 "start_year": self.start_year,
67 "end_year": self.end_year,
68 "department": self.department.to_json(nested=False),
69 "basic_elective_doublemajor": self.basic_elective_doublemajor,
70 "major_required": self.major_required,
71 "major_elective": self.major_elective,
72 }
73
74 return result
75
76
77 class AdditionalTrack(models.Model):
78 ADDITIONAL_TYPE_CHOICES = [
79 ('DOUBLE', 'DOUBLE'),
80 ('MINOR', 'MINOR'),
81 ('ADVANCED', 'ADVANCED'),
82 ('INTERDISCIPLINARY', 'INTERDISCIPLINARY'),
83 ]
84
85 start_year = models.IntegerField(db_index=True)
86 end_year = models.IntegerField(db_index=True)
87 type = models.CharField(db_index=True, max_length=32, choices=ADDITIONAL_TYPE_CHOICES)
88 department = models.ForeignKey(Department,
89 null=True, blank=True,
90 on_delete=models.CASCADE, db_index=True)
91
92 major_required = models.IntegerField()
93 major_elective = models.IntegerField()
94
95 class Meta:
96 unique_together = [["start_year", "type", "department"], ["end_year", "type", "department"]]
97
98 def to_json(self):
99 result = {
100 "id": self.id,
101 "start_year": self.start_year,
102 "end_year": self.end_year,
103 "type": self.type,
104 "department": self.department.to_json(nested=False) if self.department else None,
105 "major_required": self.major_required,
106 "major_elective": self.major_elective,
107 }
108
109 return result
110
[end of apps/graduation/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/apps/graduation/models.py b/apps/graduation/models.py
--- a/apps/graduation/models.py
+++ b/apps/graduation/models.py
@@ -1,4 +1,5 @@
from django.db import models
+from django.core.cache import cache
from apps.subject.models import Department
@@ -26,7 +27,15 @@
class Meta:
unique_together = [["start_year", "is_foreign"], ["end_year", "is_foreign"]]
+ def get_cache_key(self):
+ return "generaltrack:%d-%d-%s" % (self.start_year, self.end_year, self.is_foreign)
+
def to_json(self):
+ cache_id = self.get_cache_key()
+ result_cached = cache.get(cache_id)
+ if result_cached is not None:
+ return result_cached
+
result = {
"id": self.id,
"start_year": self.start_year,
@@ -44,6 +53,8 @@
"humanities_doublemajor": self.humanities_doublemajor,
}
+ cache.set(cache_id, result, 60 * 60)
+
return result
@@ -60,7 +71,15 @@
class Meta:
unique_together = [["start_year", "department"], ["end_year", "department"]]
+ def get_cache_key(self):
+ return "majortrack:%d-%d-%d" % (self.start_year, self.end_year, self.department.id)
+
def to_json(self):
+ cache_id = self.get_cache_key()
+ result_cached = cache.get(cache_id)
+ if result_cached is not None:
+ return result_cached
+
result = {
"id": self.id,
"start_year": self.start_year,
@@ -71,6 +90,8 @@
"major_elective": self.major_elective,
}
+ cache.set(cache_id, result, 60 * 60)
+
return result
@@ -95,7 +116,15 @@
class Meta:
unique_together = [["start_year", "type", "department"], ["end_year", "type", "department"]]
+ def get_cache_key(self):
+ return "additionaltrack:%d-%d-%s-%d" % (self.start_year, self.end_year, self.type, self.department.id if self.department else 0)
+
def to_json(self):
+ cache_id = self.get_cache_key()
+ result_cached = cache.get(cache_id)
+ if result_cached is not None:
+ return result_cached
+
result = {
"id": self.id,
"start_year": self.start_year,
@@ -106,4 +135,6 @@
"major_elective": self.major_elective,
}
+ cache.set(cache_id, result, 60 * 60)
+
return result
| {"golden_diff": "diff --git a/apps/graduation/models.py b/apps/graduation/models.py\n--- a/apps/graduation/models.py\n+++ b/apps/graduation/models.py\n@@ -1,4 +1,5 @@\n from django.db import models\n+from django.core.cache import cache\n \n from apps.subject.models import Department\n \n@@ -26,7 +27,15 @@\n class Meta:\n unique_together = [[\"start_year\", \"is_foreign\"], [\"end_year\", \"is_foreign\"]]\n \n+ def get_cache_key(self):\n+ return \"generaltrack:%d-%d-%s\" % (self.start_year, self.end_year, self.is_foreign)\n+\n def to_json(self):\n+ cache_id = self.get_cache_key()\n+ result_cached = cache.get(cache_id)\n+ if result_cached is not None:\n+ return result_cached\n+\n result = {\n \"id\": self.id,\n \"start_year\": self.start_year,\n@@ -44,6 +53,8 @@\n \"humanities_doublemajor\": self.humanities_doublemajor,\n }\n \n+ cache.set(cache_id, result, 60 * 60)\n+\n return result\n \n \n@@ -60,7 +71,15 @@\n class Meta:\n unique_together = [[\"start_year\", \"department\"], [\"end_year\", \"department\"]]\n \n+ def get_cache_key(self):\n+ return \"majortrack:%d-%d-%d\" % (self.start_year, self.end_year, self.department.id)\n+\n def to_json(self):\n+ cache_id = self.get_cache_key()\n+ result_cached = cache.get(cache_id)\n+ if result_cached is not None:\n+ return result_cached\n+\n result = {\n \"id\": self.id,\n \"start_year\": self.start_year,\n@@ -71,6 +90,8 @@\n \"major_elective\": self.major_elective,\n }\n \n+ cache.set(cache_id, result, 60 * 60)\n+\n return result\n \n \n@@ -95,7 +116,15 @@\n class Meta:\n unique_together = [[\"start_year\", \"type\", \"department\"], [\"end_year\", \"type\", \"department\"]]\n \n+ def get_cache_key(self):\n+ return \"additionaltrack:%d-%d-%s-%d\" % (self.start_year, self.end_year, self.type, self.department.id if self.department else 0)\n+\n def to_json(self):\n+ cache_id = self.get_cache_key()\n+ result_cached = cache.get(cache_id)\n+ if result_cached is not None:\n+ return result_cached\n+\n result = {\n \"id\": self.id,\n \"start_year\": self.start_year,\n@@ -106,4 +135,6 @@\n \"major_elective\": self.major_elective,\n }\n \n+ cache.set(cache_id, result, 60 * 60)\n+\n return result\n", "issue": "[CHORE] \uc878\uc5c5\ud50c\ub798\ub108 \uc2e0\uaddc model\uc758 serialize\uacb0\uacfc \uce90\uc2f1\n## \ub3d9\uae30\r\n\r\n**Is your feature request related to a problem? Please describe.**\r\n\r\nOTL\uc758 \uc8fc\uc694 \ubaa8\ub378\uc5d0\ub294 \uce90\uc2dc\uac00 \uad6c\ud604\ub418\uc5b4 \uc788\uc73c\ub098 \uc878\uc5c5\ud50c\ub798\ub108\uc5d0\uc11c \uc0c8\ub85c \uc0dd\uc131\ub41c model\uc740 \uc544\uc9c1 \uce90\uc2f1\uc774 \uc801\uc6a9\ub418\uc5b4 \uc788\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4.\r\n\ubca0\ud0c0 \ucd9c\uc2dc \ub54c\ub294 \uc6b0\uc120 \uc784\uc2dc\ub85c \uadf8\ub300\ub85c \ucd9c\uc2dc\ud558\uc600\uc9c0\ub9cc \uce90\uc2dc \ub3c4\uc785\uc774 \ud544\uc694\ud569\ub2c8\ub2e4.\r\n\ud2b9\ud788 \ud2b8\ub799 \ubd80\ubd84\uc740 \ud398\uc774\uc9c0 \uc811\uc18d \uc2dc\uc5d0 \ub85c\ub529\ub418\uace0 \ud55c\ubc88\uc5d0 \ub9ce\uc740 \uc591\uc774 \ub85c\ub4dc\ub418\uae30 \ub54c\ubb38\uc5d0 \uc131\ub2a5\uc774 \uc0c1\ub2f9\ud788 \uc800\ud558\ub420 \uc5ec\uc9c0\uac00 \uc788\uc2b5\ub2c8\ub2e4.\r\n\r\n## \uc124\uba85\r\n\r\n**Describe the solution you'd like.**\r\n\r\nA clear and concise description of what you want to happen.\r\n\r\n## \uc2a4\ud06c\ub9b0\uc0f7\r\n\r\n(OPTIONAL) If applicable, add screenshots to help explain your feature request.\r\n\r\n## \uac1c\ubc1c \ud658\uacbd\r\n\r\n- OS: [e.g. macOS]\r\n- ```python --version```:\r\n- ```node --version```:\r\n\r\n## \ud14c\uc2a4\ud2b8 \ud658\uacbd\r\n\r\n(OPTIONAL)\r\n\r\n- Device: [e.g. iPhone6]\r\n- OS: [e.g. iOS8.1]\r\n- Web Version: [e.g. 1.1.0]\r\n\r\n## \ucd94\uac00 \uc815\ubcf4\r\n\r\n(OPTIONAL) Add any other context or screenshots about the feature request here.\r\n\n", "before_files": [{"content": "from django.db import models\n\nfrom apps.subject.models import Department\n\n\nUNBOUND_START_YEAR = 2000\nUNBOUND_END_YEAR = 2100\n\n\nclass GeneralTrack(models.Model):\n start_year = models.IntegerField(db_index=True)\n end_year = models.IntegerField(db_index=True)\n is_foreign = models.BooleanField(db_index=True)\n\n total_credit = models.IntegerField()\n total_au = models.IntegerField()\n basic_required = models.IntegerField()\n basic_elective = models.IntegerField()\n thesis_study = models.IntegerField()\n thesis_study_doublemajor = models.IntegerField()\n general_required_credit = models.IntegerField()\n general_required_au = models.IntegerField()\n humanities = models.IntegerField()\n humanities_doublemajor = models.IntegerField()\n\n class Meta:\n unique_together = [[\"start_year\", \"is_foreign\"], [\"end_year\", \"is_foreign\"]]\n\n def to_json(self):\n result = {\n \"id\": self.id,\n \"start_year\": self.start_year,\n \"end_year\": self.end_year,\n \"is_foreign\": self.is_foreign,\n \"total_credit\": self.total_credit,\n \"total_au\": self.total_au,\n \"basic_required\": self.basic_required,\n \"basic_elective\": self.basic_elective,\n \"thesis_study\": self.thesis_study,\n \"thesis_study_doublemajor\": self.thesis_study_doublemajor,\n \"general_required_credit\": self.general_required_credit,\n \"general_required_au\": self.general_required_au,\n \"humanities\": self.humanities,\n \"humanities_doublemajor\": self.humanities_doublemajor,\n }\n\n return result\n\n\nclass MajorTrack(models.Model):\n start_year = models.IntegerField(db_index=True)\n end_year = models.IntegerField(db_index=True)\n department = models.ForeignKey(Department,\n on_delete=models.CASCADE, db_index=True)\n\n basic_elective_doublemajor = models.IntegerField()\n major_required = models.IntegerField()\n major_elective = models.IntegerField()\n\n class Meta:\n unique_together = [[\"start_year\", \"department\"], [\"end_year\", \"department\"]]\n\n def to_json(self):\n result = {\n \"id\": self.id,\n \"start_year\": self.start_year,\n \"end_year\": self.end_year,\n \"department\": self.department.to_json(nested=False),\n \"basic_elective_doublemajor\": self.basic_elective_doublemajor,\n \"major_required\": self.major_required,\n \"major_elective\": self.major_elective,\n }\n\n return result\n\n\nclass AdditionalTrack(models.Model):\n ADDITIONAL_TYPE_CHOICES = [\n ('DOUBLE', 'DOUBLE'),\n ('MINOR', 'MINOR'),\n ('ADVANCED', 'ADVANCED'),\n ('INTERDISCIPLINARY', 'INTERDISCIPLINARY'),\n ]\n\n start_year = models.IntegerField(db_index=True)\n end_year = models.IntegerField(db_index=True)\n type = models.CharField(db_index=True, max_length=32, choices=ADDITIONAL_TYPE_CHOICES)\n department = models.ForeignKey(Department,\n null=True, blank=True,\n on_delete=models.CASCADE, db_index=True)\n\n major_required = models.IntegerField()\n major_elective = models.IntegerField()\n\n class Meta:\n unique_together = [[\"start_year\", \"type\", \"department\"], [\"end_year\", \"type\", \"department\"]]\n\n def to_json(self):\n result = {\n \"id\": self.id,\n \"start_year\": self.start_year,\n \"end_year\": self.end_year,\n \"type\": self.type,\n \"department\": self.department.to_json(nested=False) if self.department else None,\n \"major_required\": self.major_required,\n \"major_elective\": self.major_elective,\n }\n\n return result\n", "path": "apps/graduation/models.py"}]} | 1,868 | 649 |
gh_patches_debug_4968 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-2649 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dbt clean regression
### Describe the bug
In dbt 0.16.1 `dbt clean` fails without a profile:
```bash
(dbt) dbt$ dbt --version
installed version: 0.16.1
latest version: 0.17.0
Your version of dbt is out of date! You can find instructions for upgrading here:
https://docs.getdbt.com/docs/installation
(dbt) dbt$ dbt clean
Running with dbt=0.16.1
Encountered an error while reading the project:
ERROR: Runtime Error
Could not find profile named 'profile'
Encountered an error:
Runtime Error
Could not run dbt
```
In dbt 0.15.1, `dbt clean` works.
```bash
(dbt) dbt$ dbt --version
installed version: 0.15.1
latest version: 0.17.0
Your version of dbt is out of date! You can find instructions for upgrading here:
https://docs.getdbt.com/docs/installation
(dbt) dbt$ dbt clean
Running with dbt=0.15.1
Checking target/*
Cleaned target/*
Finished cleaning all paths.
```
### Steps To Reproduce
Delete any profile found in `~/.dbt/profile.yml`.
Install 0.16.1:
```bash
pip install dbt==0.16.1
```
Navigate to dbt project:
```
dbt clean
```
Repeat for 0.15.1 to confirm regression.
### Expected behavior
I expected `dbt clean` to work without a profile. This broke some of our automated jobs when we tried to upgrade.
### System information
**Which database are you using dbt with?**
- [ ] postgres
- [X] redshift
- [ ] bigquery
- [ ] snowflake
- [ ] other (specify: ____________)
**The output of `dbt --version`:**
Multiple versions. See above.
**The operating system you're using:**
macOS 10.14.6
**The output of `python --version`:**
```
(dbt) dbt$ python --version
Python 3.7.3
```
### Additional context
Most people probably don't run `dbt clean` without a profile, but it was causing us confusion, so wanted to document it as a breaking change at least.
I also tested this with 0.17.0: same error as 0.16.1.
```
(dbt) dbt$ dbt --version
installed version: 0.17.0
latest version: 0.17.0
Up to date!
Plugins:
- bigquery: 0.17.0
- snowflake: 0.17.0
- redshift: 0.17.0
- postgres: 0.17.0
(dbt) dbt$ dbt clean
Running with dbt=0.17.0
Encountered an error while reading the project:
ERROR: Runtime Error
Could not find profile named 'profile'
Encountered an error:
Runtime Error
Could not run dbt
```
</issue>
<code>
[start of core/dbt/task/clean.py]
1 import os.path
2 import os
3 import shutil
4
5 from dbt.task.base import ConfiguredTask
6 from dbt.logger import GLOBAL_LOGGER as logger
7
8
9 class CleanTask(ConfiguredTask):
10
11 def __is_project_path(self, path):
12 proj_path = os.path.abspath('.')
13 return not os.path.commonprefix(
14 [proj_path, os.path.abspath(path)]
15 ) == proj_path
16
17 def __is_protected_path(self, path):
18 """
19 This function identifies protected paths, so as not to clean them.
20 """
21 abs_path = os.path.abspath(path)
22 protected_paths = self.config.source_paths + \
23 self.config.test_paths + ['.']
24 protected_abs_paths = [os.path.abspath(p) for p in protected_paths]
25 return abs_path in set(protected_abs_paths) or \
26 self.__is_project_path(abs_path)
27
28 def run(self):
29 """
30 This function takes all the paths in the target file
31 and cleans the project paths that are not protected.
32 """
33 for path in self.config.clean_targets:
34 logger.info("Checking {}/*".format(path))
35 if not self.__is_protected_path(path):
36 shutil.rmtree(path, True)
37 logger.info(" Cleaned {}/*".format(path))
38 else:
39 logger.info("ERROR: not cleaning {}/* because it is "
40 "protected".format(path))
41 logger.info("Finished cleaning all paths.")
42
[end of core/dbt/task/clean.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/dbt/task/clean.py b/core/dbt/task/clean.py
--- a/core/dbt/task/clean.py
+++ b/core/dbt/task/clean.py
@@ -2,11 +2,13 @@
import os
import shutil
-from dbt.task.base import ConfiguredTask
+from dbt.task.base import BaseTask
from dbt.logger import GLOBAL_LOGGER as logger
+from dbt.config import UnsetProfileConfig
-class CleanTask(ConfiguredTask):
+class CleanTask(BaseTask):
+ ConfigType = UnsetProfileConfig
def __is_project_path(self, path):
proj_path = os.path.abspath('.')
| {"golden_diff": "diff --git a/core/dbt/task/clean.py b/core/dbt/task/clean.py\n--- a/core/dbt/task/clean.py\n+++ b/core/dbt/task/clean.py\n@@ -2,11 +2,13 @@\n import os\n import shutil\n \n-from dbt.task.base import ConfiguredTask\n+from dbt.task.base import BaseTask\n from dbt.logger import GLOBAL_LOGGER as logger\n+from dbt.config import UnsetProfileConfig\n \n \n-class CleanTask(ConfiguredTask):\n+class CleanTask(BaseTask):\n+ ConfigType = UnsetProfileConfig\n \n def __is_project_path(self, path):\n proj_path = os.path.abspath('.')\n", "issue": "dbt clean regression\n### Describe the bug\r\nIn dbt 0.16.1 `dbt clean` fails without a profile: \r\n\r\n```bash\r\n(dbt) dbt$ dbt --version\r\ninstalled version: 0.16.1\r\n latest version: 0.17.0\r\n\r\nYour version of dbt is out of date! You can find instructions for upgrading here:\r\nhttps://docs.getdbt.com/docs/installation\r\n(dbt) dbt$ dbt clean\r\nRunning with dbt=0.16.1\r\nEncountered an error while reading the project:\r\n ERROR: Runtime Error\r\n Could not find profile named 'profile'\r\nEncountered an error:\r\nRuntime Error\r\n Could not run dbt\r\n```\r\n\r\nIn dbt 0.15.1, `dbt clean` works.\r\n\r\n```bash\r\n(dbt) dbt$ dbt --version\r\ninstalled version: 0.15.1\r\n latest version: 0.17.0\r\n\r\nYour version of dbt is out of date! You can find instructions for upgrading here:\r\nhttps://docs.getdbt.com/docs/installation\r\n(dbt) dbt$ dbt clean\r\nRunning with dbt=0.15.1\r\nChecking target/*\r\n Cleaned target/*\r\nFinished cleaning all paths.\r\n```\r\n\r\n### Steps To Reproduce\r\nDelete any profile found in `~/.dbt/profile.yml`. \r\n\r\nInstall 0.16.1:\r\n```bash\r\npip install dbt==0.16.1\r\n```\r\nNavigate to dbt project:\r\n```\r\ndbt clean\r\n```\r\n\r\nRepeat for 0.15.1 to confirm regression.\r\n\r\n### Expected behavior\r\nI expected `dbt clean` to work without a profile. This broke some of our automated jobs when we tried to upgrade.\r\n\r\n### System information\r\n**Which database are you using dbt with?**\r\n- [ ] postgres\r\n- [X] redshift\r\n- [ ] bigquery\r\n- [ ] snowflake\r\n- [ ] other (specify: ____________)\r\n\r\n\r\n**The output of `dbt --version`:**\r\nMultiple versions. See above.\r\n\r\n**The operating system you're using:**\r\nmacOS 10.14.6\r\n\r\n**The output of `python --version`:**\r\n```\r\n(dbt) dbt$ python --version\r\nPython 3.7.3\r\n```\r\n\r\n### Additional context\r\nMost people probably don't run `dbt clean` without a profile, but it was causing us confusion, so wanted to document it as a breaking change at least.\r\n\r\nI also tested this with 0.17.0: same error as 0.16.1.\r\n\r\n```\r\n(dbt) dbt$ dbt --version\r\ninstalled version: 0.17.0\r\n latest version: 0.17.0\r\n\r\nUp to date!\r\n\r\nPlugins:\r\n - bigquery: 0.17.0\r\n - snowflake: 0.17.0\r\n - redshift: 0.17.0\r\n - postgres: 0.17.0\r\n(dbt) dbt$ dbt clean\r\nRunning with dbt=0.17.0\r\nEncountered an error while reading the project:\r\n ERROR: Runtime Error\r\n Could not find profile named 'profile'\r\nEncountered an error:\r\nRuntime Error\r\n Could not run dbt\r\n```\r\n\n", "before_files": [{"content": "import os.path\nimport os\nimport shutil\n\nfrom dbt.task.base import ConfiguredTask\nfrom dbt.logger import GLOBAL_LOGGER as logger\n\n\nclass CleanTask(ConfiguredTask):\n\n def __is_project_path(self, path):\n proj_path = os.path.abspath('.')\n return not os.path.commonprefix(\n [proj_path, os.path.abspath(path)]\n ) == proj_path\n\n def __is_protected_path(self, path):\n \"\"\"\n This function identifies protected paths, so as not to clean them.\n \"\"\"\n abs_path = os.path.abspath(path)\n protected_paths = self.config.source_paths + \\\n self.config.test_paths + ['.']\n protected_abs_paths = [os.path.abspath(p) for p in protected_paths]\n return abs_path in set(protected_abs_paths) or \\\n self.__is_project_path(abs_path)\n\n def run(self):\n \"\"\"\n This function takes all the paths in the target file\n and cleans the project paths that are not protected.\n \"\"\"\n for path in self.config.clean_targets:\n logger.info(\"Checking {}/*\".format(path))\n if not self.__is_protected_path(path):\n shutil.rmtree(path, True)\n logger.info(\" Cleaned {}/*\".format(path))\n else:\n logger.info(\"ERROR: not cleaning {}/* because it is \"\n \"protected\".format(path))\n logger.info(\"Finished cleaning all paths.\")\n", "path": "core/dbt/task/clean.py"}]} | 1,637 | 143 |
gh_patches_debug_2478 | rasdani/github-patches | git_diff | svthalia__concrexit-1767 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Delete tpay payment if order is modified
### Summary
Right now it is possible to order a pizza, pay it with tpay, change the order to a pizza with a different price, and the payment will not match the order anymore.
### How to test
1. Order a pizza
2. Pay with tpay
3. Change the order
4. The payment should be deleted
5. If the event is over, or the payment is batched, then changing the order should crash
</issue>
<code>
[start of website/pizzas/views.py]
1 """Views provided by the pizzas package."""
2 from django.contrib import messages
3 from django.contrib.auth.decorators import login_required
4 from django.http import Http404
5 from django.shortcuts import get_object_or_404, render, redirect
6 from django.utils.translation import gettext_lazy as _
7 from django.views.decorators.http import require_http_methods
8
9 from payments.services import delete_payment
10 from .models import FoodOrder, FoodEvent, Product
11
12
13 @login_required
14 def index(request):
15 """Overview of user order for a pizza event."""
16 products = Product.available_products.order_by("name")
17 if not request.user.has_perm("pizzas.order_restricted_products"):
18 products = products.exclude(restricted=True)
19 event = FoodEvent.current()
20 try:
21 obj = FoodOrder.objects.get(food_event=event, member=request.member)
22 except FoodOrder.DoesNotExist:
23 obj = None
24 context = {"event": event, "products": products, "order": obj}
25 return render(request, "pizzas/index.html", context)
26
27
28 @require_http_methods(["POST"])
29 def cancel_order(request):
30 """View that cancels a user's order."""
31 if "order" in request.POST:
32 try:
33 order = get_object_or_404(FoodOrder, pk=int(request.POST["order"]))
34 if not order.can_be_changed:
35 messages.error(request, _("You can no longer cancel."))
36 elif order.member == request.member:
37 order.delete()
38 messages.success(request, _("Your order has been cancelled."))
39 except Http404:
40 messages.error(request, _("Your order could not be found."))
41 return redirect("pizzas:index")
42
43
44 @login_required
45 def place_order(request):
46 """View that shows the detail of the current order."""
47 event = FoodEvent.current()
48 if not event:
49 return redirect("pizzas:index")
50
51 try:
52 obj = FoodOrder.objects.get(food_event=event, member=request.member)
53 current_order_locked = not obj.can_be_changed
54 except FoodOrder.DoesNotExist:
55 obj = None
56 current_order_locked = False
57
58 if "product" in request.POST and not current_order_locked:
59 productset = Product.available_products.all()
60 if not request.user.has_perm("pizzas.order_restricted_products"):
61 productset = productset.exclude(restricted=True)
62 try:
63 product = productset.get(pk=int(request.POST["product"]))
64 except Product.DoesNotExist as e:
65 raise Http404("Pizza does not exist") from e
66 if not obj:
67 obj = FoodOrder(food_event=event, member=request.member)
68 obj.product = product
69 if obj.payment:
70 delete_payment(obj.payment)
71 obj.save()
72 return redirect("pizzas:index")
73
[end of website/pizzas/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/pizzas/views.py b/website/pizzas/views.py
--- a/website/pizzas/views.py
+++ b/website/pizzas/views.py
@@ -67,6 +67,6 @@
obj = FoodOrder(food_event=event, member=request.member)
obj.product = product
if obj.payment:
- delete_payment(obj.payment)
+ delete_payment(obj)
obj.save()
return redirect("pizzas:index")
| {"golden_diff": "diff --git a/website/pizzas/views.py b/website/pizzas/views.py\n--- a/website/pizzas/views.py\n+++ b/website/pizzas/views.py\n@@ -67,6 +67,6 @@\n obj = FoodOrder(food_event=event, member=request.member)\n obj.product = product\n if obj.payment:\n- delete_payment(obj.payment)\n+ delete_payment(obj)\n obj.save()\n return redirect(\"pizzas:index\")\n", "issue": "Delete tpay payment if order is modified\n### Summary\r\nRight now it is possible to order a pizza, pay it with tpay, change the order to a pizza with a different price, and the payment will not match the order anymore.\r\n\r\n### How to test\r\n1. Order a pizza\r\n2. Pay with tpay\r\n3. Change the order\r\n4. The payment should be deleted\r\n5. If the event is over, or the payment is batched, then changing the order should crash\n", "before_files": [{"content": "\"\"\"Views provided by the pizzas package.\"\"\"\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404, render, redirect\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views.decorators.http import require_http_methods\n\nfrom payments.services import delete_payment\nfrom .models import FoodOrder, FoodEvent, Product\n\n\n@login_required\ndef index(request):\n \"\"\"Overview of user order for a pizza event.\"\"\"\n products = Product.available_products.order_by(\"name\")\n if not request.user.has_perm(\"pizzas.order_restricted_products\"):\n products = products.exclude(restricted=True)\n event = FoodEvent.current()\n try:\n obj = FoodOrder.objects.get(food_event=event, member=request.member)\n except FoodOrder.DoesNotExist:\n obj = None\n context = {\"event\": event, \"products\": products, \"order\": obj}\n return render(request, \"pizzas/index.html\", context)\n\n\n@require_http_methods([\"POST\"])\ndef cancel_order(request):\n \"\"\"View that cancels a user's order.\"\"\"\n if \"order\" in request.POST:\n try:\n order = get_object_or_404(FoodOrder, pk=int(request.POST[\"order\"]))\n if not order.can_be_changed:\n messages.error(request, _(\"You can no longer cancel.\"))\n elif order.member == request.member:\n order.delete()\n messages.success(request, _(\"Your order has been cancelled.\"))\n except Http404:\n messages.error(request, _(\"Your order could not be found.\"))\n return redirect(\"pizzas:index\")\n\n\n@login_required\ndef place_order(request):\n \"\"\"View that shows the detail of the current order.\"\"\"\n event = FoodEvent.current()\n if not event:\n return redirect(\"pizzas:index\")\n\n try:\n obj = FoodOrder.objects.get(food_event=event, member=request.member)\n current_order_locked = not obj.can_be_changed\n except FoodOrder.DoesNotExist:\n obj = None\n current_order_locked = False\n\n if \"product\" in request.POST and not current_order_locked:\n productset = Product.available_products.all()\n if not request.user.has_perm(\"pizzas.order_restricted_products\"):\n productset = productset.exclude(restricted=True)\n try:\n product = productset.get(pk=int(request.POST[\"product\"]))\n except Product.DoesNotExist as e:\n raise Http404(\"Pizza does not exist\") from e\n if not obj:\n obj = FoodOrder(food_event=event, member=request.member)\n obj.product = product\n if obj.payment:\n delete_payment(obj.payment)\n obj.save()\n return redirect(\"pizzas:index\")\n", "path": "website/pizzas/views.py"}]} | 1,345 | 98 |
gh_patches_debug_12655 | rasdani/github-patches | git_diff | deis__deis-3535 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error with `deis certs:remove`
Getting the following error when trying to remove a cert.
```
$ deis certs:remove '*.brandfolder.com'
Removing *.brandfolder.com... 405 METHOD NOT ALLOWED
Detail:
Method 'DELETE' not allowed.
```
</issue>
<code>
[start of controller/api/urls.py]
1 from __future__ import unicode_literals
2
3 from django.conf import settings
4 from django.conf.urls import include, patterns, url
5
6 from api import routers, views
7
8
9 router = routers.ApiRouter()
10
11 # Add the generated REST URLs and login/logout endpoint
12 urlpatterns = patterns(
13 '',
14 url(r'^', include(router.urls)),
15 # application release components
16 url(r'^apps/(?P<id>{})/config/?'.format(settings.APP_URL_REGEX),
17 views.ConfigViewSet.as_view({'get': 'retrieve', 'post': 'create'})),
18 url(r'^apps/(?P<id>{})/builds/(?P<uuid>[-_\w]+)/?'.format(settings.APP_URL_REGEX),
19 views.BuildViewSet.as_view({'get': 'retrieve'})),
20 url(r'^apps/(?P<id>{})/builds/?'.format(settings.APP_URL_REGEX),
21 views.BuildViewSet.as_view({'get': 'list', 'post': 'create'})),
22 url(r'^apps/(?P<id>{})/releases/v(?P<version>[0-9]+)/?'.format(settings.APP_URL_REGEX),
23 views.ReleaseViewSet.as_view({'get': 'retrieve'})),
24 url(r'^apps/(?P<id>{})/releases/rollback/?'.format(settings.APP_URL_REGEX),
25 views.ReleaseViewSet.as_view({'post': 'rollback'})),
26 url(r'^apps/(?P<id>{})/releases/?'.format(settings.APP_URL_REGEX),
27 views.ReleaseViewSet.as_view({'get': 'list'})),
28 # application infrastructure
29 url(r'^apps/(?P<id>{})/containers/(?P<type>[-_\w]+)/(?P<num>[-_\w]+)/?'.format(
30 settings.APP_URL_REGEX),
31 views.ContainerViewSet.as_view({'get': 'retrieve'})),
32 url(r'^apps/(?P<id>{})/containers/(?P<type>[-_\w.]+)/?'.format(settings.APP_URL_REGEX),
33 views.ContainerViewSet.as_view({'get': 'list'})),
34 url(r'^apps/(?P<id>{})/containers/?'.format(settings.APP_URL_REGEX),
35 views.ContainerViewSet.as_view({'get': 'list'})),
36 # application domains
37 url(r'^apps/(?P<id>{})/domains/(?P<domain>[-\._\w]+)/?'.format(settings.APP_URL_REGEX),
38 views.DomainViewSet.as_view({'delete': 'destroy'})),
39 url(r'^apps/(?P<id>{})/domains/?'.format(settings.APP_URL_REGEX),
40 views.DomainViewSet.as_view({'post': 'create', 'get': 'list'})),
41 # application actions
42 url(r'^apps/(?P<id>{})/scale/?'.format(settings.APP_URL_REGEX),
43 views.AppViewSet.as_view({'post': 'scale'})),
44 url(r'^apps/(?P<id>{})/logs/?'.format(settings.APP_URL_REGEX),
45 views.AppViewSet.as_view({'get': 'logs'})),
46 url(r'^apps/(?P<id>{})/run/?'.format(settings.APP_URL_REGEX),
47 views.AppViewSet.as_view({'post': 'run'})),
48 # apps sharing
49 url(r'^apps/(?P<id>{})/perms/(?P<username>[-_\w]+)/?'.format(settings.APP_URL_REGEX),
50 views.AppPermsViewSet.as_view({'delete': 'destroy'})),
51 url(r'^apps/(?P<id>{})/perms/?'.format(settings.APP_URL_REGEX),
52 views.AppPermsViewSet.as_view({'get': 'list', 'post': 'create'})),
53 # apps base endpoint
54 url(r'^apps/(?P<id>{})/?'.format(settings.APP_URL_REGEX),
55 views.AppViewSet.as_view({'get': 'retrieve', 'delete': 'destroy'})),
56 url(r'^apps/?',
57 views.AppViewSet.as_view({'get': 'list', 'post': 'create'})),
58 # key
59 url(r'^keys/(?P<id>.+)/?',
60 views.KeyViewSet.as_view({
61 'get': 'retrieve', 'delete': 'destroy'})),
62 url(r'^keys/?',
63 views.KeyViewSet.as_view({'get': 'list', 'post': 'create'})),
64 # hooks
65 url(r'^hooks/push/?',
66 views.PushHookViewSet.as_view({'post': 'create'})),
67 url(r'^hooks/build/?',
68 views.BuildHookViewSet.as_view({'post': 'create'})),
69 url(r'^hooks/config/?',
70 views.ConfigHookViewSet.as_view({'post': 'create'})),
71 # authn / authz
72 url(r'^auth/register/?',
73 views.UserRegistrationViewSet.as_view({'post': 'create'})),
74 url(r'^auth/cancel/?',
75 views.UserManagementViewSet.as_view({'delete': 'destroy'})),
76 url(r'^auth/passwd/?',
77 views.UserManagementViewSet.as_view({'post': 'passwd'})),
78 url(r'^auth/login/',
79 'rest_framework.authtoken.views.obtain_auth_token'),
80 # admin sharing
81 url(r'^admin/perms/(?P<username>[-_\w]+)/?',
82 views.AdminPermsViewSet.as_view({'delete': 'destroy'})),
83 url(r'^admin/perms/?',
84 views.AdminPermsViewSet.as_view({'get': 'list', 'post': 'create'})),
85 url(r'^certs/(?P<common_name>[-_.\w]+)/?'.format(settings.APP_URL_REGEX),
86 views.CertificateViewSet.as_view({'get': 'retrieve', 'delete': 'destroy'})),
87 url(r'^certs/?',
88 views.CertificateViewSet.as_view({'get': 'list', 'post': 'create'})),
89 # list users
90 url(r'^users/', views.UserView.as_view({'get': 'list'})),
91 )
92
[end of controller/api/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/controller/api/urls.py b/controller/api/urls.py
--- a/controller/api/urls.py
+++ b/controller/api/urls.py
@@ -82,7 +82,7 @@
views.AdminPermsViewSet.as_view({'delete': 'destroy'})),
url(r'^admin/perms/?',
views.AdminPermsViewSet.as_view({'get': 'list', 'post': 'create'})),
- url(r'^certs/(?P<common_name>[-_.\w]+)/?'.format(settings.APP_URL_REGEX),
+ url(r'^certs/(?P<common_name>[-_*.\w]+)/?'.format(settings.APP_URL_REGEX),
views.CertificateViewSet.as_view({'get': 'retrieve', 'delete': 'destroy'})),
url(r'^certs/?',
views.CertificateViewSet.as_view({'get': 'list', 'post': 'create'})),
| {"golden_diff": "diff --git a/controller/api/urls.py b/controller/api/urls.py\n--- a/controller/api/urls.py\n+++ b/controller/api/urls.py\n@@ -82,7 +82,7 @@\n views.AdminPermsViewSet.as_view({'delete': 'destroy'})),\n url(r'^admin/perms/?',\n views.AdminPermsViewSet.as_view({'get': 'list', 'post': 'create'})),\n- url(r'^certs/(?P<common_name>[-_.\\w]+)/?'.format(settings.APP_URL_REGEX),\n+ url(r'^certs/(?P<common_name>[-_*.\\w]+)/?'.format(settings.APP_URL_REGEX),\n views.CertificateViewSet.as_view({'get': 'retrieve', 'delete': 'destroy'})),\n url(r'^certs/?',\n views.CertificateViewSet.as_view({'get': 'list', 'post': 'create'})),\n", "issue": "Error with `deis certs:remove`\nGetting the following error when trying to remove a cert.\n\n```\n$ deis certs:remove '*.brandfolder.com'\nRemoving *.brandfolder.com... 405 METHOD NOT ALLOWED\nDetail:\nMethod 'DELETE' not allowed.\n```\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nfrom django.conf import settings\nfrom django.conf.urls import include, patterns, url\n\nfrom api import routers, views\n\n\nrouter = routers.ApiRouter()\n\n# Add the generated REST URLs and login/logout endpoint\nurlpatterns = patterns(\n '',\n url(r'^', include(router.urls)),\n # application release components\n url(r'^apps/(?P<id>{})/config/?'.format(settings.APP_URL_REGEX),\n views.ConfigViewSet.as_view({'get': 'retrieve', 'post': 'create'})),\n url(r'^apps/(?P<id>{})/builds/(?P<uuid>[-_\\w]+)/?'.format(settings.APP_URL_REGEX),\n views.BuildViewSet.as_view({'get': 'retrieve'})),\n url(r'^apps/(?P<id>{})/builds/?'.format(settings.APP_URL_REGEX),\n views.BuildViewSet.as_view({'get': 'list', 'post': 'create'})),\n url(r'^apps/(?P<id>{})/releases/v(?P<version>[0-9]+)/?'.format(settings.APP_URL_REGEX),\n views.ReleaseViewSet.as_view({'get': 'retrieve'})),\n url(r'^apps/(?P<id>{})/releases/rollback/?'.format(settings.APP_URL_REGEX),\n views.ReleaseViewSet.as_view({'post': 'rollback'})),\n url(r'^apps/(?P<id>{})/releases/?'.format(settings.APP_URL_REGEX),\n views.ReleaseViewSet.as_view({'get': 'list'})),\n # application infrastructure\n url(r'^apps/(?P<id>{})/containers/(?P<type>[-_\\w]+)/(?P<num>[-_\\w]+)/?'.format(\n settings.APP_URL_REGEX),\n views.ContainerViewSet.as_view({'get': 'retrieve'})),\n url(r'^apps/(?P<id>{})/containers/(?P<type>[-_\\w.]+)/?'.format(settings.APP_URL_REGEX),\n views.ContainerViewSet.as_view({'get': 'list'})),\n url(r'^apps/(?P<id>{})/containers/?'.format(settings.APP_URL_REGEX),\n views.ContainerViewSet.as_view({'get': 'list'})),\n # application domains\n url(r'^apps/(?P<id>{})/domains/(?P<domain>[-\\._\\w]+)/?'.format(settings.APP_URL_REGEX),\n views.DomainViewSet.as_view({'delete': 'destroy'})),\n url(r'^apps/(?P<id>{})/domains/?'.format(settings.APP_URL_REGEX),\n views.DomainViewSet.as_view({'post': 'create', 'get': 'list'})),\n # application actions\n url(r'^apps/(?P<id>{})/scale/?'.format(settings.APP_URL_REGEX),\n views.AppViewSet.as_view({'post': 'scale'})),\n url(r'^apps/(?P<id>{})/logs/?'.format(settings.APP_URL_REGEX),\n views.AppViewSet.as_view({'get': 'logs'})),\n url(r'^apps/(?P<id>{})/run/?'.format(settings.APP_URL_REGEX),\n views.AppViewSet.as_view({'post': 'run'})),\n # apps sharing\n url(r'^apps/(?P<id>{})/perms/(?P<username>[-_\\w]+)/?'.format(settings.APP_URL_REGEX),\n views.AppPermsViewSet.as_view({'delete': 'destroy'})),\n url(r'^apps/(?P<id>{})/perms/?'.format(settings.APP_URL_REGEX),\n views.AppPermsViewSet.as_view({'get': 'list', 'post': 'create'})),\n # apps base endpoint\n url(r'^apps/(?P<id>{})/?'.format(settings.APP_URL_REGEX),\n views.AppViewSet.as_view({'get': 'retrieve', 'delete': 'destroy'})),\n url(r'^apps/?',\n views.AppViewSet.as_view({'get': 'list', 'post': 'create'})),\n # key\n url(r'^keys/(?P<id>.+)/?',\n views.KeyViewSet.as_view({\n 'get': 'retrieve', 'delete': 'destroy'})),\n url(r'^keys/?',\n views.KeyViewSet.as_view({'get': 'list', 'post': 'create'})),\n # hooks\n url(r'^hooks/push/?',\n views.PushHookViewSet.as_view({'post': 'create'})),\n url(r'^hooks/build/?',\n views.BuildHookViewSet.as_view({'post': 'create'})),\n url(r'^hooks/config/?',\n views.ConfigHookViewSet.as_view({'post': 'create'})),\n # authn / authz\n url(r'^auth/register/?',\n views.UserRegistrationViewSet.as_view({'post': 'create'})),\n url(r'^auth/cancel/?',\n views.UserManagementViewSet.as_view({'delete': 'destroy'})),\n url(r'^auth/passwd/?',\n views.UserManagementViewSet.as_view({'post': 'passwd'})),\n url(r'^auth/login/',\n 'rest_framework.authtoken.views.obtain_auth_token'),\n # admin sharing\n url(r'^admin/perms/(?P<username>[-_\\w]+)/?',\n views.AdminPermsViewSet.as_view({'delete': 'destroy'})),\n url(r'^admin/perms/?',\n views.AdminPermsViewSet.as_view({'get': 'list', 'post': 'create'})),\n url(r'^certs/(?P<common_name>[-_.\\w]+)/?'.format(settings.APP_URL_REGEX),\n views.CertificateViewSet.as_view({'get': 'retrieve', 'delete': 'destroy'})),\n url(r'^certs/?',\n views.CertificateViewSet.as_view({'get': 'list', 'post': 'create'})),\n # list users\n url(r'^users/', views.UserView.as_view({'get': 'list'})),\n)\n", "path": "controller/api/urls.py"}]} | 1,976 | 190 |
gh_patches_debug_36019 | rasdani/github-patches | git_diff | pytorch__TensorRT-905 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
✨[Feature] Enable debug logging with a context
**Is your feature request related to a problem? Please describe.**
<!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]-->
Right now seems like users don't know how or don't enable debug logging. We can probably add some syntax to make this easier.
**Describe the solution you'd like**
<!--A clear and concise description of what you want to happen.-->
I would love to see something like:
```py
import torch_tensorrt as torchtrt
with torchtrt.debug:
torchtrt.ts.compile(....)
```
under the hood this would be equivalent to:
```py
import torch_tensorrt as torchtrt
torchtrt.logging.set_reportable_log_level(torchtrt.logging.Level.Debug)
torchtrt.ts.compile(....)
torchtrt.logging.set_reportable_log_level(torchtrt.logging.Level.Error)
```
**Describe alternatives you've considered**
<!--A clear and concise description of any alternative solutions or features you've considered.-->
**Additional context**
<!--Add any other context or screenshots about the feature request here.-->
</issue>
<code>
[start of py/torch_tensorrt/logging.py]
1 from enum import Enum
2 from torch_tensorrt._C import _get_logging_prefix, _set_logging_prefix, \
3 _get_reportable_log_level, _set_reportable_log_level, \
4 _get_is_colored_output_on, _set_is_colored_output_on, \
5 _log, LogLevel
6
7
8 class Level(Enum):
9 """Enum to set the minimum required logging level to print a message to stdout
10 """
11 InternalError = LogLevel.INTERNAL_ERROR
12 Error = LogLevel.ERROR
13 Warning = LogLevel.WARNING
14 Info = LogLevel.INFO
15 Debug = LogLevel.DEBUG
16 Graph = LogLevel.GRAPH
17
18 @staticmethod
19 def _to_internal_level(external) -> LogLevel:
20 if external == Level.InternalError:
21 return LogLevel.INTERNAL_ERROR
22 if external == Level.Error:
23 return LogLevel.ERROR
24 if external == Level.Warning:
25 return LogLevel.WARNING
26 if external == Level.Info:
27 return LogLevel.INFO
28 if external == Level.Debug:
29 return LogLevel.DEBUG
30 if external == Level.Graph:
31 return LogLevel.GRAPH
32
33
34 def get_logging_prefix() -> str:
35 """Get the prefix set for logging messages
36
37 Returns:
38 str: Prefix used for logger
39 """
40 return _get_logging_prefix()
41
42
43 def set_logging_prefix(prefix: str):
44 """Set the prefix used when logging messages
45
46 Args:
47 prefix (str): Prefix to use for logging messages
48 """
49 _set_logging_prefix(prefix)
50
51
52 def get_reportable_log_level() -> Level:
53 """Get the level required for a message to be printed in the log
54
55 Returns:
56 torch_tensorrt.logging.Level: The enum representing the level required to print
57 """
58 return Level(_get_reportable_log_level())
59
60
61 def set_reportable_log_level(level: Level):
62 """Set the level required for a message to be printed to the log
63
64 Args:
65 level (torch_tensorrt.logging.Level): The enum representing the level required to print
66 """
67 _set_reportable_log_level(Level._to_internal_level(level))
68
69
70 def get_is_colored_output_on() -> bool:
71 """Get if colored output is enabled for logging
72
73 Returns:
74 bool: If colored output is one
75 """
76 return _get_is_colored_output_on()
77
78
79 def set_is_colored_output_on(colored_output_on: bool):
80 """Enable or disable color in the log output
81
82 Args:
83 colored_output_on (bool): If colored output should be enabled or not
84 """
85 _set_is_colored_output_on(colored_output_on)
86
87
88 def log(level: Level, msg: str):
89 """Add a new message to the log
90
91 Adds a new message to the log at a specified level. The message
92 will only get printed out if Level > reportable_log_level
93
94 Args:
95 level (torch_tensorrt.logging.Level): Severity of the message
96 msg (str): Actual message text
97 """
98 _log(Level._to_internal_level(level), msg)
99
[end of py/torch_tensorrt/logging.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/py/torch_tensorrt/logging.py b/py/torch_tensorrt/logging.py
--- a/py/torch_tensorrt/logging.py
+++ b/py/torch_tensorrt/logging.py
@@ -96,3 +96,113 @@
msg (str): Actual message text
"""
_log(Level._to_internal_level(level), msg)
+
+ InternalError = LogLevel.INTERNAL_ERROR
+ Error = LogLevel.ERROR
+ Warning = LogLevel.WARNING
+ Info = LogLevel.INFO
+ Debug = LogLevel.DEBUG
+ Graph = LogLevel.GRAPH
+
+
+class internal_errors:
+ """Context-manager to limit displayed log messages to just internal errors
+
+ Example::
+
+ with torch_tensorrt.logging.internal_errors():
+ outputs = model_torchtrt(inputs)
+ """
+
+ def __enter__(self):
+ self.external_lvl = get_reportable_log_level()
+ set_reportable_log_level(Level.InternalError)
+
+ def __exit__(self, exc_type, exc_value, exc_tb):
+ set_reportable_log_level(self.external_lvl)
+
+
+class errors:
+ """Context-manager to limit displayed log messages to just errors and above
+
+ Example::
+
+ with torch_tensorrt.logging.errors():
+ outputs = model_torchtrt(inputs)
+ """
+
+ def __enter__(self):
+ self.external_lvl = get_reportable_log_level()
+ set_reportable_log_level(Level.Error)
+
+ def __exit__(self, exc_type, exc_value, exc_tb):
+ set_reportable_log_level(self.external_lvl)
+
+
+class warnings:
+ """Context-manager to limit displayed log messages to just warnings and above
+
+ Example::
+
+ with torch_tensorrt.logging.warnings():
+ model_trt = torch_tensorrt.compile(model, **spec)
+ """
+
+ def __enter__(self):
+ self.external_lvl = get_reportable_log_level()
+ set_reportable_log_level(Level.Warning)
+
+ def __exit__(self, exc_type, exc_value, exc_tb):
+ set_reportable_log_level(self.external_lvl)
+
+
+class info:
+ """Context-manager to display all info and greater severity messages
+
+ Example::
+
+ with torch_tensorrt.logging.info():
+ model_trt = torch_tensorrt.compile(model, **spec)
+ """
+
+ def __enter__(self):
+ self.external_lvl = get_reportable_log_level()
+ set_reportable_log_level(Level.Info)
+
+ def __exit__(self, exc_type, exc_value, exc_tb):
+ set_reportable_log_level(self.external_lvl)
+
+
+class debug:
+ """Context-manager to display full debug information through the logger
+
+ Example::
+
+ with torch_tensorrt.logging.debug():
+ model_trt = torch_tensorrt.compile(model, **spec)
+ """
+
+ def __enter__(self):
+ self.external_lvl = get_reportable_log_level()
+ set_reportable_log_level(Level.Debug)
+
+ def __exit__(self, exc_type, exc_value, exc_tb):
+ set_reportable_log_level(self.external_lvl)
+
+
+class graphs:
+ """Context-manager to display the results of intermediate lowering passes
+ as well as full debug information through the logger
+
+ Example::
+
+ with torch_tensorrt.logging.graphs():
+ model_trt = torch_tensorrt.compile(model, **spec)
+ """
+
+ def __enter__(self):
+ self.external_lvl = get_reportable_log_level()
+ set_reportable_log_level(Level.Graph)
+
+ def __exit__(self, exc_type, exc_value, exc_tb):
+ set_reportable_log_level(self.external_lvl)
| {"golden_diff": "diff --git a/py/torch_tensorrt/logging.py b/py/torch_tensorrt/logging.py\n--- a/py/torch_tensorrt/logging.py\n+++ b/py/torch_tensorrt/logging.py\n@@ -96,3 +96,113 @@\n msg (str): Actual message text\n \"\"\"\n _log(Level._to_internal_level(level), msg)\n+\n+ InternalError = LogLevel.INTERNAL_ERROR\n+ Error = LogLevel.ERROR\n+ Warning = LogLevel.WARNING\n+ Info = LogLevel.INFO\n+ Debug = LogLevel.DEBUG\n+ Graph = LogLevel.GRAPH\n+\n+\n+class internal_errors:\n+ \"\"\"Context-manager to limit displayed log messages to just internal errors\n+\n+ Example::\n+\n+ with torch_tensorrt.logging.internal_errors():\n+ outputs = model_torchtrt(inputs)\n+ \"\"\"\n+\n+ def __enter__(self):\n+ self.external_lvl = get_reportable_log_level()\n+ set_reportable_log_level(Level.InternalError)\n+\n+ def __exit__(self, exc_type, exc_value, exc_tb):\n+ set_reportable_log_level(self.external_lvl)\n+\n+\n+class errors:\n+ \"\"\"Context-manager to limit displayed log messages to just errors and above\n+\n+ Example::\n+\n+ with torch_tensorrt.logging.errors():\n+ outputs = model_torchtrt(inputs)\n+ \"\"\"\n+\n+ def __enter__(self):\n+ self.external_lvl = get_reportable_log_level()\n+ set_reportable_log_level(Level.Error)\n+\n+ def __exit__(self, exc_type, exc_value, exc_tb):\n+ set_reportable_log_level(self.external_lvl)\n+\n+\n+class warnings:\n+ \"\"\"Context-manager to limit displayed log messages to just warnings and above\n+\n+ Example::\n+\n+ with torch_tensorrt.logging.warnings():\n+ model_trt = torch_tensorrt.compile(model, **spec)\n+ \"\"\"\n+\n+ def __enter__(self):\n+ self.external_lvl = get_reportable_log_level()\n+ set_reportable_log_level(Level.Warning)\n+\n+ def __exit__(self, exc_type, exc_value, exc_tb):\n+ set_reportable_log_level(self.external_lvl)\n+\n+\n+class info:\n+ \"\"\"Context-manager to display all info and greater severity messages\n+\n+ Example::\n+\n+ with torch_tensorrt.logging.info():\n+ model_trt = torch_tensorrt.compile(model, **spec)\n+ \"\"\"\n+\n+ def __enter__(self):\n+ self.external_lvl = get_reportable_log_level()\n+ set_reportable_log_level(Level.Info)\n+\n+ def __exit__(self, exc_type, exc_value, exc_tb):\n+ set_reportable_log_level(self.external_lvl)\n+\n+\n+class debug:\n+ \"\"\"Context-manager to display full debug information through the logger\n+\n+ Example::\n+\n+ with torch_tensorrt.logging.debug():\n+ model_trt = torch_tensorrt.compile(model, **spec)\n+ \"\"\"\n+\n+ def __enter__(self):\n+ self.external_lvl = get_reportable_log_level()\n+ set_reportable_log_level(Level.Debug)\n+\n+ def __exit__(self, exc_type, exc_value, exc_tb):\n+ set_reportable_log_level(self.external_lvl)\n+\n+\n+class graphs:\n+ \"\"\"Context-manager to display the results of intermediate lowering passes\n+ as well as full debug information through the logger\n+\n+ Example::\n+\n+ with torch_tensorrt.logging.graphs():\n+ model_trt = torch_tensorrt.compile(model, **spec)\n+ \"\"\"\n+\n+ def __enter__(self):\n+ self.external_lvl = get_reportable_log_level()\n+ set_reportable_log_level(Level.Graph)\n+\n+ def __exit__(self, exc_type, exc_value, exc_tb):\n+ set_reportable_log_level(self.external_lvl)\n", "issue": "\u2728[Feature] Enable debug logging with a context \n**Is your feature request related to a problem? Please describe.**\r\n<!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]-->\r\n\r\nRight now seems like users don't know how or don't enable debug logging. We can probably add some syntax to make this easier.\r\n\r\n**Describe the solution you'd like**\r\n<!--A clear and concise description of what you want to happen.-->\r\n\r\nI would love to see something like: \r\n\r\n```py\r\nimport torch_tensorrt as torchtrt \r\n\r\nwith torchtrt.debug:\r\n torchtrt.ts.compile(....)\r\n\r\n```\r\n\r\nunder the hood this would be equivalent to:\r\n\r\n ```py\r\nimport torch_tensorrt as torchtrt \r\n\r\ntorchtrt.logging.set_reportable_log_level(torchtrt.logging.Level.Debug)\r\ntorchtrt.ts.compile(....)\r\ntorchtrt.logging.set_reportable_log_level(torchtrt.logging.Level.Error)\r\n```\r\n\r\n**Describe alternatives you've considered**\r\n<!--A clear and concise description of any alternative solutions or features you've considered.-->\r\n\r\n**Additional context**\r\n<!--Add any other context or screenshots about the feature request here.-->\r\n\n", "before_files": [{"content": "from enum import Enum\nfrom torch_tensorrt._C import _get_logging_prefix, _set_logging_prefix, \\\n _get_reportable_log_level, _set_reportable_log_level, \\\n _get_is_colored_output_on, _set_is_colored_output_on, \\\n _log, LogLevel\n\n\nclass Level(Enum):\n \"\"\"Enum to set the minimum required logging level to print a message to stdout\n \"\"\"\n InternalError = LogLevel.INTERNAL_ERROR\n Error = LogLevel.ERROR\n Warning = LogLevel.WARNING\n Info = LogLevel.INFO\n Debug = LogLevel.DEBUG\n Graph = LogLevel.GRAPH\n\n @staticmethod\n def _to_internal_level(external) -> LogLevel:\n if external == Level.InternalError:\n return LogLevel.INTERNAL_ERROR\n if external == Level.Error:\n return LogLevel.ERROR\n if external == Level.Warning:\n return LogLevel.WARNING\n if external == Level.Info:\n return LogLevel.INFO\n if external == Level.Debug:\n return LogLevel.DEBUG\n if external == Level.Graph:\n return LogLevel.GRAPH\n\n\ndef get_logging_prefix() -> str:\n \"\"\"Get the prefix set for logging messages\n\n Returns:\n str: Prefix used for logger\n \"\"\"\n return _get_logging_prefix()\n\n\ndef set_logging_prefix(prefix: str):\n \"\"\"Set the prefix used when logging messages\n\n Args:\n prefix (str): Prefix to use for logging messages\n \"\"\"\n _set_logging_prefix(prefix)\n\n\ndef get_reportable_log_level() -> Level:\n \"\"\"Get the level required for a message to be printed in the log\n\n Returns:\n torch_tensorrt.logging.Level: The enum representing the level required to print\n \"\"\"\n return Level(_get_reportable_log_level())\n\n\ndef set_reportable_log_level(level: Level):\n \"\"\"Set the level required for a message to be printed to the log\n\n Args:\n level (torch_tensorrt.logging.Level): The enum representing the level required to print\n \"\"\"\n _set_reportable_log_level(Level._to_internal_level(level))\n\n\ndef get_is_colored_output_on() -> bool:\n \"\"\"Get if colored output is enabled for logging\n\n Returns:\n bool: If colored output is one\n \"\"\"\n return _get_is_colored_output_on()\n\n\ndef set_is_colored_output_on(colored_output_on: bool):\n \"\"\"Enable or disable color in the log output\n\n Args:\n colored_output_on (bool): If colored output should be enabled or not\n \"\"\"\n _set_is_colored_output_on(colored_output_on)\n\n\ndef log(level: Level, msg: str):\n \"\"\"Add a new message to the log\n\n Adds a new message to the log at a specified level. The message\n will only get printed out if Level > reportable_log_level\n\n Args:\n level (torch_tensorrt.logging.Level): Severity of the message\n msg (str): Actual message text\n \"\"\"\n _log(Level._to_internal_level(level), msg)\n", "path": "py/torch_tensorrt/logging.py"}]} | 1,609 | 817 |
gh_patches_debug_20562 | rasdani/github-patches | git_diff | pantsbuild__pants-13464 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pants package does not build missing docker images if previous build was cached.
**Describe the bug**
Pant's caching of build targets does not take into consideration that the final target does not exist.
Take this example: https://www.pantsbuild.org/v2.8/docs/docker#example
```
$ ./pants package src/docker/hw/Dockerfile
[...]
18:07:29.66 [INFO] Completed: Building src.python.hw/bin.pex
18:07:31.83 [INFO] Completed: Building docker image helloworld:latest
18:07:31.83 [INFO] Built docker image: helloworld:latest
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
helloworld latest abcdefabcdef 6 seconds ago 420MB
$ docker rmi helloworld:latest
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
$ ./pants package src/docker/hw/Dockerfile
19:07:31.83 [INFO] Built docker image: helloworld:latest
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
```
If you did the equivalent commands for the `helloworld.pex` files, `pants package` would replace the missing file in the `dist/` folder.
**Pants version**
2.8rc1
**OS**
Linux
</issue>
<code>
[start of src/python/pants/backend/docker/util_rules/docker_binary.py]
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 from dataclasses import dataclass
7 from typing import Mapping
8
9 from pants.backend.docker.util_rules.docker_build_args import DockerBuildArgs
10 from pants.engine.fs import Digest
11 from pants.engine.process import (
12 BinaryNotFoundError,
13 BinaryPath,
14 BinaryPathRequest,
15 BinaryPaths,
16 BinaryPathTest,
17 Process,
18 SearchPath,
19 )
20 from pants.engine.rules import Get, collect_rules, rule
21 from pants.util.logging import LogLevel
22 from pants.util.strutil import pluralize
23
24
25 class DockerBinary(BinaryPath):
26 """The `docker` binary."""
27
28 DEFAULT_SEARCH_PATH = SearchPath(("/usr/bin", "/bin", "/usr/local/bin"))
29
30 def build_image(
31 self,
32 tags: tuple[str, ...],
33 digest: Digest,
34 dockerfile: str | None = None,
35 build_args: DockerBuildArgs | None = None,
36 env: Mapping[str, str] | None = None,
37 ) -> Process:
38 args = [self.path, "build"]
39
40 for tag in tags:
41 args.extend(["-t", tag])
42
43 if build_args:
44 for build_arg in build_args:
45 args.extend(["--build-arg", build_arg])
46
47 if dockerfile:
48 args.extend(["-f", dockerfile])
49
50 # Add build context root.
51 args.append(".")
52
53 return Process(
54 argv=tuple(args),
55 description=(
56 f"Building docker image {tags[0]}"
57 + (f" +{pluralize(len(tags)-1, 'additional tag')}." if len(tags) > 1 else ".")
58 ),
59 env=env,
60 input_digest=digest,
61 )
62
63 def push_image(self, tags: tuple[str, ...]) -> Process | None:
64 if not tags:
65 return None
66
67 return Process(
68 argv=(self.path, "push", *tags), description="Pushing docker image {tags[0]}"
69 )
70
71
72 @dataclass(frozen=True)
73 class DockerBinaryRequest:
74 search_path: SearchPath = DockerBinary.DEFAULT_SEARCH_PATH
75
76
77 @rule(desc="Finding the `docker` binary", level=LogLevel.DEBUG)
78 async def find_docker(docker_request: DockerBinaryRequest) -> DockerBinary:
79 request = BinaryPathRequest(
80 binary_name="docker",
81 search_path=docker_request.search_path,
82 test=BinaryPathTest(args=["-v"]),
83 )
84 paths = await Get(BinaryPaths, BinaryPathRequest, request)
85 first_path = paths.first_path
86 if not first_path:
87 raise BinaryNotFoundError.from_request(request, rationale="interact with the docker daemon")
88 return DockerBinary(first_path.path, first_path.fingerprint)
89
90
91 @rule
92 async def get_docker() -> DockerBinary:
93 return await Get(DockerBinary, DockerBinaryRequest())
94
95
96 def rules():
97 return collect_rules()
98
[end of src/python/pants/backend/docker/util_rules/docker_binary.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/python/pants/backend/docker/util_rules/docker_binary.py b/src/python/pants/backend/docker/util_rules/docker_binary.py
--- a/src/python/pants/backend/docker/util_rules/docker_binary.py
+++ b/src/python/pants/backend/docker/util_rules/docker_binary.py
@@ -15,6 +15,7 @@
BinaryPaths,
BinaryPathTest,
Process,
+ ProcessCacheScope,
SearchPath,
)
from pants.engine.rules import Get, collect_rules, rule
@@ -58,6 +59,7 @@
),
env=env,
input_digest=digest,
+ cache_scope=ProcessCacheScope.PER_SESSION,
)
def push_image(self, tags: tuple[str, ...]) -> Process | None:
@@ -65,7 +67,9 @@
return None
return Process(
- argv=(self.path, "push", *tags), description="Pushing docker image {tags[0]}"
+ argv=(self.path, "push", *tags),
+ cache_scope=ProcessCacheScope.PER_SESSION,
+ description=f"Pushing docker image {tags[0]}",
)
| {"golden_diff": "diff --git a/src/python/pants/backend/docker/util_rules/docker_binary.py b/src/python/pants/backend/docker/util_rules/docker_binary.py\n--- a/src/python/pants/backend/docker/util_rules/docker_binary.py\n+++ b/src/python/pants/backend/docker/util_rules/docker_binary.py\n@@ -15,6 +15,7 @@\n BinaryPaths,\n BinaryPathTest,\n Process,\n+ ProcessCacheScope,\n SearchPath,\n )\n from pants.engine.rules import Get, collect_rules, rule\n@@ -58,6 +59,7 @@\n ),\n env=env,\n input_digest=digest,\n+ cache_scope=ProcessCacheScope.PER_SESSION,\n )\n \n def push_image(self, tags: tuple[str, ...]) -> Process | None:\n@@ -65,7 +67,9 @@\n return None\n \n return Process(\n- argv=(self.path, \"push\", *tags), description=\"Pushing docker image {tags[0]}\"\n+ argv=(self.path, \"push\", *tags),\n+ cache_scope=ProcessCacheScope.PER_SESSION,\n+ description=f\"Pushing docker image {tags[0]}\",\n )\n", "issue": "pants package does not build missing docker images if previous build was cached.\n**Describe the bug**\r\nPant's caching of build targets does not take into consideration that the final target does not exist.\r\n\r\nTake this example: https://www.pantsbuild.org/v2.8/docs/docker#example\r\n\r\n```\r\n$ ./pants package src/docker/hw/Dockerfile\r\n[...]\r\n18:07:29.66 [INFO] Completed: Building src.python.hw/bin.pex\r\n18:07:31.83 [INFO] Completed: Building docker image helloworld:latest\r\n18:07:31.83 [INFO] Built docker image: helloworld:latest\r\n\r\n$ docker images\r\nREPOSITORY TAG IMAGE ID CREATED SIZE\r\nhelloworld latest abcdefabcdef 6 seconds ago 420MB\r\n\r\n$ docker rmi helloworld:latest\r\n\r\n$ docker images\r\nREPOSITORY TAG IMAGE ID CREATED SIZE\r\n\r\n$ ./pants package src/docker/hw/Dockerfile\r\n19:07:31.83 [INFO] Built docker image: helloworld:latest\r\n\r\n$ docker images\r\nREPOSITORY TAG IMAGE ID CREATED SIZE\r\n```\r\nIf you did the equivalent commands for the `helloworld.pex` files, `pants package` would replace the missing file in the `dist/` folder.\r\n\r\n**Pants version**\r\n2.8rc1\r\n\r\n**OS**\r\nLinux\r\n\n", "before_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\nfrom typing import Mapping\n\nfrom pants.backend.docker.util_rules.docker_build_args import DockerBuildArgs\nfrom pants.engine.fs import Digest\nfrom pants.engine.process import (\n BinaryNotFoundError,\n BinaryPath,\n BinaryPathRequest,\n BinaryPaths,\n BinaryPathTest,\n Process,\n SearchPath,\n)\nfrom pants.engine.rules import Get, collect_rules, rule\nfrom pants.util.logging import LogLevel\nfrom pants.util.strutil import pluralize\n\n\nclass DockerBinary(BinaryPath):\n \"\"\"The `docker` binary.\"\"\"\n\n DEFAULT_SEARCH_PATH = SearchPath((\"/usr/bin\", \"/bin\", \"/usr/local/bin\"))\n\n def build_image(\n self,\n tags: tuple[str, ...],\n digest: Digest,\n dockerfile: str | None = None,\n build_args: DockerBuildArgs | None = None,\n env: Mapping[str, str] | None = None,\n ) -> Process:\n args = [self.path, \"build\"]\n\n for tag in tags:\n args.extend([\"-t\", tag])\n\n if build_args:\n for build_arg in build_args:\n args.extend([\"--build-arg\", build_arg])\n\n if dockerfile:\n args.extend([\"-f\", dockerfile])\n\n # Add build context root.\n args.append(\".\")\n\n return Process(\n argv=tuple(args),\n description=(\n f\"Building docker image {tags[0]}\"\n + (f\" +{pluralize(len(tags)-1, 'additional tag')}.\" if len(tags) > 1 else \".\")\n ),\n env=env,\n input_digest=digest,\n )\n\n def push_image(self, tags: tuple[str, ...]) -> Process | None:\n if not tags:\n return None\n\n return Process(\n argv=(self.path, \"push\", *tags), description=\"Pushing docker image {tags[0]}\"\n )\n\n\n@dataclass(frozen=True)\nclass DockerBinaryRequest:\n search_path: SearchPath = DockerBinary.DEFAULT_SEARCH_PATH\n\n\n@rule(desc=\"Finding the `docker` binary\", level=LogLevel.DEBUG)\nasync def find_docker(docker_request: DockerBinaryRequest) -> DockerBinary:\n request = BinaryPathRequest(\n binary_name=\"docker\",\n search_path=docker_request.search_path,\n test=BinaryPathTest(args=[\"-v\"]),\n )\n paths = await Get(BinaryPaths, BinaryPathRequest, request)\n first_path = paths.first_path\n if not first_path:\n raise BinaryNotFoundError.from_request(request, rationale=\"interact with the docker daemon\")\n return DockerBinary(first_path.path, first_path.fingerprint)\n\n\n@rule\nasync def get_docker() -> DockerBinary:\n return await Get(DockerBinary, DockerBinaryRequest())\n\n\ndef rules():\n return collect_rules()\n", "path": "src/python/pants/backend/docker/util_rules/docker_binary.py"}]} | 1,683 | 246 |
gh_patches_debug_33920 | rasdani/github-patches | git_diff | googleapis__google-cloud-python-5988 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Please cut a release of Cloud Asset
</issue>
<code>
[start of asset/setup.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20 name = 'google-cloud-cloudasset'
21 description = 'Cloud Asset API API client library'
22 version = '0.1.0'
23 release_status = '3 - Alpha'
24 dependencies = [
25 'google-api-core[grpc] >= 1.1.0, < 2.0.0dev',
26 'enum34; python_version < "3.4"',
27 'grpc-google-iam-v1<0.12dev,>=0.11.4',
28 ]
29
30 package_root = os.path.abspath(os.path.dirname(__file__))
31
32 readme_filename = os.path.join(package_root, 'README.rst')
33 with io.open(readme_filename, encoding='utf-8') as readme_file:
34 readme = readme_file.read()
35
36 packages = [
37 package for package in setuptools.find_packages()
38 if package.startswith('google')
39 ]
40
41 namespaces = ['google']
42 if 'google.cloud' in packages:
43 namespaces.append('google.cloud')
44
45 setuptools.setup(
46 name=name,
47 version=version,
48 description=description,
49 long_description=readme,
50 author='Google LLC',
51 author_email='[email protected]',
52 license='Apache 2.0',
53 url='https://github.com/GoogleCloudPlatform/google-cloud-python',
54 classifiers=[
55 release_status,
56 'Intended Audience :: Developers',
57 'License :: OSI Approved :: Apache Software License',
58 'Programming Language :: Python',
59 'Programming Language :: Python :: 2',
60 'Programming Language :: Python :: 2.7',
61 'Programming Language :: Python :: 3',
62 'Programming Language :: Python :: 3.4',
63 'Programming Language :: Python :: 3.5',
64 'Programming Language :: Python :: 3.6',
65 'Operating System :: OS Independent',
66 'Topic :: Internet',
67 ],
68 platforms='Posix; MacOS X; Windows',
69 packages=packages,
70 namespace_packages=namespaces,
71 install_requires=dependencies,
72 include_package_data=True,
73 zip_safe=False,
74 )
75
[end of asset/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/asset/setup.py b/asset/setup.py
--- a/asset/setup.py
+++ b/asset/setup.py
@@ -17,27 +17,38 @@
import setuptools
+# Package metadata.
+
name = 'google-cloud-cloudasset'
description = 'Cloud Asset API API client library'
version = '0.1.0'
-release_status = '3 - Alpha'
+# Should be one of:
+# 'Development Status :: 3 - Alpha'
+# 'Development Status :: 4 - Beta'
+# 'Development Status :: 5 - Production/Stable'
+release_status = 'Development Status :: 3 - Alpha'
dependencies = [
'google-api-core[grpc] >= 1.1.0, < 2.0.0dev',
'enum34; python_version < "3.4"',
'grpc-google-iam-v1<0.12dev,>=0.11.4',
]
+# Setup boilerplate below this line.
+
package_root = os.path.abspath(os.path.dirname(__file__))
readme_filename = os.path.join(package_root, 'README.rst')
with io.open(readme_filename, encoding='utf-8') as readme_file:
readme = readme_file.read()
+# Only include packages under the 'google' namespace. Do not include tests,
+# benchmarks, etc.
packages = [
package for package in setuptools.find_packages()
if package.startswith('google')
]
+# Determine which namespaces are needed.
namespaces = ['google']
if 'google.cloud' in packages:
namespaces.append('google.cloud')
@@ -59,9 +70,9 @@
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
- 'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
+ 'Programming Language :: Python :: 3.7',
'Operating System :: OS Independent',
'Topic :: Internet',
],
| {"golden_diff": "diff --git a/asset/setup.py b/asset/setup.py\n--- a/asset/setup.py\n+++ b/asset/setup.py\n@@ -17,27 +17,38 @@\n \n import setuptools\n \n+# Package metadata.\n+\n name = 'google-cloud-cloudasset'\n description = 'Cloud Asset API API client library'\n version = '0.1.0'\n-release_status = '3 - Alpha'\n+# Should be one of:\n+# 'Development Status :: 3 - Alpha'\n+# 'Development Status :: 4 - Beta'\n+# 'Development Status :: 5 - Production/Stable'\n+release_status = 'Development Status :: 3 - Alpha'\n dependencies = [\n 'google-api-core[grpc] >= 1.1.0, < 2.0.0dev',\n 'enum34; python_version < \"3.4\"',\n 'grpc-google-iam-v1<0.12dev,>=0.11.4',\n ]\n \n+# Setup boilerplate below this line.\n+\n package_root = os.path.abspath(os.path.dirname(__file__))\n \n readme_filename = os.path.join(package_root, 'README.rst')\n with io.open(readme_filename, encoding='utf-8') as readme_file:\n readme = readme_file.read()\n \n+# Only include packages under the 'google' namespace. Do not include tests,\n+# benchmarks, etc.\n packages = [\n package for package in setuptools.find_packages()\n if package.startswith('google')\n ]\n \n+# Determine which namespaces are needed.\n namespaces = ['google']\n if 'google.cloud' in packages:\n namespaces.append('google.cloud')\n@@ -59,9 +70,9 @@\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n- 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n+ 'Programming Language :: Python :: 3.7',\n 'Operating System :: OS Independent',\n 'Topic :: Internet',\n ],\n", "issue": "Please cut a release of Cloud Asset\n\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\nname = 'google-cloud-cloudasset'\ndescription = 'Cloud Asset API API client library'\nversion = '0.1.0'\nrelease_status = '3 - Alpha'\ndependencies = [\n 'google-api-core[grpc] >= 1.1.0, < 2.0.0dev',\n 'enum34; python_version < \"3.4\"',\n 'grpc-google-iam-v1<0.12dev,>=0.11.4',\n]\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, 'README.rst')\nwith io.open(readme_filename, encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\npackages = [\n package for package in setuptools.find_packages()\n if package.startswith('google')\n]\n\nnamespaces = ['google']\nif 'google.cloud' in packages:\n namespaces.append('google.cloud')\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author='Google LLC',\n author_email='[email protected]',\n license='Apache 2.0',\n url='https://github.com/GoogleCloudPlatform/google-cloud-python',\n classifiers=[\n release_status,\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Operating System :: OS Independent',\n 'Topic :: Internet',\n ],\n platforms='Posix; MacOS X; Windows',\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "asset/setup.py"}]} | 1,241 | 445 |
gh_patches_debug_6963 | rasdani/github-patches | git_diff | Lightning-AI__pytorch-lightning-1193 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wandb logger doesn't upload saved model checkpoint for final epoch
## 🐛 Bug
When training a model on the TPU and using the wandb logger, the checkpoint for the last epoch trained doesn't get uploaded to wandb.
### To Reproduce
Colab notebook: https://colab.research.google.com/drive/1oPaRWGZcz6YEol012xFADN42LV-jowtT
</issue>
<code>
[start of pytorch_lightning/loggers/wandb.py]
1 r"""
2
3 .. _wandb:
4
5 WandbLogger
6 -------------
7 """
8 import os
9 from argparse import Namespace
10 from typing import Optional, List, Dict, Union, Any
11
12 import torch.nn as nn
13
14 try:
15 import wandb
16 from wandb.wandb_run import Run
17 except ImportError: # pragma: no-cover
18 raise ImportError('You want to use `wandb` logger which is not installed yet,' # pragma: no-cover
19 ' install it with `pip install wandb`.')
20
21 from pytorch_lightning.loggers.base import LightningLoggerBase, rank_zero_only
22
23
24 class WandbLogger(LightningLoggerBase):
25 """
26 Logger for `W&B <https://www.wandb.com/>`_.
27
28 Args:
29 name (str): display name for the run.
30 save_dir (str): path where data is saved.
31 offline (bool): run offline (data can be streamed later to wandb servers).
32 id or version (str): sets the version, mainly used to resume a previous run.
33 anonymous (bool): enables or explicitly disables anonymous logging.
34 project (str): the name of the project to which this run will belong.
35 tags (list of str): tags associated with this run.
36
37 Example
38 --------
39 .. code-block:: python
40
41 from pytorch_lightning.loggers import WandbLogger
42 from pytorch_lightning import Trainer
43
44 wandb_logger = WandbLogger()
45 trainer = Trainer(logger=wandb_logger)
46 """
47
48 def __init__(self, name: Optional[str] = None, save_dir: Optional[str] = None,
49 offline: bool = False, id: Optional[str] = None, anonymous: bool = False,
50 version: Optional[str] = None, project: Optional[str] = None,
51 tags: Optional[List[str]] = None, experiment=None, entity=None):
52 super().__init__()
53 self._name = name
54 self._save_dir = save_dir
55 self._anonymous = 'allow' if anonymous else None
56 self._id = version or id
57 self._tags = tags
58 self._project = project
59 self._experiment = experiment
60 self._offline = offline
61 self._entity = entity
62
63 def __getstate__(self):
64 state = self.__dict__.copy()
65 # cannot be pickled
66 state['_experiment'] = None
67 # args needed to reload correct experiment
68 state['_id'] = self.experiment.id
69 return state
70
71 @property
72 def experiment(self) -> Run:
73 r"""
74
75 Actual wandb object. To use wandb features do the following.
76
77 Example::
78
79 self.logger.experiment.some_wandb_function()
80
81 """
82 if self._experiment is None:
83 if self._offline:
84 os.environ['WANDB_MODE'] = 'dryrun'
85 self._experiment = wandb.init(
86 name=self._name, dir=self._save_dir, project=self._project, anonymous=self._anonymous,
87 id=self._id, resume='allow', tags=self._tags, entity=self._entity)
88 return self._experiment
89
90 def watch(self, model: nn.Module, log: str = 'gradients', log_freq: int = 100):
91 wandb.watch(model, log=log, log_freq=log_freq)
92
93 @rank_zero_only
94 def log_hyperparams(self, params: Union[Dict[str, Any], Namespace]) -> None:
95 params = self._convert_params(params)
96 self.experiment.config.update(params)
97
98 @rank_zero_only
99 def log_metrics(self, metrics: Dict[str, float], step: Optional[int] = None) -> None:
100 if step is not None:
101 metrics['global_step'] = step
102 self.experiment.log(metrics)
103
104 @rank_zero_only
105 def finalize(self, status: str = 'success') -> None:
106 try:
107 exit_code = 0 if status == 'success' else 1
108 wandb.join(exit_code)
109 except TypeError:
110 wandb.join()
111
112 @property
113 def name(self) -> str:
114 return self.experiment.project_name()
115
116 @property
117 def version(self) -> str:
118 return self.experiment.id
119
[end of pytorch_lightning/loggers/wandb.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pytorch_lightning/loggers/wandb.py b/pytorch_lightning/loggers/wandb.py
--- a/pytorch_lightning/loggers/wandb.py
+++ b/pytorch_lightning/loggers/wandb.py
@@ -101,14 +101,6 @@
metrics['global_step'] = step
self.experiment.log(metrics)
- @rank_zero_only
- def finalize(self, status: str = 'success') -> None:
- try:
- exit_code = 0 if status == 'success' else 1
- wandb.join(exit_code)
- except TypeError:
- wandb.join()
-
@property
def name(self) -> str:
return self.experiment.project_name()
| {"golden_diff": "diff --git a/pytorch_lightning/loggers/wandb.py b/pytorch_lightning/loggers/wandb.py\n--- a/pytorch_lightning/loggers/wandb.py\n+++ b/pytorch_lightning/loggers/wandb.py\n@@ -101,14 +101,6 @@\n metrics['global_step'] = step\n self.experiment.log(metrics)\n \n- @rank_zero_only\n- def finalize(self, status: str = 'success') -> None:\n- try:\n- exit_code = 0 if status == 'success' else 1\n- wandb.join(exit_code)\n- except TypeError:\n- wandb.join()\n-\n @property\n def name(self) -> str:\n return self.experiment.project_name()\n", "issue": "Wandb logger doesn't upload saved model checkpoint for final epoch\n## \ud83d\udc1b Bug\r\n\r\nWhen training a model on the TPU and using the wandb logger, the checkpoint for the last epoch trained doesn't get uploaded to wandb.\r\n\r\n### To Reproduce\r\n\r\nColab notebook: https://colab.research.google.com/drive/1oPaRWGZcz6YEol012xFADN42LV-jowtT\n", "before_files": [{"content": "r\"\"\"\n\n.. _wandb:\n\nWandbLogger\n-------------\n\"\"\"\nimport os\nfrom argparse import Namespace\nfrom typing import Optional, List, Dict, Union, Any\n\nimport torch.nn as nn\n\ntry:\n import wandb\n from wandb.wandb_run import Run\nexcept ImportError: # pragma: no-cover\n raise ImportError('You want to use `wandb` logger which is not installed yet,' # pragma: no-cover\n ' install it with `pip install wandb`.')\n\nfrom pytorch_lightning.loggers.base import LightningLoggerBase, rank_zero_only\n\n\nclass WandbLogger(LightningLoggerBase):\n \"\"\"\n Logger for `W&B <https://www.wandb.com/>`_.\n\n Args:\n name (str): display name for the run.\n save_dir (str): path where data is saved.\n offline (bool): run offline (data can be streamed later to wandb servers).\n id or version (str): sets the version, mainly used to resume a previous run.\n anonymous (bool): enables or explicitly disables anonymous logging.\n project (str): the name of the project to which this run will belong.\n tags (list of str): tags associated with this run.\n\n Example\n --------\n .. code-block:: python\n\n from pytorch_lightning.loggers import WandbLogger\n from pytorch_lightning import Trainer\n\n wandb_logger = WandbLogger()\n trainer = Trainer(logger=wandb_logger)\n \"\"\"\n\n def __init__(self, name: Optional[str] = None, save_dir: Optional[str] = None,\n offline: bool = False, id: Optional[str] = None, anonymous: bool = False,\n version: Optional[str] = None, project: Optional[str] = None,\n tags: Optional[List[str]] = None, experiment=None, entity=None):\n super().__init__()\n self._name = name\n self._save_dir = save_dir\n self._anonymous = 'allow' if anonymous else None\n self._id = version or id\n self._tags = tags\n self._project = project\n self._experiment = experiment\n self._offline = offline\n self._entity = entity\n\n def __getstate__(self):\n state = self.__dict__.copy()\n # cannot be pickled\n state['_experiment'] = None\n # args needed to reload correct experiment\n state['_id'] = self.experiment.id\n return state\n\n @property\n def experiment(self) -> Run:\n r\"\"\"\n\n Actual wandb object. To use wandb features do the following.\n\n Example::\n\n self.logger.experiment.some_wandb_function()\n\n \"\"\"\n if self._experiment is None:\n if self._offline:\n os.environ['WANDB_MODE'] = 'dryrun'\n self._experiment = wandb.init(\n name=self._name, dir=self._save_dir, project=self._project, anonymous=self._anonymous,\n id=self._id, resume='allow', tags=self._tags, entity=self._entity)\n return self._experiment\n\n def watch(self, model: nn.Module, log: str = 'gradients', log_freq: int = 100):\n wandb.watch(model, log=log, log_freq=log_freq)\n\n @rank_zero_only\n def log_hyperparams(self, params: Union[Dict[str, Any], Namespace]) -> None:\n params = self._convert_params(params)\n self.experiment.config.update(params)\n\n @rank_zero_only\n def log_metrics(self, metrics: Dict[str, float], step: Optional[int] = None) -> None:\n if step is not None:\n metrics['global_step'] = step\n self.experiment.log(metrics)\n\n @rank_zero_only\n def finalize(self, status: str = 'success') -> None:\n try:\n exit_code = 0 if status == 'success' else 1\n wandb.join(exit_code)\n except TypeError:\n wandb.join()\n\n @property\n def name(self) -> str:\n return self.experiment.project_name()\n\n @property\n def version(self) -> str:\n return self.experiment.id\n", "path": "pytorch_lightning/loggers/wandb.py"}]} | 1,804 | 169 |
gh_patches_debug_35487 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3129 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider jiffylube is broken
During the global build at 2021-09-01-14-42-16, spider **jiffylube** failed with **0 features** and **49 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-09-01-14-42-16/logs/jiffylube.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-09-01-14-42-16/output/jiffylube.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-09-01-14-42-16/output/jiffylube.geojson))
</issue>
<code>
[start of locations/spiders/jiffylube.py]
1 # -*- coding: utf-8 -*-
2 import json
3
4 import scrapy
5
6 from locations.items import GeojsonPointItem
7 from locations.hours import OpeningHours
8
9
10 STATES = [
11 'AL', 'AK', 'AS', 'AZ', 'AR', 'CA', 'CO', 'CT', 'DE', 'DC', 'FM', 'FL',
12 'GA', 'GU', 'HI', 'ID', 'IL', 'IN', 'IA', 'KS', 'KY', 'LA', 'ME', 'MH',
13 'MD', 'MA', 'MI', 'MN', 'MS', 'MO', 'MT', 'NE', 'NV', 'NH', 'NJ', 'NM',
14 'NY', 'NC', 'ND', 'MP', 'OH', 'OK', 'OR', 'PW', 'PA', 'PR', 'RI', 'SC',
15 'SD', 'TN', 'TX', 'UT', 'VT', 'VI', 'VA', 'WA', 'WV', 'WI', 'WY'
16 ]
17
18 DAY_MAPPING = {
19 'Monday': 'Mo',
20 'Tuesday': 'Tu',
21 'Wednesday': 'We',
22 'Thursday': 'Th',
23 'Friday': 'Fr',
24 'Saturday': 'Sa',
25 'Sunday': 'Su'
26 }
27
28 class JiffyLubeSpider(scrapy.Spider):
29 name = "jiffylube"
30 item_attributes = {'brand': "Jiffy Lube"}
31 allowed_domains = ["www.jiffylube.com"]
32
33 def start_requests(self):
34 template = 'https://www.jiffylube.com/api/locations?state={state}'
35
36 headers = {
37 'Accept': 'application/json',
38 }
39
40 for state in STATES:
41 yield scrapy.http.FormRequest(
42 url=template.format(state=state),
43 method='GET',
44 headers=headers,
45 callback=self.parse
46 )
47 def parse(self, response):
48 jsonresponse = json.loads(response.body_as_unicode())
49
50 for stores in jsonresponse:
51 store = json.dumps(stores)
52 store_data = json.loads(store)
53
54 properties = {
55 'name': store_data["nickname"],
56 'ref': store_data["id"],
57 'addr_full': store_data["address"],
58 'city': store_data["city"],
59 'state': store_data["state"],
60 'postcode': store_data["postal_code"].strip(),
61 'country': store_data["country"],
62 'phone': store_data["phone_main"],
63 'lat': float(store_data["coordinates"]["latitude"]),
64 'lon': float(store_data["coordinates"]["longitude"]),
65 'website': "https://www.jiffylube.com{}".format(store_data["_links"]["_self"])
66 }
67
68 hours = store_data["hours_schema"]
69
70 if hours:
71 properties['opening_hours'] = self.process_hours(hours)
72
73 yield GeojsonPointItem(**properties)
74
75 def process_hours(self, hours):
76 opening_hours = OpeningHours()
77
78 for hour in hours:
79 day = hour["name"]
80 open_time = hour["time_open"]
81 close_time = hour["time_close"]
82
83 opening_hours.add_range(day=DAY_MAPPING[day], open_time=open_time, close_time=close_time,
84 time_format='%H:%M')
85 return opening_hours.as_opening_hours()
[end of locations/spiders/jiffylube.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/jiffylube.py b/locations/spiders/jiffylube.py
--- a/locations/spiders/jiffylube.py
+++ b/locations/spiders/jiffylube.py
@@ -29,30 +29,27 @@
name = "jiffylube"
item_attributes = {'brand': "Jiffy Lube"}
allowed_domains = ["www.jiffylube.com"]
+ start_urls = (
+ 'https://www.jiffylube.com/api/locations',
+ )
- def start_requests(self):
- template = 'https://www.jiffylube.com/api/locations?state={state}'
- headers = {
- 'Accept': 'application/json',
- }
-
- for state in STATES:
- yield scrapy.http.FormRequest(
- url=template.format(state=state),
- method='GET',
- headers=headers,
- callback=self.parse
- )
def parse(self, response):
- jsonresponse = json.loads(response.body_as_unicode())
+ stores = json.loads(response.text)
+
+
+ for store in stores:
+ store_url = "https://www.jiffylube.com/api" + store["_links"]["_self"]
+ yield scrapy.Request(
+ store_url,
+ callback=self.parse_store
+ )
- for stores in jsonresponse:
- store = json.dumps(stores)
- store_data = json.loads(store)
+
+ def parse_store(self, response):
+ store_data = json.loads(response.text)
properties = {
- 'name': store_data["nickname"],
'ref': store_data["id"],
'addr_full': store_data["address"],
'city': store_data["city"],
@@ -64,22 +61,5 @@
'lon': float(store_data["coordinates"]["longitude"]),
'website': "https://www.jiffylube.com{}".format(store_data["_links"]["_self"])
}
-
- hours = store_data["hours_schema"]
-
- if hours:
- properties['opening_hours'] = self.process_hours(hours)
-
+
yield GeojsonPointItem(**properties)
-
- def process_hours(self, hours):
- opening_hours = OpeningHours()
-
- for hour in hours:
- day = hour["name"]
- open_time = hour["time_open"]
- close_time = hour["time_close"]
-
- opening_hours.add_range(day=DAY_MAPPING[day], open_time=open_time, close_time=close_time,
- time_format='%H:%M')
- return opening_hours.as_opening_hours()
\ No newline at end of file
| {"golden_diff": "diff --git a/locations/spiders/jiffylube.py b/locations/spiders/jiffylube.py\n--- a/locations/spiders/jiffylube.py\n+++ b/locations/spiders/jiffylube.py\n@@ -29,30 +29,27 @@\n name = \"jiffylube\"\n item_attributes = {'brand': \"Jiffy Lube\"}\n allowed_domains = [\"www.jiffylube.com\"]\n+ start_urls = (\n+ 'https://www.jiffylube.com/api/locations',\n+ )\n \n- def start_requests(self):\n- template = 'https://www.jiffylube.com/api/locations?state={state}'\n \n- headers = {\n- 'Accept': 'application/json',\n- }\n-\n- for state in STATES:\n- yield scrapy.http.FormRequest(\n- url=template.format(state=state),\n- method='GET',\n- headers=headers,\n- callback=self.parse\n- )\n def parse(self, response):\n- jsonresponse = json.loads(response.body_as_unicode())\n+ stores = json.loads(response.text)\n+ \n+\n+ for store in stores:\n+ store_url = \"https://www.jiffylube.com/api\" + store[\"_links\"][\"_self\"]\n+ yield scrapy.Request(\n+ store_url,\n+ callback=self.parse_store\n+ )\n \n- for stores in jsonresponse:\n- store = json.dumps(stores)\n- store_data = json.loads(store)\n+\n+ def parse_store(self, response):\n+ store_data = json.loads(response.text)\n \n properties = {\n- 'name': store_data[\"nickname\"],\n 'ref': store_data[\"id\"],\n 'addr_full': store_data[\"address\"],\n 'city': store_data[\"city\"],\n@@ -64,22 +61,5 @@\n 'lon': float(store_data[\"coordinates\"][\"longitude\"]),\n 'website': \"https://www.jiffylube.com{}\".format(store_data[\"_links\"][\"_self\"])\n }\n-\n- hours = store_data[\"hours_schema\"]\n-\n- if hours:\n- properties['opening_hours'] = self.process_hours(hours)\n-\n+ \n yield GeojsonPointItem(**properties)\n-\n- def process_hours(self, hours):\n- opening_hours = OpeningHours()\n-\n- for hour in hours:\n- day = hour[\"name\"]\n- open_time = hour[\"time_open\"]\n- close_time = hour[\"time_close\"]\n-\n- opening_hours.add_range(day=DAY_MAPPING[day], open_time=open_time, close_time=close_time,\n- time_format='%H:%M')\n- return opening_hours.as_opening_hours()\n\\ No newline at end of file\n", "issue": "Spider jiffylube is broken\nDuring the global build at 2021-09-01-14-42-16, spider **jiffylube** failed with **0 features** and **49 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-09-01-14-42-16/logs/jiffylube.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-09-01-14-42-16/output/jiffylube.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-09-01-14-42-16/output/jiffylube.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport json\n\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nSTATES = [\n 'AL', 'AK', 'AS', 'AZ', 'AR', 'CA', 'CO', 'CT', 'DE', 'DC', 'FM', 'FL',\n 'GA', 'GU', 'HI', 'ID', 'IL', 'IN', 'IA', 'KS', 'KY', 'LA', 'ME', 'MH',\n 'MD', 'MA', 'MI', 'MN', 'MS', 'MO', 'MT', 'NE', 'NV', 'NH', 'NJ', 'NM',\n 'NY', 'NC', 'ND', 'MP', 'OH', 'OK', 'OR', 'PW', 'PA', 'PR', 'RI', 'SC',\n 'SD', 'TN', 'TX', 'UT', 'VT', 'VI', 'VA', 'WA', 'WV', 'WI', 'WY'\n]\n\nDAY_MAPPING = {\n 'Monday': 'Mo',\n 'Tuesday': 'Tu',\n 'Wednesday': 'We',\n 'Thursday': 'Th',\n 'Friday': 'Fr',\n 'Saturday': 'Sa',\n 'Sunday': 'Su'\n}\n\nclass JiffyLubeSpider(scrapy.Spider):\n name = \"jiffylube\"\n item_attributes = {'brand': \"Jiffy Lube\"}\n allowed_domains = [\"www.jiffylube.com\"]\n\n def start_requests(self):\n template = 'https://www.jiffylube.com/api/locations?state={state}'\n\n headers = {\n 'Accept': 'application/json',\n }\n\n for state in STATES:\n yield scrapy.http.FormRequest(\n url=template.format(state=state),\n method='GET',\n headers=headers,\n callback=self.parse\n )\n def parse(self, response):\n jsonresponse = json.loads(response.body_as_unicode())\n\n for stores in jsonresponse:\n store = json.dumps(stores)\n store_data = json.loads(store)\n\n properties = {\n 'name': store_data[\"nickname\"],\n 'ref': store_data[\"id\"],\n 'addr_full': store_data[\"address\"],\n 'city': store_data[\"city\"],\n 'state': store_data[\"state\"],\n 'postcode': store_data[\"postal_code\"].strip(),\n 'country': store_data[\"country\"],\n 'phone': store_data[\"phone_main\"],\n 'lat': float(store_data[\"coordinates\"][\"latitude\"]),\n 'lon': float(store_data[\"coordinates\"][\"longitude\"]),\n 'website': \"https://www.jiffylube.com{}\".format(store_data[\"_links\"][\"_self\"])\n }\n\n hours = store_data[\"hours_schema\"]\n\n if hours:\n properties['opening_hours'] = self.process_hours(hours)\n\n yield GeojsonPointItem(**properties)\n\n def process_hours(self, hours):\n opening_hours = OpeningHours()\n\n for hour in hours:\n day = hour[\"name\"]\n open_time = hour[\"time_open\"]\n close_time = hour[\"time_close\"]\n\n opening_hours.add_range(day=DAY_MAPPING[day], open_time=open_time, close_time=close_time,\n time_format='%H:%M')\n return opening_hours.as_opening_hours()", "path": "locations/spiders/jiffylube.py"}]} | 1,598 | 587 |
gh_patches_debug_31028 | rasdani/github-patches | git_diff | pretix__pretix-346 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Log old email when changing an order's email address
… because otherwise it's completely lost.
</issue>
<code>
[start of src/pretix/control/logdisplay.py]
1 import json
2 from decimal import Decimal
3
4 from django.dispatch import receiver
5 from django.utils import formats
6 from django.utils.translation import ugettext_lazy as _
7
8 from pretix.base.models import Event, ItemVariation, LogEntry
9 from pretix.base.signals import logentry_display
10
11
12 def _display_order_changed(event: Event, logentry: LogEntry):
13 data = json.loads(logentry.data)
14
15 text = _('The order has been changed:')
16 if logentry.action_type == 'pretix.event.order.changed.item':
17 old_item = str(event.items.get(pk=data['old_item']))
18 if data['old_variation']:
19 old_item += ' - ' + str(event.itemvariations.get(pk=data['old_variation']))
20 new_item = str(event.items.get(pk=data['new_item']))
21 if data['new_variation']:
22 new_item += ' - ' + str(event.itemvariations.get(pk=data['new_variation']))
23 return text + ' ' + _('{old_item} ({old_price} {currency}) changed to {new_item} ({new_price} {currency}).').format(
24 old_item=old_item, new_item=new_item,
25 old_price=formats.localize(Decimal(data['old_price'])),
26 new_price=formats.localize(Decimal(data['new_price'])),
27 currency=event.currency
28 )
29 elif logentry.action_type == 'pretix.event.order.changed.price':
30 return text + ' ' + _('Price of a position changed from {old_price} {currency} to {new_price} {currency}.').format(
31 old_price=formats.localize(Decimal(data['old_price'])),
32 new_price=formats.localize(Decimal(data['new_price'])),
33 currency=event.currency
34 )
35 elif logentry.action_type == 'pretix.event.order.changed.cancel':
36 old_item = str(event.items.get(pk=data['old_item']))
37 if data['old_variation']:
38 old_item += ' - ' + str(ItemVariation.objects.get(pk=data['old_variation']))
39 return text + ' ' + _('{old_item} ({old_price} {currency}) removed.').format(
40 old_item=old_item,
41 old_price=formats.localize(Decimal(data['old_price'])),
42 currency=event.currency
43 )
44
45
46 @receiver(signal=logentry_display, dispatch_uid="pretixcontrol_logentry_display")
47 def pretixcontrol_logentry_display(sender: Event, logentry: LogEntry, **kwargs):
48 plains = {
49 'pretix.event.order.modified': _('The order details have been modified.'),
50 'pretix.event.order.unpaid': _('The order has been marked as unpaid.'),
51 'pretix.event.order.resend': _('The link to the order detail page has been resent to the user.'),
52 'pretix.event.order.expirychanged': _('The order\'s expiry date has been changed.'),
53 'pretix.event.order.expired': _('The order has been marked as expired.'),
54 'pretix.event.order.paid': _('The order has been marked as paid.'),
55 'pretix.event.order.refunded': _('The order has been refunded.'),
56 'pretix.event.order.canceled': _('The order has been canceled.'),
57 'pretix.event.order.placed': _('The order has been created.'),
58 'pretix.event.order.invoice.generated': _('The invoice has been generated.'),
59 'pretix.event.order.invoice.regenerated': _('The invoice has been regenerated.'),
60 'pretix.event.order.invoice.reissued': _('The invoice has been reissued.'),
61 'pretix.event.order.comment': _('The order\'s internal comment has been updated.'),
62 'pretix.event.order.contact.changed': _('The email address has been changed.'),
63 'pretix.event.order.payment.changed': _('The payment method has been changed.'),
64 'pretix.event.order.expire_warning_sent': _('An email has been sent with a warning that the order is about to expire.'),
65 'pretix.user.settings.2fa.enabled': _('Two-factor authentication has been enabled.'),
66 'pretix.user.settings.2fa.disabled': _('Two-factor authentication has been disabled.'),
67 'pretix.user.settings.2fa.regenemergency': _('Your two-factor emergency codes have been regenerated.'),
68 'pretix.control.auth.user.forgot_password.mail_sent': _('Password reset mail sent.'),
69 'pretix.control.auth.user.forgot_password.recovered': _('The password has been reset.')
70
71 }
72 if logentry.action_type in plains:
73 return plains[logentry.action_type]
74
75 if logentry.action_type.startswith('pretix.event.order.changed'):
76 return _display_order_changed(sender, logentry)
77
78 if logentry.action_type == 'pretix.user.settings.2fa.device.added':
79 data = json.loads(logentry.data)
80 return _('A new two-factor authentication device "{name}" has been added to your account.').format(
81 name=data['name']
82 )
83 if logentry.action_type == 'pretix.user.settings.2fa.device.deleted':
84 data = json.loads(logentry.data)
85 return _('The two-factor authentication device "{name}" has been removed from your account.').format(
86 name=data['name']
87 )
88 if logentry.action_type == 'pretix.user.settings.changed':
89 data = json.loads(logentry.data)
90 text = str(_('Your account settings have been changed.'))
91 if 'email' in data:
92 text = text + ' ' + str(_('Your email address has been changed to {email}.').format(email=data['email']))
93 if 'new_pw' in data:
94 text = text + ' ' + str(_('Your password has been changed.'))
95 return text
96
[end of src/pretix/control/logdisplay.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/pretix/control/logdisplay.py b/src/pretix/control/logdisplay.py
--- a/src/pretix/control/logdisplay.py
+++ b/src/pretix/control/logdisplay.py
@@ -59,7 +59,6 @@
'pretix.event.order.invoice.regenerated': _('The invoice has been regenerated.'),
'pretix.event.order.invoice.reissued': _('The invoice has been reissued.'),
'pretix.event.order.comment': _('The order\'s internal comment has been updated.'),
- 'pretix.event.order.contact.changed': _('The email address has been changed.'),
'pretix.event.order.payment.changed': _('The payment method has been changed.'),
'pretix.event.order.expire_warning_sent': _('An email has been sent with a warning that the order is about to expire.'),
'pretix.user.settings.2fa.enabled': _('Two-factor authentication has been enabled.'),
@@ -75,6 +74,13 @@
if logentry.action_type.startswith('pretix.event.order.changed'):
return _display_order_changed(sender, logentry)
+ if logentry.action_type.startswith('pretix.event.order.contact.changed'):
+ data = json.loads(logentry.data)
+ return _('The email address has been changed from "{old}" to "{new}".').format(
+ old=data['old_email'],
+ new=data['new_email'],
+ )
+
if logentry.action_type == 'pretix.user.settings.2fa.device.added':
data = json.loads(logentry.data)
return _('A new two-factor authentication device "{name}" has been added to your account.').format(
| {"golden_diff": "diff --git a/src/pretix/control/logdisplay.py b/src/pretix/control/logdisplay.py\n--- a/src/pretix/control/logdisplay.py\n+++ b/src/pretix/control/logdisplay.py\n@@ -59,7 +59,6 @@\n 'pretix.event.order.invoice.regenerated': _('The invoice has been regenerated.'),\n 'pretix.event.order.invoice.reissued': _('The invoice has been reissued.'),\n 'pretix.event.order.comment': _('The order\\'s internal comment has been updated.'),\n- 'pretix.event.order.contact.changed': _('The email address has been changed.'),\n 'pretix.event.order.payment.changed': _('The payment method has been changed.'),\n 'pretix.event.order.expire_warning_sent': _('An email has been sent with a warning that the order is about to expire.'),\n 'pretix.user.settings.2fa.enabled': _('Two-factor authentication has been enabled.'),\n@@ -75,6 +74,13 @@\n if logentry.action_type.startswith('pretix.event.order.changed'):\n return _display_order_changed(sender, logentry)\n \n+ if logentry.action_type.startswith('pretix.event.order.contact.changed'):\n+ data = json.loads(logentry.data)\n+ return _('The email address has been changed from \"{old}\" to \"{new}\".').format(\n+ old=data['old_email'],\n+ new=data['new_email'],\n+ )\n+\n if logentry.action_type == 'pretix.user.settings.2fa.device.added':\n data = json.loads(logentry.data)\n return _('A new two-factor authentication device \"{name}\" has been added to your account.').format(\n", "issue": "Log old email when changing an order's email address\n\u2026 because otherwise it's completely lost.\n", "before_files": [{"content": "import json\nfrom decimal import Decimal\n\nfrom django.dispatch import receiver\nfrom django.utils import formats\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom pretix.base.models import Event, ItemVariation, LogEntry\nfrom pretix.base.signals import logentry_display\n\n\ndef _display_order_changed(event: Event, logentry: LogEntry):\n data = json.loads(logentry.data)\n\n text = _('The order has been changed:')\n if logentry.action_type == 'pretix.event.order.changed.item':\n old_item = str(event.items.get(pk=data['old_item']))\n if data['old_variation']:\n old_item += ' - ' + str(event.itemvariations.get(pk=data['old_variation']))\n new_item = str(event.items.get(pk=data['new_item']))\n if data['new_variation']:\n new_item += ' - ' + str(event.itemvariations.get(pk=data['new_variation']))\n return text + ' ' + _('{old_item} ({old_price} {currency}) changed to {new_item} ({new_price} {currency}).').format(\n old_item=old_item, new_item=new_item,\n old_price=formats.localize(Decimal(data['old_price'])),\n new_price=formats.localize(Decimal(data['new_price'])),\n currency=event.currency\n )\n elif logentry.action_type == 'pretix.event.order.changed.price':\n return text + ' ' + _('Price of a position changed from {old_price} {currency} to {new_price} {currency}.').format(\n old_price=formats.localize(Decimal(data['old_price'])),\n new_price=formats.localize(Decimal(data['new_price'])),\n currency=event.currency\n )\n elif logentry.action_type == 'pretix.event.order.changed.cancel':\n old_item = str(event.items.get(pk=data['old_item']))\n if data['old_variation']:\n old_item += ' - ' + str(ItemVariation.objects.get(pk=data['old_variation']))\n return text + ' ' + _('{old_item} ({old_price} {currency}) removed.').format(\n old_item=old_item,\n old_price=formats.localize(Decimal(data['old_price'])),\n currency=event.currency\n )\n\n\n@receiver(signal=logentry_display, dispatch_uid=\"pretixcontrol_logentry_display\")\ndef pretixcontrol_logentry_display(sender: Event, logentry: LogEntry, **kwargs):\n plains = {\n 'pretix.event.order.modified': _('The order details have been modified.'),\n 'pretix.event.order.unpaid': _('The order has been marked as unpaid.'),\n 'pretix.event.order.resend': _('The link to the order detail page has been resent to the user.'),\n 'pretix.event.order.expirychanged': _('The order\\'s expiry date has been changed.'),\n 'pretix.event.order.expired': _('The order has been marked as expired.'),\n 'pretix.event.order.paid': _('The order has been marked as paid.'),\n 'pretix.event.order.refunded': _('The order has been refunded.'),\n 'pretix.event.order.canceled': _('The order has been canceled.'),\n 'pretix.event.order.placed': _('The order has been created.'),\n 'pretix.event.order.invoice.generated': _('The invoice has been generated.'),\n 'pretix.event.order.invoice.regenerated': _('The invoice has been regenerated.'),\n 'pretix.event.order.invoice.reissued': _('The invoice has been reissued.'),\n 'pretix.event.order.comment': _('The order\\'s internal comment has been updated.'),\n 'pretix.event.order.contact.changed': _('The email address has been changed.'),\n 'pretix.event.order.payment.changed': _('The payment method has been changed.'),\n 'pretix.event.order.expire_warning_sent': _('An email has been sent with a warning that the order is about to expire.'),\n 'pretix.user.settings.2fa.enabled': _('Two-factor authentication has been enabled.'),\n 'pretix.user.settings.2fa.disabled': _('Two-factor authentication has been disabled.'),\n 'pretix.user.settings.2fa.regenemergency': _('Your two-factor emergency codes have been regenerated.'),\n 'pretix.control.auth.user.forgot_password.mail_sent': _('Password reset mail sent.'),\n 'pretix.control.auth.user.forgot_password.recovered': _('The password has been reset.')\n\n }\n if logentry.action_type in plains:\n return plains[logentry.action_type]\n\n if logentry.action_type.startswith('pretix.event.order.changed'):\n return _display_order_changed(sender, logentry)\n\n if logentry.action_type == 'pretix.user.settings.2fa.device.added':\n data = json.loads(logentry.data)\n return _('A new two-factor authentication device \"{name}\" has been added to your account.').format(\n name=data['name']\n )\n if logentry.action_type == 'pretix.user.settings.2fa.device.deleted':\n data = json.loads(logentry.data)\n return _('The two-factor authentication device \"{name}\" has been removed from your account.').format(\n name=data['name']\n )\n if logentry.action_type == 'pretix.user.settings.changed':\n data = json.loads(logentry.data)\n text = str(_('Your account settings have been changed.'))\n if 'email' in data:\n text = text + ' ' + str(_('Your email address has been changed to {email}.').format(email=data['email']))\n if 'new_pw' in data:\n text = text + ' ' + str(_('Your password has been changed.'))\n return text\n", "path": "src/pretix/control/logdisplay.py"}]} | 1,914 | 349 |
gh_patches_debug_58561 | rasdani/github-patches | git_diff | codespell-project__codespell-86 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
codespell.py does nothng if [fileN] is not specified
Previously running `codespell` without file parameter starts the check. Now `codespell.py` does nothing. The behavior should stay the same as before - if file/dir argument is not specefied then current directory should be used as a default parameter.
</issue>
<code>
[start of bin/codespell.py]
1 #!/usr/bin/env python
2
3 import sys
4
5 if __name__ == '__main__':
6 import codespell_lib
7 sys.exit(codespell_lib.main(*sys.argv))
8
[end of bin/codespell.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bin/codespell.py b/bin/codespell.py
--- a/bin/codespell.py
+++ b/bin/codespell.py
@@ -4,4 +4,4 @@
if __name__ == '__main__':
import codespell_lib
- sys.exit(codespell_lib.main(*sys.argv))
+ sys.exit(codespell_lib.main(*sys.argv[1:]))
| {"golden_diff": "diff --git a/bin/codespell.py b/bin/codespell.py\n--- a/bin/codespell.py\n+++ b/bin/codespell.py\n@@ -4,4 +4,4 @@\n \n if __name__ == '__main__':\n import codespell_lib\n- sys.exit(codespell_lib.main(*sys.argv))\n+ sys.exit(codespell_lib.main(*sys.argv[1:]))\n", "issue": "codespell.py does nothng if [fileN] is not specified\nPreviously running `codespell` without file parameter starts the check. Now `codespell.py` does nothing. The behavior should stay the same as before - if file/dir argument is not specefied then current directory should be used as a default parameter.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport sys\n\nif __name__ == '__main__':\n import codespell_lib\n sys.exit(codespell_lib.main(*sys.argv))\n", "path": "bin/codespell.py"}]} | 649 | 87 |
gh_patches_debug_6744 | rasdani/github-patches | git_diff | bridgecrewio__checkov-3626 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CKV_DOCKER_10 mistakes quoted absolute paths for relative paths
**Describe the issue**
CKV_DOCKER_10 mistakes quoted absolute paths for relative paths.
**Examples**
```
cat << EOF > Dockerfile
FROM alpine:3.16
WORKDIR "/app"
EOF
checkov --check CKV_DOCKER_10 --file Dockerfile
```

**Version (please complete the following information):**
2.1.258
</issue>
<code>
[start of checkov/dockerfile/checks/WorkdirIsAbsolute.py]
1 from __future__ import annotations
2
3 import re
4
5 from checkov.common.models.enums import CheckCategories, CheckResult
6 from checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck
7
8 ISABSOLUTE = re.compile("(^/[A-Za-z0-9-_+]*)|(^[A-Za-z0-9-_+]:\\\\.*)|(^\\$[{}A-Za-z0-9-_+].*)")
9
10
11 class WorkdirIsAbsolute(BaseDockerfileCheck):
12 def __init__(self) -> None:
13 """
14 For clarity and reliability, you should always use absolute paths for your WORKDIR.
15 """
16 name = "Ensure that WORKDIR values are absolute paths"
17 id = "CKV_DOCKER_10"
18 supported_instructions = ("WORKDIR",)
19 categories = (CheckCategories.CONVENTION,)
20 super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)
21
22 def scan_entity_conf(self, conf: list[dict[str, int | str]]) -> tuple[CheckResult, list[dict[str, int | str]] | None]:
23 workdirs = []
24 for workdir in conf:
25 path = workdir["value"]
26 if not re.match(ISABSOLUTE, path):
27 workdirs.append(workdir)
28
29 if workdirs:
30 return CheckResult.FAILED, workdirs
31
32 return CheckResult.PASSED, None
33
34
35 check = WorkdirIsAbsolute()
36
[end of checkov/dockerfile/checks/WorkdirIsAbsolute.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/dockerfile/checks/WorkdirIsAbsolute.py b/checkov/dockerfile/checks/WorkdirIsAbsolute.py
--- a/checkov/dockerfile/checks/WorkdirIsAbsolute.py
+++ b/checkov/dockerfile/checks/WorkdirIsAbsolute.py
@@ -5,7 +5,7 @@
from checkov.common.models.enums import CheckCategories, CheckResult
from checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck
-ISABSOLUTE = re.compile("(^/[A-Za-z0-9-_+]*)|(^[A-Za-z0-9-_+]:\\\\.*)|(^\\$[{}A-Za-z0-9-_+].*)")
+ISABSOLUTE = re.compile("^\"?((/[A-Za-z0-9-_+]*)|([A-Za-z0-9-_+]:\\\\.*)|(\\$[{}A-Za-z0-9-_+].*))")
class WorkdirIsAbsolute(BaseDockerfileCheck):
| {"golden_diff": "diff --git a/checkov/dockerfile/checks/WorkdirIsAbsolute.py b/checkov/dockerfile/checks/WorkdirIsAbsolute.py\n--- a/checkov/dockerfile/checks/WorkdirIsAbsolute.py\n+++ b/checkov/dockerfile/checks/WorkdirIsAbsolute.py\n@@ -5,7 +5,7 @@\n from checkov.common.models.enums import CheckCategories, CheckResult\n from checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck\n \n-ISABSOLUTE = re.compile(\"(^/[A-Za-z0-9-_+]*)|(^[A-Za-z0-9-_+]:\\\\\\\\.*)|(^\\\\$[{}A-Za-z0-9-_+].*)\")\n+ISABSOLUTE = re.compile(\"^\\\"?((/[A-Za-z0-9-_+]*)|([A-Za-z0-9-_+]:\\\\\\\\.*)|(\\\\$[{}A-Za-z0-9-_+].*))\")\n \n \n class WorkdirIsAbsolute(BaseDockerfileCheck):\n", "issue": "CKV_DOCKER_10 mistakes quoted absolute paths for relative paths\n**Describe the issue**\r\nCKV_DOCKER_10 mistakes quoted absolute paths for relative paths.\r\n\r\n**Examples**\r\n```\r\ncat << EOF > Dockerfile\r\nFROM alpine:3.16\r\nWORKDIR \"/app\"\r\nEOF\r\n\r\ncheckov --check CKV_DOCKER_10 --file Dockerfile\r\n```\r\n\r\n\r\n\r\n**Version (please complete the following information):**\r\n2.1.258\r\n\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport re\n\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.dockerfile.base_dockerfile_check import BaseDockerfileCheck\n\nISABSOLUTE = re.compile(\"(^/[A-Za-z0-9-_+]*)|(^[A-Za-z0-9-_+]:\\\\\\\\.*)|(^\\\\$[{}A-Za-z0-9-_+].*)\")\n\n\nclass WorkdirIsAbsolute(BaseDockerfileCheck):\n def __init__(self) -> None:\n \"\"\"\n For clarity and reliability, you should always use absolute paths for your WORKDIR.\n \"\"\"\n name = \"Ensure that WORKDIR values are absolute paths\"\n id = \"CKV_DOCKER_10\"\n supported_instructions = (\"WORKDIR\",)\n categories = (CheckCategories.CONVENTION,)\n super().__init__(name=name, id=id, categories=categories, supported_instructions=supported_instructions)\n\n def scan_entity_conf(self, conf: list[dict[str, int | str]]) -> tuple[CheckResult, list[dict[str, int | str]] | None]:\n workdirs = []\n for workdir in conf:\n path = workdir[\"value\"]\n if not re.match(ISABSOLUTE, path):\n workdirs.append(workdir)\n\n if workdirs:\n return CheckResult.FAILED, workdirs\n\n return CheckResult.PASSED, None\n\n\ncheck = WorkdirIsAbsolute()\n", "path": "checkov/dockerfile/checks/WorkdirIsAbsolute.py"}]} | 1,094 | 211 |
gh_patches_debug_2508 | rasdani/github-patches | git_diff | coala__coala-6088 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Small typo in coalib/output/printers/LogPrinter.py
Should read responsibility instead of reponsibility.
</issue>
<code>
[start of coalib/output/printers/LogPrinter.py]
1 import traceback
2 import logging
3
4 from coalib.output.printers.LOG_LEVEL import LOG_LEVEL
5 from coalib.processes.communication.LogMessage import LogMessage
6
7
8 class LogPrinterMixin:
9 """
10 Provides access to the logging interfaces (e.g. err, warn, info) by routing
11 them to the log_message method, which should be implemented by descendants
12 of this class.
13 """
14
15 def debug(self, *messages, delimiter=' ', timestamp=None, **kwargs):
16 self.log_message(LogMessage(LOG_LEVEL.DEBUG,
17 *messages,
18 delimiter=delimiter,
19 timestamp=timestamp),
20 **kwargs)
21
22 def info(self, *messages, delimiter=' ', timestamp=None, **kwargs):
23 self.log_message(LogMessage(LOG_LEVEL.INFO,
24 *messages,
25 delimiter=delimiter,
26 timestamp=timestamp),
27 **kwargs)
28
29 def warn(self, *messages, delimiter=' ', timestamp=None, **kwargs):
30 self.log_message(LogMessage(LOG_LEVEL.WARNING,
31 *messages,
32 delimiter=delimiter,
33 timestamp=timestamp),
34 **kwargs)
35
36 def err(self, *messages, delimiter=' ', timestamp=None, **kwargs):
37 self.log_message(LogMessage(LOG_LEVEL.ERROR,
38 *messages,
39 delimiter=delimiter,
40 timestamp=timestamp),
41 **kwargs)
42
43 def log(self, log_level, message, timestamp=None, **kwargs):
44 self.log_message(LogMessage(log_level,
45 message,
46 timestamp=timestamp),
47 **kwargs)
48
49 def log_exception(self,
50 message,
51 exception,
52 log_level=LOG_LEVEL.ERROR,
53 timestamp=None,
54 **kwargs):
55 """
56 If the log_level of the printer is greater than DEBUG, it prints
57 only the message. If it is DEBUG or lower, it shows the message
58 along with the traceback of the exception.
59
60 :param message: The message to print.
61 :param exception: The exception to print.
62 :param log_level: The log_level of this message (not used when
63 logging the traceback. Tracebacks always have
64 a level of DEBUG).
65 :param timestamp: The time at which this log occurred. Defaults to
66 the current time.
67 :param kwargs: Keyword arguments to be passed when logging the
68 message (not used when logging the traceback).
69 """
70 if not isinstance(exception, BaseException):
71 raise TypeError('log_exception can only log derivatives of '
72 'BaseException.')
73
74 traceback_str = '\n'.join(
75 traceback.format_exception(type(exception),
76 exception,
77 exception.__traceback__))
78
79 self.log(log_level, message, timestamp=timestamp, **kwargs)
80 self.log_message(
81 LogMessage(LOG_LEVEL.INFO,
82 'Exception was:' + '\n' + traceback_str,
83 timestamp=timestamp),
84 **kwargs)
85
86 def log_message(self, log_message, **kwargs):
87 """
88 It is your reponsibility to implement this method, if you're using this
89 mixin.
90 """
91 raise NotImplementedError
92
93
94 class LogPrinter(LogPrinterMixin):
95 """
96 This class is deprecated and will be soon removed. To get logger use
97 logging.getLogger(__name__). Make sure that you're getting it when the
98 logging configuration is loaded.
99
100 The LogPrinter class allows to print log messages to an underlying Printer.
101
102 This class is an adapter, means you can create a LogPrinter from every
103 existing Printer instance.
104 """
105
106 def __init__(self,
107 printer=None,
108 log_level=LOG_LEVEL.DEBUG,
109 timestamp_format='%X'):
110 """
111 Creates a new log printer from an existing Printer.
112
113 :param printer: The underlying Printer where log messages
114 shall be written to. If you inherit from
115 LogPrinter, set it to self.
116 :param log_level: The minimum log level, everything below will
117 not be logged.
118 :param timestamp_format: The format string for the
119 datetime.today().strftime(format) method.
120 """
121 self.logger = logging.getLogger()
122
123 self._printer = printer
124 self.log_level = log_level
125 self.timestamp_format = timestamp_format
126
127 @property
128 def log_level(self):
129 """
130 Returns current log_level used in logger.
131 """
132 return self.logger.getEffectiveLevel()
133
134 @log_level.setter
135 def log_level(self, log_level):
136 """
137 Sets log_level for logger.
138 """
139 self.logger.setLevel(log_level)
140
141 @property
142 def printer(self):
143 """
144 Returns the underlying printer where logs are printed to.
145 """
146 return self._printer
147
148 def log_message(self, log_message, **kwargs):
149 if not isinstance(log_message, LogMessage):
150 raise TypeError('log_message should be of type LogMessage.')
151 self.logger.log(log_message.log_level, log_message.message)
152
153 def __getstate__(self):
154 # on Windows there are problems with serializing loggers, so omit it
155 oldict = self.__dict__.copy()
156 del oldict['logger']
157 return oldict
158
159 def __setstate__(self, newdict):
160 self.__dict__.update(newdict)
161 # restore logger by name
162 self.logger = logging.getLogger()
163
[end of coalib/output/printers/LogPrinter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/coalib/output/printers/LogPrinter.py b/coalib/output/printers/LogPrinter.py
--- a/coalib/output/printers/LogPrinter.py
+++ b/coalib/output/printers/LogPrinter.py
@@ -85,7 +85,7 @@
def log_message(self, log_message, **kwargs):
"""
- It is your reponsibility to implement this method, if you're using this
+ It is your responsibility to implement this method, if you're using this
mixin.
"""
raise NotImplementedError
| {"golden_diff": "diff --git a/coalib/output/printers/LogPrinter.py b/coalib/output/printers/LogPrinter.py\n--- a/coalib/output/printers/LogPrinter.py\n+++ b/coalib/output/printers/LogPrinter.py\n@@ -85,7 +85,7 @@\n \n def log_message(self, log_message, **kwargs):\n \"\"\"\n- It is your reponsibility to implement this method, if you're using this\n+ It is your responsibility to implement this method, if you're using this\n mixin.\n \"\"\"\n raise NotImplementedError\n", "issue": "Small typo in coalib/output/printers/LogPrinter.py\nShould read responsibility instead of reponsibility.\r\n\n", "before_files": [{"content": "import traceback\nimport logging\n\nfrom coalib.output.printers.LOG_LEVEL import LOG_LEVEL\nfrom coalib.processes.communication.LogMessage import LogMessage\n\n\nclass LogPrinterMixin:\n \"\"\"\n Provides access to the logging interfaces (e.g. err, warn, info) by routing\n them to the log_message method, which should be implemented by descendants\n of this class.\n \"\"\"\n\n def debug(self, *messages, delimiter=' ', timestamp=None, **kwargs):\n self.log_message(LogMessage(LOG_LEVEL.DEBUG,\n *messages,\n delimiter=delimiter,\n timestamp=timestamp),\n **kwargs)\n\n def info(self, *messages, delimiter=' ', timestamp=None, **kwargs):\n self.log_message(LogMessage(LOG_LEVEL.INFO,\n *messages,\n delimiter=delimiter,\n timestamp=timestamp),\n **kwargs)\n\n def warn(self, *messages, delimiter=' ', timestamp=None, **kwargs):\n self.log_message(LogMessage(LOG_LEVEL.WARNING,\n *messages,\n delimiter=delimiter,\n timestamp=timestamp),\n **kwargs)\n\n def err(self, *messages, delimiter=' ', timestamp=None, **kwargs):\n self.log_message(LogMessage(LOG_LEVEL.ERROR,\n *messages,\n delimiter=delimiter,\n timestamp=timestamp),\n **kwargs)\n\n def log(self, log_level, message, timestamp=None, **kwargs):\n self.log_message(LogMessage(log_level,\n message,\n timestamp=timestamp),\n **kwargs)\n\n def log_exception(self,\n message,\n exception,\n log_level=LOG_LEVEL.ERROR,\n timestamp=None,\n **kwargs):\n \"\"\"\n If the log_level of the printer is greater than DEBUG, it prints\n only the message. If it is DEBUG or lower, it shows the message\n along with the traceback of the exception.\n\n :param message: The message to print.\n :param exception: The exception to print.\n :param log_level: The log_level of this message (not used when\n logging the traceback. Tracebacks always have\n a level of DEBUG).\n :param timestamp: The time at which this log occurred. Defaults to\n the current time.\n :param kwargs: Keyword arguments to be passed when logging the\n message (not used when logging the traceback).\n \"\"\"\n if not isinstance(exception, BaseException):\n raise TypeError('log_exception can only log derivatives of '\n 'BaseException.')\n\n traceback_str = '\\n'.join(\n traceback.format_exception(type(exception),\n exception,\n exception.__traceback__))\n\n self.log(log_level, message, timestamp=timestamp, **kwargs)\n self.log_message(\n LogMessage(LOG_LEVEL.INFO,\n 'Exception was:' + '\\n' + traceback_str,\n timestamp=timestamp),\n **kwargs)\n\n def log_message(self, log_message, **kwargs):\n \"\"\"\n It is your reponsibility to implement this method, if you're using this\n mixin.\n \"\"\"\n raise NotImplementedError\n\n\nclass LogPrinter(LogPrinterMixin):\n \"\"\"\n This class is deprecated and will be soon removed. To get logger use\n logging.getLogger(__name__). Make sure that you're getting it when the\n logging configuration is loaded.\n\n The LogPrinter class allows to print log messages to an underlying Printer.\n\n This class is an adapter, means you can create a LogPrinter from every\n existing Printer instance.\n \"\"\"\n\n def __init__(self,\n printer=None,\n log_level=LOG_LEVEL.DEBUG,\n timestamp_format='%X'):\n \"\"\"\n Creates a new log printer from an existing Printer.\n\n :param printer: The underlying Printer where log messages\n shall be written to. If you inherit from\n LogPrinter, set it to self.\n :param log_level: The minimum log level, everything below will\n not be logged.\n :param timestamp_format: The format string for the\n datetime.today().strftime(format) method.\n \"\"\"\n self.logger = logging.getLogger()\n\n self._printer = printer\n self.log_level = log_level\n self.timestamp_format = timestamp_format\n\n @property\n def log_level(self):\n \"\"\"\n Returns current log_level used in logger.\n \"\"\"\n return self.logger.getEffectiveLevel()\n\n @log_level.setter\n def log_level(self, log_level):\n \"\"\"\n Sets log_level for logger.\n \"\"\"\n self.logger.setLevel(log_level)\n\n @property\n def printer(self):\n \"\"\"\n Returns the underlying printer where logs are printed to.\n \"\"\"\n return self._printer\n\n def log_message(self, log_message, **kwargs):\n if not isinstance(log_message, LogMessage):\n raise TypeError('log_message should be of type LogMessage.')\n self.logger.log(log_message.log_level, log_message.message)\n\n def __getstate__(self):\n # on Windows there are problems with serializing loggers, so omit it\n oldict = self.__dict__.copy()\n del oldict['logger']\n return oldict\n\n def __setstate__(self, newdict):\n self.__dict__.update(newdict)\n # restore logger by name\n self.logger = logging.getLogger()\n", "path": "coalib/output/printers/LogPrinter.py"}]} | 2,040 | 125 |
gh_patches_debug_32576 | rasdani/github-patches | git_diff | Kinto__kinto-1735 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Kinto onboarding experience (part 2)
These is a followup from #1733 with random feedback with onboarding when trying to use the [accounts plugin](http://docs.kinto-storage.org/en/stable/api/1.x/accounts.html).
Started with updating my `config/kinto.ini` with:
```
kinto.includes = kinto.plugins.default_bucket
kinto.plugins.admin
kinto.plugins.accounts
```
Restarting the server goes smoothly. The admin loads fine and renders a new entry for *Kinto Account Auth*. I never created any Account just yet, though out of curiosity I try to log in using silly:silly:

Tadaa:

Wait, what?
Oh. It seems it actually used the Basic Auth strategy instead of the account one for login. This is odd and confusing as fsck.
Actually, I didn't went further with toying around with the admin as it looks broken to me. This is a little sad.
Kinto onboarding experience (part 2)
These is a followup from #1733 with random feedback with onboarding when trying to use the [accounts plugin](http://docs.kinto-storage.org/en/stable/api/1.x/accounts.html).
Started with updating my `config/kinto.ini` with:
```
kinto.includes = kinto.plugins.default_bucket
kinto.plugins.admin
kinto.plugins.accounts
```
Restarting the server goes smoothly. The admin loads fine and renders a new entry for *Kinto Account Auth*. I never created any Account just yet, though out of curiosity I try to log in using silly:silly:

Tadaa:

Wait, what?
Oh. It seems it actually used the Basic Auth strategy instead of the account one for login. This is odd and confusing as fsck.
Actually, I didn't went further with toying around with the admin as it looks broken to me. This is a little sad.
</issue>
<code>
[start of kinto/plugins/accounts/__init__.py]
1 from kinto.authorization import PERMISSIONS_INHERITANCE_TREE
2 from pyramid.exceptions import ConfigurationError
3
4 ACCOUNT_CACHE_KEY = 'accounts:{}:verified'
5 ACCOUNT_POLICY_NAME = 'account'
6
7
8 def includeme(config):
9 config.add_api_capability(
10 'accounts',
11 description='Manage user accounts.',
12 url='https://kinto.readthedocs.io/en/latest/api/1.x/accounts.html')
13
14 config.scan('kinto.plugins.accounts.views')
15
16 PERMISSIONS_INHERITANCE_TREE['root'].update({
17 'account:create': {}
18 })
19 PERMISSIONS_INHERITANCE_TREE['account'] = {
20 'write': {'account': ['write']},
21 'read': {'account': ['write', 'read']}
22 }
23
24 # Add some safety to avoid weird behaviour with basicauth default policy.
25 settings = config.get_settings()
26 auth_policies = settings['multiauth.policies']
27 if 'basicauth' in auth_policies and 'account' in auth_policies:
28 if auth_policies.index('basicauth') < auth_policies.index('account'):
29 error_msg = ("'basicauth' should not be mentioned before 'account' "
30 "in 'multiauth.policies' setting.")
31 raise ConfigurationError(error_msg)
32
33 # We assume anyone in account_create_principals is to create
34 # accounts for other people.
35 # No one can create accounts for other people unless they are an
36 # "admin", defined as someone matching account_write_principals.
37 # Therefore any account that is in account_create_principals
38 # should be in account_write_principals too.
39 creators = set(settings.get('account_create_principals', '').split())
40 admins = set(settings.get('account_write_principals', '').split())
41 cant_create_anything = creators.difference(admins)
42 # system.Everyone isn't an account.
43 cant_create_anything.discard('system.Everyone')
44 if cant_create_anything:
45 message = ('Configuration has some principals in account_create_principals '
46 'but not in account_write_principals. These principals will only be '
47 'able to create their own accounts. This may not be what you want.\n'
48 'If you want these users to be able to create accounts for other users, '
49 'add them to account_write_principals.\n'
50 'Affected users: {}'.format(list(cant_create_anything)))
51
52 raise ConfigurationError(message)
53
[end of kinto/plugins/accounts/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/plugins/accounts/__init__.py b/kinto/plugins/accounts/__init__.py
--- a/kinto/plugins/accounts/__init__.py
+++ b/kinto/plugins/accounts/__init__.py
@@ -1,9 +1,13 @@
+import re
+
from kinto.authorization import PERMISSIONS_INHERITANCE_TREE
from pyramid.exceptions import ConfigurationError
ACCOUNT_CACHE_KEY = 'accounts:{}:verified'
ACCOUNT_POLICY_NAME = 'account'
+DOCS_URL = "https://kinto.readthedocs.io/en/stable/api/1.x/accounts.html"
+
def includeme(config):
config.add_api_capability(
@@ -21,13 +25,28 @@
'read': {'account': ['write', 'read']}
}
- # Add some safety to avoid weird behaviour with basicauth default policy.
settings = config.get_settings()
+
+ # Check that the account policy is mentioned in config if included.
+ accountClass = 'AccountsAuthenticationPolicy'
+ policy = None
+ for k, v in settings.items():
+ m = re.match('multiauth\.policy\.(.*)\.use', k)
+ if m:
+ if v.endswith(accountClass):
+ policy = m.group(1)
+
+ if not policy:
+ error_msg = ("Account policy missing the 'multiauth.policy.*.use' "
+ "setting. See {} in docs {}.").format(accountClass, DOCS_URL)
+ raise ConfigurationError(error_msg)
+
+ # Add some safety to avoid weird behaviour with basicauth default policy.
auth_policies = settings['multiauth.policies']
- if 'basicauth' in auth_policies and 'account' in auth_policies:
- if auth_policies.index('basicauth') < auth_policies.index('account'):
- error_msg = ("'basicauth' should not be mentioned before 'account' "
- "in 'multiauth.policies' setting.")
+ if 'basicauth' in auth_policies and policy in auth_policies:
+ if auth_policies.index('basicauth') < auth_policies.index(policy):
+ error_msg = ("'basicauth' should not be mentioned before '%s' "
+ "in 'multiauth.policies' setting.") % policy
raise ConfigurationError(error_msg)
# We assume anyone in account_create_principals is to create
| {"golden_diff": "diff --git a/kinto/plugins/accounts/__init__.py b/kinto/plugins/accounts/__init__.py\n--- a/kinto/plugins/accounts/__init__.py\n+++ b/kinto/plugins/accounts/__init__.py\n@@ -1,9 +1,13 @@\n+import re\n+\n from kinto.authorization import PERMISSIONS_INHERITANCE_TREE\n from pyramid.exceptions import ConfigurationError\n \n ACCOUNT_CACHE_KEY = 'accounts:{}:verified'\n ACCOUNT_POLICY_NAME = 'account'\n \n+DOCS_URL = \"https://kinto.readthedocs.io/en/stable/api/1.x/accounts.html\"\n+\n \n def includeme(config):\n config.add_api_capability(\n@@ -21,13 +25,28 @@\n 'read': {'account': ['write', 'read']}\n }\n \n- # Add some safety to avoid weird behaviour with basicauth default policy.\n settings = config.get_settings()\n+\n+ # Check that the account policy is mentioned in config if included.\n+ accountClass = 'AccountsAuthenticationPolicy'\n+ policy = None\n+ for k, v in settings.items():\n+ m = re.match('multiauth\\.policy\\.(.*)\\.use', k)\n+ if m:\n+ if v.endswith(accountClass):\n+ policy = m.group(1)\n+\n+ if not policy:\n+ error_msg = (\"Account policy missing the 'multiauth.policy.*.use' \"\n+ \"setting. See {} in docs {}.\").format(accountClass, DOCS_URL)\n+ raise ConfigurationError(error_msg)\n+\n+ # Add some safety to avoid weird behaviour with basicauth default policy.\n auth_policies = settings['multiauth.policies']\n- if 'basicauth' in auth_policies and 'account' in auth_policies:\n- if auth_policies.index('basicauth') < auth_policies.index('account'):\n- error_msg = (\"'basicauth' should not be mentioned before 'account' \"\n- \"in 'multiauth.policies' setting.\")\n+ if 'basicauth' in auth_policies and policy in auth_policies:\n+ if auth_policies.index('basicauth') < auth_policies.index(policy):\n+ error_msg = (\"'basicauth' should not be mentioned before '%s' \"\n+ \"in 'multiauth.policies' setting.\") % policy\n raise ConfigurationError(error_msg)\n \n # We assume anyone in account_create_principals is to create\n", "issue": "Kinto onboarding experience (part 2)\nThese is a followup from #1733 with random feedback with onboarding when trying to use the [accounts plugin](http://docs.kinto-storage.org/en/stable/api/1.x/accounts.html).\r\n\r\n\r\n\r\nStarted with updating my `config/kinto.ini` with:\r\n\r\n```\r\nkinto.includes = kinto.plugins.default_bucket\r\n kinto.plugins.admin\r\n kinto.plugins.accounts\r\n```\r\n\r\nRestarting the server goes smoothly. The admin loads fine and renders a new entry for *Kinto Account Auth*. I never created any Account just yet, though out of curiosity I try to log in using silly:silly:\r\n\r\n\r\n\r\nTadaa:\r\n\r\n\r\n\r\nWait, what?\r\n\r\nOh. It seems it actually used the Basic Auth strategy instead of the account one for login. This is odd and confusing as fsck.\r\n\r\nActually, I didn't went further with toying around with the admin as it looks broken to me. This is a little sad.\nKinto onboarding experience (part 2)\nThese is a followup from #1733 with random feedback with onboarding when trying to use the [accounts plugin](http://docs.kinto-storage.org/en/stable/api/1.x/accounts.html).\r\n\r\n\r\n\r\nStarted with updating my `config/kinto.ini` with:\r\n\r\n```\r\nkinto.includes = kinto.plugins.default_bucket\r\n kinto.plugins.admin\r\n kinto.plugins.accounts\r\n```\r\n\r\nRestarting the server goes smoothly. The admin loads fine and renders a new entry for *Kinto Account Auth*. I never created any Account just yet, though out of curiosity I try to log in using silly:silly:\r\n\r\n\r\n\r\nTadaa:\r\n\r\n\r\n\r\nWait, what?\r\n\r\nOh. It seems it actually used the Basic Auth strategy instead of the account one for login. This is odd and confusing as fsck.\r\n\r\nActually, I didn't went further with toying around with the admin as it looks broken to me. This is a little sad.\n", "before_files": [{"content": "from kinto.authorization import PERMISSIONS_INHERITANCE_TREE\nfrom pyramid.exceptions import ConfigurationError\n\nACCOUNT_CACHE_KEY = 'accounts:{}:verified'\nACCOUNT_POLICY_NAME = 'account'\n\n\ndef includeme(config):\n config.add_api_capability(\n 'accounts',\n description='Manage user accounts.',\n url='https://kinto.readthedocs.io/en/latest/api/1.x/accounts.html')\n\n config.scan('kinto.plugins.accounts.views')\n\n PERMISSIONS_INHERITANCE_TREE['root'].update({\n 'account:create': {}\n })\n PERMISSIONS_INHERITANCE_TREE['account'] = {\n 'write': {'account': ['write']},\n 'read': {'account': ['write', 'read']}\n }\n\n # Add some safety to avoid weird behaviour with basicauth default policy.\n settings = config.get_settings()\n auth_policies = settings['multiauth.policies']\n if 'basicauth' in auth_policies and 'account' in auth_policies:\n if auth_policies.index('basicauth') < auth_policies.index('account'):\n error_msg = (\"'basicauth' should not be mentioned before 'account' \"\n \"in 'multiauth.policies' setting.\")\n raise ConfigurationError(error_msg)\n\n # We assume anyone in account_create_principals is to create\n # accounts for other people.\n # No one can create accounts for other people unless they are an\n # \"admin\", defined as someone matching account_write_principals.\n # Therefore any account that is in account_create_principals\n # should be in account_write_principals too.\n creators = set(settings.get('account_create_principals', '').split())\n admins = set(settings.get('account_write_principals', '').split())\n cant_create_anything = creators.difference(admins)\n # system.Everyone isn't an account.\n cant_create_anything.discard('system.Everyone')\n if cant_create_anything:\n message = ('Configuration has some principals in account_create_principals '\n 'but not in account_write_principals. These principals will only be '\n 'able to create their own accounts. This may not be what you want.\\n'\n 'If you want these users to be able to create accounts for other users, '\n 'add them to account_write_principals.\\n'\n 'Affected users: {}'.format(list(cant_create_anything)))\n\n raise ConfigurationError(message)\n", "path": "kinto/plugins/accounts/__init__.py"}]} | 1,623 | 530 |
gh_patches_debug_22977 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-410 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve documentation of cms.apps (CmsConfig)
Explain what cms.apps is doing, what CmsConfig is for and add docstrings of the following format:
```
"""
[Summary]
:param [ParamName]: [ParamDescription], defaults to [DefaultParamVal]
:type [ParamName]: [ParamType](, optional)
...
:raises [ErrorType]: [ErrorDescription]
...
:return: [ReturnDescription]
:rtype: [ReturnType]
"""
```
</issue>
<code>
[start of src/cms/apps.py]
1 """
2 Django related class representing a config of an app
3 """
4 import logging
5 import sys
6 from django.conf import settings
7 from django.apps import AppConfig
8
9 logger = logging.getLogger(__name__)
10
11 class CmsConfig(AppConfig):
12 """
13 Class inheriting the django AppConfig
14 """
15
16 name = 'cms'
17
18 def ready(self):
19 if settings.SECRET_KEY == '-!v282$zj815_q@htaxcubylo)(l%a+k*-xi78hw*#s2@i86@_' and not settings.DEBUG:
20 logger.error("You are running the Integreat CMS in production mode. Change the SECRET_KEY in the settings.py!")
21 sys.exit(1)
22
[end of src/cms/apps.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cms/apps.py b/src/cms/apps.py
--- a/src/cms/apps.py
+++ b/src/cms/apps.py
@@ -1,6 +1,3 @@
-"""
-Django related class representing a config of an app
-"""
import logging
import sys
from django.conf import settings
@@ -10,12 +7,23 @@
class CmsConfig(AppConfig):
"""
- Class inheriting the django AppConfig
+ This class represents the Django-configuration of the backend.
+
+ See :class:`django.apps.AppConfig` for more information.
+
+ :param name: The name of the app
+ :type name: str
"""
name = 'cms'
def ready(self):
+ """
+ This function gets executed exactly once each time the cms starts. We use it to check wether the secret key was
+ not changed in production mode and show an error message if this is the case.
+
+ See :meth:`django.apps.AppConfig.ready` for more information.
+ """
if settings.SECRET_KEY == '-!v282$zj815_q@htaxcubylo)(l%a+k*-xi78hw*#s2@i86@_' and not settings.DEBUG:
logger.error("You are running the Integreat CMS in production mode. Change the SECRET_KEY in the settings.py!")
sys.exit(1)
| {"golden_diff": "diff --git a/src/cms/apps.py b/src/cms/apps.py\n--- a/src/cms/apps.py\n+++ b/src/cms/apps.py\n@@ -1,6 +1,3 @@\n-\"\"\"\n-Django related class representing a config of an app\n-\"\"\"\n import logging\n import sys\n from django.conf import settings\n@@ -10,12 +7,23 @@\n \n class CmsConfig(AppConfig):\n \"\"\"\n- Class inheriting the django AppConfig\n+ This class represents the Django-configuration of the backend.\n+\n+ See :class:`django.apps.AppConfig` for more information.\n+\n+ :param name: The name of the app\n+ :type name: str\n \"\"\"\n \n name = 'cms'\n \n def ready(self):\n+ \"\"\"\n+ This function gets executed exactly once each time the cms starts. We use it to check wether the secret key was\n+ not changed in production mode and show an error message if this is the case.\n+\n+ See :meth:`django.apps.AppConfig.ready` for more information.\n+ \"\"\"\n if settings.SECRET_KEY == '-!v282$zj815_q@htaxcubylo)(l%a+k*-xi78hw*#s2@i86@_' and not settings.DEBUG:\n logger.error(\"You are running the Integreat CMS in production mode. Change the SECRET_KEY in the settings.py!\")\n sys.exit(1)\n", "issue": "Improve documentation of cms.apps (CmsConfig)\nExplain what cms.apps is doing, what CmsConfig is for and add docstrings of the following format:\r\n```\r\n\"\"\"\r\n[Summary]\r\n\r\n:param [ParamName]: [ParamDescription], defaults to [DefaultParamVal]\r\n:type [ParamName]: [ParamType](, optional)\r\n...\r\n:raises [ErrorType]: [ErrorDescription]\r\n...\r\n:return: [ReturnDescription]\r\n:rtype: [ReturnType]\r\n\"\"\"\r\n```\n", "before_files": [{"content": "\"\"\"\nDjango related class representing a config of an app\n\"\"\"\nimport logging\nimport sys\nfrom django.conf import settings\nfrom django.apps import AppConfig\n\nlogger = logging.getLogger(__name__)\n\nclass CmsConfig(AppConfig):\n \"\"\"\n Class inheriting the django AppConfig\n \"\"\"\n\n name = 'cms'\n\n def ready(self):\n if settings.SECRET_KEY == '-!v282$zj815_q@htaxcubylo)(l%a+k*-xi78hw*#s2@i86@_' and not settings.DEBUG:\n logger.error(\"You are running the Integreat CMS in production mode. Change the SECRET_KEY in the settings.py!\")\n sys.exit(1)\n", "path": "src/cms/apps.py"}]} | 818 | 313 |
gh_patches_debug_19895 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-57 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Schema getter should return public, but not mathesar_types
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
If a user wants to create a table the `public` schema, they can't currently, because the logic in the `db.schemas.get_all_schemas` function ignores it. This means when they try, an error is thrown. This is especially a problem when they've imported a DB, since most tables are in the `public` schema in most installations of PostgreSQL in the wild.
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
The public schema should be available for holding mathesar tables.
**To Reproduce**
Please try to provide a [Minimal, Complete, and Verifiable](http://stackoverflow.com/help/mcve) example.
Start the webapp using the README. Try to upload a CSV to the `public` schema. See the error.
**Have a nice day!**
</issue>
<code>
[start of db/schemas.py]
1 from sqlalchemy.schema import CreateSchema
2 from sqlalchemy import inspect
3
4
5 def get_all_schemas(engine):
6 inspector = inspect(engine)
7 return [
8 schema
9 for schema in inspector.get_schema_names()
10 if schema not in ["public", "information_schema"]
11 ]
12
13
14 def schema_exists(schema, engine):
15 return schema in get_all_schemas(engine)
16
17
18 def create_schema(schema, engine):
19 """
20 This method creates a Postgres schema.
21 """
22 if not schema_exists(schema, engine):
23 with engine.begin() as connection:
24 connection.execute(CreateSchema(schema))
25
[end of db/schemas.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/db/schemas.py b/db/schemas.py
--- a/db/schemas.py
+++ b/db/schemas.py
@@ -1,24 +1,28 @@
from sqlalchemy.schema import CreateSchema
from sqlalchemy import inspect
+from db import types
-def get_all_schemas(engine):
- inspector = inspect(engine)
+TYPES_SCHEMA = types.base.SCHEMA
+
+
+def get_mathesar_schemas(engine):
return [
schema
- for schema in inspector.get_schema_names()
- if schema not in ["public", "information_schema"]
+ for schema in get_all_schemas(engine)
+ if schema not in [TYPES_SCHEMA, "information_schema"]
]
-def schema_exists(schema, engine):
- return schema in get_all_schemas(engine)
+def get_all_schemas(engine):
+ inspector = inspect(engine)
+ return inspector.get_schema_names()
def create_schema(schema, engine):
"""
This method creates a Postgres schema.
"""
- if not schema_exists(schema, engine):
+ if schema not in get_all_schemas(engine):
with engine.begin() as connection:
connection.execute(CreateSchema(schema))
| {"golden_diff": "diff --git a/db/schemas.py b/db/schemas.py\n--- a/db/schemas.py\n+++ b/db/schemas.py\n@@ -1,24 +1,28 @@\n from sqlalchemy.schema import CreateSchema\n from sqlalchemy import inspect\n \n+from db import types\n \n-def get_all_schemas(engine):\n- inspector = inspect(engine)\n+TYPES_SCHEMA = types.base.SCHEMA\n+\n+\n+def get_mathesar_schemas(engine):\n return [\n schema\n- for schema in inspector.get_schema_names()\n- if schema not in [\"public\", \"information_schema\"]\n+ for schema in get_all_schemas(engine)\n+ if schema not in [TYPES_SCHEMA, \"information_schema\"]\n ]\n \n \n-def schema_exists(schema, engine):\n- return schema in get_all_schemas(engine)\n+def get_all_schemas(engine):\n+ inspector = inspect(engine)\n+ return inspector.get_schema_names()\n \n \n def create_schema(schema, engine):\n \"\"\"\n This method creates a Postgres schema.\n \"\"\"\n- if not schema_exists(schema, engine):\n+ if schema not in get_all_schemas(engine):\n with engine.begin() as connection:\n connection.execute(CreateSchema(schema))\n", "issue": "Schema getter should return public, but not mathesar_types\n**Describe the bug**\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\nIf a user wants to create a table the `public` schema, they can't currently, because the logic in the `db.schemas.get_all_schemas` function ignores it. This means when they try, an error is thrown. This is especially a problem when they've imported a DB, since most tables are in the `public` schema in most installations of PostgreSQL in the wild.\r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nThe public schema should be available for holding mathesar tables.\r\n\r\n**To Reproduce**\r\nPlease try to provide a [Minimal, Complete, and Verifiable](http://stackoverflow.com/help/mcve) example.\r\n\r\nStart the webapp using the README. Try to upload a CSV to the `public` schema. See the error.\r\n\r\n**Have a nice day!**\r\n\n", "before_files": [{"content": "from sqlalchemy.schema import CreateSchema\nfrom sqlalchemy import inspect\n\n\ndef get_all_schemas(engine):\n inspector = inspect(engine)\n return [\n schema\n for schema in inspector.get_schema_names()\n if schema not in [\"public\", \"information_schema\"]\n ]\n\n\ndef schema_exists(schema, engine):\n return schema in get_all_schemas(engine)\n\n\ndef create_schema(schema, engine):\n \"\"\"\n This method creates a Postgres schema.\n \"\"\"\n if not schema_exists(schema, engine):\n with engine.begin() as connection:\n connection.execute(CreateSchema(schema))\n", "path": "db/schemas.py"}]} | 902 | 249 |
gh_patches_debug_25834 | rasdani/github-patches | git_diff | fedora-infra__bodhi-2899 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Delete bodhi.server.views.admin
Bodhi has a strange view that tells admin users what their username and principals are, but does not allow non-admin users to use it:
https://github.com/fedora-infra/bodhi/blob/3.0.0/bodhi/server/views/admin.py
When I visit https://bodhi.fedoraproject.org/admin/ I see:
```
{"principals": ["system.Everyone", "system.Authenticated", "bowlofeggs", "group:packager", "group:infra-sig", "group:bodhiadmin"], "user": "bowlofeggs"}
```
I don't know what the purpose of this view was, but I'm pretty sure we can delete it.
</issue>
<code>
[start of bodhi/server/views/admin.py]
1 # Copyright © 2014-2017 Red Hat, Inc. and others
2 #
3 # This file is part of Bodhi.
4 #
5 # This program is free software; you can redistribute it and/or
6 # modify it under the terms of the GNU General Public License
7 # as published by the Free Software Foundation; either version 2
8 # of the License, or (at your option) any later version.
9 #
10 # This program is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with this program; if not, write to the Free Software
17 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
18 """Define the admin view."""
19
20 from cornice import Service
21
22 from bodhi.server import log
23 from bodhi.server import security
24
25
26 admin_service = Service(name='admin', path='/admin/',
27 description='Administrator view',
28 factory=security.AdminACLFactory)
29
30
31 @admin_service.get(permission='admin')
32 def admin(request):
33 """
34 Return a dictionary with keys "user" and "principals".
35
36 "user" indexes the current user's name, and "principals" indexes the user's effective
37 principals.
38
39 Args:
40 request (pyramid.request): The current request.
41 Returns:
42 dict: A dictionary as described above.
43 """
44 user = request.user
45 log.info('%s logged into admin panel' % user.name)
46 principals = request.effective_principals
47 return {'user': user.name, 'principals': principals}
48
[end of bodhi/server/views/admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bodhi/server/views/admin.py b/bodhi/server/views/admin.py
deleted file mode 100644
--- a/bodhi/server/views/admin.py
+++ /dev/null
@@ -1,47 +0,0 @@
-# Copyright © 2014-2017 Red Hat, Inc. and others
-#
-# This file is part of Bodhi.
-#
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2
-# of the License, or (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
-"""Define the admin view."""
-
-from cornice import Service
-
-from bodhi.server import log
-from bodhi.server import security
-
-
-admin_service = Service(name='admin', path='/admin/',
- description='Administrator view',
- factory=security.AdminACLFactory)
-
-
-@admin_service.get(permission='admin')
-def admin(request):
- """
- Return a dictionary with keys "user" and "principals".
-
- "user" indexes the current user's name, and "principals" indexes the user's effective
- principals.
-
- Args:
- request (pyramid.request): The current request.
- Returns:
- dict: A dictionary as described above.
- """
- user = request.user
- log.info('%s logged into admin panel' % user.name)
- principals = request.effective_principals
- return {'user': user.name, 'principals': principals}
| {"golden_diff": "diff --git a/bodhi/server/views/admin.py b/bodhi/server/views/admin.py\ndeleted file mode 100644\n--- a/bodhi/server/views/admin.py\n+++ /dev/null\n@@ -1,47 +0,0 @@\n-# Copyright \u00a9 2014-2017 Red Hat, Inc. and others\n-#\n-# This file is part of Bodhi.\n-#\n-# This program is free software; you can redistribute it and/or\n-# modify it under the terms of the GNU General Public License\n-# as published by the Free Software Foundation; either version 2\n-# of the License, or (at your option) any later version.\n-#\n-# This program is distributed in the hope that it will be useful,\n-# but WITHOUT ANY WARRANTY; without even the implied warranty of\n-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n-# GNU General Public License for more details.\n-#\n-# You should have received a copy of the GNU General Public License\n-# along with this program; if not, write to the Free Software\n-# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n-\"\"\"Define the admin view.\"\"\"\n-\n-from cornice import Service\n-\n-from bodhi.server import log\n-from bodhi.server import security\n-\n-\n-admin_service = Service(name='admin', path='/admin/',\n- description='Administrator view',\n- factory=security.AdminACLFactory)\n-\n-\n-@admin_service.get(permission='admin')\n-def admin(request):\n- \"\"\"\n- Return a dictionary with keys \"user\" and \"principals\".\n-\n- \"user\" indexes the current user's name, and \"principals\" indexes the user's effective\n- principals.\n-\n- Args:\n- request (pyramid.request): The current request.\n- Returns:\n- dict: A dictionary as described above.\n- \"\"\"\n- user = request.user\n- log.info('%s logged into admin panel' % user.name)\n- principals = request.effective_principals\n- return {'user': user.name, 'principals': principals}\n", "issue": "Delete bodhi.server.views.admin\nBodhi has a strange view that tells admin users what their username and principals are, but does not allow non-admin users to use it:\r\n\r\nhttps://github.com/fedora-infra/bodhi/blob/3.0.0/bodhi/server/views/admin.py\r\n\r\nWhen I visit https://bodhi.fedoraproject.org/admin/ I see:\r\n\r\n```\r\n{\"principals\": [\"system.Everyone\", \"system.Authenticated\", \"bowlofeggs\", \"group:packager\", \"group:infra-sig\", \"group:bodhiadmin\"], \"user\": \"bowlofeggs\"}\r\n```\r\n\r\nI don't know what the purpose of this view was, but I'm pretty sure we can delete it.\n", "before_files": [{"content": "# Copyright \u00a9 2014-2017 Red Hat, Inc. and others\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"Define the admin view.\"\"\"\n\nfrom cornice import Service\n\nfrom bodhi.server import log\nfrom bodhi.server import security\n\n\nadmin_service = Service(name='admin', path='/admin/',\n description='Administrator view',\n factory=security.AdminACLFactory)\n\n\n@admin_service.get(permission='admin')\ndef admin(request):\n \"\"\"\n Return a dictionary with keys \"user\" and \"principals\".\n\n \"user\" indexes the current user's name, and \"principals\" indexes the user's effective\n principals.\n\n Args:\n request (pyramid.request): The current request.\n Returns:\n dict: A dictionary as described above.\n \"\"\"\n user = request.user\n log.info('%s logged into admin panel' % user.name)\n principals = request.effective_principals\n return {'user': user.name, 'principals': principals}\n", "path": "bodhi/server/views/admin.py"}]} | 1,174 | 480 |
gh_patches_debug_67390 | rasdani/github-patches | git_diff | goauthentik__authentik-4675 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Proxy Provider not working
Hello
Please help me, I updated the authentik server to 23.1.2, it worked perfectly until now, now the Proxy Provider is not working because of the following error
this is in the server log
{"error":"Post \"https://auth.xxx.com/application/o/token/\": dial tcp 192.168.10.240:443: connect: connection refused","event":"failed to redeem code","level":"warning","logger":"authentik.outpost.proxyv2.application","name":"Kuma","timestamp":"2023-01-24T13:01:34Z"}
The IP in the log is the IP of the nginx reverse proxy manager. The proxy works properly, I don't see any errors. Anyone have any ideas?
</issue>
<code>
[start of authentik/core/tasks.py]
1 """authentik core tasks"""
2 from datetime import datetime, timedelta
3
4 from django.contrib.sessions.backends.cache import KEY_PREFIX
5 from django.core.cache import cache
6 from django.utils.timezone import now
7 from structlog.stdlib import get_logger
8
9 from authentik.core.models import (
10 USER_ATTRIBUTE_EXPIRES,
11 USER_ATTRIBUTE_GENERATED,
12 AuthenticatedSession,
13 ExpiringModel,
14 User,
15 )
16 from authentik.events.monitored_tasks import (
17 MonitoredTask,
18 TaskResult,
19 TaskResultStatus,
20 prefill_task,
21 )
22 from authentik.root.celery import CELERY_APP
23
24 LOGGER = get_logger()
25
26
27 @CELERY_APP.task(bind=True, base=MonitoredTask)
28 @prefill_task
29 def clean_expired_models(self: MonitoredTask):
30 """Remove expired objects"""
31 messages = []
32 for cls in ExpiringModel.__subclasses__():
33 cls: ExpiringModel
34 objects = (
35 cls.objects.all().exclude(expiring=False).exclude(expiring=True, expires__gt=now())
36 )
37 amount = objects.count()
38 for obj in objects:
39 obj.expire_action()
40 LOGGER.debug("Expired models", model=cls, amount=amount)
41 messages.append(f"Expired {amount} {cls._meta.verbose_name_plural}")
42 # Special case
43 amount = 0
44 for session in AuthenticatedSession.objects.all():
45 cache_key = f"{KEY_PREFIX}{session.session_key}"
46 try:
47 value = cache.get(cache_key)
48 # pylint: disable=broad-except
49 except Exception as exc:
50 LOGGER.debug("Failed to get session from cache", exc=exc)
51 if not value:
52 session.delete()
53 amount += 1
54 LOGGER.debug("Expired sessions", model=AuthenticatedSession, amount=amount)
55 messages.append(f"Expired {amount} {AuthenticatedSession._meta.verbose_name_plural}")
56 self.set_status(TaskResult(TaskResultStatus.SUCCESSFUL, messages))
57
58
59 @CELERY_APP.task(bind=True, base=MonitoredTask)
60 @prefill_task
61 def clean_temporary_users(self: MonitoredTask):
62 """Remove temporary users created by SAML Sources"""
63 _now = datetime.now()
64 messages = []
65 deleted_users = 0
66 for user in User.objects.filter(**{f"attributes__{USER_ATTRIBUTE_GENERATED}": True}):
67 if not user.attributes.get(USER_ATTRIBUTE_EXPIRES):
68 continue
69 delta: timedelta = _now - datetime.fromtimestamp(
70 user.attributes.get(USER_ATTRIBUTE_EXPIRES)
71 )
72 if delta.total_seconds() > 0:
73 LOGGER.debug("User is expired and will be deleted.", user=user, delta=delta)
74 user.delete()
75 deleted_users += 1
76 messages.append(f"Successfully deleted {deleted_users} users.")
77 self.set_status(TaskResult(TaskResultStatus.SUCCESSFUL, messages))
78
[end of authentik/core/tasks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/authentik/core/tasks.py b/authentik/core/tasks.py
--- a/authentik/core/tasks.py
+++ b/authentik/core/tasks.py
@@ -43,6 +43,7 @@
amount = 0
for session in AuthenticatedSession.objects.all():
cache_key = f"{KEY_PREFIX}{session.session_key}"
+ value = None
try:
value = cache.get(cache_key)
# pylint: disable=broad-except
| {"golden_diff": "diff --git a/authentik/core/tasks.py b/authentik/core/tasks.py\n--- a/authentik/core/tasks.py\n+++ b/authentik/core/tasks.py\n@@ -43,6 +43,7 @@\n amount = 0\n for session in AuthenticatedSession.objects.all():\n cache_key = f\"{KEY_PREFIX}{session.session_key}\"\n+ value = None\n try:\n value = cache.get(cache_key)\n # pylint: disable=broad-except\n", "issue": "Proxy Provider not working \nHello\r\n\r\nPlease help me, I updated the authentik server to 23.1.2, it worked perfectly until now, now the Proxy Provider is not working because of the following error\r\n\r\nthis is in the server log\r\n\r\n{\"error\":\"Post \\\"https://auth.xxx.com/application/o/token/\\\": dial tcp 192.168.10.240:443: connect: connection refused\",\"event\":\"failed to redeem code\",\"level\":\"warning\",\"logger\":\"authentik.outpost.proxyv2.application\",\"name\":\"Kuma\",\"timestamp\":\"2023-01-24T13:01:34Z\"}\r\n\r\nThe IP in the log is the IP of the nginx reverse proxy manager. The proxy works properly, I don't see any errors. Anyone have any ideas?\r\n\n", "before_files": [{"content": "\"\"\"authentik core tasks\"\"\"\nfrom datetime import datetime, timedelta\n\nfrom django.contrib.sessions.backends.cache import KEY_PREFIX\nfrom django.core.cache import cache\nfrom django.utils.timezone import now\nfrom structlog.stdlib import get_logger\n\nfrom authentik.core.models import (\n USER_ATTRIBUTE_EXPIRES,\n USER_ATTRIBUTE_GENERATED,\n AuthenticatedSession,\n ExpiringModel,\n User,\n)\nfrom authentik.events.monitored_tasks import (\n MonitoredTask,\n TaskResult,\n TaskResultStatus,\n prefill_task,\n)\nfrom authentik.root.celery import CELERY_APP\n\nLOGGER = get_logger()\n\n\n@CELERY_APP.task(bind=True, base=MonitoredTask)\n@prefill_task\ndef clean_expired_models(self: MonitoredTask):\n \"\"\"Remove expired objects\"\"\"\n messages = []\n for cls in ExpiringModel.__subclasses__():\n cls: ExpiringModel\n objects = (\n cls.objects.all().exclude(expiring=False).exclude(expiring=True, expires__gt=now())\n )\n amount = objects.count()\n for obj in objects:\n obj.expire_action()\n LOGGER.debug(\"Expired models\", model=cls, amount=amount)\n messages.append(f\"Expired {amount} {cls._meta.verbose_name_plural}\")\n # Special case\n amount = 0\n for session in AuthenticatedSession.objects.all():\n cache_key = f\"{KEY_PREFIX}{session.session_key}\"\n try:\n value = cache.get(cache_key)\n # pylint: disable=broad-except\n except Exception as exc:\n LOGGER.debug(\"Failed to get session from cache\", exc=exc)\n if not value:\n session.delete()\n amount += 1\n LOGGER.debug(\"Expired sessions\", model=AuthenticatedSession, amount=amount)\n messages.append(f\"Expired {amount} {AuthenticatedSession._meta.verbose_name_plural}\")\n self.set_status(TaskResult(TaskResultStatus.SUCCESSFUL, messages))\n\n\n@CELERY_APP.task(bind=True, base=MonitoredTask)\n@prefill_task\ndef clean_temporary_users(self: MonitoredTask):\n \"\"\"Remove temporary users created by SAML Sources\"\"\"\n _now = datetime.now()\n messages = []\n deleted_users = 0\n for user in User.objects.filter(**{f\"attributes__{USER_ATTRIBUTE_GENERATED}\": True}):\n if not user.attributes.get(USER_ATTRIBUTE_EXPIRES):\n continue\n delta: timedelta = _now - datetime.fromtimestamp(\n user.attributes.get(USER_ATTRIBUTE_EXPIRES)\n )\n if delta.total_seconds() > 0:\n LOGGER.debug(\"User is expired and will be deleted.\", user=user, delta=delta)\n user.delete()\n deleted_users += 1\n messages.append(f\"Successfully deleted {deleted_users} users.\")\n self.set_status(TaskResult(TaskResultStatus.SUCCESSFUL, messages))\n", "path": "authentik/core/tasks.py"}]} | 1,462 | 105 |
gh_patches_debug_37428 | rasdani/github-patches | git_diff | spack__spack-15179 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove 'spack bootstrap' from the commands
As a Spack maintainer I want to remove the `spack bootstrap` command (outdated since #14062) so that I could reduce the amount of boilerplate code in the project.
### Rationale
The `spack bootstrap` command was used to "Bootstrap packages needed for spack to run smoothly" and in reality it has always just installed `environment-modules~X`. Since #14062 shell integration doesn't require `environment-modules` anymore making the command outdated. I would therefore remove that command from the code base.
### Description
Just remove the command and any test / package associated only with it.
### Additional information
Opening the issue to check what is the consensus towards this.
</issue>
<code>
[start of lib/spack/spack/cmd/bootstrap.py]
1 # Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 import llnl.util.cpu
7 import llnl.util.tty as tty
8
9 import spack.repo
10 import spack.spec
11 import spack.cmd.common.arguments as arguments
12
13 description = "Bootstrap packages needed for spack to run smoothly"
14 section = "admin"
15 level = "long"
16
17
18 def setup_parser(subparser):
19 arguments.add_common_arguments(subparser, ['jobs'])
20 subparser.add_argument(
21 '--keep-prefix', action='store_true', dest='keep_prefix',
22 help="don't remove the install prefix if installation fails")
23 subparser.add_argument(
24 '--keep-stage', action='store_true', dest='keep_stage',
25 help="don't remove the build stage if installation succeeds")
26 arguments.add_common_arguments(subparser, ['no_checksum'])
27 subparser.add_argument(
28 '-v', '--verbose', action='store_true', dest='verbose',
29 help="display verbose build output while installing")
30
31 cache_group = subparser.add_mutually_exclusive_group()
32 cache_group.add_argument(
33 '--use-cache', action='store_true', dest='use_cache', default=True,
34 help="check for pre-built Spack packages in mirrors (default)")
35 cache_group.add_argument(
36 '--no-cache', action='store_false', dest='use_cache', default=True,
37 help="do not check for pre-built Spack packages in mirrors")
38 cache_group.add_argument(
39 '--cache-only', action='store_true', dest='cache_only', default=False,
40 help="only install package from binary mirrors")
41
42 cd_group = subparser.add_mutually_exclusive_group()
43 arguments.add_common_arguments(cd_group, ['clean', 'dirty'])
44
45
46 def bootstrap(parser, args, **kwargs):
47 kwargs.update({
48 'keep_prefix': args.keep_prefix,
49 'keep_stage': args.keep_stage,
50 'install_deps': 'dependencies',
51 'verbose': args.verbose,
52 'dirty': args.dirty,
53 'use_cache': args.use_cache,
54 'cache_only': args.cache_only
55 })
56
57 # Define requirement dictionary defining general specs which need
58 # to be satisfied, and the specs to install when the general spec
59 # isn't satisfied.
60 requirement_dict = {
61 # Install environment-modules with generic optimizations
62 'environment-modules': 'environment-modules~X target={0}'.format(
63 llnl.util.cpu.host().family
64 )
65 }
66
67 for requirement in requirement_dict:
68 installed_specs = spack.store.db.query(requirement)
69 if(len(installed_specs) > 0):
70 tty.msg("Requirement %s is satisfied with installed "
71 "package %s" % (requirement, installed_specs[0]))
72 else:
73 # Install requirement
74 spec_to_install = spack.spec.Spec(requirement_dict[requirement])
75 spec_to_install.concretize()
76 tty.msg("Installing %s to satisfy requirement for %s" %
77 (spec_to_install, requirement))
78 kwargs['explicit'] = True
79 package = spack.repo.get(spec_to_install)
80 package.do_install(**kwargs)
81
[end of lib/spack/spack/cmd/bootstrap.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/spack/spack/cmd/bootstrap.py b/lib/spack/spack/cmd/bootstrap.py
deleted file mode 100644
--- a/lib/spack/spack/cmd/bootstrap.py
+++ /dev/null
@@ -1,80 +0,0 @@
-# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
-# Spack Project Developers. See the top-level COPYRIGHT file for details.
-#
-# SPDX-License-Identifier: (Apache-2.0 OR MIT)
-
-import llnl.util.cpu
-import llnl.util.tty as tty
-
-import spack.repo
-import spack.spec
-import spack.cmd.common.arguments as arguments
-
-description = "Bootstrap packages needed for spack to run smoothly"
-section = "admin"
-level = "long"
-
-
-def setup_parser(subparser):
- arguments.add_common_arguments(subparser, ['jobs'])
- subparser.add_argument(
- '--keep-prefix', action='store_true', dest='keep_prefix',
- help="don't remove the install prefix if installation fails")
- subparser.add_argument(
- '--keep-stage', action='store_true', dest='keep_stage',
- help="don't remove the build stage if installation succeeds")
- arguments.add_common_arguments(subparser, ['no_checksum'])
- subparser.add_argument(
- '-v', '--verbose', action='store_true', dest='verbose',
- help="display verbose build output while installing")
-
- cache_group = subparser.add_mutually_exclusive_group()
- cache_group.add_argument(
- '--use-cache', action='store_true', dest='use_cache', default=True,
- help="check for pre-built Spack packages in mirrors (default)")
- cache_group.add_argument(
- '--no-cache', action='store_false', dest='use_cache', default=True,
- help="do not check for pre-built Spack packages in mirrors")
- cache_group.add_argument(
- '--cache-only', action='store_true', dest='cache_only', default=False,
- help="only install package from binary mirrors")
-
- cd_group = subparser.add_mutually_exclusive_group()
- arguments.add_common_arguments(cd_group, ['clean', 'dirty'])
-
-
-def bootstrap(parser, args, **kwargs):
- kwargs.update({
- 'keep_prefix': args.keep_prefix,
- 'keep_stage': args.keep_stage,
- 'install_deps': 'dependencies',
- 'verbose': args.verbose,
- 'dirty': args.dirty,
- 'use_cache': args.use_cache,
- 'cache_only': args.cache_only
- })
-
- # Define requirement dictionary defining general specs which need
- # to be satisfied, and the specs to install when the general spec
- # isn't satisfied.
- requirement_dict = {
- # Install environment-modules with generic optimizations
- 'environment-modules': 'environment-modules~X target={0}'.format(
- llnl.util.cpu.host().family
- )
- }
-
- for requirement in requirement_dict:
- installed_specs = spack.store.db.query(requirement)
- if(len(installed_specs) > 0):
- tty.msg("Requirement %s is satisfied with installed "
- "package %s" % (requirement, installed_specs[0]))
- else:
- # Install requirement
- spec_to_install = spack.spec.Spec(requirement_dict[requirement])
- spec_to_install.concretize()
- tty.msg("Installing %s to satisfy requirement for %s" %
- (spec_to_install, requirement))
- kwargs['explicit'] = True
- package = spack.repo.get(spec_to_install)
- package.do_install(**kwargs)
| {"golden_diff": "diff --git a/lib/spack/spack/cmd/bootstrap.py b/lib/spack/spack/cmd/bootstrap.py\ndeleted file mode 100644\n--- a/lib/spack/spack/cmd/bootstrap.py\n+++ /dev/null\n@@ -1,80 +0,0 @@\n-# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n-# Spack Project Developers. See the top-level COPYRIGHT file for details.\n-#\n-# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n-\n-import llnl.util.cpu\n-import llnl.util.tty as tty\n-\n-import spack.repo\n-import spack.spec\n-import spack.cmd.common.arguments as arguments\n-\n-description = \"Bootstrap packages needed for spack to run smoothly\"\n-section = \"admin\"\n-level = \"long\"\n-\n-\n-def setup_parser(subparser):\n- arguments.add_common_arguments(subparser, ['jobs'])\n- subparser.add_argument(\n- '--keep-prefix', action='store_true', dest='keep_prefix',\n- help=\"don't remove the install prefix if installation fails\")\n- subparser.add_argument(\n- '--keep-stage', action='store_true', dest='keep_stage',\n- help=\"don't remove the build stage if installation succeeds\")\n- arguments.add_common_arguments(subparser, ['no_checksum'])\n- subparser.add_argument(\n- '-v', '--verbose', action='store_true', dest='verbose',\n- help=\"display verbose build output while installing\")\n-\n- cache_group = subparser.add_mutually_exclusive_group()\n- cache_group.add_argument(\n- '--use-cache', action='store_true', dest='use_cache', default=True,\n- help=\"check for pre-built Spack packages in mirrors (default)\")\n- cache_group.add_argument(\n- '--no-cache', action='store_false', dest='use_cache', default=True,\n- help=\"do not check for pre-built Spack packages in mirrors\")\n- cache_group.add_argument(\n- '--cache-only', action='store_true', dest='cache_only', default=False,\n- help=\"only install package from binary mirrors\")\n-\n- cd_group = subparser.add_mutually_exclusive_group()\n- arguments.add_common_arguments(cd_group, ['clean', 'dirty'])\n-\n-\n-def bootstrap(parser, args, **kwargs):\n- kwargs.update({\n- 'keep_prefix': args.keep_prefix,\n- 'keep_stage': args.keep_stage,\n- 'install_deps': 'dependencies',\n- 'verbose': args.verbose,\n- 'dirty': args.dirty,\n- 'use_cache': args.use_cache,\n- 'cache_only': args.cache_only\n- })\n-\n- # Define requirement dictionary defining general specs which need\n- # to be satisfied, and the specs to install when the general spec\n- # isn't satisfied.\n- requirement_dict = {\n- # Install environment-modules with generic optimizations\n- 'environment-modules': 'environment-modules~X target={0}'.format(\n- llnl.util.cpu.host().family\n- )\n- }\n-\n- for requirement in requirement_dict:\n- installed_specs = spack.store.db.query(requirement)\n- if(len(installed_specs) > 0):\n- tty.msg(\"Requirement %s is satisfied with installed \"\n- \"package %s\" % (requirement, installed_specs[0]))\n- else:\n- # Install requirement\n- spec_to_install = spack.spec.Spec(requirement_dict[requirement])\n- spec_to_install.concretize()\n- tty.msg(\"Installing %s to satisfy requirement for %s\" %\n- (spec_to_install, requirement))\n- kwargs['explicit'] = True\n- package = spack.repo.get(spec_to_install)\n- package.do_install(**kwargs)\n", "issue": "Remove 'spack bootstrap' from the commands\nAs a Spack maintainer I want to remove the `spack bootstrap` command (outdated since #14062) so that I could reduce the amount of boilerplate code in the project.\r\n\r\n### Rationale\r\n\r\nThe `spack bootstrap` command was used to \"Bootstrap packages needed for spack to run smoothly\" and in reality it has always just installed `environment-modules~X`. Since #14062 shell integration doesn't require `environment-modules` anymore making the command outdated. I would therefore remove that command from the code base.\r\n\r\n### Description\r\n\r\nJust remove the command and any test / package associated only with it.\r\n\r\n\r\n### Additional information\r\n\r\nOpening the issue to check what is the consensus towards this.\r\n\n", "before_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nimport llnl.util.cpu\nimport llnl.util.tty as tty\n\nimport spack.repo\nimport spack.spec\nimport spack.cmd.common.arguments as arguments\n\ndescription = \"Bootstrap packages needed for spack to run smoothly\"\nsection = \"admin\"\nlevel = \"long\"\n\n\ndef setup_parser(subparser):\n arguments.add_common_arguments(subparser, ['jobs'])\n subparser.add_argument(\n '--keep-prefix', action='store_true', dest='keep_prefix',\n help=\"don't remove the install prefix if installation fails\")\n subparser.add_argument(\n '--keep-stage', action='store_true', dest='keep_stage',\n help=\"don't remove the build stage if installation succeeds\")\n arguments.add_common_arguments(subparser, ['no_checksum'])\n subparser.add_argument(\n '-v', '--verbose', action='store_true', dest='verbose',\n help=\"display verbose build output while installing\")\n\n cache_group = subparser.add_mutually_exclusive_group()\n cache_group.add_argument(\n '--use-cache', action='store_true', dest='use_cache', default=True,\n help=\"check for pre-built Spack packages in mirrors (default)\")\n cache_group.add_argument(\n '--no-cache', action='store_false', dest='use_cache', default=True,\n help=\"do not check for pre-built Spack packages in mirrors\")\n cache_group.add_argument(\n '--cache-only', action='store_true', dest='cache_only', default=False,\n help=\"only install package from binary mirrors\")\n\n cd_group = subparser.add_mutually_exclusive_group()\n arguments.add_common_arguments(cd_group, ['clean', 'dirty'])\n\n\ndef bootstrap(parser, args, **kwargs):\n kwargs.update({\n 'keep_prefix': args.keep_prefix,\n 'keep_stage': args.keep_stage,\n 'install_deps': 'dependencies',\n 'verbose': args.verbose,\n 'dirty': args.dirty,\n 'use_cache': args.use_cache,\n 'cache_only': args.cache_only\n })\n\n # Define requirement dictionary defining general specs which need\n # to be satisfied, and the specs to install when the general spec\n # isn't satisfied.\n requirement_dict = {\n # Install environment-modules with generic optimizations\n 'environment-modules': 'environment-modules~X target={0}'.format(\n llnl.util.cpu.host().family\n )\n }\n\n for requirement in requirement_dict:\n installed_specs = spack.store.db.query(requirement)\n if(len(installed_specs) > 0):\n tty.msg(\"Requirement %s is satisfied with installed \"\n \"package %s\" % (requirement, installed_specs[0]))\n else:\n # Install requirement\n spec_to_install = spack.spec.Spec(requirement_dict[requirement])\n spec_to_install.concretize()\n tty.msg(\"Installing %s to satisfy requirement for %s\" %\n (spec_to_install, requirement))\n kwargs['explicit'] = True\n package = spack.repo.get(spec_to_install)\n package.do_install(**kwargs)\n", "path": "lib/spack/spack/cmd/bootstrap.py"}]} | 1,550 | 812 |
gh_patches_debug_27279 | rasdani/github-patches | git_diff | Pylons__pyramid-2618 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pcreate -s shows wrong link to tutorials
after a
```
pcreate -s alchemy scaffold-alchemy
```
I see a link to tutorials, but this link is a 404:
```
Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials
```
</issue>
<code>
[start of pyramid/scaffolds/__init__.py]
1 import binascii
2 import os
3 from textwrap import dedent
4
5 from pyramid.compat import native_
6
7 from pyramid.scaffolds.template import Template # API
8
9 class PyramidTemplate(Template):
10 """
11 A class that can be used as a base class for Pyramid scaffolding
12 templates.
13 """
14 def pre(self, command, output_dir, vars):
15 """ Overrides :meth:`pyramid.scaffolds.template.Template.pre`, adding
16 several variables to the default variables list (including
17 ``random_string``, and ``package_logger``). It also prevents common
18 misnamings (such as naming a package "site" or naming a package
19 logger "root".
20 """
21 vars['random_string'] = native_(binascii.hexlify(os.urandom(20)))
22 package_logger = vars['package']
23 if package_logger == 'root':
24 # Rename the app logger in the rare case a project is named 'root'
25 package_logger = 'app'
26 vars['package_logger'] = package_logger
27 return Template.pre(self, command, output_dir, vars)
28
29 def post(self, command, output_dir, vars): # pragma: no cover
30 """ Overrides :meth:`pyramid.scaffolds.template.Template.post`, to
31 print "Welcome to Pyramid. Sorry for the convenience." after a
32 successful scaffolding rendering."""
33
34 separator = "=" * 79
35 msg = dedent(
36 """
37 %(separator)s
38 Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials
39 Documentation: http://docs.pylonsproject.org/projects/pyramid
40
41 Twitter (tips & updates): http://twitter.com/pylons
42 Mailing List: http://groups.google.com/group/pylons-discuss
43
44 Welcome to Pyramid. Sorry for the convenience.
45 %(separator)s
46 """ % {'separator': separator})
47
48 self.out(msg)
49 return Template.post(self, command, output_dir, vars)
50
51 def out(self, msg): # pragma: no cover (replaceable testing hook)
52 print(msg)
53
54 class StarterProjectTemplate(PyramidTemplate):
55 _template_dir = 'starter'
56 summary = 'Pyramid starter project'
57
58 class ZODBProjectTemplate(PyramidTemplate):
59 _template_dir = 'zodb'
60 summary = 'Pyramid ZODB project using traversal'
61
62 class AlchemyProjectTemplate(PyramidTemplate):
63 _template_dir = 'alchemy'
64 summary = 'Pyramid SQLAlchemy project using url dispatch'
65
[end of pyramid/scaffolds/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyramid/scaffolds/__init__.py b/pyramid/scaffolds/__init__.py
--- a/pyramid/scaffolds/__init__.py
+++ b/pyramid/scaffolds/__init__.py
@@ -35,11 +35,10 @@
msg = dedent(
"""
%(separator)s
- Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials
- Documentation: http://docs.pylonsproject.org/projects/pyramid
-
- Twitter (tips & updates): http://twitter.com/pylons
- Mailing List: http://groups.google.com/group/pylons-discuss
+ Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials/en/latest/
+ Documentation: http://docs.pylonsproject.org/projects/pyramid/en/latest/
+ Twitter: https://twitter.com/trypyramid
+ Mailing List: https://groups.google.com/forum/#!forum/pylons-discuss
Welcome to Pyramid. Sorry for the convenience.
%(separator)s
@@ -53,12 +52,13 @@
class StarterProjectTemplate(PyramidTemplate):
_template_dir = 'starter'
- summary = 'Pyramid starter project'
+ summary = 'Pyramid starter project using URL dispatch and Chameleon'
class ZODBProjectTemplate(PyramidTemplate):
_template_dir = 'zodb'
- summary = 'Pyramid ZODB project using traversal'
+ summary = 'Pyramid project using ZODB, traversal, and Chameleon'
class AlchemyProjectTemplate(PyramidTemplate):
_template_dir = 'alchemy'
- summary = 'Pyramid SQLAlchemy project using url dispatch'
+ summary = 'Pyramid project using SQLAlchemy, SQLite, URL dispatch, and'
+ ' Chameleon'
| {"golden_diff": "diff --git a/pyramid/scaffolds/__init__.py b/pyramid/scaffolds/__init__.py\n--- a/pyramid/scaffolds/__init__.py\n+++ b/pyramid/scaffolds/__init__.py\n@@ -35,11 +35,10 @@\n msg = dedent(\n \"\"\"\n %(separator)s\n- Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials\n- Documentation: http://docs.pylonsproject.org/projects/pyramid\n-\n- Twitter (tips & updates): http://twitter.com/pylons\n- Mailing List: http://groups.google.com/group/pylons-discuss\n+ Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials/en/latest/\n+ Documentation: http://docs.pylonsproject.org/projects/pyramid/en/latest/\n+ Twitter: https://twitter.com/trypyramid\n+ Mailing List: https://groups.google.com/forum/#!forum/pylons-discuss\n \n Welcome to Pyramid. Sorry for the convenience.\n %(separator)s\n@@ -53,12 +52,13 @@\n \n class StarterProjectTemplate(PyramidTemplate):\n _template_dir = 'starter'\n- summary = 'Pyramid starter project'\n+ summary = 'Pyramid starter project using URL dispatch and Chameleon'\n \n class ZODBProjectTemplate(PyramidTemplate):\n _template_dir = 'zodb'\n- summary = 'Pyramid ZODB project using traversal'\n+ summary = 'Pyramid project using ZODB, traversal, and Chameleon'\n \n class AlchemyProjectTemplate(PyramidTemplate):\n _template_dir = 'alchemy'\n- summary = 'Pyramid SQLAlchemy project using url dispatch'\n+ summary = 'Pyramid project using SQLAlchemy, SQLite, URL dispatch, and'\n+ ' Chameleon'\n", "issue": "pcreate -s shows wrong link to tutorials\nafter a \n\n```\npcreate -s alchemy scaffold-alchemy\n```\n\nI see a link to tutorials, but this link is a 404: \n\n```\nTutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials\n```\n\n", "before_files": [{"content": "import binascii\nimport os\nfrom textwrap import dedent\n\nfrom pyramid.compat import native_\n\nfrom pyramid.scaffolds.template import Template # API\n\nclass PyramidTemplate(Template):\n \"\"\"\n A class that can be used as a base class for Pyramid scaffolding\n templates.\n \"\"\"\n def pre(self, command, output_dir, vars):\n \"\"\" Overrides :meth:`pyramid.scaffolds.template.Template.pre`, adding\n several variables to the default variables list (including\n ``random_string``, and ``package_logger``). It also prevents common\n misnamings (such as naming a package \"site\" or naming a package\n logger \"root\".\n \"\"\"\n vars['random_string'] = native_(binascii.hexlify(os.urandom(20)))\n package_logger = vars['package']\n if package_logger == 'root':\n # Rename the app logger in the rare case a project is named 'root'\n package_logger = 'app'\n vars['package_logger'] = package_logger\n return Template.pre(self, command, output_dir, vars)\n\n def post(self, command, output_dir, vars): # pragma: no cover\n \"\"\" Overrides :meth:`pyramid.scaffolds.template.Template.post`, to\n print \"Welcome to Pyramid. Sorry for the convenience.\" after a\n successful scaffolding rendering.\"\"\"\n\n separator = \"=\" * 79\n msg = dedent(\n \"\"\"\n %(separator)s\n Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials\n Documentation: http://docs.pylonsproject.org/projects/pyramid\n\n Twitter (tips & updates): http://twitter.com/pylons\n Mailing List: http://groups.google.com/group/pylons-discuss\n\n Welcome to Pyramid. Sorry for the convenience.\n %(separator)s\n \"\"\" % {'separator': separator})\n\n self.out(msg)\n return Template.post(self, command, output_dir, vars)\n\n def out(self, msg): # pragma: no cover (replaceable testing hook)\n print(msg)\n\nclass StarterProjectTemplate(PyramidTemplate):\n _template_dir = 'starter'\n summary = 'Pyramid starter project'\n\nclass ZODBProjectTemplate(PyramidTemplate):\n _template_dir = 'zodb'\n summary = 'Pyramid ZODB project using traversal'\n\nclass AlchemyProjectTemplate(PyramidTemplate):\n _template_dir = 'alchemy'\n summary = 'Pyramid SQLAlchemy project using url dispatch'\n", "path": "pyramid/scaffolds/__init__.py"}]} | 1,258 | 399 |
gh_patches_debug_13808 | rasdani/github-patches | git_diff | lisa-lab__pylearn2-1503 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[bug] print_monitor_cv.py model not iterable
I've tried `print_monitor_cv.py model.pkl` but I've got
```
Traceback (most recent call last):
File "~/pylearn2/pylearn2/scripts/print_monitor_cv.py", line 84, in <module>
main(**vars(args))
File "~/pylearn2/pylearn2/scripts/print_monitor_cv.py", line 38, in main
for model in list(this_models):
TypeError: 'MLP' object is not iterable
```
so I changed [this part](https://github.com/lisa-lab/pylearn2/blob/master/pylearn2/scripts/print_monitor_cv.py#L38):
``` python
this_models = serial.load(filename)
for model in list(this_models):
# ...
```
to
``` python
# ....
this_models = serial.load(filename)
try:
this_models = list(this_models)
except TypeError:
this_models = [this_models]
for model in this_models:
# ...
```
PR?
</issue>
<code>
[start of pylearn2/scripts/print_monitor_cv.py]
1 #!/usr/bin/env python
2 """
3 Print (average) channel values for a collection of models, such as that
4 serialized by TrainCV. Based on print_monitor.py.
5
6 usage: print_monitor_cv.py model.pkl [-a]
7 """
8 from __future__ import print_function
9
10 __author__ = "Steven Kearnes"
11 __copyright__ = "Copyright 2014, Stanford University"
12 __license__ = "3-clause BSD"
13 __maintainer__ = "Steven Kearnes"
14
15 import argparse
16 import numpy as np
17
18 from pylearn2.utils import serial
19
20
21 def main(models, all=False):
22 """
23 Print (average) final channel values for a collection of models.
24
25 Parameters
26 ----------
27 models : list
28 Filename(s) for models to analyze.
29 all : bool, optional (default False)
30 Whether to output values for all models. If False, only averages
31 and standard deviations across all models are displayed.
32 """
33 epochs = []
34 time = []
35 values = {}
36 for filename in np.atleast_1d(models):
37 this_models = serial.load(filename)
38 for model in list(this_models):
39 monitor = model.monitor
40 channels = monitor.channels
41 epochs.append(monitor._epochs_seen)
42 time.append(max(channels[key].time_record[-1] for key in channels))
43 for key in sorted(channels.keys()):
44 if key not in values:
45 values[key] = []
46 values[key].append(channels[key].val_record[-1])
47 n_models = len(epochs)
48 print('number of models: {0}'.format(n_models))
49 if n_models > 1:
50 if all:
51 print('\nepochs seen:\n{0}\n{1} +/- {2}'.format(np.asarray(epochs),
52 np.mean(epochs),
53 np.std(epochs)))
54 print('\ntraining time:\n{0}\n{1} +/- {2}'.format(np.asarray(time),
55 np.mean(time),
56 np.std(time)))
57 else:
58 print('epochs seen: {0} +/- {1}'.format(np.mean(epochs),
59 np.std(epochs)))
60 print('training time: {0} +/- {1}'.format(np.mean(time),
61 np.std(time)))
62 for key in sorted(values.keys()):
63 if all:
64 print('\n{0}:\n{1}\n{2} +/- {3}'.format(
65 key, np.asarray(values[key]),
66 np.mean(values[key]), np.std(values[key])))
67 else:
68 print('{0}: {1} +/- {2}'.format(key, np.mean(values[key]),
69 np.std(values[key])))
70 else:
71 print('epochs seen: {0}'.format(epochs[0]))
72 print('training time: {0}'.format(time[0]))
73 for key in sorted(values.keys()):
74 print('{0}: {1}'.format(key, values[key][0]))
75
76 if __name__ == '__main__':
77 parser = argparse.ArgumentParser()
78 parser.add_argument('models', nargs='+',
79 help='Model or models to analyze.')
80 parser.add_argument('-a', '--all', action='store_true',
81 help='Print values for all models instead of ' +
82 'averages.')
83 args = parser.parse_args()
84 main(**vars(args))
85
[end of pylearn2/scripts/print_monitor_cv.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pylearn2/scripts/print_monitor_cv.py b/pylearn2/scripts/print_monitor_cv.py
--- a/pylearn2/scripts/print_monitor_cv.py
+++ b/pylearn2/scripts/print_monitor_cv.py
@@ -13,6 +13,7 @@
__maintainer__ = "Steven Kearnes"
import argparse
+from collections import Iterable
import numpy as np
from pylearn2.utils import serial
@@ -35,6 +36,8 @@
values = {}
for filename in np.atleast_1d(models):
this_models = serial.load(filename)
+ if not isinstance(this_models, Iterable):
+ this_models = [this_models]
for model in list(this_models):
monitor = model.monitor
channels = monitor.channels
| {"golden_diff": "diff --git a/pylearn2/scripts/print_monitor_cv.py b/pylearn2/scripts/print_monitor_cv.py\n--- a/pylearn2/scripts/print_monitor_cv.py\n+++ b/pylearn2/scripts/print_monitor_cv.py\n@@ -13,6 +13,7 @@\n __maintainer__ = \"Steven Kearnes\"\n \n import argparse\n+from collections import Iterable\n import numpy as np\n \n from pylearn2.utils import serial\n@@ -35,6 +36,8 @@\n values = {}\n for filename in np.atleast_1d(models):\n this_models = serial.load(filename)\n+ if not isinstance(this_models, Iterable):\n+ this_models = [this_models]\n for model in list(this_models):\n monitor = model.monitor\n channels = monitor.channels\n", "issue": "[bug] print_monitor_cv.py model not iterable\nI've tried `print_monitor_cv.py model.pkl` but I've got\n\n```\nTraceback (most recent call last):\n File \"~/pylearn2/pylearn2/scripts/print_monitor_cv.py\", line 84, in <module>\n main(**vars(args))\n File \"~/pylearn2/pylearn2/scripts/print_monitor_cv.py\", line 38, in main\n for model in list(this_models):\nTypeError: 'MLP' object is not iterable\n```\n\nso I changed [this part](https://github.com/lisa-lab/pylearn2/blob/master/pylearn2/scripts/print_monitor_cv.py#L38):\n\n``` python\n this_models = serial.load(filename)\n for model in list(this_models):\n # ...\n```\n\nto\n\n``` python\n # ....\n this_models = serial.load(filename)\n\n try:\n this_models = list(this_models)\n except TypeError:\n this_models = [this_models]\n\n for model in this_models:\n # ...\n```\n\nPR?\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"\nPrint (average) channel values for a collection of models, such as that\nserialized by TrainCV. Based on print_monitor.py.\n\nusage: print_monitor_cv.py model.pkl [-a]\n\"\"\"\nfrom __future__ import print_function\n\n__author__ = \"Steven Kearnes\"\n__copyright__ = \"Copyright 2014, Stanford University\"\n__license__ = \"3-clause BSD\"\n__maintainer__ = \"Steven Kearnes\"\n\nimport argparse\nimport numpy as np\n\nfrom pylearn2.utils import serial\n\n\ndef main(models, all=False):\n \"\"\"\n Print (average) final channel values for a collection of models.\n\n Parameters\n ----------\n models : list\n Filename(s) for models to analyze.\n all : bool, optional (default False)\n Whether to output values for all models. If False, only averages\n and standard deviations across all models are displayed.\n \"\"\"\n epochs = []\n time = []\n values = {}\n for filename in np.atleast_1d(models):\n this_models = serial.load(filename)\n for model in list(this_models):\n monitor = model.monitor\n channels = monitor.channels\n epochs.append(monitor._epochs_seen)\n time.append(max(channels[key].time_record[-1] for key in channels))\n for key in sorted(channels.keys()):\n if key not in values:\n values[key] = []\n values[key].append(channels[key].val_record[-1])\n n_models = len(epochs)\n print('number of models: {0}'.format(n_models))\n if n_models > 1:\n if all:\n print('\\nepochs seen:\\n{0}\\n{1} +/- {2}'.format(np.asarray(epochs),\n np.mean(epochs),\n np.std(epochs)))\n print('\\ntraining time:\\n{0}\\n{1} +/- {2}'.format(np.asarray(time),\n np.mean(time),\n np.std(time)))\n else:\n print('epochs seen: {0} +/- {1}'.format(np.mean(epochs),\n np.std(epochs)))\n print('training time: {0} +/- {1}'.format(np.mean(time),\n np.std(time)))\n for key in sorted(values.keys()):\n if all:\n print('\\n{0}:\\n{1}\\n{2} +/- {3}'.format(\n key, np.asarray(values[key]),\n np.mean(values[key]), np.std(values[key])))\n else:\n print('{0}: {1} +/- {2}'.format(key, np.mean(values[key]),\n np.std(values[key])))\n else:\n print('epochs seen: {0}'.format(epochs[0]))\n print('training time: {0}'.format(time[0]))\n for key in sorted(values.keys()):\n print('{0}: {1}'.format(key, values[key][0]))\n\nif __name__ == '__main__':\n parser = argparse.ArgumentParser()\n parser.add_argument('models', nargs='+',\n help='Model or models to analyze.')\n parser.add_argument('-a', '--all', action='store_true',\n help='Print values for all models instead of ' +\n 'averages.')\n args = parser.parse_args()\n main(**vars(args))\n", "path": "pylearn2/scripts/print_monitor_cv.py"}]} | 1,623 | 170 |
gh_patches_debug_2375 | rasdani/github-patches | git_diff | lutris__lutris-559 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Lutris shortcuts broken
See: https://forums.lutris.net/t/desktop-shortcut-not-work-for-any-game/456
</issue>
<code>
[start of lutris/util/resources.py]
1 import os
2 import re
3 import concurrent.futures
4 from urllib.parse import urlparse, parse_qsl
5
6 from lutris import settings
7 from lutris import api
8 from lutris.util.log import logger
9 from lutris.util.http import Request
10
11 BANNER = "banner"
12 ICON = "icon"
13
14
15 def get_icon_path(game, icon_type):
16 if icon_type == BANNER:
17 return os.path.join(settings.BANNER_PATH, "%s.jpg" % game)
18 if icon_type == ICON:
19 return os.path.join(settings.ICON_PATH, "lutris_%s.png" % game)
20
21
22 def has_icon(game, icon_type):
23 if icon_type == BANNER:
24 icon_path = get_icon_path(game, BANNER)
25 return os.path.exists(icon_path)
26 elif icon_type == ICON:
27 icon_path = get_icon_path(game, ICON)
28 return os.path.exists(icon_path)
29
30
31 def fetch_icons(game_slugs, callback=None):
32 no_banners = [slug for slug in game_slugs if not has_icon(slug, BANNER)]
33 no_icons = [slug for slug in game_slugs if not has_icon(slug, ICON)]
34
35 # Remove duplicate slugs
36 missing_media_slugs = list(set(no_banners) | set(no_icons))
37 if not missing_media_slugs:
38 return
39
40 response = api.get_games(game_slugs=missing_media_slugs)
41 if not response:
42 logger.warning('Unable to get games from API')
43 return
44 results = response['results']
45 while response.get('next'):
46 page_match = re.search(r'page=(\d+)', response['next'])
47 if page_match:
48 page = page_match.group(1)
49 else:
50 logger.error("No page found in %s", response['next'])
51 break
52 response = api.get_games(game_slugs=missing_media_slugs, page=page)
53 if not response:
54 logger.warning("Unable to get response for page %s", page)
55 break
56 else:
57 results += response.get('results', [])
58
59 banner_downloads = []
60 icon_downloads = []
61 updated_slugs = []
62 for game in results:
63 if game['slug'] in no_banners:
64 banner_url = game['banner_url']
65 if banner_url:
66 dest_path = get_icon_path(game['slug'], BANNER)
67 banner_downloads.append((game['banner_url'], dest_path))
68 updated_slugs.append(game['slug'])
69 if game['slug'] in no_icons:
70 icon_url = game['icon_url']
71 if icon_url:
72 dest_path = get_icon_path(game['slug'], ICON)
73 icon_downloads.append((game['icon_url'], dest_path))
74 updated_slugs.append(game['slug'])
75
76 updated_slugs = list(set(updated_slugs)) # Deduplicate slugs
77
78 downloads = banner_downloads + icon_downloads
79 with concurrent.futures.ThreadPoolExecutor(max_workers=16) as executor:
80 for url, dest_path in downloads:
81 executor.submit(download_media, url, dest_path)
82
83 if updated_slugs and callback:
84 callback(updated_slugs)
85
86
87 def download_media(url, dest, overwrite=False):
88 if os.path.exists(dest):
89 if overwrite:
90 os.remove(dest)
91 else:
92 return
93 request = Request(url).get()
94 request.write_to_file(dest)
95
96
97 def parse_installer_url(url):
98 """
99 Parses `lutris:` urls, extracting any info necessary to install or run a game.
100 """
101 try:
102 parsed_url = urlparse(url, scheme="lutris")
103 except:
104 return False
105 if parsed_url.scheme != "lutris":
106 return False
107 game_slug = parsed_url.path
108 if not game_slug:
109 return False
110 revision = None
111 if parsed_url.query:
112 query = dict(parse_qsl(parsed_url.query))
113 revision = query.get('revision')
114 return {
115 'game_slug': game_slug,
116 'revision': revision
117 }
118
[end of lutris/util/resources.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lutris/util/resources.py b/lutris/util/resources.py
--- a/lutris/util/resources.py
+++ b/lutris/util/resources.py
@@ -107,6 +107,8 @@
game_slug = parsed_url.path
if not game_slug:
return False
+ if game_slug.startswith('lutris:'):
+ game_slug = game_slug[7:]
revision = None
if parsed_url.query:
query = dict(parse_qsl(parsed_url.query))
| {"golden_diff": "diff --git a/lutris/util/resources.py b/lutris/util/resources.py\n--- a/lutris/util/resources.py\n+++ b/lutris/util/resources.py\n@@ -107,6 +107,8 @@\n game_slug = parsed_url.path\n if not game_slug:\n return False\n+ if game_slug.startswith('lutris:'):\n+ game_slug = game_slug[7:]\n revision = None\n if parsed_url.query:\n query = dict(parse_qsl(parsed_url.query))\n", "issue": "Lutris shortcuts broken\nSee: https://forums.lutris.net/t/desktop-shortcut-not-work-for-any-game/456\n", "before_files": [{"content": "import os\nimport re\nimport concurrent.futures\nfrom urllib.parse import urlparse, parse_qsl\n\nfrom lutris import settings\nfrom lutris import api\nfrom lutris.util.log import logger\nfrom lutris.util.http import Request\n\nBANNER = \"banner\"\nICON = \"icon\"\n\n\ndef get_icon_path(game, icon_type):\n if icon_type == BANNER:\n return os.path.join(settings.BANNER_PATH, \"%s.jpg\" % game)\n if icon_type == ICON:\n return os.path.join(settings.ICON_PATH, \"lutris_%s.png\" % game)\n\n\ndef has_icon(game, icon_type):\n if icon_type == BANNER:\n icon_path = get_icon_path(game, BANNER)\n return os.path.exists(icon_path)\n elif icon_type == ICON:\n icon_path = get_icon_path(game, ICON)\n return os.path.exists(icon_path)\n\n\ndef fetch_icons(game_slugs, callback=None):\n no_banners = [slug for slug in game_slugs if not has_icon(slug, BANNER)]\n no_icons = [slug for slug in game_slugs if not has_icon(slug, ICON)]\n\n # Remove duplicate slugs\n missing_media_slugs = list(set(no_banners) | set(no_icons))\n if not missing_media_slugs:\n return\n\n response = api.get_games(game_slugs=missing_media_slugs)\n if not response:\n logger.warning('Unable to get games from API')\n return\n results = response['results']\n while response.get('next'):\n page_match = re.search(r'page=(\\d+)', response['next'])\n if page_match:\n page = page_match.group(1)\n else:\n logger.error(\"No page found in %s\", response['next'])\n break\n response = api.get_games(game_slugs=missing_media_slugs, page=page)\n if not response:\n logger.warning(\"Unable to get response for page %s\", page)\n break\n else:\n results += response.get('results', [])\n\n banner_downloads = []\n icon_downloads = []\n updated_slugs = []\n for game in results:\n if game['slug'] in no_banners:\n banner_url = game['banner_url']\n if banner_url:\n dest_path = get_icon_path(game['slug'], BANNER)\n banner_downloads.append((game['banner_url'], dest_path))\n updated_slugs.append(game['slug'])\n if game['slug'] in no_icons:\n icon_url = game['icon_url']\n if icon_url:\n dest_path = get_icon_path(game['slug'], ICON)\n icon_downloads.append((game['icon_url'], dest_path))\n updated_slugs.append(game['slug'])\n\n updated_slugs = list(set(updated_slugs)) # Deduplicate slugs\n\n downloads = banner_downloads + icon_downloads\n with concurrent.futures.ThreadPoolExecutor(max_workers=16) as executor:\n for url, dest_path in downloads:\n executor.submit(download_media, url, dest_path)\n\n if updated_slugs and callback:\n callback(updated_slugs)\n\n\ndef download_media(url, dest, overwrite=False):\n if os.path.exists(dest):\n if overwrite:\n os.remove(dest)\n else:\n return\n request = Request(url).get()\n request.write_to_file(dest)\n\n\ndef parse_installer_url(url):\n \"\"\"\n Parses `lutris:` urls, extracting any info necessary to install or run a game.\n \"\"\"\n try:\n parsed_url = urlparse(url, scheme=\"lutris\")\n except:\n return False\n if parsed_url.scheme != \"lutris\":\n return False\n game_slug = parsed_url.path\n if not game_slug:\n return False\n revision = None\n if parsed_url.query:\n query = dict(parse_qsl(parsed_url.query))\n revision = query.get('revision')\n return {\n 'game_slug': game_slug,\n 'revision': revision\n }\n", "path": "lutris/util/resources.py"}]} | 1,660 | 112 |
gh_patches_debug_53951 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-985 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cfn-lint is failing because of `pkg_resources.ContextualVersionConflict: (jsonschema 2.6.0)`.
*cfn-lint version: (`0.21.6`)*
*Description of issue.*
cfn-lint(python2) requires jsonschema 2.6.0 but aws-sam-translator which got released today requires jsonschema3.0
https://pypi.org/project/aws-sam-translator/#history
pkg_resources.ContextualVersionConflict: (jsonschema 2.6.0 (/usr/lib/python2.7/site-packages), Requirement.parse('jsonschema~=3.0'), set(['aws-sam-translator']))
</issue>
<code>
[start of setup.py]
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 import codecs
18 import re
19 from setuptools import find_packages
20 from setuptools import setup
21
22
23 def get_version(filename):
24 with codecs.open(filename, 'r', 'utf-8') as fp:
25 contents = fp.read()
26 return re.search(r"__version__ = ['\"]([^'\"]+)['\"]", contents).group(1)
27
28
29 version = get_version('src/cfnlint/version.py')
30
31
32 with open('README.md') as f:
33 readme = f.read()
34
35 setup(
36 name='cfn-lint',
37 version=version,
38 description=('checks cloudformation for practices and behaviour \
39 that could potentially be improved'),
40 long_description=readme,
41 long_description_content_type="text/markdown",
42 keywords='aws, lint',
43 author='kddejong',
44 author_email='[email protected]',
45 url='https://github.com/aws-cloudformation/cfn-python-lint',
46 package_dir={'': 'src'},
47 package_data={'cfnlint': [
48 'data/CloudSpecs/*.json',
49 'data/AdditionalSpecs/*.json',
50 'data/Serverless/*.json',
51 'data/ExtendedSpecs/all/*.json',
52 'data/ExtendedSpecs/ap-northeast-1/*.json',
53 'data/ExtendedSpecs/ap-northeast-2/*.json',
54 'data/ExtendedSpecs/ap-northeast-3/*.json',
55 'data/ExtendedSpecs/ap-south-1/*.json',
56 'data/ExtendedSpecs/ap-southeast-1/*.json',
57 'data/ExtendedSpecs/ap-southeast-2/*.json',
58 'data/ExtendedSpecs/ca-central-1/*.json',
59 'data/ExtendedSpecs/eu-central-1/*.json',
60 'data/ExtendedSpecs/eu-north-1/*.json',
61 'data/ExtendedSpecs/eu-west-1/*.json',
62 'data/ExtendedSpecs/eu-west-2/*.json',
63 'data/ExtendedSpecs/eu-west-3/*.json',
64 'data/ExtendedSpecs/sa-east-1/*.json',
65 'data/ExtendedSpecs/us-east-1/*.json',
66 'data/ExtendedSpecs/us-east-2/*.json',
67 'data/ExtendedSpecs/us-gov-east-1/*.json',
68 'data/ExtendedSpecs/us-gov-west-1/*.json',
69 'data/ExtendedSpecs/us-west-1/*.json',
70 'data/ExtendedSpecs/us-west-2/*.json',
71 'data/CfnLintCli/config/schema.json'
72 ]},
73 packages=find_packages('src'),
74 zip_safe=False,
75 install_requires=[
76 'pyyaml',
77 'six~=1.11',
78 'requests>=2.15.0',
79 'aws-sam-translator>=1.10.0',
80 'jsonpatch',
81 'jsonschema~=2.6',
82 'pathlib2>=2.3.0;python_version<"3.4"',
83 'setuptools',
84 ],
85 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
86 entry_points={
87 'console_scripts': [
88 'cfn-lint = cfnlint.__main__:main'
89 ]
90 },
91 license='MIT no attribution',
92 test_suite="unittest",
93 classifiers=[
94 'Development Status :: 5 - Production/Stable',
95 'Intended Audience :: Developers',
96 'License :: OSI Approved :: MIT License',
97 'Natural Language :: English',
98 'Operating System :: OS Independent',
99 'Programming Language :: Python :: 2',
100 'Programming Language :: Python :: 2.7',
101 'Programming Language :: Python :: 3',
102 'Programming Language :: Python :: 3.4',
103 'Programming Language :: Python :: 3.5',
104 'Programming Language :: Python :: 3.6',
105 'Programming Language :: Python :: 3.7',
106 ],
107 )
108
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -78,7 +78,7 @@
'requests>=2.15.0',
'aws-sam-translator>=1.10.0',
'jsonpatch',
- 'jsonschema~=2.6',
+ 'jsonschema~=3.0',
'pathlib2>=2.3.0;python_version<"3.4"',
'setuptools',
],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -78,7 +78,7 @@\n 'requests>=2.15.0',\n 'aws-sam-translator>=1.10.0',\n 'jsonpatch',\n- 'jsonschema~=2.6',\n+ 'jsonschema~=3.0',\n 'pathlib2>=2.3.0;python_version<\"3.4\"',\n 'setuptools',\n ],\n", "issue": "cfn-lint is failing because of `pkg_resources.ContextualVersionConflict: (jsonschema 2.6.0)`. \n*cfn-lint version: (`0.21.6`)*\r\n\r\n*Description of issue.*\r\ncfn-lint(python2) requires jsonschema 2.6.0 but aws-sam-translator which got released today requires jsonschema3.0\r\n\r\nhttps://pypi.org/project/aws-sam-translator/#history\r\npkg_resources.ContextualVersionConflict: (jsonschema 2.6.0 (/usr/lib/python2.7/site-packages), Requirement.parse('jsonschema~=3.0'), set(['aws-sam-translator']))\n", "before_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport codecs\nimport re\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n\ndef get_version(filename):\n with codecs.open(filename, 'r', 'utf-8') as fp:\n contents = fp.read()\n return re.search(r\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", contents).group(1)\n\n\nversion = get_version('src/cfnlint/version.py')\n\n\nwith open('README.md') as f:\n readme = f.read()\n\nsetup(\n name='cfn-lint',\n version=version,\n description=('checks cloudformation for practices and behaviour \\\n that could potentially be improved'),\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n keywords='aws, lint',\n author='kddejong',\n author_email='[email protected]',\n url='https://github.com/aws-cloudformation/cfn-python-lint',\n package_dir={'': 'src'},\n package_data={'cfnlint': [\n 'data/CloudSpecs/*.json',\n 'data/AdditionalSpecs/*.json',\n 'data/Serverless/*.json',\n 'data/ExtendedSpecs/all/*.json',\n 'data/ExtendedSpecs/ap-northeast-1/*.json',\n 'data/ExtendedSpecs/ap-northeast-2/*.json',\n 'data/ExtendedSpecs/ap-northeast-3/*.json',\n 'data/ExtendedSpecs/ap-south-1/*.json',\n 'data/ExtendedSpecs/ap-southeast-1/*.json',\n 'data/ExtendedSpecs/ap-southeast-2/*.json',\n 'data/ExtendedSpecs/ca-central-1/*.json',\n 'data/ExtendedSpecs/eu-central-1/*.json',\n 'data/ExtendedSpecs/eu-north-1/*.json',\n 'data/ExtendedSpecs/eu-west-1/*.json',\n 'data/ExtendedSpecs/eu-west-2/*.json',\n 'data/ExtendedSpecs/eu-west-3/*.json',\n 'data/ExtendedSpecs/sa-east-1/*.json',\n 'data/ExtendedSpecs/us-east-1/*.json',\n 'data/ExtendedSpecs/us-east-2/*.json',\n 'data/ExtendedSpecs/us-gov-east-1/*.json',\n 'data/ExtendedSpecs/us-gov-west-1/*.json',\n 'data/ExtendedSpecs/us-west-1/*.json',\n 'data/ExtendedSpecs/us-west-2/*.json',\n 'data/CfnLintCli/config/schema.json'\n ]},\n packages=find_packages('src'),\n zip_safe=False,\n install_requires=[\n 'pyyaml',\n 'six~=1.11',\n 'requests>=2.15.0',\n 'aws-sam-translator>=1.10.0',\n 'jsonpatch',\n 'jsonschema~=2.6',\n 'pathlib2>=2.3.0;python_version<\"3.4\"',\n 'setuptools',\n ],\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n entry_points={\n 'console_scripts': [\n 'cfn-lint = cfnlint.__main__:main'\n ]\n },\n license='MIT no attribution',\n test_suite=\"unittest\",\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n ],\n)\n", "path": "setup.py"}]} | 1,947 | 110 |
gh_patches_debug_37325 | rasdani/github-patches | git_diff | pallets__werkzeug-1790 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
deprecate posixemulation
It's called out as "not a public interface" in the docstring, and looks like it was only there to support `contrib.sessions`, which has moved to `secure-cookie` now. Move it there if it's still needed, remove it here.
</issue>
<code>
[start of src/werkzeug/posixemulation.py]
1 """A ``rename`` function that follows POSIX semantics. If the target
2 file already exists it will be replaced without asking.
3
4 This is not a public interface.
5 """
6 import errno
7 import os
8 import random
9 import sys
10 import time
11
12 from ._internal import _to_str
13 from .filesystem import get_filesystem_encoding
14
15 can_rename_open_file = False
16
17 if os.name == "nt":
18 try:
19 import ctypes
20
21 _MOVEFILE_REPLACE_EXISTING = 0x1
22 _MOVEFILE_WRITE_THROUGH = 0x8
23 _MoveFileEx = ctypes.windll.kernel32.MoveFileExW # type: ignore
24
25 def _rename(src, dst):
26 src = _to_str(src, get_filesystem_encoding())
27 dst = _to_str(dst, get_filesystem_encoding())
28 if _rename_atomic(src, dst):
29 return True
30 retry = 0
31 rv = False
32 while not rv and retry < 100:
33 rv = _MoveFileEx(
34 src, dst, _MOVEFILE_REPLACE_EXISTING | _MOVEFILE_WRITE_THROUGH
35 )
36 if not rv:
37 time.sleep(0.001)
38 retry += 1
39 return rv
40
41 # new in Vista and Windows Server 2008
42 _CreateTransaction = ctypes.windll.ktmw32.CreateTransaction # type: ignore
43 _CommitTransaction = ctypes.windll.ktmw32.CommitTransaction # type: ignore
44 _MoveFileTransacted = ctypes.windll.kernel32.MoveFileTransactedW # type: ignore
45 _CloseHandle = ctypes.windll.kernel32.CloseHandle # type: ignore
46 can_rename_open_file = True
47
48 def _rename_atomic(src, dst):
49 ta = _CreateTransaction(None, 0, 0, 0, 0, 1000, "Werkzeug rename")
50 if ta == -1:
51 return False
52 try:
53 retry = 0
54 rv = False
55 while not rv and retry < 100:
56 rv = _MoveFileTransacted(
57 src,
58 dst,
59 None,
60 None,
61 _MOVEFILE_REPLACE_EXISTING | _MOVEFILE_WRITE_THROUGH,
62 ta,
63 )
64 if rv:
65 rv = _CommitTransaction(ta)
66 break
67 else:
68 time.sleep(0.001)
69 retry += 1
70 return rv
71 finally:
72 _CloseHandle(ta)
73
74 except Exception:
75
76 def _rename(src, dst):
77 return False
78
79 def _rename_atomic(src, dst):
80 return False
81
82 def rename(src, dst):
83 # Try atomic or pseudo-atomic rename
84 if _rename(src, dst):
85 return
86 # Fall back to "move away and replace"
87 try:
88 os.rename(src, dst)
89 except OSError as e:
90 if e.errno != errno.EEXIST:
91 raise
92 old = f"{dst}-{random.randint(0, sys.maxsize):08x}"
93 os.rename(dst, old)
94 os.rename(src, dst)
95 try:
96 os.unlink(old)
97 except Exception:
98 pass
99
100
101 else:
102 rename = os.rename
103 can_rename_open_file = True
104
[end of src/werkzeug/posixemulation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/werkzeug/posixemulation.py b/src/werkzeug/posixemulation.py
deleted file mode 100644
--- a/src/werkzeug/posixemulation.py
+++ /dev/null
@@ -1,103 +0,0 @@
-"""A ``rename`` function that follows POSIX semantics. If the target
-file already exists it will be replaced without asking.
-
-This is not a public interface.
-"""
-import errno
-import os
-import random
-import sys
-import time
-
-from ._internal import _to_str
-from .filesystem import get_filesystem_encoding
-
-can_rename_open_file = False
-
-if os.name == "nt":
- try:
- import ctypes
-
- _MOVEFILE_REPLACE_EXISTING = 0x1
- _MOVEFILE_WRITE_THROUGH = 0x8
- _MoveFileEx = ctypes.windll.kernel32.MoveFileExW # type: ignore
-
- def _rename(src, dst):
- src = _to_str(src, get_filesystem_encoding())
- dst = _to_str(dst, get_filesystem_encoding())
- if _rename_atomic(src, dst):
- return True
- retry = 0
- rv = False
- while not rv and retry < 100:
- rv = _MoveFileEx(
- src, dst, _MOVEFILE_REPLACE_EXISTING | _MOVEFILE_WRITE_THROUGH
- )
- if not rv:
- time.sleep(0.001)
- retry += 1
- return rv
-
- # new in Vista and Windows Server 2008
- _CreateTransaction = ctypes.windll.ktmw32.CreateTransaction # type: ignore
- _CommitTransaction = ctypes.windll.ktmw32.CommitTransaction # type: ignore
- _MoveFileTransacted = ctypes.windll.kernel32.MoveFileTransactedW # type: ignore
- _CloseHandle = ctypes.windll.kernel32.CloseHandle # type: ignore
- can_rename_open_file = True
-
- def _rename_atomic(src, dst):
- ta = _CreateTransaction(None, 0, 0, 0, 0, 1000, "Werkzeug rename")
- if ta == -1:
- return False
- try:
- retry = 0
- rv = False
- while not rv and retry < 100:
- rv = _MoveFileTransacted(
- src,
- dst,
- None,
- None,
- _MOVEFILE_REPLACE_EXISTING | _MOVEFILE_WRITE_THROUGH,
- ta,
- )
- if rv:
- rv = _CommitTransaction(ta)
- break
- else:
- time.sleep(0.001)
- retry += 1
- return rv
- finally:
- _CloseHandle(ta)
-
- except Exception:
-
- def _rename(src, dst):
- return False
-
- def _rename_atomic(src, dst):
- return False
-
- def rename(src, dst):
- # Try atomic or pseudo-atomic rename
- if _rename(src, dst):
- return
- # Fall back to "move away and replace"
- try:
- os.rename(src, dst)
- except OSError as e:
- if e.errno != errno.EEXIST:
- raise
- old = f"{dst}-{random.randint(0, sys.maxsize):08x}"
- os.rename(dst, old)
- os.rename(src, dst)
- try:
- os.unlink(old)
- except Exception:
- pass
-
-
-else:
- rename = os.rename
- can_rename_open_file = True
| {"golden_diff": "diff --git a/src/werkzeug/posixemulation.py b/src/werkzeug/posixemulation.py\ndeleted file mode 100644\n--- a/src/werkzeug/posixemulation.py\n+++ /dev/null\n@@ -1,103 +0,0 @@\n-\"\"\"A ``rename`` function that follows POSIX semantics. If the target\n-file already exists it will be replaced without asking.\n-\n-This is not a public interface.\n-\"\"\"\n-import errno\n-import os\n-import random\n-import sys\n-import time\n-\n-from ._internal import _to_str\n-from .filesystem import get_filesystem_encoding\n-\n-can_rename_open_file = False\n-\n-if os.name == \"nt\":\n- try:\n- import ctypes\n-\n- _MOVEFILE_REPLACE_EXISTING = 0x1\n- _MOVEFILE_WRITE_THROUGH = 0x8\n- _MoveFileEx = ctypes.windll.kernel32.MoveFileExW # type: ignore\n-\n- def _rename(src, dst):\n- src = _to_str(src, get_filesystem_encoding())\n- dst = _to_str(dst, get_filesystem_encoding())\n- if _rename_atomic(src, dst):\n- return True\n- retry = 0\n- rv = False\n- while not rv and retry < 100:\n- rv = _MoveFileEx(\n- src, dst, _MOVEFILE_REPLACE_EXISTING | _MOVEFILE_WRITE_THROUGH\n- )\n- if not rv:\n- time.sleep(0.001)\n- retry += 1\n- return rv\n-\n- # new in Vista and Windows Server 2008\n- _CreateTransaction = ctypes.windll.ktmw32.CreateTransaction # type: ignore\n- _CommitTransaction = ctypes.windll.ktmw32.CommitTransaction # type: ignore\n- _MoveFileTransacted = ctypes.windll.kernel32.MoveFileTransactedW # type: ignore\n- _CloseHandle = ctypes.windll.kernel32.CloseHandle # type: ignore\n- can_rename_open_file = True\n-\n- def _rename_atomic(src, dst):\n- ta = _CreateTransaction(None, 0, 0, 0, 0, 1000, \"Werkzeug rename\")\n- if ta == -1:\n- return False\n- try:\n- retry = 0\n- rv = False\n- while not rv and retry < 100:\n- rv = _MoveFileTransacted(\n- src,\n- dst,\n- None,\n- None,\n- _MOVEFILE_REPLACE_EXISTING | _MOVEFILE_WRITE_THROUGH,\n- ta,\n- )\n- if rv:\n- rv = _CommitTransaction(ta)\n- break\n- else:\n- time.sleep(0.001)\n- retry += 1\n- return rv\n- finally:\n- _CloseHandle(ta)\n-\n- except Exception:\n-\n- def _rename(src, dst):\n- return False\n-\n- def _rename_atomic(src, dst):\n- return False\n-\n- def rename(src, dst):\n- # Try atomic or pseudo-atomic rename\n- if _rename(src, dst):\n- return\n- # Fall back to \"move away and replace\"\n- try:\n- os.rename(src, dst)\n- except OSError as e:\n- if e.errno != errno.EEXIST:\n- raise\n- old = f\"{dst}-{random.randint(0, sys.maxsize):08x}\"\n- os.rename(dst, old)\n- os.rename(src, dst)\n- try:\n- os.unlink(old)\n- except Exception:\n- pass\n-\n-\n-else:\n- rename = os.rename\n- can_rename_open_file = True\n", "issue": "deprecate posixemulation\nIt's called out as \"not a public interface\" in the docstring, and looks like it was only there to support `contrib.sessions`, which has moved to `secure-cookie` now. Move it there if it's still needed, remove it here.\n", "before_files": [{"content": "\"\"\"A ``rename`` function that follows POSIX semantics. If the target\nfile already exists it will be replaced without asking.\n\nThis is not a public interface.\n\"\"\"\nimport errno\nimport os\nimport random\nimport sys\nimport time\n\nfrom ._internal import _to_str\nfrom .filesystem import get_filesystem_encoding\n\ncan_rename_open_file = False\n\nif os.name == \"nt\":\n try:\n import ctypes\n\n _MOVEFILE_REPLACE_EXISTING = 0x1\n _MOVEFILE_WRITE_THROUGH = 0x8\n _MoveFileEx = ctypes.windll.kernel32.MoveFileExW # type: ignore\n\n def _rename(src, dst):\n src = _to_str(src, get_filesystem_encoding())\n dst = _to_str(dst, get_filesystem_encoding())\n if _rename_atomic(src, dst):\n return True\n retry = 0\n rv = False\n while not rv and retry < 100:\n rv = _MoveFileEx(\n src, dst, _MOVEFILE_REPLACE_EXISTING | _MOVEFILE_WRITE_THROUGH\n )\n if not rv:\n time.sleep(0.001)\n retry += 1\n return rv\n\n # new in Vista and Windows Server 2008\n _CreateTransaction = ctypes.windll.ktmw32.CreateTransaction # type: ignore\n _CommitTransaction = ctypes.windll.ktmw32.CommitTransaction # type: ignore\n _MoveFileTransacted = ctypes.windll.kernel32.MoveFileTransactedW # type: ignore\n _CloseHandle = ctypes.windll.kernel32.CloseHandle # type: ignore\n can_rename_open_file = True\n\n def _rename_atomic(src, dst):\n ta = _CreateTransaction(None, 0, 0, 0, 0, 1000, \"Werkzeug rename\")\n if ta == -1:\n return False\n try:\n retry = 0\n rv = False\n while not rv and retry < 100:\n rv = _MoveFileTransacted(\n src,\n dst,\n None,\n None,\n _MOVEFILE_REPLACE_EXISTING | _MOVEFILE_WRITE_THROUGH,\n ta,\n )\n if rv:\n rv = _CommitTransaction(ta)\n break\n else:\n time.sleep(0.001)\n retry += 1\n return rv\n finally:\n _CloseHandle(ta)\n\n except Exception:\n\n def _rename(src, dst):\n return False\n\n def _rename_atomic(src, dst):\n return False\n\n def rename(src, dst):\n # Try atomic or pseudo-atomic rename\n if _rename(src, dst):\n return\n # Fall back to \"move away and replace\"\n try:\n os.rename(src, dst)\n except OSError as e:\n if e.errno != errno.EEXIST:\n raise\n old = f\"{dst}-{random.randint(0, sys.maxsize):08x}\"\n os.rename(dst, old)\n os.rename(src, dst)\n try:\n os.unlink(old)\n except Exception:\n pass\n\n\nelse:\n rename = os.rename\n can_rename_open_file = True\n", "path": "src/werkzeug/posixemulation.py"}]} | 1,516 | 853 |
gh_patches_debug_24098 | rasdani/github-patches | git_diff | encode__uvicorn-666 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
color codes in windows console not escaped
Fixes https://github.com/tiangolo/fastapi/issues/815 that should have been reported upstream
There are many ways to handle the case obviously, I choose to use click.clear() since we use already click.style and because it already performs the os check and issues the right command for that.
Use optional package installs.
Instead of the platform detection I’d like uvicorn to use optional installs.
* `pip install uvicorn` - Just the package itself.
* `pip install uvicorn[standard]` - uvloop/httptools/websockets
* `pip install uvicorn[pure]` - asyncio/h11/wsproto
* `pip install uvicorn[full]` - Everything
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 import os
5 import re
6
7 from setuptools import setup
8
9
10 def get_version(package):
11 """
12 Return package version as listed in `__version__` in `init.py`.
13 """
14 path = os.path.join(package, "__init__.py")
15 init_py = open(path, "r", encoding="utf8").read()
16 return re.search("__version__ = ['\"]([^'\"]+)['\"]", init_py).group(1)
17
18
19 def get_long_description():
20 """
21 Return the README.
22 """
23 return open("README.md", "r", encoding="utf8").read()
24
25
26 def get_packages(package):
27 """
28 Return root package and all sub-packages.
29 """
30 return [
31 dirpath
32 for dirpath, dirnames, filenames in os.walk(package)
33 if os.path.exists(os.path.join(dirpath, "__init__.py"))
34 ]
35
36
37 env_marker = (
38 "sys_platform != 'win32'"
39 " and sys_platform != 'cygwin'"
40 " and platform_python_implementation != 'PyPy'"
41 )
42
43 requirements = [
44 "click==7.*",
45 "h11>=0.8,<0.10",
46 "websockets==8.*",
47 "httptools==0.1.* ;" + env_marker,
48 "uvloop>=0.14.0 ;" + env_marker,
49 ]
50
51 extras_require = {"watchgodreload": ["watchgod>=0.6,<0.7"]}
52
53
54 setup(
55 name="uvicorn",
56 version=get_version("uvicorn"),
57 url="https://github.com/encode/uvicorn",
58 license="BSD",
59 description="The lightning-fast ASGI server.",
60 long_description=get_long_description(),
61 long_description_content_type="text/markdown",
62 author="Tom Christie",
63 author_email="[email protected]",
64 packages=get_packages("uvicorn"),
65 install_requires=requirements,
66 extras_require=extras_require,
67 include_package_data=True,
68 classifiers=[
69 "Development Status :: 4 - Beta",
70 "Environment :: Web Environment",
71 "Intended Audience :: Developers",
72 "License :: OSI Approved :: BSD License",
73 "Operating System :: OS Independent",
74 "Topic :: Internet :: WWW/HTTP",
75 "Programming Language :: Python :: 3",
76 "Programming Language :: Python :: 3.6",
77 "Programming Language :: Python :: 3.7",
78 "Programming Language :: Python :: 3.8",
79 "Programming Language :: Python :: Implementation :: CPython",
80 "Programming Language :: Python :: Implementation :: PyPy",
81 ],
82 entry_points="""
83 [console_scripts]
84 uvicorn=uvicorn.main:main
85 """,
86 )
87
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -34,21 +34,28 @@
]
-env_marker = (
+env_marker_cpython = (
"sys_platform != 'win32'"
" and sys_platform != 'cygwin'"
" and platform_python_implementation != 'PyPy'"
)
-requirements = [
+env_marker_win = "sys_platform == 'win32'"
+
+
+minimal_requirements = [
"click==7.*",
"h11>=0.8,<0.10",
- "websockets==8.*",
- "httptools==0.1.* ;" + env_marker,
- "uvloop>=0.14.0 ;" + env_marker,
]
-extras_require = {"watchgodreload": ["watchgod>=0.6,<0.7"]}
+extra_requirements = [
+ "websockets==8.*",
+ "httptools==0.1.* ;" + env_marker_cpython,
+ "uvloop>=0.14.0 ;" + env_marker_cpython,
+ "colorama>=0.4.*;" + env_marker_win,
+ "watchgod>=0.6,<0.7",
+ "python-dotenv==0.13.*",
+]
setup(
@@ -62,8 +69,8 @@
author="Tom Christie",
author_email="[email protected]",
packages=get_packages("uvicorn"),
- install_requires=requirements,
- extras_require=extras_require,
+ install_requires=minimal_requirements,
+ extras_require={"standard": extra_requirements},
include_package_data=True,
classifiers=[
"Development Status :: 4 - Beta",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -34,21 +34,28 @@\n ]\n \n \n-env_marker = (\n+env_marker_cpython = (\n \"sys_platform != 'win32'\"\n \" and sys_platform != 'cygwin'\"\n \" and platform_python_implementation != 'PyPy'\"\n )\n \n-requirements = [\n+env_marker_win = \"sys_platform == 'win32'\"\n+\n+\n+minimal_requirements = [\n \"click==7.*\",\n \"h11>=0.8,<0.10\",\n- \"websockets==8.*\",\n- \"httptools==0.1.* ;\" + env_marker,\n- \"uvloop>=0.14.0 ;\" + env_marker,\n ]\n \n-extras_require = {\"watchgodreload\": [\"watchgod>=0.6,<0.7\"]}\n+extra_requirements = [\n+ \"websockets==8.*\",\n+ \"httptools==0.1.* ;\" + env_marker_cpython,\n+ \"uvloop>=0.14.0 ;\" + env_marker_cpython,\n+ \"colorama>=0.4.*;\" + env_marker_win,\n+ \"watchgod>=0.6,<0.7\",\n+ \"python-dotenv==0.13.*\",\n+]\n \n \n setup(\n@@ -62,8 +69,8 @@\n author=\"Tom Christie\",\n author_email=\"[email protected]\",\n packages=get_packages(\"uvicorn\"),\n- install_requires=requirements,\n- extras_require=extras_require,\n+ install_requires=minimal_requirements,\n+ extras_require={\"standard\": extra_requirements},\n include_package_data=True,\n classifiers=[\n \"Development Status :: 4 - Beta\",\n", "issue": "color codes in windows console not escaped\nFixes https://github.com/tiangolo/fastapi/issues/815 that should have been reported upstream\r\n\r\nThere are many ways to handle the case obviously, I choose to use click.clear() since we use already click.style and because it already performs the os check and issues the right command for that.\r\n\r\n\nUse optional package installs.\nInstead of the platform detection I\u2019d like uvicorn to use optional installs.\r\n\r\n* `pip install uvicorn` - Just the package itself.\r\n* `pip install uvicorn[standard]` - uvloop/httptools/websockets\r\n* `pip install uvicorn[pure]` - asyncio/h11/wsproto\r\n* `pip install uvicorn[full]` - Everything\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nimport os\nimport re\n\nfrom setuptools import setup\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n path = os.path.join(package, \"__init__.py\")\n init_py = open(path, \"r\", encoding=\"utf8\").read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_long_description():\n \"\"\"\n Return the README.\n \"\"\"\n return open(\"README.md\", \"r\", encoding=\"utf8\").read()\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [\n dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, \"__init__.py\"))\n ]\n\n\nenv_marker = (\n \"sys_platform != 'win32'\"\n \" and sys_platform != 'cygwin'\"\n \" and platform_python_implementation != 'PyPy'\"\n)\n\nrequirements = [\n \"click==7.*\",\n \"h11>=0.8,<0.10\",\n \"websockets==8.*\",\n \"httptools==0.1.* ;\" + env_marker,\n \"uvloop>=0.14.0 ;\" + env_marker,\n]\n\nextras_require = {\"watchgodreload\": [\"watchgod>=0.6,<0.7\"]}\n\n\nsetup(\n name=\"uvicorn\",\n version=get_version(\"uvicorn\"),\n url=\"https://github.com/encode/uvicorn\",\n license=\"BSD\",\n description=\"The lightning-fast ASGI server.\",\n long_description=get_long_description(),\n long_description_content_type=\"text/markdown\",\n author=\"Tom Christie\",\n author_email=\"[email protected]\",\n packages=get_packages(\"uvicorn\"),\n install_requires=requirements,\n extras_require=extras_require,\n include_package_data=True,\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n ],\n entry_points=\"\"\"\n [console_scripts]\n uvicorn=uvicorn.main:main\n \"\"\",\n)\n", "path": "setup.py"}]} | 1,440 | 388 |
gh_patches_debug_51634 | rasdani/github-patches | git_diff | akvo__akvo-rsr-2576 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
non-EUTF search results appearing (estimate: 8)
Created via Reamaze:
Link: https://akvoo.reamaze.com/admin/conversations/rsr-release-3-dot-22-chisinau-is-out
Assignee: Unassigned
Message:
Hi RSR Team,
Just saw this email, nice that the new release is already out! However, I tried to use the search function, and it shows organizations that are not related to the Akvo Page, in this case the EUTF Page. Randomly searching for “Tom(bouctou)” gives the following search options. Clicking on the first organization “Catholic Diocese of Tombu", it leads you nowhere..
Please see image below.
Thanks!
Christien
Christien Bosman
Project Officer
Akvo • 's-Gravenhekje 1A • 1011 TG • Amsterdam (NL)
T +31 20 8200 175 • M +31 6 1191 5449 • S christien.bosman • I www.akvo.org <http://www.akvo.org/>
</issue>
<code>
[start of akvo/rest/views/typeahead.py]
1 # -*- coding: utf-8 -*-
2
3 """Akvo RSR is covered by the GNU Affero General Public License.
4 See more details in the license.txt file located at the root folder of the
5 Akvo RSR module. For additional details on the GNU license please
6 see < http://www.gnu.org/licenses/agpl.html >.
7 """
8
9 from akvo.rest.serializers import (TypeaheadCountrySerializer,
10 TypeaheadOrganisationSerializer,
11 TypeaheadProjectSerializer,
12 TypeaheadProjectUpdateSerializer)
13
14 from akvo.codelists.models import Country, Version
15 from akvo.rsr.models import Organisation, Project, ProjectUpdate
16 from akvo.rsr.views.project import _project_directory_coll
17
18 from django.conf import settings
19
20 from rest_framework.decorators import api_view
21 from rest_framework.response import Response
22
23
24 def rejig(queryset, serializer):
25 """Rearrange & add queryset count to the response data."""
26 return {
27 'count': queryset.count(),
28 'results': serializer.data
29 }
30
31
32 @api_view(['GET'])
33 def typeahead_country(request):
34 iati_version = Version.objects.get(code=settings.IATI_VERSION)
35 countries = Country.objects.filter(version=iati_version)
36 return Response(
37 rejig(countries, TypeaheadCountrySerializer(countries, many=True))
38 )
39
40
41 @api_view(['GET'])
42 def typeahead_organisation(request):
43 organisations = Organisation.objects.all()
44 return Response(
45 rejig(organisations, TypeaheadOrganisationSerializer(organisations,
46 many=True))
47 )
48
49
50 @api_view(['GET'])
51 def typeahead_user_organisations(request):
52 user = request.user
53 is_admin = user.is_active and (user.is_superuser or user.is_admin)
54 organisations = user.approved_organisations() if not is_admin else Organisation.objects.all()
55 return Response(
56 rejig(organisations, TypeaheadOrganisationSerializer(organisations,
57 many=True))
58 )
59
60
61 @api_view(['GET'])
62 def typeahead_project(request):
63 """Return the typeaheads for projects.
64
65 Without any query parameters, it returns the info for all the projects in
66 the current context -- changes depending on whether we are on a partner
67 site, or the RSR site.
68
69 If a published query parameter is passed, only projects that have been
70 published are returned.
71
72 NOTE: The unauthenticated user gets information about all the projects when
73 using this API endpoint. More permission checking will need to be added,
74 if the amount of data being returned is changed.
75
76 """
77 if request.GET.get('published', '0') == '0':
78 # Project editor - organization projects, all
79 page = request.rsr_page
80 projects = page.organisation.all_projects() if page else Project.objects.all()
81 else:
82 # Search bar - organization projects, published
83 projects = _project_directory_coll(request)
84
85 projects = projects.exclude(title='')
86 return Response(
87 rejig(projects, TypeaheadProjectSerializer(projects, many=True))
88 )
89
90
91 @api_view(['GET'])
92 def typeahead_user_projects(request):
93 user = request.user
94 is_admin = user.is_active and (user.is_superuser or user.is_admin)
95 if is_admin:
96 projects = Project.objects.all()
97 else:
98 projects = user.approved_organisations().all_projects()
99 projects = projects.exclude(title='')
100 return Response(
101 rejig(projects, TypeaheadProjectSerializer(projects, many=True))
102 )
103
104
105 @api_view(['GET'])
106 def typeahead_impact_projects(request):
107 user = request.user
108 projects = Project.objects.all() if user.is_admin or user.is_superuser else user.my_projects()
109 projects = projects.published().filter(is_impact_project=True).order_by('title')
110
111 return Response(
112 rejig(projects, TypeaheadProjectSerializer(projects, many=True))
113 )
114
115
116 @api_view(['GET'])
117 def typeahead_projectupdate(request):
118 updates = ProjectUpdate.objects.all()
119 return Response(
120 rejig(updates, TypeaheadProjectUpdateSerializer(updates, many=True))
121 )
122
[end of akvo/rest/views/typeahead.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/akvo/rest/views/typeahead.py b/akvo/rest/views/typeahead.py
--- a/akvo/rest/views/typeahead.py
+++ b/akvo/rest/views/typeahead.py
@@ -40,7 +40,8 @@
@api_view(['GET'])
def typeahead_organisation(request):
- organisations = Organisation.objects.all()
+ page = request.rsr_page
+ organisations = page.organisation.partners().distinct() if page else Organisation.objects.all()
return Response(
rejig(organisations, TypeaheadOrganisationSerializer(organisations,
many=True))
| {"golden_diff": "diff --git a/akvo/rest/views/typeahead.py b/akvo/rest/views/typeahead.py\n--- a/akvo/rest/views/typeahead.py\n+++ b/akvo/rest/views/typeahead.py\n@@ -40,7 +40,8 @@\n \n @api_view(['GET'])\n def typeahead_organisation(request):\n- organisations = Organisation.objects.all()\n+ page = request.rsr_page\n+ organisations = page.organisation.partners().distinct() if page else Organisation.objects.all()\n return Response(\n rejig(organisations, TypeaheadOrganisationSerializer(organisations,\n many=True))\n", "issue": "non-EUTF search results appearing (estimate: 8)\nCreated via Reamaze:\r\n\r\nLink: https://akvoo.reamaze.com/admin/conversations/rsr-release-3-dot-22-chisinau-is-out\r\nAssignee: Unassigned\r\n\r\nMessage:\r\nHi RSR Team,\r\n\r\nJust saw this email, nice that the new release is already out! However, I tried to use the search function, and it shows organizations that are not related to the Akvo Page, in this case the EUTF Page. Randomly searching for \u201cTom(bouctou)\u201d gives the following search options. Clicking on the first organization \u201cCatholic Diocese of Tombu\", it leads you nowhere..\r\n\r\nPlease see image below.\r\n\r\nThanks!\r\nChristien\r\n\r\nChristien Bosman\r\nProject Officer\r\n\r\nAkvo \u2022 's-Gravenhekje 1A \u2022 1011 TG \u2022 Amsterdam (NL)\r\nT +31 20 8200 175 \u2022 M +31 6 1191 5449 \u2022 S christien.bosman \u2022 I www.akvo.org <http://www.akvo.org/>\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\nSee more details in the license.txt file located at the root folder of the\nAkvo RSR module. For additional details on the GNU license please\nsee < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom akvo.rest.serializers import (TypeaheadCountrySerializer,\n TypeaheadOrganisationSerializer,\n TypeaheadProjectSerializer,\n TypeaheadProjectUpdateSerializer)\n\nfrom akvo.codelists.models import Country, Version\nfrom akvo.rsr.models import Organisation, Project, ProjectUpdate\nfrom akvo.rsr.views.project import _project_directory_coll\n\nfrom django.conf import settings\n\nfrom rest_framework.decorators import api_view\nfrom rest_framework.response import Response\n\n\ndef rejig(queryset, serializer):\n \"\"\"Rearrange & add queryset count to the response data.\"\"\"\n return {\n 'count': queryset.count(),\n 'results': serializer.data\n }\n\n\n@api_view(['GET'])\ndef typeahead_country(request):\n iati_version = Version.objects.get(code=settings.IATI_VERSION)\n countries = Country.objects.filter(version=iati_version)\n return Response(\n rejig(countries, TypeaheadCountrySerializer(countries, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_organisation(request):\n organisations = Organisation.objects.all()\n return Response(\n rejig(organisations, TypeaheadOrganisationSerializer(organisations,\n many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_user_organisations(request):\n user = request.user\n is_admin = user.is_active and (user.is_superuser or user.is_admin)\n organisations = user.approved_organisations() if not is_admin else Organisation.objects.all()\n return Response(\n rejig(organisations, TypeaheadOrganisationSerializer(organisations,\n many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_project(request):\n \"\"\"Return the typeaheads for projects.\n\n Without any query parameters, it returns the info for all the projects in\n the current context -- changes depending on whether we are on a partner\n site, or the RSR site.\n\n If a published query parameter is passed, only projects that have been\n published are returned.\n\n NOTE: The unauthenticated user gets information about all the projects when\n using this API endpoint. More permission checking will need to be added,\n if the amount of data being returned is changed.\n\n \"\"\"\n if request.GET.get('published', '0') == '0':\n # Project editor - organization projects, all\n page = request.rsr_page\n projects = page.organisation.all_projects() if page else Project.objects.all()\n else:\n # Search bar - organization projects, published\n projects = _project_directory_coll(request)\n\n projects = projects.exclude(title='')\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_user_projects(request):\n user = request.user\n is_admin = user.is_active and (user.is_superuser or user.is_admin)\n if is_admin:\n projects = Project.objects.all()\n else:\n projects = user.approved_organisations().all_projects()\n projects = projects.exclude(title='')\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_impact_projects(request):\n user = request.user\n projects = Project.objects.all() if user.is_admin or user.is_superuser else user.my_projects()\n projects = projects.published().filter(is_impact_project=True).order_by('title')\n\n return Response(\n rejig(projects, TypeaheadProjectSerializer(projects, many=True))\n )\n\n\n@api_view(['GET'])\ndef typeahead_projectupdate(request):\n updates = ProjectUpdate.objects.all()\n return Response(\n rejig(updates, TypeaheadProjectUpdateSerializer(updates, many=True))\n )\n", "path": "akvo/rest/views/typeahead.py"}]} | 1,909 | 131 |
gh_patches_debug_31755 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-299 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Whidbey Coffee
http://www.whidbeycoffee.com/pages/locations
</issue>
<code>
[start of locations/spiders/whidbeycoffee.py]
1 import scrapy
2 import re
3 from locations.items import GeojsonPointItem
4
5 DAY_MAPPING = {
6 "Mon": "Mo",
7 "Tue": "Tu",
8 "Wed": "We",
9 "Thu": "Th",
10 "Fri": "Fr",
11 "Sat": "Sa",
12 "Sun": "Su"
13 }
14
15
16 class WhidbeycoffeeSpider(scrapy.Spider):
17
18 name = "whidbeycoffee"
19 allowed_domains = ["www.whidbeycoffee.com"]
20 download_delay = 1
21 start_urls = (
22 'http://www.whidbeycoffee.com/pages/locations',
23 )
24
25 def parse_day(self, day):
26 if re.search('-', day):
27 days = day.split('-')
28 osm_days = []
29 if len(days) == 2:
30 for day in days:
31 try:
32 osm_day = DAY_MAPPING[day.strip()]
33 osm_days.append(osm_day)
34 except:
35 return None
36 return ["-".join(osm_days)]
37 if re.search('Sat', day) or re.search('Sun', day):
38 if re.search('Sat', day) and re.search('Sun', day):
39 return ['Sa' ,'Su']
40 else:
41 return [DAY_MAPPING[day.strip()]]
42
43
44
45 def parse_times(self, times):
46 if times.strip() == 'Closed':
47 return 'off'
48 hours_to = [x.strip() for x in times.split('-')]
49 cleaned_times = []
50
51 for hour in hours_to:
52 if re.search('pm$', hour):
53 hour = re.sub('pm', '', hour).strip()
54 hour_min = hour.split(":")
55 if int(hour_min[0]) < 12:
56 hour_min[0] = str(12 + int(hour_min[0]))
57 cleaned_times.append(":".join(hour_min))
58
59 if re.search('am$', hour):
60 hour = re.sub('am', '', hour).strip()
61 hour_min = hour.split(":")
62 if len(hour_min[0]) <2:
63 hour_min[0] = hour_min[0].zfill(2)
64 else:
65 hour_min[0] = str(12 + int(hour_min[0]))
66
67 cleaned_times.append(":".join(hour_min))
68 return "-".join(cleaned_times)
69
70 def parse_hours(self, lis):
71 hours = []
72 for li in lis:
73 li= li.lstrip()
74 if re.search('&' ,li):
75 day = li.split(':')[0]
76 else:
77 day = re.findall(r"^[^( |:)]+" ,li)[0]
78 times = li.replace(day , "")[1:]
79 print(times)
80 if times and day:
81 parsed_time = self.parse_times(times)
82 parsed_day = self.parse_day(day)
83 if parsed_day!=None:
84 if (len(parsed_day)==2):
85 hours.append(parsed_day[0] + ' ' + parsed_time)
86 hours.append(parsed_day[1] + ' ' + parsed_time)
87 else:
88 hours.append(parsed_day[0] + ' ' + parsed_time)
89
90 return "; ".join(hours)
91
92 def parse(self, response):
93 stores = response.xpath('//h5')
94 for index , store in enumerate(stores):
95 direction_link = store.xpath('normalize-space(./following-sibling::p/a/@href)').extract_first()
96 properties = {
97 'addr_full': store.xpath('./following-sibling::p/a/text()').extract()[0],
98 'phone': store.xpath('./following-sibling::p/following-sibling::p/text()').extract()[0],
99 'city': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[0],
100 'state': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[1].split(' ')[1],
101 'postcode': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[1].split(' ')[2],
102 'ref':response.url,
103 'lat':re.findall(r"\/@[^(\/)]+", direction_link)[0].split(',')[0][2:],
104 'lon': re.findall(r"\/@[^(\/)]+", direction_link)[0].split(',')[1],
105 }
106 if(index==0):
107 hours = self.parse_hours(store.xpath('./following-sibling::p[3]/text()').extract())
108 else:
109 hours = self.parse_hours(store.xpath('./following-sibling::p[2]/text()').extract()[2:])
110
111 if hours:
112 properties['opening_hours'] = hours
113
114 yield GeojsonPointItem(**properties)
115
[end of locations/spiders/whidbeycoffee.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/whidbeycoffee.py b/locations/spiders/whidbeycoffee.py
--- a/locations/spiders/whidbeycoffee.py
+++ b/locations/spiders/whidbeycoffee.py
@@ -76,7 +76,6 @@
else:
day = re.findall(r"^[^( |:)]+" ,li)[0]
times = li.replace(day , "")[1:]
- print(times)
if times and day:
parsed_time = self.parse_times(times)
parsed_day = self.parse_day(day)
@@ -90,6 +89,7 @@
return "; ".join(hours)
def parse(self, response):
+
stores = response.xpath('//h5')
for index , store in enumerate(stores):
direction_link = store.xpath('normalize-space(./following-sibling::p/a/@href)').extract_first()
@@ -99,7 +99,7 @@
'city': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[0],
'state': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[1].split(' ')[1],
'postcode': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[1].split(' ')[2],
- 'ref':response.url,
+ 'ref':store.xpath('normalize-space(./text())').extract_first(),
'lat':re.findall(r"\/@[^(\/)]+", direction_link)[0].split(',')[0][2:],
'lon': re.findall(r"\/@[^(\/)]+", direction_link)[0].split(',')[1],
}
| {"golden_diff": "diff --git a/locations/spiders/whidbeycoffee.py b/locations/spiders/whidbeycoffee.py\n--- a/locations/spiders/whidbeycoffee.py\n+++ b/locations/spiders/whidbeycoffee.py\n@@ -76,7 +76,6 @@\n else:\n day = re.findall(r\"^[^( |:)]+\" ,li)[0]\n times = li.replace(day , \"\")[1:]\n- print(times)\n if times and day:\n parsed_time = self.parse_times(times)\n parsed_day = self.parse_day(day)\n@@ -90,6 +89,7 @@\n return \"; \".join(hours)\n \n def parse(self, response):\n+\n stores = response.xpath('//h5')\n for index , store in enumerate(stores):\n direction_link = store.xpath('normalize-space(./following-sibling::p/a/@href)').extract_first()\n@@ -99,7 +99,7 @@\n 'city': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[0],\n 'state': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[1].split(' ')[1],\n 'postcode': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[1].split(' ')[2],\n- 'ref':response.url,\n+ 'ref':store.xpath('normalize-space(./text())').extract_first(),\n 'lat':re.findall(r\"\\/@[^(\\/)]+\", direction_link)[0].split(',')[0][2:],\n 'lon': re.findall(r\"\\/@[^(\\/)]+\", direction_link)[0].split(',')[1],\n }\n", "issue": "Whidbey Coffee\nhttp://www.whidbeycoffee.com/pages/locations\n", "before_files": [{"content": "import scrapy\nimport re\nfrom locations.items import GeojsonPointItem\n\nDAY_MAPPING = {\n \"Mon\": \"Mo\",\n \"Tue\": \"Tu\",\n \"Wed\": \"We\",\n \"Thu\": \"Th\",\n \"Fri\": \"Fr\",\n \"Sat\": \"Sa\",\n \"Sun\": \"Su\"\n}\n\n\nclass WhidbeycoffeeSpider(scrapy.Spider):\n\n name = \"whidbeycoffee\"\n allowed_domains = [\"www.whidbeycoffee.com\"]\n download_delay = 1\n start_urls = (\n 'http://www.whidbeycoffee.com/pages/locations',\n )\n\n def parse_day(self, day):\n if re.search('-', day):\n days = day.split('-')\n osm_days = []\n if len(days) == 2:\n for day in days:\n try:\n osm_day = DAY_MAPPING[day.strip()]\n osm_days.append(osm_day)\n except:\n return None\n return [\"-\".join(osm_days)]\n if re.search('Sat', day) or re.search('Sun', day):\n if re.search('Sat', day) and re.search('Sun', day):\n return ['Sa' ,'Su']\n else:\n return [DAY_MAPPING[day.strip()]]\n\n\n\n def parse_times(self, times):\n if times.strip() == 'Closed':\n return 'off'\n hours_to = [x.strip() for x in times.split('-')]\n cleaned_times = []\n\n for hour in hours_to:\n if re.search('pm$', hour):\n hour = re.sub('pm', '', hour).strip()\n hour_min = hour.split(\":\")\n if int(hour_min[0]) < 12:\n hour_min[0] = str(12 + int(hour_min[0]))\n cleaned_times.append(\":\".join(hour_min))\n\n if re.search('am$', hour):\n hour = re.sub('am', '', hour).strip()\n hour_min = hour.split(\":\")\n if len(hour_min[0]) <2:\n hour_min[0] = hour_min[0].zfill(2)\n else:\n hour_min[0] = str(12 + int(hour_min[0]))\n\n cleaned_times.append(\":\".join(hour_min))\n return \"-\".join(cleaned_times)\n\n def parse_hours(self, lis):\n hours = []\n for li in lis:\n li= li.lstrip()\n if re.search('&' ,li):\n day = li.split(':')[0]\n else:\n day = re.findall(r\"^[^( |:)]+\" ,li)[0]\n times = li.replace(day , \"\")[1:]\n print(times)\n if times and day:\n parsed_time = self.parse_times(times)\n parsed_day = self.parse_day(day)\n if parsed_day!=None:\n if (len(parsed_day)==2):\n hours.append(parsed_day[0] + ' ' + parsed_time)\n hours.append(parsed_day[1] + ' ' + parsed_time)\n else:\n hours.append(parsed_day[0] + ' ' + parsed_time)\n\n return \"; \".join(hours)\n\n def parse(self, response):\n stores = response.xpath('//h5')\n for index , store in enumerate(stores):\n direction_link = store.xpath('normalize-space(./following-sibling::p/a/@href)').extract_first()\n properties = {\n 'addr_full': store.xpath('./following-sibling::p/a/text()').extract()[0],\n 'phone': store.xpath('./following-sibling::p/following-sibling::p/text()').extract()[0],\n 'city': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[0],\n 'state': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[1].split(' ')[1],\n 'postcode': store.xpath('./following-sibling::p/a/text()').extract()[1].split(',')[1].split(' ')[2],\n 'ref':response.url,\n 'lat':re.findall(r\"\\/@[^(\\/)]+\", direction_link)[0].split(',')[0][2:],\n 'lon': re.findall(r\"\\/@[^(\\/)]+\", direction_link)[0].split(',')[1],\n }\n if(index==0):\n hours = self.parse_hours(store.xpath('./following-sibling::p[3]/text()').extract())\n else:\n hours = self.parse_hours(store.xpath('./following-sibling::p[2]/text()').extract()[2:])\n\n if hours:\n properties['opening_hours'] = hours\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/whidbeycoffee.py"}]} | 1,799 | 373 |
gh_patches_debug_5651 | rasdani/github-patches | git_diff | projectmesa__mesa-2049 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
JupyterViz: the default grid space drawer doesn't scale to large size
**Describe the bug**
<!-- A clear and concise description the bug -->
Here is Schelling space for 60x60:

**Expected behavior**
<!-- A clear and concise description of what you expected to happen -->
Should either scale down the circle marker size automatically, or scale up the figure size automatically.
</issue>
<code>
[start of mesa/experimental/components/matplotlib.py]
1 from typing import Optional
2
3 import networkx as nx
4 import solara
5 from matplotlib.figure import Figure
6 from matplotlib.ticker import MaxNLocator
7
8 import mesa
9
10
11 @solara.component
12 def SpaceMatplotlib(model, agent_portrayal, dependencies: Optional[list[any]] = None):
13 space_fig = Figure()
14 space_ax = space_fig.subplots()
15 space = getattr(model, "grid", None)
16 if space is None:
17 # Sometimes the space is defined as model.space instead of model.grid
18 space = model.space
19 if isinstance(space, mesa.space.NetworkGrid):
20 _draw_network_grid(space, space_ax, agent_portrayal)
21 elif isinstance(space, mesa.space.ContinuousSpace):
22 _draw_continuous_space(space, space_ax, agent_portrayal)
23 else:
24 _draw_grid(space, space_ax, agent_portrayal)
25 solara.FigureMatplotlib(space_fig, format="png", dependencies=dependencies)
26
27
28 def _draw_grid(space, space_ax, agent_portrayal):
29 def portray(g):
30 x = []
31 y = []
32 s = [] # size
33 c = [] # color
34 for i in range(g.width):
35 for j in range(g.height):
36 content = g._grid[i][j]
37 if not content:
38 continue
39 if not hasattr(content, "__iter__"):
40 # Is a single grid
41 content = [content]
42 for agent in content:
43 data = agent_portrayal(agent)
44 x.append(i)
45 y.append(j)
46 if "size" in data:
47 s.append(data["size"])
48 if "color" in data:
49 c.append(data["color"])
50 out = {"x": x, "y": y}
51 if len(s) > 0:
52 out["s"] = s
53 if len(c) > 0:
54 out["c"] = c
55 return out
56
57 space_ax.set_xlim(-1, space.width)
58 space_ax.set_ylim(-1, space.height)
59 space_ax.scatter(**portray(space))
60
61
62 def _draw_network_grid(space, space_ax, agent_portrayal):
63 graph = space.G
64 pos = nx.spring_layout(graph, seed=0)
65 nx.draw(
66 graph,
67 ax=space_ax,
68 pos=pos,
69 **agent_portrayal(graph),
70 )
71
72
73 def _draw_continuous_space(space, space_ax, agent_portrayal):
74 def portray(space):
75 x = []
76 y = []
77 s = [] # size
78 c = [] # color
79 for agent in space._agent_to_index:
80 data = agent_portrayal(agent)
81 _x, _y = agent.pos
82 x.append(_x)
83 y.append(_y)
84 if "size" in data:
85 s.append(data["size"])
86 if "color" in data:
87 c.append(data["color"])
88 out = {"x": x, "y": y}
89 if len(s) > 0:
90 out["s"] = s
91 if len(c) > 0:
92 out["c"] = c
93 return out
94
95 # Determine border style based on space.torus
96 border_style = "solid" if not space.torus else (0, (5, 10))
97
98 # Set the border of the plot
99 for spine in space_ax.spines.values():
100 spine.set_linewidth(1.5)
101 spine.set_color("black")
102 spine.set_linestyle(border_style)
103
104 width = space.x_max - space.x_min
105 x_padding = width / 20
106 height = space.y_max - space.y_min
107 y_padding = height / 20
108 space_ax.set_xlim(space.x_min - x_padding, space.x_max + x_padding)
109 space_ax.set_ylim(space.y_min - y_padding, space.y_max + y_padding)
110
111 # Portray and scatter the agents in the space
112 space_ax.scatter(**portray(space))
113
114
115 @solara.component
116 def PlotMatplotlib(model, measure, dependencies: Optional[list[any]] = None):
117 fig = Figure()
118 ax = fig.subplots()
119 df = model.datacollector.get_model_vars_dataframe()
120 if isinstance(measure, str):
121 ax.plot(df.loc[:, measure])
122 ax.set_ylabel(measure)
123 elif isinstance(measure, dict):
124 for m, color in measure.items():
125 ax.plot(df.loc[:, m], label=m, color=color)
126 fig.legend()
127 elif isinstance(measure, (list, tuple)):
128 for m in measure:
129 ax.plot(df.loc[:, m], label=m)
130 fig.legend()
131 # Set integer x axis
132 ax.xaxis.set_major_locator(MaxNLocator(integer=True))
133 solara.FigureMatplotlib(fig, dependencies=dependencies)
134
[end of mesa/experimental/components/matplotlib.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mesa/experimental/components/matplotlib.py b/mesa/experimental/components/matplotlib.py
--- a/mesa/experimental/components/matplotlib.py
+++ b/mesa/experimental/components/matplotlib.py
@@ -48,6 +48,9 @@
if "color" in data:
c.append(data["color"])
out = {"x": x, "y": y}
+ # This is the default value for the marker size, which auto-scales
+ # according to the grid area.
+ out["s"] = (180 / min(g.width, g.height)) ** 2
if len(s) > 0:
out["s"] = s
if len(c) > 0:
| {"golden_diff": "diff --git a/mesa/experimental/components/matplotlib.py b/mesa/experimental/components/matplotlib.py\n--- a/mesa/experimental/components/matplotlib.py\n+++ b/mesa/experimental/components/matplotlib.py\n@@ -48,6 +48,9 @@\n if \"color\" in data:\n c.append(data[\"color\"])\n out = {\"x\": x, \"y\": y}\n+ # This is the default value for the marker size, which auto-scales\n+ # according to the grid area.\n+ out[\"s\"] = (180 / min(g.width, g.height)) ** 2\n if len(s) > 0:\n out[\"s\"] = s\n if len(c) > 0:\n", "issue": "JupyterViz: the default grid space drawer doesn't scale to large size\n**Describe the bug**\r\n<!-- A clear and concise description the bug -->\r\nHere is Schelling space for 60x60:\r\n\r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen -->\r\nShould either scale down the circle marker size automatically, or scale up the figure size automatically.\n", "before_files": [{"content": "from typing import Optional\n\nimport networkx as nx\nimport solara\nfrom matplotlib.figure import Figure\nfrom matplotlib.ticker import MaxNLocator\n\nimport mesa\n\n\[email protected]\ndef SpaceMatplotlib(model, agent_portrayal, dependencies: Optional[list[any]] = None):\n space_fig = Figure()\n space_ax = space_fig.subplots()\n space = getattr(model, \"grid\", None)\n if space is None:\n # Sometimes the space is defined as model.space instead of model.grid\n space = model.space\n if isinstance(space, mesa.space.NetworkGrid):\n _draw_network_grid(space, space_ax, agent_portrayal)\n elif isinstance(space, mesa.space.ContinuousSpace):\n _draw_continuous_space(space, space_ax, agent_portrayal)\n else:\n _draw_grid(space, space_ax, agent_portrayal)\n solara.FigureMatplotlib(space_fig, format=\"png\", dependencies=dependencies)\n\n\ndef _draw_grid(space, space_ax, agent_portrayal):\n def portray(g):\n x = []\n y = []\n s = [] # size\n c = [] # color\n for i in range(g.width):\n for j in range(g.height):\n content = g._grid[i][j]\n if not content:\n continue\n if not hasattr(content, \"__iter__\"):\n # Is a single grid\n content = [content]\n for agent in content:\n data = agent_portrayal(agent)\n x.append(i)\n y.append(j)\n if \"size\" in data:\n s.append(data[\"size\"])\n if \"color\" in data:\n c.append(data[\"color\"])\n out = {\"x\": x, \"y\": y}\n if len(s) > 0:\n out[\"s\"] = s\n if len(c) > 0:\n out[\"c\"] = c\n return out\n\n space_ax.set_xlim(-1, space.width)\n space_ax.set_ylim(-1, space.height)\n space_ax.scatter(**portray(space))\n\n\ndef _draw_network_grid(space, space_ax, agent_portrayal):\n graph = space.G\n pos = nx.spring_layout(graph, seed=0)\n nx.draw(\n graph,\n ax=space_ax,\n pos=pos,\n **agent_portrayal(graph),\n )\n\n\ndef _draw_continuous_space(space, space_ax, agent_portrayal):\n def portray(space):\n x = []\n y = []\n s = [] # size\n c = [] # color\n for agent in space._agent_to_index:\n data = agent_portrayal(agent)\n _x, _y = agent.pos\n x.append(_x)\n y.append(_y)\n if \"size\" in data:\n s.append(data[\"size\"])\n if \"color\" in data:\n c.append(data[\"color\"])\n out = {\"x\": x, \"y\": y}\n if len(s) > 0:\n out[\"s\"] = s\n if len(c) > 0:\n out[\"c\"] = c\n return out\n\n # Determine border style based on space.torus\n border_style = \"solid\" if not space.torus else (0, (5, 10))\n\n # Set the border of the plot\n for spine in space_ax.spines.values():\n spine.set_linewidth(1.5)\n spine.set_color(\"black\")\n spine.set_linestyle(border_style)\n\n width = space.x_max - space.x_min\n x_padding = width / 20\n height = space.y_max - space.y_min\n y_padding = height / 20\n space_ax.set_xlim(space.x_min - x_padding, space.x_max + x_padding)\n space_ax.set_ylim(space.y_min - y_padding, space.y_max + y_padding)\n\n # Portray and scatter the agents in the space\n space_ax.scatter(**portray(space))\n\n\[email protected]\ndef PlotMatplotlib(model, measure, dependencies: Optional[list[any]] = None):\n fig = Figure()\n ax = fig.subplots()\n df = model.datacollector.get_model_vars_dataframe()\n if isinstance(measure, str):\n ax.plot(df.loc[:, measure])\n ax.set_ylabel(measure)\n elif isinstance(measure, dict):\n for m, color in measure.items():\n ax.plot(df.loc[:, m], label=m, color=color)\n fig.legend()\n elif isinstance(measure, (list, tuple)):\n for m in measure:\n ax.plot(df.loc[:, m], label=m)\n fig.legend()\n # Set integer x axis\n ax.xaxis.set_major_locator(MaxNLocator(integer=True))\n solara.FigureMatplotlib(fig, dependencies=dependencies)\n", "path": "mesa/experimental/components/matplotlib.py"}]} | 2,017 | 161 |
gh_patches_debug_32679 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-1879 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider rei is broken
During the global build at 2021-05-26-14-42-23, spider **rei** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/rei.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/rei.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/rei.geojson))
</issue>
<code>
[start of locations/spiders/rei.py]
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import json
4 import re
5 from locations.items import GeojsonPointItem
6
7 DAY_MAPPING = {
8 'Mon': 'Mo',
9 'Tue': 'Tu',
10 'Wed': 'We',
11 'Thu': 'Th',
12 'Fri': 'Fr',
13 'Sat': 'Sa',
14 'Sun': 'Su'
15 }
16
17 class ReiSpider(scrapy.Spider):
18 name = "rei"
19 allowed_domains = ["www.rei.com"]
20 start_urls = (
21 'https://www.rei.com/map/store',
22 )
23
24 # Fix formatting for ["Mon - Fri 10:00-1800","Sat 12:00-18:00"]
25 def format_days(self, range):
26 pattern = r'^(.{3})( - (.{3}) | )(\d.*)'
27 start_day, seperator, end_day, time_range = re.search(pattern, range.strip()).groups()
28 result = DAY_MAPPING[start_day]
29 if end_day:
30 result += "-"+DAY_MAPPING[end_day]
31 result += " "+time_range
32 return result
33
34 def fix_opening_hours(self, opening_hours):
35 return ";".join(map(self.format_days, opening_hours))
36
37
38 def parse_store(self, response):
39 json_string = response.xpath('//script[@id="store-schema"]/text()').extract_first()
40 store_dict = json.loads(json_string)
41 yield GeojsonPointItem(
42 lat=store_dict["geo"]["latitude"],
43 lon=store_dict["geo"]["longitude"],
44 addr_full=store_dict["address"]["streetAddress"],
45 city=store_dict["address"]["addressLocality"],
46 state=store_dict["address"]["addressRegion"],
47 postcode=store_dict["address"]["postalCode"],
48 country=store_dict["address"]["addressCountry"],
49 opening_hours=self.fix_opening_hours(store_dict["openingHours"]),
50 phone=store_dict["telephone"],
51 website=store_dict["url"],
52 ref=store_dict["url"],
53 )
54
55 def parse(self, response):
56 urls = response.xpath('//a[@class="store-name-link"]/@href').extract()
57 for path in urls:
58 yield scrapy.Request(response.urljoin(path), callback=self.parse_store)
59
60
61
[end of locations/spiders/rei.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/rei.py b/locations/spiders/rei.py
--- a/locations/spiders/rei.py
+++ b/locations/spiders/rei.py
@@ -33,28 +33,34 @@
def fix_opening_hours(self, opening_hours):
return ";".join(map(self.format_days, opening_hours))
-
def parse_store(self, response):
json_string = response.xpath('//script[@id="store-schema"]/text()').extract_first()
store_dict = json.loads(json_string)
- yield GeojsonPointItem(
- lat=store_dict["geo"]["latitude"],
- lon=store_dict["geo"]["longitude"],
- addr_full=store_dict["address"]["streetAddress"],
- city=store_dict["address"]["addressLocality"],
- state=store_dict["address"]["addressRegion"],
- postcode=store_dict["address"]["postalCode"],
- country=store_dict["address"]["addressCountry"],
- opening_hours=self.fix_opening_hours(store_dict["openingHours"]),
- phone=store_dict["telephone"],
- website=store_dict["url"],
- ref=store_dict["url"],
- )
+
+ properties = {
+ "lat": store_dict["geo"]["latitude"],
+ "lon": store_dict["geo"]["longitude"],
+ "addr_full": store_dict["address"]["streetAddress"],
+ "city": store_dict["address"]["addressLocality"],
+ "state": store_dict["address"]["addressRegion"],
+ "postcode": store_dict["address"]["postalCode"],
+ "country": store_dict["address"]["addressCountry"],
+ "opening_hours": self.fix_opening_hours(store_dict["openingHours"]),
+ "phone": store_dict["telephone"],
+ "website": store_dict["url"],
+ "ref": store_dict["url"],
+ }
+
+ yield GeojsonPointItem(**properties)
def parse(self, response):
- urls = response.xpath('//a[@class="store-name-link"]/@href').extract()
+ urls = set(response.xpath('//a[contains(@href,"stores") and contains(@href,".html")]/@href').extract())
for path in urls:
- yield scrapy.Request(response.urljoin(path), callback=self.parse_store)
+ if path == "/stores/bikeshop.html":
+ continue
-
+ yield scrapy.Request(
+ response.urljoin(path),
+ callback=self.parse_store,
+ )
| {"golden_diff": "diff --git a/locations/spiders/rei.py b/locations/spiders/rei.py\n--- a/locations/spiders/rei.py\n+++ b/locations/spiders/rei.py\n@@ -33,28 +33,34 @@\n \n def fix_opening_hours(self, opening_hours):\n return \";\".join(map(self.format_days, opening_hours))\n- \n \n def parse_store(self, response):\n json_string = response.xpath('//script[@id=\"store-schema\"]/text()').extract_first()\n store_dict = json.loads(json_string)\n- yield GeojsonPointItem(\n- lat=store_dict[\"geo\"][\"latitude\"],\n- lon=store_dict[\"geo\"][\"longitude\"],\n- addr_full=store_dict[\"address\"][\"streetAddress\"],\n- city=store_dict[\"address\"][\"addressLocality\"],\n- state=store_dict[\"address\"][\"addressRegion\"],\n- postcode=store_dict[\"address\"][\"postalCode\"],\n- country=store_dict[\"address\"][\"addressCountry\"],\n- opening_hours=self.fix_opening_hours(store_dict[\"openingHours\"]),\n- phone=store_dict[\"telephone\"],\n- website=store_dict[\"url\"],\n- ref=store_dict[\"url\"],\n- )\n+\n+ properties = {\n+ \"lat\": store_dict[\"geo\"][\"latitude\"],\n+ \"lon\": store_dict[\"geo\"][\"longitude\"],\n+ \"addr_full\": store_dict[\"address\"][\"streetAddress\"],\n+ \"city\": store_dict[\"address\"][\"addressLocality\"],\n+ \"state\": store_dict[\"address\"][\"addressRegion\"],\n+ \"postcode\": store_dict[\"address\"][\"postalCode\"],\n+ \"country\": store_dict[\"address\"][\"addressCountry\"],\n+ \"opening_hours\": self.fix_opening_hours(store_dict[\"openingHours\"]),\n+ \"phone\": store_dict[\"telephone\"],\n+ \"website\": store_dict[\"url\"],\n+ \"ref\": store_dict[\"url\"],\n+ }\n+\n+ yield GeojsonPointItem(**properties)\n \n def parse(self, response):\n- urls = response.xpath('//a[@class=\"store-name-link\"]/@href').extract()\n+ urls = set(response.xpath('//a[contains(@href,\"stores\") and contains(@href,\".html\")]/@href').extract())\n for path in urls:\n- yield scrapy.Request(response.urljoin(path), callback=self.parse_store)\n+ if path == \"/stores/bikeshop.html\":\n+ continue\n \n- \n+ yield scrapy.Request(\n+ response.urljoin(path),\n+ callback=self.parse_store,\n+ )\n", "issue": "Spider rei is broken\nDuring the global build at 2021-05-26-14-42-23, spider **rei** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/rei.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/rei.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/rei.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\nimport re\nfrom locations.items import GeojsonPointItem\n\nDAY_MAPPING = {\n 'Mon': 'Mo',\n 'Tue': 'Tu',\n 'Wed': 'We',\n 'Thu': 'Th',\n 'Fri': 'Fr',\n 'Sat': 'Sa',\n 'Sun': 'Su'\n}\n\nclass ReiSpider(scrapy.Spider):\n name = \"rei\"\n allowed_domains = [\"www.rei.com\"]\n start_urls = (\n 'https://www.rei.com/map/store',\n )\n\n # Fix formatting for [\"Mon - Fri 10:00-1800\",\"Sat 12:00-18:00\"]\n def format_days(self, range):\n pattern = r'^(.{3})( - (.{3}) | )(\\d.*)'\n start_day, seperator, end_day, time_range = re.search(pattern, range.strip()).groups()\n result = DAY_MAPPING[start_day]\n if end_day:\n result += \"-\"+DAY_MAPPING[end_day]\n result += \" \"+time_range\n return result\n\n def fix_opening_hours(self, opening_hours):\n return \";\".join(map(self.format_days, opening_hours))\n \n\n def parse_store(self, response):\n json_string = response.xpath('//script[@id=\"store-schema\"]/text()').extract_first()\n store_dict = json.loads(json_string)\n yield GeojsonPointItem(\n lat=store_dict[\"geo\"][\"latitude\"],\n lon=store_dict[\"geo\"][\"longitude\"],\n addr_full=store_dict[\"address\"][\"streetAddress\"],\n city=store_dict[\"address\"][\"addressLocality\"],\n state=store_dict[\"address\"][\"addressRegion\"],\n postcode=store_dict[\"address\"][\"postalCode\"],\n country=store_dict[\"address\"][\"addressCountry\"],\n opening_hours=self.fix_opening_hours(store_dict[\"openingHours\"]),\n phone=store_dict[\"telephone\"],\n website=store_dict[\"url\"],\n ref=store_dict[\"url\"],\n )\n\n def parse(self, response):\n urls = response.xpath('//a[@class=\"store-name-link\"]/@href').extract()\n for path in urls:\n yield scrapy.Request(response.urljoin(path), callback=self.parse_store)\n\n \n", "path": "locations/spiders/rei.py"}]} | 1,312 | 535 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.